- feat: cost optimize: save account.cost.optimize recommendations to a sqlite database, with a `dt_created` field that is preserved between re-runs
- this helps identify the date on which a recommendation was first created
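A minimal sketch of how preserving `dt_created` between re-runs might look. The table and column names here are illustrative, not the tool's actual schema: on re-save, an existing row's `dt_created` is looked up and reused, so the field keeps recording when the recommendation first appeared.

```python
import sqlite3
import datetime as dt

def save_recommendations(conn, recs):
    """Upsert recommendations, keeping dt_created from any earlier run.

    `recs` is a list of dicts with `instance_id` and `recommendation`
    keys (hypothetical shape for this sketch).
    """
    cur = conn.cursor()
    cur.execute(
        """CREATE TABLE IF NOT EXISTS recommendations (
               instance_id TEXT PRIMARY KEY,
               recommendation TEXT,
               dt_created TEXT
           )"""
    )
    now = dt.datetime.now(dt.timezone.utc).isoformat()
    for rec in recs:
        # If the recommendation already exists, keep its original dt_created
        cur.execute(
            "SELECT dt_created FROM recommendations WHERE instance_id = ?",
            (rec["instance_id"],),
        )
        row = cur.fetchone()
        dt_created = row[0] if row else now
        cur.execute(
            "INSERT OR REPLACE INTO recommendations VALUES (?, ?, ?)",
            (rec["instance_id"], rec["recommendation"], dt_created),
        )
    conn.commit()
```

`INSERT OR REPLACE` keyed on `instance_id` overwrites the recommendation itself while the explicit lookup carries the old timestamp forward.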
- feat: cost optimize: re-calculate recommendations at each request instead of loading them from sqlite
- Update 2019-12-27
- initially, this was "load recommendations from sqlite instead of re-calculating"
  - but I decided to just re-calculate at each request, and limit the sqlite usage to the interactive implementation
  - also cleaned up the implementation by using the `pre` listener to check for existing sqlite data (which I don't use anyway now)
- Earlier notes 2019-12-26
- also made some code changes for separation of concerns
- the implementation is rough at the moment; a major open question is how to return a different result per requested `ndays`
- the `pipeline_factory` function now got very messy as well
- there is still no way to pass a `--refresh` option to recalculate instead of loading from sqlite
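The missing `--refresh` flag could be wired up roughly as below. This is a hypothetical sketch, not the tool's actual CLI: `load_from_cache` and `recalculate` are stand-in callables, with the cache loader returning `None` on a miss.

```python
import argparse

def build_parser():
    # Illustrative parser; the real CLI has more options
    parser = argparse.ArgumentParser(prog="isitfit")
    parser.add_argument(
        "--refresh",
        action="store_true",
        help="recalculate recommendations instead of loading cached results",
    )
    return parser

def get_recommendations(args, load_from_cache, recalculate):
    """Return cached results unless --refresh is set or the cache is empty."""
    if not args.refresh:
        cached = load_from_cache()
        if cached is not None:
            return cached
    return recalculate()
```

With this shape, `--refresh` bypasses the sqlite cache entirely, while a cold cache falls through to recalculation either way.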
- bugfix: cost optimize: filter the `ec2_df` for only the latest size. This fixes the issue where cpu.max.max reflected an earlier size s1 even though the current size is s2
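The fix above can be sketched in pandas as follows. Column names (`timestamp`, `instance_size`, `cpu_max`) are illustrative assumptions about the shape of `ec2_df`: only the rows recorded while the instance was at its most recent size are kept, so the CPU maximum reflects the current size rather than a previous one.

```python
import pandas as pd

def filter_latest_size(ec2_df):
    """Keep only the metric rows for the instance's most recent size."""
    # The chronologically last row holds the current size
    latest_size = ec2_df.sort_values("timestamp")["instance_size"].iloc[-1]
    return ec2_df[ec2_df["instance_size"] == latest_size]
```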
- bugfix: cost optimize: when `ec2_df` covers fewer than 7 days of daily data, return "Not enough data" as the classification
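A minimal sketch of that guard. The classification logic shown after the length check is a hypothetical placeholder (the `cpu_max` column and the 30% threshold are assumptions); the point is the early return when fewer than 7 daily data points are available.

```python
import pandas as pd

MIN_DAYS = 7  # minimum number of daily data points required

def classify(ec2_df):
    """Classify an instance, or bail out when the data window is too short."""
    if len(ec2_df) < MIN_DAYS:
        return "Not enough data"
    # Placeholder for the real classification path
    return "Underused" if ec2_df["cpu_max"].max() < 30 else "Normal"
```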