----------
* Build docs
* add pbr
* tests pass
* pull
* Build docs
* WIP: get github actions to work
* Build docs
* WIP: get github actions to work
* tests pass
* Build docs
* tests pass
* WIP: fixtures
* beta release
* time features added and retail dataset filtered. need to move hyperparameter tuning to prefect
* docs
* WIP charts
* time features added and retail dataset filtered. need to move hyperparameter tuning to prefect
* quickstart tests WIP: need to add time features to preprocess and filter retail dataset
* unit tests aside from quickstart passing
* tests almost passing
* synthesis wip implemented
* gridcv implemented
* unit tests passing - just need to reset fixtures for prefect test
* preprocess moved to prefect
* target dimensions working
* license updated in readme
* fix issue 172
* fix issue_159
* fixes 149
* merge: polymorphism
* all tests passing
* WIP: polymorphism compute bottleneck
* WIP: polymorphism - divina module not found on workers
* WIP: polymorphism
* WIP: polymorphism
* spelling
* before svelte
* gitattributes
* alpha
* remove local docs
* Build docs
* alpha
* Build docs
* alpha
* Build docs
* alpha
* Build docs
* quickstart charts ready
* dataset.py improved
* Revert "Build docs"
* tests passing - random seed passed to dask model
* add time dataset
* Build docs
* add time dataset
* tests passing
* WIP: quickstart charts ready
* WIP: ffill added
* WIP: before adding ffill
* tests pass and docs test functioning
* tests passing
* WIP: charts fixed. need to reset tests
* WIP: store encode in retail example. need to reset tests
* WIP: filepath issue resolved. need to reset tests
* WIP: tests passing except for module datasets not loading filepaths
* WIP: random seed fixed - tests need to be updated
* Revert "WIP: datasets module implemented but not tested"
* WIP: random seed issues
* WIP: datasets module implemented but not tested
* validation splits tested - integration and unit
* validation splits tested
* doc updates
* factor plot added
* confidence interval charts improved
* WIP: all tests passing - another big confidence interval bug fixed
* WIP: one test not passing
* WIP: two tests still not passing. bootstrap confidence intervals bug fixed
* WIP: all tests but integration forecast passing
* WIP: persist performance greatly improved and unit tests locked for next release
* WIP: 3d and 2d graphs working in test
* WIP: blind forecast tests passing unit
* WIP: confidence intervals shaded
* WIP: simulations tested more thoroughly
* WIP: fixed forecast definition validation test
* WIP: performance bug resolved. basic plotting added to predict unit test
* WIP: various improvements - retail tests ready to lock
* all tests working with interaction features
* unit tests working with interaction features
* unit tests working with scaling and regularization
* multiple partitions on test datasets
* dataset aggregation fixed to first
* cleanup
* retail tests done
* WIP: retail train test setup but not passing
* datasets removed
* s3 cleaning up between tests and patches removed
* base jsonschema implemented
* binner working, but still need to add s3 path interoperability and clear s3 after tests
* confidence intervals optimized and moved to bootstrap
* WIP: before chop up predict
* confidence intervals almost done
* confidence intervals added
* bunch of fixes and added encoder and interaction terms
* quickstart tested and various improvements
* WIP: add codecov
* Build docs
* update prod github action yaml
* WIP: add codecov
* param set and get added to cli
* parameters persisted as json and overridable
* WIP: add simulation and aggregation
* WIP: add simulation and aggregation
* cleanup docs directory
* update prod github action yaml
* add tag
* update prod github action yaml
* Set theme jekyll-theme-minimal
* Build docs
* update prod github action yaml
* add sphinx build github action
* dataset and vision id's merged with paths, dataset profiles removed, documentation formatting
* add docs
* WIP: docker test image
* add pbr
* WIP: point to test docker container
* WIP: add repo secrets
* WIP: cleanup
* WIP: cleanup
* move models to separate repo
* formatting
* adjust gha.yaml
* add CI
* tests passing and backoff implemented for intermittent access denied s3 errors
* all initial tests pass
* all tests pass with exception of dataset build remote
* WIP: first CLI tests implemented
* testing e2e docker setup
* clean up requirements.txt
* clean up gitignore
* clean up requirements.txt
* cleaning up repo
* clean up commits
* WIP
* add cli
* WIP
* wip e2e tests
* base integration tests implemented
* more integration tests passing
* adding everything
* datasets abstracted to filepaths and profiles added
* datasets implemented
* dask implemented in unit tests
* WIP:packaging
* incremental unit testing
* first moto test passing
* security group with ssh access enabled on partitioning EC2
* fix naming with single partition
* large dataset build works
* base validation implemented with splits
* WIP: about to start streaming in partitioning
* base spark model implemented
* WIP spark model implementation
* partitioning implemented
* WIP files decompressed and parsed
* WIP encoding files during partitioning
* s3 source copy implemented
* modify source bucket policy for copy
* data partition works on small dataset
* ec2 cheapest instance WIP. still need to filter dedicated instance types
* emr loads and saves s3. pip3 and python3 work
* pyspark context available on EMR
* emr cluster working
* abstracted and made iam role creation optional - works
* clean up data ingestion
* first commit