m2cgen

Latest version: v0.10.0

0.10.0

* Python 3.6 is no longer supported.
* Added support for Python 3.9 and 3.10.
* Trained models can now be transpiled into Rust and Elixir 🎉
* Model support:
  * Added support for SGDRegressor from the `lightning` package.
  * Added support for extremely randomized trees in the LightGBM package.
  * Added support for OneClassSVM from the `scikit-learn` package.
* Various improvements to handle the latest versions of the supported models.
* Various CI/CD improvements including migration from coveralls to codecov, automated generation of the code examples and automated GitHub Release creation.
* Minor codebase cleanup.
* Significantly reduced the number of redundant parentheses and `return` statements in the generated code.
* Latest Dart language versions are supported.
* Programming languages can provide native implementation of sigmoid and softmax functions.
* Improved code generation speed by adding new lines at the end of the generated code.
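
To ground the sigmoid/softmax item above: a "native implementation" in a target language amounts to something like the following pure-Python sketch (illustrative only, not actual m2cgen interpreter output):

```python
import math

def sigmoid(x):
    # Logistic function: 1 / (1 + e^(-x))
    return 1.0 / (1.0 + math.exp(-x))

def softmax(xs):
    # Subtract the max before exponentiating for numerical stability.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]
```

A language that ships these in its standard library can map the AST nodes directly to the native calls instead of emitting this arithmetic.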

0.9.0

* Python 3.5 is no longer supported.
* Trained models can now be transpiled into F# 🎉
* Model support:
  * Added support for GLM models from the `scikit-learn` package.
  * Introduced support for a variety of objectives in LightGBM models.
  * The Cauchy function is now supported for GLM models.
* Improved conversion of floating point numbers into string literals. This leads to improved accuracy of results returned by generated code.
* Improved handling of missing values in LightGBM models. Kudos to our first-time contributor Aulust 🎉
* Various improvements of the code generation runtime.
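
The float-to-string improvement hinges on round-trippable literals. In Python, for instance, `repr()` produces the shortest decimal string that parses back to the exact same IEEE 754 double, while fixed-precision formatting can silently change the value (a general illustration, not m2cgen internals):

```python
# Round-trippable float literals: repr() yields the shortest decimal
# string that parses back to the exact same IEEE 754 double.
x = 0.1 + 0.2
lossy = f"{x:.6f}"   # fixed precision loses information: "0.300000"
exact = repr(x)      # shortest round-trip representation

assert float(exact) == x   # exact round trip
assert float(lossy) != x   # truncated literal changes the value
```

Emitting round-trippable literals into the generated code keeps its predictions bit-identical to the original model's.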

0.8.0

* This release is the last one to support Python 3.5. The next release will require Python >= 3.6.
* Trained models can now be transpiled into Haskell and Ruby 🎉
* Various improvements to the code generation runtime:
  * Introduced caching of the interpreter handler names.
  * A string buffer is now used to store generated code.
  * We moved away from using `string.Template`.
* The `numpy` dependency is no longer required at runtime for the generated Python code.
* Improved model support:
  * Enabled multiclass support for XGBoost Random Forest models.
  * Added support for Boosted Random Forest models from the XGBoost package.
  * Added support for GLM models from the `statsmodels` package.
* Introduced fallback expressions for a variety of functions which rely on simpler language constructs. This should simplify implementation of new interpreters since the number of functions that must be provided by the standard library or by a developer of the given interpreter has been reduced. Note that fallback expressions are optional and can be overridden by a manually written implementation or a corresponding function from the standard library. Among functions for which fallback AST expressions have been introduced are: `abs`, `tanh`, `sqrt`, `exp`, `sigmoid` and `softmax`.
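
As a rough illustration of what such a fallback looks like (hypothetical Python, not m2cgen's actual AST machinery), `sigmoid` and `tanh` can be expressed through `exp` and basic arithmetic alone, so an interpreter only has to supply `exp`:

```python
import math

# Hypothetical fallback expressions: higher-level functions rewritten in
# terms of exp() and basic arithmetic, for target languages whose standard
# library lacks them.

def fallback_sigmoid(x):
    # sigmoid(x) = 1 / (1 + exp(-x))
    return 1.0 / (1.0 + math.exp(-x))

def fallback_tanh(x):
    # tanh(x) = (exp(2x) - 1) / (exp(2x) + 1)
    e = math.exp(2.0 * x)
    return (e - 1.0) / (e + 1.0)
```

An interpreter with a native `tanh` would simply override the fallback with the standard-library call.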

Kudos to StrikerRUS who's responsible for all these amazing updates 💪

0.7.0

* Bug fixes:
  * Thresholds for XGBoost trees are now forced to be float32 (https://github.com/BayesWitnesses/m2cgen/issues/168).
  * Fixed support for newer versions of XGBoost, in which the default value of the `base_score` parameter became `None` (https://github.com/BayesWitnesses/m2cgen/issues/182).
* Models can now be transpiled into the Dart language. Kudos to MattConflitti for this great addition 🎉
* Support for the following models has been introduced:
  * Models from the `statsmodels` package: GLS, GLSAR, OLS, ProcessMLE, QuantReg and WLS.
  * Models from the `lightning` package: AdaGradRegressor/AdaGradClassifier, CDRegressor/CDClassifier, FistaRegressor/FistaClassifier, SAGARegressor/SAGAClassifier, SAGRegressor/SAGClassifier, SDCARegressor/SDCAClassifier, SGDClassifier, LinearSVR/LinearSVC and KernelSVC.
  * RANSACRegressor from the `scikit-learn` package.
* The name of the scoring function can now be changed via a parameter. Thanks mrshu 💪
* The `SubroutineExpr` expression has been removed from the AST. The logic for splitting generated code into subroutines now lives entirely in interpreters and has been removed from assemblers.

0.6.0

- Trained models can now be transpiled into R, PowerShell and PHP. Major effort delivered solely by StrikerRUS.
- The Java interpreter now splits generated code into methods based on heuristics, without relying on `SubroutineExpr` from the AST.
- Added support for LightGBM and XGBoost Random Forest models.
- XGBoost linear models are now supported.
- LassoLarsCV, Perceptron and PassiveAggressiveClassifier estimators from the `scikit-learn` package are now supported.

0.5.0

Quite a few awesome updates in this release. Many thanks to StrikerRUS and chris-smith-zocdoc for making this release happen.
- Visual Basic and C joined the list of supported languages. Thanks StrikerRUS for all the hard work!
- The `numpy` dependency is no longer required for generated Python code when no linear algebra is involved. Thanks StrikerRUS for this update.
- Fixed a bug where generated Java code exceeded the JVM method size constraints when individual estimators of a GBT model contained a large number of leaves. Kudos to chris-smith-zocdoc for discovering and fixing this issue.
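
The JVM constraint in question is Java's 64 KB bytecode limit per method, which a deep GBT ensemble can exceed if scored in one giant method. A minimal sketch of heuristic splitting (hypothetical, not m2cgen's actual algorithm) would chunk the generated statements into fixed-size methods:

```python
# Hypothetical sketch: split a long sequence of generated statements into
# fixed-size methods so no single method exceeds a target-language limit
# such as the JVM's 64 KB bytecode cap per method.
def split_into_methods(statements, max_per_method=100):
    methods = []
    for start in range(0, len(statements), max_per_method):
        body = statements[start:start + max_per_method]
        name = f"subroutine{len(methods)}"
        methods.append(f"def {name}():\n    " + "\n    ".join(body))
    return methods
```

In practice the threshold would be tuned to the target language; the chunks are then chained together by a top-level scoring function.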
