Pycomex

0.5.1
------------------

* If numpy arrays are added to the internal data store, they are automatically converted to lists so
  that they can be JSON-serialized later (see the sketch below).
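
As an illustration, the following is only a rough sketch: the dict-style ``e[...]`` assignment and the
``Experiment`` constructor arguments are assumptions made for this example, not the exact API::

    import numpy as np
    from pycomex.experiment import Experiment

    with Experiment(base_path='/tmp/results', namespace='example', glob=globals()) as e:
        # A numpy array stored like this ends up as a plain nested list in the
        # internal data store, so the store can later be dumped as JSON.
        e['predictions'] = np.random.rand(10, 2)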

0.5.0
------------------

* By fixing the previous bug, I introduced a new one: now that the analysis context manager sits at the
  same logical level as the experiment context manager, it ran into the same problem of being executed
  when the module was merely imported, which had all sorts of bad side effects. This bug is fixed now.
* While fixing that bug, I accidentally stumbled on a much better method of making context managers
  skippable, which I find so good that the experiment context manager now uses the same mechanism as
  well. This gets rid of the need for calling ``Experiment.prepare()``, but it also means some
  backwards-incompatible API changes (see the sketch below).
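
The changelog does not spell out the mechanism, but one common way to make the body of a ``with`` block
skippable in CPython is to install a trace function on the calling frame that raises a sentinel exception,
which the context manager then swallows. The following is only a generic sketch of that idea, not the
actual pycomex implementation::

    import sys

    class SkipBlock(Exception):
        """Sentinel exception used to abort the with-block body."""

    class SkippableContext:

        def __init__(self, skip: bool = False):
            self.skip = skip

        def __enter__(self):
            if self.skip:
                # Enable tracing for the calling frame and raise the sentinel as
                # soon as the first line of the with-block body is about to run.
                sys.settrace(lambda *args, **kwargs: None)
                frame = sys._getframe(1)
                frame.f_trace = self._trace
            return self

        def _trace(self, frame, event, arg):
            raise SkipBlock()

        def __exit__(self, exc_type, exc_value, exc_tb):
            if self.skip:
                sys.settrace(None)
            # Swallow only the sentinel exception; everything else propagates.
            return exc_type is SkipBlock

    with SkippableContext(skip=True):
        print('this line is never executed')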

0.4.1
------------------

* Fixed a bug which broke the ``with e.analysis:`` functionality in Python 3.10. Rewrote ``RecordCode``
  such that it no longer uses the deprecated functionality and now also works for the new version.
* ``with e.analysis:`` can now also be used at the same indentation level as the experiment context
  manager itself, which is more intuitive (see the sketch below). Using it this way also avoids some
  unwanted interaction with the error-catching behavior of the experiment context.
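
The intended usage would then look roughly like this; the ``Experiment`` constructor arguments and the
data keys shown here are placeholders, not the exact signature::

    from pycomex.experiment import Experiment

    with Experiment(base_path='/tmp/results', namespace='example', glob=globals()) as e:
        e.info('running the experiment...')
        e['value'] = 42

    # The analysis block now sits at the same indentation level as the experiment
    # context itself, instead of being nested inside it.
    with e.analysis:
        e.info('running the analysis...')
        print(e['value'])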

0.4.0
------------------

* Added ``pycomex.experiment.ArchivedExperiment``, which makes it possible to load an arbitrary
  experiment instance from its archive folder and use it much like is possible from within
  ``analysis.py``.
* Added ``pycomex.experiment.ExperimentRegistry``, which can be used to load an experiment base path and
  automatically discover all the (nested) namespace folders within it that contain actual experiment run
  archives.
* Added ``pycomex.experiment.NamespaceFolder``, which represents a namespace folder and allows working
  with it, for example by easily getting the ``ArchivedExperiment`` instance for a given (numeric)
  experiment run index (see the sketch after this list).
* Added ``psutil`` to the dependencies to implement hardware resource monitoring as an additional
  feature when printing the intermediate status of the experiment run with ``Experiment.status()``.
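
A rough sketch of how these classes might be combined; every constructor argument, lookup, and path shown
here is an assumption made for illustration, not the documented API::

    from pycomex.experiment import ExperimentRegistry, NamespaceFolder, ArchivedExperiment

    # Discover all (nested) namespace folders below an experiment base path.
    registry = ExperimentRegistry('/tmp/results')

    # Work with a single namespace folder and pick one run by its numeric index.
    folder = NamespaceFolder('/tmp/results/example/quickstart')
    archived: ArchivedExperiment = folder[0]

    # The archived experiment can then be used much like the experiment object
    # that is available from within analysis.py, e.g. to read back stored data.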

0.3.1
------------------

* Fixed a bug where ``e.info()`` could not be used inside the ``analysis.py`` file.
* Decided to add ``numpy`` and ``matplotlib`` to the dependencies after all. Originally I did not want to
  include them because I don't strictly need them and they are quite big packages. But honestly, what
  kind of computational experiment works without those two nowadays?
* Renamed the template files to a better naming scheme.
* Updated the readme.

0.3.0
------------------

* Added ``Experiment.commit_json`` to directly store dict data as JSON file artifacts for the experiment
  records (see the sketch after this list).
* Improved the ``analysis.py`` templating for experiments.
* Using the context manager ``Experiment.analysis`` within the experiment file not only directly
  executes the analysis right after the experiment is completed; all the code within that context
  manager's content block is also copied into the analysis template of that run, where it works as it is.
* This is due to the fact that ``Experiment`` now automatically realizes when it is being imported
  from a ``snapshot.py`` within an existing record folder. In that case it populates internal fields
  such as ``Experiment.data`` by loading the persistent file artifact.
* Added ``examples/analysis.py``, which documents and explains the previously mentioned process.
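
A hedged sketch of both features together; the ``commit_json`` call signature and the ``Experiment``
constructor arguments are assumptions based on the descriptions above, not the exact API::

    from pycomex.experiment import Experiment

    with Experiment(base_path='/tmp/results', namespace='example', glob=globals()) as e:
        e['metrics'] = {'accuracy': 0.93}

        # Store a dict directly as a standalone JSON file artifact of this run
        # (the file name argument is assumed).
        e.commit_json('metrics.json', {'accuracy': 0.93})

        # Code inside this block runs right after the experiment completes and is
        # additionally copied into the analysis.py template of the run.
        with e.analysis:
            print(e['metrics'])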
