LagrangeBench

0.1.2

**Added**
- Extended docs, see https://github.com/tumaer/lagrangebench/pull/17:
  - Reference to the notebooks.
  - Baseline results from the NeurIPS 2023 paper.
- README, mainly https://github.com/tumaer/lagrangebench/pull/22:
  - LagrangeBench logo.
  - Clickable badges with URLs to the paper, RTD, PyPI, Colab, and some GitHub workflows.
  - Contribution guidelines.
- Notes on macOS and `jax-metal`, see https://github.com/tumaer/lagrangebench/pull/18.
- Tests, see https://github.com/tumaer/lagrangebench/pull/21:
  - Our tests are written using `unittest`, but we run them with `pytest`. For now, we keep this setup.
  - Currently, the tests cover roughly 70% of the codebase, namely:
    - the `case_setup`, including the preprocessing and integration modules,
    - whether the equivariant models are indeed equivariant,
    - whether all 3 neighbor search backends give correct results on small edge cases,
    - the pushforward utils, and
    - the rollout loop, by introducing a dummy 3D Lennard-Jones dataset of 3 particles over 2k steps.
- GitHub workflows, mainly in https://github.com/tumaer/lagrangebench/pull/21:
  - Linting checks with `ruff`, which now replaces `black`.
  - `pytest` runs under Python 3.9, 3.10, and 3.11, including `codecov` coverage reports.
  - Automatic publishing of tagged versions to PyPI.
- Batched rollout loop using `vmap`, see https://github.com/tumaer/lagrangebench/pull/20 and https://github.com/tumaer/lagrangebench/pull/21, and the sketch after this list. This promises significant speedups, as validation during training used to take around 15%-30% of the total time, and batching during inference is of course nice to have. We noticed that there is an optimal batch size: up to it, larger batches barely change inference speed, while beyond it we do not run out of memory, but validation becomes significantly slower. Tuning this `batch_size_infer` parameter with a few test runs is our current best advice.
- `pkl2vtk` to convert a pickled rollout into a series of `.vtk` files for visualization.
- Metadata and configs in `pyproject.toml` and other config files, see https://github.com/tumaer/lagrangebench/pull/21.
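
The core idea of the batched rollout, as a minimal sketch: `jax.vmap` maps a single-trajectory update over a leading batch axis of size `batch_size_infer`, so several validation trajectories advance in lockstep. The `step_fn` below is a placeholder, not the actual LagrangeBench model.

```python
import jax
import jax.numpy as jnp

def step_fn(position_history):
    """Single-trajectory update: predict next positions from a window of
    past positions (placeholder constant-velocity physics)."""
    velocity = position_history[-1] - position_history[-2]
    next_position = position_history[-1] + velocity
    # Shift the history window forward by one step.
    return jnp.concatenate([position_history[1:], next_position[None]], axis=0)

# vmap over a leading batch axis: (batch_size_infer, input_seq_len, N, dim).
batched_step_fn = jax.jit(jax.vmap(step_fn))

batch_size_infer, input_seq_len, num_particles, dim = 4, 6, 100, 3
histories = jnp.zeros((batch_size_infer, input_seq_len, num_particles, dim))
for _ in range(20):  # unroll 20 steps for all trajectories at once
    histories = batched_step_fn(histories)
```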

**Fixed**
- Multiple neighbor list reallocations during training, see https://github.com/tumaer/lagrangebench/pull/15.
- When using both random noise and pushforward, the noise seed is now independent of the max number of pushforward steps, see https://github.com/tumaer/lagrangebench/pull/16.
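
For context, the reallocation fix builds on the usual jax-md neighbor list pattern: `allocate` is expensive and sizes the buffer, while `update` is cheap and should be the common path, with a fresh `allocate` only on buffer overflow. The box size, cutoff, and random motion below are illustrative.

```python
import jax
import jax.numpy as jnp
from jax_md import space, partition

box_size, r_cutoff = 1.0, 0.2  # illustrative values
displacement_fn, shift_fn = space.periodic(box_size)
neighbor_fn = partition.neighbor_list(displacement_fn, box_size, r_cutoff)

key = jax.random.PRNGKey(0)
positions = jax.random.uniform(key, (100, 3), maxval=box_size)
nbrs = neighbor_fn.allocate(positions)  # expensive: sizes the buffer once

for _ in range(100):
    key, subkey = jax.random.split(key)
    positions = shift_fn(positions, 0.01 * jax.random.normal(subkey, positions.shape))
    nbrs = nbrs.update(positions)  # cheap, fixed-size update
    if nbrs.did_buffer_overflow:  # reallocate only when necessary
        nbrs = neighbor_fn.allocate(positions)
```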

**Changed**
- Remove explicit force functions from the codebase and put them in `force.py` files in the directories of the datasets with forces (2D DAM, 2D RPF, 3D RPF). This comes along with a new version of the datasets on Zenodo (https://doi.org/10.5281/zenodo.10491868), see https://github.com/tumaer/lagrangebench/pull/23.
- Rename some variables and improve docstrings, see https://github.com/tumaer/lagrangebench/pull/17.
- Swap the order of `sender` and `receiver` to align with jax-md, see https://github.com/tumaer/lagrangebench/pull/17; a short illustration follows this list.
- Upgrade dependencies and pin `jax==0.4.20`, `jax-md==0.2.8`, and `e3nn-jax==0.20.3`.
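
To illustrate why the `sender`/`receiver` ordering matters (a generic message-passing sketch, not the actual LagrangeBench internals): messages are gathered from senders and aggregated at receivers, so swapping the two arrays reverses the direction of information flow along every edge.

```python
import jax
import jax.numpy as jnp

num_nodes = 4
node_features = jnp.arange(num_nodes, dtype=jnp.float32)[:, None]  # (N, 1)

# One directed edge per entry: information flows sender -> receiver.
senders = jnp.array([0, 1, 2, 3])
receivers = jnp.array([1, 2, 3, 0])

# Gather features at the sending nodes, then sum them at the receivers.
messages = node_features[senders]
aggregated = jax.ops.segment_sum(messages, receivers, num_segments=num_nodes)
```

With the two arrays swapped, each message would be summed into the opposite endpoint of its edge, which is why the convention must match jax-md's.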

0.0.2

Code used to generate the results for the NeurIPS 2023 Datasets & Benchmarks paper.

Extensively tested functionalities (on Ubuntu 22.04 with Python 3.10.12 and Poetry 1.6.0):
- training/inference using config files, as described in the README
- running the 3 notebooks

0.0.1

First release of `lagrangebench`.
