## Added
- Extended docs (https://github.com/tumaer/lagrangebench/pull/17) with:
  - Reference to the notebooks.
  - Baseline results from the NeurIPS 2023 paper.
- README, mainly https://github.com/tumaer/lagrangebench/pull/22:
  - LagrangeBench logo.
  - Clickable badges linking to the paper, RTD, PyPI, Colab, and some git workflows.
  - Contribution guidelines.
- Notes on macOS and `jax-metal`, see https://github.com/tumaer/lagrangebench/pull/18.
- Tests, see https://github.com/tumaer/lagrangebench/pull/21.
  - Our tests are written with `unittest` but run with `pytest`; for now we keep that standard.
  - Currently, the tests cover roughly 70% of the codebase, namely:
    - the `case_setup`, including the preprocessing and integration modules,
    - whether the equivariant models are indeed equivariant,
    - whether all 3 neighbor search backends give correct results on small edge cases,
    - the pushforward utils, and
    - the rollout loop, via a dummy 3D Lennard-Jones dataset of 3 particles over 2k steps.
- GitHub workflows, mainly in https://github.com/tumaer/lagrangebench/pull/21:
  - Linting checks with `ruff`, which now replaces `black`.
  - `pytest` under Python 3.9, 3.10, and 3.11, including `codecov` coverage reports.
  - Automatic publishing of tagged versions to PyPI.
- Batched rollout loop using `vmap`, see https://github.com/tumaer/lagrangebench/pull/20 and https://github.com/tumaer/lagrangebench/pull/21. This promises significant speedups, as validation during training used to take around 15%-30% of the total time, and batching during inference is of course nice to have. In practice, inference speed changes little up to some optimal batch size, but beyond that there is a regime where we don't run out of memory yet validation becomes significantly slower. Our current best advice is to tune this `batch_size_infer` parameter with a few test runs.
- `pkl2vtk` utility to convert a pickled rollout into a series of `.vtk` files for visualization.
- Metadata and configs in `pyproject.toml` and other config files, see https://github.com/tumaer/lagrangebench/pull/21.
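The batching idea behind the new rollout loop can be sketched without any LagrangeBench internals: a step function written with shape-agnostic array operations batches over a leading trajectory axis, which is the effect `jax.vmap` provides for arbitrary functions. All names below (`rollout_step`, the toy force) are illustrative, not the library's API, and numpy stands in for JAX to keep the sketch dependency-light.

```python
import numpy as np

def rollout_step(positions, velocities, dt=0.01):
    """One explicit-Euler step for a single trajectory of shape
    (n_particles, dim). The toy restoring force stands in for the
    learned model that the real rollout would call here."""
    accelerations = -positions            # toy "force": pull toward origin
    velocities = velocities + dt * accelerations
    positions = positions + dt * velocities
    return positions, velocities

rng = np.random.default_rng(0)
batch_pos = rng.normal(size=(4, 8, 3))    # 4 trajectories, 8 particles, 3D
batch_vel = np.zeros_like(batch_pos)

# Batched step: the leading trajectory axis rides along for free, which
# is what jax.vmap achieves for arbitrary step functions without rewrites.
batched_pos, _ = rollout_step(batch_pos, batch_vel)

# Reference: step each trajectory separately and stack the results.
ref = np.stack([rollout_step(batch_pos[i], batch_vel[i])[0] for i in range(4)])
assert np.allclose(batched_pos, ref)
```

Amortizing the per-step overhead over a batch of trajectories is what makes validation during training cheap; the slow-but-not-OOM regime mentioned above appears once the batch no longer fits the accelerator's fast execution path.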
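For illustration, a minimal stdlib-only sketch of what a `pkl2vtk`-style conversion involves: one legacy-ASCII `.vtk` point-cloud file per rollout frame. The assumed pickle layout (a plain `(frames, particles, dim)` nested array) and the name `pkl2vtk_sketch` are simplifications, not the actual `pkl2vtk` signature.

```python
import pickle
import tempfile
from pathlib import Path

def pkl2vtk_sketch(pkl_path, out_dir):
    """Write one legacy-ASCII .vtk point-cloud file per rollout frame."""
    with open(pkl_path, "rb") as f:
        rollout = pickle.load(f)          # assumed: (frames, particles, dim)
    out_dir = Path(out_dir)
    out_dir.mkdir(parents=True, exist_ok=True)
    for t, frame in enumerate(rollout):
        lines = [
            "# vtk DataFile Version 3.0",
            f"frame {t}",
            "ASCII",
            "DATASET POLYDATA",
            f"POINTS {len(frame)} float",
        ]
        for point in frame:
            x, y, z = (list(point) + [0.0, 0.0])[:3]  # pad 2D data with z=0
            lines.append(f"{x} {y} {z}")
        (out_dir / f"frame_{t:04d}.vtk").write_text("\n".join(lines) + "\n")
    return sorted(out_dir.glob("*.vtk"))

# Demo on a tiny 2-frame, 2-particle, 2D rollout.
work = Path(tempfile.mkdtemp())
with open(work / "rollout.pkl", "wb") as f:
    pickle.dump([[[0.0, 0.0], [1.0, 1.0]], [[0.1, 0.0], [0.9, 1.0]]], f)
vtk_files = pkl2vtk_sketch(work / "rollout.pkl", work / "vtk")
```

The resulting per-frame files can be loaded as a time series in ParaView or similar viewers.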
## Fixed
- Multiple neighbor list reallocations during training, see https://github.com/tumaer/lagrangebench/pull/15.
- When using both random noise and pushforward, the noise seed is now independent of the maximum number of pushforward steps, see https://github.com/tumaer/lagrangebench/pull/16.
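Conceptually, this fix amounts to deriving the noise stream independently of the pushforward sampling, so changing the maximum number of pushforward steps leaves the noise unchanged. A stdlib sketch of that decoupling (LagrangeBench itself uses JAX PRNG key handling; all names here are hypothetical):

```python
import random

def sample_noise_and_steps(base_seed, max_pushforward_steps):
    """Draw the number of pushforward steps and the noise from two
    independently seeded streams, so the noise does not depend on
    max_pushforward_steps."""
    steps_rng = random.Random(f"{base_seed}/pushforward")
    noise_rng = random.Random(f"{base_seed}/noise")
    n_steps = steps_rng.randint(1, max_pushforward_steps)
    noise = [noise_rng.gauss(0.0, 1.0) for _ in range(3)]
    return n_steps, noise

# The noise stream is identical regardless of the pushforward setting:
_, noise_a = sample_noise_and_steps(42, max_pushforward_steps=5)
_, noise_b = sample_noise_and_steps(42, max_pushforward_steps=20)
assert noise_a == noise_b
```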
## Changed
- Remove explicit force functions from the codebase and move them into `force.py` files in the directories of the datasets with forces (2D DAM, 2D RPF, 3D RPF). This is accompanied by a new version of the datasets on Zenodo (https://doi.org/10.5281/zenodo.10491868), see https://github.com/tumaer/lagrangebench/pull/23.
- Rename some variables and improve docstrings, see https://github.com/tumaer/lagrangebench/pull/17.
- Swap the order of `sender` and `receiver` to align with jax-md, see https://github.com/tumaer/lagrangebench/pull/17.
- Upgrade dependencies and pin `jax==0.4.20`, `jax-md==0.2.8`, and `e3nn-jax==0.20.3`.
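A minimal sketch of the idea behind shipping force functions next to the data: load `force` from a dataset-local `force.py` at runtime with `importlib`. The file layout and function name are assumptions for illustration, not LagrangeBench's exact mechanism.

```python
import importlib.util
import tempfile
from pathlib import Path

def load_force_fn(dataset_dir):
    """Import the `force` function from a dataset's local force.py."""
    spec = importlib.util.spec_from_file_location(
        "force", Path(dataset_dir) / "force.py"
    )
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return module.force

# Demo: a throwaway dataset directory with a constant-gravity force.
with tempfile.TemporaryDirectory() as d:
    (Path(d) / "force.py").write_text(
        "def force(position):\n    return [0.0, -9.81]\n"
    )
    force = load_force_fn(d)
    assert force([1.0, 2.0]) == [0.0, -9.81]
```

Keeping the force definition in the dataset directory means new datasets can carry their own external forcing without any change to the library code.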
---
**Full Changelog**: https://github.com/tumaer/lagrangebench/compare/v0.0.2...v0.1.2