This release introduces synthetic LiDAR data generation through Helios, a reworked `agml.viz` module with improved functionality, and a number of new datasets, among other features.
## Major Changes
- The **`agml.io`** module has been added, with a few convenience functions for working with file and directory structures.
  - Currently available functions include `get_file_list` and `get_dir_list`, which also work with nested structures, as well as `recursive_dirname`, `parent_path`, and `random_file`.
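To make the nested-structure behavior concrete, here is a minimal sketch of what a recursive file listing conceptually returns. This is an illustration only, not AgML's implementation; the `nested` flag is an assumption for the demo.

```python
import os
import tempfile

def get_file_list(path, nested=False):
    # Conceptual sketch: list files in a directory; with nested=True,
    # walk the full tree. (The `nested` flag is an assumption for this
    # demo, not necessarily AgML's actual signature.)
    if not nested:
        return sorted(
            os.path.join(path, f) for f in os.listdir(path)
            if os.path.isfile(os.path.join(path, f))
        )
    files = []
    for root, _, names in os.walk(path):
        files.extend(os.path.join(root, n) for n in names)
    return sorted(files)

# Demo on a throwaway directory structure.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "plots", "row1"))
open(os.path.join(root, "a.txt"), "w").close()
open(os.path.join(root, "plots", "row1", "b.txt"), "w").close()

print(len(get_file_list(root)))               # top level only -> 1
print(len(get_file_list(root, nested=True)))  # full tree -> 2
```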
### `agml.data`
- Three new datasets have been introduced:
- Object Detection: `ghai_iceberg_lettuce_detection`, `ghai_broccoli_detection`
- Image Classification: `riseholme_strawberry_classification_2021`
- The `agml.data.ImageLoader` has been added: a lightweight loader designed specifically for images.
- Enables loading images from a nested directory structure.
- Enables easy resizing and transforms of the loaded images.
- The `AgMLDataLoader` now has a `show_sample` method, which visualizes samples from the dataset directly from the loader.
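The nested-directory indexing that `ImageLoader` provides can be sketched as follows. This is a toy stand-in for illustration (the class name, `transform` argument, and extension list here are assumptions, not AgML's API), and a real loader would decode the image rather than return its path.

```python
import os
import tempfile

class SimpleImageLoader:
    # Minimal sketch of an ImageLoader-style class: index every image file
    # found anywhere under a root directory, and apply an optional
    # transform when an item is accessed. Illustration only.
    EXTENSIONS = (".jpg", ".jpeg", ".png", ".bmp")

    def __init__(self, root, transform=None):
        self.transform = transform
        self.paths = sorted(
            os.path.join(r, f)
            for r, _, files in os.walk(root)
            for f in files if f.lower().endswith(self.EXTENSIONS)
        )

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, index):
        item = self.paths[index]  # a real loader would decode the image here
        return self.transform(item) if self.transform else item

# Demo: two images nested at different depths are both discovered.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "field", "cam0"))
open(os.path.join(root, "top.png"), "w").close()
open(os.path.join(root, "field", "cam0", "shot.jpg"), "w").close()

loader = SimpleImageLoader(root, transform=os.path.basename)
print(len(loader), loader[0])  # -> 2 shot.jpg
```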
### `agml.synthetic`
- **LiDAR Data Generation**: You can generate LiDAR-based synthetic data now, using `opt.simulation_type = agml.synthetic.SimulationType.LiDAR`.
- Helios can be recompiled with LiDAR enabled using `recompile_helios(lidar_enabled = True)`, and in parallel (on Linux and macOS systems) using `recompile_helios(parallel = True)`; note that parallel compilation is enabled by default.
- A new loader, `agml.synthetic.LiDARLoader`, has been added; it loads point clouds from a generated directory (in the same format as `ImageLoader`) and can be used to retrieve and visualize the point clouds.
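A `LiDARLoader`-style workflow of gathering point clouds from a generated directory can be pictured with this small sketch. The `.xyz` text format, the function name, and the return structure are assumptions made for the demo; they are not AgML's actual API.

```python
import os
import tempfile

def load_point_clouds(directory):
    # Sketch: collect every `.xyz` file under a directory and parse each
    # into a list of (x, y, z) tuples. Illustrative only -- the real
    # LiDARLoader's file format and interface may differ.
    clouds = {}
    for root, _, files in os.walk(directory):
        for name in sorted(files):
            if name.endswith(".xyz"):
                with open(os.path.join(root, name)) as f:
                    clouds[name] = [
                        tuple(float(v) for v in line.split())
                        for line in f if line.strip()
                    ]
    return clouds

# Demo with one tiny synthetic cloud.
out = tempfile.mkdtemp()
with open(os.path.join(out, "scan_000.xyz"), "w") as f:
    f.write("0 0 0\n1.5 2.0 0.25\n")

clouds = load_point_clouds(out)
print(len(clouds["scan_000.xyz"]))  # -> 2
```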
### `agml.viz`
- The entire `agml.viz` module has been reworked, with new methods and a functional visualization backend that supports both `cv2` and `matplotlib` displays, depending on user preference.
- The following is a mapping of the old functions to new functions:
- `visualize_images_and_labels` -> `show_images_and_labels`
- `output_to_mask` -> `convert_mask_to_colored_image`
- `overlay_segmentation_masks` -> `annotate_semantic_segmentation`
- `visualize_image_and_mask` -> `show_image_and_mask`
- `visualize_overlaid_masks` -> `show_image_and_overlaid_mask`
- `visualize_image_mask_and_predicted` -> `show_semantic_segmentation_truth_and_prediction`
- `annotate_bboxes_on_image` -> `annotate_object_detection`
- `visualize_image_and_boxes` -> `show_image_and_boxes`
- `visualize_real_and_predicted_bboxes` -> `show_object_detection_truth_and_prediction`
- `visualize_images` -> `show_images`
- To switch between visualization backends, use `get_viz_backend` and `set_viz_backend`.
- To simply display an image, use `display_image`.
- To visualize a point cloud (in Open3D if installed, otherwise matplotlib), use `show_point_cloud`.
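One way to picture the `get_viz_backend`/`set_viz_backend` pair is as a validated module-level switch. This is a sketch of how such a toggle might work, not AgML's internals; the valid-backend tuple and default value are assumptions.

```python
_VALID_BACKENDS = ("cv2", "matplotlib")
_backend = "matplotlib"  # assumed default for this sketch

def get_viz_backend():
    # Return the currently active display backend.
    return _backend

def set_viz_backend(name):
    # Switch the display backend, rejecting unknown names. Sketch only:
    # AgML's real functions may validate and store state differently.
    global _backend
    if name not in _VALID_BACKENDS:
        raise ValueError(f"Unknown backend {name!r}; choose from {_VALID_BACKENDS}.")
    _backend = name

set_viz_backend("cv2")
print(get_viz_backend())  # -> cv2
```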
## Other Changes
- **Major Change for New Users**: `torch`, `torchvision`, and other related modeling packages for the `agml.models` package are no longer distributed with the AgML requirements; you must install them yourself if you want to use `agml.models`.
### Minor/Functional Changes + Bugfixes
- Fixed backend swapping between TensorFlow and PyTorch when using clashing transforms.
- Added the ability to run prediction with classification and segmentation models without normalizing the input image, using `model.predict(..., normalize = False)`.
- Images no longer auto-display as matplotlib figures when using `agml.viz.show_*` methods; instead, they are returned as image arrays and can be displayed in any desired format.
- Improved access to and setting of information in the backend `config.json` file, so that information is not accidentally overwritten.
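The `normalize` switch on `model.predict` described above can be pictured with a toy sketch. The real models operate on image tensors; this scalar version, and the fixed 255 scale factor, are illustrative assumptions only, and the model call itself is omitted.

```python
def predict(image, normalize=True):
    # Sketch of the `normalize` toggle: when True, pixel values in
    # [0, 255] are scaled to [0, 1] before inference; when False, the raw
    # values pass through untouched (useful when the input is already
    # normalized). Illustration only, not AgML's implementation.
    if normalize:
        return [pixel / 255.0 for pixel in image]
    return list(image)

print(predict([0, 255]))                   # -> [0.0, 1.0]
print(predict([0, 255], normalize=False))  # -> [0, 255]
```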
**Full Changelog**: https://github.com/Project-AgML/AgML/compare/v0.4.7...v0.5.0