EmbedSeg

Latest version: v0.2.5


1.1.0

2. Plan to add `tile and stitch` capability in release `v0.2.4` for handling large 2d and 3d images during inference (see the sketch after this list)
3. Plan to add a parameter `max_crops_per_image` in release `v0.2.4` to set an optional upper bound on the number of crops extracted from each image
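
Since `tile and stitch` is only planned, the following is a minimal sketch of the general idea for a 2d image, assuming a `predict` callable that maps a crop to a same-shaped per-pixel prediction. The function name, tile size and overlap are hypothetical; overlaps are blended by simple averaging, and stitching *instance labels* would additionally require matching labels across the overlaps:

```python
import numpy as np

def tile_and_stitch(image, predict, tile=256, overlap=32):
    """Run `predict` on overlapping 2d tiles and average the overlaps."""
    h, w = image.shape
    out = np.zeros((h, w), dtype=np.float32)
    weight = np.zeros((h, w), dtype=np.float32)
    step = tile - overlap
    for y in range(0, h, step):
        for x in range(0, w, step):
            # Clamp the tile origin so crops never run past the image border.
            y0 = min(y, max(h - tile, 0))
            x0 = min(x, max(w - tile, 0))
            crop = image[y0:y0 + tile, x0:x0 + tile]
            out[y0:y0 + crop.shape[0], x0:x0 + crop.shape[1]] += predict(crop)
            weight[y0:y0 + crop.shape[0], x0:x0 + crop.shape[1]] += 1.0
    return out / np.maximum(weight, 1.0)
```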

0.2.5

What's Changed
* v1.0.1: 2d + 3d code examples by lmanan in https://github.com/juglab/EmbedSeg/pull/6
* v0.2.3 by lmanan in https://github.com/juglab/EmbedSeg/pull/7
* Automatic calculation of crop size by lmanan in https://github.com/juglab/EmbedSeg/pull/13
* v0.2.5 - tag (a) by lmanan in https://github.com/juglab/EmbedSeg/pull/19
* V0.2.5 - tag (b) by lmanan in https://github.com/juglab/EmbedSeg/pull/20
* V0.2.5 - tag (c) by lmanan in https://github.com/juglab/EmbedSeg/pull/21
* V0.2.5 - tag (d) by lmanan in https://github.com/juglab/EmbedSeg/pull/22
* Update train.py by ajinkya-kulkarni in https://github.com/juglab/EmbedSeg/pull/29
* Update utils.py by ajinkya-kulkarni in https://github.com/juglab/EmbedSeg/pull/28

New Contributors
* ajinkya-kulkarni made their first contribution in https://github.com/juglab/EmbedSeg/pull/29

**Full Changelog**: https://github.com/juglab/EmbedSeg/compare/v0.2.0...v0.2.5

v0.2.4-tag
This release was used to compute numbers for the MIDL [publication](https://openreview.net/forum?id=JM6GuFGayL5) and is stable.

- The normalization of the image intensities was done by dividing pixel intensities by 255 (for 8-bit images) or 65535 (for unsigned 16-bit images). While this normalization strategy led to faster training, it sometimes led to poorer out-of-distribution (OOD) performance. In future releases, the default will be set to `min-max-percentile` (the model takes longer to reach the same validation IoU but shows better inference performance).
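
For reference, a minimal sketch of percentile-based min-max normalization; the percentile values (1 and 99.8) are illustrative defaults, not necessarily the ones EmbedSeg adopts:

```python
import numpy as np

def normalize_min_max_percentile(x, pmin=1.0, pmax=99.8, eps=1e-20):
    """Map the [pmin, pmax] intensity percentiles to [0, 1].

    Unlike dividing by the dtype maximum (255 or 65535), this adapts to
    the actual intensity range of each image, which helps on OOD data.
    """
    lo = np.percentile(x, pmin)
    hi = np.percentile(x, pmax)
    return (x.astype(np.float32) - lo) / (hi - lo + eps)
```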

v0.2.3-tag
A minor update since release **`v0.2.2`**. This includes:
- Add the **`display_zslice`** and **`save_checkpoint_frequency`** parameters to the `configs` dictionary **[here](https://github.com/juglab/EmbedSeg/blob/1f8fe3e3cfbb16a18a1db15510b6da26667baec0/EmbedSeg/utils/create_dicts.py#L305)**
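
A sketch of how these parameters might be passed when building the `configs` dictionary, assuming the `create_configs` helper in the linked `create_dicts.py`; the surrounding argument values are illustrative, and the exact signature is in the linked source:

```python
from EmbedSeg.utils.create_dicts import create_configs

# Illustrative values; check the linked create_dicts.py for the full signature.
configs = create_configs(
    save_dir='experiment',
    resume_path=None,
    n_epochs=200,
    display_zslice=16,             # z-slice shown in 3d training visualizations
    save_checkpoint_frequency=10,  # save a model checkpoint every 10 epochs
)
```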

1. Support for visualization in setups where `virtual_batch_multiplier` > 1 is still missing.

0.2.4

6. Plan to fix a bug in the evaluation of `var_loss` and to obtain crops of the desired size through additional padding.
7. Plan to include support for more classes.
8. Normalize 3d images over axes `(0, 1, 2)`
9. Make normalization the default option for better extensibility
10. Parallelize operations like cropping
11. Eliminate the specification of grid size in notebooks; set it to a sensible default value
12. Simplify notebooks further
13. Make colab versions of the notebooks
14. Test `center=learn` capability for learning the center freely
15. Add the ILP formulation for stitching 2d instance predictions
16. Add the code for converting predictions from a 2d model on xy, yz and xz slices into a 3d instance segmentation
17. Add more examples from medical image datasets
18. Add `threejs` visualizations of the instance segmentations. Explain how to generate these meshes, smooth them, and import them with a `threejs` script.
19. Pad with `reflection` instead of `constant` mode (see the sketch after this list)
20. Include `cluster_with_seeds` for cases where nuclei or cell detections are additionally available
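
A minimal NumPy illustration of item 19; the array and pad widths are arbitrary:

```python
import numpy as np

x = np.arange(100, dtype=np.float32).reshape(10, 10)

# Current behaviour: constant padding introduces hard artificial borders.
padded_const = np.pad(x, pad_width=4, mode='constant', constant_values=0)

# Planned behaviour: reflection padding continues the image content smoothly,
# so the network is less likely to mistake the border for an object edge.
padded_refl = np.pad(x, pad_width=4, mode='reflect')
```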

v0.2.2-tag
- Add all 3d example notebooks
- Pad images with the average background intensity instead of 0 (see the sketch below)
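
A minimal sketch of this padding behaviour; estimating the background as the image median is an assumption here, and any robust background statistic could be substituted:

```python
import numpy as np

def pad_with_background(image, pad_width):
    """Pad with an estimate of the background intensity instead of 0."""
    bg = float(np.median(image))  # assumed background estimator (illustrative)
    return np.pad(image, pad_width, mode='constant', constant_values=bg)
```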

0.2.0

Major changes:

- Add 3d example notebooks for two datasets
- Correct `min_object_size` (now evaluated by looking at the train and validation masks; see the sketch after this list)
- Save `tif` images with datatype `np.uint16` (in the prediction notebooks)
- Provide support in case evaluation GT images are not available (during prediction)
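
A sketch of how `min_object_size` can be derived from the ground-truth masks instead of being hard-coded; the function name is hypothetical:

```python
import numpy as np

def estimate_min_object_size(instance_masks):
    """Smallest instance size (in pixels) over labeled masks (0 = background).

    In the spirit of the fix above, this would be evaluated over the train
    and validation masks together.
    """
    sizes = []
    for mask in instance_masks:
        labels, counts = np.unique(mask, return_counts=True)
        sizes.extend(counts[labels != 0].tolist())
    return int(min(sizes))
```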

Some things which are still incorrect in v0.2.0:

- `n_y` should be set equal to `n_x` for equal pixel/voxel sizes in the y and x dimensions. This is fixed in v0.2.1
- `anisotropy_factor` is wrongly calculated for the 3d notebooks (it was calculated as the reciprocal; see the sketch after this list). This is fixed in v0.2.1
- `train_size` was set to 600 for the `bbbc012-2010` dataset. This is raised to 1200 in v0.2.1
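
For the `anisotropy_factor` fix in particular, the intended quantity is the ratio of axial to lateral voxel size; the values below are illustrative:

```python
# Voxel sizes in physical units, e.g. microns (illustrative values).
voxel_size_z = 2.0   # axial spacing
voxel_size_xy = 0.5  # lateral spacing

# v0.2.0 mistakenly computed the reciprocal (0.25) of this ratio.
anisotropy_factor = voxel_size_z / voxel_size_xy  # 4.0
```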

0.1.0

- Initial functional 2d code (`min_object_size` was hard-coded to 36 and will be updated in later iterations)
- Assets include:
  - 2d images and GT instance annotations
  - 3d images and GT instance annotations
  - fully trained models (`*demo.tar`), trained from scratch for up to 200 iterations
  - glasbey-like colormap (`cmap_60.npy`)
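
A sketch of using the bundled colormap to display instance labels; the path to `cmap_60.npy` is assumed to be the working directory:

```python
import matplotlib.pyplot as plt
import numpy as np
from matplotlib.colors import ListedColormap

# Load the glasbey-like colormap shipped with the assets (path assumed).
cmap_60 = ListedColormap(np.load('cmap_60.npy'))

# Show a labeled instance mask with visually distinct per-instance colors.
labels = np.random.randint(0, 60, size=(64, 64))
plt.imshow(labels, cmap=cmap_60, interpolation='none')
plt.show()
```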
