In this release, the primary focus has been on mitigating the risk of MemoryError.
To achieve this, we implemented the following approaches:
- **Batch processing for aggregated output** - to handle large datasets more efficiently, aggregated results are now calculated in smaller, manageable batches, with the batch size derived from the available RAM. This minimizes the likelihood of hitting memory limits during calculations.
- **Preallocated memory for individual output** - the memory needed for individual output is now allocated before calculations begin. Reserving it up front means any MemoryError surfaces immediately, rather than unexpectedly partway through processing individual items.
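The two approaches above can be sketched roughly as follows. This is an illustrative outline, not the release's actual implementation: the function names are hypothetical, and the available-memory figure is passed in as a parameter here, whereas real code would query the operating system (e.g. via a library such as psutil).

```python
from array import array

def pick_batch_size(available_bytes, bytes_per_item, fraction=0.25):
    # Cap one batch at a fraction of available RAM so aggregation
    # never tries to hold the full dataset's intermediates at once.
    return max(1, int(available_bytes * fraction) // bytes_per_item)

def aggregate_in_batches(items, available_bytes, bytes_per_item=8):
    """Aggregate (here: sum) in RAM-sized batches instead of all at once."""
    batch = pick_batch_size(available_bytes, bytes_per_item)
    total = 0.0
    for start in range(0, len(items), batch):
        total += sum(items[start:start + batch])
    return total

def process_individual(items, fn):
    """Preallocate the full output buffer before any work begins,
    so memory is reserved in advance rather than grown mid-run."""
    out = array("d", bytes(8 * len(items)))  # n zero-initialized doubles
    for i, x in enumerate(items):
        out[i] = fn(x)
    return out
```

For example, `aggregate_in_batches(range(100), available_bytes=1024)` sums 100 values in batches of 32 items each, given the 25% RAM cap assumed above.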