## New features

### Parallel reading of data
See `test/testing.py` for a full example. Adjust the code around your `APReader` call as follows:
```python
import multiprocessing as mp
from apread import APReader
...

if __name__ == '__main__':  # this line has to be included!
    # create the pool without 'processes=...'!
    pool = mp.Pool()

    # pass the pool to the reader
    reader = APReader(file, parallelPool=pool)

    # make sure to close the pool after you are done with it
    pool.close()
    pool.join()
```
For parallel loading to work, you have to define a pool of worker processes in your top-level script; it is then used from within the `APReader` functions. When called with no arguments, `mp.Pool()` creates as many processes as your CPU has hardware threads (cores + virtual cores). Passing a larger `processes=` value does not help: `APReader` uses at most as many processes as there are CPU threads, so the degree of parallelism is fixed and cannot be increased by enlarging the pool.
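As a rough illustration of that default (standard library only, nothing `APReader`-specific):

```python
import multiprocessing as mp

if __name__ == '__main__':
    # mp.Pool() defaults to mp.cpu_count() worker processes,
    # i.e. the number of hardware threads (cores + virtual cores)
    print(mp.cpu_count())

    # explicitly equivalent to mp.Pool(); a larger value adds no parallelism in APReader
    pool = mp.Pool(processes=mp.cpu_count())
    pool.close()
    pool.join()
```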
> Keep in mind that parallelisation is not always faster: spawning processes is expensive and can be wasteful for small files.
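For instance, you could fall back to sequential reading for small files. This is only a sketch: the file path, the `os.path.getsize` check, and the 1 MB threshold are illustrative choices, not part of `APReader`:

```python
import os
import multiprocessing as mp

from apread import APReader

if __name__ == '__main__':
    file = 'measurement.bin'  # hypothetical example path

    if os.path.getsize(file) > 1_000_000:  # illustrative threshold: 1 MB
        # large file: the process-spawning cost is likely worth it
        pool = mp.Pool()
        reader = APReader(file, parallelPool=pool)
        pool.close()
        pool.join()
    else:
        # small file: read sequentially
        reader = APReader(file)
```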
The results returned by `APReader` stay the same, so you can continue your analysis unchanged.
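For example, downstream code like the following should behave the same with or without a pool (the attribute names `reader.Channels`, `channel.Name`, and `channel.data` are assumptions based on typical `apread` usage, not guaranteed by this release note):

```python
# continues the snippet above; identical with or without a parallel pool
for channel in reader.Channels:             # assumed attribute
    print(channel.Name, len(channel.data))  # assumed attributes
```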
## Improvements
- Fixed a typo in `Group.intervalstr` (micro- and nanoseconds were swapped)
- The unit of `Group.interval` is now seconds (see the sketch below)
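Since `Group.interval` is now in seconds, a sampling rate in Hz can be derived directly. A minimal sketch, where `reader.Groups` is an assumed attribute while `interval` and `intervalstr` are the members named above:

```python
for group in reader.Groups:    # assumed attribute
    fs = 1.0 / group.interval  # interval is in seconds as of 1.1.1
    print(group.intervalstr, fs, 'Hz')
```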
## What's Changed
* Development on Version 1.1.1 by leonbohmann in https://github.com/leonbohmann/APReader/pull/18
**Full Changelog**: https://github.com/leonbohmann/APReader/compare/v1.1.0...v1.1.1