EQcorrscan

Latest version: v0.5.0


0.5.0

* core.match_filter.tribe
- Significant re-write of detect logic to take advantage of parallel steps (see 544)
- Significant re-structure of hidden functions.
* core.match_filter.matched_filter
- 5x speed up for MAD threshold calculation with parallel (threaded) MAD
calculation (531).
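The threaded-MAD pattern can be sketched in plain Python (the helper names below are illustrative, not EQcorrscan's internals; the real speed-up comes from GIL-releasing array operations, whereas this stdlib version only shows the threading structure):

```python
import statistics
from concurrent.futures import ThreadPoolExecutor

def mad(values):
    """Median absolute deviation of a sequence."""
    med = statistics.median(values)
    return statistics.median(abs(v - med) for v in values)

def threaded_mad(cccsums, max_workers=4):
    """Compute the MAD of each cross-correlation sum concurrently.

    Each template's correlation sum is independent, so the per-template
    MADs can be computed in a thread pool.
    """
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(mad, cccsums))
```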
* core.match_filter.detect
- 1000x speedup for retrieving unique detections for all templates.
- 30x speedup in handling detections (50x speedup in selecting detections,
4x speedup in adding prepick time)
* core.match_filter.template
- new quick_group_templates function for 50x quicker template grouping.
- Templates with nan channels will be considered equal to other templates with shared
nan channels.
- New grouping strategy to minimise nan-channels - templates are grouped by
similar seed-ids. This should speed up both correlations and
prep_data_for_correlation. See PR 457.
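Grouping by shared seed IDs can be sketched with a dictionary keyed on the set of channel IDs (a simplified stand-in for Template objects; `quick_group_templates` itself works on full templates):

```python
from collections import defaultdict

def group_by_seed_ids(templates):
    """Group templates whose traces cover the same set of seed IDs.

    `templates` maps template name -> iterable of seed IDs, a
    simplified stand-in for EQcorrscan Template objects. Templates
    sharing a seed-ID set need no NaN-channel padding when correlated
    together.
    """
    groups = defaultdict(list)
    for name, seed_ids in templates.items():
        groups[frozenset(seed_ids)].append(name)
    return list(groups.values())
```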
* utils.pre_processing
- `_prep_data_for_correlation`: 3x speedup for filling NaN-traces in templates
- New function `quick_trace_select` for a very efficient selection of traces
by seed ID without wildcards (4x speedup).
- `process`, `dayproc` and `shortproc` replaced by `multi_process`. Deprecation
warning added.
- `multi_process` implements multithreaded GIL-releasing parallelism of slow
sections (detrending, resampling and filtering) of the processing workflow.
Multiprocessing is no longer supported or needed for processing. See PR 540
for benchmarks. New approach is slightly faster overall, and significantly
more memory efficient (uses c. 6x less memory than old multiprocessing approach
on a 12 core machine)
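The exact-match selection behind `quick_trace_select` amounts to a one-off dictionary index over seed IDs instead of repeated wildcard scans; a minimal sketch, using (seed_id, data) tuples as a stand-in for obspy Traces:

```python
def build_trace_index(stream):
    """Index traces by seed ID for O(1) exact-match selection.

    `stream` is a list of (seed_id, data) tuples standing in for an
    obspy Stream; the real function works on Trace objects.
    """
    index = {}
    for seed_id, data in stream:
        index.setdefault(seed_id, []).append(data)
    return index

def quick_select(index, seed_id):
    """Return all traces exactly matching seed_id (no wildcards)."""
    return index.get(seed_id, [])
```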
* utils.correlate
- 25 % speedup for `_get_array_dicts` with quicker access to properties.
* utils.catalog_to_dd
- _prepare_stream
- Now more consistently slices templates to length = extract_len * samp_rate
so that the user receives fewer warnings about insufficient data.
- write_correlations
- New option `use_shared_memory` to speed up correlation of many events by
ca. 20 % by moving trace data into shared memory.
- Add ability to weight correlations by raw correlation rather than just
correlation squared.
* utils.cluster.decluster_distance_time
- Bug-fix: fix segmentation fault when declustering more than 46340 detections
with hypocentral_separation.
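Distance-time declustering follows a greedy keep-the-strongest pattern; a pure-Python sketch of the idea (the C implementation differs, and the flat-Earth distance helper here is a crude stand-in for a proper geographic calculation):

```python
import math

def _separation_km(p1, p2):
    """Approximate hypocentral separation (km) between (lat, lon,
    depth_km) points; only reasonable for nearby events."""
    dx = (p1[0] - p2[0]) * 111.0  # degrees latitude -> km (approx.)
    dy = (p1[1] - p2[1]) * 111.0 * math.cos(math.radians((p1[0] + p2[0]) / 2))
    dz = p1[2] - p2[2]
    return math.sqrt(dx * dx + dy * dy + dz * dz)

def decluster(detections, trig_int, hypocentral_separation):
    """Greedy decluster: keep the strongest detection, then discard any
    weaker detection that is close in BOTH time and space, so that
    nearby-in-time but spatially distinct detections survive.

    Each detection is (time_s, detect_val, (lat, lon, depth_km)).
    """
    kept = []
    for det in sorted(detections, key=lambda d: abs(d[1]), reverse=True):
        clash = any(
            abs(det[0] - k[0]) <= trig_int
            and _separation_km(det[2], k[2]) <= hypocentral_separation
            for k in kept
        )
        if not clash:
            kept.append(det)
    return kept
```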

0.4.4

* core.match_filter
- Bug-fix: peak-cores could be defined twice in _group_detect through kwargs.
Fix: only update peak_cores if it isn't there already.
* core.match_filter.tribe
- Detect now allows passing of pre-processed data
* core.match_filter.template
- Remove duplicate detections from overlapping windows using `._uniq()`
* core.lag_calc._xcorr_interp
- CC-interpolation replaced with resampling (more robust), old method
deprecated. Use the new method by passing use_new_resamp_method=True as a kwarg.
* core.lag_calc
- Added new option all_vert to transfer P-picks to all channels defined as
vertical_chans.
- Made usage of all_vert, all_horiz consistent across the lag_calc.
- Fixed bug where minimum CC defined via min_cc_from_mean_cc_factor was not
set correctly for negative correlation sums.
* core.template_gen
- Added new option all_vert to transfer P-picks to all channels defined as
vertical_chans.
- Made handling of horizontal_chans and vertical_chans consistent so that user
can freely choose relevant channels.
* utils.correlate
- Fast Matched Filter now supported natively for version >= 1.4.0
- Only full correlation stacks are returned now (e.g. where fewer than
the full number of channels are in the stack at the end of the stack, zeros
are returned).
* utils.mag_calc.relative_magnitude
- fixed bug where S-picks / traces were used for relative-magnitude calculation
against the user's choice.
- implemented full magnitude bias-correction for CC and SNR
* utils.mag_calc.relative_amplitude:
- returns dicts for SNR measurements
* utils.catalog_to_dd.write_correlations
- Fixed bug in parallel execution.
- Added parallel-options for catalog-dt measurements and for stream-preparation
before cross correlation-dt measurements.
- Default parallelization of dt-computation is now across events (loads CPUs
more efficiently), and there is a new option `max_trace_workers` to use
the old parallelization strategy across traces.
- Now includes an `all_horiz` option that will correlate all matching horizontal
channels, regardless of which of these the S-pick is linked to.
* utils.clustering
- Allow indirect comparison of event waveforms (i.e., events without
matching traces can be compared indirectly via a third event).
- Allow setting clustering method, metric, and sort_order from
scipy.cluster.hierarchy.linkage.
* tribe, template, template_gen, archive_read, clustering: remove option to read
from SeisHub (deprecated in ObsPy).

0.4.3

* core.match_filter
- match_filter:
- Provide option of exporting the cross-correlation sums for additional later
analysis.
* core.match_filter.party.write
- BUG-FIX: When `format='tar'` is selected, the check for the .tgz file
suffix is now applied before checking the filename against existing
files. Previously, when a filename without the '.tgz' suffix was
supplied, the file was unintentionally overwritten.
- Add option `overwrite=True` to allow overwriting of existing files.
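The suffix-before-existence-check ordering can be sketched as a small stand-alone helper (hypothetical name; in EQcorrscan the check lives inside `Party.write` itself):

```python
from pathlib import Path

def resolve_party_filename(filename, overwrite=False):
    """Apply the '.tgz' suffix BEFORE checking for an existing file,
    so that 'out' and 'out.tgz' refer to the same target.

    Illustrative helper only; raises rather than silently overwriting
    unless overwrite=True is passed.
    """
    if not filename.endswith(".tgz"):
        filename += ".tgz"
    path = Path(filename)
    if path.exists() and not overwrite:
        raise FileExistsError(f"{path} exists; use overwrite=True to replace it")
    return path
```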
* core.match_filter.party.read
- BUG-FIX: Ensure wildcard reading works as expected: 453
* core.match_filter.party.rethreshold:
- added option to rethreshold based on absolute values to keep relevant
detections with large negative detect_val.
* core.lag_calc:
- Added option to set minimum CC threshold individually for detections based
on: min(detect_val / n_chans * min_cc_from_mean_cc_factor, min_cc).
- Added the ability to save the correlation data from lag_calc.
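The per-detection threshold formula above can be written out directly (function name is illustrative; `abs()` reflects the later 0.4.4 fix for negative correlation sums):

```python
def per_detection_min_cc(detect_val, n_chans, min_cc,
                         min_cc_from_mean_cc_factor):
    """Per-detection CC threshold: scale the detection's mean
    single-channel correlation, capped at the global min_cc.

    min(detect_val / n_chans * min_cc_from_mean_cc_factor, min_cc)
    """
    mean_cc = abs(detect_val) / n_chans
    return min(mean_cc * min_cc_from_mean_cc_factor, min_cc)
```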
* core.template_gen:
- Added support for generating templates from any object with a
get_waveforms method. See 459.
* utils.mag_calc.calc_b_value:
- Added useful information to doc-string regarding method and meaning of
residuals
- Changed the number of magnitudes used to an int (from a string!?)
* utils.mag_calc.relative_magnitude:
- Refactor so that `min_cc` is used regardless of whether
`weight_by_correlation` is set. See issue 455.
* utils.archive_read
- Add support for wildcard-comparisons in the list of requested stations and
channels.
- New option `arctype='SDS'` to read from a SeisComp Data Structure (SDS).
This option is also available in `utils.clustering.extract_detections` and
in `utils.archive_read._check_available_data`.
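Wildcard comparison of requested stations and channels can be sketched with the stdlib `fnmatch` module (a simplified illustration, not the actual EQcorrscan code path):

```python
from fnmatch import fnmatch

def matching_channels(available, requested_stations, requested_channels):
    """Select (station, channel) pairs where the station and channel
    each match at least one requested Unix-style wildcard pattern."""
    return [
        (sta, cha)
        for sta, cha in available
        if any(fnmatch(sta, pat) for pat in requested_stations)
        and any(fnmatch(cha, pat) for pat in requested_channels)
    ]
```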
* utils.catalog_to_dd
- Bug-fixes in 424:
- only P and S phases are used now (previously spurious amplitude picks
were included in correlations);
- Checks for length are done prior to correlations and more helpful error
outputs are provided.
- Progress is now reported within dt.cc computation
- `write_station` now supports writing elevations: 424.
* utils.clustering
- For `cluster`, `distance_matrix` and `cross_chan_correlation`, implemented
full support for `shift_len != 0`. The latter two functions now return, in
addition to the distance-matrix, a shift-matrix (both functions) and a
shift-dictionary (for `distance_matrix`). New option for shifting streams
as a whole or letting traces shift individually
(`allow_individual_trace_shifts=True`).
* utils.plotting
- Function added (twoD_seismplot) for plotting seismicity (365).

0.4.2

* Add seed-ids to the _spike_test's message.
* utils.correlate
- Cross-correlation normalisation errors no longer raise an error
- When "out-of-range" correlations occur a warning is given by the C-function
with details of what channel, what template and where in the data vector
the issue occurred for the user to check their data.
- Out-of-range correlations are set to 0.0
- After extensive testing these errors have always been related to data issues
within regions where correlations should not be computed (spikes, step
artifacts due to incorrectly padded data gaps).
- USERS SHOULD BE CAREFUL TO CHECK THEIR DATA IF THEY SEE THESE WARNINGS
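The warn-and-zero behaviour can be sketched in Python (the real check happens inside the C correlation routines; names and the float-slack tolerance are illustrative):

```python
import warnings

def clamp_correlations(ccc, template_id, seed_id, tol=1.0001):
    """Zero out correlation values outside [-1, 1], warning with enough
    context (template, channel, sample index) for the user to inspect
    the raw data at that point."""
    cleaned = []
    for i, value in enumerate(ccc):
        if abs(value) > tol:
            warnings.warn(
                f"Out-of-range correlation {value} for template "
                f"{template_id} on {seed_id} at sample {i}; check the "
                f"data for spikes or badly padded gaps")
            value = 0.0
        cleaned.append(value)
    return cleaned
```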
* utils.mag_calc.amp_pick_event
- Added option to output IASPEI standard amplitudes, with static amplification
of 1 (rather than 2080 as per Wood Anderson specs).
- Added `filter_id` and `method_id` to amplitudes to make these methods more
traceable.
* core.match_filter
- Bug-fix - cope with data that are too short with `ignore_bad_data=True`.
This flag is generally not advised, but when used, may attempt to trim all
data to zero length. The expected behaviour is to remove bad data and run
with the remaining data.
- Party:
- decluster now accepts a hypocentral_separation argument. This allows
the inclusion of detections that occur close in time, but not in space.
This is underwritten by a new findpeaks.decluster_dist_time function
based on a new C-function.
- Tribe:
- Add monkey-patching for clients that do not have a `get_waveforms_bulk`
method for use in `.client_detect`. See issue 394.
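The monkey-patch amounts to attaching a shim that loops over `get_waveforms`; a minimal sketch (the real shim merges results into a single obspy Stream rather than returning a list):

```python
def ensure_get_waveforms_bulk(client):
    """If `client` lacks a get_waveforms_bulk method, attach a shim
    that loops over single get_waveforms calls, so the client can be
    used where bulk requests are expected."""
    if hasattr(client, "get_waveforms_bulk"):
        return client

    def get_waveforms_bulk(bulk):
        # bulk entries: (network, station, location, channel, start, end)
        return [client.get_waveforms(net, sta, loc, cha, t1, t2)
                for net, sta, loc, cha, t1, t2 in bulk]

    client.get_waveforms_bulk = get_waveforms_bulk
    return client
```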
* utils.pre_processing
- Only templates that need to be reshaped are reshaped now - this can be a lot
faster.

0.4.1

* core.match_filter
- BUG-FIX: Empty families are no longer run through lag-calc when using
Party.lag_calc(). Previously this resulted in a "No matching data" error,
see 341.
* core.template_gen
- BUG-FIX: Fix bug where events were incorrectly associated with templates
in `Tribe().construct()` if the given catalog contained events outside
of the time-range of the stream. See issue 381 and PR 382.
* utils.catalog_to_dd
- Added ability to turn off parallel processing (this is turned off by
default now) for `write_correlations` - parallel processing for moderate
to large datasets was copying far too much data and using lots of memory.
This is a short-term fix - ideally we will move filtering and resampling to
C functions with shared-memory parallelism and GIL releasing.
See PR 374.
- Moved parallelism for `_compute_dt_correlations` to the C functions to
reduce memory overhead. Using a generator to construct sub-catalogs rather
than making a list of lists in memory. See issue 361.
* utils.mag_calc:
- `amp_pick_event` now works on a copy of the data by default
- `amp_pick_event` uses the appropriate digital filter gain to correct the
applied filter. See issue 376.
- `amp_pick_event` rewritten for simplicity.
- `amp_pick_event` now has simple synthetic tests for accuracy.
- `_sim_wa` uses the full response information to correct to velocity
this includes FIR filters (previously not used), and ensures that the
wood-anderson poles (with a single zero) are correctly applied to velocity
waveforms.
- `calc_max_curv` is now computed using the non-cumulative distribution.
* Fixed a problem in _match_filter_plot: it now shows all new detections.
* Add plotdir to eqcorrscan.core.lag_calc.lag_calc function to save the images.

0.4.0

* Change resampling to use pyFFTW backend for FFT's. This is an attempt to
alleviate issues related to large-prime length transforms. This requires an
additional dependency, but EQcorrscan already depends on FFTW itself (316).
* Refactor of catalog_to_dd functions (322):
- Speed-ups, using new correlation functions and better resource management
- Removed enforcement of seisan, arguments are now standard obspy objects.
* Add plotdir to lag-calc, template construction and matched-filter detection
methods and functions (330, 325).
* Wholesale re-write of lag-calc function and methods. External interface is
similar, but some arguments have been deprecated as they were unnecessary (321).
- This was done to make use of the new internal correlation functions which
are faster and more memory efficient.
- Party.lag_calc and Family.lag_calc now work in-place on the events in
the grouping.
- Added relative_mags method to Party and Family; this can be called from
lag-calc to avoid reprocessing data.
- Added lag_calc.xcorr_pick_family as a public facing API to implement
correlation re-picking of a group of events.
* Renamed utils.clustering.cross_chan_coherence to
utils.clustering.cross_chan_correlation to better reflect what it actually
does.
* Add --no-mkl flag for setup.py to force the FFTW correlation routines not
to compile against Intel's MKL. On NeSI systems MKL is currently causing
issues.
* BUG-FIX: `eqcorrscan.utils.mag_calc.dist_calc` calculated the long-way round
the Earth when changing hemispheres. We now use the Haversine formula, which
should give better results at short distances, and does not use a flat-Earth
approximation, so is better suited to larger distances as well.
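The Haversine-based distance can be sketched as follows (the combination of epicentral distance with depth via Pythagoras is a simplification of whatever `dist_calc` does internally):

```python
import math

def dist_calc(loc1, loc2):
    """Hypocentral distance (km) between (lat, lon, depth_km) tuples:
    Haversine epicentral distance combined with the depth difference.

    A sketch of the fixed behaviour; unlike a flat-Earth approximation
    it takes the short way round when crossing hemispheres."""
    r = 6371.0  # mean Earth radius in km
    lat1, lon1, z1 = loc1
    lat2, lon2, z2 = loc2
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    epicentral = 2 * r * math.asin(math.sqrt(a))
    return math.sqrt(epicentral ** 2 + (z1 - z2) ** 2)
```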
* Add C-openmp parallel distance-clustering (speed-ups of ~100 times).
* Allow option to not stack correlations in correlation functions.
* Use compiled correlation functions for correlation clustering (speed-up).
* Add time-clustering for catalogs and change how space-time cluster works
so that it uses the time-clustering, rather than just throwing out events
outside the time-range.
* Changed all prints to calls to logging, as a result, debug is no longer
an argument for function calls.
* `find-peaks` replaced by compiled peak finding routine - more efficient
both in memory and time 249 - approx 50x faster
* Note that the results of the C-func and the Python functions are slightly
different. The C function (now the default) is more stable when peaks
are small and close together (e.g. in noisy data).
* multi-find peaks makes use of openMP parallelism for more efficient
memory usage 249
* enforce normalization of continuous data before correlation to avoid float32
overflow errors that result in correlation errors (see pr 292).
* Add SEC-C style chunked cross-correlations. This is both faster and more
memory efficient. This is now used by default with an fft length of
2 ** 13. This was found to be consistently the fastest length in testing.
This can be changed by the user by passing the `fft_len` keyword argument.
See PR 285.
* Outer-loop parallelism has been disabled for all systems now. This was not
useful in most situations and is hard to maintain.
* Improved support for compilation on RedHat systems
* Refactored match-filter into smaller files. Namespace remains the same.
This was done to ease maintenance - the match_filter.py file had become
massive and was slow to load and process in IDEs.
* Refactored `_prep_data_for_correlation` to reduce looping for speed,
now approximately six times faster than previously (minor speed-up)
* Now explicitly doesn't allow templates with different length traces -
previously this was ignored and templates with different length
channels to other templates had their channels padded with zeros or
trimmed.
* Add `skip_short_channels` option to template generation. This allows users
to provide data of unknown length and short channels will not be used, rather
than generating an error. This is useful for downloading data from
datacentres via the `from_client` method.
* Remove pytest_namespace in conftest.py to support pytest 4.x
* Add `ignore_bad_data` kwarg for all processing functions, if set to True
(defaults to False for continuity) then any errors related to bad data at
process-time will be suppressed and empty traces returned. This is useful
for downloading data from datacentres via the `from_client` method when
data quality is not known.
* Added relative amplitude measurements as
`utils.mag_calc.relative_amplitude` (306).
* Added relative magnitude calculation using relative amplitudes weighted by
correlations to `utils.mag_calc.relative_magnitude`.
* Added `relative_magnitudes` argument to
`eqcorrscan.core.match_filter.party.Party.lag_calc` to provide an in-flow
way to compute relative magnitudes for detected events.
* Events constructed from detections now include estimated origins alongside
the picks. These origins are time-shifted versions of the template origin and
should be used with caution. They are corrected for prepick (308).
* Picks in detection.event are now corrected for prepick *if* the template is
given. This is now standard in all Tribe, Party and Family methods. Picks will
not be corrected for prepick in match_filter (308).
* Fix 298 where the header was repeated in detection csv files. Also added
a `write_detections` function to `eqcorrscan.core.match_filter.detection`
to streamline writing detections.
* Remove support for Python 2.7.
* Add warning about unused data when using `Tribe.detect` methods with data that
do not fit into chunks. Fixes 291.
* Fix 179 when decimating for cccsum_hist in `_match_filter_plot`
* `utils.pre_processing` now uses the `.interpolate` method rather than
`.resample` to change the sampling rate of data. This is generally more
stable and faster than resampling in the frequency domain, but will likely
change the quality of correlations.
* Removed deprecated `template_gen` functions and `bright_lights` and
`seismo_logs`. See 315
* BUG-FIX: `eqcorrscan.core.template_gen` fixed a conflict with special
characters in output filenames on Windows. See issue 344
