Logprep

Latest version: v14.0.0

9.0.3

Breaking

Features

* make `thread_count`, `queue_size` and `chunk_size` configurable for `parallel_bulk` in opensearch output connector
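
  A minimal sketch of how these options might be set on the opensearch output connector. Only `thread_count`, `queue_size` and `chunk_size` come from this release; the connector type name, host and the surrounding fields are assumptions for illustration:

  ```yaml
  output:
    my_opensearch_output:
      type: opensearch_output            # type name and fields below are assumptions
      hosts:
        - "opensearch.example.local:9200"
      default_index: logprep-events
      # new in 9.0.3: tuning knobs passed through to parallel_bulk
      thread_count: 4                    # worker threads used by parallel_bulk
      queue_size: 4                      # size of the task queue feeding the workers
      chunk_size: 500                    # documents per bulk request
  ```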

Improvements

Bugfix

* fix `parallel_bulk` implementation not delivering messages to opensearch

9.0.2

Bugfix

* remove duplicate pseudonyms in extra outputs of pseudonymizer

9.0.1

Breaking

Features

Improvements

* use parallel_bulk api for opensearch output connector

Bugfix

9.0.0

Breaking

* remove the possibility to inject auth credentials via the url string, because of the risk of leaking credentials in logs
  - if you want to use basic auth, then you have to set the environment variables
    * :code:`LOGPREP_CONFIG_AUTH_USERNAME=<your_username>`
    * :code:`LOGPREP_CONFIG_AUTH_PASSWORD=<your_password>`
  - if you want to use oauth, then you have to set the environment variables
    * :code:`LOGPREP_CONFIG_AUTH_TOKEN=<your_token>`
    * :code:`LOGPREP_CONFIG_AUTH_METHOD=oauth`

Features

Improvements

* improve error message on empty rule filter
* reimplemented `pseudonymizer` processor (see the config sketch after this list)
  - rewrote tests to reach 100% coverage
  - cleaned up code
  - reimplemented caching using Python's `lru_cache`
  - add cache metrics
  - removed `max_caching_days` config option
  - add `max_cached_pseudonymized_urls` config option which defaults to 1000
  - add lru caching for pseudonymization of urls
* improve loading times for the rule tree by optimizing the rule segmentation and sorting
* add support for python 3.12 and remove support for python 3.9
* always check the existence of a field for negated key-value based lucene filter expressions
* add kafka exporter to quickstart setup
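
A hedged sketch of a pseudonymizer entry in a pipeline config showing the new cache option. Only `max_cached_pseudonymized_urls` comes from this release; the processor name and rule paths are assumptions, and the required pseudonymizer options (keys, hash salt, regex mapping) are omitted:

```yaml
pipeline:
  - my_pseudonymizer:
      type: pseudonymizer
      specific_rules:
        - rules/pseudonymizer/specific   # hypothetical rule paths
      generic_rules:
        - rules/pseudonymizer/generic
      # new option replacing max_caching_days; defaults to 1000
      max_cached_pseudonymized_urls: 1000
      # further required pseudonymizer options omitted
```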

Bugfix

* fix the rule tree parsing some rules incorrectly, potentially resulting in more matches
* fix `confluent_kafka` commit issue after kafka did some rebalancing; also fixes negative offsets

8.0.0

Breaking

* reimplemented metrics so the former metrics configuration won't work anymore
* metric content changed and existing grafana dashboards will break
* new rule `id` could possibly break configurations if the same rule is used in both rule trees
  - can be fixed by adding a unique `id` to each rule or deleting the possibly redundant rule

Features

* add the possibility to convert hex to int in the `calculator` processor with the newly added function `from_hex` (see the rule sketch after this list)
* add metrics on rule level
* add grafana example dashboards under `examples/exampledata/config/grafana/dashboards`
* add new configuration field `id` for all rules to identify rules in metrics and logs
  - if no `id` is given, the `id` will be generated in a stable way
  - add verification of rule `id` uniqueness on processor level over both rule trees to ensure metrics are counted correctly on rule level
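
A hedged sketch of a calculator rule combining the new `from_hex` function with the new rule `id`. The filter, field names, expression syntax and the exact placement of `id` inside the rule are assumptions:

```yaml
# hypothetical calculator rule for illustration
filter: "protocol.hex_flags"
calculator:
  id: calculator-from-hex-example          # new: stable, unique rule id used in metrics and logs
  calc: "from_hex(${protocol.hex_flags})"  # new: convert a hex string to an integer
  target_field: protocol.flags
description: convert the hex flags field to an integer
```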

Improvements

* reimplemented the prometheus metrics exporter to provide gauge, histogram and counter metrics
* removed the shared counter, because it is redundant to the metrics
* get exception stack traces by setting the environment variable `DEBUG`

Bugfix

7.0.0

Breaking

* removed metric file target
* move kafka config options to the `kafka_config` dictionary for the `confluent_kafka_input` and `confluent_kafka_output` connectors (see the sketch below)
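
A hedged sketch of the new layout, with librdkafka options nested under `kafka_config`. The connector type name, topic and option values are assumptions; `bootstrap.servers`, `group.id` and `enable.auto.commit` are standard librdkafka options:

```yaml
input:
  my_kafka_input:
    type: confluentkafka_input        # type name is an assumption
    topic: logprep-events             # hypothetical topic
    kafka_config:
      # librdkafka options now live in this dictionary instead of top-level fields
      bootstrap.servers: "kafka01.example.local:9092"
      group.id: logprep-consumer
      enable.auto.commit: "false"     # with false, logprep commits manually (see features below)
```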

Features

* add a preprocessor to enrich events by the system's env variables
* add option to define rules inline in the pipeline config under the processor configs `generic_rules` or `specific_rules` (see the sketch after this list)
* add option to `field_manager` to ignore missing source fields to suppress warnings and failure tags
* add `ignore_missing_source_fields` behavior to `calculator`, `concatenator`, `dissector`, `grokker`, `ip_informer`, `selective_extractor`
* kafka input connector
  - implemented manual commit behaviour if `enable.auto.commit: false`
  - implemented on_commit callback to check for errors during commit
  - implemented statistics callback to collect metrics from the underlying librdkafka library
  - implemented per partition offset metrics
  - get logs and handle errors from the underlying librdkafka library
* kafka output connector
  - implemented statistics callback to collect metrics from the underlying librdkafka library
  - get logs and handle errors from the underlying librdkafka library
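
A hedged sketch of a `field_manager` processor with one rule defined inline under `specific_rules` instead of a file path, also using an ignore-missing-source-fields switch. The processor name, rule fields, inline rule shape and the exact option name are assumptions:

```yaml
pipeline:
  - my_field_manager:
      type: field_manager
      generic_rules: []
      specific_rules:
        # rule defined inline in the pipeline config instead of a file path
        - filter: "message"
          field_manager:
            source_fields: ["message"]      # hypothetical fields
            target_field: "event.original"
            ignore_missing_fields: true     # exact option name is an assumption
          description: copy message without warnings or failure tags if the source field is missing
```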

Improvements

* `pre_detector` processor now adds the field `creation_timestamp` to pre-detections; it contains the time at which a pre-detection was created by the processor
* add `prometheus` and `grafana` to the quickstart setup to support development
* provide confluent kafka test setup to run tests against a real kafka cluster

Bugfix

* fix CVE-2023-37920 Removal of e-Tugra root certificate
* fix CVE-2023-43804 `Cookie` HTTP header isn't stripped on cross-origin redirects
* fix CVE-2023-37276 aiohttp.web.Application vulnerable to HTTP request smuggling via llhttp HTTP request parser
