Import-tracker

Latest version: v3.2.1

3.1.2

Change Log

* Fix a bug with inheriting from a class that has been trapped by lazy imports (63)

3.1.1

Change Log

* Fix bugs with how `lazy_import_errors` interacts with other libraries that also modify `sys.meta_path` (62)
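
The `sys.meta_path` interaction is delicate because every import consults the registered finders in order. The sketch below is illustrative only (the finder name and behavior are hypothetical, not `import_tracker` internals): a well-behaved finder returns `None` from `find_spec` so that later finders still get a chance to resolve the module.

```python
import sys
import importlib.abc

class DeferredErrorFinder(importlib.abc.MetaPathFinder):
    """Hypothetical finder, sketching the kind of hook that
    lazy_import_errors installs on sys.meta_path."""

    def find_spec(self, fullname, path, target=None):
        # Returning None defers to the remaining finders, which is
        # what lets multiple meta_path-modifying libraries coexist.
        return None

finder = DeferredErrorFinder()
sys.meta_path.append(finder)  # appended last: consulted only as a fallback
try:
    import json  # still resolved by the standard finders
finally:
    sys.meta_path.remove(finder)
```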

3.1.0

Change Log

* Add support for identifying optional dependencies (inside `try/except/finally`) (61)
* Readme updates (60)
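
The pattern being detected is the common guarded import, where an `ImportError` is caught and the dependency becomes optional. A minimal sketch of that pattern (`yaml` is used purely as an example name):

```python
# Guarded import: if the module is missing, the program degrades
# gracefully instead of failing, so the dependency is *optional*.
try:
    import yaml  # third-party; may not be installed
    HAVE_YAML = True
except ImportError:
    yaml = None
    HAVE_YAML = False

def load_config(text):
    """Parse YAML when available, otherwise fall back to a stub."""
    if HAVE_YAML:
        return yaml.safe_load(text)
    return {"raw": text}
```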

3.0.0

Description

This is a major change! It fundamentally rewrites the core logic for tracking imports and rearranges the arguments to the import tracking functionality. The high-level gist is:

* Rather than capturing imports during the processing of an `import_module`, the imports are computed after importing the target module by recursively inspecting the bytecode for all modules stemming from the target.
* The tracking no longer needs to launch subprocesses to perform the recursion because it no longer relies on diffing `sys.modules`.
* It's _way_ faster!
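
A toy version of this bytecode-inspection idea, using only the standard library's `dis` module (this is a sketch of the technique, not `import_tracker`'s actual implementation):

```python
import dis
import types

def bytecode_imports(code):
    """Collect top-level module names referenced by IMPORT_NAME
    instructions, recursing into nested code objects (functions,
    classes, comprehensions)."""
    found = set()
    for instr in dis.get_instructions(code):
        if instr.opname == "IMPORT_NAME":
            # argval is the dotted module name being imported
            found.add(instr.argval.split(".")[0])
    for const in code.co_consts:
        if isinstance(const, types.CodeType):
            found |= bytecode_imports(const)
    return found

src = """
import json
from os import path

def helper():
    import math
"""
print(sorted(bytecode_imports(compile(src, "<demo>", "exec"))))
# prints ['json', 'math', 'os']
```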

But why?

OK, the old way was working pretty well, so why refactor it all? The obvious answer is speed, but the less obvious answer is the correct one: the old implementation was not answering the right question. The old implementation answered:

> What modules are brought into `sys.modules` between starting the import of `<target>` and concluding the import of `<target>`?

Instead, what we _really_ want to know is:

> If we stripped away all code not required for `<target>`, what modules would we need to have installed for the import of `<target>` to work?

The difference here comes down to whether you count siblings of nested dependencies. This is much easier to describe with an example:


```
deep_siblings/
├── __init__.py
├── blocks
│   ├── __init__.py
│   ├── bar_type
│   │   ├── __init__.py
│   │   └── bar.py imports alog
│   └── foo_type
│       ├── __init__.py
│       └── foo.py imports yaml
└── workflows
    ├── __init__.py
    └── foo_type
        ├── __init__.py
        └── foo.py imports ..blocks.foo_type.foo
```


In this example, under the old implementation, `workflows.foo_type.foo` would depend on both `alog` and `yaml` because the `..blocks` portion of the import requires that _all_ of the dependencies of `blocks` be brought into `sys.modules`. This, however, voids the value proposition of finding separable import sets. Under the new implementation, `workflows.foo_type.foo` only depends on `yaml` because it imports `blocks.foo_type.foo` from the deepest point where the only requirement is `yaml`.
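
The difference can be made concrete with a tiny dependency model of the example above (module names come from the tree; the two functions are illustrative, not `import_tracker` APIs):

```python
# Map each internal module to what it imports directly, then compare
# the two counting strategies.
imports = {
    "blocks.bar_type.bar": {"alog"},
    "blocks.foo_type.foo": {"yaml"},
    "workflows.foo_type.foo": {"blocks.foo_type.foo"},
}

def new_style_deps(module):
    """Follow only the exact modules imported (the 3.0.0 behavior)."""
    deps = set()
    for dep in imports.get(module, set()):
        if dep in imports:          # internal module: recurse into it
            deps |= new_style_deps(dep)
        else:                       # external third-party dependency
            deps.add(dep)
    return deps

def old_style_deps(module):
    """Importing anything under a package pulls in the whole package
    subtree (the pre-3.0.0 sys.modules-diff behavior)."""
    deps = set()
    for dep in imports.get(module, set()):
        if dep in imports:
            package = dep.split(".")[0]
            for mod in imports:     # every sibling under the package
                if mod.split(".")[0] == package:
                    deps |= old_style_deps(mod)
        else:
            deps.add(dep)
    return deps

print(sorted(old_style_deps("workflows.foo_type.foo")))  # ['alog', 'yaml']
print(sorted(new_style_deps("workflows.foo_type.foo")))  # ['yaml']
```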

What breaks in the API?

* The `side_effects_modules` argument is gone. This was a hack to work around the fact that some modules, when trapped by a `DeferredModule`, would cause the overall import to fail. With the refactor this is unnecessary: the import proceeds exactly as normal with no interference.
* The output with `track_import_stacks` is different. It no longer attempts to look like a stack trace, but it is more useful: instead of a partially-useful stack trace, it is now a list of lists, where each entry is the stack of module imports that causes the given dependency allocation.
* By default, `import_module` stops looking for imports at the boundary of the target module's parent library. This means that if a third party module transitively imports another third party module, it won't be allocated to the target _unless_ `full_depth=True` is given.
* `LazyModule` is gone! This tool was a bit of a hack anyway and is no longer necessary.

2.2.2

Change Log

* 🐛 Fix how `direct` vs `transitive` is determined for attributes of modules that are defined in sub-modules (53)

2.2.1

Change Log

* Fix the bug where a `LazyModule` was triggered to import by the `import_tracker` main itself (51)
