Skeletonkey

Latest version: v0.3.1.0

Release Notes

0.3.1.0

1. **Enhanced Partial Instantiation for Fixed Input/Output Sizes**

We have introduced a new feature that allows users to partially instantiate classes. This is particularly useful for models whose input and output sizes are determined by the dataset configuration.

Example:
```python
import skeletonkey as sk

@sk.unlock("configs/config.yaml")
def main(config):
    # Partially instantiate the model; the size arguments are supplied later.
    model = sk.partial_instantiate(config.model)
    print(type(model))  # <class 'functools.partial'>

    ds = sk.instantiate(config.dataset)
    insize, outsize = ds.get_in_out_size()

    # Complete the instantiation once the dataset-dependent sizes are known.
    model = model(insize=insize, outsize=outsize)
    print(type(model))  # <class '__main__.MLP'>
```


2. **Multiple Configurations Support with Nested Unlocks**

The `unlock` function now supports multiple unlocks with different configurations within a single program. Users can also nest `unlock` decorators, combining multiple configurations into one config object (see the sketch after the list below). Each `Config` object will contain only the keys from the specified YAML file, and these keys can be overridden from the command line as usual.

Key Changes:
- Multiple functions with different `unlock` calls, each pointing to different configurations, are now supported.
- Unparsed keys specified from the command line will be stored as a private variable in the config object.
- A new `to_dict` method ensures private attributes are excluded during conversion.
- A warning is issued if a config is specified from the command line when multiple unlocks are present, indicating that all configs have been overwritten.
- Nested `unlock` decorators are now supported, where keys from the last decorator applied overwrite any matching keys from previous unlocks.
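
A minimal sketch of nested unlocks, assuming the decorator-based API from the example above; the YAML paths are placeholders, and the merge behavior described in the comments is taken from the notes:

```python
import skeletonkey as sk

# Sketch only: assumes the combined config is injected when main() is called,
# as in the partial-instantiation example above. The YAML paths are placeholders.
@sk.unlock("configs/model.yaml")
@sk.unlock("configs/dataset.yaml")
def main(config):
    # Keys from both files are merged into one config object; per the notes,
    # keys from the last decorator applied overwrite matching earlier keys.
    print(config.to_dict())  # to_dict excludes private attributes


if __name__ == "__main__":
    main()
```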

3. **Skeleton Keywords Overwriting and Prefixing for Nested Configs**

Added options to overwrite skeleton keywords from `unlock` and the ability to prepend a prefix to create nested configs. This enhancement allows for easier use of nested unlocks and removes the positional use of profiles, which caused issues with unparsed arguments.
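
A hypothetical sketch of how these options might be used; the keyword-argument names `keyword` and `prefix` are placeholders, not names taken from the release notes:

```python
import skeletonkey as sk

# Placeholder keyword arguments; consult the package documentation for the
# actual option names exposed by unlock.
@sk.unlock("configs/train.yaml", keyword="train_profiles", prefix="train")
def main(config):
    # With a prefix, keys from train.yaml would be nested under config.train
    print(config.train)
```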

0.3.0.1

Fixed a bug where the code did not check whether any profile specifiers were provided before attempting to access the list.

0.3.0.0

Profiles would absorb and expand upon the functionality of defaults. There would also be the option during `unlock` to override the default name for profiles, potentially making it `defaults` again.

In the most basic case, profiles can work exactly how defaults currently work:
```yaml
profiles:
  - /path/to/config1
  - /path/to/config2
```

In this case, all of the values from the specified sub-configs are brought into the highest level of the main config just as they are with defaults.

---

The first additional behavior that comes with this construct is the ability to specify multiple named profiles.
```yaml
profiles:
  profile1:
    - /path/to/config1
    - /path/to/config2
  profile2:
    - /path/to/config3
    - /path/to/config4
```

In essence, this is just two different sets of defaults that can be switched between via the command line. The syntax for specifying a profile on the command line would look something like `python main.py --profiles profile1` or potentially `python main.py profile1`.

Additionally, the user will be able to specify configs on the command line to add to the set used. That looks something like `python main.py --profiles profile1 "/path/to/config4"`, in which case the values from the sub-config, config4, would be brought in on top of all of the sub-configs brought in from profile1.

---

Additionally, the user can optionally name the configs brought in from a given profile to enable easier overriding from the command line. Here is an example of what that may look like in an ML pipeline.
```yaml
profiles:
  train:
    model: train/model.yaml
    dataset: train/dataset.yaml
  debug:
    model: debug/model.yaml
    dataset: debug/dataset.yaml
```

In this case, the user specifies the mode to use while also bringing in individual components from the other mode, e.g. `python main.py --profiles train debug.dataset`, which would select the train profile but also bring in the debug dataset sub-config.

It is important to note that naming sub-configs does not stop a sub-config from being brought in by path.

---

It would also be possible to have lists of sub-configs as a named set in a profile. For example, one could place several config paths within `train.model` as a list, which would indicate that each of those sub-configs should be brought in when `train.model` is brought in.
Additionally, sub-profiles allow more fine-grained control over combining aspects of profiles. Consider a case where the model key has two sub-keys, `backbone` and `head`. Each of these may have one or more sub-config paths specified. Whenever a profile is specified, all sub-configs from every sub-profile (and sub-sub-profile, etc.) are also brought in.
Here is an example that exhibits both of these potential behaviors:
```yaml
profiles:
  train:
    model:
      head: train/head.yaml
      backbone: train/backbone.yaml
    datasets:
      - train/dataset1.yaml
      - train/dataset2.yaml
  debug:
    model:
      head: debug/head.yaml
      backbone: debug/backbone.yaml
    datasets:
      - debug/dataset1.yaml
      - debug/dataset2.yaml
```


A command for such a set of profiles looks like `python main.py --profiles train debug.model.head`, which indicates that the training pipeline should be run, but that the model head should be the debug version.

0.2.13

Fixed issue with typing versioning.

0.2.12

---
A new method has been introduced in the Config class that enables users to update the keys of a configuration object at runtime. Users can pass a dictionary or another configuration object, using dot notation, to update the values of an existing Config object. This is particularly useful with frameworks like Weights and Biases, where the configuration may need to be adjusted dynamically during experiments or model training.
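
A minimal sketch of this usage, assuming a method named `update` (the notes do not give the method's name) and hypothetical keys:

```python
import skeletonkey as sk

@sk.unlock("configs/config.yaml")
def main(config):
    # Hypothetical method name and keys: push values chosen at runtime
    # (e.g. by a wandb sweep) back into the config using dot-notation keys.
    config.update({"model.hidden_size": 256, "optimizer.lr": 3e-4})
```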

Additionally, a problem was resolved that previously prevented users from changing the data type of values associated with keys in a configuration directly from the command line.
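
For example, assuming a `--key value` style override (the exact command-line syntax is not shown in these notes), a key stored in the YAML as an integer, such as `dropout: 0`, could now be overridden with a float via something like `python main.py --dropout 0.5`.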

0.2.11

Features
- **Config Namespace Instantiation**: Introduced a new `instantiate` method in the Config namespace object, enabling users to instantiate objects in several intuitive ways:
  - Utilizing the SkeletonKey framework directly:
    ```python
    model = skeletonkey.instantiate(args.model, arg1=arg1, arg2=arg2, arg3=arg3)
    ```
  - Via the model's instantiation method:
    ```python
    model = args.model.instantiate(arg1=arg1, arg2=arg2, arg3=arg3)
    ```
  - Through direct Config object calls:
    ```python
    model = args.model(arg1=arg1, arg2=arg2, arg3=arg3)
    ```

This addition aims to simplify and diversify the object creation process within the framework.

Enhancements
- **Project File Structure Rework**: The entire project’s file structure has been reorganized to support the new instantiation methods and to avoid issues with circular imports, enhancing the code’s modularity and maintainability.
- **Fetch Functionality**: Added new `fetch` functionality along with an example case that serves as a basic test, demonstrating how remote resources or configurations can be integrated seamlessly into the project.

Changes
- **Renaming of `import_class` to `import_target`**: To better reflect its expanded role in importing both classes and other targets required by the `instantiate` and `fetch` methods, `import_class` has been renamed to `import_target`.
- **Command Line Config Overrides**: The command line interface has been enhanced to allow users to directly override the main configuration, providing greater flexibility in configuration management during runtime.
- **Type Annotation Adjustments**: The explicit type annotations for the Config object in the `instantiate` method have been removed to accommodate the restructured instantiation approach. A shared Config interface is suggested as a potential solution for maintaining type checking and annotation integrity.
