Nomic

Latest version: v3.0.25

3.0.6

- Allows specifying task type for text embeddings.
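A minimal sketch of what this looks like, assuming the `embed.text` entry point and the `task_type` values documented for the Nomic client (e.g. 'search_document' vs. 'search_query'):

```python
from nomic import embed

# Requires prior authentication (e.g. `nomic login`).
# task_type biases the embedding model toward the intended use.
output = embed.text(
    texts=['Nomic Atlas maps large datasets.'],
    model='nomic-embed-text-v1.5',
    task_type='search_document',
)
print(len(output['embeddings'][0]))  # embedding dimensionality
```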

3.0.5

- Support for using Nomic API keys as your authentication method by running `nomic login <api_key>`
- Faster text embedding inference

3.0.0

New Features
- All identifiers moved to unique, human-readable, URL-valid organization/project slugs. These are auto-created for you on dataset creation and are guaranteed to be unique across your Atlas organization:
```python
from nomic import AtlasDataset

dataset = AtlasDataset('my-organization/my-dataset')
print(dataset.maps[0])
```

- AtlasProject renamed to AtlasDataset
- Makes supplying an index_name optional (it defaults to None)
- map_data provides a single, unified interface for indexing datasets.

Deprecations:
- Deprecates map_embeddings and map_text in favor of a single map_data
- Deprecates passing an iterable to map_text; map_data will not support this iterable workflow. Use AtlasDataset and .add_data instead (see the sketch after this list).
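A rough sketch of the replacement workflow, assuming the v3 surface (AtlasDataset with a unique_id_field, plus the add_data and create_index methods documented for the client):

```python
from nomic import AtlasDataset

# Open (or create) a dataset keyed by an explicit ID field.
dataset = AtlasDataset('my-organization/my-dataset', unique_id_field='id')

# Stream batches with add_data instead of handing map_text an iterable.
for batch in [
    [{'id': '0', 'text': 'first document'}],
    [{'id': '1', 'text': 'second document'}],
]:
    dataset.add_data(data=batch)

# Build the map once all batches are uploaded.
dataset.create_index(indexed_field='text')
```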

Transitioning from 2.x to 3.x
- Rename all map_text and map_embeddings calls to map_data
- Replace any use of AtlasProject with AtlasDataset
- See examples in the python client examples folder for details.
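For example, a 2.x call and its 3.x equivalent might look like this (a sketch; the keyword names for map_data follow the current client docs and should be checked against your installed version):

```python
from nomic import atlas

docs = [{'id': str(i), 'text': f'document {i}'} for i in range(100)]

# 2.x (deprecated):
# project = atlas.map_text(data=docs, indexed_field='text')

# 3.x: one entry point for text, embeddings, or both.
dataset = atlas.map_data(data=docs, indexed_field='text')
```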

1.1.14

- Visual map state can be accessed by downloading Arrow files. See https://docs.nomic.ai/group_by_topics.html
- Duplicate detection results can be accessed (see the sketch below).
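A minimal sketch of accessing duplicate-detection output; the `duplicates` accessor and its `df` attribute are assumptions based on the 1.x client docs:

```python
from nomic import AtlasProject

# Load an existing project and inspect its first map's
# duplicate-detection results as a dataframe.
project = AtlasProject(name='my-project')
projection = project.maps[0]
print(projection.duplicates.df.head())
```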

1.1.0

**Atlas now relies on the Apache Arrow standard for data validation and integrity**
For users this means:
- Pandas dataframes and Arrow tables can be passed in during upload, and Atlas will automatically coerce data types.
- Atlas will fail less often due to data formatting and typing issues, and will provide more informative error messages when inputs are malformed.
- Atlas will be snappier to use thanks to the resulting improvements in over-the-wire latency.

**Technical Details**

Atlas stores and transfers data using a subset of the [Apache Arrow](https://arrow.apache.org) standard.

`pyarrow` is used to convert python, pandas, and numpy data types to Arrow types;
you can also pass any Arrow table (created by polars, duckdb, pyarrow, etc.) directly to Atlas
and the types will be automatically converted.
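For instance, a pyarrow table can be uploaded directly (a sketch; it assumes the current map_data entry point accepts an Arrow table for data, as add_data does):

```python
import pyarrow as pa
from nomic import atlas

# An Arrow table from any producer (pyarrow, polars, duckdb, ...).
table = pa.table({
    'id': ['0', '1'],
    'text': ['first document', 'second document'],
    'score': pa.array([0.5, 0.9], type=pa.float64()),  # coerced to float32
})

dataset = atlas.map_data(data=table, id_field='id', indexed_field='text')
```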

Before being uploaded, all data is converted with the following rules:

* Strings are converted to Arrow strings and stored as UTF-8.
* Integers are converted to 32-bit integers. (If you have larger integers, they are probably either IDs, in which case you should convert them to strings, or a field you want to perform analysis on, in which case you should convert them to floats.)
* Floats are converted to 32-bit (single-precision) floats.
* Embeddings, regardless of precision, are uploaded as 16-bit (half-precision) floats, and stored in Arrow as FixedSizeList.
* All dates and datetimes are converted to Arrow timestamps with millisecond precision and no time zone.
(If you have a use case that requires timezone information or micro/nanosecond precision, please let us know.)
* Categorical types (called 'dictionary' in Arrow) are supported, but values stored as categorical must be strings.
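Spelled out with pyarrow, the target types look roughly like this (the field names and embedding width are illustrative only):

```python
import pyarrow as pa

# Arrow types corresponding to the conversion rules above.
schema = pa.schema([
    ('title', pa.string()),                                # UTF-8 strings
    ('count', pa.int32()),                                 # 32-bit integers
    ('score', pa.float32()),                               # single-precision floats
    ('embedding', pa.list_(pa.float16(), 768)),            # FixedSizeList of float16
    ('created', pa.timestamp('ms')),                       # ms precision, no time zone
    ('category', pa.dictionary(pa.int32(), pa.string())),  # string-valued categoricals
])
print(schema)
```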

Other data types (including booleans, binary, lists, and structs) are not supported.

All fields besides embeddings and the user-specified ID field are nullable.
