Nagra

Latest version: v0.3


0.3

**New feature:** DB introspection: Nagra can now introspect
existing databases and infer their schema:

```python
from nagra import Transaction, Schema

with Transaction('sqlite://examples/weather/weather.db'):
    schema = Schema.from_db()
    print(list(schema.tables))
    # -> ['city', 'weather']

    city = schema.get('city')
    print(list(city.columns))
    # -> ['id', 'name']
```



**New feature:** Temporary suspension of foreign key constraints. The
`Schema.suspend_fk` context manager drops foreign key constraints and
re-adds them at the end of the block. This makes it possible to load more
complex datasets with cross-references.
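
A minimal sketch of how this could be used, assuming `suspend_fk` is called on a
`Schema` instance inside an open transaction (the exact signature is not shown
in the changelog):

```python
from nagra import Transaction, Schema

# `rows` stands in for a dataset whose records reference each other.
rows = []

with Transaction('sqlite://examples/weather/weather.db'):
    schema = Schema.from_db()
    # Assumption: suspend_fk is used as a context manager on the schema instance.
    with schema.suspend_fk():
        # Foreign key constraints are dropped inside this block, so the rows
        # can be loaded in any order.
        schema.get('weather').upsert().executemany(rows)
    # The constraints are re-added when the block exits.
```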

**New feature:** The method `Schema.setup_statement` can be used to
generate the simple migration statements without executing them.
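
A sketch of how this might look; whether `setup_statement` returns a single
string or an iterable of statements, and whether it takes arguments, are
assumptions not covered by the changelog:

```python
from nagra import Transaction, Schema

with Transaction('sqlite://examples/weather/weather.db'):
    schema = Schema.from_db()
    # Assumption: setup_statement yields the DDL/migration statements instead
    # of executing them.
    for statement in schema.setup_statement():
        print(statement)
```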

**Fix:** Delete with SQLite when a parameter is passed (issue 15)

**Breaking change:** CockroachDB support removed

**Fix:** Support for the `-` operator when only one operand is given

**Fix:** `Select.orderby` when multiple expressions are given

**Fix:** `Upsert.executemany` when no data is given (empty list)

0.2.0

**Breaking change:** Rename `Schema.load` to `Schema.load_toml`

**New feature:** CLI: Add CSV export on select, with the `--csv` flag

**Fix:** Add proper quotes around column names on PostgreSQL upsert

**New feature:** Add array support, so one can now declare a table like:

```python
from nagra import Table

parameter_table = Table(
    "parameter",
    columns={
        "name": "str",
        "timestamps": "timestamp []",
        "values": "float []",
    },
    natural_key=["name"],
)
upsert = parameter_table.upsert()
records = [
    ("one", ["2024-08-03T10:42", "2024-08-03T10:43"], [2, 3]),
    ("two", ["2024-08-03T10:44", "2024-08-03T10:45"], [4, 5]),
    ("three", ["2024-08-03T10:46", "2024-08-03T10:47"], [6, 7]),
]
upsert.executemany(records)
```


**Fix:** CLI: `where` condition on delete when used from the CLI.

**New feature:** Add query argument support in `to_dict`:

```python
from nagra import Schema

temperature = Schema.get("temperature")
records = temperature.select().where("(= value {})").to_dict(12)
```


**New feature:** Type `timestamptz` is now accepted for SQLite.
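
For illustration, a table declaration using the type; the `event` table and its
columns are made up for this sketch:

```python
from nagra import Table

event_table = Table(
    "event",
    columns={
        "name": "str",
        "at": "timestamptz",  # now accepted by the SQLite backend as well
    },
    natural_key=["name"],
)
```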

0.1.2

**New feature:** Create a new cursor for each execute; this allows, for
example, iterating on a select while updating the db record by record.
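
A sketch of the record-by-record pattern this enables; it assumes
`Select.to_dict()` yields dict rows and that `Upsert` has a single-record
`execute` alongside `executemany` (the latter is an assumption, only
`executemany` appears in this changelog):

```python
from nagra import Schema, Transaction

with Transaction('sqlite://app.db'):  # hypothetical database
    temperature = Schema.get("temperature")
    upsert = temperature.upsert("city", "timestamp", "value")
    # Each execute now gets its own cursor, so the writes below do not
    # invalidate the ongoing iteration over the select results.
    for row in temperature.select("city", "timestamp", "value").to_dict():
        upsert.execute(row["city"], row["timestamp"], row["value"] + 1)
```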

**New feature:** Add `Update` class and basic tests; this is mainly
useful when updating by id, when the full natural key is not known (or
too long)

**Fix:** `limit` and `offset` values were lost on `Select.clone`

**Fix:** Auto-convert pandas columns to string for non-basic dtypes

**Fix:** Ensure schema loading is correct

**New feature:** Add support for custom primary key

**New feature:** Add db introspection:
- Discovery of columns and types
- Discovery of primary keys and unique indexes

0.1.0

**Breaking change:** `Select.to_dict` now returns an iterable
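
Where calling code still expects a concrete list, the iterable can simply be
materialized; a minimal sketch using the `temperature` table from the examples
above:

```python
from nagra import Schema

temperature = Schema.get("temperature")
# to_dict is now lazy; wrap it in list() where a list is still needed.
records = list(temperature.select().to_dict())
```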

**New feature:** `Select.to_pandas` supports a `chunked` parameter: if
set to a non-zero value, the method will return an iterable yielding
dataframes instead of returning a unique (possibly large) dataframe
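
A sketch, assuming `chunked` is the number of rows per yielded dataframe (the
changelog does not spell out the unit):

```python
from nagra import Schema

temperature = Schema.get("temperature")
# Iterate over dataframes of (assumed) at most 10_000 rows each instead of
# loading one large dataframe in memory.
for df in temperature.select("city", "timestamp", "value").to_pandas(chunked=10_000):
    print(len(df))
```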

**New feature:** Add `table.drop()`

**New feature:** Check for inconsistencies on ill-defined tables

**Breaking change:** Foreign keys are now defined with an `ON DELETE
CASCADE` pragma if the supporting column is required

**New feature:** Add support for UUID type

**New feature:** Support for the concat `||` operator
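
For illustration, the operator inside a select expression, following the
s-expression style used elsewhere in this changelog; the exact expression
syntax is an assumption:

```python
from nagra import Schema

person = Schema.get("person")
# Concatenate the name column with itself; purely illustrative, and the
# s-expression form of the operator is an assumption.
rows = list(person.select("(|| name name)").to_dict())
```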

**New feature:** Add support for default values on table columns

0.0.4

**Breaking change:** `load_schema()` has been replaced by `Schema.load`
(and `Schema.from_toml`).

**New feature:** Transaction can now be used without a context
manager; this provides more flexibility in multi-threaded apps.
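
A sketch of what usage without a `with` block could look like; the
`commit`/`rollback` method names are assumptions, the changelog only states
that the context manager is no longer required:

```python
from nagra import Transaction

trn = Transaction('sqlite://app.db')  # hypothetical database
try:
    # ... run selects / upserts against this transaction ...
    trn.commit()    # assumed method name
except Exception:
    trn.rollback()  # assumed method name
    raise
```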

**New feature:** CockroachDB support

**New feature:** Add new columns on existing tables on schema
creation. Previously, any existing table was kept as-is.

**New feature:** Auto validation of rows when a condition is given to
an upsert

**New feature:** New method `to_pandas` on Select.

**New feature:** Add support for one-to-many through table aliases

**New feature:** Upsert now returns the id of the rows created or
updated

0.0.3

**Breaking change:** `load_schema()` now accepts an io object, a path object, or a toml
payload. A simple file name is not accepted anymore

**New feature:** Add one-to-many support in select queries: the Table constructor now
accepts a `one2many` parameter that can be used like this:

```python
from nagra import Table

person_table = Table(
    "person",
    columns={
        "name": "varchar",
        "parent": "bigint",
    },
    foreign_keys={
        "parent": "person",
    },
    natural_key=["name"],
    one2many={
        "skills": "skill.person",
    },
)
```


or via toml:

```toml
[person]
natural_key = ["name"]

[person.columns]
name = "varchar"
parent = "bigint"
birthdate = "date"

[person.one2many]
skills = "skill.person"
```


This also means that there must be a table "skill" with a foreign key
column "person" that references the person table.

With such definitions, a select can use it like:

```python
person.select(
    "name",
    "skills.name",
).stm()
```


Which gives:

```sql
SELECT
  "person"."name", "skills_0"."name"
FROM "person"
LEFT JOIN "skill" as skills_0 ON (
  skills_0."person" = "person"."id"
);
```


**New feature:** `Table.upsert` now returns ids of inserted or updated
rows (no id is returned when a row is left untouched).

```python
>>> person.delete()  # start from an empty table
>>> upsert = person.upsert("name")
>>> records = [("Doe",)]
>>> upsert.executemany(records)
[1]
>>> upsert.executemany(records)
[]
```


**New feature:** `Table.upsert` can now be used without giving the
columns of the statement. It defaults to the table columns (like
`Table.select`).

**New feature:** New method `from_pandas` on Upsert; this allows a
dataframe to be passed directly to be written:

```python
>>> from pandas import DataFrame
>>> df = DataFrame({"city": [...], "timestamp": [...], "value": [...]})
>>> upsert = temperature.upsert("city", "timestamp", "value")
>>> upsert.from_pandas(df)
```
