**Breaking change:** `load_schema()` now accepts an io object, a path object, or a toml
payload. A plain file name is no longer accepted.
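For example (a minimal sketch; the import path for `load_schema` is not shown here, and the exact call forms are assumptions based on the note above):

```python
from io import StringIO
from pathlib import Path

# from <your package> import load_schema  # hypothetical import location

toml_payload = """
[person]
natural_key = ["name"]

[person.columns]
name = "varchar"
"""

# Any of these forms should now be accepted; a bare "schema.toml" string is not:
# load_schema(StringIO(toml_payload))   # an open, file-like io object
# load_schema(Path("schema.toml"))      # a pathlib.Path
# load_schema(toml_payload)             # the toml payload itself
```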
**New feature:** Add one-to-many support in select queries: the Table constructor now
accepts a `one2many` parameter that can be used like this:
```python
person_table = Table(
    "person",
    columns={
        "name": "varchar",
        "parent": "bigint",
    },
    foreign_keys={
        "parent": "person",
    },
    natural_key=["name"],
    one2many={
        "skills": "skill.person",
    },
)
```
or via toml:
```toml
[person]
natural_key = ["name"]

[person.columns]
name = "varchar"
parent = "bigint"
birthdate = "date"

[person.one2many]
skills = "skill.person"
```
This also means that there must be a table "skill" with a foreign key
column "person" that references the person table.
With such definitions, a select query can use it like this:
```python
person.select(
    "name",
    "skills.name"
).stm()
```
Which gives:
```sql
SELECT
  "person"."name", "skills_0"."name"
FROM "person"
LEFT JOIN "skill" as skills_0 ON (
  skills_0."person" = "person"."id"
);
```
**New feature:** `Table.upsert` now returns ids of inserted or updated
rows (no id is returned when a row is left untouched).
```python
>>> person.delete()  # start from an empty table
>>> upsert = person.upsert("name")
>>> records = [("Doe",)]
>>> upsert.executemany(records)
[1]
>>> upsert.executemany(records)
[]
```
**New feature:** `Table.upsert` can now be used without specifying the statement's
columns; it defaults to the table columns (like `Table.select`).
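As a sketch, with the `person` table defined above (assuming the defaulted column list follows the table definition, i.e. `name` and `parent`):

```python
upsert = person.upsert()                  # no columns given
# ...which should behave like the explicit form:
upsert = person.upsert("name", "parent")
```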
**New feature:** New `from_pandas` method on `Upsert`, which allows a dataframe to be
passed in directly for writing:
```python
>>> from pandas import DataFrame
>>> df = DataFrame({"city": [...], "timestamp": [...], "value": [...]})
>>> upsert = temperature.upsert("city", "timestamp", "value")
>>> upsert.from_pandas(df)
```
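For instance, with made-up values in place of the ellipses (the data here is purely illustrative):

```python
>>> df = DataFrame({
...     "city": ["Brussels", "Louvain"],
...     "timestamp": ["2023-01-01", "2023-01-02"],
...     "value": [3.5, 4.0],
... })
>>> temperature.upsert("city", "timestamp", "value").from_pandas(df)
```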