Typedspark

Latest version: v1.5.2


1.0.3

Support nested datatypes for Schema.get_schema_definition_as_string()

1.0.2

Extends `create_partially_filled_dataset()` such that you can now also define it in a row-wise fashion:

```python
create_partially_filled_dataset(
    spark,
    Person,
    [
        {Person.name: "Alice", Person.age: 20},
        {Person.name: "Bob", Person.age: 30},
        {Person.name: "Charlie", Person.age: 40},
        {Person.name: "Dave", Person.age: 50},
        {Person.name: "Eve", Person.age: 60},
        {Person.name: "Frank", Person.age: 70},
        {Person.name: "Grace", Person.age: 80},
    ],
)
```

1.0.1

Roll back the type annotations for `DataSet.union()` and `DataSet.intersect()`. Since the columns in a `DataSet` can come in any order, we cannot guarantee, for example, that `DataSet[T].union(df: DataSet[T]) -> DataSet[T]` holds. Use `DataSet.unionByName()` instead.

1.0.0

The first official release!

0.0.4

Remove the hard dependency on `pyspark`. Instead, make it a dev dependency and give people the option to install it as an optional dependency.

By default, `pip install typedspark` will not install `pyspark`, since many platforms (e.g. Databricks) come with `pyspark` preinstalled. If you want to install `typedspark` together with `pyspark`, run `pip install typedspark[pyspark]`.

0.0.3
