Reminders
Please be aware of a couple of things **coming down the pipe soon...**
* This is (hopefully) the penultimate release of flytekit before the 1.0 release. We have a couple of major changes pending, so we plan to release v0.32.0 within the next few days as well. (A complete overhaul of flytekit's configuration system is in the works, along with a brand-new union type.)
* As of v0.32.0, we will flip the Structured Dataset [flag](https://github.com/flyteorg/flytekit/blob/457f323c0052df822c962186f44a6989a418d23d/flytekit/configuration/sdk.py#L34) to true by default. Please ensure that your backend is on Propeller v0.16.14 and Admin v0.6.78 or later (i.e. Flyte release v0.19.1 or later); otherwise you will get compiler errors if you mix `FlyteSchema` tasks with `pd.DataFrame` tasks. (But also, *you should migrate* away from `FlyteSchema` to the `StructuredDataset` type.)
* We are still planning on releasing flytekit 1.0.0 in mid-April (along with all other Flyte components).
---
Main changes
Core
* `StructuredDataset` (and `FlyteSchema`) literals can now be cached based on a hash of the dataframe's contents rather than its storage location. This is helpful when a non-cached task produces a dataframe that a cached task consumes: if the upstream task produces the same data, the downstream task sees a cache hit. This also applies to dynamic tasks; see the [issue](https://github.com/flyteorg/flyte/issues/1581) for more information.
* To turn on this feature, annotate the dataframe type with `typing.Annotated` and a hash function, and make sure you're on Propeller v0.16.31 or later (Flyte release v0.19.3). Documentation is forthcoming; a sketch follows this list.
* New [AWS Batch Task](https://docs.flyte.org/projects/cookbook/en/latest/auto/integrations/aws/batch/batch.html#sphx-glr-auto-integrations-aws-batch-batch-py) type. This allows users to run a task on AWS Batch instead of on a K8s cluster. A brief example also appears after this list.
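A minimal sketch of the content-based caching feature, assuming `HashMethod` is importable from `flytekit`; the pandas hash function is illustrative, and any deterministic hash of the dataframe contents should do:

```python
from typing import Annotated

import pandas as pd
from flytekit import HashMethod, task, workflow


def hash_pandas_dataframe(df: pd.DataFrame) -> str:
    # Any deterministic hash of the dataframe's contents works here.
    return str(pd.util.hash_pandas_object(df).sum())


@task  # not cached
def produce_data() -> Annotated[pd.DataFrame, HashMethod(hash_pandas_dataframe)]:
    return pd.DataFrame({"a": [1, 2, 3]})


@task(cache=True, cache_version="1.0")
def consume_data(df: pd.DataFrame) -> int:
    # Cache hits are keyed on the dataframe hash rather than its storage location,
    # so rerunning produce_data with identical contents yields a hit here.
    return int(df["a"].sum())


@workflow
def wf() -> int:
    return consume_data(df=produce_data())
```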
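And a rough sketch of the AWS Batch task type, assuming the `flytekitplugins-awsbatch` plugin is installed; the config field shown is an assumption, so consult the linked cookbook example for the options the plugin actually supports:

```python
from flytekit import task, workflow
from flytekitplugins.awsbatch import AWSBatchConfig

# Assumed config field; see the cookbook example for supported options.
config = AWSBatchConfig(platformCapabilities="EC2")


@task(task_config=config)
def double(x: int) -> int:
    # Runs as a single-container AWS Batch job instead of a K8s pod.
    return x * 2


@workflow
def wf(x: int) -> int:
    return double(x=x)
```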
FlyteRemote