Mage-ai

Latest version: v0.9.76


0.8.29

Not secure
![Untitled](https://media.giphy.com/media/KRf4fgn5PPgqGEoiHB/giphy.gif)

Commit, push, and pull code changes between Mage and a GitHub/GitLab repository

Mage now supports GitHub/GitLab integration via the UI. You can perform the following actions from the UI:

- Create new branch
- Commit & push
- Pull
- Hard reset

Doc on setting up integration: [https://docs.mage.ai/production/data-sync/git](https://docs.mage.ai/production/data-sync/git)

![Untitled](https://media.graphassets.com/sx09Cel0R3ev6PktcohZ)

![Untitled](https://media.graphassets.com/zHVOt3ieQvuPmzvrGYnP)

Deploy Mage using AWS CodeCommit and AWS CodeDeploy

Added [terraform templates](https://github.com/mage-ai/mage-ai-terraform-templates/tree/master/aws-code-pipeline) for deploying Mage to ECS from a CodeCommit repo with AWS CodePipeline. The templates create two separate CodePipelines: one builds a Docker image from the CodeCommit repository and pushes it to ECR, and the other reads from ECR and deploys to ECS.

Docs on using the terraform templates: [https://docs.mage.ai/production/deploying-to-cloud/aws/code-pipeline](https://docs.mage.ai/production/deploying-to-cloud/aws/code-pipeline)

Use ECS task roles for AWS authentication

When you run Mage on AWS, you can use an ECS task role to authenticate with AWS services instead of hardcoded API keys.

Doc: [https://docs.mage.ai/production/deploying-to-cloud/aws/setup#authentication-with-ecs-task-execution-role](https://docs.mage.ai/production/deploying-to-cloud/aws/setup#authentication-with-ecs-task-execution-role)

**Opening http://localhost:6789/ automatically**

Shout out to [Bruno Gonzalez](https://github.com/bruno-uy) for contributing support for automatically opening Mage in a browser tab when running the `mage start` command on your laptop.

Github issue: https://github.com/mage-ai/mage-ai/issues/2233

**Update notebook error display**

When a block execution in the notebook fails, Mage now shows the stack trace of the error without the custom code wrapper, which was just noise.

Before:

![Untitled](https://media.graphassets.com/C44Gd8XwQgJjsVq40mIr)

After:

![Untitled](https://media.graphassets.com/E8zXtZQ8Q6xe5b75YQr1)

Data integration pipeline improvements

MySQL

- Add Mage's automatically created columns to the destination table if the table already exists in MySQL.
- Don't lowercase column names for the MySQL destination.

Commercetools

- Add inventory stream for Commercetools source.

Outreach

- Fix the Outreach source rate limit issue.

Snowflake

- Fix Snowflake destination column comparison when altering table. Use uppercase for column names if `disable_double_quotes` is True.
- Escape single quotes when converting array values.

Streaming pipeline improvements

- Truncate print messages in execution output to prevent freezing the browser.
- Disable keyboard shortcuts in streaming pipelines to run blocks.
- Add an async handler to the streaming source base class. You can set `consume_method = SourceConsumeMethod.READ_ASYNC` in your streaming source class, and Mage will use its `read_async` method; see the sketch below.
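
A minimal sketch of what this can look like. The `BaseSource` import path and the handler signature are assumptions; only `consume_method = SourceConsumeMethod.READ_ASYNC` and the `read_async` method name come from this release:

```python
# Hedged sketch of a custom streaming source using the new async consume method.
# The import path, BaseSource name, and handler signature are assumptions.
import asyncio

from mage_ai.streaming.sources.base import BaseSource, SourceConsumeMethod  # assumed path


class MyAsyncSource(BaseSource):
    consume_method = SourceConsumeMethod.READ_ASYNC

    async def read_async(self, handler):
        # Poll an async system and hand batches to Mage's handler.
        while True:
            records = await self.poll_records()
            if records:
                handler(records)

    async def poll_records(self):
        await asyncio.sleep(1)  # hypothetical helper simulating an async client
        return [{'event': 'ping'}]
```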

Pass event variable to kwargs for event trigger pipelines

Mage supports triggering pipelines on AWS events. Now you can access the raw event data in a block method via `kwargs['event']`. This enhancement lets you customize your pipelines based on the event trigger and handle the event data as needed within your pipeline code.
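
For example, a data loader block could pick fields out of the triggering event like this (a minimal sketch; the S3-style event shape below is hypothetical and depends on your trigger):

```python
# Hedged sketch: reading the raw AWS event inside a data loader block.
if 'data_loader' not in globals():
    from mage_ai.data_preparation.decorators import data_loader


@data_loader
def load_data(*args, **kwargs):
    event = kwargs.get('event', {})
    # Pull out whatever fields your trigger's event actually carries;
    # this shape mimics a hypothetical S3-style payload.
    bucket = event.get('detail', {}).get('bucket', {}).get('name')
    key = event.get('detail', {}).get('object', {}).get('key')
    return [{'bucket': bucket, 'object_key': key}]
```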

Other bug fixes & polish

- Fix “Circular import” error of using the `secret_var` in repo's metadata.yaml
- Fix bug: Tooltip at very right of Pipeline Runs or Block Runs graphs added a horizontal overflow.
- Fix bug: If an upstream dependency was added on the Pipeline Editor page, stale upstream connections would be updated for a block when executing a block via keyboard shortcut (i.e. cmd/ctrl+enter) inside the block code editor.
- Cast column types (int, float) when reading block output Dataframe.
- Fix block run status caching issue in UI. Mage UI sometimes fetched stale block run statuses from backend, which is misleading. Now, the UI always fetches the latest block run status without cache.
- Fix timezone mismatch issue for pipeline schedule execution date comparison so that there’re no duplicate pipeline runs created.
- Fix bug: Sidekick horizontal scroll bar not wide enough to fit 21 blocks when zoomed all the way out.
- Fix bug: When adding a block between two blocks, if the first block was a SQL block, it would use the SQL block content to create a block regardless of the block language.
- Fix bug: Logs for pipeline re-runs were not being filtered by timestamp correctly due to execution date of original pipeline run being used for filters.
- Increase canvas size of dependency graph to accommodate more blocks / blocks with long names.

View full [Changelog](https://www.notion.so/mageai/What-s-new-7cc355e38e9c42839d23fdbef2dabd2c)

0.8.27

Not secure
![shazam](https://media.giphy.com/media/FAXgrcmx38hhzL3N2Y/giphy-downsized.gif)

Great Expectations integration

Mage is now integrated with [Great Expectations](https://greatexpectations.io/) to test the data produced by pipeline blocks.

You can use all the [expectations](https://greatexpectations.io/expectations/) easily in your Mage pipeline to ensure your data quality.

![Untitled](https://media.graphassets.com/HgWNFABERMKVecEiHQ04)

![Untitled](https://media.graphassets.com/cqZRbiQNezQDHqTjX0YA)

Follow the [doc](https://docs.mage.ai/development/testing/great-expectations) to add expectations to your pipeline to run tests for your block output.
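
As a rough illustration, a block test could validate the output with the classic Great Expectations Pandas API (a hedged sketch; Mage's built-in integration is configured through the UI, and the `user_id` column name is illustrative):

```python
# Hedged sketch: validating block output with the classic Great Expectations
# Pandas API inside a Mage block test. The 'user_id' column is illustrative.
import great_expectations as ge

if 'test' not in globals():
    from mage_ai.data_preparation.decorators import test


@test
def test_no_null_ids(output, *args) -> None:
    dataset = ge.from_pandas(output)
    result = dataset.expect_column_values_to_not_be_null('user_id')
    assert result.success, 'user_id contains null values'
```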

Pipeline dashboard updates

![Untitled](https://media.graphassets.com/ULCfiwiAR5ikpY0b52qb)

- Added pipeline description.
- Single click on a row no longer opens a pipeline. In order to open a pipeline now, users can double-click a row, click on the pipeline name, or click on the open folder icon at the end of the row.
- Select a pipeline row to perform an action (e.g. clone, delete, rename, or edit description).
    - **Clone pipeline** (icon with 2 overlapping squares) - Cloning the selected pipeline creates a new pipeline with the same configuration and code blocks. The blocks use the same block files as the original pipeline. Pipeline triggers, runs, backfills, and logs are not copied over to the new pipeline.
    - **Delete pipeline** (trash icon) - Deletes the selected pipeline.
    - **Rename pipeline** (item in dropdown menu under ellipsis icon) - Renames the selected pipeline.
    - **Edit description** (item in dropdown menu under ellipsis icon) - Edits the pipeline description. Users can hover over the description in the table to view more of it.
- Users can click on the file icon under the `Actions` column to go directly to the pipeline's logs.
- Added search bar which searches for text in the pipeline `uuid`, `name`, and `description` and filters the pipelines that match.
- The create, update, and delete actions are not accessible by Viewer roles.
- Added badge in Filter button indicating number of filters applied.

![Untitled](https://media.graphassets.com/CSOjtdIxRJawC9xzCFMz)

- Group pipelines by `status` or `type`.

![Untitled](https://media.graphassets.com/r78cTW8HTCsMfYpLj1dL)


SQL block improvements

Toggle SQL block to not create table

Users can write raw SQL blocks that include only the `INSERT` statement; a `CREATE TABLE` statement is no longer required.

Support writing SELECT statements using raw SQL

Users can write `SELECT` statements using raw SQL in SQL blocks now.

Find all supported SQL statements using raw SQL in this [doc](https://docs.mage.ai/guides/sql-blocks#required-sql-statements).

Support for SSH tunnel in multiple blocks

When using an [SSH tunnel](https://docs.mage.ai/integrations/databases/PostgreSQL#ssh-tunneling) to connect to a Postgres database, the tunnel was originally supported in only one block run at a time due to port conflicts. Mage now supports SSH tunneling in multiple blocks by picking an unused local port for each tunnel. This also works in Python blocks that use the `mage_ai.io.postgres` module, as in the sketch below.
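
For reference, a Python block using `mage_ai.io.postgres` follows Mage's standard loader pattern; with SSH tunneling configured under the connection profile in `io_config.yaml`, each such block now gets its own local port. The table name and profile below are illustrative:

```python
from os import path

from mage_ai.data_preparation.repo_manager import get_repo_path
from mage_ai.io.config import ConfigFileLoader
from mage_ai.io.postgres import Postgres

if 'data_loader' not in globals():
    from mage_ai.data_preparation.decorators import data_loader


@data_loader
def load_from_postgres(*args, **kwargs):
    # SSH tunnel settings live under the connection profile in io_config.yaml.
    config_path = path.join(get_repo_path(), 'io_config.yaml')
    with Postgres.with_config(ConfigFileLoader(config_path, 'default')) as loader:
        return loader.load('SELECT * FROM my_table LIMIT 100')  # my_table is illustrative
```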

Data integration pipeline

New source: Pipedrive

Shout out to [Luis Salomão](https://github.com/Luishfs) for his continuous contribution to Mage. The new source [Pipedrive](https://github.com/mage-ai/mage-ai/tree/master/mage_integrations/mage_integrations/sources/pipedrive) is available in Mage now.

Fix BigQuery “query too large” error

Added a check for the size of the query, since it can exceed BigQuery's limit.

New sensor templates

A [Sensor](https://docs.mage.ai/design/blocks/sensor) block continuously evaluates a condition until it's met. Mage now has more sensor templates to check whether data has landed in an S3 bucket or a SQL data warehouse.

Sensor template for checking if a file exists in S3

![Untitled](https://media.graphassets.com/1HDJkmx4QCkX9V8Z3Fcu)
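
The S3 check boils down to something like the following (a hedged sketch using boto3 directly; the bucket and key names are illustrative):

```python
# Hedged sketch of an S3 file sensor; bucket and key names are illustrative.
import boto3
from botocore.exceptions import ClientError

if 'sensor' not in globals():
    from mage_ai.data_preparation.decorators import sensor


@sensor
def check_file_exists(*args, **kwargs) -> bool:
    s3_client = boto3.client('s3')
    try:
        s3_client.head_object(Bucket='my-bucket', Key='path/to/file.csv')
        return True  # condition met: the file exists
    except ClientError:
        return False  # keep evaluating until the file lands
```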

Sensor template for checking the data in SQL data warehouse

![Untitled](https://media.graphassets.com/GrnTOZqSn2wa6lAxQwrj)

Support for Spark in standalone mode (self-hosted)

Mage can connect to a standalone Spark cluster and run PySpark code on it. Set the environment variable `SPARK_MASTER_HOST` in your Mage container or instance; PySpark code in a standard batch pipeline will then work automagically by executing on the remote Spark cluster.

Follow this [doc](https://docs.mage.ai/integrations/spark-pyspark#standalone-spark-cluster) to set up Mage to connect to a standalone Spark cluster.

Mask environment variable values with stars in output

Mage now automatically masks environment variable values with stars in terminal output or block output to prevent showing sensitive data in plaintext.

![Untitled](https://media.graphassets.com/wAltPgI3Qt6ye9ZMFjEh)

Other bug fixes & polish

- Improve streaming pipeline logging
    - Show streaming pipeline error logging
    - Write logs to multiple files
- Provide a working NGINX config to allow Mage WebSocket traffic:

```nginx
location / {
    proxy_pass http://127.0.0.1:6789;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "Upgrade";
    proxy_set_header Host $host;
}
```


- Fix raw SQL quote error.
- Add documentation for developers on adding a new source or sink to streaming pipelines: [https://docs.mage.ai/guides/streaming/contributing](https://docs.mage.ai/guides/streaming/contributing)

View full [Changelog](https://www.notion.so/mageai/What-s-new-7cc355e38e9c42839d23fdbef2dabd2c)

0.8.24

Not secure
Disable editing files or executing code in production environment

You can configure Mage to disallow any edits to pipelines or blocks in the production environment. Users will only be able to create triggers and view the existing pipelines.

![Untitled](https://media.graphassets.com/BDYxkWYhQJ2vhNXJQ6HA)

Doc: [https://docs.mage.ai/production/configuring-production-settings/overview#read-only-access](https://docs.mage.ai/production/configuring-production-settings/overview#read-only-access)

Pipeline and file versioning

- Create a version of a file on every create or update of its content. Display the versions and allow users to restore the current file content to a specific file version. Doc: [https://docs.mage.ai/development/versioning/file-versions](https://docs.mage.ai/development/versioning/file-versions)

![Untitled](https://media.graphassets.com/XxvwT8xQzuzYVGOGGfKB)

- Support pipeline file versioning. Display the historical pipeline versions and allow users to roll back to a previous pipeline version if the pipeline config is broken.

![Untitled](https://media.graphassets.com/BUpHIOXjTVWFtqj5MaQg)

**Support LDAP authentication**

Shout out to [Dhia Eddine Gharsallaoui](https://github.com/dhia-gharsallaoui) for contributing the LDAP authentication method to Mage. When LDAP authentication is enabled, users will need to provide their LDAP credentials to log in to the system. Once authenticated, Mage will use the authorization filter to determine the user's permissions based on their LDAP group membership.

Follow the [guide](https://docs.mage.ai/production/authentication/overview#ldap) to set up LDAP authentication.

**DBT support for SQL Server**

Support running SQL Server DBT models in Mage.

![Untitled](https://media.graphassets.com/LRIwZIczR9mqgftiEJkM)

Tutorial for setting up a DBT project in Mage: [https://docs.mage.ai/tutorials/setup-dbt](https://docs.mage.ai/tutorials/setup-dbt)

Helm deployment

Mage can now be deployed to Kubernetes with Helm: [https://mage-ai.github.io/helm-charts/](https://mage-ai.github.io/helm-charts/)

How to install Mage Helm charts

```bash
helm repo add mageai https://mage-ai.github.io/helm-charts
helm install my-mageai mageai/mageai
```


To customize the mount volume for the Mage container, you'll need to customize `values.yaml`:

- Get the `values.yaml` with the command:

```bash
helm show values mageai/mageai > values.yaml
```


- Edit the `volumes` config in `values.yaml` to mount to your Mage project path

Doc: https://docs.mage.ai/production/deploying-to-cloud/using-helm

Integration with Spark running in the same Kubernetes cluster

When you run Mage and Spark in the same Kubernetes cluster, you can set the environment variable `SPARK_MASTER_HOST` in the Mage container to the URL of the Spark cluster's master node. Then you'll be able to connect Mage to your Spark cluster and execute PySpark code in Mage.

Follow this [guide](https://docs.mage.ai/integrations/spark-pyspark#kubernetes) to use Mage with Spark in a Kubernetes cluster.

Improve Kafka source and sink for streaming pipeline

- Set `api_version` in the [Kafka source](https://docs.mage.ai/guides/streaming/sources/kafka#basic-config) and [Kafka destination](https://docs.mage.ai/guides/streaming/destinations/kafka#basic-config) configs
- Allow passing the raw message value to the transformer so that custom deserialization logic (e.g. custom Protobuf deserialization) can be applied there; see the sketch below
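
For instance, a streaming transformer could apply custom Protobuf deserialization to the raw values (a hedged sketch; `my_schema_pb2.MyMessage` stands in for your generated Protobuf class, and the message shape is an assumption):

```python
# Hedged sketch: custom Protobuf deserialization in a streaming transformer.
from typing import List

from my_schema_pb2 import MyMessage  # hypothetical generated Protobuf module

if 'transformer' not in globals():
    from mage_ai.data_preparation.decorators import transformer


@transformer
def transform(messages: List, *args, **kwargs):
    rows = []
    for raw_value in messages:  # raw bytes, since deserialization is deferred
        msg = MyMessage()
        msg.ParseFromString(raw_value)
        rows.append({'id': msg.id, 'value': msg.value})  # fields are illustrative
    return rows
```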

Data integration pipeline

- Add more streams to Front app source
    - Channels
    - Custom Fields
    - Conversations
    - Events
    - Rules
- Fix Snowflake destination alter table command errors
- Fix MySQL source [bytes decode error](https://github.com/mage-ai/mage-ai/issues/2164)

Pipeline table filtering

Add filtering (by status and type) for pipelines.

![image](https://user-images.githubusercontent.com/80284865/225091205-0fb5e79b-57eb-43e9-9f63-d4501788e6a1.png)

![image](https://user-images.githubusercontent.com/80284865/225091281-a94ed5b9-5f96-4859-82bf-3c381790a067.png)

Renaming a pipeline transfers all the existing triggers, variables, pipeline runs, block runs, etc. to the new pipeline

- When renaming a pipeline, transfer existing triggers, backfills, pipeline runs, and block runs to the new pipeline name.
- Prevent users from renaming pipeline to a name already in use by another pipeline.

Update the Variables tab with more instructions for SQL and R variables

- Update the Variables tab with more instructions for SQL and R variables.

![Untitled](https://media.graphassets.com/l750AVqgSZmsnlc4wiIi)

- Improve SQL/R block upstream block interpolation helper hints.

![Untitled](https://media.graphassets.com/Injnycz0STSnhGBpnUF2)

![Untitled](https://media.graphassets.com/wHRQU2T3RSuvp8dP0nl4)

Other bug fixes & polish

- Update sidekick to have a vertical navigation

![Untitled](https://media.graphassets.com/zzP9wTh7SvWUeifoOXCR)

- Fix `Allow blocks to fail` setting for pipelines with dynamic blocks.
- Git sync: Overwrite the origin URL with the user's `remote_repo_link` if it already exists.
- Resolve DB model refresh issues in the pipeline scheduler.
- Fix bug: Execute pipeline in Pipeline Editor gets stuck at first block.
- Use the upstream dynamic block’s block metadata as the downstream child block’s kwargs.
- Fix using reserved words as column names in the `mage_ai.io` Postgres export method
- Fix error `sqlalchemy.exc.PendingRollbackError: Can't reconnect until invalid transaction is rolled back.` in API middleware
- Hide "custom" add block button in streaming pipelines.
- Fix bug: Paste not working in Firefox browser (https) (Error: "navigator.clipboard.read is not a function").

View full [Changelog](https://www.notion.so/mageai/What-s-new-7cc355e38e9c42839d23fdbef2dabd2c)

0.8.15

Not secure
Allow pipeline to keep running even if other unrelated blocks fail

A Mage pipeline used to stop running if any of its block runs failed. A setting was added to keep the pipeline running even if an unrelated block fails during execution.

Check out the [doc](https://docs.mage.ai/guides/triggering-pipelines#additional-trigger-settings) to learn about the additional settings of a trigger.

![Untitled](https://media.graphassets.com/PbeX0dHzThvq0vQWiTUd)

Sync project with GitHub

If you have your pipeline code stored in a remote repository on GitHub, you can sync your local project with the remote repository through Mage.

Follow the [doc](https://docs.mage.ai/production/data-sync/github#sync-data-with-github) to set up the sync with GitHub.

![Untitled](https://media.graphassets.com/Ilxav8tuTuSV3jh4PGA9)

Data integration pipeline

Edit bookmark property values for data integration pipeline from the UI

Users can edit bookmark property values from the UI; these values are used as the bookmark for the next sync. The bookmark values automatically update to the last record synced after the next sync completes. Check out the [doc](https://docs.mage.ai/guides/data-integration-pipeline#editing-bookmark-property-values) to learn how to edit bookmark property values.

![Untitled](https://media.graphassets.com/8gIKGluREuYA7v8GkNJr)

Improvements on existing sources and destinations

- Use TEXT instead of VARCHAR with a character limit as the column type in the Postgres destination
- Show a loader on the data integration pipeline while the list of sources and destinations is still loading

![Untitled](https://media.graphassets.com/JJEYTY4MQJrqstbQqC4Z)


Streaming pipeline

Deserialize Protobuf messages in Kafka’s streaming source

Specify the Protobuf schema class path in the Kafka source config so that Mage can deserialize the Protobuf messages from Kafka.

Doc: [https://docs.mage.ai/guides/streaming/sources/kafka#deserialize-message-with-protobuf-schema](https://docs.mage.ai/guides/streaming/sources/kafka#deserialize-message-with-protobuf-schema)

Add Kafka as streaming destination

Doc: [https://docs.mage.ai/guides/streaming/destinations/kafka](https://docs.mage.ai/guides/streaming/destinations/kafka)

Ingest data to Redshift via Kinesis

Mage doesn’t directly stream data into Redshift. Instead, Mage can stream data to Kinesis. You can configure streaming ingestion for your Amazon Redshift cluster and create a materialized view using SQL statements.

Doc: [https://docs.mage.ai/guides/streaming/destinations/redshift](https://docs.mage.ai/guides/streaming/destinations/redshift)

**Cancel all running pipeline runs for a pipeline**

Add a button to cancel all running pipeline runs for a pipeline.

![Untitled](https://media.graphassets.com/3kwCrK7XSSOWnDm19QMy)

![Untitled](https://media.graphassets.com/4kA8TqgTRn2Zgcw3LHjd)

Other bug fixes & polish

- For the viewer role, don’t show the edit options for the pipeline
- Show “Positional arguments for decorated function” preview for custom blocks

![Untitled](https://media.graphassets.com/kMZOOVrgTemmjI0OG43G)

- Disable notebook keyboard shortcuts when typing in input fields in the sidekick


View full [Changelog](https://www.notion.so/mageai/What-s-new-7cc355e38e9c42839d23fdbef2dabd2c)

0.8.11

Not secure
Configure callbacks on block success or failure

- Add callbacks to run after your block succeeds or fails. You can add a callback by clicking “Add callback” in the “More actions” menu of the block (the three dot icon in the top right).
- For more information about callbacks, check out [the Mage documentation](https://docs.mage.ai/guides/blocks/callbacks)

![Configure-callbacks](https://media.graphassets.com/vNVNabsITcgvsry8bNGA)
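
A callback block body might look roughly like this (a minimal sketch; the `@callback('success')`/`@callback('failure')` form follows the callbacks doc, but treat the exact decorator arguments and function signatures as assumptions):

```python
# Hedged sketch of a callback block; exact signatures are assumptions.
if 'callback' not in globals():
    from mage_ai.data_preparation.decorators import callback


@callback('success')
def handle_success(*args, **kwargs):
    print('Block succeeded; send a notification or log here.')


@callback('failure')
def handle_failure(*args, **kwargs):
    print('Block failed; alert here.')
```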


Backfill improvements

- Show preview of total pipeline runs created and timestamps of pipeline runs that will be created before starting backfill.
- Misc UX improvements with the backfills pages (e.g. disabling or hiding irrelevant items depending on backfill status, updating backfill table columns that previously weren't updating as needed)

![backfill-improvements](https://media.graphassets.com/1AXnpFp4R639qs9ZMgFL)

Dynamic block improvements

- Support dynamic block to dynamic block chaining
- Block outputs for dynamic blocks don’t show when clicking on the block run

DBT improvements

- View DBT block run sample model outputs
- Compile + preview, show compiled SQL, run/test/build model options, view lineage for single model, and more.
- When clicking a DBT block in the block runs view, show a sample query result of the model
- Only create the upstream source if it's used
- Don't create an upstream block's SQL table unless a DBT block references it

Handle multi-line pasting in terminal

[https://user-images.githubusercontent.com/78053898/221713029-f1557230-cf79-477b-a0d4-6164eac0624d.mp4](https://user-images.githubusercontent.com/78053898/221713029-f1557230-cf79-477b-a0d4-6164eac0624d.mp4)

File browser improvements

- Upload files and create new files in the root project directory
- Rename and delete any file from file browser

Other bug fixes & polish

- Show pipeline editor main content header on Firefox. The header for the Pipeline Editor main content was hidden for Firefox browsers specifically (which prevented users from being able to change their pipeline names on Firefox).

![Untitled](https://media.graphassets.com/zFOZhyyAStiIuqRxccia)


- Make retry run popup fully visible. Fix issue with Retry pipeline run button popup being cutoff.

![Untitled](https://media.graphassets.com/gVe6Zs1zTcqiNujZDRps)


- Add alert with details on how to allow clipboard paste in insecure contexts

![Untitled](https://media.graphassets.com/lsTTo2dNRJWdJYeP7O8N)
- Show canceling status only for pipeline run being canceled. When multiple runs were being canceled, the status for other runs was being updated to "canceling" even though those runs weren't being canceled.
![Untitled](https://media.graphassets.com/output=format:jpg/Ag7a5YRhRJGcyKOo7owR)
- Remove table prop from destination config. The `table` property is not needed in the data integration destination config templates when building integration pipelines through the UI, so it has been removed.
- Update data loader, transformer, and data exporter templates to not require DataFrame.
- Fix PyArrow issue
- Fix data integration destination row syncing count
- Fix emoji encoding for the BigQuery destination
- Fix Dask memory calculation issue
- Fix NaN being displayed for the runtime value on the Syncs page
- Fix odd formatting in Trigger edit page dropdowns (e.g. Frequency) on Windows
- Don't fall back to an empty pipeline when failing to read the pipeline yaml

View full [Changelog](https://www.notion.so/mageai/What-s-new-7cc355e38e9c42839d23fdbef2dabd2c)

0.8.3

Not secure
User login, management, authentication, roles, and permissions

User login and user-level permission control are supported in mage-ai version 0.8.0 and above.

![Untitled](https://media.graphassets.com/X4YcoIbMRZygg4AbEf3S)

![Untitled](https://media.graphassets.com/2DC7F2SsuoCzQj2yGotg)

Set the environment variable `REQUIRE_USER_AUTHENTICATION` to 1 to turn on user authentication.

Check out the doc to learn more about user authentication and permission control: [https://docs.mage.ai/production/authentication/overview](https://docs.mage.ai/production/authentication/overview)

Data integration

New sources

- [Datadog](https://github.com/mage-ai/mage-ai/blob/master/mage_integrations/mage_integrations/sources/datadog/README.md)

New destinations

- [AWS Redshift](https://github.com/mage-ai/mage-ai/blob/master/mage_integrations/mage_integrations/sources/redshift/README.md)

Full lists of available sources and destinations can be found here:

- Sources: [https://docs.mage.ai/data-integrations/overview#available-sources](https://docs.mage.ai/data-integrations/overview#available-sources)
- Destinations: [https://docs.mage.ai/data-integrations/overview#available-destinations](https://docs.mage.ai/data-integrations/overview#available-destinations)

Improvements on existing sources and destinations

- Update Couchbase source to support more unstructured data.
- Make all columns optional in the data integration source schema table settings UI; don’t force the checkbox to be checked and disabled.
- Batch fetch records in Facebook Ads streams to reduce number of requests.

Add connection credential secrets through the UI and store encrypted in Mage’s database

In various surfaces in Mage, you may be asked to input config for certain integrations, such as cloud databases or services. In these cases, you may need to enter a password or an API key, but you don't want it to be shown in plain text. To get around this issue, we created a way to store your secrets encrypted in the Mage database.

![Untitled](https://media.graphassets.com/q3AyuRluQ22458CkOEK0)

Check out the doc to learn more about secrets management in Mage: [https://docs.mage.ai/development/secrets/secrets](https://docs.mage.ai/development/secrets/secrets)
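
Once a secret is saved in the UI, block code can read it back with the `get_secret_value` helper described in the secrets docs (the secret name below is illustrative):

```python
# Reading a UI-created secret in block code; 'my_api_key' is illustrative.
from mage_ai.data_preparation.shared.secrets import get_secret_value

api_key = get_secret_value('my_api_key')
```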

Configure max number of concurrent block runs

Mage now supports limiting the number of concurrent block runs by customizing the queue config, which helps prevent the Mage server from being overloaded by too many block runs. Users can configure the maximum number of concurrent block runs in the project's metadata.yaml via `queue_config`:

```yaml
queue_config:
  concurrency: 100
```


Add triggers list page and terminal tab

- Add a dedicated page to show all triggers.

![Untitled](https://media.graphassets.com/ub9fuag7TK6q4lXyyUll)

- Add a link to the terminal in the main dashboard left vertical navigation and show the terminal in the main view of the dashboard.

![Untitled](https://media.graphassets.com/5ythVf1eRKmFLb8DcYe9)

Support running PySpark pipelines locally

Support running PySpark pipelines locally without custom code or settings.

If you have a Spark cluster running locally, you can build a standard batch pipeline with PySpark code, the same as with other Python pipelines. Mage handles data passing between blocks automatically for Spark DataFrames. You can use `kwargs['spark']` in Mage blocks to access the Spark session; see the sketch below.
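
A minimal sketch of a block using the injected Spark session (the schema is illustrative):

```python
# Hedged sketch: using the Spark session Mage injects into block kwargs.
if 'data_loader' not in globals():
    from mage_ai.data_preparation.decorators import data_loader


@data_loader
def load_data(*args, **kwargs):
    spark = kwargs['spark']  # Spark session provided by Mage
    return spark.createDataFrame(
        [(1, 'a'), (2, 'b')],
        ['id', 'label'],  # illustrative columns
    )
```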

Other bug fixes & polish

- Add MySQL data exporter template
- Add MySQL data loader template
- Upgrade Pandas version to 1.5.3
- Improve the K8s executor
    - Pass environment variables to k8s job pods
    - Use the same image as the main Mage server in k8s job pods
- Store and return a sample of the block output for large JSON objects
- Support [SASL authentication](https://github.com/mage-ai/mage-ai/blob/master/mage_ai/data_preparation/templates/data_loaders/streaming/kafka.yaml#L15-L20) with Confluent Cloud Kafka in streaming pipelines


View full [Changelog](https://www.notion.so/mageai/What-s-new-7cc355e38e9c42839d23fdbef2dabd2c)
