Mage-ai

Latest version: v0.9.74


0.8.52

![supermariobros](https://media.giphy.com/media/ZdKa3AjRMS5vGZvxQW/giphy.gif)

ClickHouse SQL block

Support using SQL blocks to fetch data from, transform data in, and export data to ClickHouse.

Doc: [https://docs.mage.ai/integrations/databases/ClickHouse](https://docs.mage.ai/integrations/databases/ClickHouse)

![clickhouse](https://media.graphassets.com/dyeQi5UHSNajJG00hGyn)
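
Under the hood, a SQL block reads its connection settings from a profile in `io_config.yaml`. A hedged sketch of what a ClickHouse profile might look like (key names should be verified against the linked doc):

```yaml
default:
  CLICKHOUSE_DATABASE: default
  CLICKHOUSE_HOST: localhost
  CLICKHOUSE_PORT: 8123
  CLICKHOUSE_USERNAME: user      # assumed key names; see the doc above
  CLICKHOUSE_PASSWORD: password
```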

Trino SQL block

Support using SQL blocks to fetch data from, transform data in, and export data to Trino.

Doc: [https://docs.mage.ai/development/blocks/sql/trino](https://docs.mage.ai/development/blocks/sql/trino)

Sentry integration

Enable the Sentry integration to track and monitor exceptions in the Sentry dashboard.
Doc: [https://docs.mage.ai/production/observability/sentry](https://docs.mage.ai/production/observability/sentry)
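
Per the linked doc, enabling the integration comes down to providing a Sentry DSN, e.g. via an environment variable (variable name assumed here; confirm against the doc):

```bash
# Assumed variable name; see the Sentry doc above.
export SENTRY_DSN=<your-sentry-dsn>
```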

Drag and drop to re-order blocks in pipeline

Mage now supports dragging and dropping blocks to re-order them in pipelines.

![mage-drag-and-drop](https://media.graphassets.com/eFgZvPjXS06RfbyeOwKp)

Streaming pipeline

Add AWS SQS streaming source

Support consuming messages from SQS queues in streaming pipelines.

Doc: [https://docs.mage.ai/guides/streaming/sources/amazon-sqs](https://docs.mage.ai/guides/streaming/sources/amazon-sqs)

![amazon-sqs](https://media.graphassets.com/m1khivi9ShCX7v2zhuJs)
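
A rough sketch of the streaming source config (field names follow the linked doc's conventions and should be checked against it):

```yaml
connector_type: amazon_sqs
queue_name: my_queue     # name of the SQS queue to consume from
batch_size: 1            # messages fetched per poll
wait_time_seconds: 1     # long-polling wait time
```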

Add dummy streaming sink

The dummy sink optionally prints each message and then discards it. This is useful when you want to trigger other pipelines or third-party services from the transformer using the ingested data.

Doc: [https://docs.mage.ai/guides/streaming/destinations/dummy](https://docs.mage.ai/guides/streaming/destinations/dummy)

![dummy-streaming-sink](https://media.graphassets.com/CnpvC4c8R9GjSAOfyUWz)
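
The sink config is minimal; a sketch based on the linked doc (key names assumed):

```yaml
connector_type: dummy
print_msg: true    # optionally print each message before discarding it
```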

Delta Lake code templates

Add code templates to fetch data from and export data to Delta Lake.

Delta Lake data loader template

![delta-lake-data-loader-template](https://media.graphassets.com/ypraoggGQXLYdCoztgIP)

Delta Lake data exporter template

![delta-lake-data-exporter-template](https://media.graphassets.com/HvL4JCw3TBWcDfkEN8VE)
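
The templates are built on the open source `deltalake` package; a minimal sketch of the same kind of read and write calls (table URI and storage options are placeholders):

```python
import pandas as pd
from deltalake import DeltaTable
from deltalake.writer import write_deltalake

uri = 's3://my-bucket/delta-table'               # placeholder table location
storage_options = {'AWS_REGION': 'us-west-2'}    # credentials omitted here

# Data loader: read a Delta table into a pandas DataFrame.
df = DeltaTable(uri, storage_options=storage_options).to_pandas()

# Data exporter: append a DataFrame to the Delta table.
write_deltalake(uri, pd.DataFrame({'col': [1, 2]}), mode='append',
                storage_options=storage_options)
```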

Unit tests for Mage pipelines

Support writing unit tests for Mage pipelines that run in your CI/CD pipeline, using mock data.

Doc: [https://docs.mage.ai/development/testing/unit-tests](https://docs.mage.ai/development/testing/unit-tests)
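
The linked doc describes the supported harness. As one hedged illustration of the general idea, a block's decorated function can also be imported and exercised directly with mock data in a plain pytest test (module and function names here are hypothetical):

```python
import pandas as pd

# Hypothetical import of a transformer block file from a Mage project.
from transformers.clean_users import transform


def test_transform_drops_null_ids():
    mock_df = pd.DataFrame({'id': [1, None], 'name': ['a', 'b']})
    result = transform(mock_df)
    assert result['id'].notnull().all()
```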

Data integration pipeline

- **Chargebee source:** Fix the issue with loading sample data.
- **Redshift destination:** Handle unique constraints in destination tables.

DBT improvements

- Fix an issue where, if two DBT model files in the same directory shared the same name but one had an extra `.sql` extension, deleting the file with the double `.sql` extension could delete the wrong file.
- Support using a Python block to transform data between DBT model blocks.
- Support `+schema` in the DBT profile.

Other bug fixes & polish

- SQL block
    - Automatically limit SQL block data fetching while using the notebook, with a manual override to adjust the limit. These limits are removed when running the pipeline end-to-end outside the notebook.
    - Only export upstream blocks if the current block uses raw SQL and interpolates the upstream variable.
    - Update SQL block to use the `io_config.yaml` database and schema by default.
- Fix timezone in pipeline run execution date.
- Show backfill preview dates in UTC time.
- Raise an exception when loading an empty pipeline config.
- Fix dynamic block creation when a reduced block has another dynamic block as a downstream block.
- Write Spark DataFrames in Parquet format instead of CSV format.
- Disable user authentication when `REQUIRE_USER_AUTHENTICATION=0`.
- Fix logging for callback blocks.
- Git
    - Import git only when the Git feature is used.
    - Update the git actions error message.
- Notebook
    - Fix the Notebook page freezing issue.
    - Make the Notebook right vertical navigation sticky.
- More documentation
    - Add an architecture overview diagram and doc: [https://docs.mage.ai/production/deploying-to-cloud/architecture](https://docs.mage.ai/production/deploying-to-cloud/architecture)
    - Add a doc for setting up the event trigger lambda function: [https://docs.mage.ai/guides/triggers/events/aws#set-up-lambda-function](https://docs.mage.ai/guides/triggers/events/aws#set-up-lambda-function)


View full [Changelog](https://www.notion.so/What-s-new-7cc355e38e9c42839d23fdbef2dabd2c)

0.8.44

![Untitled](https://media.giphy.com/media/v1.Y2lkPTc5MGI3NjExNTliODAxMDNmYTEyYjI5MGNlODEyZDc4Y2ZmZTY5ZWQ0ODhiMDUwZCZjdD1n/dAsMojnSbjtX30SeC6/giphy.gif)

Configure trigger in code

In addition to configuring triggers in the UI, Mage now supports configuring triggers in code. Create a `triggers.yaml` file under your pipeline folder and enter the trigger configs. The triggers will automatically be synced to the database and the trigger UI.

![config-trigger-in-code](https://media.graphassets.com/S3EnLfELQu2HWZQEAH7L)

Doc: [https://docs.mage.ai/guides/triggers/configure-triggers-in-code](https://docs.mage.ai/guides/triggers/configure-triggers-in-code)
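
A sketch of what a `triggers.yaml` might contain (field names follow the linked doc's general shape and should be checked against it):

```yaml
triggers:
- name: daily_trigger          # display name of the trigger
  schedule_type: time          # assumed values include time, event and api
  schedule_interval: '@daily'
  start_time: 2023-01-01
  status: active
```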

**Centralize server logger and add verbosity control**

Shout out to [Dhia Eddine Gharsallaoui](https://github.com/dhia-gharsallaoui) for his contribution of centralizing the server logging and adding verbosity control. Users can control the verbosity level of the server logging by setting the `SERVER_VERBOSITY` environment variable. For example, set `SERVER_VERBOSITY` to `ERROR` to only print out errors.

Doc: [https://docs.mage.ai/production/observability/logging#server-logging](https://docs.mage.ai/production/observability/logging#server-logging)

Customize resource for Kubernetes executor

Users can now customize resources for the Kubernetes executor by adding `executor_config` to the block config in the pipeline's `metadata.yaml`.

![customize-k8s](https://media.graphassets.com/y8phjhITz6J5mmWZOSgk)

Doc: [https://docs.mage.ai/production/configuring-production-settings/compute-resource#kubernetes-executor](https://docs.mage.ai/production/configuring-production-settings/compute-resource#kubernetes-executor)
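
A sketch of the shape of that config in the pipeline's `metadata.yaml` (resource values are illustrative; see the doc for the exact schema):

```yaml
blocks:
- uuid: example_data_loader
  executor_type: k8s
  executor_config:
    resource_limits:
      cpu: 1000m
      memory: 2048Mi
    resource_requests:
      cpu: 500m
      memory: 1024Mi
```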

Data integration pipelines

- **Google Sheets source:** Fix loading sample data from Google Sheets.
- **Postgres source:** Allow customizing the publication name for logical replication.
- **Google Search Console source:** Support the email field in the `google_search_console` config.
- **BigQuery destination:** Limit the number of subqueries in a BigQuery query.
- Show a more descriptive error (instead of `{}`) when a previously selected stream may have been deleted or renamed. Such a stream will still appear in the `SelectStreams` modal, but it will automatically be deselected and flagged in red as no longer available. Click “Confirm” to remove the deleted stream from the schema.

![descriptive-error](https://media.graphassets.com/p4yKR63DSRGgI9Jn8SN4)

Terminal improvements

- Use named terminals instead of creating a unique terminal every time Mage connects to the terminal websocket.
- Update the terminal for Windows: use the `cmd` shell command instead of bash, and allow users to overwrite the shell command with the `SHELL_COMMAND` environment variable.
- Support copying and pasting multiple commands into the terminal at once.
- When changing the path in the terminal, don't permanently change the path globally for all other processes.
- Show correct logs in the terminal when installing requirements.txt.

DBT improvements

- Interpolate environment variables and secrets in the DBT profile (see the sketch below).
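
For environment variables, dbt's built-in `env_var` function is the usual mechanism; the Mage-specific secret syntax is described in Mage's DBT docs. A hedged `profiles.yml` fragment:

```yaml
my_profile:
  target: prod
  outputs:
    prod:
      type: postgres
      host: "{{ env_var('DB_HOST') }}"          # dbt's built-in env var interpolation
      password: "{{ env_var('DB_PASSWORD') }}"
```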

Git improvements

- Update git to support multiple users

Postgres exporter improvements

- Support reordering columns when exporting a dataframe to Postgres
- Support specifying unique constraints when exporting the dataframe

```python
from os import path

from mage_ai.data_preparation.repo_manager import get_repo_path
from mage_ai.io.config import ConfigFileLoader
from mage_ai.io.postgres import Postgres

config_path = path.join(get_repo_path(), 'io_config.yaml')
config_profile = 'default'

# df, schema_name and table_name are defined upstream in the block.
with Postgres.with_config(ConfigFileLoader(config_path, config_profile)) as loader:
    loader.export(
        df,
        schema_name,
        table_name,
        index=False,  # don't write the DataFrame index as a column
        if_exists='append',
        allow_reserved_words=True,
        unique_conflict_method='UPDATE',  # update existing rows on unique conflicts
        unique_constraints=['col'],
    )
```


Other bug fixes & polish

- Fix chart loading errors.
- Allow pipeline runs to be canceled from the UI.
- Fix raw SQL blocks trying to export upstream Python blocks.
- Don't require metadata for dynamic blocks.
- When editing a file in the file editor, disable keyboard shortcuts for notebook pipeline blocks.
- Increase the autosave interval from 5 to 10 seconds.
- Improve vertical navigation fixed scrolling.
- Allow users to force delete block files. When attempting to delete a block file with downstream dependencies, users can now override the safeguards in place and choose to delete the block regardless.

View full [Changelog](https://www.notion.so/mageai/What-s-new-7cc355e38e9c42839d23fdbef2dabd2c)

0.8.37

![Untitled](https://media.giphy.com/media/arMJIb82UoJqjHdgPr/giphy-downsized-large.gif)

Interactive terminal

The terminal experience is improved in this release, which adds new interactive features and boosts performance. Now, you can use the following interactive commands and more:

- `git add -p`
- `dbt init demo`
- `great_expectations init`

![Untitled](https://media.graphassets.com/auCAd8LUQc6c3iWgEr5j)

Data integration pipeline

New source: Google Ads

Shout out to [Luis Salomão](https://github.com/Luishfs) for adding the Google Ads source.

Doc: [https://github.com/mage-ai/mage-ai/blob/master/mage_integrations/mage_integrations/sources/google_ads/README.md](https://github.com/mage-ai/mage-ai/blob/master/mage_integrations/mage_integrations/sources/google_ads/README.md)

New source: Snowflake

Doc: [https://github.com/mage-ai/mage-ai/blob/master/mage_integrations/mage_integrations/sources/snowflake/README.md](https://github.com/mage-ai/mage-ai/blob/master/mage_integrations/mage_integrations/sources/snowflake/README.md)

New destination: Amazon S3

Doc: [https://github.com/mage-ai/mage-ai/blob/master/mage_integrations/mage_integrations/destinations/amazon_s3/README.md](https://github.com/mage-ai/mage-ai/blob/master/mage_integrations/mage_integrations/destinations/amazon_s3/README.md)

Bug Fixes

- In the MySQL source, map the Decimal type to Number.
- In the MySQL destination, use `DOUBLE PRECISION` instead of `DECIMAL` as the column type for float/double numbers.

Streaming pipeline

New sink: Amazon S3

Doc: [https://docs.mage.ai/guides/streaming/destinations/amazon-s3](https://docs.mage.ai/guides/streaming/destinations/amazon-s3)

Other improvements

- Enable the logging of custom exceptions in the transformer of a streaming pipeline. Here is an example code snippet:

```python
from typing import Dict, List

if 'transformer' not in globals():
    from mage_ai.data_preparation.decorators import transformer


@transformer
def transform(messages: List[Dict], *args, **kwargs):
    try:
        raise Exception('test')  # simulate an error to demonstrate logging
    except Exception as err:
        kwargs['logger'].error('Test exception', error=err)
    return messages
```


- Support cancelling a running streaming pipeline (when the pipeline is executed in the Pipeline Editor) after the page is refreshed.

Alerting option for Google Chat

Shout out to [Tim Ebben](https://github.com/tebben) for adding the option to send alerts to Google Chat in the same way as Teams/Slack using a webhook.

Example config in the project's `metadata.yaml`:

```yaml
notification_config:
  alert_on:
    - trigger_failure
    - trigger_passed_sla
  google_chat_config:
    webhook_url: ...
```


How to create a webhook URL: [https://developers.google.com/chat/how-tos/webhooks#create_a_webhook](https://developers.google.com/chat/how-tos/webhooks#create_a_webhook)

Other bug fixes & polish

- Prevent a user from editing a pipeline if it’s stale. A pipeline can go stale if there are multiple tabs open trying to edit the same pipeline or multiple people editing the pipeline at different times.
- Fix bug: Code block scrolls out of view when focusing on the code block editor area and collapsing/expanding blocks within the code editor.
- Fix bug: Sync UI is not updating the "rows processed" value.
- Fix the path issue of running dynamic blocks on a Windows server.
- Fix index out of range error in data integration transformer when filtering data in the transformer.
- Fix issues of loading sample data in Google Sheets.
- Fix chart blocks loading data.
- Fix Git integration bugs:
    - Remove the confirm modal shown after clicking “Synchronize data”, since it sometimes did not actually run the sync.
    - Fix various git-related user permission issues.
    - Create the local repo git path if it doesn't exist already.
- Add preventive measures for saving a pipeline:
    - If the content that is about to be saved to a YAML file is invalid YAML, raise an exception.
    - If the block UUIDs from the current pipeline and the content that is about to be saved differ, raise an exception.
- DBT block
    - Support DBT staging. When a DBT model runs and it's configured to use a schema with a suffix, Mage will now take that into account when fetching a sample of the model at the end of the block run.
    - Fix the `Circular reference detected` issue with DBT variables.
    - Allow manually inputting the DBT block profile to support variable interpolation.

![Untitled](https://media.graphassets.com/4GpY6C3TS3O7rTIJI8gk)

- Show DBT logs when running compile and preview.

![Untitled](https://media.graphassets.com/3hJC7SB1StOKsyODL8Nn)

- SQL block
    - Don't limit raw SQL queries; allow all rows to be retrieved.
    - Support SQL blocks passing data to downstream SQL blocks with different data providers.
    - Raise an exception if a raw SQL block tries to interpolate an upstream raw SQL block from a different data provider.
    - Fix date serialization from one block to another.
- Add helper for using CRON syntax in trigger setting.

![Untitled](https://media.graphassets.com/1ThPPn7Sci4um5slzCMK)

- Document internal API endpoints for development and contributions: [https://docs.mage.ai/contributing/backend/api/overview](https://docs.mage.ai/contributing/backend/api/overview)

View full [Changelog](https://www.notion.so/mageai/What-s-new-7cc355e38e9c42839d23fdbef2dabd2c)

0.8.29

![Untitled](https://media.giphy.com/media/KRf4fgn5PPgqGEoiHB/giphy.gif)

Commit, push and pull code changes between Mage and Github/Gitlab repository

Mage now supports Github/Gitlab integration via the UI. You can perform the following actions in the UI:

- Create new branch
- Commit & push
- Pull
- Hard reset

Doc on setting up integration: [https://docs.mage.ai/production/data-sync/git](https://docs.mage.ai/production/data-sync/git)

![Untitled](https://media.graphassets.com/sx09Cel0R3ev6PktcohZ)

![Untitled](https://media.graphassets.com/zHVOt3ieQvuPmzvrGYnP)

Deploy Mage using AWS CodeCommit and AWS CodeDeploy

Add [terraform templates](https://github.com/mage-ai/mage-ai-terraform-templates/tree/master/aws-code-pipeline) for deploying Mage to ECS from a CodeCommit repo with AWS CodePipeline. This creates two separate CodePipelines: one builds a docker image from a CodeCommit repository and pushes it to ECR; the other reads from ECR and deploys to ECS.

Docs on using the terraform templates: [https://docs.mage.ai/production/deploying-to-cloud/aws/code-pipeline](https://docs.mage.ai/production/deploying-to-cloud/aws/code-pipeline)

Use ECS task roles for AWS authentication

When you run Mage on AWS, you can use an ECS task role to authenticate with AWS services instead of hardcoded API keys.

Doc: [https://docs.mage.ai/production/deploying-to-cloud/aws/setup#authentication-with-ecs-task-execution-role](https://docs.mage.ai/production/deploying-to-cloud/aws/setup#authentication-with-ecs-task-execution-role)

**Opening http://localhost:6789/ automatically**

Shout out to [Bruno Gonzalez](https://github.com/bruno-uy) for his contribution of automatically opening Mage in a browser tab when you run the `mage start` command on your laptop.

Github issue: https://github.com/mage-ai/mage-ai/issues/2233

**Update notebook error display**

When executing a block in the notebook and an error occurs, show the stack trace of the error without including the custom code wrapper (useless information).

Before:

![Untitled](https://media.graphassets.com/C44Gd8XwQgJjsVq40mIr)

After:

![Untitled](https://media.graphassets.com/E8zXtZQ8Q6xe5b75YQr1)

Data integration pipeline improvements

MySQL

- Add Mage's automatically created columns to the destination table if the table already exists in MySQL.
- Don't lowercase column names for the MySQL destination.

Commercetools

- Add inventory stream for Commercetools source.

Outreach

- Fix outreach source rate limit issue.

Snowflake

- Fix Snowflake destination column comparison when altering table. Use uppercase for column names if `disable_double_quotes` is True.
- Escape single quote when converting array values.

Streaming pipeline improvements

- Truncate print messages in execution output to prevent freezing the browser.
- Disable keyboard shortcuts for running blocks in streaming pipelines.
- Add an async handler to the streaming source base class. Set `consume_method = SourceConsumeMethod.READ_ASYNC` in your streaming source class and the `read_async` method will be used; see the sketch below.
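
A rough sketch of a custom source using the async path (the import path, enum location, and handler signature are assumptions; check the streaming source base class for the real interface):

```python
# Assumed import path; verify against the streaming source base class in mage_ai.
from mage_ai.streaming.sources.base import BaseSource, SourceConsumeMethod


class MyAsyncSource(BaseSource):
    consume_method = SourceConsumeMethod.READ_ASYNC

    async def read_async(self, handler):
        # Await messages from your system and hand each one to Mage's handler.
        async for message in self.poll_messages():  # hypothetical helper
            handler(message)
```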

Pass event variable to kwargs for event trigger pipelines

Mage supports triggering pipelines on AWS events. Now, you can access the raw event data in a block method via `kwargs['event']`. This enhancement lets you customize pipelines based on the event trigger and handle the event data as needed within your pipeline code.
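
For instance, in a transformer block (a sketch; the `event` key is only populated for event-triggered runs):

```python
if 'transformer' not in globals():
    from mage_ai.data_preparation.decorators import transformer


@transformer
def transform(data, *args, **kwargs):
    event = kwargs.get('event', {})  # raw AWS event payload for event-triggered runs
    # Customize behavior based on the event, e.g. an S3 bucket or object key.
    return data
```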

Other bug fixes & polish

- Fix the “Circular import” error when using `secret_var` in the repo's metadata.yaml.
- Fix bug: Tooltip at very right of Pipeline Runs or Block Runs graphs added a horizontal overflow.
- Fix bug: If an upstream dependency was added on the Pipeline Editor page, stale upstream connections would be updated for a block when executing a block via keyboard shortcut (i.e. cmd/ctrl+enter) inside the block code editor.
- Cast column types (int, float) when reading block output Dataframe.
- Fix block run status caching issue in UI. Mage UI sometimes fetched stale block run statuses from backend, which is misleading. Now, the UI always fetches the latest block run status without cache.
- Fix timezone mismatch issue for pipeline schedule execution date comparison so that there’re no duplicate pipeline runs created.
- Fix bug: Sidekick horizontal scroll bar not wide enough to fit 21 blocks when zoomed all the way out.
- Fix bug: When adding a block between two blocks, if the first block was a SQL block, it would use the SQL block content to create a block regardless of the block language.
- Fix bug: Logs for pipeline re-runs were not being filtered by timestamp correctly due to execution date of original pipeline run being used for filters.
- Increase canvas size of dependency graph to accommodate more blocks / blocks with long names.

View full [Changelog](https://www.notion.so/mageai/What-s-new-7cc355e38e9c42839d23fdbef2dabd2c)

0.8.27

![shazam](https://media.giphy.com/media/FAXgrcmx38hhzL3N2Y/giphy-downsized.gif)

Great Expectations integration

Mage is now integrated with [Great Expectations](https://greatexpectations.io/) to test the data produced by pipeline blocks.

You can use all the [expectations](https://greatexpectations.io/expectations/) easily in your Mage pipeline to ensure your data quality.

![Untitled](https://media.graphassets.com/HgWNFABERMKVecEiHQ04)

![Untitled](https://media.graphassets.com/cqZRbiQNezQDHqTjX0YA)

Follow the [doc](https://docs.mage.ai/development/testing/great-expectations) to add expectations to your pipeline to run tests for your block output.

Pipeline dashboard updates

![Untitled](https://media.graphassets.com/ULCfiwiAR5ikpY0b52qb)

- Added pipeline description.
- Single click on a row no longer opens a pipeline. To open a pipeline, double-click a row, click the pipeline name, or click the open folder icon at the end of the row.
- Select a pipeline row to perform an action (e.g. clone, delete, rename, or edit description).
    - **Clone pipeline** (icon with 2 overlapping squares): cloning the selected pipeline creates a new pipeline with the same configuration and code blocks. The blocks use the same block files as the original pipeline. Pipeline triggers, runs, backfills, and logs are not copied over to the new pipeline.
    - **Delete pipeline** (trash icon): deletes the selected pipeline.
    - **Rename pipeline** (item in dropdown menu under ellipsis icon): renames the selected pipeline.
    - **Edit description** (item in dropdown menu under ellipsis icon): edits the pipeline description. Hover over the description in the table to view more of it.
- Click the file icon under the `Actions` column to go directly to the pipeline's logs.
- Added a search bar that matches text against the pipeline `uuid`, `name`, and `description` and filters the list accordingly.
- The create, update, and delete actions are not accessible to Viewer roles.
- Added a badge in the Filter button indicating the number of filters applied.

![Untitled](https://media.graphassets.com/CSOjtdIxRJawC9xzCFMz)

- Group pipelines by `status` or `type`.

![Untitled](https://media.graphassets.com/r78cTW8HTCsMfYpLj1dL)


SQL block improvements

Toggle SQL block to not create table

Users can write raw SQL blocks that only include an `INSERT` statement; a `CREATE TABLE` statement is no longer required.

Support writing SELECT statements using raw SQL

Users can now write `SELECT` statements using raw SQL in SQL blocks.

Find all supported SQL statements using raw SQL in this [doc](https://docs.mage.ai/guides/sql-blocks#required-sql-statements).

Support for SSH tunnels in multiple blocks

When using an [SSH tunnel](https://docs.mage.ai/integrations/databases/PostgreSQL#ssh-tunneling) to connect to a Postgres database, the tunnel was originally supported in only one block run at a time due to port conflicts. Mage now supports SSH tunneling in multiple blocks by finding an unused port to use as the local port. This also works in Python blocks that use the `mage_ai.io.postgres` module.

Data integration pipeline

New source: Pipedrive

Shout out to [Luis Salomão](https://github.com/Luishfs) for his continuous contribution to Mage. The new source [Pipedrive](https://github.com/mage-ai/mage-ai/tree/master/mage_integrations/mage_integrations/sources/pipedrive) is available in Mage now.

Fix BigQuery “query too large” error

Add a check for the size of the query, since it can potentially exceed BigQuery's limit.

New sensor templates

A [Sensor](https://docs.mage.ai/design/blocks/sensor) block continuously evaluates a condition until it's met. Mage now has more sensor templates to check whether data has landed in an S3 bucket or a SQL data warehouse.

Sensor template for checking if a file exists in S3

![Untitled](https://media.graphassets.com/1HDJkmx4QCkX9V8Z3Fcu)

Sensor template for checking the data in SQL data warehouse

![Untitled](https://media.graphassets.com/GrnTOZqSn2wa6lAxQwrj)

Support for Spark in standalone mode (self-hosted)

Mage can connect to a standalone Spark cluster and run PySpark code on it. Set the environment variable `SPARK_MASTER_HOST` in your Mage container or instance; PySpark code in a standard batch pipeline will then work automagically by executing on the remote Spark cluster.

Follow this [doc](https://docs.mage.ai/integrations/spark-pyspark#standalone-spark-cluster) to set up Mage to connect to a standalone Spark cluster.
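
For example (the master URL is illustrative):

```bash
# Point Mage at the Spark master before starting it.
export SPARK_MASTER_HOST=spark://spark-master.example.com:7077
```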

Mask environment variable values with stars in output

Mage now automatically masks environment variable values with stars in terminal output or block output to prevent showing sensitive data in plaintext.

![Untitled](https://media.graphassets.com/wAltPgI3Qt6ye9ZMFjEh)

Other bug fixes & polish

- Improve streaming pipeline logging
    - Show streaming pipeline error logging
    - Write logs to multiple files
- Provide the working NGINX config to allow Mage WebSocket traffic.

```nginx
location / {
    proxy_pass http://127.0.0.1:6789;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "Upgrade";
    proxy_set_header Host $host;
}
```


- Fix raw SQL quote error.
- Add documentation for developer to add a new source or sink to streaming pipeline: [https://docs.mage.ai/guides/streaming/contributing](https://docs.mage.ai/guides/streaming/contributing)

View full [Changelog](https://www.notion.so/mageai/What-s-new-7cc355e38e9c42839d23fdbef2dabd2c)

0.8.24

Disable editing files or executing code in production environment

You can configure Mage to disallow any edits to pipelines or blocks in the production environment. Users will only be able to create triggers and view the existing pipelines.

![Untitled](https://media.graphassets.com/BDYxkWYhQJ2vhNXJQ6HA)

Doc: [https://docs.mage.ai/production/configuring-production-settings/overview#read-only-access](https://docs.mage.ai/production/configuring-production-settings/overview#read-only-access)

Pipeline and file versioning

- Create versions of a file on every create or update of its content. Display the versions and allow users to revert the current file content to a specific file version. Doc: [https://docs.mage.ai/development/versioning/file-versions](https://docs.mage.ai/development/versioning/file-versions)

![Untitled](https://media.graphassets.com/XxvwT8xQzuzYVGOGGfKB)

- Support pipeline file versioning. Display the historical pipeline versions and allow users to roll back to a previous pipeline version if the pipeline config gets messed up.

![Untitled](https://media.graphassets.com/BUpHIOXjTVWFtqj5MaQg)

**Support LDAP authentication**

Shout out to [Dhia Eddine Gharsallaoui](https://github.com/dhia-gharsallaoui) for his contribution of adding an LDAP authentication method to Mage. When LDAP authentication is enabled, users will need to provide their LDAP credentials to log in to the system. Once authenticated, Mage will use the authorization filter to determine the user's permissions based on their LDAP group membership.

Follow the [guide](https://docs.mage.ai/production/authentication/overview#ldap) to set up LDAP authentication.

**DBT support for SQL Server**

Support running SQL Server DBT models in Mage.

![Untitled](https://media.graphassets.com/LRIwZIczR9mqgftiEJkM)

Tutorial for setting up a DBT project in Mage: [https://docs.mage.ai/tutorials/setup-dbt](https://docs.mage.ai/tutorials/setup-dbt)

Helm deployment

Mage can now be deployed to Kubernetes with Helm: [https://mage-ai.github.io/helm-charts/](https://mage-ai.github.io/helm-charts/)

How to install Mage Helm charts

```bash
helm repo add mageai https://mage-ai.github.io/helm-charts
helm install my-mageai mageai/mageai
```


To customize the mount volume for the Mage container, you'll need to customize the `values.yaml`:

- Get the `values.yaml` with the command

```bash
helm show values mageai/mageai > values.yaml
```


- Edit the `volumes` config in `values.yaml` to mount to your Mage project path

Doc: https://docs.mage.ai/production/deploying-to-cloud/using-helm

Integration with Spark running in the same Kubernetes cluster

When you run Mage and Spark in the same Kubernetes cluster, you can set the environment variable `SPARK_MASTER_HOST` in the Mage container to the URL of the Spark cluster's master node. Then you'll be able to connect Mage to your Spark cluster and execute PySpark code in Mage.

Follow this [guide](https://docs.mage.ai/integrations/spark-pyspark#kubernetes) to use Mage with Spark in Kubernetes cluster.

Improve Kafka source and sink for streaming pipeline

- Support setting `api_version` in the [Kafka source](https://docs.mage.ai/guides/streaming/sources/kafka#basic-config) and [Kafka destination](https://docs.mage.ai/guides/streaming/destinations/kafka#basic-config).
- Allow passing raw message value to transformer so that custom deserialization logic can be applied in transformer (e.g. custom Protobuf deserialization logic).

Data integration pipeline

- Add more streams to Front app source:
    - Channels
    - Custom Fields
    - Conversations
    - Events
    - Rules
- Fix Snowflake destination alter table command errors
- Fix MySQL source [bytes decode error](https://github.com/mage-ai/mage-ai/issues/2164)

Pipeline table filtering

Add filtering (by status and type) for pipelines.

![image](https://user-images.githubusercontent.com/80284865/225091205-0fb5e79b-57eb-43e9-9f63-d4501788e6a1.png)

![image](https://user-images.githubusercontent.com/80284865/225091281-a94ed5b9-5f96-4859-82bf-3c381790a067.png)

Renaming a pipeline transfers all the existing triggers, variables, pipeline runs, block runs, etc to the new pipeline

- When renaming a pipeline, transfer existing triggers, backfills, pipeline runs, and block runs to the new pipeline name.
- Prevent users from renaming pipeline to a name already in use by another pipeline.

Update the Variables tab with more instructions for SQL and R variables

- Update the Variables tab with more instructions for SQL and R variables.

![Untitled](https://media.graphassets.com/l750AVqgSZmsnlc4wiIi)

- Improve SQL/R block upstream block interpolation helper hints.

![Untitled](https://media.graphassets.com/Injnycz0STSnhGBpnUF2)

![Untitled](https://media.graphassets.com/wHRQU2T3RSuvp8dP0nl4)

Other bug fixes & polish

- Update sidekick to have a vertical navigation

![Untitled](https://media.graphassets.com/zzP9wTh7SvWUeifoOXCR)

- Fix `Allow blocks to fail` setting for pipelines with dynamic blocks.
- Git sync: Overwrite origin url with the user's remote_repo_link if it already exists.
- Resolve DB model refresh issues in pipeline scheduler
- Fix bug: Execute pipeline in Pipeline Editor gets stuck at first block.
- Use the upstream dynamic block’s block metadata as the downstream child block’s kwargs.
- Fix using reserved words as column names in the `mage_ai.io` Postgres export method
- Fix error `sqlalchemy.exc.PendingRollbackError: Can't reconnect until invalid transaction is rolled back.` in API middleware
- Hide "custom" add block button in streaming pipelines.
- Fix bug: Paste not working in Firefox browser (https) (Error: "navigator.clipboard.read is not a function").

View full [Changelog](https://www.notion.so/mageai/What-s-new-7cc355e38e9c42839d23fdbef2dabd2c)
