Inductiva

Latest version: v0.13.1


| Instruction | Output |
| --- | --- |
|`gromacs = inductiva.simulators.GROMACS(version="2024.2")` | `ValueError: Version 2024.2 is not available for simulator gromacs. Available versions are: ['2022.2'].` <br> (Simulator version is not available)|
|`gromacs = inductiva.simulators.GROMACS(version="2022.2", use_dev=True)` | `Using development image of GROMACS version 2022.2` <br> (Running a specific version in dev)|
|`gromacs = inductiva.simulators.GROMACS(version="2024.2", use_dev=True)` | `ValueError: Version 2024.2 is not available for simulator gromacs. Available versions are: ['2022.2'].` <br> (Simulator version is not available in dev)|
|`gromacs = inductiva.simulators.GROMACS(version=2022.2)` | `ValueError: Version 2022.2 is not available for simulator gromacs. Available versions are: ['2022.2'].` <br> (Simulator version cannot be float)|
|`gromacs = inductiva.simulators.GROMACS(use_dev=1)` | `Using development image of GROMACS version 2022.2` <br> (Python treats 1 as True)|
|`gromacs = inductiva.simulators.GROMACS(use_dev="Hello")` | `Using development image of GROMACS version 2022.2` <br> (Python treats strings as True)|

[1010](https://github.com/inductiva/inductiva-web-api/issues/1010)
Validate that the `min_vms` and `max_vms` parameters of [elastic machine groups](https://docs.inductiva.ai/en/latest/how_to/set-up-elastic-machine-group.html) are positive, preventing human error (e.g. passing a negative value) from causing the instruction to fail.
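As an illustration of the new check, here is a minimal sketch; the `ElasticMachineGroup` class path follows the linked docs, and the exact error message is an assumption for this example (only the `min_vms`/`max_vms` parameter names come from the note above):

```python
import inductiva

# Hypothetical sketch: a negative minimum number of VMs is now rejected
# up front with a clear error, instead of the instruction failing later.
try:
    mg = inductiva.resources.ElasticMachineGroup(
        machine_type="c2-standard-4",
        min_vms=-1,  # invalid: must be positive
        max_vms=5,
    )
except ValueError as err:
    print(err)  # expected: a message stating that min_vms must be positive
```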

[973](https://github.com/inductiva/inductiva-web-api/issues/973), [868](https://github.com/inductiva/inductiva-web-api/issues/868)
Explicitly close database sessions once they are no longer needed, to save resources and prevent blocking all available DB sessions.
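This is a server-side change, but as a generic illustration of the pattern (not Inductiva's actual code), scoping each session in a context manager guarantees it is closed even when the work inside fails:

```python
from contextlib import contextmanager

@contextmanager
def db_session(session_factory):
    """Generic illustration only: yield a session and always close it."""
    session = session_factory()
    try:
        yield session
        session.commit()
    except Exception:
        session.rollback()
        raise
    finally:
        session.close()  # returns the connection to the pool instead of blocking it
```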

[947](https://github.com/inductiva/inductiva-web-api/issues/947)
When a machine group is terminated after a timeout by the [computational resources janitor](https://docs.inductiva.ai/en/latest/api_reference/computational_resources/compute_janitor.html), the task status is updated to `executer-terminated-ttl-exceeded` instead of `executer-terminated-by-user`, providing more clarity over what caused the machine group to be terminated.

[965](https://github.com/inductiva/inductiva-web-api/issues/965), [181](https://github.com/inductiva/tasks/issues/181)
When a machine fails while executing [multiple tasks](https://docs.inductiva.ai/en/latest/how_to/run-parallel_simulations.html), the `executer-failed` error is associated only with the task that was being run at the time of the failure, and not with any previous task that had run successfully.
In this situation, the following steps could help the user learn more about what happened:
1. `inductiva tasks list` to list all tasks and check out which has a failure status;
2. `inductiva tasks info <task_id>` to inspect the status detail, which leads to the failure reason.

[144](https://github.com/inductiva/tasks/issues/144)
Resolved a bug that was preventing users with Windows OS from terminating their resources or killing their tasks from the CLI.

[951](https://github.com/inductiva/inductiva-web-api/issues/951)
Users can now access all Docker images available in the [Kutu](https://github.com/inductiva/kutu) [Docker Hub repository](https://hub.docker.com/r/inductiva/kutu/tags), so they can take full advantage of these resources.

[71](https://github.com/inductiva/tasks/issues/71)
Ability to download the files associated with a specific [project](https://tutorials.inductiva.ai/intro_to_api/projects.html):
- `inductiva projects download project-name`: download all files from every task in a project;
- `inductiva projects download project-name --output-dir downloads`: download all files from every task in a project to the `downloads` directory;
- `inductiva projects download project-name --files file1.txt`: download `file1.txt` from every task in a project;
- `inductiva projects download project-name --std`: download `stdout.txt` and `stderr.txt` from every task in a project.

0.13.0

List of changes in the v0.13 release

Table of contents
- [Release's Highlight: Inductiva's API on local machines](#task-runner)
- [Major upgrade in Console’s MachineGroups](#consolemg)
  - [Better navigation for full overview](#consolemg_overview)
  - [Active resources](#consolemg_active)
  - [Detail](#consolemg_detail)
  - [Terminated](#consolemg_terminated)
- [Costs visibility and management](#costsmgmt)
  - [User’s costs breakdown](#costsmgmt_userbreakdown)
  - [Real-time credit management](#costsmgmt_realtimecredit)
- [More Simulation options](#moresimulators)
  - [New OpenFOAM versions](#moresimulators_openfoam)
  - [Improved usability of DualSPHysics](#moresimulators_dualsphysics)
  - [Task’s life cycle in the python client](#moresimulators_commands)

Release's Highlight: Inductiva's API on local machines <a name="task-runner"></a>
Users can now fully leverage Inductiva's API features on their local machines. With the local task-runner, simulations can be run directly on the user’s computer or any other available compute resource, making it an ideal option for testing and for minimizing costs: since no cloud resource is allocated, there are no computation costs associated with local machines or with any tasks executed on them. The simulation’s input and output files are still safely stored in the user’s cloud bucket while needed.
The web [Console](https://console.inductiva.ai/) and the [CLI](https://docs.inductiva.ai/en/latest/cli/cli-overview.html) commands continue to provide visibility over the computation resources being used, clearly distinguishing their cloud or local hosting, as well as where the tasks are run.

Follow the [tutorial](https://tutorials.inductiva.ai/how_to/use-local-task-runner.html) to learn how to configure, launch, and run simulations locally.

Major upgrade in Console’s MachineGroups <a name="consolemg"></a>
The MachineGroups area in the user’s [Console](https://console.inductiva.ai/machine-groups/active) has been significantly enhanced with a range of powerful new features. Explore the improved functionality, designed to give better visibility into and management of [computational resources](https://docs.inductiva.ai/en/latest/intro_to_api/shared_dedicated_resources.html#resource-allocation-options) through a user-friendly interface.

Better navigation for full overview <a name="consolemg_overview"></a>
A new layout ensures effortless navigation and better visibility into the user’s resources by providing clear and organized access to three distinct sub-areas:
- [Active](https://console.inductiva.ai/machine-groups/active), displaying currently running MachineGroups;
- [Terminated](https://console.inductiva.ai/machine-groups/terminated), listing previous instances;
- [Instance Types](https://console.inductiva.ai/machine-groups/instance-types), showcasing the [available computational resources](https://github.com/inductiva/inductiva/releases#listmachines).

Active resources <a name="consolemg_active"></a>
The [Active MachineGroups](https://console.inductiva.ai/machine-groups/active) screen was restructured to surface the most significant data, giving a complete and detailed overview of the currently active machine groups, especially their duration and estimated cost. This gives users an immediate understanding of their ongoing expenses and enables more efficient management of their active resources.

To give users a deeper understanding of the resources' [attributes](https://docs.inductiva.ai/en/latest/api_reference/computational_resources/machinegroup_class.html#), tooltips with brief explanations can be found in the column titles, offering quick insights into the displayed data.

Detail <a name="consolemg_detail"></a>
By clicking on an Active MachineGroup, the user accesses its details, namely its [parameters](https://docs.inductiva.ai/en/latest/api_reference/computational_resources/machinegroup_class.html#machinegroup-class) and detailed information about each machine in that MachineGroup.

There’s also a quick-access button to terminate the MachineGroup, so the user can take immediate action based on the information on screen.

Terminated <a name="consolemg_terminated"></a>
This new screen provides all the key details about MachineGroups that are no longer active, ensuring a clear understanding of past activity:
- Timeframe of activity - Duration, Compute time (running tasks) and Idle time - and reason for termination (user request or automatic termination due to inactivity)
- Cost - initially calculated by Inductiva to provide a near-real-time estimate, and confirmed by the cloud provider shortly after.

Costs visibility and management <a name="costsmgmt"></a>
User’s costs breakdown <a name="costsmgmt_userbreakdown"></a>
In the newly added Cost Breakdown screen within the [Account](https://console.inductiva.ai/account) section of the Console, users can review their past spending, with a breakdown of monthly Compute and Storage costs.

The option to use Inductiva’s API on local machines (see this release's highlight) enables cost savings, since there are no computation costs associated with local machines or the tasks executed on them.
The simulation input and output files are still safely stored in the user’s cloud bucket while needed, so there are storage costs resulting from simulations run on local machines.

Real-time credit management <a name="costsmgmt_realtimecredit"></a>
The update to near real-time credit tracking ensures greater accuracy and control over your usage. This improvement means that tasks and machine groups will now automatically stop if the user’s credits are depleted, helping to avoid accumulating significant negative balances. Previously, credit validation only occurred when starting a new machine group, but with more frequent checks, resources can be managed more efficiently and stay within the allocated budget.

The user’s current credits, tier and quotas are always accessible in [Account](https://console.inductiva.ai/account) or by invoking `inductiva user info`.

More Simulation options <a name="moresimulators"></a>
New OpenFOAM versions <a name="moresimulators_openfoam"></a>
The latest versions of [OpenFOAM ESI](https://inductiva.ai/simulators/openfoam-esi) (v2412) and [OpenFOAM Foundation](https://inductiva.ai/simulators/openfoam-foundation) (v12) have been integrated into the API, joining the other available versions: v2406 and v2206 of ESI, and v8 of Foundation.

Improved usability of DualSPHysics <a name="moresimulators_dualsphysics"></a>
The usability of the [DualSPHysics](https://tutorials.inductiva.ai/simulators/DualSPHysics.html) integration has been improved by enabling users to directly run a shell script with the specific commands of this simulator. This update replaces the old command list, allowing users to seamlessly run their simulation scripts in the Cloud exactly as they would locally, without any modifications.
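As a rough sketch of the new workflow (the `shell_script` parameter name and the `run.sh` file name are assumptions for this example; the linked DualSPHysics page has the authoritative usage):

```python
import inductiva

# Hypothetical sketch: run the same shell script used locally, unmodified.
machine_group = inductiva.resources.MachineGroup(machine_type="c2-standard-16")
machine_group.start()

dualsphysics = inductiva.simulators.DualSPHysics()
task = dualsphysics.run(
    input_dir="dualsphysics-case/",  # case folder containing run.sh
    shell_script="run.sh",           # assumed parameter name for this sketch
    on=machine_group,
)
task.wait()
machine_group.terminate()
```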

Task’s life cycle in the python client <a name="moresimulators_commands"></a>
To close the loop on the [Task’s life cycle visibility](https://github.com/inductiva/inductiva/releases#taskslife), the output of `inductiva tasks info <task_id>` now includes the list of commands run during the simulation - see the split in the `In Progress` phase:

```
Timeline:
  Waiting for Input      at 22/01, 20:23:11    11.464 s
  In Queue               at 22/01, 20:23:23    1002.323 s
  Preparing to Compute   at 22/01, 20:40:05    1.504 s
  In Progress            at 22/01, 20:40:07    985.312 s
    ├> 1.05 s      dd if=/dev/stdin of=machinefile
    └> 984.037 s   swashrun -input DAKHLA.sws -mpi 192
  Finalizing             at 22/01, 20:56:32    4.42 s
  Success                at 22/01, 20:56:36
```

0.12.0

List of changes in the v0.12 release

Table of contents
- [Highlight: Benchmarks](#Benchmarks)
- [Use task’s outputs as inputs to other tasks](#output_input)

Benchmarks <a name="Benchmarks"></a>
Inductiva’s benchmarking tool allows the user to easily execute a batch of runs specifically aiming to measure and compare the performance of different configurations, such as machine types or simulation settings.
Check out Inductiva’s [Quick Recipe to run a benchmark](https://tutorials.inductiva.ai/how_to/run-benchmarks.html#).
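Purely as an illustrative sketch (the `inductiva.benchmarks.Benchmark` class and its `set_default`/`add_run`/`run` methods are assumed here; the linked recipe is the authoritative reference), a benchmark comparing the same simulation on two machine types could look roughly like this:

```python
import inductiva

# Illustrative sketch only: class and method names are assumptions,
# see the linked Quick Recipe for the real API.
mg_small = inductiva.resources.MachineGroup(machine_type="c2-standard-16")
mg_large = inductiva.resources.MachineGroup(machine_type="c2-standard-60")

benchmark = inductiva.benchmarks.Benchmark(name="gromacs-machine-comparison")
benchmark.set_default(
    simulator=inductiva.simulators.GROMACS(),
    input_dir="gromacs-input-example/",
    commands=["gmx mdrun -s topol.tpr"],
)
benchmark.add_run(on=mg_small)
benchmark.add_run(on=mg_large)
benchmark.run()
```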

Use task’s outputs as inputs to other tasks <a name="output_input"></a>
After the recent release of the ability to [reuse uploaded input files in multiple Tasks](https://tutorials.inductiva.ai/how_to/reuse-files.html#), users save even more time and effort with the capacity to [use Tasks’ outputs as inputs to other Tasks](https://tutorials.inductiva.ai/how_to/reuse-files.html#reuse-task-outputs-in-simulations) without having to download and re-upload them.
This way, a Task’s input can simply point to a previous Task’s output folder, saving the user the time and trouble of downloading the output from their bucket to their computer and uploading it again as part of the next Task’s input.
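A rough sketch of the idea, reusing the `remote_assets` parameter described later in these notes; referencing a previous Task's output by its task ID is an assumption for this example - the linked how-to has the exact form:

```python
import inductiva

# Illustrative sketch only: feed a previous task's output into a new task
# without downloading and re-uploading it. Pointing at the output via the
# task ID is an assumption; see the linked how-to for the exact reference.
previous_task_id = "<id-of-the-task-whose-output-you-want-to-reuse>"

machine = inductiva.resources.MachineGroup(machine_type="c2-standard-16")
machine.start()

gromacs = inductiva.simulators.GROMACS()
followup_task = gromacs.run(
    input_dir="empty_folder",
    commands=["gmx mdrun -s topol.tpr"],
    on=machine,
    remote_assets=[previous_task_id],
)
```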

0.11.2

List of changes in the v0.11.2 release

Table of contents
- [Run any command in OpenFOAM](#Allrun)
- [Detailed information about the machines in a machine group](#machinesingroup)

Run any command in OpenFOAM <a name="Allrun"></a>
Users can run arbitrary commands in the OpenFOAM simulator via the `Allrun` file, in both the [ESI](https://inductiva.ai/simulators/openfoam-esi) and [Foundation](https://inductiva.ai/simulators/openfoam-foundation) distributions.

Example using the `commands` argument to pass a list of commands:

```python
import inductiva

# List of arbitrary commands to run inside the OpenFOAM container.
# "file1" and "location" are placeholders for the user's own files.
my_commands = [
    "cp file1 location",
    "blockMesh",
    # ... any other OpenFOAM commands
]

openfoam = inductiva.simulators.OpenFOAM(distribution="foundation")
task = openfoam.run(input_dir=input_dir,
                    commands=my_commands,
                    n_vcpus=4,
                    on=machine_group)
```



Example using the `bash_script` argument to pass a bash script titled "Allrun":

```python
openfoam = inductiva.simulators.OpenFOAM(distribution="foundation")
task = openfoam.run(input_dir=input_dir,
                    bash_script="./Allrun",
                    on=machine_group)
```


Detailed information about the machines in a machine group <a name="machinesingroup"></a>
The [CLI](https://docs.inductiva.ai/en/latest/cli/cli-overview.html) command `inductiva resources info <MachineGroup_name>` lists the machines in the [Machine Group](https://docs.inductiva.ai/en/latest/api_reference/computational_resources/machinegroup_class.html) and provides detailed information about each:
- Host name,
- Timestamp when it was started,
- Current status,
- Timestamp of the most recent "heartbeat" (when the machine was last detected as active),
- The task ID that the machine is running, if any.

0.11.0

List of changes in the v0.11.0 release

Table of contents
- [Even more visibility over the Tasks life cycle](#taskcycle2)
- [Two options for SWAN simulator](#2swan)
- [Allocated RAM of registered machine group](#mg_ram)
- [Improve INDUCTIVA_API_KEY user experience](#apikey)

Even more visibility over the Tasks life cycle <a name="taskcycle2"></a>
Building up on the improvements introduced in [v0.10](https://github.com/inductiva/inductiva/releases#taskslife) to the visibility of the Tasks life cycle, the Task’s timeline now encompasses even more detail:
- Self-explanatory, more user-friendly titles for the Task’s phases, so the user can tell what's happening with the Task at a glance from its status and timeline;
- After the Task’s computation starts, the user is able to see in real time which commands are being run by the simulator;
- The progress of the Task is accurately presented in real time, showing each phase’s duration and start timestamp.

This information is available both in the Console and via the `inductiva tasks info` command in the CLI.
The complete life cycle of a Task is detailed in the [community docs](https://docs.inductiva.ai/en/latest/intro_to_api/tasks.html#task-lifecycle).

Two options for SWAN simulator <a name="2swan"></a>
The SWAN simulator has two options: `swan.exe` and `swanrun`.
By default, Inductiva’s API runs `swanrun`, which generates the `.rpt` and `.erf` files when the simulator fails. The user can also run `swan.exe` - see how in the [SWAN tutorial](https://tutorials.inductiva.ai/simulators/SWAN.html).

Allocated RAM of registered machine group <a name="mg_ram"></a>
When registering a machine group, the allocated RAM is also printed in the list of attributes.

Improve INDUCTIVA_API_KEY user experience <a name="apikey"></a>
Improving the onboarding experience for Windows and Linux users, the API key is now stored in a local file, making setup easier without manually configuring environment variables.
Users can now run `inductiva auth login` to authenticate and store their API key in a local file within the Inductiva folder; `inductiva auth logout` removes the local file and logs the user out.
To test these commands, it is recommended not to have `INDUCTIVA_API_KEY` set, otherwise the `inductiva` client will always use the environment variable.
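For example (only the command names come from the note above; the exact prompts and messages may differ):

```
$ inductiva auth login
# asks for the API key and stores it in a local file inside the Inductiva folder

$ inductiva auth logout
# removes the local file and logs the user out
```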

0.10.1

List of changes in the v0.10.1 release

Table of contents
- [Upload input files once, reuse in multiple Tasks](#inputreuse)
- [List the available computational resources](#listmachines)

Upload input files once, reuse in multiple Tasks <a name="inputreuse"></a>
It’s common that users need to set large files as inputs to multiple simulations (for example, the bathymetry of a coastal area, or the shape of an object they are studying under CFD), requiring them to upload these large files repeatedly.
As a way to save time and costs, a new feature allows users to upload some input files once and reuse them in multiple Tasks:
1. Upload the files to a remote location;
2. Run the task using the stored input files (these can also be combined with files uploaded at submission time);
3. Maintenance.

**1. Upload the files to a remote location**
Two new methods are available, to upload from a local folder or a remote one.

For local sources, the user must use the `upload` method, whose first parameter `local_path` accepts a directory or a file (.zip or other):

```python
import inductiva

inductiva.storage.upload(
    local_path="gromacs-input-example/",
    remote_dir="my_remote_directory",
)
```


If the user wishes to upload files that are stored in a remote location, they must use the `upload_from_url` method, whose first parameter `url` accepts only a single file (.zip or other):

```python
inductiva.storage.upload_from_url(
    url="https://storage.googleapis.com/inductiva-api-demo-files/test_assets/files.zip",
    remote_dir="my_remote_directory",
)
```

In both cases, `remote_dir` is the remote directory where the files will be stored.

**2. Run the task using the stored input files**
The `simulator.run` method includes an extra parameter pointing to the remote location where the input files were uploaded. Here’s an example for the GROMACS simulator:

```python
task = gromacs.run(
    input_dir="empty_folder",
    commands=commands,
    on=machine,
    remote_assets=["my_remote_directory"],
)
```

The new `remote_assets` parameter sets the remote storage location, which must be one of the remote locations that the user previously set as `remote_dir` in the first step.
The `input_dir` parameter works the same way as before, meaning that if no remote locations are provided, the input files come from the local directory in `input_dir`.
If both parameters are provided and file names overlap between locations, the `input_dir` files take priority.
Only one of these parameters is required.

More than one source can be provided, since this parameter is a list, and its entries can be directories or files:

```python
task = gromacs.run(
    input_dir="empty_folder",
    commands=commands,
    on=machine,
    remote_assets=["gromacs_bucket/file1.txt", "gromacs_bucket/file2.txt"],
)
```



**3. Maintenance**
Finally, here’s how to inspect the stored remote files and clean up:
- List the remote files:
  - In the CLI: `inductiva storage ls`
  - In Python: `inductiva.storage.listdir()`
- Remove a full remote directory: `inductiva.storage.remove_workspace(remote_dir="gromacs_bucket")`
- Remove a single file from a remote directory: `inductiva.storage.remove_workspace(remote_dir="gromacs_bucket/file1.txt")`


List the available computational resources <a name="listmachines"></a>
Users were already able to list the available computational resources in Inductiva’s cloud, but new Python methods were created to make it easier to browse the hundreds of options and help users select the most suitable machine for their tasks, based on attributes (vCPUs, memory) and price per hour.

The [Console](https://console.inductiva.ai/machine-groups) offers a user interface to search the available machines by filtering the list by:
- Family - Cloud provider classification based on hardware configurations
- Type - [RAM-to-vCPU ratio](https://docs.inductiva.ai/intro_to_api/computational-infrastructure.html#available-computational-resources)
- Spot - cheaper options that in return may be [automatically shut down by the cloud provider](https://tutorials.inductiva.ai/generating-synthetic-data/synthetic-data-generation-6.html#spot-instances-one-more-cost-saving-strategy)
- Price per hour for Spot and non-Spot
- Range of number of vCPUs
- Range of memory (GB)

Here’s an example in Python:

```python
import inductiva

machines = inductiva.resources.machine_types.get_available_machine_types(
    provider="GCP",
    machine_families=["c2", "c2d"],
    machine_configs=["standard", "highcpu"],
    vcpus_range=(16, 32),
    memory_range=(50, 100),
    price_range=(0, 0.8),
    spot=False)

for m in machines:
    print(m.machine_type)
    print(m.price)
```
