Inductiva

Latest version: v0.13.1


0.8.1

List of changes in the v0.8.1 release 🚀🚀

**New simulator NWChem** [268](https://github.com/inductiva/tasks/issues/268)
We added one more simulator to the API: [NWChem](https://docs.inductiva.ai/en/latest/simulators/NWChem.html), an ab initio computational chemistry software package with quantum chemical and molecular dynamics functionality.
This is an important addition because it strengthens the API's capabilities for users working in biochemistry, drug design, materials design and other important R&D areas, who until this version could only use GROMACS via the API.
Here is an example of how to run NWChem:


```python
import inductiva

input_dir = inductiva.utils.download_from_url(
    "https://storage.googleapis.com/inductiva-api-demo-files/"
    "nwchem-input-example.zip", unzip=True)

nwchem = inductiva.simulators.NWChem()

task = nwchem.run(input_dir=input_dir,
                  sim_config_filename="h2o_sp_scf.nw",
                  n_vcpus=1)

task.wait()
task.download_outputs()
```


As usual, we also provide an [NWChem Docker image](https://hub.docker.com/layers/inductiva/kutu/nwchem_v7.2.2/images/sha256-c184341df9d2b28109277342d45298697dad376c9a43b2231498df7389f567b4?context=explore) so that users can run this simulation package easily on their local machines.

All simulation images are available in [Kutu](https://hub.docker.com/r/inductiva/kutu/tags), and we have recently launched a [Kutu webpage](https://inductiva.ai/kutu) on Inductiva’s website to showcase this resource, making it even easier for our community to find all of the simulation packages and their respective distributions.

**New parameter resubmit_on_preemption** [267](https://github.com/inductiva/tasks/issues/267), [#272](https://github.com/inductiva/tasks/issues/272)
A new parameter, resubmit_on_preemption, has been added to the simulator.run() method. It specifies whether a task should be resubmitted when the spot instance running it is preempted before the task completes.
Spot instances are a type of virtual machine offered at significantly reduced cost compared to regular instances, with the downside that they can be terminated at any time. This makes spot instances ideal for budget-conscious users when cost savings outweigh the risk of unexpected termination, namely when the task is expected to finish in a short timeframe.
So, users whose tier provides the capability of using spot resources can now determine whether their task should be resubmitted automatically when the spot instance where it was running is preempted.
By default, this parameter is set to False, so unless the user explicitly sets it to True, tasks will not be automatically resubmitted after preemption.
Here’s an example of how to set the parameter for NWChem:


```python
task = nwchem.run(..., resubmit_on_preemption=True)  # or False (the default)
```

0.8.0

List of changes in the v0.8 release 🚀🚀

**Tiers and Credits system**
In v0.8 we introduce tiers and a credits system, designed for a more structured and fair usage policy that ensures optimal resource allocation for all users.
The primary goals of this change are to enhance:
- **transparency**, by providing users with clear and detailed insights into their resource usage and associated costs;
- **efficiency**, by allowing users to allocate their credits precisely, optimizing computational resources based on their specific needs;
- **savings**, by enabling users to select appropriate tiers and manage their credits wisely, preventing overuse and reducing unnecessary expenses.

Each user is allocated a specific tier, which comes with an associated number of credits. The tier defines the functionalities accessible to the user. Each tier is associated with quotas that establish limits on the computational resources a user can utilize.

Credits are consumed based on the user's resource usage. This includes factors like computation time and the specific machine groups utilized. For example, running a simulation on a high-memory machine group will consume more credits than on a standard machine group.
Users spend their credits on the resources and functionalities available within their tier. The credits continue to be consumed until they are either exhausted or reach their expiration date (if applicable).
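
As a purely illustrative arithmetic sketch (the rates below are hypothetical and are not Inductiva's actual pricing), credit consumption grows with wall-clock time and with the cost of the machine group used:

```python
# Hypothetical credit rates, for illustration only (not actual Inductiva pricing).
standard_rate = 1.0   # credits per machine-hour on a standard machine group
high_mem_rate = 2.5   # credits per machine-hour on a high-memory machine group

hours = 4  # wall-clock hours of computation

print(hours * standard_rate)  # 4.0 credits consumed on the standard machine group
print(hours * high_mem_rate)  # 10.0 credits consumed on the high-memory machine group
```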

If a user attempts to access resources or functionalities that their tier does not allow, or for which they have insufficient credits, a clear message will be displayed. This notification will inform the user about the restriction and suggest possible actions, such as upgrading their tier or adjusting their resource usage.

By understanding and managing their credits, users can optimize their resource usage, ensuring they get the most value from their allocated credits while staying within their limits.
A summary of the tier system and the associated functionalities and quotas is detailed in our [community documentation](https://docs.inductiva.ai/en/latest/api_reference/tiers_and_quotas.html).


**Check own tier / credits / quotas**
Real-time tracking of credit usage allows users to monitor and optimize their consumption efficiently and avoid unexpected shortfalls. Below we describe the new methods and updated Console features that give users full visibility over their current tier and their remaining credits.
Now that quotas are associated with tiers, the command `inductiva quotas list` will be deprecated in future versions and replaced by `inductiva user info`.

Here’s what the output looks like:

```
$ inductiva user info
Name: <name of the user here>
Email: <user e-mail here>
Username: <username here>

■ Tier: Power-User

■ Credits

Power-User (tier)        0.00
pioneer (campaign)   10000.00
------------------------------------
Total                10000.00

■ Campaigns

NAME      ENROLLMENT DATE    EXPIRY DATE        AVAILABLE CREDITS   INITIAL CREDITS
pioneer   2024-02-06 11:40   2024-07-31 01:00   10000               10000

■ Global User quotas
                                                                 CURRENT USAGE   MAX ALLOWED
Maximum tasks per week                                           0 task          N/A
Maximum number of VCPUs                                          0 vcpu          1000 vcpu
Maximum price per hour across all instances                      0 USD           270 USD
Maximum simultaneous instances                                   0 instance      100 instance
Maximum time a machine group can stay idle before termination    N/A             120 minute

■ Instance User quotas
                                                                                          MAX ALLOWED
Maximum time a machine group can stay up before automatic termination                    48 hour
Maximum time a task can stay running in the default queue before automatic termination   16 hour
Maximum disk size                                                                         2000 GB
Maximum amount of RAM per VCPU                                                            6 GB
```



**[Inductiva API Console](https://console.genesis.inductiva.ai/)**
Starting from v0.8, the Inductiva API Console will become an essential part of the daily experience for our users, offering functionalities similar to those available in the CLI, along with advanced features like performance and cost metrics and insights.
To start out, there are two key sets of information accessible via the Console:
- A summary of your profile information (accessible by clicking the three dots in the top right corner of the screen);
- A set of functionalities exposed via the left-hand side menu, including the possibility of listing all of your current and past tasks.

[225](https://github.com/inductiva/tasks/issues/225)
[[KUTU](https://github.com/inductiva/kutu)] Based on users’ feedback, a few legacy versions of some simulators are being introduced - such as [SWASH](https://docs.inductiva.ai/en/latest/simulators/SWASH.html) (now v9.01A and v10.01) and XBeach (now v1.23 and v1.24) - so that users are able to run simulations that were prepared for older versions of the simulator.

[138](https://github.com/inductiva/tasks/issues/138)
Standardized how a user passes a path to a specific location on their system: only strings are now accepted, deprecating the two other types previously supported (a path object from Python’s `os` package and a path object from the `pathlib` package).
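
For illustration, reusing the NWChem call shown earlier in these notes (the local directory path below is a made-up placeholder):

```python
import inductiva

# Supported: a plain string path.
input_dir = "/home/user/nwchem-input-example"

# Deprecated: passing an os-package style path object or a pathlib.Path,
# e.g. pathlib.Path("/home/user/nwchem-input-example").

nwchem = inductiva.simulators.NWChem()
task = nwchem.run(input_dir=input_dir,
                  sim_config_filename="h2o_sp_scf.nw")
```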

**Improve onboarding for Windows users**
The onboarding instructions for Windows have been enhanced with a clear, step-by-step guide to ensure a smooth first experience with the Inductiva API.

0.7.3

List of changes in the v0.7.3 release 🚀🚀

**Updated Dependency: typing-extensions**
Inductiva API's installation package now supports any version of typing-extensions.
This change ensures greater flexibility and compatibility with various FastAPI versions and other dependencies, enhancing the overall stability and maintainability.
For detailed installation and usage instructions, you’re welcome to revisit the documentation available in the [user’s console](https://console.genesis.inductiva.ai/).

[992](https://github.com/inductiva/inductiva-web-api/issues/992) - **Breaking change**
Input parameter `Region` removed from the [compute/machine_types](https://api-dev.inductiva.ai/docs#/compute/list_available_machine_types) method.
Since this is a breaking change, users are advised to upgrade to the newer version as soon as possible.

[985](https://github.com/inductiva/inductiva-web-api/issues/985)
Attribute `Zone` added to the [MachineType schema](https://api-dev.inductiva.ai/docs#/:~:text=MachineGroupType-,MachineType,-OutputArchiveInfo).

[180](https://github.com/inductiva/tasks/issues/180)
One of the changes [released in v0.7.2](https://github.com/inductiva/inductiva/releases) was to automatically download the standard streams stderr and stdout during task.wait(), even if the task is not completed.
Building on this change, we improved the clarity of the messages printed in the CLI when a task fails, so that users become aware of the resources at their disposal to inspect the reason for the failure.
Here are a couple of examples of what the messages look like:

```
Task m6bcnzqizij38mtbps8u5nedd failed.
Downloading stdout and stderr files to inductiva_output/m6bcnzqizij38mtbps8u5nedd...
Partial download completed to inductiva_output/m6bcnzqizij38mtbps8u5nedd.
Please inspect the stdout.txt and stderr.txt files at: inductiva_output/m6bcnzqizij38mtbps8u5nedd
For more information.
```

```
Task m6bcnzqizij38mtbps8u5nedd failed.
Please inspect the stdout.txt and stderr.txt files at: inductiva_output/m6bcnzqizij38mtbps8u5nedd
For more information.
```


[190](https://github.com/inductiva/tasks/issues/190), [#1009](https://github.com/inductiva/inductiva-web-api/issues/1009)
The machine will now be automatically terminated whenever its task finishes, to [save resources and costs](https://docs.inductiva.ai/en/latest/cli/managing-resources.html).
As a consequence, the PATCH method of the [compute/group](https://api.inductiva.ai/docs#/compute) endpoint is deprecated, since the machine group lifecycle configurations are now set in the class constructor.
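
A hedged sketch of what constructor-based configuration looks like (the `inductiva.resources.MachineGroup` path, the `spot` argument and the `start()` call are taken from other parts of these notes or assumed; lifecycle-specific argument names are not given here, so none are shown):

```python
import inductiva

# Machine group settings, including lifecycle behaviour, are now defined
# when the group is created, instead of being patched afterwards via the
# compute/group endpoint.
machine_group = inductiva.resources.MachineGroup(
    machine_type="c2-standard-4",
    spot=False,
)
machine_group.start()
```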


[201](https://github.com/inductiva/tasks/issues/201)
A few new commands to list the available simulators and their respective versions (in addition to the [documentation](https://docs.inductiva.ai/en/latest/simulators/overview.html)):
- `inductiva simulators list` list of available simulators for production runs;
- `inductiva simulators list --dev` list of available simulators for development runs.

The output looks like:

```
SIMULATOR             VERSIONS
amr-wind              1.4.0
cans                  2.3.4
dualsphysics          5.2.1
fds                   6.8
gromacs               2022.2
openfast              3.5.2
openfoam-esi          2206
openfoam-foundation   8
reef3d                24.02
schism                5.11.0
splishsplash          2.13.0
swan                  41.45
swash                 10.01
xbeach                1.24, 1.23
```


Also, there are new ways to run a specific version of a simulator, or to execute a development run of it (see the sketch after the table).
Exemplifying for the GROMACS simulator, whose current and only available version is 2022.2:
| Instruction | Result |
| :----- | :------ |
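
A minimal sketch of what this may look like (the `version` and `use_dev` constructor arguments are assumptions rather than confirmed signatures; please check the simulators documentation for the exact interface):

```python
import inductiva

# Run the production image of a specific simulator version (assumed kwarg).
gromacs = inductiva.simulators.GROMACS(version="2022.2")

# Run the development image of that version instead (assumed kwarg).
gromacs_dev = inductiva.simulators.GROMACS(version="2022.2", use_dev=True)
```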

0.7.2

Highlights of the v0.7.2 release 🚀🚀

In this version, we prioritize issues that were brought to our attention by our users. More specifically, we addressed various issues related with usability and performance, and we improved the way we inform users about how the API itself works as they run their simulations.

Some of the main improvements include:

- More explicit information about errors or problems encountered during the simulations, such as alerting the user that a simulation failed due to lack of disk space. In this version, many more potential errors and problems are reported back to the user because we improved the way we communicate errors from the backend to the client ([pr886](https://github.com/inductiva/inductiva-web-api/pull/886)). This dramatically improves usability because it gives actionable clues on how to correct problems.
- A mechanism for grouping tasks under a single addressable concept called [Projects](https://tutorials.inductiva.ai/intro_to_api/projects.html). This is especially useful when the user needs to run several simulations under a single context (“a project”), such as when exploring variations of a certain simulation use-case, or when needing to generate datasets of synthetic data.
- We are also now making available ways for users to get summary information about their tasks. The new print_summary() method provides users with a number of stats related to the different stages of the simulations, as well as to the data produced.
- The user now has more direct information about quotas and pricing. This helps users better manage the amount of resources they can use, and also understand how they can save costs by properly choosing specific configurations.


List of changes

[188](https://github.com/inductiva/tasks/issues/188)
Improved messaging: whenever the user tries to access a task that they don’t own or that doesn’t exist, namely when the task ID was misspelled, the user is alerted of the issue by a message automatically printed on the CLI.

[199](https://github.com/inductiva/tasks/issues/199)
Standard streams (stderr for errors and stdout for output) are automatically downloaded during the execution of task.wait(), so that they can be accessed even if the task is not completed.
Although task.download_outputs() is maintained as an available method to download the output of a finished task, this extra measure guarantees that users will have access to errors and output even in cases where something goes wrong with the task execution.
A message is printed to inform the user when the download is finished.
The command line logs provide the download location.
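
For illustration, continuing the NWChem example from the v0.8.1 notes above (the output directory mentioned in the comment mirrors the CLI messages quoted earlier in these notes):

```python
task = nwchem.run(input_dir=input_dir,
                  sim_config_filename="h2o_sp_scf.nw")

task.wait()  # stdout.txt and stderr.txt are downloaded automatically,
             # even if the task does not complete successfully.
# The CLI logs report the download location, e.g.
# inductiva_output/<task_id>/stdout.txt and stderr.txt.

task.download_outputs()  # still available for the full outputs of finished tasks
```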

[200](https://github.com/inductiva/tasks/issues/200)
New method task.get_output_info() that can be called by the user to retrieve information about the task’s output, even before downloading it (see the sketch after this list).
This method returns one object with four attributes:
- output_info.total_size_bytes - total output size, before compression;
- output_info.total_compressed_size_bytes - total output size, after compression;
- output_info.n_files - total number of files in the output;
- output_info.files - same attributes per file in the output.
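
A minimal usage sketch based on the attributes listed above (the per-file attribute names under `output_info.files` are not detailed here, so the files are simply printed):

```python
output_info = task.get_output_info()

print(output_info.total_size_bytes)             # total size before compression
print(output_info.total_compressed_size_bytes)  # total size after compression
print(output_info.n_files)                      # number of files in the output
print(output_info.files)                        # per-file information
```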

[205](https://github.com/inductiva/tasks/issues/205) [#206](https://github.com/inductiva/tasks/issues/206)
[Terminating all](https://docs.inductiva.ai/en/latest/how_to/manage_computational_resources.html#terminate-the-active-computational-resources) active computational resources is now much faster.
After the user confirms the instruction, it takes only a few seconds to terminate all of their active resources, avoiding idle resources and guaranteeing effective user quota management.
A message is printed as soon as the instruction is finished.

[203](https://github.com/inductiva/tasks/issues/203)
When the user submits a task, information is provided about the number of tasks ahead of theirs. This information is updated regularly.

```
Task sokraaaw70txt5jc65it8h7kw successfully queued and waiting to be picked-up for execution...
Number of tasks ahead of task sokraaaw70txt5jc65it8h7kw in queue: 17
```


[196](https://github.com/inductiva/tasks/issues/196)
New options to manage remote storage, allowing the data to be deleted remotely after download (see the sketch after this list):
- New method task.remove_remote_files() that removes the output.
- New optional parameter ‘rm_remote_files’ in task.download_outputs(). Default is False, so the parameter must be explicitly set to True when the user wants to delete the data after downloading.
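
A short sketch of the two options described above:

```python
# Option 1: download the outputs and delete the remote copy in a single call.
task.download_outputs(rm_remote_files=True)

# Option 2: download first, then remove the remote files explicitly.
task.download_outputs()
task.remove_remote_files()
```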

[192](https://github.com/inductiva/tasks/issues/192), [#198](https://github.com/inductiva/tasks/issues/198), [#197](https://github.com/inductiva/tasks/issues/197)
New method task.print_summary(), which shows the time spent at each stage of the process, including all auxiliary steps, such as moving data between your local computer, your personal remote storage space and the executor machine.

```
Task status: success
Wall clock time: 14.05 s
Time breakdown:
  Input upload: 0.38 s
  Time in queue: 0.00 s
  Container image download: 1.08 s
  Input download: 0.06 s
  Input decompression: 0.02 s
  Computation: 12.16 s
  Output compression: 0.10 s
  Output upload: 0.14 s
Data:
  Size of zipped output: 878.53 KB
  Size of unzipped output: 2.05 MB
  Number of output files: 33
```

Check out an [example](https://docs.staging.inductiva.ai/simulators/XBeach.html#a-more-advanced-example) of how to take advantage of this method in our tutorials documentation.
Here’s the cheat sheet:
- Wall clock time: sum of all times;
- Input upload: time to upload the input files;
- Time in queue: time waiting for task to be picked up;
- Input download: time to download from the remote storage (bucket) to the machine where the task was run;
- Computation: time to actually run the task;
- Output upload: time to upload from the machine where the task was run to the remote storage.

[193](https://github.com/inductiva/tasks/issues/193)
Information about the available [user quota](https://docs.inductiva.ai/en/latest/api_reference/user_quotas.html#) is displayed more often:

- ‘[inductiva resources list](https://docs.inductiva.ai/en/latest/cli/managing-resources.html#list-active-resources)’ overview now also provides the maximum cost ($/hour) of each of the active computational resources.
- When creating or starting a resource:
  - a summary of the quotas being used is shown in a table format;
  - a savings suggestion is provided, when applicable (look for the ‘>>’ after the estimated cloud cost).

```
Registering MachineGroup configurations:
> Name: api-efmv0es2vnwkyokm5ne1t2nmy
> Machine Type: c2-standard-4
> Data disk size: 10 GB
> Number of machines: 1
> Spot: False
> Estimated cloud cost of machine group: 0.230 $/h
>> The same machine group with spot instances would cost 0.182 $/h less per machine (79.43% savings). Specify `spot=True` in the constructor to use spot machines.
Starting MachineGroup(name="api-efmv0es2vnwkyokm5ne1t2nmy"). This may take a few minutes.
Note that stopping this local process will not interrupt the creation of the machine group. Please wait...
Machine Group api-efmv0es2vnwkyokm5ne1t2nmy with c2-standard-4 machines successfully started in 0:00:24.
The machine group is using the following quotas:

NAME                 USED BY RESOURCE   NEW TOTAL USAGE   MAX ALLOWED
cost_per_hour        0.22968            0.22968           270
total_num_machines   1                  1                 100
total_num_vcpus      4                  4                 1500
```


- When terminating the resource, a summary of the quotas being released is shown.

```
Successfully requested termination of MachineGroup(name="api-efmv0es2vnwkyokm5ne1t2nmy").
Termination of the machine group freed the following quotas:

NAME                 FREED BY RESOURCE   NEW TOTAL USAGE   MAX ALLOWED
cost_per_hour        0.22968             0                 270
total_num_machines   1                   0                 100
total_num_vcpus      4                   0                 1500
```


[194](https://github.com/inductiva/tasks/issues/194), [#195](https://github.com/inductiva/tasks/issues/195)
Additional optional sub-commands to access logs.
We created several sub-commands to make it easy to access and get the most out of the logs being generated.
| Instruction | Print to CLI the logs |
| :----- | :------ |
| inductiva logs task_id | of the task identified by task_id |
| inductiva logs | of the last submitted task |
| inductiva logs submitted | of the last submitted task |
| inductiva logs submitted-1 | of the 2nd to last submitted task |
| inductiva logs submitted-n | of the (n+1)th to last submitted task |
| inductiva logs started | of the last started task |
| inductiva logs task_id stdout | Print to CLI only the outputs of the task identified by task_id |
| inductiva logs task_id stderr | Print to CLI only the errors of the task identified by task_id |
| inductiva logs task_id stdout stderr | Print to CLI the outputs and errors of the task identified by task_id. <br> By default, errors are printed in red to be distinguishable from other prints. Use no-color to disable the colorized output. |
| Combinations of stdout / stderr with submitted / started | Print to CLI the outputs and / or errors of the task identified by the instruction, as described above. |

0.6.0

**Highlights of the v0.6 release 🚀🚀:**

- **Ability to submit and download very large files**
It is now possible to call simulators and pass very large input files (2 GB+), for example, when you need to pass a very dense mesh to OpenFOAM.

- **Optimized Logging Management**
The new default behavior of the API is to not collect remote stdout logs in real time, making sure the performance of simulators is preserved. When needed, such as during development, users can always optionally turn on streaming of remote logging. In any case, nothing is lost: all logging information is still available for downloading, but only after the simulation finishes.

- **Smarter Scaling Up/Down Policy for Elastic Machine Groups**
We improved the resource management policy of our Elastic Machine Groups to make use of information about the number of tasks waiting on queues as triggers for scaling up and down operations.

- **CaNS: one additional CFD simulator is now available**
We added support for [CaNS](https://docs.inductiva.ai/en/latest/simulators/CaNS.html) (Canonical Navier-Stokes), a massively-parallel numerical simulator of fluid flows.

- **Quotas -- know your limits**
We made it easy for users to know how much of their quota they are using and how much is left. When using Inductiva’s Command Line Interface, you can simply issue `inductiva quotas list` to get a detailed list of all the quotas together with their current usage. You can also programmatically obtain quota information from your Python script using `inductiva.users.get_quotas()` (see the sketch below).
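
A short sketch of the programmatic route mentioned above (the structure of the returned quota information is not documented here, so it is simply printed):

```python
import inductiva

# Programmatic equivalent of `inductiva quotas list`.
quotas = inductiva.users.get_quotas()
print(quotas)
```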

0.5.0

**Highlights** of the v0.5 release 🚀🚀:

**Simulators**: We have two additional simulators for coastal dynamics and marine sciences:

- [SWAN](https://docs.inductiva.ai/en/latest/simulators/SWAN.html)

- [SCHISM](https://docs.inductiva.ai/en/latest/simulators/SCHISM.html)

**Templating**: Improved the usability of the templating mechanism, making it much easier to build custom scenarios on top of the generic simulation capabilities provided by the API.

**Command Line Interface**: Ability to download output files directly from the CLI for specific tasks. Either all files or just a subset can be downloaded, for one or more tasks at a time. This feature greatly simplifies the management of simulation data when running several [simulations in parallel](https://docs.inductiva.ai/en/latest/cli/tracking-tasks.html).

**Up-to-date GCP pricing information**: the API provides daily-updated information about the prices of all VM instances we make available from GCP. This brings complete clarity to the [costs](https://docs.inductiva.ai/en/latest/cli/managing-resources.html#estimate-costs) involved in running a VM via the API.

**Metrics and Benchmarking**: We did extensive consolidation work on our backend, especially in terms of logging and analytics, to allow us to compile performance metrics and produce computational benchmarks on behalf of users (which will be available soon).

**Documentation**: New [docs.inductiva.ai](https://docs.inductiva.ai/en/latest/) documentation subdomain.
