Iterative Launches Open Source Tool, the First to Train Machine Learning Models on Any Cloud Using HashiCorp’s Terraform

The Terraform Provider Iterative (TPI) simplifies training on any cloud and saves significant time and money in maintaining and configuring compute resources.

Iterative, the MLOps company dedicated to streamlining the workflow of data scientists and machine learning (ML) engineers, today announced a new open source compute orchestration tool using Terraform, a solution by HashiCorp, Inc., the leader in multi-cloud infrastructure automation software.

Terraform Provider Iterative (TPI) is the first product built on HashiCorp’s Terraform to simplify ML training on any cloud, helping infrastructure and ML teams save significant time and money on maintaining and configuring their training resources.

Built on Terraform by HashiCorp, an open source infrastructure-as-code tool that provides a consistent CLI workflow for managing hundreds of cloud services, TPI lets data scientists deploy workloads without having to configure the underlying infrastructure themselves.

Data scientists often need substantial computational resources when training ML models. This may include expensive GPU instances that must be provisioned for a training experiment and then de-provisioned to control costs. Terraform helps teams specify and manage compute resources; TPI complements Terraform with additional functionality customized for machine learning use cases:

  • Just-in-time compute management – TPI automatically provisions compute resources and de-provisions them once an experiment finishes running, helping to reduce costs by up to 90%.
  • Automated spot instance recovery – ML teams can use spot instances to train experiments without worrying about losing all their progress if a spot instance terminates. TPI automatically migrates training jobs to a new spot instance when the existing instance terminates so that the workload can pick up where it left off.
  • Consistent tooling for both data scientists and DevOps engineers – TPI gives data science and software development teams a shared language and toolchain. This simplifies compute management and helps deliver ML models into production faster.
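The behavior described above is driven from an ordinary Terraform configuration. The following is an illustrative sketch based on TPI's documented `iterative_task` resource; the machine size, spot setting, paths, and training script shown here are hypothetical, not values from this announcement:

```hcl
terraform {
  required_providers {
    iterative = {
      source = "iterative/iterative"
    }
  }
}

provider "iterative" {}

resource "iterative_task" "train" {
  cloud   = "aws"    # hypothetical target; other documented values include gcp, az, k8s
  machine = "m+t4"   # hypothetical generic size: medium instance with an NVIDIA T4 GPU
  spot    = 0        # request a spot instance at the current price; TPI recovers interrupted jobs

  storage {
    workdir = "."        # local directory uploaded to the instance before the run
    output  = "results"  # directory downloaded back once the task finishes
  }

  # Hypothetical training script; TPI runs it on the provisioned machine
  # and de-provisions the instance when it exits.
  script = <<-END
    #!/bin/bash
    pip install -r requirements.txt
    python train.py
  END
}
```

Because teardown happens automatically when the script exits, idle GPU instances are not left running, which is where the cost savings described above come from.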

With TPI, data scientists only need to configure the resources they need once and are able to deploy anywhere and everywhere in minutes. Once it is configured as part of an ML model experiment pipeline, users can deploy on AWS, GCP, Azure, on-prem, or with Kubernetes.
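In this sketch, retargeting the same experiment at a different cloud amounts to changing one field of the task definition (field names per TPI's `iterative_task` resource; values hypothetical):

```hcl
resource "iterative_task" "train" {
  cloud   = "gcp"    # was "aws"; "az" and "k8s" are among the other targets
  machine = "m+t4"   # the generic size is mapped to an equivalent machine type on the new cloud
  # ...rest of the task block is unchanged
}
```

The configuration is then applied with the standard Terraform workflow (`terraform init`, `terraform apply`), so data scientists and DevOps engineers use the same commands regardless of where the workload runs.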

“We chose Terraform as the de facto standard for defining the infrastructure-as-code approach,” said Dmitry Petrov, co-founder and CEO of Iterative. “TPI extends Terraform to fit with machine learning workloads and use cases. It can handle spot instance recovery and lets ML jobs continue running on another instance when one is terminated.”

Business Wire

For more than 50 years, Business Wire has been the global leader in press release distribution and regulatory disclosure.
