The bridge between ML and Ops

An open-source MLOps + LLMOps framework that seamlessly integrates existing infrastructure and tools
Diagram: ZenML MLOps framework bridging ML and Ops, from local development to cloud production with diverse tools and resources.

Trusted by 1,000s of top companies to standardize their MLOps workflows

Airbus Defence & Space, AXA, Bosch, Continental, Delivery Hero, Enel, Goodyear, IKEA, Leroy Merlin, Mercado Libre, Rivian, Telefonica, Walmart
Adeo, DeepL
Devoteam, Frontiers, Mann + Hummel, NielsenIQ, Playtika, Wisetech Global, Aisbach, Aisera, ALKi, Altenar, Brevo, Digital Diagnostics, EarthDaily Agro, Eikon, Geisinger, Hemato, Infoplaza, Instabase, IT4IPM, Multitel Innovation Centre, Riverbank, Standard Bots, SymphonyAI, Two, Visia, Wayflyer
Speed

Iterate at warp speed

Local to cloud seamlessly. Jupyter to production pipelines in minutes. Smart caching accelerates iterations everywhere. Rapidly experiment with ML and GenAI models.
Learn More
Observability

Auto-track everything

Automatic logging of code, data, and LLM prompts. Version control for ML and GenAI workflows. Focus on innovation, not bookkeeping.
Learn More
Track all your metadata, data, and model versions with ZenML out of the box
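The "focus on innovation, not bookkeeping" claim boils down to a decorator pattern: tracking lives in a wrapper, so step bodies contain no logging code at all. A hedged sketch of that pattern in plain Python (`tracked`, `run_log`, and `train` are hypothetical names for illustration, not the ZenML API):

```python
import functools
import time

# Hypothetical sketch of automatic tracking: a decorator records each step's
# name, inputs, and a monotonically increasing output version, keeping the
# step body itself free of bookkeeping code.
run_log = []
_versions = {}

def tracked(fn):
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        result = fn(*args, **kwargs)
        _versions[fn.__name__] = _versions.get(fn.__name__, 0) + 1
        run_log.append({
            "step": fn.__name__,
            "inputs": repr((args, kwargs)),
            "version": _versions[fn.__name__],
            "logged_at": time.time(),
        })
        return result
    return wrapper

@tracked
def train(learning_rate):
    return {"model": "dummy", "lr": learning_rate}

train(0.01)
train(0.001)   # the second run is recorded as version 2 automatically
```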
Scale

Limitless Scaling

Scale to major clouds or K8s effortlessly. 50+ MLOps and LLMOps integrations. From small models to large language models, grow seamlessly.
Hugging Face
Flexibility

Backend flexibility, zero lock-in

Switch backends freely. Deploy classical ML or LLMs with equal ease. Adapt your LLMOps stack as needs evolve.
Learn More
Reusability

Shared ML building blocks

Team-wide templates for steps and pipelines. Collective expertise, accelerated development.
Learn More
ZenML allows you to rerun and schedule pipeline runs for machine learning workflows
Optimization

Streamline cloud expenses

Stop overpaying on cloud compute. Clear view of resource usage across ML and GenAI projects.
Learn More
ZenML helps you manage costs for your machine learning workflows
Governance

Built-in compliance & security

Comply with EU AI Act & US AI Executive Order. One-view ML infrastructure oversight. Built-in security best practices.
Learn More
ZenML manages access to all the different parts of your machine learning infrastructure and assets throughout your team

Customer Stories

Learn how teams are using ZenML to save time and simplify their MLOps.
HashiCorp
ZenML offers the capability to build end-to-end ML workflows that seamlessly integrate with various components of the ML stack, such as different providers, data stores, and orchestrators. This enables teams to accelerate their time to market by bridging the gap between data scientists and engineers, while ensuring consistent implementation regardless of the underlying technology.
Harold Giménez
SVP R&D at HashiCorp
ZenML's automatic logging and containerization have transformed our MLOps pipeline. We've drastically reduced environment inconsistencies and can now reproduce any experiment with just a few clicks. It's like having a DevOps engineer built into our ML framework.
Liza Bykhanova
Data Scientist at Competera
Stanford University
"Many, many teams still struggle with managing models, datasets, code, and monitoring as they deploy ML models into production. ZenML provides a solid toolkit for making that easy in the Python ML world"
Chris Manning
Professor of Linguistics and CS at Stanford
Infoplaza
"ZenML has transformed how we manage our GPU resources. The automatic deployment and shutdown of GPU instances have significantly reduced our cloud costs. We're no longer paying for idle GPUs, and our team can focus on model development instead of infrastructure management. It's made our ML operations much more efficient and cost-effective."
Christian Versloot
Data Technologist at Infoplaza
Brevo
"After a benchmark on several solutions, we choose ZenML for its stack flexibility and its incremental process. We started from small local pipelines and gradually created more complex production ones. It was very easy to adopt."
Clément Depraz
Data Scientist at Brevo
Adeo / Leroy Merlin
"ZenML allowed us a fast transition between dev to prod. It’s no longer the big fish eating the small fish – it’s the fast fish eating the slow fish."
François Serra
ML Engineer / ML Ops / ML Solution architect at ADEO Services
MadeWithML
"ZenML allows you to quickly and responsibly go from POC to production ML systems while enabling reproducibility, flexibility, and above all, sanity."
Goku Mohandas
Founder of MadeWithML
IT4IPM
"ZenML's approach to standardization and reusability has been a game-changer for our ML teams. We've significantly reduced development time with shared components, and our cross-team collaboration has never been smoother. The centralized asset management gives us the visibility and control we needed to scale our ML operations confidently."
Maximilian Balluff
Lead AI Engineer at IT4IPM
Koble
"With ZenML, we're no longer tied to a single cloud provider. The flexibility to switch backends between AWS and GCP has been a game-changer for our team."
Dragos Ciupureanu
VP of Engineering at Koble
Salesforce
"ZenML allows orchestrating ML pipelines independent of any infrastructure or tooling choices. ML teams can free their minds of tooling FOMO from the fast-moving MLOps space, with the simple and extensible ZenML interface. No more vendor lock-in, or massive switching costs!"
Richard Socher
Former Chief Scientist Salesforce and Founder of You.com
Wisetech Global
Thanks to ZenML we've set up a pipeline where before we had only Jupyter notebooks. It helped us tremendously with data and model versioning and we really look forward to any future improvements!
Francesco Pudda
Machine Learning Engineer at WiseTech Global
Using ZenML

It's extremely simple to plug in ZenML

Just add Python decorators to your existing code and see the magic happen
Experiment tracker
Automatically track experiments in your experiment tracker
Pythonic
Return pythonic objects and have them versioned automatically
Track metadata
Track model metadata and lineage
Switch orchestrators
Switch easily between local and cloud orchestration
Data dependencies
Define data dependencies and modularize your entire codebase
@step(experiment_tracker="mlflow")
def read_data_from_snowflake(config: pydantic.BaseModel) -> pd.DataFrame:
    df = read_data(client.get_secret("snowflake_credentials"))
    mlflow.log_metric("data_rows", df.shape[0])
    return df

@step(
    settings={"resources": ResourceSettings(memory="240Gb")},
    model=Model(name="my_model", model_registry="mlflow"),
)
def my_trainer(df: pd.DataFrame) -> transformers.AutoModel:
    tokenizer, model = train_model(df)
    return model

@pipeline(
    active_stack="databricks_stack",
    on_failure=on_failure_hook,
)
def my_pipeline():
    df = read_data_from_snowflake()
    my_trainer(df)

my_pipeline()
Remove sensitive information
Remove sensitive information from your code
Infrastructure
Choose resources abstracted from infrastructure
Frameworks
Works for any framework - classical ML or LLMs
Alerts
Easily define alerts for observability
No compliance headaches

Your VPC, your data

ZenML is a metadata layer on top of your existing infrastructure, meaning all data and compute stays on your side.
ZenML only has access to metadata; your data remains in your VPC
ZenML is SOC2 and ISO 27001 Compliant

We Take Security Seriously

ZenML is SOC2 and ISO 27001 compliant, validating our adherence to industry-leading standards for data security, availability, and confidentiality in our ongoing commitment to protecting your ML workflows and data.

Looking to Get Ahead in MLOps & LLMOps?

Subscribe to the ZenML newsletter and receive regular product updates, tutorials, examples, and more.
We care about your data. Read our privacy policy.
Support

Frequently asked questions

Everything you need to know about the product.
What is the difference between ZenML and other machine learning orchestrators?
Unlike other machine learning pipeline frameworks, ZenML does not take an opinion on the orchestration layer. You start writing locally, and then deploy your pipeline on an orchestrator defined in your MLOps stack. ZenML supports many orchestrators natively, and can be easily extended to other orchestrators. Read more about why you might want to write your machine learning pipelines in a platform-agnostic way here.
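The core of platform-agnostic pipelines is separating *what* the pipeline does from *where* it runs. A hedged sketch of that separation in plain Python (the names `LocalOrchestrator` and `PIPELINE` are hypothetical illustrations, not the ZenML API):

```python
# Illustrative sketch: the pipeline is just an ordered list of steps, and the
# orchestrator is a swappable object that decides how those steps execute.

def load():
    return [1, 2, 3]

def train(data):
    return sum(data) / len(data)

PIPELINE = [load, train]

class LocalOrchestrator:
    """Runs steps sequentially in the current process, piping each
    step's output into the next step."""
    def run(self, steps):
        result = None
        for step in steps:
            result = step(result) if result is not None else step()
        return result

# Switching backends means swapping this object (e.g. for one that submits
# each step to Kubernetes), not rewriting any pipeline code:
mean = LocalOrchestrator().run(PIPELINE)  # → 2.0
```

In ZenML the same decoupling is expressed through stacks: the pipeline definition stays fixed while the registered stack determines the execution backend.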
Does ZenML integrate with my MLOps stack (cloud, ML libraries, other tools etc.)?
As long as you're working in Python, you can leverage the entire ecosystem. In terms of machine learning infrastructure, ZenML pipelines can already be deployed on Kubernetes, AWS SageMaker, GCP Vertex AI, Kubeflow, Apache Airflow, and many more. Artifact, secrets, and container storage are also supported for all major cloud providers.
Does ZenML help in GenAI / LLMOps use-cases?
Yes! ZenML is fully compatible with GenAI workflows and is designed to help you productionize LLM applications. There are examples in the ZenML projects repository that showcase our integrations with Llama Index, OpenAI, and LangChain. Check them out here!
How can I build my MLOps/LLMOps platform using ZenML?
The best way is to start simple. The starter and production guides walk you through how to build a minimal cloud MLOps stack. You can then extend it with numerous other components such as experiment trackers, model deployers, model registries, and more!
What is the difference between the open source and Pro product?
ZenML is and always will be open-source at its heart. The core framework is freely available on GitHub and you can run and manage it in-house without using the Pro product. On the other hand, ZenML Pro offers one of the best experiences to use ZenML, and includes a managed version of the OSS product, including some Pro-only features that create the best collaborative experience for many companies that are scaling their ML efforts. You can see a more detailed comparison here.
Still not clear?
Ask us on Slack

Start Your Free Trial Now

No new paradigms - Bring your own tools and infrastructure
No data leaves your servers, we only track metadata
Free trial included - no strings attached, cancel anytime