ZenML

The latest news, opinions and technical guides from ZenML.

Newsletter 18: Real-Time AI, Zero Cold Starts

ZenML launches Pipeline Deployments, a new feature that transforms any ML pipeline or AI agent into a persistent, high-performance HTTP service with no cold starts and full observability.
Read post

From Batch to Agents: Your Top Questions on ZenML's New Pipeline Deployments

ZenML's new pipeline deployments feature lets you use the same pipeline syntax to run both batch ML training jobs and deploy real-time AI agents or inference APIs, with seamless local-to-cloud deployment via a unified deployer stack component.
Read post

Building a Forecasting Platform, Not Just Models

FloraCast is a production-ready template that shows how to build a forecasting platform—config-driven experiments, model versioning/staging, batch inference, and scheduled retrains—with ZenML and Darts.
Read post

Newsletter Edition #15 - Why you don't need an agent (but you might need a workflow)

Discover why production teams are treating agentic workflows as MLOps evolution, not revolution—plus how ZenML achieved 200x performance improvements for enterprise ML operations. Real insights from 130+ MLOps engineers on building reliable AI systems.
Read post

Scaling ZenML: 200x Performance Improvement Through Database and FastAPI Optimizations in v0.83.0

A technical deep dive into the database and FastAPI optimizations that improved ZenML's throughput by 200x.
Read post

Newsletter Edition #13 - ZenML 0.80.0 just dropped

Our monthly roundup: new features in the 0.80.0 release, more new models, and an MCP server for ZenML.
Read post

ZenML 0.80.0: Workspace Hierarchy for Pro, Performance Gains for All

ZenML 0.80.0 transforms tenant structures into workspace/project hierarchies with advanced RBAC for Pro users, while enhancing tagging, resource filtering, and dashboard design. Open-source improvements include Kubernetes security upgrades, SkyPilot integration, and significantly faster CLI operations. Both Pro and OSS users benefit from dramatic performance optimizations, GitLab improvements, and enhanced build tracking.
Read post

Streamlining LLM Fine-Tuning in Production: ZenML + OpenPipe Integration

ZenML's OpenPipe integration tames the complexity of large language model fine-tuning, letting enterprises build tailored AI solutions with ease and reproducibility.
Read post

Newsletter Edition #12 - Why Top Teams Are Replacing AI Agents (and What They're Choosing Instead)

Our monthly roundup: Hamza visits the US, a new course built on ZenML, and why workflows beat autonomous agents!
Read post