
LangGraph is excellent for building stateful, looping agent workflows with memory and tool use. ZenML is the production layer that helps those workflows run reliably across environments with artifact lineage, reproducibility, and deployment pipelines. Use LangGraph for agent logic. Use ZenML to operationalize it like any other critical ML system.
“ZenML has proven to be a critical asset in our machine learning toolbox, and we are excited to continue leveraging its capabilities to drive ADEO's machine learning initiatives to new heights”
François Serra
ML Engineer / ML Ops / ML Solution architect at ADEO Services
Feature-by-feature comparison
| Feature | ZenML | LangGraph |
| --- | --- | --- |
| Workflow Orchestration | ZenML defines ML/AI workflows as pipelines (DAGs) of steps and executes them on configurable stacks, with artifacts and metadata tracked by default. | LangGraph natively orchestrates agent workflows as executable graphs with branching and cycles, optimized for stateful LLM/agent control flow. |
| Integration Flexibility | ZenML's stack architecture lets teams swap orchestrators, artifact stores, experiment trackers, and deployers without rewriting pipeline logic. | LangGraph integrates tightly with the LangChain ecosystem, but doesn't provide an MLOps-style plug-in stack for infrastructure components. |
| Vendor Lock-In | ZenML is cloud-agnostic by design: pipelines run on stacks you control, and you can move between infrastructures by swapping stack components. | LangGraph's core library is open-source (MIT) and runs anywhere Python runs; vendor coupling mainly appears when adopting LangSmith for managed operations. |
| Setup Complexity | ZenML can start locally and scale via stacks, but production setups require configuring orchestrators, artifact stores, and other components. | LangGraph's getting-started path is lightweight (pip install + define a graph), and the CLI can bootstrap local dev servers and Docker-based runs. |
| Learning Curve | ZenML maps closely to familiar ML concepts (steps, pipelines, artifacts), and its abstractions align with production ML workflow structure. | LangGraph's explicit state/graph model is powerful, but teams face a learning curve around state design, reducers, interrupts, and debugging cyclical flows. |
| Scalability | ZenML scales by delegating execution to orchestrators (e.g., Kubernetes-native options) and by externalizing artifacts and metadata into stack components. | LangGraph scales to production workloads when deployed with an agent server architecture (Postgres + Redis) or via LangSmith Deployment. |
| Cost Model | ZenML's open-source core is free, with paid plans priced around pipeline-run volume and team governance features. | LangGraph OSS is free; LangSmith adds transparent per-seat pricing plus usage-based charges for deployments and traces. |
| Collaboration | ZenML Pro adds projects/workspaces, RBAC, and UI control planes for models and artifacts to enable team collaboration on production workflows. | LangGraph collaboration is strongest when paired with LangSmith (workspaces, team features, deployment management); the OSS library alone is single-app code. |
| ML Frameworks | ZenML is designed to wrap ML training/evaluation/inference across frameworks via steps, artifacts, and stack integrations. | LangGraph is framework-agnostic at the code level but optimized for LLM/agent workflows rather than deep integration with ML training frameworks. |
| Monitoring | ZenML tracks pipeline/step metadata and artifacts to support operational debugging, governance, and integration with monitoring tooling. | LangGraph pairs with LangSmith for deep tracing and debugging of agent execution, with visual trace inspection and replay capabilities. |
| Governance | ZenML Pro plans include RBAC/SSO and enterprise features (custom roles, audit logs) aligned with governance requirements. | Governance controls (SSO/RBAC, enterprise support) are delivered through LangSmith Enterprise rather than the LangGraph OSS library. |
| Experiment Tracking | ZenML treats pipeline runs as experiments and supports experiment tracker components to log metrics, parameters, and model metadata. | LangGraph captures execution traces and state trajectories, but is not an experiment tracking system for ML training runs and hyperparameter sweeps. |
| Reproducibility | ZenML automatically tracks artifact lineage (inputs/outputs, producing steps, dependencies) and uses that to enable reproducibility and caching. | LangGraph supports checkpointing and replay for agent state, but doesn't natively version datasets/models/environments the way an MLOps platform does. |
| Auto-Retraining | ZenML is built for scheduled and trigger-based pipelines that can retrain models, validate data, and promote artifacts through environments (see the sketch after this table). | LangGraph is not designed as an auto-retraining or ML CI/CD system; it focuses on orchestrating agent behaviors and stateful execution. |
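To make the auto-retraining row concrete, here is a minimal sketch of a ZenML pipeline attached to a cron schedule. The step and pipeline names and the cron expression are placeholders for this example, and it assumes the orchestrator on your active stack supports scheduled runs.

from zenml import pipeline, step
from zenml.config.schedule import Schedule


@step
def retrain_model() -> None:
    # Placeholder: load fresh data, retrain, evaluate, and promote the model here
    ...


@pipeline
def retraining_pipeline():
    retrain_model()


# Attach a nightly schedule; the active stack's orchestrator must support schedules.
nightly_retraining = retraining_pipeline.with_options(
    schedule=Schedule(cron_expression="0 3 * * *")
)
nightly_retraining()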
Code comparison
ZenML:

from zenml import pipeline, step


@step
def load_data():
    # Load and preprocess your data
    ...
    return train_data, test_data


@step
def train_model(train_data):
    # Train using ANY ML framework
    ...
    return model


@step
def evaluate(model, test_data):
    # Evaluate and log metrics
    ...
    return metrics


@pipeline
def ml_pipeline():
    train, test = load_data()
    model = train_model(train)
    evaluate(model, test)

LangGraph:

from typing import Annotated
from typing_extensions import TypedDict
from langchain.chat_models import init_chat_model
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages


class State(TypedDict):
    # New messages are appended to the running list by the add_messages reducer
    messages: Annotated[list, add_messages]


llm = init_chat_model("anthropic:claude-3-5-sonnet-latest")


def chatbot(state: State) -> dict:
    # Single node: call the model on the accumulated conversation
    return {"messages": [llm.invoke(state["messages"])]}


builder = StateGraph(State)
builder.add_node("chatbot", chatbot)
builder.add_edge(START, "chatbot")
builder.add_edge("chatbot", END)

# MemorySaver checkpoints state in memory, keyed by thread_id
graph = builder.compile(checkpointer=MemorySaver())

out = graph.invoke(
    {"messages": [{"role": "user", "content": "Hello!"}]},
    config={"configurable": {"thread_id": "demo-thread"}},
)
print(out["messages"][-1].content)
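The two snippets are not either/or. As a rough illustration of the point from the intro (LangGraph for agent logic, ZenML to operationalize it), the sketch below wraps a small LangGraph graph inside a ZenML step, so each invocation produces tracked, versioned artifacts like any other pipeline run. The run_agent and agent_pipeline names are made up for this example, and it assumes both libraries plus Anthropic credentials are available.

from langchain.chat_models import init_chat_model
from langgraph.graph import StateGraph, START, END, MessagesState
from zenml import pipeline, step


@step
def run_agent(prompt: str) -> str:
    # Build the LangGraph agent and invoke it once; ZenML stores the
    # returned string as a versioned artifact with lineage back to this step.
    llm = init_chat_model("anthropic:claude-3-5-sonnet-latest")
    builder = StateGraph(MessagesState)
    builder.add_node("chatbot", lambda s: {"messages": [llm.invoke(s["messages"])]})
    builder.add_edge(START, "chatbot")
    builder.add_edge("chatbot", END)
    graph = builder.compile()
    out = graph.invoke({"messages": [{"role": "user", "content": prompt}]})
    return out["messages"][-1].content


@pipeline
def agent_pipeline():
    run_agent(prompt="Hello!")


if __name__ == "__main__":
    agent_pipeline()

From here, the same pipeline could be scheduled or pointed at a remote stack without touching the agent code.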
ZenML is fully open-source and vendor-neutral, letting you avoid the significant licensing costs and platform lock-in of proprietary enterprise platforms. Your pipelines remain portable across any infrastructure, from local development to multi-cloud production.
ZenML offers a pip-installable, Python-first approach that lets you start locally and scale later. No enterprise deployment, platform operators, or Kubernetes clusters required to begin — build production-grade ML pipelines in minutes, not weeks.
ZenML's composable stack lets you choose your own orchestrator, experiment tracker, artifact store, and deployer. Swap components freely without re-platforming — your pipelines adapt to your toolchain, not the other way around.
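As a rough sketch of what that swap looks like in code, the example below runs the same pipeline against two different stacks. It assumes stacks named local_stack and gcp_stack were registered beforehand (for example via the ZenML CLI) and uses the Client API to switch the active stack; the stack and step names are placeholders.

from zenml import pipeline, step
from zenml.client import Client


@step
def train() -> None:
    # The step body is identical no matter where it executes
    ...


@pipeline
def training_pipeline():
    train()


# Develop against a local stack first...
Client().activate_stack("local_stack")  # placeholder stack name
training_pipeline()

# ...then point the unchanged pipeline at a cloud stack.
Client().activate_stack("gcp_stack")  # placeholder stack name
training_pipeline()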
Expand Your Knowledge

I rebuilt zenml.io — 2,224 pages, 20 CMS collections — from Webflow to Astro in a week using Claude Code and a multi-model AI workflow. Here's how.


Agentic RAG without guardrails spirals out of control. Here's how ZenML's dynamic pipelines give you fan-out, budget limits, and lineage without limiting the LLMs.