
ZenML is a lightweight alternative to Kubeflow, the Kubernetes-native platform for machine learning. While Kubeflow offers robust orchestration capabilities for ML workflows on Kubernetes, ZenML provides a more flexible and user-friendly approach to building, deploying, and managing ML pipelines at scale. ZenML's intuitive workflow management simplifies MLOps across various environments, not just Kubernetes. Leverage ZenML's adaptability and ease of use to accelerate your ML initiatives and drive innovation across your organization, without the steep learning curve and infrastructure demands of Kubeflow.
“After a benchmark on several solutions, we chose ZenML for its stack flexibility and its incremental process. We started from small local pipelines and gradually created more complex production ones. It was very easy to adopt.”
Clément Depraz
Data Scientist at Brevo
Feature-by-feature comparison
| Feature | ZenML | Kubeflow |
| --- | --- | --- |
| Workflow Orchestration | Provides a flexible and user-friendly orchestration layer on top of Kubeflow | Offers powerful Kubernetes-native orchestration for ML workflows |
| Ease of Use | Simplifies the adoption and management of Kubeflow pipelines with an intuitive interface | Requires Kubernetes expertise to effectively utilize its features |
| Integration Flexibility | Seamlessly integrates Kubeflow with other MLOps tools for a customized stack | Primarily focuses on Kubernetes-based integrations and extensions |
| Orchestrator Portability | Lets you keep the same pipeline code when switching orchestrators | Requires significant rewriting to run Kubeflow pipeline code on a different orchestrator |
| Pipeline Customization | Enables easy customization and extension of Kubeflow pipelines | Allows customization but may require more Kubernetes knowledge |
| Collaborative MLOps | Facilitates collaboration among teams with version control and governance features | Provides collaboration features but may require additional setup |
| Scalability | Leverages Kubeflow's scalability while providing an abstraction layer for ease of use | Highly scalable for large-scale ML workflows on Kubernetes |
| Experiment Tracking | Integrates with MLflow and other tools for comprehensive experiment tracking | Offers Kubeflow Metadata for experiment tracking and artifact management |
| Model Deployment | Simplifies the deployment of models using Kubeflow with pre-built integrations | Supports various deployment options, including KServe (formerly KFServing) |
| Monitoring and Logging | Provides centralized monitoring and logging for Kubeflow pipelines | Offers Kubeflow Metadata for logging and monitoring |
| Community and Support | Growing community with active support and resources | Large and active community with extensive resources and support |
| MLOps Lifecycle Coverage | Covers the entire MLOps lifecycle, from data preparation to model monitoring | Focuses primarily on orchestration, deployment, and serving |
| Learning Curve | Reduces the learning curve for adopting Kubeflow with a user-friendly abstraction layer | Requires Kubernetes expertise to effectively utilize its full set of features |
| Hybrid and Multi-Cloud | Supports hybrid and multi-cloud deployments with Kubeflow integration | Enables hybrid and multi-cloud deployments on Kubernetes |
| GPU and Distributed Computing | Seamlessly leverages Kubeflow's GPU and distributed computing capabilities | Provides strong support for GPU and distributed computing workloads |
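The "keep your code" row above is the core of ZenML's portability pitch: the pipeline code never changes, only the active stack does. A rough sketch of what that looks like in practice (the stack, orchestrator, and script names here are illustrative, not from the original page):

```shell
# Run the pipeline locally first, on the default stack:
zenml stack set default
python run_pipeline.py

# Register a Kubeflow orchestrator and bundle it into a new stack
# (names like kf_orchestrator / kf_stack are placeholders):
zenml integration install kubeflow
zenml orchestrator register kf_orchestrator -f kubeflow
zenml stack register kf_stack -o kf_orchestrator -a default
zenml stack set kf_stack

# Same entrypoint, same pipeline code, now executed on Kubeflow:
python run_pipeline.py
```

Switching back is just another `zenml stack set`; no pipeline code is rewritten in either direction.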
Code comparison
ZenML

```python
# First, install the Kubeflow integration and point a stack at it:
#   zenml integration install kubeflow
#   zenml orchestrator register kf_orchestrator -f kubeflow ...
#   zenml stack update my_stack -o kf_orchestrator

from zenml import pipeline, step


@step
def preprocess_data(data_path: str) -> str:
    # Preprocessing logic here
    processed_data = f"processed data from {data_path}"
    return processed_data


@step
def train_model(data: str) -> str:
    # Model training logic here
    model = f"model trained on {data}"
    return model


@pipeline
def my_pipeline(data_path: str):
    processed_data = preprocess_data(data_path)
    model = train_model(processed_data)


# Run the pipeline on the active stack (here, the Kubeflow orchestrator)
my_pipeline(data_path="path/to/data")
```

Kubeflow Pipelines SDK

```python
import kfp
from kfp import dsl


def preprocess_op(data_path):
    return dsl.ContainerOp(
        name='Preprocess Data',
        image='preprocess-image:latest',
        arguments=['--data_path', data_path],
        # Expose the processed-data file so downstream ops can consume it
        file_outputs={'output': '/tmp/processed_data.txt'},
    )


def train_op(data):
    return dsl.ContainerOp(
        name='Train Model',
        image='train-image:latest',
        arguments=['--data', data],
    )


@dsl.pipeline(
    name='My ML Pipeline',
    description='A sample ML pipeline'
)
def my_pipeline(data_path: str):
    preprocess_task = preprocess_op(data_path)
    train_task = train_op(preprocess_task.output)


# Compile the pipeline to a reusable spec, or submit it directly to a cluster
kfp.compiler.Compiler().compile(my_pipeline, 'pipeline.yaml')
client = kfp.Client()
client.create_run_from_pipeline_func(
    my_pipeline, arguments={'data_path': 'path/to/data'}
)
```
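Both SDKs lean on the same underlying idea: decorators capture which steps run and in what order, so the framework can track (or remotely orchestrate) the pipeline. A minimal, framework-free sketch of that pattern, purely for illustration (this is not ZenML's or KFP's actual implementation):

```python
# Toy version of decorator-based pipeline tracking:
# @step wraps functions to record each execution, and
# @pipeline runs the function while collecting the step order.

RUN_LOG = []


def step(fn):
    """Wrap a function so every call is recorded in the run log."""
    def wrapper(*args, **kwargs):
        result = fn(*args, **kwargs)
        RUN_LOG.append(fn.__name__)  # record execution order
        return result
    return wrapper


def pipeline(fn):
    """Execute the pipeline body and return the recorded step order."""
    def run(**kwargs):
        RUN_LOG.clear()
        fn(**kwargs)
        return list(RUN_LOG)
    return run


@step
def preprocess_data(data_path: str) -> str:
    return f"processed:{data_path}"


@step
def train_model(data: str) -> str:
    return f"model<{data}>"


@pipeline
def my_pipeline(data_path: str):
    train_model(preprocess_data(data_path))


print(my_pipeline(data_path="path/to/data"))
# ['preprocess_data', 'train_model']
```

A real framework records artifacts and metadata rather than just names, and defers execution to an orchestrator, but the decorator mechanics are the same.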
ZenML initializes quickly, outpacing heavyweight orchestrators and keeping iteration on your ML workflows fast.
ZenML is a native interface to the entire end-to-end machine learning lifecycle, taking you beyond orchestration alone.
ZenML comes with dedicated, personalized support that goes beyond what standard orchestrators offer.