
How I Rebuilt zenml.io in a Week with Claude Code
I rebuilt zenml.io — 2,224 pages, 20 CMS collections — from Webflow to Astro in a week using Claude Code and a multi-model AI workflow. Here's how.
Unlock the full potential of your machine learning projects with ZenML, a flexible alternative to AWS SageMaker. SageMaker offers a comprehensive cloud platform for ML; ZenML provides a vendor-neutral approach to building, training, and deploying high-quality models at scale. ZenML's workflow management extends beyond a single cloud provider, giving you the flexibility to work across environments and tools. Unlike SageMaker's AWS-centric ecosystem, ZenML lets you accelerate time-to-market and drive innovation across your organization without locking into a specific cloud infrastructure, so your ML workflows can adapt as your needs evolve.
“ZenML allowed us a fast transition between dev to prod. It’s no longer the big fish eating the small fish – it’s the fast fish eating the slow fish.”
François Serra
ML Engineer / ML Ops / ML Solution architect at ADEO Services
Feature-by-feature comparison
| Feature | ZenML | Amazon SageMaker |
| --- | --- | --- |
| Workflow Orchestration | Provides a flexible and portable orchestration layer for ML workflows | Offers orchestration capabilities within the SageMaker ecosystem |
| Integration Flexibility | Seamlessly integrates with SageMaker and other MLOps tools for a customized stack | Primarily focuses on integration within the AWS ecosystem |
| Vendor Lock-In | Enables easy migration between orchestrators and cloud providers | Tight coupling with AWS services may lead to vendor lock-in |
| Local Development | Supports local development and testing of ML workflows before deployment | Limited local development capabilities, primarily cloud-based |
| MLOps Lifecycle Coverage | Covers the entire MLOps lifecycle, from data preparation to model monitoring | Covers the entire MLOps lifecycle, though some stages are more tightly integrated than others |
| Collaborative Development | Facilitates collaboration among teams with version control and governance features | Provides collaboration features through SageMaker Studio |
| Portability | Ensures workflow portability across different environments and platforms | Primarily optimized for the AWS environment |
| Experiment Tracking | Integrates with MLflow and other tools for comprehensive experiment tracking | Offers SageMaker Experiments for experiment tracking |
| Model Deployment | Simplifies model deployment across various platforms, including SageMaker | Supports deployment within the SageMaker ecosystem |
| Monitoring and Logging | Provides centralized monitoring and logging for ML workflows | Offers monitoring and logging capabilities through AWS services |
| Community and Support | Growing community with active support and resources | Large community and extensive support through AWS |
| Pricing Model | Flexible pricing model based on usage and scale | Pay-as-you-go pricing model tied to AWS service usage |
| Learning Curve | Reduces the learning curve by providing a consistent interface across platforms | Requires familiarity with AWS services and SageMaker concepts |
| Hybrid and Multi-Cloud | Supports hybrid and multi-cloud deployments with easy migration | Primarily optimized for AWS, with limited multi-cloud support |
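The portability and vendor lock-in rows above come down to one architectural idea: the pipeline definition is decoupled from the backend that executes it. Here is a minimal, library-free sketch of that contract (the names and the two toy "orchestrators" are illustrative, not the ZenML API):

```python
# Conceptual sketch of workflow portability: the same pipeline definition
# (an ordered list of steps) runs unchanged under interchangeable backends.
from typing import Callable, Dict, List

Step = Callable[[Dict], Dict]
Pipeline = List[Step]

def prepare_data(state: Dict) -> Dict:
    state["data"] = [1, 2, 3]  # stand-in for real data preparation
    return state

def train_model(state: Dict) -> Dict:
    state["model"] = sum(state["data"])  # stand-in for real training
    return state

def run_locally(pipeline: Pipeline) -> Dict:
    """A 'local orchestrator': executes steps in-process."""
    state: Dict = {}
    for step in pipeline:
        state = step(state)
    return state

def run_remotely(pipeline: Pipeline) -> Dict:
    """A stand-in 'cloud orchestrator': same contract, different backend.

    A real backend would submit each step as a managed job; here we only
    simulate that the execution environment differs.
    """
    state: Dict = {"backend": "cloud"}
    for step in pipeline:
        state = step(state)
    return state

my_pipeline: Pipeline = [prepare_data, train_model]
assert run_locally(my_pipeline)["model"] == run_remotely(my_pipeline)["model"]
```

Because both backends honor the same step contract, switching from local development to cloud execution changes configuration, not pipeline code.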
Code comparison
```python
# ZenML with SageMaker integration
from zenml import pipeline, step
from zenml.integrations.aws.flavors.sagemaker_orchestrator_flavor import (
    SagemakerOrchestratorSettings,
)

# The SageMaker orchestrator itself is registered on the active ZenML stack
# (with an execution role such as
# "arn:aws:iam::123456789012:role/SageMakerRole"), not in pipeline code.

# Define custom settings for steps that need larger instances
gpu_settings = SagemakerOrchestratorSettings(
    processor_args={
        "instance_type": "ml.p3.2xlarge",
        "volume_size_in_gb": 100,
    },
    input_data_s3_uri="s3://your-bucket/training-data",
)

@step
def prepare_data():
    processed_data = ...  # Data preparation logic
    return processed_data

@step(settings={"orchestrator.sagemaker": gpu_settings})
def train_model(data):
    model = ...  # Training logic, run on the GPU instance configured above
    return model

@step
def evaluate_model(model):
    metrics = ...  # Model evaluation logic
    return metrics

@pipeline
def sagemaker_pipeline():
    data = prepare_data()
    model = train_model(data)
    evaluate_model(model)

# Run the pipeline on the orchestrator in the active stack
sagemaker_pipeline()
```

```python
# SageMaker directly
import boto3
import sagemaker
from sagemaker.estimator import Estimator
from sagemaker.inputs import TrainingInput

# Set up SageMaker session
sagemaker_session = sagemaker.Session()
role = sagemaker.get_execution_role()

# Define estimator
estimator = Estimator(
    image_uri="your-docker-image-uri",
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    output_path="s3://your-bucket/output",
)

# Set hyperparameters
estimator.set_hyperparameters(epochs=10, learning_rate=0.1)

# Prepare data
train_data = TrainingInput(
    s3_data="s3://your-bucket/train",
    content_type="text/csv",
)

# Train the model
estimator.fit({"train": train_data})

# Deploy the model
predictor = estimator.deploy(
    initial_instance_count=1,
    instance_type="ml.t2.medium",
)

# Make predictions
result = predictor.predict("sample input data")
print(result)
```
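In the ZenML example, which backend runs the pipeline is stack configuration rather than code. A sketch of switching the pipeline above between local execution and SageMaker via the ZenML CLI (the stack and component names here are illustrative):

```shell
# Register a SageMaker orchestrator as a stack component
zenml orchestrator register sagemaker_orch --flavor=sagemaker \
    --execution_role="arn:aws:iam::123456789012:role/SageMakerRole"

# Compose a stack that uses it (artifact store name is illustrative)
zenml stack register sagemaker_stack -o sagemaker_orch -a s3_store

# Activate it -- the pipeline code itself does not change
zenml stack set sagemaker_stack

# Switch back to local execution for development
zenml stack set default
```

The same `sagemaker_pipeline()` call then runs on whichever stack is active, which is what the local-development and migration rows in the table above refer to.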
ZenML gets pipelines up and running faster than standalone orchestrators, with less setup before you have an optimized ML workflow.
ZenML is a native interface to the whole end-to-end machine learning lifecycle, taking you beyond just orchestration.
ZenML excels with dedicated support, offering personalized assistance beyond standard orchestrators.