MLOps

The latest news, opinions and technical guides from ZenML.

Orchestration Showdown: Dagster vs Prefect vs Airflow

Comparing Airflow, Dagster, and Prefect: Choosing the right orchestration tool for your data workflows.
Read post

Building Scalable Forecasting Solutions: A Comprehensive MLOps Workflow on Google Cloud Platform

MLOps on Google Cloud Platform streamlines machine learning workflows using Vertex AI and ZenML.
Read post

AI-Generated Storytelling: A GenAI Comic About ZenML

Playing around with some genAI services and tools to create a story and comic that showcases the journey of MLOps adoption for a small team.
Read post

MLOps: What It Is, Why It Matters, and How to Implement It

An overview of MLOps principles, implementation strategies, best practices, and tools for managing machine learning lifecycles.
Read post

Newsletter Edition #6 - Fine-tuning Llama 3.1 using your MLOps stack

ZenML's new direction: Simplifying infrastructure connections for enhanced MLOps.
Read post

The Ultimate Guide to LLM Batch Inference with OpenAI and ZenML

OpenAI's Batch API lets you submit queries at 50% of the standard price. Not every model works with the service, but for many use cases it will significantly cut your LLM inference costs, as long as your workload can wait for asynchronous results, so not a chatbot!
Read post
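For context on the teaser above, here is a minimal sketch of what a Batch API submission looks like with the openai Python SDK. The file name requests.jsonl and its contents are assumptions for illustration; the full end-to-end pipeline is covered in the post.

```python
# Minimal sketch of an OpenAI Batch API submission (assumes the openai Python SDK
# is installed and requests.jsonl contains one /v1/chat/completions request per line).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Upload the batch input file.
batch_file = client.files.create(
    file=open("requests.jsonl", "rb"),
    purpose="batch",
)

# Submit the batch; results are produced asynchronously within the completion window.
batch = client.batches.create(
    input_file_id=batch_file.id,
    endpoint="/v1/chat/completions",
    completion_window="24h",
)

print(batch.id, batch.status)
```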

The struggles of defining a Machine Learning Pipeline

On the difficulties in precisely defining a machine learning pipeline, exploring how code changes, versioning, and naming conventions complicate the concept in MLOps frameworks like ZenML.
Read post

Reflections on working with 100s of ML Platform teams

Exploring the evolution of MLOps practices in organizations, from manual processes to automated systems, covering aspects like data science workflows, experiment tracking, code management, and model monitoring.
Read post

Newsletter Edition #4 - Learnings from Building with LLMs

Today, we're back in LLM land (not too far from Lalaland). Not only do we have a new LoRA + Accelerate-powered fine-tuning pipeline for you, we're also hosting a RAG-themed webinar.
Read post