ZenML Blog

The latest news, opinions and technical guides from ZenML.
Thank you! Your submission has been received!
Oops! Something went wrong while submitting the form.
Tutorials
8 mins

Multimodal LLM Pipelines: From Data Ingestion to Real-Time Inference

Learn how to build, fine-tune, and deploy multimodal LLMs using ZenML. Explore LLMOps best practices for deployment, real-time inference and model management.
Read post
ZenML
5 mins

Making ML Documentation AI-Friendly: ZenML's Implementation of llms.txt

Discover how ZenML implements the llms.txt standard to make ML documentation more accessible to both AI assistants and humans. Learn about our modular approach using specialized documentation files, practical integration with AI development tools, and how this structured format enhances the developer experience across different context window sizes.
Read post

New Features: Performance Upgrade, Improvements for Major Cloud Providers, and More!

ZenML 0.74.0 introduces key cloud provider features including SageMaker pipeline scheduling, Azure Container Registry implicit authentication, and Vertex AI persistent resource support. The release adds API Tokens for secure, time-boxed API authentication while delivering comprehensive improvements to timezone handling, database performance, and Helm chart deployments.
Read post

Newsletter Edition #11 - GenAI Meets MLOps: New Roles, New Rules

Our monthly roundup: AI Infrastructure Summit insights, new experiment comparison tools, and a deep dive into AI Engineering roles
Read post
MLOps
2 mins

AI Engineering vs ML Engineering: Evolving Roles in the GenAI Era

The rise of Generative AI has shifted the roles of AI Engineering and ML Engineering, with AI Engineers integrating generative AI into software products. This shift requires clear ownership boundaries and specialized expertise. A proposed solution is layer separation, separating concerns into two distinct layers: Application (AI Engineers/Software Engineers), Frontend development, Backend APIs, Business logic, User experience, and ML (ML Engineers). This allows AI Engineers to focus on user experience while ML Engineers optimize AI systems.
Read post
LLMOps
8 mins

Production LLM Security: Real-world Strategies from Industry Leaders 🔐

Learn how leading companies like Dropbox, NVIDIA, and Slack tackle LLM security in production. This comprehensive guide covers practical strategies for preventing prompt injection, securing RAG systems, and implementing multi-layered defenses, based on real-world case studies from the LLMOps database. Discover battle-tested approaches to input validation, data privacy, and monitoring for building secure AI applications.
Read post

New Dashboard Feature: Compare Your Experiments

ZenML's new Experiment Comparison Tool brings powerful experiment tracking capabilities to your ML pipelines. Compare up to 20 pipeline runs simultaneously through intuitive tabular and parallel coordinates visualizations, helping teams derive actionable insights from their pipeline metadata. Now available in the Pro tier dashboard.
Read post
LLMOps
7 mins

Optimizing LLM Performance and Cost: Squeezing Every Drop of Value

This comprehensive guide explores strategies for optimizing Large Language Model (LLM) deployments in production environments, focusing on maximizing performance while minimizing costs. Drawing from real-world examples and the LLMOps database, it examines three key areas: model selection and optimization techniques like knowledge distillation and quantization, inference optimization through caching and hardware acceleration, and cost optimization strategies including prompt engineering and self-hosting decisions. The article provides practical insights for technical professionals looking to balance the power of LLMs with operational efficiency.
Read post
LLMOps
7 mins

The Evaluation Playbook: Making LLMs Production-Ready

A comprehensive exploration of real-world lessons in LLM evaluation and quality assurance, examining how industry leaders tackle the challenges of assessing language models in production. Through diverse case studies, the post covers the transition from traditional ML evaluation, establishing clear metrics, combining automated and human evaluation strategies, and implementing continuous improvement cycles to ensure reliable LLM applications at scale.
Read post
Oops, there are no matching results for your search.

Start your new ML Project today with ZenML Pro

Join 1,000s of members already deploying models with ZenML.