LLM

The latest news, opinions and technical guides from ZenML.

The State of LLM Operations (LLMOps): Why Everything is Hard (And That's OK)

Machine Learning (ML) adoption is gaining momentum, but teams still struggle with building robust pipelines, maintaining data and output quality, and monitoring systems at scale. Recognizing and overcoming these challenges is crucial.
Read post

Using ZenML + Databricks to Supercharge LLM Development

The integration of ZenML and Databricks streamlines LLM development and deployment processes, offering scalability, reproducibility, efficiency, collaboration, and monitoring capabilities. This approach enables data scientists and ML engineers to focus on innovation.
Read post

Automating Lightning Studio ML Pipelines for Fine-Tuning LLMs

In the AI world, fine-tuning Large Language Models (LLMs) for specific tasks is becoming a critical competitive advantage. Combining Lightning AI Studios with ZenML can streamline and automate the LLM fine-tuning process, enabling rapid iteration and deployment of task-specific models. This approach allows you to create and serve multiple fine-tuned variants of a model with minimal computational resources. Scaling the process, however, requires careful resource management, data preparation, hyperparameter optimization, version control, deployment and serving, and cost management. This blog post explores the growing complexity of LLM fine-tuning at scale and introduces a solution that combines the flexibility of Lightning Studios with the automation capabilities of ZenML; a rough sketch of such a pipeline follows below.
Read post
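As a rough illustration of the approach described in the post above, here is a minimal sketch of an automated fine-tuning pipeline in ZenML. The step names, base model, and dataset path are illustrative assumptions, not the code from the post.

    # Minimal ZenML fine-tuning pipeline sketch; all names are placeholders.
    from zenml import pipeline, step

    @step
    def prepare_data(dataset_path: str) -> str:
        """Format the task-specific dataset (placeholder logic)."""
        return dataset_path  # in practice, return a processed dataset artifact

    @step
    def finetune(processed_data: str, base_model: str) -> str:
        """Fine-tune the base model on the prepared data (placeholder logic)."""
        return f"{base_model}-finetuned"  # in practice, return a weights URI

    @step
    def deploy(model_ref: str) -> None:
        """Hand the fine-tuned variant over to serving (placeholder logic)."""
        print(f"Deploying {model_ref}")

    @pipeline
    def llm_finetune_pipeline(dataset_path: str = "data/task.jsonl",
                              base_model: str = "base-llm"):
        data = prepare_data(dataset_path)
        model_ref = finetune(data, base_model)
        deploy(model_ref)

    if __name__ == "__main__":
        # Running on a Lightning Studio-backed stack would only require
        # switching the active ZenML stack; the pipeline code stays the same.
        llm_finetune_pipeline()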

New Features: Enhanced Step Execution, AzureML Integration and More!

ZenML's latest release, 0.65.0, enhances MLOps workflows with single-step pipeline execution, AzureML SDK v2 integration, and dynamic model versioning. The update also introduces a new quickstart experience, improved logging, and better artifact handling. These features aim to streamline ML development, improve cloud integration, and boost efficiency for data science teams across local and cloud environments; a short example of single-step execution is sketched below.
Read post
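For context on the single-step execution mentioned above, here is a minimal sketch, assuming the 0.65.0 behaviour where a @step-decorated function called outside a pipeline runs as its own one-step pipeline; the step name and metric are illustrative.

    # Minimal single-step execution sketch; assumes calling a @step directly
    # (no @pipeline wrapper) runs it as a one-step pipeline on the active stack.
    from zenml import step

    @step
    def evaluate_model(threshold: float = 0.8) -> bool:
        """Toy evaluation step; a real step would load a model and test data."""
        accuracy = 0.9  # placeholder metric
        return accuracy >= threshold

    if __name__ == "__main__":
        evaluate_model(threshold=0.85)  # tracked as its own pipeline run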

How to Finetune Phi 3.5 with ZenML

Master cloud-based LLM finetuning: Set up infrastructure, run pipelines, and manage experiments with ZenML's Model Control Plane for Microsoft's latest Phi model. A brief sketch of the Model Control Plane usage follows below.
Read post
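As a rough illustration of the Model Control Plane workflow referenced above, here is a minimal sketch. The model name, tags, dataset URI, and step body are illustrative assumptions rather than the post's actual code.

    # Minimal Model Control Plane sketch; names and values are placeholders.
    from zenml import Model, pipeline, step

    @step
    def finetune_phi(dataset_uri: str) -> str:
        """Placeholder for the actual Phi 3.5 fine-tuning logic."""
        return "path/to/finetuned-weights"

    @pipeline(model=Model(name="phi-3.5-finetuned", tags=["llm", "finetuning"]))
    def phi_finetuning_pipeline(dataset_uri: str = "s3://bucket/dataset.jsonl"):
        # Artifacts produced here are linked to the "phi-3.5-finetuned" model,
        # so fine-tuning runs and model versions stay grouped together.
        finetune_phi(dataset_uri)

    if __name__ == "__main__":
        phi_finetuning_pipeline()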

Newsletter Edition #6 - Fine-tuning Llama 3.1 using your MLOps stack

ZenML's new direction: Simplifying infrastructure connections for enhanced MLOps.
Read post

How to Finetune Llama 3.1 with ZenML

Master cloud-based LLM finetuning: Set up infrastructure, run pipelines, and manage experiments with ZenML's Model Control Plane for Meta's latest Llama model.
Read post

The Ultimate Guide to LLM Batch Inference with OpenAI and ZenML

OpenAI's Batch API lets you submit queries at 50% of the standard price. Not all of OpenAI's models work with the service, but in many use cases it can substantially cut your LLM inference costs, as long as you don't need real-time responses (so not for chatbots). A minimal example of submitting a batch job is sketched below.
Read post
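As a quick illustration of the workflow the guide above covers, here is a minimal sketch of submitting a batch job, assuming the openai Python SDK v1.x; the file name, model, and prompts are illustrative, and model availability for batching should be checked against OpenAI's documentation.

    # Minimal OpenAI Batch API sketch; file name, model, and prompts are
    # illustrative placeholders.
    import json
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # 1. Write the requests as JSONL, one request per line with a custom_id.
    requests = [
        {
            "custom_id": f"doc-{i}",
            "method": "POST",
            "url": "/v1/chat/completions",
            "body": {
                "model": "gpt-4o-mini",
                "messages": [{"role": "user", "content": f"Summarize document {i}"}],
            },
        }
        for i in range(3)
    ]
    with open("batch_input.jsonl", "w") as f:
        f.write("\n".join(json.dumps(r) for r in requests))

    # 2. Upload the file and create the batch job; results arrive within the
    #    completion window and are billed at the discounted batch rate.
    batch_file = client.files.create(file=open("batch_input.jsonl", "rb"), purpose="batch")
    batch = client.batches.create(
        input_file_id=batch_file.id,
        endpoint="/v1/chat/completions",
        completion_window="24h",
    )
    print(batch.id, batch.status)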

Newsletter Edition #4 - Learnings from Building with LLMs

Today, we're back to LLM land (not too far from Lalaland). Not only do we have a new LoRA + Accelerate-powered finetuning pipeline for you, but we're also hosting a RAG-themed webinar. A small LoRA configuration sketch follows below.
Read post
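For readers curious what the LoRA side of such a pipeline involves, here is a minimal sketch using Hugging Face PEFT. The base model, target modules, and hyperparameters are illustrative assumptions, not the newsletter pipeline's exact configuration.

    # Minimal LoRA setup sketch with Hugging Face PEFT; all values are
    # illustrative placeholders.
    from transformers import AutoModelForCausalLM
    from peft import LoraConfig, get_peft_model

    base_model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.1-8B")

    lora_config = LoraConfig(
        r=16,                                 # rank of the low-rank update matrices
        lora_alpha=32,                        # scaling factor for the adapter weights
        target_modules=["q_proj", "v_proj"],  # attention projections to adapt
        lora_dropout=0.05,
        task_type="CAUSAL_LM",
    )

    model = get_peft_model(base_model, lora_config)
    model.print_trainable_parameters()  # only the small adapter weights are trained

    # Training would then run through an Accelerate-prepared loop or the
    # transformers Trainer, saving the adapter with model.save_pretrained(...).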