Company
Weights & Biases
Title
LLMOps Evolution: Scaling Wandbot from Monolith to Production-Ready Microservices
Industry
Tech
Year
2023
Summary (short)
Weights & Biases presents a comprehensive case study of transforming their documentation chatbot Wandbot from a monolithic system into a production-ready microservices architecture. The transformation involved creating four core modules (ingestion, chat, database, and API), implementing features like multilingual support and model fallback mechanisms, and establishing robust evaluation frameworks. The new system achieved 66.67% response accuracy and 88.636% query relevancy, while enabling easier maintenance, cost optimization through caching, and seamless platform integration. The case study provides valuable insights into practical LLMOps challenges and solutions, from vector store management to conversation history handling, making it a notable example of scaling LLM applications in production.
## Overview

Weights & Biases developed Wandbot, a conversational developer assistant designed to help users interact with their documentation and code examples in a natural, conversational manner. The project began in early 2023 and underwent a significant architectural transformation to address production readiness challenges. This case study provides valuable insights into the real-world challenges of taking a RAG (Retrieval-Augmented Generation) application from prototype to production, including the architectural decisions, component design, and operational considerations involved.

The original Wandbot was deployed as a monolithic application with separate instances for Discord and Slack, which led to code duplication, maintenance headaches, and inflated infrastructure costs. The team recognized these limitations and undertook a comprehensive refactoring effort to transition to a microservices architecture, which forms the core of this case study.

## The Problem: Monolithic Architecture Limitations

The initial version of Wandbot suffered from several production-readiness issues that are common in early-stage LLM applications. The Discord and Slack applications were deployed separately, resulting in duplicated code with only minor configuration differences. This approach created a cascade of operational problems.

Maintenance became increasingly difficult because any modification required updates in multiple places, which often resulted in bugs and inconsistencies due to unsynchronized deployments between the two platforms. Operational costs were inflated because the team was essentially running two distinct bots, duplicating resources such as vector stores and application deployments. As new features like conversation history were integrated, the system's complexity grew quickly, and the monolithic architecture became increasingly cumbersome, hindering the team's ability to scale and iterate on the product. These challenges are representative of what many teams face when transitioning from an LLM prototype to a production system.

## The Solution: Microservices Architecture

The team resolved to transition to a microservices-oriented architecture, breaking the bot down into smaller, manageable components. This restructuring allowed them to organize the system into distinct components for ingestion, chat, and database services while centralizing core services and models for use across applications. The modular design also enabled dedicated APIs for seamless integration with existing and potential future platforms, and allowed each service to be modified independently with minimal impact on the overall system.

### Ingestion Module

The Ingestion Module represents one of the most critical components in any RAG system, handling the parsing and processing of raw documentation in diverse formats including Markdown, Python code, and Jupyter Notebooks. The module creates embedding vectors for document chunks and indexes these documents into a FAISS vector store with relevant metadata.

The document parsing pipeline begins with syncing the latest updates from GitHub repositories. The team uses the MarkdownNodeParser from LlamaIndex for parsing and chunking Markdown documents by identifying headers and code blocks. Jupyter Notebooks are converted into Markdown using nbconvert and undergo a similar parsing routine. Code blocks receive special treatment: they are parsed and chunked using Concrete Syntax Trees (CST), which segment the code logically into functions, classes, and statements.
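To make the parsing step concrete, here is a minimal sketch of notebook-to-Markdown conversion and chunking. It is an illustration under stated assumptions rather than Wandbot's actual code: the import paths follow 2023-era llama_index and nbconvert releases, and the `notebook_to_nodes` helper and its metadata fields are hypothetical.

```python
# Illustrative sketch (not Wandbot's actual pipeline): convert a Jupyter notebook
# to Markdown with nbconvert, then chunk it with LlamaIndex's MarkdownNodeParser.
# Import paths follow 2023-era llama_index releases and may differ in newer versions.
from nbconvert import MarkdownExporter
from llama_index import Document
from llama_index.node_parser import MarkdownNodeParser

def notebook_to_nodes(notebook_path: str, source_url: str, language: str = "en"):
    # Convert the .ipynb file into a single Markdown string.
    markdown_text, _resources = MarkdownExporter().from_filename(notebook_path)

    # Wrap the Markdown in a Document, attaching metadata (source URL, language)
    # that is carried along with every chunk and later used for retrieval
    # filtering and citations.
    doc = Document(
        text=markdown_text,
        metadata={"source": source_url, "language": language},
    )

    # Split on headers and code blocks into retrievable chunks ("nodes").
    parser = MarkdownNodeParser()
    return parser.get_nodes_from_documents([doc])
```

Plain Python files would follow the CST-based path described above instead of the Markdown parser, but the overall shape (parse, chunk, attach metadata) is the same.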
Each document chunk is enriched with metadata such as source URLs and languages to enhance future retrieval. For vector store ingestion, the team uses OpenAI's ada-002 model for embeddings. A crucial operational optimization is the use of SQLite caching (part of LangChain) to minimize redundant model calls, which is essential for cost and operational efficiency. The output is a FAISS index with embedded chunks and metadata, stored as a W&B artifact for versioning and reproducibility.

The team also generates comprehensive reports outlining GitHub repository revision numbers, the volume of documents ingested, and the artifacts comprising parsed documents and vector stores. This practice provides transparency into the ingestion process and facilitates analysis and future improvements.

### Chat Module

The chat module underwent significant transformation during the refactoring effort. The team migrated from LangChain to LlamaIndex, which gave them better control over underlying functionality including retrieval methods, the response synthesis pipeline, and other customizations. This migration decision reflects the evolving landscape of LLM frameworks and the importance of choosing tools that provide the right level of abstraction for production use cases.

A notable integration is Cohere's rerank-v2 endpoint, which allows Wandbot to sift through retriever results more effectively. Reranking has become a standard practice in production RAG systems for improving the relevance of retrieved documents before they are passed to the LLM for response generation.

The team prioritized multilingual support: the chat module now recognizes and responds to queries in the language they are asked in, with particular emphasis on Japanese to serve the W&B Japan Slack community. This required implementing language-based retrieval mechanisms.

For reliability, the team implemented an LLM fallback mechanism. If the primary model (GPT-4) experiences downtime, the system seamlessly switches to a backup LLM (GPT-3.5-turbo). This failover is managed within the LlamaIndex service context and adds a layer of resilience against potential downtime, a critical consideration for production systems (a minimal standalone sketch of this pattern appears at the end of this write-up).

The system prompt engineering is thorough, instructing the LLM to provide clear and concise explanations, generate only code derived from the provided context, always cite sources, and respond in the user's language. The prompt also includes explicit guidance for handling uncertainty, directing users to support channels when the context is insufficient.

### Database Module

The database module serves as Wandbot's memory bank: it stores conversational history, provides conversational context for future queries, enables personalization through conversation threads, and persists user feedback for continuous improvement.

The choice of SQLite was driven by its serverless architecture (no need for a separate database server), its embeddable nature (all data contained within a single, easily transportable file), and its ease of integration with Python. The team implements periodic backups (every 10 minutes) to W&B Tables, allowing the data to persist as W&B artifacts that can be used in evaluation and feedback loops.

Caching of LLM query results at the database level reduces the need for repetitive queries, cutting down operational costs. This is a common pattern in production LLM systems where identical or similar queries may be received frequently.
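The database module's core responsibilities, storing Q&A turns per thread and replaying them as conversational context, are simple to sketch with Python's built-in sqlite3 module. The schema and function names below are illustrative assumptions, not Wandbot's actual implementation.

```python
# Minimal sketch of a conversation-history store on SQLite.
# Table and column names are illustrative, not Wandbot's actual schema.
import sqlite3

def init_db(path: str = "wandbot.db") -> sqlite3.Connection:
    conn = sqlite3.connect(path)
    conn.execute(
        """CREATE TABLE IF NOT EXISTS chat_history (
               thread_id  TEXT,
               question   TEXT,
               answer     TEXT,
               feedback   TEXT,
               created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
           )"""
    )
    return conn

def store_qa(conn: sqlite3.Connection, thread_id: str, question: str, answer: str) -> None:
    # Persist one question/answer turn for a conversation thread.
    conn.execute(
        "INSERT INTO chat_history (thread_id, question, answer) VALUES (?, ?, ?)",
        (thread_id, question, answer),
    )
    conn.commit()

def get_thread(conn: sqlite3.Connection, thread_id: str) -> list[tuple[str, str]]:
    # Return the thread's turns oldest-first, to be prepended as context
    # for the next query in the same thread.
    cur = conn.execute(
        "SELECT question, answer FROM chat_history "
        "WHERE thread_id = ? ORDER BY created_at",
        (thread_id,),
    )
    return cur.fetchall()
```

Because everything lives in a single file, the periodic backup to W&B Tables described above amounts to reading this database and logging its rows as a versioned artifact.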
### API Module

The API module serves as the central interface for client applications, with key endpoints including /question_answer for storing Q&A pairs, /chat_thread for retrieving conversation history, /query as the primary chat endpoint, and /feedback for storing user feedback.

The centralized API approach provides several advantages: loose coupling between frontend applications and backend services, improved developer productivity through abstraction, independent horizontal scaling of individual API services, and enhanced security by avoiding direct exposure of core modules.

## Deployment and Operations

The team deployed the microservices on Replit Deployments, which provides improved uptime, auto-scaling capabilities, and enhanced monitoring and security. While the individual microservices for the database, API, and client applications run in a single repl, the platform supports horizontal scaling as usage patterns evolve.

## Evaluation Approach

The team conducted both manual and automated evaluation of Wandbot, measuring retrieval accuracy and response relevance across a custom test set with diverse query types. They acknowledge that evaluating RAG systems is complex, requiring examination of each component both individually and as a whole. The article references separate detailed evaluation reports, recognizing that comprehensive LLM evaluation is a substantial undertaking in its own right.

## Key Takeaways for LLMOps

This case study illustrates several important LLMOps patterns: the transition from a monolithic to a microservices architecture for maintainability and scalability; the importance of caching at multiple levels (embeddings, LLM responses) for cost optimization; the value of model fallback mechanisms for reliability; the need for comprehensive evaluation across retrieval and generation components; and the benefits of artifact versioning and reporting for reproducibility and debugging.

It's worth noting that while the case study presents a successful transformation, the article is self-published by Weights & Biases about their own product, so claims about performance improvements should be considered in that context. The architectural patterns and technical decisions described, however, represent sound practices that are applicable across production RAG implementations.
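As a concrete illustration of one of these patterns, the model fallback, here is a minimal standalone sketch. It assumes the OpenAI Python client (v1+) and a hand-rolled wrapper; Wandbot itself manages failover inside the LlamaIndex service context rather than with code like this.

```python
# Minimal sketch of an LLM fallback: try the primary model, and on any client
# error retry with a cheaper backup model. Assumes the OpenAI Python client (v1+).
# Wandbot handles this inside LlamaIndex; this wrapper only illustrates the pattern.
from openai import OpenAI, OpenAIError

client = OpenAI()

def chat_with_fallback(messages: list[dict],
                       primary: str = "gpt-4",
                       backup: str = "gpt-3.5-turbo") -> str:
    try:
        response = client.chat.completions.create(model=primary, messages=messages)
    except OpenAIError:
        # Primary model unavailable or rate limited: fall back to the backup model.
        response = client.chat.completions.create(model=backup, messages=messages)
    return response.choices[0].message.content
```

The same wrapping idea extends to the caching layers mentioned above: checking a local cache before either model call avoids paying for repeated, identical queries.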
