Company
CircleCI
Title
AI Error Summarizer Implementation: A Tiger Team Approach
Industry
Tech
Year
2023
Summary (short)
CircleCI's engineering team formed a tiger team to explore AI integration possibilities, ultimately developing an AI error summarizer feature. The team spent 6-7 weeks on discovery, including extensive stakeholder interviews and technical exploration, before implementing a relatively simple but effective LLM-based solution that summarizes build errors for users. The case demonstrates how companies can successfully approach AI integration through focused exploration and iterative development, emphasizing that valuable AI features don't necessarily require complex implementations.
## Overview

CircleCI, a leading CI/CD (Continuous Integration/Continuous Deployment) platform, launched an AI Error Summarizer feature designed to help developers understand build errors more quickly. This case study comes from a podcast discussion featuring Rob Zuber (CTO of CircleCI) along with engineers Kira Milow and Ryan Hamilton, who were part of the tiger team that built the feature. The conversation provides valuable insights into the organizational, product, and technical approaches to introducing LLM-powered features into an existing developer tools product.

## The Discovery Phase and Organizational Approach

The project began with CircleCI forming a dedicated tiger team to explore generative AI opportunities. This approach is notable because it reflects a common industry pattern where leadership recognizes the potential of generative AI but doesn't have a clear vision of how it should be applied. As Rob Zuber candidly characterized the sentiment: "my CTO just told me I don't know what AI is but we need some of it in our product as quickly as possible."

The tiger team spent approximately 6-7 weeks on the project, with the first week dedicated entirely to learning and exploration. Team members consumed videos, read documentation, and researched what other companies were doing with generative AI. This foundational learning phase is an important LLMOps consideration: even experienced engineers needed time to understand the landscape of NLP, AI, ML, vector databases, and the various technical complexities involved.

A particularly valuable aspect of their discovery process was conducting interviews with nearly every product manager in the company. Rather than asking "how should we use AI?", they asked PMs to identify their biggest challenges without thinking about AI as a solution. This product-first approach generated approximately 75 ideas on their Trello board and helped ground the technical exploration in real customer problems.

Ryan Hamilton reflected that if he were to do it again, he would spend even more time on these product conversations earlier in the process rather than getting "way down rabbit holes into like NLP and... all the complexities of like vector databases." This insight is valuable for other organizations: understanding product needs should precede deep technical exploration.

## Technical Implementation

The actual technical implementation appears to have been surprisingly straightforward once the team understood what they were building. Key technical details mentioned include:

- **LLM Integration**: The feature sends error messages to an LLM (OpenAI is specifically mentioned) and receives summarized explanations back
- **LangChain**: The team used LangChain as a framework for interfacing with OpenAI's API and chaining together operations
- **Python**: Ryan mentioned learning Python overnight via a Pluralsight course and having a working proof of concept with LangChain and OpenAI within a day
- **API Keys**: The barrier to entry was described as simply getting API keys and starting to "ping" the LLM

The team emphasized that regular software engineers without specialized ML or AI backgrounds were fully capable of building this feature. They explicitly noted that organizations "don't even need AI or ML Engineers to get started" because "regular software engineers are more than capable of interfacing with the APIs and chaining things together."
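To make that workflow concrete, the sketch below shows roughly what such a LangChain-plus-OpenAI proof of concept looks like: a prompt template wrapping a build log, sent to a hosted model, returning a short explanation. It uses the pre-1.0 LangChain interfaces that were current in 2023; the prompt wording, model choice, and function names are illustrative assumptions, not CircleCI's production configuration.

```python
# Minimal proof of concept: wrap a build log in a prompt template and ask a
# hosted OpenAI model for a short explanation of the failure.
# Requires: pip install "langchain<0.1" openai, plus OPENAI_API_KEY in the env.
# Prompt text and model are illustrative assumptions, not CircleCI's setup.
from langchain.chains import LLMChain
from langchain.chat_models import ChatOpenAI
from langchain.prompts import PromptTemplate

prompt = PromptTemplate(
    input_variables=["error_log"],
    template=(
        "You are helping a developer debug a failed CI build.\n"
        "Summarize the most likely cause of the failure in two or three\n"
        "sentences and suggest one concrete next step.\n\n"
        "Build output:\n{error_log}"
    ),
)

llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)  # deterministic-leaning summaries
chain = LLMChain(llm=llm, prompt=prompt)


def summarize_error(error_log: str) -> str:
    """Return a short, human-readable explanation of a build failure."""
    return chain.run(error_log=error_log)


if __name__ == "__main__":
    sample = (
        "npm ERR! code ERESOLVE\n"
        "npm ERR! Could not resolve dependency: peer react@\"^18.0.0\""
    )
    print(summarize_error(sample))
```

In a production setting one would also need to truncate or select the relevant portion of potentially very large build logs, handle API failures, and evaluate summary quality, concerns the podcast does not go into.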
## Key LLMOps Lessons and Insights

### Start Simple, Don't Train Your Own Model

One of the strongest recommendations from the team was to leverage existing foundational models rather than attempting to train custom models. Kira Milow noted: "chances are pretty good that you do not need to train your own model... there's ChatGPT, there's Llama, there's DALL-E... just start with the basics, try prompt engineering a little bit, you can move up to fine-tuning and vector databases if necessary."

This pragmatic approach acknowledges that for many use cases, particularly text summarization and explanation tasks, pre-trained large language models are more than sufficient. The advice to start with prompt engineering and only escalate to fine-tuning or more complex architectures "if necessary" reflects a mature understanding of build-vs-buy tradeoffs in LLMOps (a brief sketch of this escalation path follows below).

### Low Cost of Experimentation

The team repeatedly emphasized how inexpensive it was to experiment with LLM-based features. Ryan described achieving "a lot" with very little time and noted that tinkering and prototyping could begin immediately after obtaining API access. This low barrier to entry is a significant shift from traditional ML projects that might require extensive data collection, model training, and infrastructure setup.
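As a concrete illustration of that "start simple" escalation path, the following sketch contrasts a bare zero-shot prompt with a slightly more engineered few-shot variant, using the pre-1.0 OpenAI Python SDK. The prompts, sample errors, and model choice are assumptions for illustration, not details from the podcast.

```python
# Illustrative "start simple" escalation: try a zero-shot prompt first, then
# tighten instructions and add a worked example (few-shot) before considering
# fine-tuning or retrieval. Requires: pip install "openai<1.0" and an
# OPENAI_API_KEY in the environment. All prompt text here is hypothetical.
import openai


def ask(messages):
    """Send a chat request and return the assistant's reply text."""
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo", messages=messages, temperature=0
    )
    return resp["choices"][0]["message"]["content"]


error_log = "FAILED tests/test_auth.py::test_login - KeyError: 'SESSION_SECRET'"

# Step 1: zero-shot, often good enough for summarization/explanation tasks.
zero_shot = ask([
    {"role": "user", "content": f"Explain this CI failure in one sentence:\n{error_log}"},
])

# Step 2: if the output is too vague, add instructions and a worked example
# before reaching for anything heavier.
few_shot = ask([
    {"role": "system", "content": "You explain CI failures to developers. Be specific and actionable."},
    {"role": "user", "content": "Explain: npm ERR! code ENOENT, no such file or directory package.json"},
    {"role": "assistant", "content": "npm ran outside the project root, so package.json was not found; check the job's working directory."},
    {"role": "user", "content": f"Explain: {error_log}"},
])

print(zero_shot)
print(few_shot)
```

Only if prompt iterations like these still fall short does the team's advice point toward fine-tuning or a vector-database-backed retrieval setup.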
### Subtle Features Can Deliver High Value

Rather than building a revolutionary, AI-first product, the team built what Ryan described as a "very subtle" addition to the existing CircleCI platform. He drew parallels to Amazon's AI-powered review summarization feature, which sits "way below the fold" but provides significant time savings for users.

This philosophy, that AI features should enhance existing products rather than demand attention, represents a pragmatic approach to AI product development. The error summarizer helps developers understand build failures faster without fundamentally changing how they interact with CircleCI.

### Rapid Prototyping and Learning

The timeline of the project is instructive for other organizations. Within a 6-7 week period, the team:

- Conducted foundational learning about generative AI
- Interviewed product managers across the company
- Generated and evaluated approximately 75 potential AI feature ideas
- Built a working prototype
- Shipped a production feature

Ryan's experience of going from zero Python knowledge to a working proof of concept within a day (after taking a Pluralsight course overnight) illustrates how accessible LLM-based development has become.

## Organizational Considerations

### Tiger Team Structure

The use of a dedicated tiger team with significant autonomy was highlighted as a key enabler of success. Kira noted that "the most valuable part of this whole exercise was just that we were essentially given free rein." The team had weekly check-ins but otherwise had freedom to explore, learn, and form their own opinions.

This organizational model, providing engineers with time, space, and freedom to explore emerging technology, proved effective for this type of exploratory AI work. It's worth noting that this approach may be particularly suitable for generative AI projects where even leadership lacks clear direction.

### Cross-Functional Learning

The project exposed engineers to parts of the business they wouldn't normally interact with. Both Kira and Ryan emphasized the value of speaking with product managers from across the company, which provided perspective on challenges from areas "that we're not usually involved in." This cross-pollination of ideas is a valuable secondary benefit of tiger team approaches.

## Reflections on the Broader AI Landscape

The conversation included observations about how generative AI is being adopted across the tech industry. Ryan mentioned specific examples:

- Amazon's review summarization feature
- Spotify's AI-powered personalized DJ
- Duolingo's AI enhancements for language learning
- Figma's AI features

These examples reinforced the team's conclusion that many successful AI features are subtle enhancements rather than revolutionary products. They also observed that "every other tech company is in the same exact boat", exploring how to integrate generative AI into existing products.

The accessibility of generative AI to non-technical users was also noted as significant. Kira mentioned that for the first time in her career, her non-technical friends understood what she was working on because ChatGPT had made AI concepts accessible to everyone.

## Limitations and Caveats

It's worth noting that this case study comes from a podcast produced by CircleCI itself, so there is inherent promotional context. The discussion lacks some technical details that would be valuable for a complete LLMOps assessment:

- No mention of evaluation metrics or how the team measured the quality of error summaries
- No discussion of hallucination risks or how they handle cases where the LLM might provide incorrect explanations
- No details on latency, cost per request, or infrastructure considerations
- No discussion of prompt engineering specifics or guardrails implemented

The emphasis on speed and ease of development is valuable context, but production LLM systems typically require more attention to reliability, accuracy, and edge cases than the conversation suggests.

## Conclusion

CircleCI's AI Error Summarizer project demonstrates a pragmatic approach to introducing LLM-powered features into an existing developer tools product. The key takeaways for other organizations include: starting with product problems rather than technology solutions, leveraging existing foundational models rather than building custom ones, empowering engineering teams with autonomy to explore, and recognizing that valuable AI features can be subtle enhancements rather than revolutionary products. The low barrier to entry for experimenting with LLM APIs makes this an accessible path for organizations of various sizes and AI maturity levels.
