Intercom successfully pivoted from a struggling traditional customer support SaaS business facing near-zero growth to an AI-first agent-based company through the development and deployment of Fin, their AI customer service agent. CEO Eoghan McCabe implemented a top-down transformation strategy involving strategic focus, cultural overhaul, aggressive cost-cutting, and significant investment in AI talent and infrastructure. The company went from low single-digit growth to becoming one of the fastest-growing B2B software companies, with Fin projected to surpass $100 million ARR within three quarters and growing at over 300% year-over-year.
Intercom’s transformation represents one of the most dramatic and successful pivots from traditional SaaS to AI-first operations in the enterprise software space. The company, founded 14 years ago as a customer communication platform, was facing existential challenges with declining growth rates and was approaching negative net new ARR when CEO Eoghan McCabe returned to lead a comprehensive transformation.
Intercom had grown to hundreds of millions in ARR but was experiencing the classic late-stage SaaS stagnation. The company had become what McCabe described as “bloated” with “diluted and unfocused” strategy, trying to serve “all the things for all the people.” They had experienced five consecutive quarters of declining net new ARR and were on the verge of hitting zero growth. The business model was traditional seat-based SaaS with complex, widely criticized pricing that had become a meme on social media.
The company already had some AI infrastructure in place, including basic chatbots and machine learning for Q&A in customer service, but these were rudimentary systems requiring extensive setup and delivering mediocre results. This existing AI team proved crucial when GPT-3.5 was released, as they immediately recognized the transformative potential of the new technology.
The pivot to AI happened remarkably quickly. Just six weeks after GPT-3.5’s launch, Intercom had developed a working beta version of what would become Fin, their AI customer service agent. This rapid development was enabled by several key factors: an existing AI engineering team, a large customer base of 30,000 paying customers with hundreds of thousands of active users, and billions of data points to train and optimize the system.
McCabe made the strategic decision to go “all in” on AI, allocating nearly $100 million in cash to the AI transformation. This wasn’t just a product decision but a complete business model transformation. The company shifted from traditional SaaS metrics to an agent-based model where success is measured by problem resolution rather than seat licenses.
The development of Fin involved significant LLMOps challenges that the company had to solve in production. Initially, the economics were upside down: Intercom charged 99 cents per resolved ticket while it cost them $1.20 to process each one. Reaching profitability required intensive optimization of their AI pipeline, prompt engineering, and infrastructure.
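The unit economics described above can be sketched as simple arithmetic. Only the 99-cent price and $1.20 launch cost come from the case study; the optimized cost figure below is a hypothetical illustration of the break-even target.

```python
def margin_per_resolution(price: float, cost: float) -> float:
    """Gross margin per successfully resolved ticket."""
    return price - cost

# Launch economics: each resolution lost money.
launch = margin_per_resolution(price=0.99, cost=1.20)
print(f"margin at launch: ${launch:+.2f}")  # -> margin at launch: $-0.21

# After pipeline, prompt, and infrastructure optimization, the cost per
# resolution must fall below the fixed 99-cent price to be profitable.
optimized = margin_per_resolution(price=0.99, cost=0.60)  # hypothetical optimized cost
print(f"margin after optimization: ${optimized:+.2f}")
```

The point of the sketch is that with a fixed outcome-based price, every cent shaved off inference cost flows directly to margin.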
The pricing model itself represents innovative thinking in AI productization. Rather than charging for usage or seats, Intercom aligned revenue directly with customer value through outcome-based pricing at 99 cents per successfully resolved customer ticket. This required sophisticated monitoring and evaluation systems to ensure high resolution rates, as their revenue model depends entirely on successful problem resolution.
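A minimal sketch of what outcome-based billing implies operationally: revenue accrues only on tickets the agent actually resolves, so the resolution rate must be tracked continuously. All names and structure here are hypothetical illustrations, not Intercom's actual implementation.

```python
from dataclasses import dataclass

PRICE_PER_RESOLUTION = 0.99  # the publicly stated Fin price point

@dataclass
class ResolutionLedger:
    resolved: int = 0
    escalated: int = 0  # handed off to a human agent; not billed

    def record(self, was_resolved: bool) -> float:
        """Record one ticket outcome; return the amount billed for it."""
        if was_resolved:
            self.resolved += 1
            return PRICE_PER_RESOLUTION
        self.escalated += 1
        return 0.0

    @property
    def resolution_rate(self) -> float:
        total = self.resolved + self.escalated
        return self.resolved / total if total else 0.0

ledger = ResolutionLedger()
for outcome in [True, True, False, True]:
    ledger.record(outcome)
print(ledger.resolution_rate)  # -> 0.75
```

The design choice this highlights: because unresolved tickets earn nothing, the resolution-rate metric is effectively a revenue metric, which is why evaluation and monitoring systems sit at the core of the business model.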
The company had to build robust production systems capable of handling customer-facing AI interactions at scale. With Fin now processing customer support tickets across thousands of businesses, the reliability and consistency requirements are extremely high. The system must maintain performance standards that exceed human agents while being available 24/7 across global time zones.
The AI transformation required more than just technical changes. McCabe implemented what he describes as “founder mode”: a top-down, aggressive restructuring spanning strategic focus, cultural overhaul, and cost-cutting.
The company recognized that competing in the AI space requires a different operational tempo. McCabe noted that successful AI companies operate with young teams working “12 hours a day, 365 days a year” and using AI tools throughout their workflow, not just for customer-facing features.
While specific technical details aren’t extensively covered in the interview, several key aspects of Intercom’s production AI system emerge: outcome-based pricing tied to successful resolution, intensive cost optimization of the inference pipeline, and reliability standards for customer-facing operation at global scale.
The transformation has yielded remarkable results. Fin is growing at over 300% year-over-year and is projected to exceed $100 million ARR within three quarters. Intercom now ranks in the 15th percentile for growth among all public B2B software companies, and McCabe predicts they will become the fastest-growing public software company by next year.
In the competitive landscape, Intercom claims to be the largest AI customer service agent by both customer count and revenue, with the highest performance benchmarks and winning rate in head-to-head comparisons. They maintain the number one rating on G2 in their category.
Several key lessons emerge from Intercom’s transformation, though the pivot also carries challenges and limitations worth noting.
McCabe’s vision extends beyond customer service to a broader transformation of business operations through AI agents. He predicts that future organizations will be “agents everywhere” with complex interactions between humans and AI systems across all business functions. This suggests that Intercom’s transformation may be an early example of a broader shift in how software companies will need to evolve in the AI era.
The case demonstrates that established SaaS companies can successfully transform into AI-first businesses, but it requires fundamental changes in strategy, operations, culture, and technology. The key appears to be treating it as a complete business model transformation rather than simply adding AI features to existing products.
This panel discussion brings together engineering leaders from HRS Group, Netflix, and Harness to explore how AI is transforming DevOps and SRE practices. The panelists address the challenge of teams spending excessive time on reactive monitoring, alert triage, and incident response, often wading through thousands of logs and ambiguous signals. The solution involves integrating AI agents and generative models into CI/CD pipelines, observability workflows, and incident management to enable predictive analysis, intelligent rollouts, automated summarization, and faster root cause analysis. Results include dramatically reduced mean time to resolution (from hours to minutes), elimination of low-level toil, improved context-aware decision making, and the ability to move from reactive monitoring to proactive, machine-speed remediation while maintaining human accountability for critical business decisions.
This panel discussion features three AI-native companies—Delphi (personal AI profiles), Seam AI (sales/marketing automation agents), and APIsec (API security testing)—discussing their journeys building production LLM systems over three years. The companies address infrastructure evolution from single-shot prompting to fully agentic systems, the shift toward serverless and scalable architectures, managing costs at scale (including burning through a trillion OpenAI tokens), balancing deterministic workflows with model autonomy, and measuring ROI through outcome-based metrics rather than traditional productivity gains. Key technical themes include moving away from opinionated architectures to let models reason autonomously, implementing state machines for high-confidence decisions, using tools like Pydantic AI and Logfire for instrumentation, and leveraging Pinecone for vector search at scale.
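The “state machines for high-confidence decisions” pattern mentioned above can be sketched as a deterministic workflow that only hands control to the model when confidence is low. The states, threshold, and transition logic here are illustrative assumptions, not any panelist's production design.

```python
from enum import Enum, auto

class State(Enum):
    CLASSIFY = auto()
    AUTO_ACT = auto()      # deterministic path: high-confidence decision
    MODEL_REASON = auto()  # defer to the model's autonomous reasoning
    DONE = auto()

CONFIDENCE_THRESHOLD = 0.9  # hypothetical cutoff

def step(state: State, confidence: float) -> State:
    """Advance the workflow by one deterministic transition."""
    if state is State.CLASSIFY:
        return State.AUTO_ACT if confidence >= CONFIDENCE_THRESHOLD else State.MODEL_REASON
    return State.DONE

print(step(State.CLASSIFY, 0.95))  # -> State.AUTO_ACT
print(step(State.CLASSIFY, 0.40))  # -> State.MODEL_REASON
```

The trade-off the panel describes is visible in the threshold: raise it and more traffic flows through the auditable deterministic path; lower it and the model gets more autonomy.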
Arize AI built "Alyx," an AI agent embedded in their observability platform to help users debug and optimize their machine learning and LLM applications. The problem they addressed was that their platform had advanced features that required significant expertise to use effectively, with customers needing guidance from solutions architects to extract maximum value. Their solution was to create an AI agent that emulates an expert solutions architect, capable of performing complex debugging workflows, optimizing prompts, generating evaluation templates, and educating users on platform features. Starting in November 2023 with GPT-3.5 and launching at their July 2024 conference, Alyx evolved from a highly structured, on-rails decision tree architecture to a more autonomous agent leveraging modern LLM capabilities. The team used their own platform to build and evaluate Alyx, establishing comprehensive evaluation frameworks across multiple levels (tool calls, tasks, sessions, traces) and involving cross-functional stakeholders in defining success criteria.
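The multi-level evaluation idea can be sketched as scores rolling up from the finest granularity (did the agent pick the right tool?) to the session level. The pass criteria, field names, and tool names below are hypothetical illustrations, not Arize's actual evaluation framework.

```python
def eval_tool_call(call: dict) -> bool:
    # Finest level: did the agent pick the expected tool with valid arguments?
    return call["tool"] == call["expected_tool"] and call["args_valid"]

def eval_task(task: dict) -> bool:
    # A task passes only if every tool call within it passes.
    return all(eval_tool_call(c) for c in task["calls"])

def eval_session(session: dict) -> float:
    # Session-level score: fraction of tasks completed correctly.
    tasks = session["tasks"]
    return sum(eval_task(t) for t in tasks) / len(tasks)

session = {
    "tasks": [
        {"calls": [{"tool": "search_traces", "expected_tool": "search_traces",
                    "args_valid": True}]},
        {"calls": [{"tool": "gen_eval_template", "expected_tool": "optimize_prompt",
                    "args_valid": True}]},
    ]
}
print(eval_session(session))  # -> 0.5
```

Evaluating at each level separately makes failures localizable: a low session score can be traced to the specific task and tool call that dragged it down.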