ZenML

Multi-Agent Orchestration for Automated Sales Proposal Generation

Fujitsu 2025
Fujitsu developed an AI-powered solution to automate sales proposal creation using Azure AI Agent Service and Semantic Kernel to orchestrate multiple specialized AI agents. The system integrates with existing tools and knowledge bases to retrieve and synthesize information from dispersed sources. The implementation resulted in a 67% increase in productivity for sales proposal creation, allowing sales teams to focus more on strategic customer engagement.

Industry

Tech

Overview

Fujitsu, a Japan-based global leader in information and communication technology (ICT), undertook a significant initiative to automate the creation of sales proposals using AI agents. The company faced a common enterprise challenge: sales teams were spending excessive time on the manual, repetitive task of generating proposals, which limited their capacity for high-value activities such as strategic planning, customer relationship building, and tailoring solutions to specific client needs. New salespeople in particular struggled to navigate Fujitsu’s extensive product portfolio and locate the dispersed expertise needed to create accurate, client-specific proposals.

The solution developed leverages Microsoft’s Azure AI Agent Service within Azure AI Foundry to create an intelligent, scalable AI agent for sales automation. This case study offers valuable insights into how enterprises can deploy multi-agent AI systems in production environments at significant scale, serving approximately 38,000 users across the organization.

Technical Architecture and Implementation

At the core of the solution is Fujitsu Kozuchi Composite AI, which is powered by Microsoft’s Semantic Kernel. This represents an interesting architectural choice for multi-agent orchestration in production. The system employs multiple specialized AI agents, coordinated by an orchestrator AI, that work as a team to answer questions and generate proposals.
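Fujitsu's actual implementation is not published, but the coordination pattern described above — an orchestrator delegating subtasks to specialist agents and assembling their outputs — can be sketched in plain Python. All names here are illustrative, not taken from the case study; real agents would wrap LLM calls and internal knowledge sources.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Agent:
    """A specialist agent: a name plus a handler for one kind of subtask."""
    name: str
    handle: Callable[[str], str]

class Orchestrator:
    """Coordinates specialist agents to answer a composite request."""

    def __init__(self, agents: list[Agent]):
        self.agents = {a.name: a for a in agents}

    def run(self, plan: list[tuple[str, str]]) -> str:
        # A plan is a list of (agent_name, subtask). In production, the
        # orchestrator LLM would derive this plan from the user's request.
        results = [self.agents[name].handle(task) for name, task in plan]
        return "\n".join(results)

# Illustrative specialists; real ones would query product catalogs, CRM data, etc.
product = Agent("product", lambda q: f"[product info for: {q}]")
pricing = Agent("pricing", lambda q: f"[pricing estimate for: {q}]")

orchestrator = Orchestrator([product, pricing])
proposal = orchestrator.run([("product", "storage lineup"), ("pricing", "500TB tier")])
print(proposal)
```

The key property is that each specialist is an independent unit behind a narrow interface, which is what makes separate development, testing, and replacement possible.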

The lead engineer, Hirotaka Ito, explicitly noted that “conventional generative AI, conversational AI, and retrieval-augmented generation (RAG) systems alone didn’t meet our needs.” This acknowledgment is significant from an LLMOps perspective, as it highlights the limitations of simpler approaches for complex enterprise workflows. The team found that a multi-agent architecture was necessary to handle the complexity of their use case, which required dynamically retrieving and synthesizing knowledge from scattered internal sources while ensuring proposals remained tailored and data-driven.

The integration between Azure AI Agent Service and Azure AI Search was highlighted as particularly valuable for streamlining knowledge retrieval and organization. According to the case study, this integration was straightforward to implement from the Azure portal, which speaks to the operational advantages of using a platform-as-a-service (PaaS) approach for AI deployment. The ease of integration is an important consideration for LLMOps practitioners who need to balance development velocity with system complexity.

Production Deployment Considerations

Several aspects of this deployment are noteworthy from an LLMOps perspective. First, the solution was designed to integrate seamlessly into Fujitsu’s existing workflows, leveraging Microsoft tools already familiar to the approximately 38,000 employees who would use the system. This approach to integration is crucial for enterprise AI deployments, as user adoption often hinges on minimizing friction and leveraging existing tool familiarity.

The project followed a responsible development approach that included a proof-of-concept phase where Fujitsu refined the AI agent using feedback from sales teams. This iterative optimization for usability before full deployment represents a best practice in LLMOps, ensuring that the system meets real user needs rather than just technical requirements.

Beyond proposal generation, the AI agent serves as a knowledge retrieval system that provides new hires with detailed product information and strategic guidance. This dual-purpose design maximizes the return on investment in the AI infrastructure while addressing multiple business needs.

Results and Impact Assessment

The case study reports a 67% productivity improvement in sales proposal creation, which Fujitsu claims has freed up countless hours that can be redirected toward customer engagement and strategic planning. While this is an impressive figure, it’s worth noting that this is a Microsoft customer story published on Microsoft’s platform, so the claims should be viewed with appropriate context. The specific methodology for calculating this productivity gain is not detailed in the case study.

That said, sales teams reportedly praised the tool for addressing knowledge gaps and enabling stronger customer relationships, suggesting genuine user satisfaction with the deployed system. The fact that the learnings from this internally-facing project are also informing other AI-based initiatives at Fujitsu, including their customer-facing Composite AI platform, indicates the organization sees real value in the approach.

Multi-Agent Architecture Insights

The architectural decision to use multiple specialized AI agents coordinated by an orchestrator is a significant one that reflects emerging patterns in enterprise AI deployment. Rather than relying on a single large language model to handle all aspects of proposal generation, Fujitsu’s approach decomposes the problem into specialized components that can be independently developed, tested, and optimized.

Semantic Kernel serves as the orchestration layer in this architecture, enabling the coordination of multiple agents to complete complex tasks. This approach offers several operational advantages: it allows for more targeted updates to individual agents without affecting the entire system, enables more granular monitoring and debugging, and can provide better performance for specialized tasks compared to a single generalist model.
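One concrete way to realize the "granular monitoring and debugging" advantage is to instrument each agent call separately, so latency and failures can be attributed to a single component rather than to the system as a whole. A minimal sketch of that idea, with hypothetical names (the case study does not describe Fujitsu's observability setup):

```python
import time
from typing import Callable

def instrument(agent_name: str, handle: Callable[[str], str]) -> Callable[[str], str]:
    """Wrap an agent's handler so every call is timed and logged per agent."""
    def wrapped(task: str) -> str:
        start = time.perf_counter()
        status = "ok"
        try:
            return handle(task)
        except Exception:
            status = "error"
            raise
        finally:
            elapsed_ms = (time.perf_counter() - start) * 1000
            # In production this record would go to a metrics/tracing backend.
            print(f"agent={agent_name} status={status} latency_ms={elapsed_ms:.1f}")
    return wrapped

# Wrap a toy agent handler; the wrapped version behaves identically
# but emits one telemetry line per call.
summarize = instrument("summarizer", lambda t: t.upper())
print(summarize("draft proposal section"))
```

Because the wrapper is applied per agent, one misbehaving specialist shows up immediately in its own metrics instead of being averaged into system-wide numbers.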

The orchestrator AI plays a crucial role in understanding user requests and selecting the appropriate combination of agents to fulfill each request. As Fujitsu describes it for their Composite AI platform: “Composite AI understands user requests by natural language input and selects the best AI technologies from our environment to create efficient solutions.” This natural language routing capability is an important aspect of making complex multi-agent systems accessible to non-technical users.
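The routing step — mapping a natural-language request to the right combination of agents — is typically itself an LLM call. The sketch below substitutes a simple keyword match to stay self-contained; the agent names and keywords are illustrative, not from the case study.

```python
# Map request keywords to the specialist agents that should handle them.
ROUTES = {
    "price": "pricing_agent",
    "cost": "pricing_agent",
    "product": "product_agent",
    "compliance": "compliance_agent",
}

def route(request: str) -> list[str]:
    """Select the agents for a request. An orchestrator LLM would normally
    make this decision from the natural-language input; keyword matching
    stands in for it here."""
    request_lower = request.lower()
    selected = {agent for keyword, agent in ROUTES.items() if keyword in request_lower}
    # Fall back to a generalist agent when no specialist matches.
    return sorted(selected) or ["general_agent"]

print(route("Draft a proposal covering product fit and cost breakdown"))
# → ['pricing_agent', 'product_agent']
```

Whatever implements the routing decision, exposing it behind a single function like this keeps the selection logic testable independently of the agents it dispatches to.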

Future Directions and Scalability

Fujitsu has articulated plans to expand the use of AI agents to address broader organizational challenges, including automating strategic planning, improving customer interaction, and enhancing knowledge sharing. Future iterations aim to optimize collaboration among AI agents and tackle more complex tasks, suggesting an evolutionary approach to expanding the system’s capabilities.

The modular design and orchestration features of Azure AI Agent Service are seen as enablers for expanding these capabilities over time. This extensibility is an important consideration for LLMOps, as AI systems need to evolve and scale as organizational needs change and as new capabilities become available.

The vision articulated by the lead engineer—that “AI will not just assist with tasks but also generate strategies and proposals”—suggests a long-term roadmap toward more autonomous AI systems. While this represents an ambitious goal, the current deployment provides a solid foundation for incremental advancement.

Considerations and Caveats

As with any vendor-published case study, some caution is warranted in interpreting the results. The 67% productivity improvement is presented without detailed methodology, and the case study is published on Microsoft’s customer stories platform, which naturally presents the partnership in a favorable light. Independent verification of these results is not available.

Additionally, the case study does not discuss potential challenges such as handling edge cases, managing hallucinations or inaccuracies in generated proposals, governance and compliance considerations, or the ongoing operational costs of running the system. These are important aspects of LLMOps that organizations considering similar deployments would need to carefully evaluate.

That said, the architectural approach—using multi-agent orchestration with Semantic Kernel, integrating with existing enterprise search infrastructure, and conducting proof-of-concept validation before full deployment—represents sound LLMOps practices that other organizations could learn from. The scale of deployment (38,000 users) also suggests that the solution has been operationalized successfully, even if the specific operational details are not fully disclosed in the case study.
