Company
DTDC
Title
Conversational AI Agent for Logistics Customer Support
Industry
Other
Year
2025
Summary (short)
DTDC, India's leading integrated express logistics provider, transformed their rigid logistics assistant DIVA into DIVA 2.0, a conversational AI agent powered by Amazon Bedrock, to handle over 400,000 monthly customer queries. The solution addressed limitations of their existing guided workflow system by implementing Amazon Bedrock Agents, Knowledge Bases, and API integrations to enable natural language conversations for tracking, serviceability, and pricing inquiries. The deployment resulted in 93% response accuracy and reduced customer support team workload by 51.4%, while providing real-time insights through an integrated dashboard for continuous improvement.
## Company and Use Case Overview

DTDC is India's leading integrated express logistics provider, with the largest network of customer access points in the country. The company runs technology-driven logistics operations across diverse industry verticals, and the scale is substantial: DTDC Express Limited receives over 400,000 customer queries monthly, covering tracking requests, serviceability checks, and shipping rate inquiries.

The challenge DTDC faced centered on its existing logistics agent, DIVA, which operated on a rigid, guided workflow system. This inflexible approach forced users down structured paths rather than allowing natural, dynamic conversations, and it produced several operational inefficiencies: an increased burden on customer support teams, longer resolution times, and a poor overall customer experience. DTDC recognized the need for a more flexible, intelligent assistant that could understand context, manage complex queries, and improve efficiency while reducing reliance on human agents.

To address these challenges, DTDC partnered with ShellKode, an AWS Partner specializing in modernization, security, data, generative AI, and machine learning. Together, they developed DIVA 2.0, a generative AI-powered logistics agent built on Amazon Bedrock services.

## Technical Architecture and LLMOps Implementation

The solution architecture takes a comprehensive approach to LLMOps, combining multiple AWS services into a cohesive system designed for production-scale operation. At the core of DIVA 2.0 are Amazon Bedrock Agents, which act as the orchestration layer of the conversational AI system: they receive user requests and interpret intent using natural language understanding.

The system employs Anthropic's Claude 3.0 as the primary large language model, accessed through Amazon Bedrock. This choice reflects a strategic decision to leverage a proven foundation model while retaining the flexibility and security benefits of AWS's managed service approach. The LLM processes context from retrieved data and generates meaningful responses for users, demonstrating effective prompt engineering and response generation in a production environment.

A critical component of the implementation is the knowledge base architecture. The system uses Amazon Bedrock Knowledge Bases with Amazon OpenSearch Service as the vector store. The knowledge base holds web-scraped content from the DTDC website, internal support documentation, FAQs, and operational data, all processed into vector embeddings that enable semantic similarity search. When users submit general queries, the system performs retrieval-augmented generation (RAG) to ground its answers in this stored knowledge.

The API integration layer is another substantial piece of the implementation. Based on the interpreted user intent, the Amazon Bedrock agent triggers the appropriate AWS Lambda functions, which interface with backend systems: the Tracking System API for real-time shipment status, the Delivery Franchise Location API for service availability checks, the Pricing System API for shipping rate calculations, and the Customer Care API for support ticket creation. This integration demonstrates how LLMs can be connected to existing business systems to produce actionable responses.
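To make the action-group mechanics concrete, the sketch below shows what a tracking handler might look like: Bedrock Agents invoke the Lambda function with the interpreted parameters in a structured event and expect the result back in a specific response envelope. The `get_shipment_status` helper and the `consignment_number` parameter name are hypothetical stand-ins, since the case study does not publish DIVA 2.0's actual action group schema.

```python
import json


def get_shipment_status(consignment_number: str) -> dict:
    # Hypothetical stub for the real Tracking System API call.
    return {"consignment": consignment_number, "status": "In transit"}


def lambda_handler(event, context):
    """Bedrock Agents action-group handler for shipment tracking.

    The agent passes interpreted parameters as a list of
    {name, type, value} dicts and expects the response in the
    envelope format shown below.
    """
    params = {p["name"]: p["value"] for p in event.get("parameters", [])}
    status = get_shipment_status(params["consignment_number"])

    return {
        "messageVersion": "1.0",
        "response": {
            "actionGroup": event["actionGroup"],
            "apiPath": event["apiPath"],
            "httpMethod": event["httpMethod"],
            "httpStatusCode": 200,
            "responseBody": {
                "application/json": {"body": json.dumps(status)}
            },
        },
    }
```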
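The general-inquiry path, which flows through the knowledge base rather than an action group, can be sketched in the same spirit using the SDK's retrieve-and-generate flow. The knowledge base ID, region, and model choice (Claude 3 Sonnet) are placeholders rather than DIVA 2.0's published configuration.

```python
import boto3

# Bedrock Agents runtime client; the region is an assumption.
client = boto3.client("bedrock-agent-runtime", region_name="ap-south-1")


def answer_general_query(query: str) -> str:
    """Answer a general query via RAG over a Bedrock knowledge base."""
    response = client.retrieve_and_generate(
        input={"text": query},
        retrieveAndGenerateConfiguration={
            "type": "KNOWLEDGE_BASE",
            "knowledgeBaseConfiguration": {
                "knowledgeBaseId": "KB_ID_PLACEHOLDER",
                "modelArn": (
                    "arn:aws:bedrock:ap-south-1::foundation-model/"
                    "anthropic.claude-3-sonnet-20240229-v1:0"
                ),
            },
        },
    )
    return response["output"]["text"]


print(answer_general_query("Which pin codes does DTDC serve in Bengaluru?"))
```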
## Production Deployment and Scalability

The deployment architecture follows several LLMOps best practices for production systems. The logistics agent front end is hosted as a static website on Amazon S3 behind Amazon CloudFront, ensuring global availability and performance. Backend processing runs on AWS App Runner, which automatically scales the web application, API services, and backend web services.

User interactions follow a well-defined flow: users access the agent through the DTDC website, submit natural language queries, and receive responses through the App Runner-hosted API services. This architecture sustains DTDC's high interaction volume of more than 400,000 monthly queries with consistent performance and availability.

Data persistence and analytics are handled by Amazon RDS for PostgreSQL, which stores query interactions and their associated responses. This store powers the dashboard and provides the foundation for continuous improvement of the AI system. The dashboard itself is a separate static website backed by Amazon API Gateway and Lambda, reflecting a microservices approach to LLMOps infrastructure.

## Monitoring, Logging, and Governance

The implementation includes the monitoring and governance capabilities essential for production AI systems. Amazon CloudWatch Logs captures key events throughout the system, including intent recognition, Lambda invocations, API responses, and fallback triggers, supporting both operational monitoring and auditing. AWS CloudTrail adds governance by recording activity across the AWS account, including actions taken by users, roles, and AWS services, creating the audit trail needed for compliance and security. Amazon GuardDuty continuously analyzes AWS data sources and logs to flag suspicious activity, a proactive stance that acknowledges how much production LLM deployments depend on robust threat detection.

## Performance Metrics and Evaluation

The case study reports specific performance figures. DIVA 2.0 achieves 93% response accuracy, a significant improvement over the previous rigid system that points to effective prompt engineering, knowledge base curation, and model configuration.

The operational impact is substantial: the system reduced the burden on customer support teams by 51.4%. Analysis of three months of dashboard data shows that 71% of inquiries were consignment-related (256,048 queries) and 29.5% were general inquiries (107,132 queries). Of the consignment inquiries, 51.4% (131,530) were resolved without a support ticket, while 48.6% (124,518) still required human intervention.

The query flow analysis offers insight into user behavior: 40% of inquiries that ended in tickets started with the customer support center before moving to the AI assistant, while 60% began with the assistant before involving human support. This pattern suggests users are gaining confidence in the AI system while keeping access to human support when they need it.
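Metrics of this kind can be computed directly from the PostgreSQL interaction store that backs the dashboard. A minimal sketch, assuming a hypothetical `interactions` table with `category`, `ticket_created`, and `created_at` columns (the actual schema is not published):

```python
import os

import psycopg2

# The DSN and the interactions schema are illustrative assumptions.
conn = psycopg2.connect(os.environ["DASHBOARD_DB_DSN"])

RESOLUTION_SQL = """
    SELECT category,
           COUNT(*) AS total,
           COUNT(*) FILTER (WHERE NOT ticket_created) AS self_served,
           ROUND(100.0 * COUNT(*) FILTER (WHERE NOT ticket_created)
                 / COUNT(*), 1) AS self_service_rate_pct
    FROM interactions
    WHERE created_at >= now() - interval '3 months'
    GROUP BY category;
"""

with conn, conn.cursor() as cur:
    cur.execute(RESOLUTION_SQL)
    for category, total, self_served, rate in cur.fetchall():
        print(f"{category}: {self_served}/{total} "
              f"resolved without a ticket ({rate}%)")
```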
## Knowledge Management and Continuous Improvement

The knowledge base implementation reflects a mature approach to knowledge management in LLMOps. The system stays current through web scraping of the DTDC website and integration of internal documentation, and the vector embedding approach in Amazon OpenSearch Service enables semantic search that goes well beyond keyword matching.

The fallback handling mechanism shows thoughtful design for production AI systems. When the knowledge base cannot supply relevant information, the system returns a preconfigured response stating that it cannot assist with the request. This prevents hallucination and maintains user trust by being transparent about the system's limits.
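One common way to implement such a guardrail is to inspect the retrieval results and their relevance scores before generating, and return the preconfigured message when nothing sufficiently relevant comes back. A minimal sketch, with a hypothetical score threshold and fallback wording:

```python
import boto3

client = boto3.client("bedrock-agent-runtime", region_name="ap-south-1")

FALLBACK_MESSAGE = (
    "Sorry, I can't help with that request. "
    "Would you like me to connect you with customer support?"
)

# Relevance threshold is a hypothetical tuning parameter.
MIN_SCORE = 0.4


def retrieve_or_fallback(query: str, kb_id: str) -> str:
    """Return retrieved context, or the preconfigured fallback message
    when the knowledge base has nothing sufficiently relevant."""
    results = client.retrieve(
        knowledgeBaseId=kb_id,
        retrievalQuery={"text": query},
        retrievalConfiguration={
            "vectorSearchConfiguration": {"numberOfResults": 5}
        },
    )["retrievalResults"]

    relevant = [r for r in results if r.get("score", 0.0) >= MIN_SCORE]
    if not relevant:
        return FALLBACK_MESSAGE

    # In the full system these passages would be handed to Claude for
    # grounded generation; here we simply return them.
    return "\n\n".join(r["content"]["text"] for r in relevant)
```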
The integrated dashboard provides real-time insight into logistics agent performance, including accuracy metrics, unresolved queries, query categories, session statistics, and user interaction data. Heat maps, pie charts, and session logs support continuous monitoring and improvement, a data-driven approach that represents LLMOps best practice for maintaining production systems over time.

## Challenges and Implementation Considerations

The implementation faced several challenges common to LLMOps deployments. Integrating real-time data from multiple legacy systems required careful API design and error handling to keep information accurate and current. The team addressed the complexity of logistics terminology and multi-step queries through prompt engineering and model fine-tuning with industry-specific data.

Moving from the old rigid DIVA system to the conversational interface demanded careful change management to maintain service continuity and preserve historical data, likely involving parallel operation during the transition and a gradual migration of users to the new interface. Scaling to more than 400,000 monthly queries while maintaining performance was a significant engineering challenge in its own right; AWS managed services, particularly Amazon Bedrock Agents and the serverless components, supplied the necessary scalability and performance characteristics.

## Business Impact and ROI Considerations

While the case study presents impressive results, it is worth noting that this is an AWS-sponsored publication that may emphasize positive outcomes. The claimed 93% accuracy and 51.4% reduction in support load are significant improvements, but such metrics should be validated through independent measurement and longer-term observation.

The business impact extends beyond immediate cost savings to a better customer experience through faster response times and 24/7 availability. The system's ability to handle natural language queries and return contextually relevant responses is a substantial upgrade over the previous guided workflow. The investment likely involved significant costs for AWS services, development resources, and ongoing maintenance, and organizations considering similar deployments should carefully evaluate the total cost of ownership, including not just the technology but also the expertise required for implementation and ongoing optimization of the AI system.
