Mark43, a public safety technology company, integrated Amazon Q Business into their cloud-native platform to provide secure, generative AI capabilities for law enforcement agencies. The solution enables officers to perform natural language queries and generate automated case report summaries, reducing administrative time from minutes to seconds while maintaining strict security protocols and data access controls. The implementation leverages built-in data connectors and embedded web experiences to create a seamless, secure AI assistant within existing workflows.
Mark43 is a public safety technology company that provides cloud-native solutions for law enforcement agencies, including Computer Aided Dispatch (CAD), Records Management System (RMS), and analytics capabilities. Their platform serves first responders and command staff who need immediate access to relevant data across multiple systems while maintaining strict security protocols. The core challenge they addressed was enabling faster access to mission-critical information while reducing administrative burden on officers, allowing them to focus more time on serving their communities.
This case study demonstrates an approach to embedding generative AI capabilities directly into existing enterprise applications using Amazon Q Business, AWS’s managed generative AI assistant service. While the article is published on the AWS blog and features AWS employees as co-authors (which suggests some promotional intent), it provides useful technical details about how Mark43 implemented LLM-powered search and summarization in a production environment with stringent security requirements.
Mark43’s existing infrastructure is built on AWS using a microservices architecture that combines serverless technologies including AWS Lambda, AWS Fargate, and Amazon EC2. They employ event-driven architectures with real-time processing and purpose-built AWS services for data hosting and analytics. This modern cloud foundation was essential for enabling the AI integration described in this case study.
The generative AI implementation centers on Amazon Q Business, AWS’s managed generative AI assistant that can be connected to enterprise data sources. The architecture uses Amazon Q Business’s built-in data connectors to unify information from the platform’s various data sources into a single searchable knowledge base for the assistant.
A key technical benefit highlighted is that Amazon Q Business automatically uses data from these connected sources as context to answer user prompts, without requiring Mark43 to build and maintain a retrieval augmented generation (RAG) pipeline themselves. This is a significant operational advantage, as RAG implementations typically require substantial engineering effort to create, optimize, and maintain. The trade-off is lock-in to the Amazon Q Business ecosystem and its particular approach to RAG.
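To make concrete the engineering effort a managed service absorbs, here is a deliberately minimal sketch of the retrieval and prompt-assembly step a hand-rolled RAG pipeline would need. Lexical term overlap stands in for a real embedding index, and all documents and names are invented for illustration:

```python
import re

def tokenize(text: str) -> set[str]:
    """Lowercase and split on non-word characters (a crude token proxy)."""
    return set(re.findall(r"[a-z0-9-]+", text.lower()))

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the k documents sharing the most terms with the query."""
    score = lambda doc: len(tokenize(query) & tokenize(doc))
    return sorted(corpus, key=score, reverse=True)[:k]

def build_prompt(query: str, context: list[str]) -> str:
    """Assemble the context-augmented prompt sent to the LLM."""
    joined = "\n---\n".join(context)
    return (
        "Answer using only the context below.\n\n"
        f"Context:\n{joined}\n\nQuestion: {query}"
    )

corpus = [
    "Case 4411: vehicle theft reported on Elm Street, suspect unknown.",
    "Dispatch policy: priority-1 calls require two-officer response.",
    "Case 4412: noise complaint resolved without citation.",
]
prompt = build_prompt(
    "what is the two-officer response policy",
    retrieve("two-officer response policy", corpus),
)
```

A production pipeline would add document ingestion, chunking, an embedding model, a vector store, relevance tuning, and ongoing index maintenance; this is the work Amazon Q Business packages behind its connectors.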
The integration approach is notable for its simplicity. Amazon Q Business provides a hosted chat interface web experience with an AWS-hosted URL. Embedding this experience into Mark43’s existing web application required only adding an inline frame (iframe) HTML component with its src attribute pointing to the Amazon Q Business web experience URL. This low-code approach allowed Mark43 to focus on creating a rich AI experience for their customers rather than building infrastructure. The article claims the entire deployment (setting up the Amazon Q Business application, integrating data sources, embedding the application, and testing and tuning responses) reached a successful beta version in only “a few weeks.” While this timeline seems optimistic for a mission-critical public safety application, the managed nature of Amazon Q Business would indeed reduce implementation complexity compared to building a custom RAG solution.
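The embedding step itself is small enough to sketch in full. The helper below generates the iframe fragment; the URL is a placeholder, not Mark43’s actual endpoint, and note that Amazon Q Business also requires the hosting domain to be allowlisted in the web experience configuration before embedding works:

```python
def embed_snippet(web_experience_url: str,
                  height: str = "600px",
                  width: str = "100%") -> str:
    """Return the HTML fragment that embeds the hosted chat experience."""
    return (
        f'<iframe src="{web_experience_url}" '
        f'height="{height}" width="{width}" '
        'title="AI assistant"></iframe>'
    )

# Placeholder URL for illustration only.
html = embed_snippet("https://example.chat.qbusiness.us-east-1.on.aws/")
```

Because the chat UI, session handling, and model orchestration all live behind that URL, the host application’s integration surface is reduced to a single element.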
Security is particularly critical in public safety applications where data access must be strictly controlled. The implementation addresses this through several mechanisms:
The solution integrates with Mark43’s existing identity and access management protocols, ensuring that users can only access information they’re authorized to view. Importantly, the AI assistant respects the same data access restrictions that apply to users in their normal workflow—if a user doesn’t have access to certain data outside of Amazon Q Business, they cannot access that data through the AI assistant either.
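The access-control property described above can be illustrated with a small sketch (not Mark43’s code): candidate documents are filtered against the user’s existing entitlements before anything reaches the model as context, so the assistant can never surface a record the user could not already open. Roles and documents are invented for illustration:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Document:
    doc_id: str
    required_role: str
    text: str

def authorized_context(user_roles: set[str],
                       docs: list[Document]) -> list[Document]:
    """Keep only documents the user is already entitled to view."""
    return [d for d in docs if d.required_role in user_roles]

docs = [
    Document("rpt-1", "patrol", "Routine patrol summary."),
    Document("rpt-2", "investigations", "Sealed investigation notes."),
]
# A patrol officer sees only patrol-level records.
visible = authorized_context({"patrol"}, docs)
```

The important design point is that filtering happens at retrieval time, inheriting the platform’s existing permission model rather than duplicating it inside the AI layer.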
Amazon Q Business provides administrative controls and guardrails that Mark43 administrators can use to constrain the assistant’s behavior, including restricting its responses to approved enterprise data sources.
Mark43 explicitly states their commitment to responsible AI use, which includes transparency about AI interactions (informing users they’re interacting with an AI solution), recommending human-in-the-loop review for critical decisions, and limiting the AI assistant’s responses to authorized data sources only rather than drawing from general LLM knowledge. This last point is particularly important for public safety applications where accuracy and provenance of information are paramount.
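The two commitments above, disclosure that users are talking to an AI and refusal to answer from general model knowledge, translate into a simple guardrail pattern, sketched here with invented strings (in the real system an LLM would generate from the retrieved context):

```python
DISCLOSURE = "You are interacting with an AI assistant; verify critical details."

def answer(question: str, context: list[str]) -> str:
    """Refuse when no authorized context exists rather than fall back
    to the model's general knowledge."""
    if not context:
        return (f"{DISCLOSURE}\n"
                "No authorized data source contains an answer to this question.")
    # Surface the grounding documents to keep the sketch self-contained;
    # a real system would pass them to the LLM as generation context.
    return (f"{DISCLOSURE}\n"
            f"Based on {len(context)} authorized source(s): "
            + " | ".join(context))

grounded = answer("Status of case 4411?", ["Case 4411: vehicle recovered."])
refused = answer("Who won the 1986 World Series?", [])
```

Refusing on empty context is what makes every answer attributable to an authorized source, which is the provenance guarantee public safety workflows require.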
The stated benefits of the implementation include faster natural language access to information across systems and automated case report summaries that cut administrative time from minutes to seconds, freeing officers to spend more time serving their communities.
The article mentions positive reception at the International Association of Chiefs of Police (IACP) conference in Boston in October 2024, where agencies described the capability as a “game-changer” and recognized potential for enhancing investigations, driving real-time decision support, and increasing situational awareness. One agency noted the value for making officer training programs more efficient.
While the case study presents a compelling implementation, several aspects warrant critical consideration:
The article is published on the AWS blog with AWS employees as co-authors, which means it serves partially as promotional content for Amazon Q Business. The quantitative claims (such as “reducing administrative time from minutes to seconds”) are not supported by specific metrics or benchmarks from actual deployments.
The solution’s reliance on Amazon Q Business means that the core LLM capabilities and RAG implementation are essentially a black box managed by AWS. While this reduces operational burden, it also means Mark43 has limited visibility into and control over the model’s behavior beyond the guardrails and filters provided by the platform.
The rapid deployment timeline of “a few weeks” may not fully account for the ongoing work required to tune and optimize responses, handle edge cases, and ensure the system performs reliably in production. The article mentions “testing and tuning responses to prompts” but provides no detail on the evaluation methodology or ongoing monitoring approach.
Mark43 indicates plans to expand their Amazon Q Business integration with a focus on continuous improvements to the user experience. They also mention leveraging other AWS AI services beyond Amazon Q Business, suggesting a broader AI-powered platform evolution.
The case study represents an interesting example of embedding managed generative AI services into mission-critical applications in a regulated environment. The approach of using a managed service like Amazon Q Business with built-in data connectors and security controls offers a lower-barrier path to AI adoption compared to building custom RAG pipelines, though it does come with platform lock-in considerations. For organizations with similar requirements—particularly those already invested in the AWS ecosystem—this implementation pattern may serve as a useful reference for how to approach LLM integration in production environments where security and data governance are paramount.
This panel discussion brings together engineering leaders from HRS Group, Netflix, and Harness to explore how AI is transforming DevOps and SRE practices. The panelists address the challenge of teams spending excessive time on reactive monitoring, alert triage, and incident response, often wading through thousands of logs and ambiguous signals. The solution involves integrating AI agents and generative models into CI/CD pipelines, observability workflows, and incident management to enable predictive analysis, intelligent rollouts, automated summarization, and faster root cause analysis. Results include dramatically reduced mean time to resolution (from hours to minutes), elimination of low-level toil, improved context-aware decision making, and the ability to move from reactive monitoring to proactive, machine-speed remediation while maintaining human accountability for critical business decisions.
Beekeeper, a digital workplace platform for frontline workers, faced the challenge of selecting and optimizing LLMs and prompts across rapidly evolving models while personalizing responses for different users and use cases. They built an Amazon Bedrock-powered system that continuously evaluates multiple model/prompt combinations using synthetic test data and real user feedback, ranks them on a live leaderboard based on quality, cost, and speed metrics, and automatically routes requests to the best-performing option. The system also mutates prompts based on user feedback to create personalized variations while using drift detection to ensure quality standards are maintained. This approach resulted in 13-24% better ratings on responses when aggregated per tenant, reduced manual labor in model selection, and enabled rapid adaptation to new models and user preferences.
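Beekeeper’s leaderboard routing can be sketched as a composite score over quality, cost, and latency, with requests sent to the current best model/prompt combination. The weights, names, and numbers below are illustrative assumptions, not Beekeeper’s actual values:

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str        # model + prompt variant
    quality: float   # mean rating from synthetic tests and user feedback, 0-1
    cost: float      # $ per 1k requests (lower is better)
    latency_s: float # median latency in seconds (lower is better)

def composite(c: Candidate, w_q: float = 0.6,
              w_c: float = 0.2, w_l: float = 0.2) -> float:
    """Higher is better; cost and latency enter as penalties."""
    return w_q * c.quality - w_c * c.cost - w_l * c.latency_s

def route(leaderboard: list[Candidate]) -> Candidate:
    """Send the request to the combination currently ranked highest."""
    return max(leaderboard, key=composite)

board = [
    Candidate("model-a/prompt-1", quality=0.82, cost=0.40, latency_s=1.1),
    Candidate("model-b/prompt-2", quality=0.78, cost=0.05, latency_s=0.4),
]
best = route(board)
```

Because the metrics are refreshed continuously, a new model or a mutated prompt variant only needs to be added to the board to become routable once it outscores the incumbents.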
PDI Technologies, a global leader in convenience retail and petroleum wholesale, built PDIQ (PDI Intelligence Query), an AI-powered internal knowledge assistant to address the challenge of fragmented information across websites, Confluence, SharePoint, and other enterprise systems. The solution implements a custom Retrieval Augmented Generation (RAG) system on AWS using serverless technologies including Lambda, ECS, DynamoDB, S3, Aurora PostgreSQL, and Amazon Bedrock models (Nova Pro, Nova Micro, Nova Lite, and Titan Embeddings V2). The system features sophisticated document processing with image captioning, dynamic token management for chunking (70% content, 10% overlap, 20% summary), and role-based access control. PDIQ improved customer satisfaction scores, reduced resolution times, increased accuracy approval rates from 60% to 79%, and enabled cost-effective scaling through serverless architecture while supporting multiple business units with configurable data sources.
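PDIQ’s 70/10/20 token budget can be sketched as follows: each chunk’s budget is split into 70% fresh content, 10% overlap repeated from the tail of the previous chunk, and 20% reserved for an attached summary. Whitespace tokens stand in for real tokenizer output, and everything beyond the published ratios is an illustrative assumption:

```python
def chunk(tokens: list[str], budget: int = 100) -> list[dict]:
    """Split tokens into chunks under a 70/10/20 content/overlap/summary budget."""
    content_n = int(budget * 0.70)              # fresh content per chunk
    overlap_n = int(budget * 0.10)              # tail of the previous chunk
    summary_n = budget - content_n - overlap_n  # reserved for the summary
    chunks, i = [], 0
    while i < len(tokens):
        chunks.append({
            "overlap": tokens[max(0, i - overlap_n):i],
            "content": tokens[i:i + content_n],
            "summary_budget": summary_n,
        })
        i += content_n
    return chunks

tokens = [f"t{n}" for n in range(150)]
parts = chunk(tokens)
```

Reserving the summary allowance up front keeps every chunk within the embedding model’s context window even after its summary is attached, while the overlap preserves continuity across chunk boundaries.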