## Overview
This case study presents the ambitious digital transformation program of An Garda Síochána, Ireland's national police service, as described by Tim Willoughby, Head of Digital Service and Innovation. The presentation covers the organization's journey from deploying roughly 16,000 mobile devices to frontline officers through to implementing body-worn cameras and exploring AI capabilities for digital evidence processing. While the AI/LLM components are largely aspirational and pending legislative approval, the case study provides valuable insights into how a large government organization is preparing infrastructure and workflows for future AI integration in a highly regulated environment.
## The Problem Space
An Garda Síochána faced multiple challenges that drove this digital transformation initiative. When Tim joined in 2017, the organization was grappling with fundamental operational issues including problems counting breath tests and issues with fixed charge penalty notices being written off. Beyond these immediate problems, the broader challenge was bringing modern technology to a frontline workforce of approximately 14,000 officers in a way that was secure, scalable, and could support evidence collection and presentation in court.
A particularly illustrative example of these problems came after the November 2023 Dublin riots. Officers were tasked with identifying 99 individuals from approximately 40,000 hours of video footage. Without AI assistance, this required months of manual frame-by-frame analysis, and photos were ultimately published on the Garda website and the "Crime Call" television program to solicit public assistance. While all 99 individuals were eventually identified through this manual process with public help, the case highlighted the urgent need for AI-assisted video analysis capabilities.
## The Mobile-First Foundation
Before discussing AI capabilities, it's important to understand the foundational infrastructure that has been built. The organization has deployed roughly 16,000 managed mobile devices (15,825 at the time of the presentation). A key technical achievement was self-enrollment, which took two years to perfect: an officer receives a phone, enters their username and password, and the entire profile is configured automatically via cloud-based tooling. This capability proved crucial during the COVID-19 pandemic, when 5,500 phones were deployed by courier in the first six months without requiring in-person setup.
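The talk does not name the specific mobile device management product involved, but the self-enrollment flow can be illustrated with a short sketch: the device profile is derived entirely from the officer's directory record once they sign in. The app list, policy fields, and function names below are purely hypothetical.

```python
# Hypothetical sketch of the self-enrollment flow: the officer signs in on a
# factory-reset device and the management platform resolves a full device
# profile from their directory record. All names and fields are illustrative.

def resolve_profile(username: str, directory: dict) -> dict:
    """Build a device profile for the officer identified by username."""
    officer = directory[username]
    return {
        "owner": username,
        "station": officer["station"],
        "apps": [
            "fixed-charge-notices",
            "insurance-check",
            "999-incident-management",
            "person-vehicle-search",
        ],
        "policies": {
            "disk_encryption": True,
            "os_auto_update": True,
            "evidence_network_vpn": True,  # evidence traffic kept on its own network
        },
    }

if __name__ == "__main__":
    directory = {"j.murphy": {"station": "Store Street"}}
    print(resolve_profile("j.murphy", directory))
```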
The mobile platform now supports multiple applications, including:

- Fixed charge notices (traffic tickets)
- Real-time insurance database checks, which led to 15,000 uninsured vehicles being removed from the roads in one year
- 999 emergency incident management with driving directions
- Comprehensive search with UX-designed color coding (red for dangerous individuals, orange for important information)
- Integration with the Schengen system for European arrest warrants
- Investigation support tools
- Domestic violence reporting
A notable technical approach is the use of Samsung DeX technology, allowing officers to connect their phones to screens for a full desktop experience. This is deployed in small stations via docking, in cars via the vehicle screen, and at roadsides using "lap docks" (devices with screen, keyboard, and mouse but no operating system). The backend uses VMware Horizon on a cloud infrastructure.
## Body-Worn Camera Deployment
The body-worn camera initiative represents the most significant recent deployment. Following the November 2023 riots, the organization received impetus from the Department of Justice to proceed. Legislation was signed on December 5, 2023, and they turned around an EU tender in approximately six weeks. Notably, they chose three different vendors rather than one, deploying them in Dublin, Limerick, and Waterford to evaluate performance across different environments (urban vs. rural, large vs. small stations).
Technical challenges addressed included uniform modifications (adding ClickFast clips for secure camera attachment), station wiring (coordinated with the Office of Public Works, the OPW), designing a separate network exclusively for digital evidence that goes directly to the cloud, and implementing RFID-based camera assignment so officers can securely check out devices for their shifts.
The cameras feature a dual-storage system with a 30-second rolling buffer. The buffer continuously overwrites itself, but when an officer presses record, the preceding 30 seconds are attached to the start of the recording. This has proven valuable in situations where officers were attacked unexpectedly or witnessed crimes in progress: pressing record captures the preceding context that would otherwise be lost.
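The exact camera firmware is not described in the talk, but the pre-record behavior maps naturally onto a fixed-length ring buffer. The sketch below is a minimal, hypothetical Python illustration of that mechanism, not the vendor's implementation.

```python
from collections import deque

class PreRecordBuffer:
    """Illustrative sketch of a body-worn camera's rolling pre-record buffer.

    The camera keeps only the most recent `window_seconds` of frames; when the
    officer presses record, those buffered frames become the start of the clip.
    (Hypothetical structure: the actual vendor firmware is not described.)
    """

    def __init__(self, window_seconds: float = 30.0, fps: int = 25):
        self.window = deque(maxlen=int(window_seconds * fps))
        self.recording: list = []
        self.is_recording = False

    def on_frame(self, frame) -> None:
        if self.is_recording:
            self.recording.append(frame)
        else:
            # Older frames fall off the left end automatically (deque maxlen),
            # so at most the last 30 seconds are ever retained pre-record.
            self.window.append(frame)

    def press_record(self) -> None:
        # The buffered context is promoted into the evidential recording.
        self.recording = list(self.window)
        self.window.clear()
        self.is_recording = True
```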
The digital evidence system is designed for the full chain of custody through to court presentation. Officers can add metadata (tagging offenses as indictable or non-evidential), which determines retention periods. The DPP (Director of Public Prosecutions) and solicitors receive email invitations to view evidence securely, and officers have successfully presented digital evidence directly in court via laptop or phone connected to the court's Wi-Fi system.
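The talk notes that an officer's classification of a clip drives its retention period but does not specify the categories or durations, so the values in the following sketch are placeholders meant only to illustrate the pattern.

```python
# Illustrative sketch only: the classification an officer applies to a clip
# determines how long it is retained. The categories and durations below are
# placeholders, not the actual policy.
from datetime import date, timedelta

RETENTION_POLICY = {
    "indictable": timedelta(days=365 * 7),   # placeholder duration
    "summary": timedelta(days=365 * 2),      # placeholder duration
    "non_evidential": timedelta(days=31),    # placeholder duration
}

def deletion_date(classification: str, captured_on: date) -> date:
    """Return the earliest date the clip may be purged under the policy."""
    return captured_on + RETENTION_POLICY[classification]

print(deletion_date("non_evidential", date(2024, 6, 1)))
```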
## AI Capabilities: Current State and Aspirations
The most relevant LLMOps aspects of this case study relate to AI capabilities that are either in trial or awaiting legislative approval (the "Recording Devices Bill Amendment" which includes AI elements). Tim outlined a ten-level hierarchy of AI capabilities they are considering:
The first tier involves event detection using object recognition AI—identifying when doors open/close, bags are left unattended, or similar events. This is purely object-based AI without any facial recognition. The second tier covers vehicle and object search capabilities, citing a case (the "Fat Freddy Thompson case") that required four months of frame-by-frame searching to track a vehicle through footage.
The third tier addresses clustering and counting—useful for crowd management at venues like Croke Park or even commercial applications like detecting coffee queue lengths. The fourth and fifth tiers involve tracking individuals based on appearance (e.g., "blue hoodie with red bag") across video footage using object recognition rather than facial recognition.
The sixth and seventh tiers begin to involve facial features—detecting faces and matching them within the same evidence set (e.g., confirming the person at location A is the same person at location B). Importantly, this remains retrospective analysis of captured evidence rather than real-time processing, and doesn't involve comparison to external biometric databases.
The eighth and ninth tiers would involve searching against external databases, which would require court orders. The organization doesn't maintain its own facial database but may need this capability to participate in European policing systems (like Schengen) that are developing facial search capabilities. The tenth tier—real-time facial recognition—is explicitly stated as not being pursued and is precluded by the EU AI Act.
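One way to make the escalating safeguards concrete is to model the hierarchy as data, flagging which tiers touch facial features, external databases, or real-time processing. The sketch below is an interpretation of the talk, not an official taxonomy, and the field names are illustrative.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CapabilityTier:
    level: int
    description: str
    uses_facial_features: bool = False
    searches_external_databases: bool = False  # would require a court order
    real_time: bool = False                    # precluded by the EU AI Act

# An interpretation of the ten-level hierarchy described in the talk.
TIERS = [
    CapabilityTier(1, "Event detection (doors opening, unattended bags)"),
    CapabilityTier(2, "Vehicle and object search across footage"),
    CapabilityTier(3, "Clustering and counting for crowd management"),
    CapabilityTier(4, "Tracking a person by appearance (e.g. clothing)"),
    CapabilityTier(5, "Tracking a person by appearance across footage"),
    CapabilityTier(6, "Face detection within captured evidence", uses_facial_features=True),
    CapabilityTier(7, "Face matching within the same evidence set", uses_facial_features=True),
    CapabilityTier(8, "Search against external databases", uses_facial_features=True,
                   searches_external_databases=True),
    CapabilityTier(9, "Search against European policing systems", uses_facial_features=True,
                   searches_external_databases=True),
    CapabilityTier(10, "Real-time facial recognition (not being pursued)",
                   uses_facial_features=True, searches_external_databases=True,
                   real_time=True),
]
```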
## Generative AI and LLM Applications
Three specific generative AI capabilities are being trialed or considered, pending legislative approval:
**Real-time Translation**: Camera vendors offer live translation capabilities where an officer's conversation with a non-English speaker is sent to the cloud, detected for language, and translated bidirectionally. This is particularly relevant with Ireland's upcoming EU presidency bringing international visitors. The capability is being trialed and "works very well" but cannot be deployed live without a legislative basis.
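The vendor's translation services are not named in the talk, so the sketch below uses placeholder stubs for speech-to-text, language detection, and machine translation simply to show the bidirectional flow described.

```python
# Minimal sketch of the bidirectional translation loop described for the
# body-worn cameras. transcribe(), detect_language() and translate() are
# placeholder stubs standing in for unnamed cloud services.

def transcribe(audio_chunk: bytes) -> str:
    return "placeholder transcript"          # stand-in for a cloud STT call

def detect_language(text: str) -> str:
    return "pt"                              # stand-in for language detection

def translate(text: str, source: str, target: str) -> str:
    return f"[{source}->{target}] {text}"    # stand-in for machine translation

def handle_utterance(audio_chunk: bytes, officer_lang: str = "en",
                     public_lang: str = "pt") -> dict:
    """Translate a single utterance in whichever direction it was spoken."""
    text = transcribe(audio_chunk)
    source = detect_language(text)
    target = public_lang if source == officer_lang else officer_lang
    return {"source_language": source,
            "original": text,
            "translation": translate(text, source=source, target=target)}
```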
**Live Transcription**: Cameras can transcribe conversations in real-time, providing officers with full transcription and details. This relates to broader interview recording challenges—current legislation states interviews "may be recorded in writing," requiring manual transcription, while pending legislation (the Garda Powers Bill) would allow simply "recorded," enabling AI-assisted transcription.
**Automated Report Generation**: This is described as the most "compelling" capability, currently deployed in US law enforcement. The system goes beyond transcription to generate complete policing reports from interactions—including structured information like names, dates, times, locations, and witness details. Tim describes an interesting behavioral effect: officers knowing the system is listening ask better, more structured questions, effectively training themselves to gather more comprehensive information. This is characterized as "Pavlov's dog" in action, where the AI system improves human performance.
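The US products alluded to are not named and their internals are not described, but the general pattern of LLM-assisted report drafting can be sketched as follows. The example assumes an OpenAI-style chat completions API purely for illustration, and it keeps the output explicitly marked as a draft so that an officer must review and sign off before anything is filed.

```python
# Sketch of LLM-assisted report drafting from a body-camera transcript,
# assuming an OpenAI-style chat API for illustration; the actual US products
# referenced in the talk are not named and may work differently.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROMPT = """You draft first-pass policing reports from interaction transcripts.
Return JSON with keys: names, dates, times, locations, witnesses, summary.
Only use facts stated in the transcript; never infer or invent details."""

def draft_report(transcript: str) -> dict:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        response_format={"type": "json_object"},
        messages=[{"role": "system", "content": PROMPT},
                  {"role": "user", "content": transcript}],
    )
    draft = json.loads(response.choices[0].message.content)
    draft["status"] = "DRAFT - requires officer review and sign-off"  # human stays in the loop
    return draft
```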
## Infrastructure and Operational Considerations
The organization has built nine cloud instances (three vendors, each with test/dev, training, and production environments). They are researching multi-cloud architectures for redundancy, considering active-active software-as-a-service deployments across different vendors. While more expensive, the criticality of digital evidence—particularly for court proceedings—justifies this research.
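No concrete multi-cloud design is presented, but the active-active idea can be sketched as writing each evidence object to two independent back-ends and verifying matching hashes before acknowledging the upload. The provider clients below are hypothetical and imply no specific vendor.

```python
# Illustrative sketch of the active-active idea being researched: each
# evidence object is written to two independent cloud back-ends, and the
# upload is only acknowledged once both copies (and their hashes) agree.
import hashlib

class CloudStore:
    """Stand-in for a vendor's evidence-storage client."""
    def __init__(self, name: str):
        self.name, self.objects = name, {}
    def put(self, key: str, data: bytes) -> str:
        self.objects[key] = data
        return hashlib.sha256(data).hexdigest()

def store_evidence(key: str, data: bytes, primary: CloudStore, secondary: CloudStore) -> dict:
    digest_a = primary.put(key, data)
    digest_b = secondary.put(key, data)
    if digest_a != digest_b:
        raise RuntimeError("integrity mismatch between clouds")  # chain of custody at risk
    return {"key": key, "sha256": digest_a, "replicas": [primary.name, secondary.name]}

print(store_evidence("clip-0001.mp4", b"...", CloudStore("vendor-a"), CloudStore("vendor-b")))
```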
A key philosophical approach described is "computer in the middle" rather than "man in the middle." All systems maintain human oversight—for example, speed camera violations are reviewed by three people before issuing penalties. The challenge they anticipate is balancing AI automation with the principle that humans make final decisions, particularly as they're asked to implement more AI-driven traffic enforcement.
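The "computer in the middle" principle can be expressed as a simple workflow gate: the system may propose an enforcement action, but it cannot issue until the required number of human reviewers approve. The sketch below uses the three-reviewer speed-camera example from the talk; the class and field names are illustrative.

```python
# Sketch of the "computer in the middle" principle: the system proposes an
# enforcement action, but it is only issued after a required number of human
# reviewers approve it (three, in the speed-camera example from the talk).
from dataclasses import dataclass, field

@dataclass
class ProposedPenalty:
    detection_id: str
    required_approvals: int = 3
    approvals: list = field(default_factory=list)

    def approve(self, reviewer: str) -> None:
        if reviewer not in self.approvals:
            self.approvals.append(reviewer)

    def may_issue(self) -> bool:
        # The computer prepares the case; humans make the final decision.
        return len(self.approvals) >= self.required_approvals

penalty = ProposedPenalty("cam-042-20240601-0001")
for reviewer in ["garda_a", "garda_b", "garda_c"]:
    penalty.approve(reviewer)
print(penalty.may_issue())  # True only after three distinct reviewers approve
```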
## Regulatory and Ethical Considerations
The presentation reflects careful navigation of regulatory requirements including GDPR, working with the Data Protection Commissioner through formal Data Protection Impact Assessments (DPIAs). The 30-second buffer required specific engagement with the DPC to address privacy concerns. The organization clearly distinguishes between what current legislation permits and what future legislation might enable, maintaining a careful line between research/trials and production deployment.
Tim acknowledges public fear about policing overreach, referencing international headlines, and frames his role as putting "moderation in terms of serving the public interest." The approach emphasizes that this is about efficiency and enabling officers rather than surveillance—citing examples like reducing the months of manual video analysis that would otherwise consume officer time.
## Lessons Learned and Approach
Several key lessons emerge from this case study. The "love the problem, not the solution" philosophy drives starting with understanding frontline needs rather than implementing technology for its own sake. Building diverse teams is emphasized—"if you get a group of engineers in a room together... it's not going to happen because they're trained to do the same thing."
The user experience team continuously engages with frontline members, observing "a day in the life" in patrol cars and stations. The app-first approach is explicitly contrasted with traditional IT methodology: rather than writing specifications and building backend systems first, they built mobile apps to digitize paper processes, captured real data, and then worked backward to design proper reporting and backend systems. This is described as handling "complex" problems (where solutions emerge from understanding) versus "complicated" problems (where solutions can be specified in advance).
The presentation concludes with the observation that "the future is now here" and that continuous adaptation is necessary—framing digital transformation as a journey rather than a destination.