Company
AstraZeneca / Adobe / Allianz Technology
Title
Enterprise GenAI Implementation Strategies Across Industries
Industry
Other
Year
Summary (short)
A panel discussion featuring leaders from AstraZeneca, Adobe, and Allianz Technology sharing their experiences implementing GenAI in production. The case study covers how these enterprises prioritized use cases, managed legal considerations, and scaled AI adoption. Key successes included AstraZeneca's viral research assistant tool, Adobe's approach to legal frameworks for AI, and Allianz's code modernization efforts. The discussion highlights the importance of early legal engagement, focusing on impactful use cases, and treating AI implementation as a cultural transformation rather than just a tool rollout.
# Enterprise AI Adoption Panel: AstraZeneca, Adobe, and Allianz Technology

This case study synthesizes insights from a panel discussion featuring executives from three major enterprises—AstraZeneca (pharmaceutical), Adobe (creative technology), and Allianz Technology (insurance/financial services IT)—who shared their experiences deploying generative AI and LLMs in production environments. The panel, hosted at an AWS conference, provides a cross-industry perspective on LLMOps challenges, solutions, and best practices for enterprise AI adoption.

## Overview and Context

The panel brought together leaders with distinct roles: Anna Berg Asberg (VP of R&D IT at AstraZeneca), Rockett (Legal CTO at Adobe), and Axel Schell (CTO of Allianz Technology). Each organization is at a different stage of AI maturity and faces unique regulatory and operational constraints, yet common themes emerged around use case prioritization, stakeholder engagement, cultural transformation, and technical architecture decisions.

## AstraZeneca: Research Assistant for Literature Search

AstraZeneca's primary GenAI success story centers on a research assistant designed to accelerate drug discovery by helping scientists navigate vast amounts of internal experimental data and external published literature. The company's mission—to eliminate cancer as a cause of death and help patients with chronic and rare diseases—drove the prioritization of use cases that could meaningfully accelerate medicines reaching patients.

### Development Approach

The organization took a structured approach to GenAI adoption by first gathering senior R&D leaders and providing them with training, including what Anna described as a "bootcamp" with hands-on prompt engineering exercises. This executive education phase was critical for building understanding of what the technology could actually accomplish, enabling leadership to make informed decisions about which problems to tackle first. Rather than selecting the highest-impact problems immediately, they pragmatically chose problems that were suitable for GenAI as a first wave.

The research assistant was built to perform literature search across both internal proprietary experimental data and externally published scientific literature. A key requirement was that responses needed to be "trustable" and delivered "in a reasoned way"—suggesting the implementation likely incorporated retrieval-augmented generation (RAG) patterns to ground responses in source documents and provide citations; a minimal sketch of such a pattern follows below.
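The panel did not describe the system's internals, so the following is only a minimal sketch of a grounded-answer-with-citations flow of the kind implied above. The toy keyword retriever, the in-memory document records, and the `call_llm` stub are all illustrative assumptions, not AstraZeneca's actual implementation.

```python
# Minimal RAG sketch: retrieve candidate passages, then constrain the model
# to answer only from them and cite sources. All names are illustrative.

def retrieve(query: str, documents: list[dict], k: int = 3) -> list[dict]:
    """Toy keyword-overlap retriever; a production system would use
    embeddings and a vector store instead."""
    terms = set(query.lower().split())
    ranked = sorted(
        documents,
        key=lambda d: len(terms & set(d["text"].lower().split())),
        reverse=True,
    )
    return ranked[:k]

def call_llm(prompt: str) -> str:
    """Stub standing in for a hosted model call (e.g., an Amazon Bedrock
    invocation); replace with a real client in practice."""
    return "[model response would appear here]"

def answer_with_citations(query: str, documents: list[dict]) -> str:
    """Ground the answer in retrieved passages and require citations."""
    passages = retrieve(query, documents)
    context = "\n".join(f"[{d['source']}] {d['text']}" for d in passages)
    prompt = (
        "Answer the question using ONLY the passages below, citing the "
        "[source] tag for every claim. If the passages are insufficient, "
        "say so rather than guessing.\n\n"
        f"Passages:\n{context}\n\nQuestion: {query}"
    )
    return call_llm(prompt)
```

Grounding plus mandatory citations is what lets a scientist verify a claim against the underlying paper or experiment, which is the "trustable, reasoned" property the team required.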
### Rapid Scaling and Viral Adoption

The production deployment story is particularly instructive for LLMOps practitioners. The initial pilot included only 40 users, with an expected 9-month timeline to full production readiness. However, pilot user feedback dramatically accelerated this timeline. A young scientist named Holly directly challenged leadership, asking why her peers couldn't access a tool that was saving her "weeks, probably months" on each literature search task.

When senior leaders demanded immediate scaling, the team was able to respond because they had engaged legal teams early and had already completed regulatory groundwork in major countries. Within one to two weeks, they conducted a "silent launch" that resulted in thousands of users overnight. This case demonstrates the importance of parallel workstreams—running legal, compliance, and works council approvals alongside technical development rather than sequentially.

### Key LLMOps Lessons from AstraZeneca

Anna emphasized several critical lessons for production AI systems. First, the importance of early legal engagement cannot be overstated—this proved to be the primary enabler of rapid scaling. Second, organizations should prepare for global deployment challenges upfront, recognizing that getting solutions approved in China differs dramatically from the requirements in the UK, Sweden, or Poland. She recommended seeking "platform approvals" that cover a wider range of solutions rather than narrow point solutions, reducing the need to restart approval processes for each new application.

The organization also practiced disciplined experimentation. When approaches didn't work on the first attempt, they stopped quickly rather than persisting with failing solutions. However, they maintained focus on the underlying problem—just because one technical approach failed didn't mean the problem wasn't worth solving. This iterative mindset treated failed experiments as "step one" rather than failures.

## Adobe: Legal Framework for AI Governance

Adobe's perspective, represented by Rockett (a software engineer turned copyright attorney), focused on the governance and legal frameworks necessary for deploying GenAI products both internally and externally. As a company whose products serve the creative community, Adobe faced heightened scrutiny around training data provenance and copyright concerns.

### The Licensed Content Decision

When developing Firefly, their generative imaging model, Adobe made an early strategic decision to train only on licensed content. This decision had cascading positive effects: it made the model "commercially safe," enabled Adobe to offer indemnification to customers, and generated significant goodwill with the creative community. While this approach may have constrained model capabilities compared to training on broader internet data, it provided a defensible position in an uncertain regulatory environment.

### The A-F Framework for Use Case Evaluation

Adobe developed a structured framework for evaluating AI use cases that provides granularity for decision-making. Rather than binary approve/reject decisions, the framework enables tuning across six dimensions (a sketch of how such a review might be recorded appears at the end of this section):

- **A**: What team wants to use this?
- **B**: Using what technology?
- **C**: Using what input data (public vs. confidential)?
- **D**: What will be the output data?
- **E**: For what audience?
- **F**: For what final objective?

This framework shifts the conversation from "can we do this?" to "how do we tune these six elements to enable this?" It recognizes that legal isn't a speed bump but an accelerator when engaged properly—legal teams themselves are users who want GenAI to help automate their own work.

### Legal as an Accelerator

Rockett strongly advocated for early legal engagement, noting that legal reviews running in parallel with development actually accelerate time-to-value. This reframes the traditional IT-legal relationship, where legal reviews are often seen as gates at the end of development. When legal understands they are also users and beneficiaries of AI capabilities, their motivation shifts from risk avoidance to collaborative enablement.
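The panel described the framework only as six questions. Purely as an illustration, an intake review could be recorded as a simple structured record; the field names and example values below are assumptions for demonstration, not Adobe's actual tooling.

```python
from dataclasses import dataclass

@dataclass
class UseCaseReview:
    """One A-F intake record; each field mirrors one of the six questions."""
    team: str         # A: what team wants to use this?
    technology: str   # B: using what technology?
    input_data: str   # C: using what input data (public vs. confidential)?
    output_data: str  # D: what will be the output data?
    audience: str     # E: for what audience?
    objective: str    # F: for what final objective?

# Illustrative entry, not a real Adobe review:
review = UseCaseReview(
    team="marketing",
    technology="hosted LLM accessed via API",
    input_data="public product documentation only",
    output_data="draft campaign copy",
    audience="internal reviewers",
    objective="reduce first-draft turnaround time",
)
```

The value of the structure is that a reviewer can relax or tighten any single dimension (say, restricting `input_data` to public sources) instead of rejecting the use case outright.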
## Allianz Technology: Code Modernization and Model-Agnostic Architecture

Allianz Technology, as the internal IT service provider for Allianz's financial services business units, approached GenAI with a focus on scalability, value creation, and speed. Axel Schell described their use of Amazon Q for code modernization as part of a broader "AI for IT" initiative.

### Use Case Prioritization Criteria

Allianz established explicit criteria for selecting GenAI use cases:

- **Scalability**: How easily can the use case be scaled across the organization?
- **Value**: What measurable value does the use case create?
- **Speed**: An active choice to avoid initiatives that run longer than defined time limits

They implemented a "mini competition" model where use cases could apply for funding, with strict timeboxing to enforce rapid iteration. There was no tolerance for six-month projects that might never deliver value.

### The "AI for IT" Distinction

Allianz separated their AI initiatives into two categories: customer-facing/business-related applications and internal IT function improvements (branded "AI for IT"). This distinction recognized that these represent different models, different use cases, and potentially different governance requirements. Code modernization with Amazon Q falls into the latter category.

### Cultural Transformation Over Tool Rollout

A crucial insight from Axel was that GenAI adoption cannot be treated as a simple tool rollout. He drew analogies to previous transformation efforts: "You don't become agile by just rolling out Jira and you don't become a DevOps organization by just using Jenkins." The same principle applies to AI—the tool alone doesn't create transformation.

The organization invested in understanding how developers currently work and what needs to change. They addressed concerns about job security by reframing the narrative: developers won't be replaced by AI, but developers who don't use AI will be replaced by developers who do. They categorized work into "yay tasks" (enjoyable, creative work) and "nay tasks" (tedious heavy lifting), positioning GenAI as taking away the nay tasks.

Even within teams sitting on the same floor working on identical technology stacks, adoption varied significantly—some developers embraced the tools enthusiastically while others resisted. This variance confirmed that the transformation is cultural, not technical.

### Building a Gen AI Culture

Allianz established "Gen AI hacks" where developers share successful implementations, creating curiosity and peer-to-peer learning. This organic spread of knowledge proved more effective than top-down mandates. The organization's new mantra became "AI by default" rather than "digital by default."

### Model-Agnostic Architecture

A significant technical insight from Allianz concerned their approach to model selection and architecture. They recognized that different use cases require different models—picture scanning, handwriting recognition, and claims data analysis might each benefit from specialized models. Their architecture philosophy emphasizes:

- Model agnosticism: ability to swap models without impacting the entire use case
- Flexible switching: users can choose optimization priorities (latency, cost, accuracy, or latest technology)
- Future-proofing: acknowledgment that no one knows which model will be best in a year

This approach aligns with the triangle of accuracy, performance/speed, and cost that AWS promotes, allowing optimization based on specific use case requirements; the sketch below shows one way such decoupling can look in code.
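The panel described the goal (swappable models behind stable use case code) rather than any concrete mechanism, so the following is a minimal sketch of one common way to achieve it. The `Model` protocol, the registry keyed by optimization priority, and the `EchoModel` stand-in are all assumptions for illustration, not Allianz's implementation.

```python
from typing import Callable, Protocol

class Model(Protocol):
    """The only surface a use case is allowed to depend on."""
    def generate(self, prompt: str) -> str: ...

class EchoModel:
    """Stand-in backend; a real deployment would wrap a vendor client."""
    def generate(self, prompt: str) -> str:
        return f"[echo] {prompt}"

# Registry mapping an optimization priority to a model factory. Swapping
# what sits behind a priority never touches use case code.
MODEL_REGISTRY: dict[str, Callable[[], Model]] = {
    "cost": EchoModel,      # e.g., a small, cheap model
    "accuracy": EchoModel,  # e.g., a larger frontier model
    "latency": EchoModel,   # e.g., a latency-optimized deployment
}

def get_model(priority: str = "cost") -> Model:
    """Use cases declare what they care about, not which model they get."""
    return MODEL_REGISTRY[priority]()

# A use case written against the protocol survives any model swap:
summary = get_model("accuracy").generate("Summarize this claims report: ...")
```

Under this pattern, adopting the "latest technology" option is a one-line registry change rather than a rearchitecting effort.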
### Small Use Cases at Scale

Axel offered a provocative perspective on impact measurement. Rather than focusing exclusively on a few large flagship use cases with dedicated teams, he argued for widespread enablement of small use cases. A single use case worth $400, built by one developer in two days, might have greater organizational impact if replicated thousands of times across the enterprise. The goal is making AI usage feel as natural as using PowerPoint or spreadsheets—something 80-90% of the organization does routinely, not just expert IT teams working on big projects.

## Cross-Cutting Themes and LLMOps Best Practices

Several themes emerged consistently across all three organizations:

**Early Legal Engagement**: All panelists emphasized engaging legal stakeholders from the beginning. Legal teams can accelerate rather than delay projects when brought in early, and they have their own use cases that make them motivated partners.

**Iterative Development with Clear Kill Criteria**: None of the organizations advocated for lengthy development cycles. Instead, they emphasized rapid prototyping, early user feedback, and willingness to pivot or abandon approaches that don't work—while maintaining focus on the underlying problem.

**Success Stories as Scaling Engines**: Viral hits like AstraZeneca's research assistant create momentum and organizational buy-in that formal mandates cannot achieve. Sharing success stories through informal channels spreads adoption organically.

**Cultural Transformation Focus**: All acknowledged that technology deployment alone is insufficient. Changing how people work, addressing job security concerns, and building new organizational muscles around AI usage require sustained attention to culture and change management.

**Flexible Architecture for Model Evolution**: The rapid pace of model advancement requires architectures that can incorporate new models without wholesale system redesign. Building with swappable components enables organizations to stay current without constant rearchitecting.

**Balancing Speed and Scale**: Organizations must navigate tension between moving fast on individual use cases and avoiding duplicate efforts across business units. There's no perfect solution, but awareness of this tradeoff enables better decision-making.
