AI-Driven Student Services and Prescriptive Pathways at UCLA Anderson School of Management

UCLA 2025
UCLA Anderson School of Management partnered with Kindle to address the challenge of helping MBA students navigate their intensive two-year program more effectively. Students were overwhelmed with coursework, career decisions, club activities, and internship searches, receiving extensive information without clear guidance. The solution involved digitizing over 2 million paper records and building an AI-powered application that provides personalized, prescriptive roadmaps for students based on their career goals. The system integrates data from multiple sources including student records, career placement systems, clubs, and course catalogs to recommend specific courses, internships, clubs, and target companies. The project took approximately 8 months (December 2023 to August 2024) and demonstrates how educational institutions can leverage agentic AI frameworks to deliver better student experiences while maintaining data security and privacy standards.

Industry

Education

Overview

This case study from AWS re:Invent features a collaboration between UCLA Anderson School of Management and Kindle, a consulting firm, to build AI-native systems for improving student services. The presentation includes perspectives from Anita Micas (Kindle’s Government and Education Market lead), Howard Miller (CIO at UCLA Anderson), and Chin Vo (Kindle’s VP of Innovation at Scale). The UCLA Anderson case represents a practical implementation of agentic AI in higher education, addressing real operational challenges while navigating complex data security requirements.

Business Context and Problem Statement

The initiative began nearly three years ago when ChatGPT emerged publicly. The dean of UCLA Anderson posed a provocative question to Howard Miller: if ChatGPT could score a B+ on his final exam, what was the value proposition of higher education? This catalyzed a strategic shift toward becoming an “AI thought leader” as a school.

The specific business problem addressed by the student services application relates to the inherent complexity of the two-year MBA program at UCLA Anderson. Students enter the program quickly, often starting in summer before fall classes begin. They face multiple simultaneous pressures: intensive coursework, early internship interviews, club involvement decisions, and career planning—sometimes while still uncertain about their ultimate career direction. The school was “inundating them from the beginning” with information through extensive emails and resources, essentially saying “good luck” without providing structured guidance. While career advisors existed, there was no systematic, personalized roadmap tool to help students navigate optimal paths based on their stated career objectives.

AI Framework and Architecture

Kindle’s approach centers on what they call their “Agentic AI Framework,” which represents a philosophy of embedding AI into core operations rather than treating it as a standalone project or pilot. The framework consists of three main components:

Data Ingestion Layer: This component enables capture of both structured and unstructured data from diverse sources. In the UCLA case, this included source code analysis to understand dependencies, security and IT policies to ensure agents operate within safe standards, and policy/procedure guides to map operational flows and identify bottlenecks. For the student services application specifically, data was pulled from disparate systems including student record databases, career placement systems, club information, and course catalogs.
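As a concrete sketch of what this ingestion layer does, the snippet below merges rows from three hypothetical feeds (registrar, clubs, career placement) into a single unified profile per student. The `StudentProfile` schema and all field names are illustrative assumptions, not the actual UCLA data model.

```python
from dataclasses import dataclass, field

@dataclass
class StudentProfile:
    """Unified record assembled from disparate source systems (illustrative schema)."""
    student_id: str
    courses: list = field(default_factory=list)
    clubs: list = field(default_factory=list)
    placements: list = field(default_factory=list)

def ingest(registrar_rows, club_rows, placement_rows):
    """Merge rows keyed by student_id from three hypothetical source feeds."""
    profiles = {}

    def profile(sid):
        # Create the unified record on first sight of a student_id.
        return profiles.setdefault(sid, StudentProfile(student_id=sid))

    for row in registrar_rows:
        profile(row["student_id"]).courses.append(row["course"])
    for row in club_rows:
        profile(row["student_id"]).clubs.append(row["club"])
    for row in placement_rows:
        profile(row["student_id"]).placements.append(row["company"])
    return profiles
```

The same join-by-key pattern extends to unstructured sources by attaching extracted text or embeddings to the profile instead of flat fields.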

Agent Builder: This layer deploys intelligent agents for specific business functions. These agents can automate tasks, make decisions, and adapt based on context. The framework supports progressive AI maturity from simple reactive generative AI to independent agents to multi-agent systems to full agentic workflows.

Agent Catalog: Described as “where the magic happens,” this is a centralized repository of reusable AI agents that enables scaling by deploying repeatable models across organizations. This component allows the framework to be lightweight and adaptable, avoiding heavy investment in any particular technology stack that might become obsolete quickly.
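A minimal sketch of what a reusable agent catalog could look like: a registry mapping capability names to agent factories, so the same agent definition can be deployed repeatedly with different configuration. The capability names and config keys below are invented for illustration; Kindle's actual catalog implementation is not described in the talk.

```python
class AgentCatalog:
    """Central registry of reusable agents, keyed by capability (illustrative)."""

    def __init__(self):
        self._factories = {}

    def register(self, capability, factory):
        # factory: callable that builds a configured agent instance.
        self._factories[capability] = factory

    def deploy(self, capability, **config):
        if capability not in self._factories:
            raise KeyError(f"no agent registered for {capability!r}")
        return self._factories[capability](**config)

# Hypothetical usage: register once, deploy anywhere in the organization.
catalog = AgentCatalog()
catalog.register(
    "course-advisor",
    lambda model="default-model": {"role": "course-advisor", "model": model},
)
agent = catalog.deploy("course-advisor", model="custom-model")
```

Keeping agents behind a registry like this is one way to stay "lightweight": swapping the underlying model or vendor only changes the factory, not the callers.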

Technical Implementation Details

The UCLA Anderson implementation involved several significant technical challenges and architectural decisions:

Data Consolidation: A major technical hurdle was consolidating data from multiple disparate systems into a single environment and engine. The team pulled together student records, career placement data, club information, and course catalog data. Howard Miller acknowledged that if they were to start the project in 2024 rather than late 2023, they wouldn’t need to consolidate data into one place—modern AI integration capabilities have advanced significantly in just 6-9 months, allowing systems to access data in place rather than requiring migration.
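The "access data in place" pattern Miller alludes to can be sketched as a set of live source adapters that the application queries at request time, rather than a one-time migration into a single store. The adapter interface below is an assumption for illustration, not the integration UCLA actually used.

```python
class InPlaceSource:
    """Adapter that queries a source system live instead of migrating its data (illustrative)."""

    def __init__(self, name, query_fn):
        self.name = name
        self.query_fn = query_fn  # e.g. wraps an API call into the source system

    def fetch(self, student_id):
        return self.query_fn(student_id)

def assemble_context(student_id, sources):
    """Federate live queries across sources; no consolidation step required."""
    return {src.name: src.fetch(student_id) for src in sources}
```

The tradeoff, noted in the Critical Assessment below, is that federated access pushes data-quality and latency problems to query time instead of solving them once during consolidation.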

Security and Privacy Architecture: Because the application dealt with sensitive student data, UCLA Anderson faced substantial institutional hurdles beyond typical organizational change management. They needed to pass third-party risk management reviews and convince central campus administration that the project could proceed safely. The team took “extra care to architect that environment such that it mirrored the information security policy of the UC system almost line for line” to ensure it would pass audits. This security-first approach added complexity but was essential for dealing with confidential educational records.

Timeline and Development Process: The project began with an AI task force formed in September 2023, which was cross-functional including faculty, staff, and students. By December 2023, they had identified a platform and pricing model that the dean approved. Two months later (approximately February 2024), they were working with consultants and had a platform in place. Another two months after that (approximately April 2024), they had their first two production use cases. The full project was completed around August 2024—roughly 8 months from platform selection to completion.

User Experience Design: A critical component was creating a user interface that students would actually want to use. The application provides prescriptive guidance: if a student indicates they want to become a product manager at LinkedIn or another specific company, the system recommends which courses to take, which internships to pursue, which clubs to join, and identifies companies that historically align with that career path.
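The talk does not describe the recommendation logic itself, but a simple frequency baseline over historical alumni placements illustrates the idea: rank courses and clubs by how often alumni who reached the target company took them. Everything here, including the record format, is a hypothetical sketch.

```python
from collections import Counter

def recommend(goal_company, alumni_records, top_n=2):
    """Frequency baseline: surface the courses and clubs most common among
    alumni who landed at goal_company (illustrative, not the production method)."""
    matched = [r for r in alumni_records if r["company"] == goal_company]
    course_counts = Counter(c for r in matched for c in r["courses"])
    club_counts = Counter(c for r in matched for c in r["clubs"])
    return {
        "courses": [c for c, _ in course_counts.most_common(top_n)],
        "clubs": [c for c, _ in club_counts.most_common(top_n)],
    }
```

A production system would likely layer an LLM over such retrieved signals to generate the narrative roadmap, but the underlying "companies that historically align with that career path" logic reduces to ranking over placement history.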

Multi-Agent Architecture: Howard Miller described implementing an agentic approach where one agent can call another agent and successfully hand off context, allowing users to seamlessly transition between specialized agents. He characterized this as a building-blocks approach rather than trying to architect something overly comprehensive from the beginning. He candidly noted these aren’t “the sexy what everybody thinks agentic AI should be” but represent practical, foundational implementations that deliver business value.
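The handoff-with-context pattern Miller describes can be sketched in a few lines: each agent either answers a request or passes it, together with the accumulated context, to a specialist it knows about. Agent names, predicates, and the context shape below are all invented for illustration.

```python
class Agent:
    """Minimal agent that answers or hands off to a specialist, carrying context along."""

    def __init__(self, name, can_handle, answer, handoffs=None):
        self.name = name
        self.can_handle = can_handle    # predicate over the request
        self.answer = answer            # fn(request, context) -> reply
        self.handoffs = handoffs or []  # other agents this one may call

    def handle(self, request, context):
        if self.can_handle(request):
            return self.answer(request, context)
        for agent in self.handoffs:
            if agent.can_handle(request):
                # Hand off: the full context travels with the request,
                # so the user transitions between agents seamlessly.
                return agent.handle(request, context)
        return "no agent available"

# Hypothetical usage: a front-desk agent routes course questions to a specialist.
courses = Agent("courses", lambda r: "course" in r,
                lambda r, c: f"course plan for {c['goal']}")
front = Agent("front-desk", lambda r: False, lambda r, c: "", handoffs=[courses])
```

This is the "building blocks" shape Miller describes: each agent stays narrow, and composition happens through explicit handoffs rather than one monolithic agent.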

LLMOps Operational Considerations

Fail Fast Philosophy: A recurring theme throughout the discussion was the importance of failing fast. Miller explicitly stated that if he were to do the project again, he would have "failed faster to begin with." He felt the team spent too much time trying to perfect the architecture, and by the time they launched, the technology had changed so dramatically that they should have put something in users' hands sooner and iterated from there. This reflects a core LLMOps principle: rapid experimentation and iteration beat up-front perfection.

Technology Stack Adaptability: Chin Vo emphasized that their framework is intentionally lightweight because they know “3 months from now, 6 months or 12 months, it’s gonna change. Something’s gonna come up.” This approach mirrors lessons from compute services evolution (virtualization → containerization → serverless) where architectural patterns evolved rapidly, and organizations needed flexibility to adopt new approaches without massive reinvestment.

Quick Time to Value: Howard Miller stressed the importance of finding projects with quick time to value rather than trying to “boil the ocean.” He advised not starting with sensitive and confidential data but acknowledged that the UCLA Anderson project deliberately tackled student data despite this being more challenging. His recommendation for others is to begin with less sensitive use cases to build momentum and confidence.

Trust and Observability: A major operational challenge discussed was building trust in autonomous agents. Chin Vo identified trust as one of the two main barriers (along with data) preventing organizations from becoming AI-native. Recent AWS announcements around agent core services address this through policy engines that ensure agents access the right tools, monitoring to verify agents are doing what they’re supposed to do, and observability features that provide audit trails showing what agents are doing at all times.
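The policy-plus-audit pattern described here can be sketched generically: every tool call is checked against an allow-list and logged, so auditors can reconstruct what each agent did. This is an illustration of the pattern only, not the AWS agent core services API.

```python
import time

class PolicyEngine:
    """Per-agent tool allow-list plus an audit trail of every call attempt
    (illustrative sketch, not a real service API)."""

    def __init__(self, allowed_tools):
        self.allowed_tools = allowed_tools  # {agent_name: set of tool names}
        self.audit_log = []

    def invoke(self, agent, tool, fn, *args):
        allowed = tool in self.allowed_tools.get(agent, set())
        # Log every attempt, allowed or not, so the trail is complete.
        self.audit_log.append(
            {"ts": time.time(), "agent": agent, "tool": tool, "allowed": allowed}
        )
        if not allowed:
            raise PermissionError(f"{agent} may not call {tool}")
        return fn(*args)
```

Denied attempts are deliberately recorded before the exception is raised; an audit trail that only shows successful calls cannot answer "did the agent try something it shouldn't have?"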

Organizational Change Management: Both speakers emphasized that people and process changes are as important as technology. Chin Vo’s role encompasses data and AI practice, enterprise strategy and architecture, people and performance (OCM), and user experience—all coordinated to ensure successful adoption. The UCLA project included a cross-functional task force from the beginning, bringing together faculty, staff, and students to set direction collaboratively.

Broader Context and Other Kindle Use Cases

The presentation also included examples of Kindle's AI work in government and public sector contexts, though the UCLA Anderson engagement was the centerpiece of the session.

Critical Assessment

The presentation exhibits several characteristics common to vendor-led conference sessions, where Kindle is clearly positioning its services and framework. However, the inclusion of Howard Miller as a genuine client provides valuable ground truth and balanced perspective. His candid admissions—that the agents aren’t particularly sophisticated yet, that they spent too much time on architecture, that timing matters enormously in this fast-moving field—add credibility.

The 8-month timeline from platform selection to production is relatively fast for an educational institution dealing with sensitive data, suggesting effective project management. However, the project's timing (late 2023 to mid-2024) placed it squarely in a period of extremely rapid AI evolution, which partially validates Miller's concern about over-architecting.

The security approach—mirroring UC system information security policy “almost line for line”—demonstrates appropriate caution with student data but may have contributed to slower development. The tradeoff between speed and security/compliance is inherent in educational and government contexts.

The “fail fast” recommendation conflicts somewhat with the security-first approach UCLA Anderson necessarily took. This tension between rapid experimentation and careful governance is a central challenge in LLMOps for regulated sectors.

The claim that modern AI can access data in place without consolidation is somewhat optimistic—while technically possible through APIs and integration layers, practical challenges around data quality, consistency, and real-time access often still favor some degree of consolidation or data preparation, particularly for RAG-based systems.

Future Outlook

Both speakers anticipated that agentic AI will continue to dominate the landscape through 2025 and likely 2026. Chin Vo predicted that agents as part of enterprises will become an accepted reality rather than a question, similar to how serverless computing on AWS became normalized after initial skepticism. Howard Miller expects continued evolution toward agents calling other agents and delivering business outcomes, with a deliberate focus on not “scaring away the humans who think their jobs are going to be replaced.”

The emphasis on building blocks and incremental progress rather than attempting transformative implementations all at once reflects mature thinking about AI adoption in production environments. The UCLA Anderson case demonstrates that even relatively straightforward agentic implementations—like prescriptive student guidance based on career goals—can deliver meaningful business value when properly scoped and executed.

Key LLMOps Takeaways

The case study illustrates several important LLMOps principles: the critical importance of organizational buy-in and cross-functional collaboration from the start; the need for rapid iteration and willingness to fail fast; the value of starting with clear business outcomes rather than technology-first approaches; the challenge of balancing security, privacy, and compliance requirements with development speed; the rapidly evolving nature of AI technology requiring flexible, adaptable architectures; the importance of trust-building through observability, audit trails, and policy enforcement; and the reality that successful AI implementations often start modestly rather than attempting to solve every problem at once. The UCLA Anderson implementation represents a practical, production-grade agentic AI system that delivers measurable value while navigating the complex requirements of educational data governance.
