Company
Google
Title
Optimizing Security Incident Response with LLMs at Google
Industry
Tech
Year
2024
Summary (short)
Google implemented LLMs to streamline their security incident response workflow, particularly focusing on incident summarization and executive communications. They used structured prompts and careful input processing to generate high-quality summaries while ensuring data privacy and security. The implementation resulted in a 51% reduction in time spent on incident summaries and 53% reduction in executive communication drafting time, while maintaining or improving quality compared to human-written content.
## Overview

Google's Security Workflow Automation team, in collaboration with their Privacy and Security Incident Response groups, developed an LLM-powered system to accelerate the creation of incident summaries and executive communications. Security incident management at Google's scale involves a rigorous five-step process: identification, coordination, resolution, closure, and continuous improvement. A critical but time-consuming part of this process is communicating incident status to stakeholders such as executives, team leads, and partner teams. The team estimated that writing a thorough summary could take nearly an hour for simpler incidents, and that complex communications could take multiple hours. The hypothesis was that generative AI could digest incident information faster, freeing incident responders to focus on critical tasks. The results validated this: LLM-generated summaries were produced 51% faster while receiving quality ratings 10% higher than human-written equivalents.

## Input Processing and Data Handling

One of the significant LLMOps challenges was handling the diverse, unstructured data typical of security incidents: free-form text, logs, images, links, impact statistics, timelines, and code snippets. To make this manageable for the LLM, the team implemented a structured preprocessing pipeline. Long and noisy sections of code and logs were replaced with self-closing XML-style placeholder tags. This served two purposes: it preserved structural information while conserving tokens for more important facts, and it reduced the risk of hallucinations that might arise from the model attempting to interpret technical artifacts.

During prompt engineering iterations, the team added further semantic tags, including `<Actions Taken>`, `<Impact>`, `<Mitigation History>`, and `<Comment>`. This structured tagging mirrored their incident communication templates and conveyed implicit information to the model. The self-explanatory tags also provided convenient aliases for prompt instructions, enabling directives like "Summarize the `<Security Incident>`".
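The write-up does not include the preprocessing code, but the described approach maps onto a small transformation step. The following is a minimal Python sketch under that reading; the placeholder tag names (`<Code Section/>`, `<Logs/>`), the detection heuristics, and the `incident` dictionary layout are assumptions for illustration, not Google's implementation.

```python
import re

# Semantic tags named in the case study; the wrapper tag and the self-closing
# placeholder tags below are assumptions for illustration only.
SECTION_TAGS = ["Actions Taken", "Impact", "Mitigation History", "Comment"]


def strip_noisy_artifacts(text: str) -> str:
    """Replace long code and log blocks with self-closing placeholder tags,
    preserving structure while conserving tokens for the important facts."""
    # Assumed heuristics: fenced code blocks and runs of timestamped log lines.
    text = re.sub(r"```.*?```", "<Code Section/>", text, flags=re.DOTALL)
    text = re.sub(r"(?m)(?:^\d{4}-\d{2}-\d{2}[T ]\S+.*\n?){5,}", "<Logs/>\n", text)
    return text


def to_tagged_input(incident: dict[str, str]) -> str:
    """Wrap each incident field in an XML-style tag. The tag names double as
    aliases the prompt can reference, e.g. 'Summarize the <Security Incident>'."""
    parts = ["<Security Incident>"]
    for tag in SECTION_TAGS:
        value = incident.get(tag, "").strip()
        if value:
            parts.append(f"<{tag}>\n{strip_noisy_artifacts(value)}\n</{tag}>")
    parts.append("</Security Incident>")
    return "\n".join(parts)
```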
## Iterative Prompt Engineering

The team documented a transparent, iterative approach to prompt development across three major versions.

**Version 1** started with a simple summarization task. Its limitations quickly became apparent:

- summaries were too long for executive consumption;
- important facts such as incident impact and mitigation were missing;
- the writing style was inconsistent and did not follow best practices (passive voice, tense, terminology, format);
- irrelevant data from email threads was included;
- the model struggled to identify the most relevant and up-to-date information.

**Version 2** attempted to address these issues with a more elaborate prompt. The model was instructed to be concise and given explicit guidance on what constitutes a well-written summary, focusing on the main incident response steps (coordination and resolution). Limitations persisted, however: summaries still did not consistently follow the expected format, the model sometimes lost sight of the task or failed to incorporate all guidelines, it struggled to focus on the latest updates, and it tended to draw conclusions from hypotheses, producing minor hallucinations.

**Version 3 (final)** introduced two key improvements: two human-crafted example summaries (few-shot learning) and a `<Good Summary>` tag. The tag served multiple purposes: it marked high-quality summaries and instructed the model to begin immediately with the summary rather than repeating the task (a common LLM behavior). This final version produced "outstanding summaries" in the desired structure, covering all key points with minimal hallucinations.

## Privacy and Risk Management Infrastructure

Given that security incidents can contain confidential, sensitive, and privileged data, the team built the infrastructure with privacy by design. Every component of the pipeline, from the user interface to the LLM to output processing, has logging turned off, and the LLM does not use any input or output for retraining. Instead of traditional logging for monitoring, the team relies on metrics and indicators to verify that the system is functioning properly. This is an interesting LLMOps pattern in which privacy requirements force alternative approaches to observability.

## Human-in-the-Loop Workflow Design

A critical aspect of the deployment was ensuring the LLM complemented rather than replaced human judgment. The workflow integration features a "Generate Summary" button in the UI that pre-populates a text field with the LLM's proposed summary. Users have three options: accept the summary as-is, modify it before accepting, or discard the draft and start fresh. This design mitigates the risks of hallucinations and errors by requiring human review, accounts for human misinterpretation of LLM-generated content, and maintains human accountability. The team emphasizes the importance of monitoring quality and feedback over time.

## Evaluation Methodology

The team conducted a comparative evaluation with a sample of 100 summaries: 50 human-written (by both native and non-native English speakers) and 50 LLM-written using the final prompt. Summaries were presented to security teams in a blind evaluation, without revealing the author. LLM-written summaries covered all key points and were rated 10% higher than the human-written equivalents. Time savings were measured across a separate sample of 300 summaries, showing a 51% reduction in time spent per incident summary.

## Edge Case Handling

An important production consideration emerged around input size. The team discovered hallucination issues when the input was small relative to the prompt: in these cases the LLM fabricated most of the summary and got key points wrong. The fix was programmatic: if the input is smaller than 200 tokens, the system does not call the LLM and instead relies on a human-written summary. This is a practical example of understanding a model's limitations and implementing guardrails in production.
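Pulling together the final prompt structure and the small-input guardrail, the Python sketch below shows how a generation call might be gated and assembled. Only the few-shot structure, the `<Good Summary>` tag, and the 200-token cutoff come from the case study; the instruction wording, the example placeholders, and the injected `llm` and `count_tokens` callables are assumptions for illustration.

```python
from typing import Callable, Optional

MIN_INPUT_TOKENS = 200  # below this, the model tends to fabricate; defer to humans

# Two human-crafted example summaries (placeholders here). The <Good Summary>
# tag both marks high-quality output and tells the model to start answering
# immediately instead of restating the task.
FEW_SHOT_EXAMPLES = [
    ("<Security Incident>...example incident 1...</Security Incident>",
     "<Good Summary>...example summary 1...</Good Summary>"),
    ("<Security Incident>...example incident 2...</Security Incident>",
     "<Good Summary>...example summary 2...</Good Summary>"),
]


def build_prompt(tagged_incident: str) -> str:
    """Assemble the few-shot prompt; the instruction text is illustrative."""
    parts = [
        "Summarize the <Security Incident> below for an executive audience. "
        "Be concise, use active voice and past tense, cover impact and "
        "mitigation, and respond only with a <Good Summary>.",
    ]
    for example_input, example_summary in FEW_SHOT_EXAMPLES:
        parts.extend([example_input, example_summary])
    parts.extend([tagged_incident, "<Good Summary>"])
    return "\n\n".join(parts)


def generate_summary(
    tagged_incident: str,
    llm: Callable[[str], str],
    count_tokens: Callable[[str], int],
) -> Optional[str]:
    """Guardrail from the case study: skip the model for tiny inputs and let
    the responder write the summary by hand."""
    if count_tokens(tagged_incident) < MIN_INPUT_TOKENS:
        return None  # caller falls back to a human-written summary
    return llm(build_prompt(tagged_incident))
```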
## Extension to Complex Use Cases

Building on the summarization success, the team expanded to more complex executive communications drafted on behalf of Incident Commanders. These go beyond summaries: they contain multiple sections (summary, root cause, impact, mitigation), follow specific structures and formats, and must adhere to writing best practices such as neutral tone, active voice, and minimal acronyms. The experiment showed that generative AI can evolve beyond high-level summarization: LLM-generated drafts reduced time spent drafting executive communications by 53% while delivering at least on-par content quality in terms of factual accuracy and adherence to writing best practices.

## Future Directions

The team mentions exploring generative AI for other security applications, including teaching LLMs to rewrite C++ code into memory-safe Rust and having generative AI read design documents and issue security recommendations based on their content. These represent potential expansions of the LLMOps infrastructure established for incident response.

## Critical Assessment

While the results are impressive, this case study comes from Google's own security blog, so some positive bias is possible. The evaluation methodology, while described, does not specify whether the blind evaluators knew the experiment was comparing humans to LLMs, and the 10% quality improvement is reported without confidence intervals or statistical significance testing. The privacy infrastructure that prevents logging may also create challenges for debugging and continuous improvement that the write-up does not fully address. Finally, the approach of simply not calling the LLM for small inputs (under 200 tokens) is pragmatic but does not address how to improve performance on these edge cases over time.
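To make the last point about uncertainty concrete, a blind evaluation of this size could report a confidence interval with a simple bootstrap over per-summary ratings. The sketch below uses fabricated placeholder scores purely to show the shape of such a check; it is not data from the study.

```python
import random


def bootstrap_ci(human_scores, llm_scores, n_boot=10_000, alpha=0.05, seed=0):
    """Bootstrap confidence interval for the difference in mean quality rating
    (LLM minus human)."""
    rng = random.Random(seed)
    diffs = []
    for _ in range(n_boot):
        resampled_h = [rng.choice(human_scores) for _ in human_scores]
        resampled_l = [rng.choice(llm_scores) for _ in llm_scores]
        diffs.append(sum(resampled_l) / len(resampled_l)
                     - sum(resampled_h) / len(resampled_h))
    diffs.sort()
    lower = diffs[int((alpha / 2) * n_boot)]
    upper = diffs[int((1 - alpha / 2) * n_boot) - 1]
    return lower, upper


# Placeholder 1-5 ratings for 50 human- and 50 LLM-written summaries
# (illustrative only; the study's actual ratings were not published).
rng_h, rng_l = random.Random(1), random.Random(2)
human = [rng_h.choice([3, 4, 4, 5]) for _ in range(50)]
llm = [rng_l.choice([3, 4, 5, 5]) for _ in range(50)]

print(bootstrap_ci(human, llm))  # an interval excluding 0 would support the quality claim
```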
class="logo-footer"/></div></a><div class="uui-footer03_details-wrapper"><div class="uui-text-size-medium">Simplify MLOps</div></div><div class="w-layout-grid uui-footer03_social-list"><a href="https://www.linkedin.com/company/zenml" target="_blank" class="uui-footer03_social-link w-inline-block"><div class="social-icon w-embed"><svg width="24" height="24" viewBox="0 0 24 24" fill="none" xmlns="http://www.w3.org/2000/svg"> <path d="M22.2234 0H1.77187C0.792187 0 0 0.773438 0 1.72969V22.2656C0 23.2219 0.792187 24 1.77187 24H22.2234C23.2031 24 24 23.2219 24 22.2703V1.72969C24 0.773438 23.2031 0 22.2234 0ZM7.12031 20.4516H3.55781V8.99531H7.12031V20.4516ZM5.33906 7.43438C4.19531 7.43438 3.27188 6.51094 3.27188 5.37187C3.27188 4.23281 4.19531 3.30937 5.33906 3.30937C6.47813 3.30937 7.40156 4.23281 7.40156 5.37187C7.40156 6.50625 6.47813 7.43438 5.33906 7.43438ZM20.4516 20.4516H16.8937V14.8828C16.8937 13.5562 16.8703 11.8453 15.0422 11.8453C13.1906 11.8453 12.9094 13.2937 12.9094 14.7891V20.4516H9.35625V8.99531H12.7687V10.5609H12.8156C13.2891 9.66094 14.4516 8.70938 16.1813 8.70938C19.7859 8.70938 20.4516 11.0813 20.4516 14.1656V20.4516Z" fill="currentColor"/> </svg></div></a><a href="https://twitter.com/zenml_io" target="_blank" class="uui-footer03_social-link w-inline-block"><div class="social-icon w-embed"><svg width="24" height="24" viewBox="0 0 24 24" fill="none" xmlns="http://www.w3.org/2000/svg"> <path d="M0.058593 1L9.32417 13.5613L0 23.7697H2.09852L10.2622 14.8305L16.86 23.7702H23.9995L14.212 10.5047L22.8912 1H20.7926L13.2745 9.23144L7.19956 1H0.058593ZM3.14482 2.56723H6.42554L20.9118 22.2025H17.6321L3.14482 2.56723Z" fill="currentColor"/> </svg></div></a><a href="https://zenml.io/slack-invite" target="_blank" class="uui-footer03_social-link w-inline-block"><div class="social-icon w-embed"><svg width="24" height="24" viewBox="0 0 24 24" fill="none" xmlns="http://www.w3.org/2000/svg"> <path d="M5.00787 15.0047C5.00787 16.3843 3.89291 17.4992 2.51339 17.4992C1.13386 17.4992 0.0188976 16.3843 0.0188976 15.0047C0.0188976 13.6252 1.13386 12.5102 2.51339 12.5102H5.00787V15.0047ZM6.25512 15.0047C6.25512 13.6252 7.37008 12.5102 8.74961 12.5102C10.1291 12.5102 11.2441 13.6252 11.2441 15.0047V21.2409C11.2441 22.6205 10.1291 23.7354 8.74961 23.7354C7.37008 23.7354 6.25512 22.6205 6.25512 21.2409V15.0047Z" fill="currentColor"/> <path d="M8.74961 4.98898C7.37008 4.98898 6.25512 3.87402 6.25512 2.49449C6.25512 1.11496 7.37008 0 8.74961 0C10.1291 0 11.2441 1.11496 11.2441 2.49449V4.98898H8.74961ZM8.74961 6.25512C10.1291 6.25512 11.2441 7.37008 11.2441 8.74961C11.2441 10.1291 10.1291 11.2441 8.74961 11.2441H2.49449C1.11496 11.2441 0 10.1291 0 8.74961C0 7.37008 1.11496 6.25512 2.49449 6.25512H8.74961Z" fill="currentColor"/> <path d="M18.7465 8.74961C18.7465 7.37008 19.8614 6.25512 21.2409 6.25512C22.6205 6.25512 23.7354 7.37008 23.7354 8.74961C23.7354 10.1291 22.6205 11.2441 21.2409 11.2441H18.7465V8.74961ZM17.4992 8.74961C17.4992 10.1291 16.3843 11.2441 15.0047 11.2441C13.6252 11.2441 12.5102 10.1291 12.5102 8.74961V2.49449C12.5102 1.11496 13.6252 0 15.0047 0C16.3843 0 17.4992 1.11496 17.4992 2.49449V8.74961Z" fill="currentColor"/> <path d="M15.0047 18.7465C16.3843 18.7465 17.4992 19.8614 17.4992 21.2409C17.4992 22.6205 16.3843 23.7354 15.0047 23.7354C13.6252 23.7354 12.5102 22.6205 12.5102 21.2409V18.7465H15.0047ZM15.0047 17.4992C13.6252 17.4992 12.5102 16.3843 12.5102 15.0047C12.5102 13.6252 13.6252 12.5102 15.0047 12.5102H21.2598C22.6394 12.5102 23.7543 13.6252 23.7543 15.0047C23.7543 16.3843 22.6394 
17.4992 21.2598 17.4992H15.0047Z" fill="currentColor"/> </svg></div></a><a href="https://www.youtube.com/@ZenML" target="_blank" class="uui-footer03_social-link w-inline-block"><div class="social-icon w-embed"><svg width="24" height="24" viewBox="0 0 24 24" fill="none" xmlns="http://www.w3.org/2000/svg"> <path d="M0 0H24V24H0V0Z" fill="currentColor"/> <path d="M11.995 6.33781C11.995 6.33781 6.93749 6.33782 5.66813 6.66766C4.98846 6.85756 4.42881 7.41722 4.23891 8.10688C3.90907 9.37625 3.90906 12.005 3.90906 12.005C3.90906 12.005 3.90907 14.6437 4.23891 15.8931C4.42881 16.5828 4.97847 17.1324 5.66813 17.3223C6.94749 17.6622 11.995 17.6622 11.995 17.6622C11.995 17.6622 17.0625 17.6622 18.3319 17.3323C19.0215 17.1424 19.5712 16.6028 19.7511 15.9031C20.0909 14.6437 20.0909 12.015 20.0909 12.015C20.0909 12.015 20.1009 9.37625 19.7511 8.10688C19.5712 7.41722 19.0215 6.86757 18.3319 6.68766C17.0625 6.33783 11.995 6.33781 11.995 6.33781ZM10.3858 9.57625L14.5938 12.005L10.3858 14.4238V9.57625Z" fill="white"/> </svg></div></a></div></div><div id="w-node-_6666e989-203e-20f3-8484-bd8adbfc4923-bc99cc29" class="w-layout-grid uui-footer03_menu-wrapper"><div class="uui-footer03_link-list"><div id="w-node-_6666e989-203e-20f3-8484-bd8adbfc4925-bc99cc29" class="uui-footer-title">Product</div><a href="/features" class="uui-footer03_link w-inline-block"><div>Features</div></a><a href="/pro" class="uui-footer03_link w-inline-block"><div>ZenML Pro</div><div class="uui-badge-small-success-2"><div>New</div></div></a><a href="/open-source-vs-pro" class="uui-footer03_link w-inline-block"><div>OSS vs Managed</div></a><a href="/integrations" class="uui-footer03_link w-inline-block"><div>Integrations</div></a><a href="/pricing" class="uui-footer03_link w-inline-block"><div>Pricing</div></a></div><div class="uui-footer03_link-list"><div class="uui-footer-title">Resources</div><a href="/newsletter-signup" class="uui-footer03_link w-inline-block"><div>Newsletter</div><div class="uui-badge-small-success-2"><div>New</div></div></a><a href="/blog" class="uui-footer03_link w-inline-block"><div>Blog</div></a><a href="https://docs.zenml.io/getting-started/introduction" target="_blank" class="uui-footer03_link w-inline-block"><div>Docs</div></a><a href="https://docs.zenml.io/changelog" target="_blank" class="uui-footer03_link w-inline-block"><div>Changelog</div></a><a href="https://zenml.featureos.app/roadmap" target="_blank" class="uui-footer03_link w-inline-block"><div>Roadmap</div></a><a href="/slack" class="uui-footer03_link w-inline-block"><div>Slack</div></a></div><div class="uui-footer03_link-list"><div class="uui-footer-title">Company</div><a href="/careers" class="uui-footer03_link w-inline-block"><div>Careers</div></a><a href="/company" class="uui-footer03_link w-inline-block"><div>About Us</div></a><a href="/company" class="uui-footer03_link w-inline-block"><div>Our Values</div></a><a href="/careers" class="uui-footer03_link w-inline-block"><div>Join Us</div></a></div></div><div id="w-node-_30ff4fd9-cbe0-dfff-43a1-971b5abf05e1-bc99cc29" class="footer-spacing"></div><div id="w-node-_73e9b741-9b86-772a-5e39-25f4bc99cc3f-bc99cc29" class="w-layout-grid uui-footer03_menu-wrapper"><div class="uui-footer03_link-list"><a id="w-node-_031619c6-c59f-52af-6129-d0d6c1c893d9-bc99cc29" href="/vs/zenml-vs-orchestrators" class="footer-header-link w-inline-block"><div class="uui-footer-title">ZenML vs Orchestrators</div></a><div class="w-dyn-list"><div role="list" class="w-dyn-items"><div role="listitem" class="w-dyn-item"><a 
href="/compare/zenml-vs-apache-airflow" class="uui-footer03_link w-inline-block"><div>Apache Airflow</div></a></div><div role="listitem" class="w-dyn-item"><a href="/compare/zenml-vs-dagster" class="uui-footer03_link w-inline-block"><div>Dagster</div></a></div><div role="listitem" class="w-dyn-item"><a href="/compare/zenml-vs-databricks" class="uui-footer03_link w-inline-block"><div>Databricks</div></a></div><div role="listitem" class="w-dyn-item"><a href="/compare/zenml-vs-flyte" class="uui-footer03_link w-inline-block"><div>Flyte</div></a></div><div role="listitem" class="w-dyn-item"><a href="/compare/zenml-vs-kedro" class="uui-footer03_link w-inline-block"><div>Kedro</div></a></div><div role="listitem" class="w-dyn-item"><a href="/compare/zenml-vs-kubeflow" class="uui-footer03_link w-inline-block"><div>Kubeflow</div></a></div><div role="listitem" class="w-dyn-item"><a href="/compare/zenml-vs-prefect" class="uui-footer03_link w-inline-block"><div>Prefect</div></a></div></div></div></div><div class="uui-footer03_link-list"><a href="/vs/zenml-vs-experiment-trackers" class="footer-header-link w-inline-block"><div class="uui-footer-title">ZenML vs Exp Trackers</div></a><div class="w-dyn-list"><div role="list" class="w-dyn-items"><div role="listitem" class="w-dyn-item"><a href="/compare/zenml-vs-mlflow" class="uui-footer03_link w-inline-block"><div>MLflow</div></a></div></div></div><a href="/vs/zenml-vs-experiment-trackers" class="uui-footer03_link w-inline-block"><div>Weights & Biases</div></a><a href="/vs/zenml-vs-experiment-trackers" class="uui-footer03_link w-inline-block"><div>Neptune AI</div></a><a href="/vs/zenml-vs-experiment-trackers" class="uui-footer03_link w-inline-block"><div>CometML</div></a></div><div class="uui-footer03_link-list"><a href="/vs/zenml-vs-e2e-platforms" class="footer-header-link w-inline-block"><div class="uui-footer-title">ZenML vs e2e Platforms</div></a><div class="w-dyn-list"><div role="list" class="w-dyn-items"><div role="listitem" class="w-dyn-item"><a href="/compare/zenml-vs-aws-sagemaker" class="uui-footer03_link w-inline-block"><div>AWS Sagemaker</div></a></div><div role="listitem" class="w-dyn-item"><a href="/compare/zenml-vs-clearml" class="uui-footer03_link w-inline-block"><div>ClearML</div></a></div><div role="listitem" class="w-dyn-item"><a href="/compare/zenml-vs-metaflow" class="uui-footer03_link w-inline-block"><div>Metaflow</div></a></div><div role="listitem" class="w-dyn-item"><a href="/compare/zenml-vs-valohai" class="uui-footer03_link w-inline-block"><div>Valohai</div></a></div></div></div><a href="/vs/zenml-vs-e2e-platforms" class="uui-footer03_link w-inline-block"><div>GCP Vertex AI</div></a><a href="/vs/zenml-vs-e2e-platforms" class="uui-footer03_link w-inline-block"><div>Azure ML</div></a><a href="/compare/zenml-vs-clearml" class="uui-footer03_link w-inline-block"><div>ClearML</div></a></div></div><div id="w-node-e2da6676-4ca0-e684-ae88-85f17a6cfe19-bc99cc29" class="footer-spacing"></div><div id="w-node-_1c9e2cd3-b452-c36a-b2df-bb2dd6dc4bd3-bc99cc29" class="w-layout-grid uui-footer03_menu-wrapper"><div class="uui-footer03_link-list"><div id="w-node-_1c9e2cd3-b452-c36a-b2df-bb2dd6dc4bd5-bc99cc29" class="uui-footer-title">GenAI & LLMs</div><a href="/llmops-database" class="uui-footer03_link w-inline-block"><div>LLMOps Database</div></a><a href="https://github.com/zenml-io/zenml-projects/tree/main/llm-complete-guide" target="_blank" class="uui-footer03_link w-inline-block"><div>Finetuning LLMs</div></a><a 
href="https://github.com/zenml-io/zenml-projects/tree/main/zencoder" target="_blank" class="uui-footer03_link w-inline-block"><div>Creating a code copilot</div></a><a href="https://docs.zenml.io/stacks/orchestrators/skypilot-vm" target="_blank" class="uui-footer03_link w-inline-block"><div>Cheap GPU compute</div></a></div><div class="uui-footer03_link-list"><div class="uui-footer-title">MLOps Platform</div><a href="https://docs.zenml.io/stacks" target="_blank" class="uui-footer03_link w-inline-block"><div>Mix and match tools</div></a><a href="https://docs.zenml.io/stacks/alerters" target="_blank" class="uui-footer03_link w-inline-block"><div>Create alerting</div></a><a href="https://docs.zenml.io/how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component" target="_blank" class="uui-footer03_link w-inline-block"><div>Plugin custom stack components</div></a></div><div class="uui-footer03_link-list"><div class="uui-footer-title">Leveraging Hyperscalers</div><a href="https://docs.zenml.io/stacks/orchestrators/skypilot-vm" target="_blank" class="uui-footer03_link w-inline-block"><div>Train on Spot VMs</div></a><a href="https://github.com/zenml-io/zenml-projects/tree/main/huggingface-sagemaker" target="_blank" class="uui-footer03_link w-inline-block"><div>Deploying Sagemaker Endpoints</div></a><a href="https://docs.zenml.io/how-to/popular-integrations/gcp-guide" target="_blank" class="uui-footer03_link w-inline-block"><div>Managing GCP Vertex AI</div></a><a href="https://docs.zenml.io/how-to/popular-integrations/kubernetes" target="_blank" class="uui-footer03_link w-inline-block"><div>Training on Kubernetes</div></a><a href="https://docs.zenml.io/how-to/popular-integrations/aws-guide" target="_blank" class="uui-footer03_link w-inline-block"><div>Local to Sagemaker Pipelines</div></a></div></div></div><div class="uui-footer03_bottom-wrapper"><div class="uui-text-size-small">© 2025 ZenML. All rights reserved.</div><div class="w-layout-grid uui-footer03_legal-list"><a id="w-node-_73e9b741-9b86-772a-5e39-25f4bc99cc74-bc99cc29" href="/imprint" class="uui-footer03_legal-link">Imprint</a><a id="w-node-d3d47d7c-1185-346b-ed8e-12cbffd9f736-bc99cc29" href="/privacy-policy" class="uui-footer03_legal-link">Privacy Policy</a><a id="w-node-_222fcd2d-cb1d-f057-ac4d-d68ba77e7483-bc99cc29" href="/terms-of-service" class="uui-footer03_legal-link">Terms of Service</a><div class="uui-footer03_legal-link hide-mobile-landscape">|</div><a id="w-node-cd471b68-6870-7671-dd74-774596f902d9-bc99cc29" href="https://status.zenml.io" class="uui-footer03_legal-link">ZenML Pro Status</a></div></div></div></div></div></footer><script src="https://d3e54v103j8qbb.cloudfront.net/js/jquery-3.5.1.min.dc5e7f18c8.js?site=64a817a2e7e2208272d1ce30" type="text/javascript" integrity="sha256-9/aliU8dGd2tb6OSsuzixeV4y/faTqgFtohetphbbj0=" crossorigin="anonymous"></script><script src="https://cdn.prod.website-files.com/64a817a2e7e2208272d1ce30/js/webflow.schunk.e0c428ff9737f919.js" type="text/javascript"></script><script src="https://cdn.prod.website-files.com/64a817a2e7e2208272d1ce30/js/webflow.52dd7937.c87f2f55dd805e91.js" type="text/javascript"></script><script src="https://hubspotonwebflow.com/assets/js/form-124.js" type="text/javascript" integrity="sha384-bjyNIOqAKScdeQ3THsDZLGagNN56B4X2Auu9YZIGu+tA/PlggMk4jbWruG/P6zYj" crossorigin="anonymous"></script></body></html>