Cloud Cavalry vs. Generative AI Machine Guns: Preparing for the Next War in Enterprise Security

Hyperscale LLM adoption is NOT the same as cloud adoption. To assume it will follow the same path is a category error.

Jack Perschke, Harrison Kirby

12/15/2024 · 3 min read

There’s a well-known adage in military strategy: “Armies always prepare for the last war.” It reflects a dangerous complacency, the assumption that future conflicts will look just like the past. This mindset sent cavalry charging into machine guns in World War I, and it now threatens to derail enterprise adoption of Generative AI. Today, the enterprise is lining up its cloud cavalry against the machine guns of hyperscale LLMs, underestimating the new challenges this transformative technology brings.

For years, the prevailing belief in enterprise IT has been, “This will be fine; eventually, everyone adopted cloud.” But this confidence obscures a critical difference: cloud was secured at the infrastructure layer, while Generative AI requires encryption and security at the application layer, a far more complex and risky proposition. To understand why this is such a shift, let’s explore the key challenges through the lens of the OBASHI framework and what this means for secure enterprise adoption of Generative AI.

Cloud vs. Generative AI: The Encryption Divide

The OBASHI framework breaks down enterprise IT systems into layers - Ownership, Business, Application, System, Hardware, and Infrastructure - to map dependencies and ensure a secure flow of information. Cloud adoption largely focused on securing the infrastructure layer with encryption in transit and at rest. Data could remain securely locked away, with applications only processing it when necessary.

Generative AI, particularly Large Language Models (LLMs), turns this model on its head:

  1. LLMs Must Read the Data: Unlike cloud infrastructure, LLMs need to access not just metadata but also prompts and the retrieved chunks of enterprise data to deliver meaningful outputs. This means encryption cannot stop at the infrastructure layer.

  2. Application Layer Encryption: To ensure security in Generative AI, encryption must be applied at the application layer, where the LLM interprets the data. This is far more complex because it involves securing highly dynamic and contextual exchanges rather than static datasets.

This shift to application-layer encryption raises a host of issues that hyperscale LLMs were not originally designed to handle.
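
To make that exposure window concrete, here is a minimal sketch of application-layer encryption in a retrieval-augmented flow, using the Python `cryptography` package. The helper names (`encrypt_chunk`, `answer`, `call_llm`) are illustrative assumptions, not a reference implementation:

```python
# A minimal sketch of application-layer encryption in a RAG-style flow,
# using the `cryptography` package. Helper names (encrypt_chunk, answer,
# call_llm) are illustrative assumptions, not a reference implementation.
from cryptography.fernet import Fernet

# In production the key lives in a KMS/HSM and is never generated inline.
fernet = Fernet(Fernet.generate_key())

def encrypt_chunk(plaintext: str) -> bytes:
    """Encrypt a document chunk before it is written to the vector store."""
    return fernet.encrypt(plaintext.encode("utf-8"))

def call_llm(prompt: str) -> str:
    """Stand-in for a real model client."""
    return f"[model response to {len(prompt)} chars of prompt]"

def answer(question: str, encrypted_chunks: list[bytes]) -> str:
    # Chunks are decrypted just-in-time; plaintext exists only inside this
    # function, for the duration of the model call -- the exposure window.
    context = "\n".join(
        fernet.decrypt(token).decode("utf-8") for token in encrypted_chunks
    )
    return call_llm(f"Context:\n{context}\n\nQuestion: {question}")

tokens = [encrypt_chunk("Q3 revenue was 4.2M")]  # ingest time
print(answer("What was Q3 revenue?", tokens))    # query time
```

Note how narrow the plaintext window is: the decrypted context exists only inside `answer`, and that window is precisely the new surface that hyperscale LLM deployments must secure.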

The Risks of Application-Layer Encryption for Generative AI

  1. Exposure of Sensitive Prompts and Outputs
    LLMs rely on prompts and retrieved information to generate responses. These interactions are inherently plaintext during processing, leaving them vulnerable to breaches (a minimal redaction sketch follows this list).

    • What happens if prompts themselves contain sensitive data, either individually or in aggregate?

    • Can enterprises afford the reputational and regulatory damage of such exposure?

  2. Increased Attack Surfaces
    Application-layer encryption introduces new vulnerabilities, especially when LLMs operate across distributed systems.

    • Encrypted data must be decrypted at the point of processing, creating potential entry points for attackers.

    • Securing these interactions across hyperscale models used in multi-tenant environments adds enormous complexity.

  3. Complexity of Policy Enforcement
    In traditional cloud systems, data policies are tied to infrastructure controls. With LLMs, enterprises must enforce these policies dynamically across both the prompt and retrieved information.

    • How do you ensure compliance when data flows through a model you don’t control?

    • What happens when sensitive outputs are cached or logged inadvertently?

  4. Auditing and Traceability
    Cloud systems benefit from robust auditing at the infrastructure level. LLMs complicate this because interactions happen in a black box that is difficult to audit (see the audit-logging sketch after this list).

    • How can enterprises track who accessed sensitive data or how it was used in model outputs?

    • What mechanisms ensure that retrieved data isn’t stored or reused inappropriately?

  5. Vendor Lock-In Risks
    Hyperscale LLM providers often offer limited transparency into their models. Enterprises adopting these models may find themselves locked into platforms that don’t align with their security needs.

    • What guarantees exist that vendors can comply with stringent regulatory requirements?

    • Will enterprises face increased costs to retrofit LLMs with enterprise-grade security?
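
To illustrate risk 1, here is a minimal redaction sketch: scan outbound prompts for obvious PII patterns before they cross the trust boundary. The regexes and labels below are assumptions for demonstration; production systems typically pair pattern matching with an NER-based classifier:

```python
# A minimal redaction sketch for risk 1: scan outbound prompts for obvious
# PII patterns before they leave the trust boundary. The patterns here are
# demonstration-only assumptions, not a complete PII taxonomy.
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD":  re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(prompt: str) -> tuple[str, list[str]]:
    """Replace matches with typed placeholders; return findings for review."""
    findings = []
    for label, pattern in PII_PATTERNS.items():
        if pattern.search(prompt):
            findings.append(label)
            prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt, findings

safe_prompt, hits = redact("Email jane@corp.com about card 4111 1111 1111 1111")
print(safe_prompt)  # placeholders instead of raw identifiers
print(hits)         # ['EMAIL', 'CARD'] -> route to human review if non-empty
```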
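
For risk 4, one pragmatic step is client-side audit logging: since the hosted model is a black box, record what you can prove on your own side of the boundary. The field names and JSONL sink below are illustrative assumptions:

```python
# A sketch of client-side audit logging for risk 4: record who sent what,
# when, and a digest of each payload. Field names and the log sink are
# illustrative assumptions; a real system would ship records to a SIEM.
import hashlib, json, time

def audit_record(user: str, prompt: str, response: str, model: str) -> dict:
    return {
        "ts": time.time(),
        "user": user,
        "model": model,
        # Store digests, not plaintext, so the audit log itself does not
        # become a second copy of the sensitive data.
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "response_sha256": hashlib.sha256(response.encode()).hexdigest(),
    }

def log_interaction(record: dict) -> None:
    # Append-only JSONL file as a stand-in for a proper audit pipeline.
    with open("llm_audit.jsonl", "a") as f:
        f.write(json.dumps(record) + "\n")

log_interaction(audit_record("jane.doe", "Summarise Q3 figures", "...", "gpt-4o"))
```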

The Path Forward: Rethinking Secure Adoption

Enterprise adoption of Generative AI cannot rely on the cloud playbook. Instead, it requires new strategies tailored to the unique challenges of LLMs:

  1. Fine-Tuning with Private Models
    Enterprises must explore private or hybrid models, fine-tuned on proprietary data, to retain greater control over encryption and security. This can limit reliance on hyperscale LLMs and reduce exposure risks.

  2. Zero Trust Architectures for AI
    Applying Zero Trust principles to LLM interactions is essential. Every interaction, whether prompt generation or data retrieval, must be authenticated, encrypted, and monitored; a minimal gateway sketch follows this list.

  3. OBASHI-Driven Security Audits
    Enterprises should leverage frameworks like OBASHI to map dependencies and identify risks at every layer. This helps ensure that application-layer encryption is implemented with minimal disruption.

  4. Privacy-Preserving AI Techniques
    Techniques like homomorphic encryption and federated learning can allow LLMs to process encrypted data without exposing the plaintext, offering potential solutions for sensitive use cases (a toy homomorphic-encryption demo follows this list).

  5. Enterprise-Grade GenAIOps Platforms
    Platforms like Great Wave AI are emerging to address these challenges by offering enterprise-focused capabilities such as secure deployment, operational monitoring, and compliance enforcement tailored to LLMs.
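
As a sketch of strategy 2, the wrapper below refuses to call the model until the caller is authenticated and the specific request is authorised. The token check and policy table are stand-ins for a real identity provider and policy engine:

```python
# A minimal zero-trust sketch for strategy 2: no call reaches the model
# without an authenticated identity and an explicit policy decision for
# that request. verify_token and ALLOWED are demonstration assumptions.
from dataclasses import dataclass

ALLOWED = {("analyst", "financial_docs"), ("engineer", "runbooks")}

@dataclass
class Caller:
    user: str
    role: str

def verify_token(token: str) -> Caller:
    # Placeholder for real verification (e.g. OIDC/JWT against your IdP).
    if token != "valid-demo-token":
        raise PermissionError("unauthenticated")
    return Caller(user="jane.doe", role="analyst")

def call_llm(prompt: str) -> str:
    return "[model response]"

def guarded_query(token: str, dataset: str, prompt: str) -> str:
    caller = verify_token(token)               # authenticate every call
    if (caller.role, dataset) not in ALLOWED:  # authorise this request
        raise PermissionError(f"{caller.role} may not query {dataset}")
    # Monitoring hook would go here (see the audit sketch above).
    return call_llm(prompt)

print(guarded_query("valid-demo-token", "financial_docs", "Summarise Q3"))
```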
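
And as a toy illustration of strategy 4, the snippet below uses the python-paillier library (`phe`, an assumed dependency) to show the core homomorphic idea: an untrusted party computes on ciphertexts it cannot read. Paillier supports only addition, far short of what LLM inference needs, which is why this remains a research direction rather than a drop-in fix:

```python
# A toy homomorphic-encryption demo for strategy 4, using python-paillier
# (`pip install phe`). The untrusted party sums ciphertexts without ever
# seeing the plaintext; only the key holder can decrypt the result.
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair()

salaries = [52_000, 61_500, 48_250]                # sensitive values
encrypted = [public_key.encrypt(s) for s in salaries]

# Addition happens directly on ciphertexts.
encrypted_total = sum(encrypted[1:], encrypted[0])

print(private_key.decrypt(encrypted_total))        # 161750, decrypted locally
```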

Conclusion: Preparing for the New War

Generative AI is not just a continuation of the cloud revolution; it is an entirely new battlefield. Enterprises cannot afford to prepare for the last war, assuming that existing cloud-era security paradigms will suffice. Hyperscale LLMs are the machine guns facing the cavalry of cloud-first thinking: a technological mismatch that will leave enterprises exposed if left unaddressed.

By acknowledging the unique challenges of application-layer encryption and adopting innovative strategies like those outlined above, enterprises can navigate this new frontier securely and confidently. The question is no longer, “Which hyperscale LLM will you use?” It’s, “How will you build a GenAI ecosystem with security, trust, and control built in from the start?”