LLMs will always be dangerous. Security can only be delivered at the level of the application
This post argues that, in order to be useful, LLMs must be capable of doing bad things as well as good. With that established, we ask how safe and secure applications can be built. Do new technologies like Great Wave AI help? How can The Centre for GenAIOps help to create and share best practice, and is there a role for us in standards and regulation?
Harrison Kirby
3/25/2024 · 2 min read
As we all know, Generative AI (GenAI) is going to change the world. However, as with any transformative technology, the road to mass adoption is fraught with challenges, particularly in the realms of security and ethical use.
Large Language Models (LLMs) are at the heart of the GenAI revolution, and developing and maintaining them requires significant investment from providers. For that investment to earn a return, mass adoption is essential. Yet to reach the largest possible market and achieve mass adoption, LLMs must cater to a spectrum of human activity as varied and rich as life itself.
For example, we might all agree that an LLM should never discuss bomb-making techniques, but what if counter-terror police want to use it to improve their training? It should never create malware, but what about for pen testing? It should never discuss self-harm, but what about improving the reach and understanding of a charity’s counselling team?
To reduce an LLM’s freedom of action is to reduce its potency, and potency is what makes these tools useful. So, if controls cannot be applied at the level of the underlying model without compromising the model’s value, they have to be applied at the point of use. This means anyone building a GenAI application today has to properly understand all the ways it can be used and misused, and must mitigate all the risks it presents to both the commissioning organisation and its users.
The analogy I use is electricity. Its usefulness and its danger are inseparable. Safety for users is delivered not by reducing the power of electricity but by suitably designed, manufactured and regulated appliances. It will be the same with GenAI. So the question is: how do we use these super-powerful and scary models to build consistently safe and effective GenAI applications?
The answer is cooperation: sharing best practice and reusing components where possible. Some businesses are already leading the way on this. Great Wave AI is a no-code GenAI development tool that comes with configurable input and output guardrails, as well as output evaluation and monitoring, all out of the box. This kind of technology goes a long way towards giving application developers the control they need to deliver safely against use cases without having to do a PhD in Large Language Model theory.
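To make the idea of application-level controls concrete, here is a minimal sketch of what input and output guardrails wrapped around a model call might look like. It is illustrative only: the blocked-topic list, the guardrail checks and the call_llm function are hypothetical placeholders for whatever a given application uses, not the API of Great Wave AI or any particular provider.

```python
# Minimal, illustrative sketch of application-level guardrails.
# All names here (call_llm, BLOCKED_TOPICS, etc.) are hypothetical
# placeholders, not any vendor's real API.

BLOCKED_TOPICS = ["bomb-making", "malware creation"]  # tuned per use case


def call_llm(prompt: str) -> str:
    """Placeholder for whatever model endpoint the application calls."""
    raise NotImplementedError


def input_guardrail(user_prompt: str) -> bool:
    """Reject prompts that fall outside the application's approved use case."""
    lowered = user_prompt.lower()
    return not any(topic in lowered for topic in BLOCKED_TOPICS)


def output_guardrail(response: str) -> bool:
    """Check the model's answer before it reaches the user.

    A real application would call a moderation service or apply
    policy rules here; this stub only checks the answer is non-empty.
    """
    return len(response.strip()) > 0


def answer(user_prompt: str) -> str:
    """Run the full guarded round trip: check input, call model, check output."""
    if not input_guardrail(user_prompt):
        return "Sorry, that request is outside what this application supports."
    response = call_llm(user_prompt)
    if not output_guardrail(response):
        return "Sorry, I can't share that answer."
    # Logging prompt/response pairs here supports the monitoring and
    # evaluation mentioned above.
    return response
```

The point of the sketch is simply that the controls live in the application, where the use case is known, rather than inside the underlying model.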
That said, it should be perfectly possible to build safe and secure applications from scratch, so long as we all talk to each other and learn from emerging best practice. However, developing secure and ethical GenAI applications is only half the battle. The other half is ensuring these applications can be trusted by users, industries and regulators alike. This is where The Centre for GenAIOps steps in. Recognising the critical need for a standardised, efficient process for accrediting the security and ethical integrity of GenAI applications, the Centre is committed to establishing trusted systems for evaluating and accrediting them quickly and effectively.
Our mission is clear: to pave the way for the secure, ethical and widespread adoption of GenAI technology. The journey toward mass adoption is undoubtedly complex, filled with technical, ethical and regulatory hurdles. The Centre for GenAIOps hopes to facilitate the cooperation we need to unlock the immense potential of GenAI, ensuring it is safe, trusted and used widely as a force for good.