Skip to content
  • Technology & data

Rethinking cybersecurity in the age of generative AI

Cyber Gen AI

by Jay Bangle

Generative AI offers efficiency but poses unique cybersecurity risks. Traditional measures fall short; a new paradigm is needed

Generative AI (GenAI) is making significant and ground-breaking strides in the world, offering advantages in efficiency, productivity, accessibility, and process compliance. In the public sector, these technologies can streamline knowledge access, making it easier for public servants to retrieve relevant information, understand the correct procedures, and make informed decisions. 

Yet, as these systems become more integrated into public services, the challenges of securing them against cyber threats grow in complexity. Cybersecurity professionals must adapt their strategies and skills to protect these increasingly vital technologies.

Limitations of conventional cybersecurity thinking

Traditional cybersecurity measures often rely on fixed concepts like firewalls, patching, and monitoring. While these methods have their merits and are still absolutely required, they fall short when applied to GenAI systems. These models are dynamic and adaptive, making them difficult to secure using conventional means.

Consider social engineering. Just as a human can be manipulated to reveal sensitive information, GenAI models can be exploited and tricked for malicious purposes (e.g. prompt attack). The static nature of traditional cybersecurity measures is ill-suited to counter these dynamic threats. We need to explore new options.

Shifting the focus: Language as a firewall

When it comes to GenAI systems, language itself can serve as an additional layer of defence. This is particularly important given the unique vulnerabilities these systems have to prompt attacks and other forms of linguistic manipulation.

One of the first lines of defence is the careful construction of metaprompts or system prompts. These are essentially the instructions that guide the AI's behaviour. By crafting these prompts with precision, we can limit the scope of the AI's responses, thereby reducing the risk of it divulging sensitive information or making damaging statements. For instance, a well-constructed metaprompt should be designed to respond in a polite but firm way to any queries that seek to extract confidential data or provoke inappropriate responses.

Another crucial aspect is the implementation of a separate, natural language AI that reviews both the input prompts and the generated output for contentious or offensive material. This is not just about filtering what goes into the system but also scrutinising what comes out of it. For example, if the GenAI is tasked with responding to public queries, the discrete AI should block any generated responses that could be considered controversial or harmful, allowing for a person to step into the conversation.

By treating language as a firewall, organisations can add an extra layer of security that is uniquely suited to the challenges posed by these new technologies. This approach ensures that both the input and output are monitored and filtered, offering a more comprehensive form of protection against both conventional and novel cyber threats.

Importance of a comprehensive governance strategy

A multilayered approach is essential for securing GenAI systems. This should encompass not just technical aspects but also ethical and legal considerations. A comprehensive governance strategy can help establish principles, guidelines and standards for the secure and responsible use of these technologies.

Collaboration is key. Cybersecurity professionals, technologists, and AI ethicists must work together to develop effective governance frameworks that address the unique challenges posed by these solutions.

Staff training and education

Staff training is crucial for navigating the unique challenges posed by GenAI. Updated cybersecurity training should be extended to all staff, not just technical teams, to raise awareness of new risks. Developing critical thinking skills is also key, especially when it comes to challenging and validating the outputs.

Staff should be trained to ask for references to sources and to understand the reasoning steps behind AI-generated content. Additionally, the importance of data quality and the use of trusted sources cannot be overstated, as these factors significantly influence the quality of outputs and help limit potential attacks. Interestingly, GenAI also offers opportunities for innovative cybersecurity approaches, making staff education a two-way street between learning and innovation.

Regular audits and AI ethics reviews

Audits and ethics reviews are essential tools in ensuring that these systems operate within acceptable boundaries, especially as they evolve over time. Regular assessments can help identify vulnerabilities and ethical concerns that are specific to these systems. Based on these findings, necessary additional controls and safeguards can be implemented to mitigate risks.

As GenAI systems become increasingly integral in the public sector, enhancing efficiency, productivity, and process compliance, the complexity of securing them also rises. Traditional cybersecurity measures, while foundational, are not sufficient to address the unique challenges posed by these dynamic technologies.

The concept of "language as a firewall'' introduces a paradigm shift in cybersecurity thinking. It emphasises the importance of thoughtfully crafted metaprompts and system prompts as a first line of defence. Furthermore, the role of a discrete AI system that reviews both input and output adds a comprehensive layer of security, safeguarding against both conventional and novel cyber threats.

Regular audits and ethics reviews remain crucial for identifying vulnerabilities and ethical concerns, while a multilayered governance strategy ensures that technical, ethical, and legal considerations are all accounted for. Collaboration among cybersecurity professionals, technologists, and AI ethicists is indispensable in developing robust governance frameworks.

In this evolving landscape, cybersecurity professionals must continually adapt their skills and strategies. A proactive approach that includes linguistic precision, real-time monitoring, regular audits, and comprehensive governance is essential for the secure and responsible use of GenAI systems. The field must remain agile, continually learning and adapting to stay ahead of emerging challenges in this rapidly advancing technological frontier.

Jay Bangle's avatar

Jay Bangle

Chief Technology Innovation Officer

Jay leads all technology and engineering delivery at TPXimpact. He helps organisations improve their services, experiences and outcomes, including collaborating with senior officials to navigate the opportunities and ethical implications of AI.

Contact Jay

Our recent insights

Transformation is for everyone. We love sharing our thoughts, approaches, learning and research all gained from the work we do.

Bill Data

Making data deliver for public services

The government must prioritise sharing, AI management, and public trust to reach its data potential.

Keeping your systems on track with digital MOTs

Outdated tech can hold back organisations. Learn how digital MOTs can assess and future-proof your systems.

The NHS shouldn’t be using AI…Yet

Artificial intelligence could transform our health service, but we need to act now so it can effectively harness its benefits