Skip to main content
Arrow Electronics, Inc.
Background
Article

Generative AI and Cybersecurity: The New Frontier

April 23, 2024 | Brett Lee-Price

The New Wild West

In January 2023, ChatGPT, the popular chatbot from OpenAI, was estimated to have reached 100 million monthly users in just two months. Since then, with the release and rapid rise of generative AI over the last seventeen months, the flurry of interest has been accompanied by an uncritical embracement of the technology as many find in them a way to aid them in their day-to-day activities.

Indeed, today, AI seems to be the thing to be into, with marketing teams running wild with AI and ML taglines and, in some ways, it makes sense. There has been a number of statistics floating around that estimate that in some industries, AI could help increase productivity by as much as 30 percentage. Consequently, many companies, including a large percentage of the Fortune 500, are presently incorporating AI models into their workflows.

However, while generative AI may indeed be a boon to many companies, it is equally a significant risk opening up a number of compliance and security quandaries, with issues coming from both internal and external sources.

 

Internal Challenges

Potential Data Leaks

The initial challenge that generative AI presents is how such systems have been utilised from within the organisation. The amount of potential confidential data and IP that is digested by these, specifically public, systems as employees utilise them to help with reports, copywriting, and other activities opens the possibility that such data can be more easily exfiltrated out of a company by opening up additional potential exit points.

Misconfigurations

For those companies that are rolling out internal AI Models and ChatGPT-like systems, the next significant issue is that of misconfiguration. Being that large language models (LLMs) are consuming large datasets in order to be trained to complete allotted tasks, misconfigurations can be absolutely devastating. Already stories have started emerging of sensitive content that being exposed to employees (and students!), including salary data, information around sales revenue, and acquisition information. All it takes is the inclusion of an incorrect folder, a payroll spreadsheet being saved in the wrong location, and so forth. Unfortunately, misconfigured LLMs can exponentially heighten the discovery of information that could be devastating in the wrong hands.

 

External Challenges

While the internal challenges presented above are not insignificant, it is the external challenges that will have a long-lasting impact upon the cybersecurity landscape well into the future. This is because whilst generative AI can be a force for good, it can equally be used for much more nefarious activity – namely assisting or outright perpetrating attacks.

“Born Good” LLMs Used Wrongly

With the advent of generative AI, it was only a matter of time before threat actors contemplated how platforms such as ChatGPT and Google’s Gemini could be utilised to assist them in their goals. Initially, this was attempted through trying to have the platforms do such tasks such as generate malicious code or expose data and system prompts. However, both OpenAI and Google implemented security measures and guardrails to limit what said platforms could generate. However, despite this, such guardrails have been shown time and again to be susceptible to being bypassed – and prompt injection seems to be but one of the latest issues at play.

“Born Malicious” LLMs

The recent appearance of several models and generative tools available for sale on the Dark Web particularly paints the future trajectory of cybersecurity threats, with AI Chatbots such as FraudGPT and DarkBART or generative password-cracking tools like PassGAN proliferating. DarkGemini, perhaps the most sophisticated of these malicious generative AI chatbot to-date, is able to generate a reverse shell, build zero-day malware, and do reverse image searches – all for an approximate $70 AUD monthly subscription. Whilst such models are still in their infancy regarding their capability, recent reported attacks, such as a recent PowerShell payload that was used in March, had all the traces of being AI-written.

However, before we simply come to the conclusion that threat actors are simply leveraging chatbots in order to create the weapons to help them do the attacks, it must be stated that with the weaponisation of generative AI and LLMs also comes the reality that it is increasingly also the AI being used to automate and enhance a large part of the overarching attack kill chain, automating anywhere from reconnaissance to exploitation. It is without doubt that the increasing sophistication of these types of attacks will evade even AI-based detection methods.

 

Summary

Generative AI is here to stay and regardless of the AI-maturity of a company, Cybersecurity professionals will need to be alert and aware of the threat and risks posed by AI from both an internal and external perspective. Ensuring that misconfigurations are avoided, access lists applied, and implementation of mitigation strategies for adversarial machine learning attacks will go a long way to reduce the increased attack surface that generative AI has created within companies.

Likewise, ensuring a right data security posture and implementation of a zero-trust architecture, in addition to a general defence in depth approach, will also mitigate much of the damage that a successful AI-assisted attack may cause within an organisation. However, like all things, there is never a silver bullet. Cybersecurity is a perpetual arms race, and generative AI is just now another element of it.