Uncovering the Cybersecurity Risks of AI

Generative AI tools have heightened cybersecurity risks by lowering the barrier to committing ransomware, spear phishing, and other cybercrimes.
3 min read

AI has provided threat actors with the opportunity to execute cybercrimes at scale. Ransomware is back on the rise, phishing attacks have become far more realistic, and cybercriminals no longer need to be expert hackers or computer geniuses. These threats may sound daunting, but there are actionable ways to protect yourself against the cybersecurity risks of AI.

Ransomware and the Use of AI

Over the past few years, ransomware has become less effective, with a 40% decline in the amount of money being paid. Additionally, about 59% of ransomware victims refuse to pay the ransom, making this a less lucrative crime. However, generative AI will make it easier for threat actors to scale cyberattacks, reaching a larger group of people at a faster rate and boosting their chances of scoring a ransom payment.

Cybersecurity Risks of AI Can Lead to Physical Risks 

AI has enabled even the most unskilled coders to conduct cyberattacks, building on the democratization of cybercrime that third-party Ransomware-as-a-Service (RaaS) previously prompted. AI tools like ChatGPT can write code and create malware for cyberattacks, and this capability could even facilitate physical risks, such as manipulating heavy machinery through operational technology. Another AI cybersecurity concern involves the vulnerability of AI systems themselves to data manipulation, potentially leading to biases or the spread of false information.

WormGPT, a Volatile Tool Causing Cyber Risk

WormGPT is a generative AI tool sold on a subscription basis and used for malicious ends. Because its training data reportedly included malware-related material alongside other diverse sources, the software is extremely volatile and can be turned to all kinds of malicious intent, causing tremendous cyber risk. It has no ethical boundaries or limitations, which is drawing in black-hat hackers (those who break into computers with malicious intent) and amateur coders looking to commit cybercrimes.

[Image] An example of how threat actors could instruct WormGPT to generate an email intended to pressure an unsuspecting account manager into paying a fraudulent invoice. (Source: SlashNext)

WormGPT and Phishing at the Root of AI Security Risks

It has become increasingly popular amongst cybercriminals to use WormGPT to create phishing emails. For example, the software has been used to write emails that are not only remarkably persuasive but also strategically cunning, showcasing its potential for sophisticated phishing and business email compromise (BEC) attacks.

This development is extremely detrimental because it makes catching these phishing attempts that much more difficult. The issue is further compounded by the rising cost of phishing breaches, with some reaching $5 million. Cybercriminals now have more access than ever to generative AI tools that help them conduct phishing campaigns cleverly, and entire forums are dedicated to discussing ways to create extremely effective and convincing phishing emails using WormGPT, which will heavily increase the incurred cost of phishing breaches.
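To see why this matters for detection, consider the kind of simple keyword heuristic that legacy mail filters often rely on. The sketch below is a minimal illustration in Python, not any vendor's actual filter; the phrase list, the sample recipient, and the invoice number are invented for the example.

    import re

    # Crude urgency/pressure phrases of the sort legacy filters score on.
    SUSPICIOUS_PHRASES = [
        r"urgent wire transfer",
        r"verify your account immediately",
        r"payment overdue",
        r"click the link below",
    ]

    def naive_phishing_score(body: str) -> int:
        """Count how many known-bad phrases appear in an email body."""
        return sum(1 for p in SUSPICIOUS_PHRASES if re.search(p, body, re.IGNORECASE))

    # A clumsy, template-based scam trips the filter...
    clumsy = "URGENT WIRE TRANSFER required. Please verify your account immediately."
    # ...while a fluent, AI-style request for the same fraudulent invoice
    # reads like routine correspondence and scores zero.
    polished = ("Hi Dana, following up on invoice #4471 from last quarter. "
                "Finance flagged it during this week's reconciliation. Could you "
                "confirm the payment details when you get a moment? Thanks!")

    print(naive_phishing_score(clumsy))    # 2 -> flagged
    print(naive_phishing_score(polished))  # 0 -> slips through

The point is that template-based scams trip on telltale phrases, while fluent AI-generated text reads like ordinary business correspondence and sails past this class of check.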

Other Cyber Risks of WormGPT

Due to the precise nature of WormGPT-generated phishing emails, high-profile individuals are even more exposed to AI cybersecurity risks. Nation-state actors can now use WormGPT to target high-profile individuals with highly convincing spear phishing emails, gaining access to valuable information. Additionally, AI tools such as WormGPT have increased the popularity of Distributed Denial-of-Service (DDoS) attacks: through robotic automation, it is easier than ever for cybercriminals to automate attacks that take down established websites. Active Directory (AD) environments are another easy target for AI tools, since they are often mismanaged.
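For context on the defensive side, the standard first-line control against automated request floods is rate limiting. Below is a minimal token-bucket sketch in Python; it is a generic illustration of the technique, not a recommendation of any particular product, and the rate and capacity values are arbitrary.

    import time

    class TokenBucket:
        """Allow `rate` requests per second with bursts up to `capacity`.
        Requests arriving when no tokens remain are rejected, which blunts
        the kind of automated floods described above."""
        def __init__(self, rate: float, capacity: int):
            self.rate = rate
            self.capacity = capacity
            self.tokens = float(capacity)
            self.last = time.monotonic()

        def allow(self) -> bool:
            now = time.monotonic()
            # Refill tokens in proportion to elapsed time.
            self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1:
                self.tokens -= 1
                return True
            return False

    bucket = TokenBucket(rate=5, capacity=10)   # 5 req/s, burst of 10
    allowed = sum(bucket.allow() for _ in range(100))
    print(f"{allowed} of 100 burst requests allowed")  # roughly the burst size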

Mitigating Cybersecurity Risks of AI: 3 Tips to Give Insureds

  1. Test and audit any AI tools used in the workplace to ensure they are up to date, which helps avoid security and privacy issues. Additionally, carefully consider the amount of personal data shared through automation, and set strict guidelines dictating what personal information about clients and employees may be shared (see the sketch after this list).
  2. Train employees on AI risk management, how to detect potential phishing emails, and the dangers of downloading unsolicited software that could contain AI-created malware.
  3. Finally, it is of paramount importance to outline an incident response plan so that any AI-related cybersecurity incident can be handled in a timely, efficient manner.
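As a concrete illustration of the guideline in tip 1, below is a minimal Python sketch of a pre-submission check that strips obvious personal data from prompts before they are sent to a generative AI tool. The patterns and the redact_pii helper are hypothetical and deliberately simple; a real deployment would use a vetted PII-detection library with far broader coverage.

    import re

    # Hypothetical, minimal patterns for common PII; illustrative only.
    PII_PATTERNS = {
        "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
        "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    }

    def redact_pii(prompt: str) -> str:
        """Replace anything matching a PII pattern before the prompt
        leaves the organization's boundary."""
        for label, pattern in PII_PATTERNS.items():
            prompt = pattern.sub(f"[REDACTED-{label.upper()}]", prompt)
        return prompt

    prompt = "Draft a renewal letter for jane.doe@example.com, SSN 123-45-6789."
    print(redact_pii(prompt))
    # Draft a renewal letter for [REDACTED-EMAIL], SSN [REDACTED-SSN].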

 
