Four Strategies To Protect Against Generative AI-Powered Attackers

Forbes Technology Council

Atul Tulshibagwale is the Chief Technology Officer at SGNL, a leader in zero-standing access solutions.

We are all fatigued by the barrage of content about generative AI (GenAI), but don't let that fatigue become an opening for cybercriminals. Below, I will explain the significant new enterprise security threats GenAI is fueling and how CISOs can prepare for AI-powered attackers.

The Importance Of Identity And Failure Of MFA

As a result of cloud and business transformation, enterprises increasingly rely on user authentication to control access to enterprise services and applications. User authentication is the fundamental way enterprises validate the identity of a user.

Unfortunately, even enterprises that use edge security products (sometimes referred to as "secure access service edge," or SASE) are not safe. A compromised user identity can go undetected, or it can be leveraged to trick the edge security products themselves.

Phishing has long been a well-known attack vector for compromising a user's identity. To combat the rise of phishing attempts, many enterprises have implemented multifactor authentication (MFA). The problem is that many MFA solutions are not phishing resistant, as demonstrated in well-publicized hacks such as the MGM breach, which cost the company tens of millions of dollars. Users can be tricked into giving up the one-time codes that some MFA solutions depend on. In the era of GenAI, cybercriminals' ability to get away with these types of attacks is on the rise.

Three GenAI-Powered Threats To Be Aware Of

GenAI employs advanced algorithms to analyze and comprehend given input data. It then leverages this understanding to produce creative and contextually relevant outputs in various formats such as text, images, videos or even code. Cybercriminals are leveraging GenAI to attack organizations in increasingly sophisticated ways. Three specific GenAI threats to be aware of include:

• Multi-Modal Phishing: Think of this as the personalized "Pro Max" version of phishing. It may involve targeted, professionally written, AI-generated text in emails or text messages, coordinated with phone calls that impersonate trusted users. This improved conversational capability helps cybercriminals convince victims to do things they should not.

There is a great example of how multi-modal phishing works in this short clip of a 60 Minutes episode. After seeing it, you'll likely agree that despite our best attempts to avoid it, some identities are going to be compromised, and user authentication alone is an insufficient approach to protecting enterprise systems.

• Virtually Undetectable Malware: A lot of malware detection depends on finding unique sequences of bits (known as a signature) that are only known to exist in malware. The approach works because even when copycat attackers make minor changes to the malware, the signature typically remains unchanged, and therefore the malware is still detectable.

Using GenAI, however, attackers can generate new source code to create undetectable variants by the hundreds, rendering such signature-based methods ineffective. You can read about an example here. Such advanced malware can be used to steal access tokens that have already passed a phishing-resistant strong authentication check. With stolen access tokens, attackers can effectively compromise user identities, bypassing strong authentication entirely.
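The limitation of signature matching can be sketched in a few lines. This is a minimal illustration, not a real detection engine; the payload bytes and signature database are hypothetical, and real products use far richer heuristics than a single hash:

```python
import hashlib

# Hypothetical signature database: hashes of byte sequences known to be malicious.
KNOWN_SIGNATURES = {
    hashlib.sha256(b"malicious_payload_v1").hexdigest(),
}

def is_known_malware(sample: bytes) -> bool:
    """Signature check: flag a sample only if its exact hash is already known."""
    return hashlib.sha256(sample).hexdigest() in KNOWN_SIGNATURES

original = b"malicious_payload_v1"
# A regenerated variant with identical behavior but different bytes:
variant = b"malicious_payload_v2"

print(is_known_malware(original))  # True: exact match in the database
print(is_known_malware(variant))   # False: new bytes evade the signature
```

The point is that the check is tied to the bytes, not the behavior; every freshly generated variant starts with a clean slate.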

• Data Exfiltration Through GenAI: Enterprises are adopting GenAI to bolster their services, including support content. However, if the input (including prompts and training data) to the LLMs includes privileged information, it is possible for a user, or someone posing as one, to trick the language model into revealing that privileged information. See a few examples here.

How To Defend Your Enterprise Against GenAI Threats

Fortunately, you are not left defenseless against GenAI. There are things you can do to protect yourself and your organization.

1. Deploying Phishing-Resistant Strong Authentication: The good news is that choices are available today that can authenticate users in a phishing-resistant way. Technologies like passkeys keep the user experience simple while making authentication far less vulnerable to phishing attacks. However, this still leaves open the possibility of access token theft or social engineering, whereby legitimate users act on behalf of bad actors.
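The reason passkeys resist phishing is that the credential is cryptographically bound to the site's origin. The sketch below illustrates only that binding; it uses a symmetric HMAC as a stand-in for the asymmetric signature a real WebAuthn authenticator would produce, and all keys, origins and nonces are hypothetical:

```python
import hashlib
import hmac

# Stand-in for the passkey's private key (real passkeys use asymmetric crypto).
SECRET_KEY = b"device-private-key"

def sign_assertion(origin: str, challenge: bytes) -> bytes:
    """The authenticator signs over the origin it actually sees."""
    return hmac.new(SECRET_KEY, origin.encode() + challenge, hashlib.sha256).digest()

def verify(expected_origin: str, challenge: bytes, signature: bytes) -> bool:
    """The server only accepts assertions made for its own origin."""
    expected = hmac.new(SECRET_KEY, expected_origin.encode() + challenge,
                        hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

challenge = b"server-nonce-123"
# Legitimate login: the browser reports the real origin.
good = sign_assertion("https://bank.example", challenge)
print(verify("https://bank.example", challenge, good))    # True

# Phishing attempt: the assertion was produced for a look-alike origin.
phished = sign_assertion("https://bank-example.phish", challenge)
print(verify("https://bank.example", challenge, phished))  # False
```

Because the origin is part of what gets signed, a credential captured on a look-alike domain is useless against the real site; there is no one-time code for the user to hand over.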

2. Adopting Zero-Standing Access: If you assume that the advanced threats posed by phishing and malware will result in some user identities being compromised, avoid catastrophe by locking down what a logged-in user has access to at any given point in time. This is the realm of authorization, or access management. Access management has conventionally depended on static privileges that encompass everything users may need access to in their jobs.

Oftentimes, organizations fail to remove privileges even as users move between departments, causing permissions sprawl. In the face of GenAI threats, organizations must quickly move to dynamic access management, which grants users access only to the data they need to complete their current tasks.

This means that even if a user's identity is compromised, the attacker impersonating the user will have limited or no access. Through dynamic access management, you can dramatically reduce a bad actor's potential blast radius and minimize the damage to your company if you are breached.
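The contrast between static privileges and task-scoped access can be sketched as follows. This is a toy model under hypothetical names (users, resources and the task structure are all invented for illustration), not a real zero-standing-access product:

```python
from dataclasses import dataclass

@dataclass
class Task:
    """A unit of work that temporarily justifies access to one resource."""
    user: str
    resource: str
    open: bool = True

class DynamicAccessManager:
    """Grant access only while the user has an open task needing the resource."""

    def __init__(self) -> None:
        self.tasks: list[Task] = []

    def assign(self, user: str, resource: str) -> Task:
        task = Task(user, resource)
        self.tasks.append(task)
        return task

    def is_allowed(self, user: str, resource: str) -> bool:
        return any(t.user == user and t.resource == resource and t.open
                   for t in self.tasks)

mgr = DynamicAccessManager()
ticket = mgr.assign("alice", "billing-db")
print(mgr.is_allowed("alice", "billing-db"))  # True: open task justifies access
ticket.open = False                           # task completed
print(mgr.is_allowed("alice", "billing-db"))  # False: access expires with the task
```

Under a static model, "alice" would keep billing-db access indefinitely; here, the privilege evaporates the moment the justifying task closes, which is exactly what limits an attacker holding her credentials.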

3. Ensuring Only Public Data Is Input Into GenAI: Enterprises must strictly enforce policies that prevent confidential or privileged data from being used as input to GenAI (either as prompts or as training data). This prevents data exfiltration through GenAI. Some companies are limiting the scope of GenAI use to publicly available content, such as help articles or marketing materials, so that no proprietary information leaks to the AI model.
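One way to enforce such a policy is a gate that screens text before it ever reaches a prompt or training pipeline. The sketch below uses a few hypothetical patterns (a confidentiality label, a US SSN format, an API-key-like token); a production deployment would rely on a proper data-classification system rather than a hand-rolled regex list:

```python
import re

# Hypothetical markers of non-public data; real systems use classification tools.
CONFIDENTIAL_PATTERNS = [
    re.compile(r"\bCONFIDENTIAL\b", re.IGNORECASE),   # explicit label
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),             # US SSN format
    re.compile(r"\bsk-[A-Za-z0-9]{16,}\b"),           # API-key-like token
]

def safe_for_genai(text: str) -> bool:
    """Allow text into a prompt or training set only if no pattern matches."""
    return not any(p.search(text) for p in CONFIDENTIAL_PATTERNS)

print(safe_for_genai("How do I reset my password?"))        # True
print(safe_for_genai("CONFIDENTIAL: merger closes in Q3"))  # False: blocked
```

The gate is deliberately conservative: anything that trips a pattern is kept out of the model entirely, since data that never enters the LLM can never be exfiltrated from it.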

4. Investing In Employee Training: It is important to help employees understand how GenAI is changing the threat landscape so they are better equipped to keep the enterprise safe. Training should make employees aware that GenAI can potentially leak proprietary information and that it is also a dangerous tool in the hands of cyberattackers.



