Security is a lonely vocation; little wonder many security teams feel overworked as their defenses are pummeled by hackers and malware. After all, it only takes a single successful exploit to create havoc.
Unfortunately, this security asymmetry is about to get worse with AI. According to Andy Thompson, advisor and evangelist at CyberArk Labs, many things in the cybersecurity world have been moving at "a fairly steady rate." "But with the advancements of AI, it's just taking (cybersecurity) to a whole other level."
To be fair, AI offers benefits to both attackers and defenders. But speed is not on the defenders' side. "Cybersecurity has always been a game of cat and mouse. Now (with AI) the game of cat and mouse (is played) at an exponential light speed," says Thompson.
Vishing gets serious
One way attackers are raising their game is vishing, or voice phishing, familiar to anyone who has received a call asking for personal or sensitive information.
From the attacker's side, this takes time and effort. So, many vishing attempts have targeted high-profile personalities or people who have assets but are not tech-savvy (like retirees).
With generative AI, attackers can scale their attacks using deepfakes and work out which ones are most likely to fool you.
"So what we're seeing is almost a programmatic way to do a phishing campaign that is scalable," Thompson describes. He adds that generative AI even allows vishers to do A-B testing and improve the success rate of vishing campaigns.
"If you think about it, regular phishing email campaigns roughly have a 5-10% success rate. AI's just going to ramp up that success rate. I don't know what that success rate will be, but I can sure guarantee it's going to be well over 10% for sure," says Thompson.
It is why Thompson thinks the offensive capabilities of AI are “just really, really scary right now.”
AI running loose
Another concern is shadow AI, where individuals or even entire departments use AI products that the company has not tested or the security team has not approved.
It's not a new issue; all companies face shadow IT challenges, especially when procurement and departmental needs are out of sync.
The challenge with generative AI is that you are introducing a learning algorithm that can be manipulated into generating erroneous output or sharing information with an adjacent data-ingesting model. Again, the time it takes for things to go wrong has shrunk considerably.
"It's genuinely a concern that whatever generative AI produces is a viable functional product. So, a lot of beta testing would be involved in validating the integrity of what AI is producing. You just have to weigh the ramifications," says Thompson.
Biometric bypass
Another area Thompson is concerned about (and equally excited by) is biometric bypass.
The concern stems from research at Tel Aviv University, which discovered a way to bypass a large percentage of facial recognition systems by faking a face (or rather, the vector of values that defines your facial attributes) using a generative adversarial network (GAN).
The researchers compared this vector against others in their identity database and "then slowly kind of tweaked the vectors so that they matched as many as possible and then used that to create a human-like face," continues Thompson.
After several iterations, the researchers created nine sets of nine images that could easily bypass biometric authentication. They figured out that just one set of nine images could bypass 60% of all the biometric authentication in their database.
"And if you think about it, from an attacker's perspective, 60% is a viable attack vector. Just with those nine images in hand, you can get through half the biometric processes out there," says Thompson.
This has significant implications for nations, especially in Asia, where biometrics and facial recognition are used extensively for citizen services and policing.
For now, it's all theoretical. But you can bet that some organized crime group or state actor is already putting this knowledge to work.
The future is poly
What is closer to reality is polymorphic malware. This malware uses a mutation engine to morph its code and change its appearance. URSNIF, VIRLOCK and BAGLE are examples of polymorphic malware.
These need a "fundamental level of knowledge" to create and use, says Thompson. However, "what we're seeing is that AI is being used in a unique, novel way that we haven't seen before in the usage of polymorphic malware," he explains.
ChatGPT just made this specialized polymorphic malware more accessible and dynamic. In one well-publicized case, researchers built BlackMamba, a Python executable that prompts ChatGPT's API to synthesize a keylogger that mutates with each API call, evading EDR filters. Another case involves malware using ChatGPT to generate the modules that carry out its malicious intent.
"In the past, we would define polymorphic malware as the code staying static, but the method in which it's encoded or encrypted differs each time. Now with generative AI, it's not just the encoding or the encryption that's changing, it's literally the functions, the variables, the processes, and the code itself is actually changing," describes Thompson.
The result: custom-created malware made explicitly for a specific endpoint. "What's really impressive is that the term polymorphic isn't just about obfuscation anymore," adds Thompson.
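To make the distinction concrete, here is a deliberately benign sketch of the classic approach Thompson describes: the payload (here, just a print statement) stays static, while a fresh random XOR key re-encodes it on every run, so the bytes on disk, and thus any static signature, change each time. This is an illustrative toy, not code from any real malware family; generative AI goes further by rewriting the payload logic itself.

```python
import os

# Benign stand-in payload. In classic polymorphic malware, this logic
# never changes; only its encoded representation does.
payload = b'print("hello from the payload")'

def xor_encode(data: bytes, key: int) -> bytes:
    """XOR every byte with a one-byte key (XOR is its own inverse)."""
    return bytes(b ^ key for b in data)

# Pick a fresh nonzero key each run: same behavior, different bytes on disk.
key = (os.urandom(1)[0] % 255) + 1
blob = xor_encode(payload, key)   # what a signature-based scanner would see
assert blob != payload            # the plaintext signature no longer matches

# A tiny "decryptor stub" restores and runs the unchanged payload.
exec(xor_encode(blob, key).decode())
```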
Fighting AI malware
So how do you fight malware or hackers that are armed with generative AI capabilities and only need to succeed once?
"Despite all the advancements in AI, some of the best practices that we've been preaching for years are more valuable than ever," says Thompson.
Start with awareness training, which Thompson says becomes even more important in a world of generative AI deepfakes. "If your CEO is reaching out to you out of the blue, you should try to verify this," highlights Thompson, who shared examples of fake WhatsApp messages and emails mimicking CEOs asking their finance teams for quick money deposits to close business deals.
Along with security awareness, vendors are looking at ways to identify deepfake audio and video. "But again, this is a cat and mouse game," says Thompson, adding that researchers already expect AI-generated and human voices to soon become indistinguishable.
"So, end-user awareness training is almost more important than any sort of technical control when it comes to at least vishing," says Thompson.
Another is defense in depth, a strategy that leverages multiple security measures to protect organizational assets. “It's more important than ever to have those additional compensating controls,” says Thompson.
Strangely, this is also an area where AI may be the best answer, which is one reason CyberArk is enhancing its own products with AI.
"Not only will it allow for more to be done with less headcount, but it also really allows us to have a more consolidated view of the situation and gives us a better visibility of our organization," says Thompson.
For example, Thompson points to the use of AI for vulnerability scanning, "so it can detect not just what applications are installed but potentially the dependencies as well."
"This is incredibly powerful from defensive and offensive perspectives," says Thompson.
Another area to consider is creating and adjusting security policies dynamically, again using AI.
"So if you have an application control policy, that's fairly static. We can dynamically add and remove policies as needed, based on what the AI determines as appropriate. So I do see AI being leveraged in a ton of different ways to protect organizations," says Thompson.
Lastly, Thompson urges companies and even policing organizations to do away with red tape and bureaucracy and recognize that generative AI will shape their future (whether they like it or not).
"There's a reason we have shadow IT—bureaucracy hinders progress. And I think that the organizations that say they aren't using generative AI, it's not that they're not; they're just not aware that their employees are doing it behind their backs. So I think what organizations ought to do is not deny the inevitable," says Thompson.
"They need to accept that generative AI is here to stay. It needs to be embraced rather than shunned; it's not going anywhere. We need to harness it now before the attackers do. Because let's face it, they may already have."
Winston Thomas is the editor-in-chief of CDOTrends and DigitalWorkforceTrends. He’s a singularity believer, a blockchain enthusiast, and believes we already live in a metaverse. You can reach him at [email protected].
Image credit: iStockphoto/Nils Jacobi