Artificial Intelligence (AI) is drawing considerable media attention as its use accelerates and it permeates nearly every aspect of our lives.
Cybersecurity is no exception.
A study by the European police agency Europol and a leading security provider identified how cybercriminals are already using AI to make their attacks more prevalent and effective, and the many ways AI will power cybercrime in the future. It is critically important for all enterprises, especially smaller companies with minimal cybersecurity resources, to stay abreast of how AI is amplifying the seriousness of these threats and how “black-hat” actors are putting it to use.
The goal of this blog is to provide such a perspective. The thoughts and examples provided are not meant to be exhaustive but to spur creative ideas on how machine-driven action, which underpins AI, creates completely new threat capabilities from common threats.
Recent cybersecurity research provides a list of some of the top common cybersecurity threats:
- Phishing
- Business email compromise
- Crime-as-a-service
- Supply-chain attacks
- Cloud attacks
- Attacks on data centers
- Ransomware
- IoT attacks
- Insider threats, including deepfake employees
- Disinformation campaigns
Let’s delve into the growing role of AI via each threat:
OpenAI’s ChatGPT has changed the fundamental nature of phishing attacks. Gone are the days of poorly written, inauthentic “Nigerian Prince” emails, which became late-night talk show comedy showpieces and were instantly deleted by everyone.
ChatGPT, built on a generative language model (GPT-3 at the time of writing), produces fluent, personalized emails containing references that are relevant to the recipient, making them extremely convincing and hard to dismiss. This level of content sophistication also enables the emails to bypass most spam filters, even the more stringent rules recently enacted by Google and other top cloud-based email providers.
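To see why, consider the toy keyword-based filter below. This is a minimal sketch in which the keyword list, weights, and sample messages are illustrative assumptions rather than any real provider's rules; the point is that crude lexical signals light up on a classic scam and miss fluent, context-aware phishing entirely.

```python
# A toy keyword-based spam filter. The signal list and weights are
# hypothetical; real filters are richer, but the failure mode is the same.
SPAM_SIGNALS = {"prince": 3, "wire transfer": 2, "lottery": 3, "urgent!!!": 2}

def keyword_spam_score(text: str) -> int:
    """Score an email body by summing the weights of crude spam keywords."""
    body = text.lower()
    return sum(weight for phrase, weight in SPAM_SIGNALS.items() if phrase in body)

classic_scam = "I am a Nigerian prince and need an urgent!!! wire transfer."
ai_phish = ("Hi Dana, following up on the Q3 vendor consolidation thread with "
            "Mark: please release the payment to the new account by 3 pm.")

print(keyword_spam_score(classic_scam))  # 7: flagged
print(keyword_spam_score(ai_phish))      # 0: sails through, despite being a scam
```

Modern filters use far richer signals than keywords, but fluent AI-generated text shrinks exactly the lexical gap they have historically relied on.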
Here is a hypothetical example: a finance executive at a company receives an email containing a highly personalized discussion thread between the CEO and another company, culminating in a request to transfer money quickly. The time sensitivity, the senior-level attention, and the sheer plausibility of the thread put tremendous pressure on the executive to act immediately.
What makes it even more worrying is that these AI-generated emails can carry malicious links or attachments that, if clicked, can exploit the vulnerabilities found periodically in Microsoft Exchange.
According to the FBI, gaining access to a business email account typically happens via:
Domain spoofing: a fake website or email domain that appears to belong to a legitimate partner or vendor fools people into trusting the source.
While building a website has always been easy, turnkey products such as Wix.com now provide plug-and-play options to all types of users. With AI, the sophistication of these websites can grow exponentially. Content on the site, both text-based and visual, is much deeper, lending an appearance of authenticity that builds trust with users and encourages them to visit, engage, and hand over their email address and other personal information. Over time, emails sent from this domain directly to the user can be used to compromise the account.
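One tell that often survives this sophistication is the domain itself: spoofed domains tend to sit a character or two away from the real one. Below is a minimal sketch of catching such near-misses with an edit-distance check; the trusted-domain list is a hypothetical example, and production systems would also check homoglyphs, certificates, and email authentication records (SPF/DKIM/DMARC).

```python
# Flag sender domains within a small edit distance of a trusted domain.
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

TRUSTED = {"acme-supplies.com", "bigbank.com"}  # hypothetical vendor list

def looks_spoofed(sender_domain: str, max_dist: int = 2) -> bool:
    """A near-miss of a trusted domain (but not an exact match) is suspicious."""
    return any(0 < edit_distance(sender_domain, d) <= max_dist for d in TRUSTED)

print(looks_spoofed("acme-suppiles.com"))  # True: two transposed letters
print(looks_spoofed("acme-supplies.com"))  # False: the genuine domain
```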
Social engineering: manipulation using fear, love, greed, and other emotions to fool people into providing confidential information.
Deepfakes on LinkedIn are a prime example of this. Thanks to AI-generated personas, the LinkedIn user sees what appears to be a real person with real credentials, frequently connected to contacts they already know. The profile seems legitimate, but once a direct connection is established, it is only a matter of time before sensitive information is requested, and provided.
Compromised accounts: using password cracking or malware to break into an account.
Strong passwords used to be one of the top recommended safeguards for keeping email accounts secure. On their own, they no longer suffice: AI-enabled password-cracking programs can generate and test candidate passwords at enormous speed, breaking into weakly protected accounts in a minuscule amount of time.
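The underlying arithmetic makes the point starkly. The sketch below compares keyspace size against a guess rate; the throughput figure is an assumption for illustration, and real AI-assisted crackers do even better by trying likely passwords first instead of brute-forcing the whole space.

```python
GUESSES_PER_SECOND = 1e11  # hypothetical cracking-rig throughput (assumption)

def seconds_to_exhaust(charset_size: int, length: int) -> float:
    """Worst-case time to brute-force every password of a given length."""
    return charset_size ** length / GUESSES_PER_SECOND

# 8 lowercase letters: the entire keyspace falls in about two seconds.
print(f"{seconds_to_exhaust(26, 8):.1f} s")  # ~2.1 s

# 14 characters drawn from ~95 printables: roughly 1.5e9 years at the same rate.
print(f"{seconds_to_exhaust(95, 14) / 3.15e7:.1e} years")
```

Length, far more than cleverness, is what still buys time against automated guessing, which is why long passphrases and multi-factor authentication have displaced "strong passwords" as the baseline recommendation.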
Emerging technologies such as no-code/low-code allow hackers with limited technical knowledge to develop programs that carry out cyberattacks. AI capabilities sit at the other end of the spectrum: they allow skilled cybercriminals to improve the efficiency of their crime-as-a-service business models and sell their services to these less sophisticated hackers.
Rather than focusing their effort and time on coordinating full attacks, crime-as-a-service vendors can specialize in one type of service and sell these capabilities to non-technical hacking communities, greatly increasing their reach, frequency, and results.
For example, a specialist in developing ChatGPT-generated emails can “sub-contract” this expertise to other, less experienced hackers, sparing them baseline work that would otherwise take significant learning time.
Think of this as an illicit B2B subscription-based SaaS business. The solution offered is the ability to harness the power of AI, and the distributors are the subscription-paying hackers.
These days, thousands of companies use the same technology vendors. If attackers can penetrate the software underpinning a vendor’s supply-chain product, they immediately gain access to a large number of corporate targets: by installing malicious code into the software, they can easily penetrate every company that purchases the technology from that vendor.
For example, attackers can exploit vulnerabilities in a vendor’s software to embed an AI-enabled, trojanized compiler in the build process. The compiler does its normal job, automatically translating code written in one language into another, but silently inserts malicious code into the translation it produces. The process is entirely invisible to those who purchase the technology, leaving dormant malware across a wide swath of companies, waiting to be exploited.
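Defenses here are hard precisely because the poison enters upstream, but one basic layer is still worth sketching: verifying that a delivered artifact matches the digest the vendor published out-of-band. This catches tampering in transit or on a compromised mirror; it cannot catch a build chain trojanized before the digest was computed, which is why reproducible builds and signed provenance exist. The expected digest below is a placeholder.

```python
import hashlib

# Digest the vendor published out-of-band -- placeholder value for illustration.
EXPECTED_SHA256 = "<vendor-published sha256 digest>"

def sha256_of(path: str) -> str:
    """Stream the file so large artifacts need not fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_artifact(path: str) -> None:
    """Refuse to deploy an artifact whose digest does not match."""
    digest = sha256_of(path)
    if digest != EXPECTED_SHA256:
        raise RuntimeError(f"checksum mismatch for {path}: got {digest}")
```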
Migrating from an on-premise IT architecture to the cloud essentially means a company has moved all of its locally held data online. This includes private employee HR information, vendor relationships, and customer records, among other items. If the cloud network hosting this information is compromised, so is the data stored there.
For example, a significant cloud-based AI cyberattack reportedly hit TaskRabbit, an online marketplace for freelance laborers. The attack used a large botnet directed by an AI-based load-balancing program to mount a highly coordinated DDoS attack on TaskRabbit’s cloud servers. As a result, 3.75 million website users had their TaskRabbit accounts compromised, and the AI engine automatically pulled Social Security numbers and bank account details from the user data. The attack was so severe that the entire site had to be taken offline.
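No single control stops a coordinated DDoS, but one standard layer is per-client rate limiting. Below is a minimal token-bucket sketch; the rates are illustrative assumptions, and real cloud defenses pair this with upstream traffic scrubbing and anomaly detection.

```python
import time
from collections import defaultdict

RATE = 5.0    # tokens refilled per second per client (assumption)
BURST = 10.0  # maximum bucket size, i.e., the allowed burst (assumption)

# client -> (tokens remaining, timestamp of last update)
_buckets = defaultdict(lambda: (BURST, time.monotonic()))

def allow_request(client_ip: str) -> bool:
    """Return True if this client has budget left, False to drop or throttle."""
    tokens, last = _buckets[client_ip]
    now = time.monotonic()
    tokens = min(BURST, tokens + (now - last) * RATE)  # refill since last call
    if tokens < 1.0:
        _buckets[client_ip] = (tokens, now)
        return False
    _buckets[client_ip] = (tokens - 1.0, now)
    return True
```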
Given the volume and value of the information stored in data centers, sophisticated attackers are known to maintain a long-term presence there, sometimes for months at a time, without being detected. They move slowly and cautiously to evade traditional security controls, often targeting specific individuals or activities within the data center.
These attackers use AI programs to learn the dominant communication channels and the best ports through which to move, and this learning ability lets them spread innocuously through the data center’s digital environment, hidden in plain sight. Malware inserted by these AI programs is then used to analyze the vast volumes of data processed there at machine speed, rapidly separating the valuable data sets from the worthless ones.
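The classic countermeasure is the mirror image of the attacker's approach: baseline what normal traffic looks like, then flag deviations. The sketch below assumes a hypothetical log of host-to-port connections; real systems baseline far richer features (timing, volume, peer sets) with statistical or ML models.

```python
from collections import defaultdict

# host -> set of ports it normally uses, learned during a trusted window
baseline = defaultdict(set)

def learn(host: str, port: int) -> None:
    """Record normal behavior during the observation window."""
    baseline[host].add(port)

def is_anomalous(host: str, port: int) -> bool:
    """After the window, any new host/port pair deserves a closer look."""
    return port not in baseline[host]

# Learning window: the app server talks to the database and the web tier.
learn("app-01", 5432)
learn("app-01", 443)

print(is_anomalous("app-01", 5432))  # False: routine database traffic
print(is_anomalous("app-01", 445))   # True: sudden SMB traffic is a red flag
```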
According to multiple research organizations, publicly available AI programs and talent will contribute to rapid growth in ransomware. Breaching of networks via AI-crafted phishing campaigns, as well as utilizing AI to automate attacks, are the two main drivers of this growth.
It is important to note that, at present, most cybercriminal enterprises do not have in-house AI expertise. To compensate for this deficiency, these enterprises will take the quickest path to success by hiring AI talent with some of their previously gained ransom money.
IoT devices are vulnerable to cyberattacks due to a combination of their multiple attack surfaces, the relative newness of the category, and lax security requirements; as a class, they ship with numerous security vulnerabilities.
When it comes to hiring, a core security challenge for any organization is distinguishing genuine candidates from fraudsters merely pretending to be interested. With AI-generated deepfakes, that line has blurred significantly.
Once you let someone into the company, they’re automatically an insider threat.
The deepfake threat is very real. The FBI has issued a Public Service Announcement warning corporate America about the seriousness of deepfake employees. Onboarding a deepfake posing as an employee can ultimately lead to damaging cyberattacks, data theft, and lasting harm to the company’s reputation.
The most obvious examples are propaganda campaigns involving social media users or other communication channels, which are effective ways to spread false information, ultimately affecting the behavior of those on the receiving end. AI is perfect for these sorts of campaigns.
While AI is having a significant impact on common cybersecurity threats, society’s response is still in a nascent phase. Remember, this blog is simply meant to provide representational ideas of how this technology will evolve.
While regulation of AI by federal, state, and local officials is on its way, the best thing for organizations to do in the meantime is to deploy effective cybersecurity tools and enhance cybersecurity training for staff so they can spot attacks involving the use of AI.
To take a free Chief Outsiders AI Maturity Assessment for your organization, click here.