Artificial intelligence (AI) has become a hot topic lately. Thousands of articles have been written about it, countless books published, and probably just as many training courses have been held. We're well aware of the benefits AI brings to cybersecurity, but are we fully conscious of the risks it poses?
AI techniques have been in development for years, with their applications expanding rapidly, especially as computers have become more powerful and communication between devices has improved. Academic research on AI frequently focuses on areas like genetic algorithms, data mining, neural networks, and deep learning. However, AI also presents significant challenges for those working in cybersecurity.
A lot is being said about how AI can be used to combat cyberthreats, particularly in solving complex problems, accounting for potential risks, and preparing organisations to adopt new tools. Yet, much less attention is given to the vulnerabilities of AI systems themselves—specifically, how they can be attacked or misused.
Beware of Your Smart Appliances
One growing threat is the use of AI to interfere with social networks. Specially programmed bots, powered by AI, can infiltrate social media groups or accounts, aiming to shape opinions or steal personal data for criminal purposes. This kind of activity has been particularly noted in Poland, with links to Russian influence campaigns. There are now chatbots that can convincingly mimic human conversation, tricking individuals into revealing sensitive personal information. It’s not hard to see how this data could then be used to commit fraud or other crimes.
Another concern is how AI can help prioritise targets for cyber-attacks through machine learning. By analysing large datasets, AI can pinpoint victims with greater precision, factoring in variables such as personal wealth or willingness to pay based on online behaviour. This enables the creation of detailed profiles, making potential targets easier to exploit.
These days, finding a device that isn't connected to the internet is becoming a rare feat. Almost everything is ‘smart’ now — whether it's toys, TVs, fridges, washing machines, or ovens. Many of these devices are capable of recording audio, integrating data, and connecting with other appliances, making them prime targets for cyber-attacks.
A couple of examples stand out. Nearly every smartphone comes equipped with a voice assistant that has high-level access to system resources and private information. Hackers can exploit this by issuing commands to a nearby smart speaker or assistant. Worse still, if such a speaker is connected to the internet, an attacker could exploit vulnerabilities to make it play back malicious audio files hosted at any web address of their choosing.
Hackers Don’t Sleep
Hackers are increasingly harnessing advances in artificial intelligence to launch more sophisticated cyber-attacks. Common examples include Distributed Denial of Service (DDoS) attacks, Man-in-the-Middle (MITM) attacks, and DNS tunnelling. DDoS attacks aim to disable a targeted computer system or network by overwhelming it with requests from multiple sources at once. MITM attacks involve intercepting and tampering with online communications without the knowledge of either party. Meanwhile, DNS tunnelling is used to bypass security protocols or conceal illicit network traffic.
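To make the DNS tunnelling example a little more concrete, below is a minimal, purely illustrative sketch of the kind of heuristic defenders often use to spot it: tunnelled data tends to appear as unusually long or random-looking subdomains in DNS queries. The thresholds and names here are assumptions for illustration, not a production detection rule or any particular vendor's method.

```python
# Illustrative sketch: flag DNS query names that look like tunnelling.
# Thresholds (max_label_len, entropy_threshold) are assumed values, not tuned rules.
import math
from collections import Counter

def shannon_entropy(text: str) -> float:
    """Bits of entropy per character in the given string."""
    counts = Counter(text)
    total = len(text)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

def looks_like_tunnelling(query_name: str,
                          max_label_len: int = 40,
                          entropy_threshold: float = 4.0) -> bool:
    """Heuristic: very long or near-random subdomain labels are suspicious."""
    labels = query_name.rstrip(".").split(".")
    longest = max(labels, key=len)
    return len(longest) > max_label_len or shannon_entropy(longest) > entropy_threshold

# An encoded-data-style subdomain versus an ordinary lookup.
print(looks_like_tunnelling("dGhpcyBpcyBleGZpbHRyYXRlZCBkYXRhMDAxMjM0NTY3.evil.example"))  # True
print(looks_like_tunnelling("www.example.com"))  # False
```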
One of the most troubling aspects of AI in malware is its capacity to learn from previous detection events. Once an AI-based malware identifies what triggered its detection, it can adapt and alter its behaviour to evade similar identification in future. A notable example occurred in 2015, when AI was used to craft emails capable of bypassing sophisticated spam filters. These targeted email attacks rely heavily on advanced social engineering tactics to succeed.
Criminals already have access to vast amounts of data on organisations that could be their next victims. Consider how much information is freely available on social media profiles of senior executives and key financial personnel, as well as from official websites, news reports, travel schedules, data leaks, and even insiders. This wealth of data provides the raw material needed to train machine learning algorithms, enabling attackers to target their victims with far greater precision.
The risks posed by AI in cybersecurity are all too real. In 2017, one of the first open-source AI tools capable of hacking into web application databases was introduced. Tools like this accelerate password cracking by improving on traditional methods, which work by hashing large numbers of candidate passwords and comparing them against a stolen hash. While those approaches have long been effective, neural networks have significantly improved both the speed and accuracy of password guessing.
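For readers unfamiliar with why hash comparison matters here, the short sketch below shows the basic guess, hash, and compare loop that such tools accelerate. It is a toy, benign audit against a made-up hash and wordlist; the AI element described above lies in generating better-ordered guesses, which this example deliberately omits.

```python
# Illustrative sketch of the hash-and-compare step in password auditing.
# The "leaked" hash and the wordlist are invented for demonstration only.
import hashlib

LEAKED_HASH = hashlib.sha256(b"sunshine1").hexdigest()  # stand-in for a stolen hash

def audit_against_wordlist(leaked_hash: str, wordlist: list[str]) -> str | None:
    """Return the matching password if it appears among common guesses."""
    for guess in wordlist:
        if hashlib.sha256(guess.encode()).hexdigest() == leaked_hash:
            return guess
    return None

common_guesses = ["123456", "password", "qwerty", "sunshine1"]
print(audit_against_wordlist(LEAKED_HASH, common_guesses))  # prints "sunshine1"
```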
AI is not limited to digital threats either — it can also pose physical security risks. For instance, AI can be used to automate attacks or cause malfunctions in autonomous vehicles and other physical systems integrated with the Internet of Things (IoT) and its industrial counterpart, the Industrial Internet of Things (IIoT). These systems are designed to streamline data flow and improve efficiency, but unfortunately, they can also be exploited to cause harm.
Did you know?
One of our main areas of activity is countering cybercrime and enhancing cybersecurity. Since 2021, we have been coordinating CYCLOPES, the European network of practitioners fighting cybercrime. We are also a partner in the EU-funded CYRUS project, which focuses on cybersecurity in the industrial sector. Additionally, we have developed our own PPHS Cybersecurity Standard. Learn more at this link.
Today Versus Tomorrow
Artificial Intelligence (AI) is proving to be a highly effective tool for cybercriminals due to its ability to learn from the present and predict future behaviours. As a result, terms like ‘AI-Crime,’ ‘malicious AI,’ and ‘malicious use and abuse of AI’ have emerged. While not all of these activities are currently classified as criminal under existing laws, they pose significant threats to the security of individuals, organisations, and public institutions. Malicious misuse refers to the use of AI with harmful intentions, as well as attacks aimed at undermining the very systems that rely on AI.
In February 2019, experts from academia, law enforcement, defence, government, and the private sector came together at University College London for a workshop titled ‘AI & Future Crime.’ The goal was to identify how AI might be abused in the future. Participants pointed out that AI-based crime could overlap with traditional cybersecurity threats and cybercrime, but could also present entirely new dangers. Some of these risks extend existing illegal activities, while others represent novel forms of criminal behaviour. Crucially, these different avenues are not mutually exclusive.
AI can, of course, be used to facilitate traditional crimes. It can be deployed to predict vulnerabilities in individuals or institutions, generate false content for blackmail or reputation damage, and even carry out tasks that humans are either unwilling or unable to do—whether due to danger, physical limitations, or slower reaction times.
Furthermore, AI itself can become a target. Criminals may attempt to circumvent the security measures embedded within AI systems to avoid detection or prosecution. In some cases, the goal may be to make critical systems fail, thereby undermining public confidence in their reliability. Such actions can weaken society’s sense of security and erode trust, ultimately fraying the bonds between people.
At this stage, AI cannot do everything. Ironically, it is often the overestimation of AI’s capabilities that leads to successful criminal exploitation. For instance, a novice investor might be duped into believing that AI can accurately predict stock market changes. Unfortunately, the realisation that this trust was misplaced often comes too late — when the damage has already been done.
AI Crime as a Service
The extent to which AI can enhance criminal activity largely depends on how deeply the target is integrated within the AI ecosystem. Naturally, AI is far more suited to participating in sophisticated crimes like bank fraud than in something as low-tech as a pub brawl. This is especially relevant in today’s world, where modern society relies heavily on complex computational networks — not just for finance and commerce, but also for communication, politics, media, work, and social interactions.
Unlike traditional forms of crime, digital criminal techniques can be easily shared, replicated, and even sold. This has led to the rise of ‘crime as a service,’ where individuals seeking to break the law can outsource the more complex aspects of their AI-powered activities. The ability to market and sell criminal methodologies means that even those with limited technical knowledge can exploit AI for illegal purposes.
Unfortunately, the commercialisation of these specialisations — and the emergence of AI crime as a service — has already become a reality. On various darknet forums, one can find not only increasing discussions about using AI-based tools for criminal purposes but also concrete offers to carry out illegal activities. However, it is crucial to remember that this battle is ongoing. Just as criminals are using AI to their advantage, technological advances also provide law enforcement agencies with the means to stay ahead, taking two steps forward for every move made by those operating outside the law.
Developed on the basis of a paper by Dr. Krzysztof Jan Jakubski entitled „Niebezpieczna Sztuczna Inteligencja.” (EN: ‘Dangerous Artificial Intelligence’) published in the special issue of the publication „Przestępczość Teleinformatyczna 2023” (EN: ‘Teleinformatics Crime 2023’).