
How AI Can Harm Cybersecurity


Artificial intelligence (AI) has become a hot topic lately. Thousands of articles have been written about it, countless books published, and probably just as many training courses have been held. We're well aware of the benefits AI brings to cybersecurity, but are we fully conscious of the risks it poses?

AI techniques have been in development for years, with their applications expanding rapidly, especially as computers have become more powerful and communication between devices has improved. Academic research on AI frequently focuses on areas like genetic algorithms, data mining, neural networks, and deep learning. However, AI also presents significant challenges for those working in cybersecurity.

A lot is being said about how AI can be used to combat cyberthreats, particularly in solving complex problems, accounting for potential risks, and preparing organisations to adopt new tools. Yet, much less attention is given to the vulnerabilities of AI systems themselves—specifically, how they can be attacked or misused.

Beware of Your Smart Appliances

One growing threat is the use of AI to interfere with social networks. Specially programmed bots, powered by AI, can infiltrate social media groups or accounts, aiming to shape opinions or steal personal data for criminal purposes. This kind of activity has been particularly noted in Poland, with links to Russian influence campaigns. There are now chatbots that can convincingly mimic human conversation, tricking individuals into revealing sensitive personal information. It’s not hard to see how this data could then be used to commit fraud or other crimes.

Another concern is how AI can help prioritise targets for cyber-attacks through machine learning. By analysing large datasets, AI can pinpoint victims with greater precision, factoring in variables such as personal wealth or willingness to pay based on online behaviour. This enables the creation of detailed profiles, making potential targets easier to exploit.
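To make the idea of automated target prioritisation concrete, here is a minimal sketch of how a scoring model of this kind operates. The feature names and weights are purely illustrative assumptions, not drawn from any real campaign; an actual attacker would learn such weights from scraped or leaked data.

```python
import math

# Illustrative feature weights (hypothetical values chosen for this sketch).
WEIGHTS = {
    "estimated_wealth": 1.8,   # normalised 0..1
    "past_payment": 2.5,       # 1 if the target has paid before
    "public_exposure": 0.9,    # volume of public personal data, 0..1
}
BIAS = -2.0

def priority_score(profile):
    """Logistic score in [0, 1]: higher means a more attractive target."""
    z = BIAS + sum(WEIGHTS[k] * profile.get(k, 0.0) for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))

victims = [
    {"name": "A", "estimated_wealth": 0.9, "past_payment": 1, "public_exposure": 0.8},
    {"name": "B", "estimated_wealth": 0.2, "past_payment": 0, "public_exposure": 0.1},
]
# Rank potential victims from most to least attractive.
ranked = sorted(victims, key=priority_score, reverse=True)
```

Even this toy model shows why profiling at scale is cheap: once the weights exist, ranking millions of profiles is a single pass over the data.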

These days, finding a device that isn't connected to the internet is becoming a rare feat. Almost everything is ‘smart’ now — whether it's toys, TVs, fridges, washing machines, or ovens. Many of these devices are capable of recording audio, integrating data, and connecting with other appliances, making them prime targets for cyber-attacks.

A couple of examples stand out. Nearly every smartphone comes equipped with a voice assistant that has high-level access to system resources and private information. Hackers can exploit this by sending commands to a nearby smart speaker or assistant. Worse still, if the device's speaker is connected to the internet, an attacker could exploit vulnerabilities to force the playback of malicious audio files from any web address they’ve chosen.

Hackers Don’t Sleep

Hackers are increasingly harnessing advances in artificial intelligence to launch more sophisticated cyber-attacks. Common examples include Distributed Denial of Service (DDoS) attacks, Man-in-the-Middle (MITM) attacks, and DNS tunnelling. DDoS attacks aim to disable a targeted computer system or network by overwhelming it with requests from multiple sources at once. MITM attacks involve intercepting and tampering with online communications without the knowledge of either party. Meanwhile, DNS tunnelling is used to bypass security protocols or conceal illicit network traffic.
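DNS tunnelling, mentioned above, works by smuggling data inside the labels of DNS queries. Defenders commonly flag it with heuristics such as unusually long, high-entropy subdomains. The sketch below illustrates that heuristic; the length and entropy thresholds are arbitrary values chosen for this example, not taken from any standard.

```python
import math
from collections import Counter

def shannon_entropy(s):
    """Bits of entropy per character in the string."""
    counts = Counter(s)
    n = len(s)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def looks_like_tunnel(hostname, min_len=20, min_entropy=3.5):
    """Flag names whose left-most label is long and high-entropy —
    a common (though not conclusive) sign of data hidden in DNS queries."""
    label = hostname.split(".")[0]
    return len(label) > min_len and shannon_entropy(label) > min_entropy

ordinary = looks_like_tunnel("www.example.com")
suspect = looks_like_tunnel("a9f3k27zq8slx04mvt6bw1cy.evil.example")
```

Real detection systems combine many such signals (query volume, record types, timing), since any single heuristic produces false positives.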

One of the most troubling aspects of AI in malware is its capacity to learn from previous detection events. Once an AI-based malware identifies what triggered its detection, it can adapt and alter its behaviour to evade similar identification in future. A notable example occurred in 2015, when AI was used to craft emails capable of bypassing sophisticated spam filters. These targeted email attacks rely heavily on advanced social engineering tactics to succeed.

Criminals already have access to vast amounts of data on organisations that could be their next victims. Consider how much information is freely available on social media profiles of senior executives and key financial personnel, as well as from official websites, news reports, travel schedules, data leaks, and even insiders. This wealth of data provides the raw material needed to train machine learning algorithms, enabling attackers to target their victims with far greater precision.

The risks posed by AI in cybersecurity are all too real. In 2017, one of the first open-source AI tools capable of hacking into web application databases was introduced. Tools like this accelerate the process of password cracking by enhancing traditional methods that compare multiple hash variations of a security code. While these approaches have been effective, neural networks have significantly improved both the speed and accuracy of password guessing.
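The core of the password-cracking approach described above is simple: hash each candidate guess and compare it to the stolen hash. What neural networks improve is not this loop but the quality of the guess list. The sketch below shows the traditional comparison loop; the passwords are invented for illustration.

```python
import hashlib

def sha256_hex(pw):
    """Hash a candidate password the way a leaked database might store it."""
    return hashlib.sha256(pw.encode()).hexdigest()

def dictionary_attack(target_hash, candidates):
    """Classic cracking core: hash each guess and compare.
    AI-based crackers keep this loop but generate far better
    candidate lists, learned from previously leaked passwords."""
    for guess in candidates:
        if sha256_hex(guess) == target_hash:
            return guess
    return None

leaked = sha256_hex("sunshine1")  # stand-in for a stolen hash
found = dictionary_attack(leaked, ["password", "letmein", "sunshine1"])
```

This is also why unique, long passwords matter: a learned guess generator only helps the attacker when the password resembles ones people have chosen before.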

AI is not limited to digital threats either — it can also pose physical security risks. For instance, AI can be used to automate attacks or cause malfunctions in autonomous vehicles and other physical systems integrated with the Internet of Things (IoT) and its industrial counterpart, the Industrial Internet of Things (IIoT). These systems are designed to streamline data flow and improve efficiency, but unfortunately, they can also be exploited to cause harm.


Did you know?

One of our main areas of action is countering cybercrime and enhancing cybersecurity. Since 2021, we have coordinated CYCLOPES, the European network of practitioners fighting cybercrime. We are also a partner in the EU-funded CYRUS project, which focuses on cybersecurity in the industrial sector. Additionally, we have developed our own PPHS Cybersecurity Standard.

Today Versus Tomorrow

Artificial Intelligence (AI) is proving to be a highly effective tool for cybercriminals due to its ability to learn from the present and predict future behaviours. As a result, terms like ‘AI-Crime,’ ‘malicious AI,’ and ‘malicious use and abuse of AI’ have emerged. While not all of these activities are currently classified as criminal under existing laws, they pose significant threats to the security of individuals, organisations, and public institutions. Malicious misuse refers to the use of AI with harmful intentions, as well as attacks aimed at undermining the very systems that rely on AI.

In February 2019, experts from academia, law enforcement, defence, government, and the private sector came together at University College London for a workshop titled ‘AI & Future Crime.’ The goal was to identify how AI might be abused in the future. Participants pointed out that AI-based crime could overlap with traditional forms of cybersecurity and cybercrime, but also present entirely new threats. Some of these risks extend current illegal activities, while others represent novel forms of criminal behaviour. Crucially, these different avenues are not mutually exclusive.

AI can, of course, be used to facilitate traditional crimes. It can be deployed to predict vulnerabilities in individuals or institutions, generate false content for blackmail or reputation damage, and even carry out tasks that humans are either unwilling or unable to do—whether due to danger, physical limitations, or slower reaction times.

Furthermore, AI itself can become a target. Criminals may attempt to circumvent the security measures embedded within AI systems to avoid detection or prosecution. In some cases, the goal may be to make critical systems fail, thereby undermining public confidence in their reliability. Such actions can weaken society’s sense of security and erode trust, ultimately fraying the bonds between people.

At this stage, AI cannot do everything. Ironically, it is often the overestimation of AI’s capabilities that leads to successful criminal exploitation. For instance, a novice investor might be duped into believing that AI can accurately predict stock market changes. Unfortunately, the realisation that this trust was misplaced often comes too late — when the damage has already been done.

AI Crime as a Service

The extent to which AI can enhance criminal activity largely depends on how deeply the target is integrated within the AI ecosystem. Naturally, AI is far more suited to participating in sophisticated crimes like bank fraud than in something as low-tech as a pub brawl. This is especially relevant in today’s world, where modern society relies heavily on complex computational networks — not just for finance and commerce, but also for communication, politics, media, work, and social interactions.

Unlike traditional forms of crime, digital criminal techniques can be easily shared, replicated, and even sold. This has led to the rise of ‘crime as a service,’ where individuals seeking to break the law can outsource the more complex aspects of their AI-powered activities. The ability to market and sell criminal methodologies means that even those with limited technical knowledge can exploit AI for illegal purposes.

Unfortunately, the commercialisation of these specialisations — and the emergence of AI crime as a service — has already become a reality. On various darknet forums, one can find not only increasing discussions about using AI-based tools for criminal purposes but also concrete offers to carry out illegal activities. However, it is crucial to remember that this battle is ongoing. Just as criminals are using AI to their advantage, technological advances also provide law enforcement agencies with the means to stay ahead, taking two steps forward for every move made by those operating outside the law.

Based on a paper by Dr. Krzysztof Jan Jakubski entitled „Niebezpieczna Sztuczna Inteligencja” (EN: ‘Dangerous Artificial Intelligence’), published in the special issue of „Przestępczość Teleinformatyczna 2023” (EN: ‘Teleinformatics Crime 2023’).

Marek Wierzbicki
Advisor, Expert in the Field of Analysis and Criminal Intelligence
PPHS
Przemyslaw Dobrzynski
Senior Communication Specialist
PPHS
