AI attacks, AI defends: how to use neural networks in information security

Hello! Today we're sharing material prepared by Roman Strelnikov, head of the information security department at Bitrix24. Roman is the person who oversees everything and even approves the articles for this blog, making sure that not a drop of confidential information leaks out of the company.

As you can see, our security is under human control, despite the vast number of AI solutions in the field of information protection. Still, we understand that AI is becoming a full-fledged player in this area, and it plays on both teams, attackers and defenders alike. It learns, makes mistakes, adapts, and improves.

If attackers can entrust an attack to AI, what happens on the defense side? We used to know that whatever the technical cause of a failure, a server crash, or a breach, responsibility ultimately fell on an employee. In a reality where decisions are made by a trained model, the question changes: who is to blame if the AI makes a mistake?

This is our new reality. Welcome to yet another technological race.

What AI can do in the hands of hackers

The main advantage of AI on the attackers' side is speed:

  • AI can guess passwords tens of times faster: it analyzes leaks, builds probabilistic models, and understands which combinations are most likely for a specific user;

  • AI scans code for vulnerabilities faster than a human and can write malicious code that changes with every execution. Such a virus cannot be caught by signature-based detection and sometimes doesn't even reveal itself in a "sandbox";

  • AI automates phishing, allowing for mass attacks to be carried out dozens of times faster than before.

    On October 29, 2024, an internet provider in East Asia was hit by a large-scale UDP DDoS attack peaking at 5.6 Tbps. The source of the attack was a botnet built from a modified version of the Mirai malware, and the incident became one of the largest volumetric cyberattacks on record. The attack was extremely short, but its intensity was record-breaking. The target was a client of Cloudflare, one of the largest cybersecurity providers, and the attack was stopped only because the mitigation was fully automated: no human operator could have reacted in time.

No human has that kind of speed, and no SOC can respond that fast. If a company's defense doesn't have comparable AI of its own, a successful attack is all but inevitable.

What’s wrong with AI on the defense side?

Both attackers and information security experts prefer to create their own AI models, each with their own goal. Publicly available ChatGPT or DeepSeek are not suitable for this task: their filters let through only a limited amount of information on "forbidden" topics.

We decided to create our own language model and train it on our own data, the attacks that occur within our company: this is cheaper and faster, and the model itself stays compact.

Such models are usually tailored to the specifics of a particular company or product, as they are "fed" with specific logs, methods, trigger points, and real cases.

Attackers build their own models in a similar way: they gather the data they need and train models on it, unconstrained by the filters of publicly available LLM solutions. Unlike defenders, though, they make their tools universal and share them with their "colleagues." Combined, for example, with DeepSeek, this yields a very powerful tool that can attack almost any network. In this way, AI greatly simplifies the attackers' work.

At the same time, models used in defense are not as universal: they are tied to specific tools, forced to close weak points in the perimeter and account for the nuances of the infrastructure.
For example, a defense model embedded in a SIEM system may only be effective within the limits of pre-defined correlation rules, log formats, and known attack patterns. If an attacker uses an unknown bypass method (a zero-day, a non-standard chain of actions), the model may simply fail to react.
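
As a rough illustration, here is what a typical pre-defined correlation rule looks like when written in plain Python (this is not any real SIEM rule language, and the threshold and addresses are invented): it catches exactly the pattern it encodes, five failed logins followed by a success from the same IP within ten minutes, and nothing else.

```python
# A rough sketch (plain Python, not a real SIEM rule language) of a
# pre-defined correlation rule: "5 failed logins from one IP followed by a
# success within 10 minutes". Attack chains outside this exact pattern,
# such as a slow password spray or a stolen session token, never trigger it.
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=10)
FAILED_THRESHOLD = 5

def brute_force_rule(events):
    """events: an iterable of (timestamp, src_ip, result) sorted by time."""
    failures = {}  # src_ip -> timestamps of recent failed logins
    for ts, ip, result in events:
        recent = [t for t in failures.get(ip, []) if ts - t <= WINDOW]
        if result == "fail":
            failures[ip] = recent + [ts]
        elif result == "success" and len(recent) >= FAILED_THRESHOLD:
            yield f"ALERT: possible brute force from {ip} at {ts}"

# Six rapid failures and then a success: the rule fires.
log = [(datetime(2025, 3, 1, 12, 0, sec), "198.51.100.7", "fail") for sec in range(6)]
log.append((datetime(2025, 3, 1, 12, 0, 30), "198.51.100.7", "success"))
print(list(brute_force_rule(log)))
```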

If an infrastructure uses a non-standard authorization system, the universal defense model might ignore anomalies in logs that, for a specific company, indicate an attack. To adapt the solution to this peculiarity, manual adjustments and specific data for training are required.

In essence, a universal defense model is a compromise between accuracy and generalization. But truly effective solutions require deep customization for the company's infrastructure, constant data updating, and model retraining, which is technically and financially difficult to implement for many organizations.

What can AI do on the defense side?

Unfortunately, not much yet.

Behavior and anomaly analysis (XDR/SIEM)

One of the most promising approaches is analyzing user and network behavior. Artificial intelligence monitors usual activity and instantly responds to deviations: unusual IPs, mass file downloads, suspicious access attempts to network segments. This is not just a reaction to a signature—it is an attempt to understand what exactly is happening and how to solve the problem.
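
A minimal sketch of this idea, assuming made-up users, IPs, and thresholds that have nothing to do with Bitrix24's real pipeline: build a per-user baseline of normal activity and flag events that deviate from it.

```python
# A minimal sketch of baseline-based behavioral anomaly detection:
# learn each user's "normal" activity, then flag events that deviate from it.
# All users, IPs, and numbers here are invented for illustration.
from dataclasses import dataclass
from statistics import mean, stdev

@dataclass
class Event:
    user: str
    src_ip: str
    files_downloaded: int

# Hypothetical historical activity used to build per-user baselines.
history = {
    "alice": {"ips": {"10.0.0.5", "10.0.0.7"}, "downloads": [3, 5, 4, 6, 2]},
    "bob":   {"ips": {"10.0.1.12"},            "downloads": [1, 0, 2, 1, 1]},
}

def anomaly_reasons(event: Event) -> list[str]:
    """Return the reasons why the event looks suspicious (empty list = normal)."""
    reasons = []
    profile = history.get(event.user)
    if profile is None:
        return ["unknown user"]
    if event.src_ip not in profile["ips"]:
        reasons.append(f"login from unusual IP {event.src_ip}")
    mu, sigma = mean(profile["downloads"]), stdev(profile["downloads"]) or 1.0
    if event.files_downloaded > mu + 3 * sigma:
        reasons.append(f"mass download: {event.files_downloaded} files")
    return reasons

print(anomaly_reasons(Event("alice", "203.0.113.9", 250)))
# -> ['login from unusual IP 203.0.113.9', 'mass download: 250 files']
```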

Attack prediction

AI learns from past experiences. It analyzes patterns from previous attacks, compares them with current events, and concludes: if a bot has started scanning one system, it is highly likely to move on to the next one. A model correctly trained on high-quality, labeled data reflecting real incidents in a specific environment can predict the next breach point.

Without such training, even the most powerful LLM will be blind to the infrastructure's peculiarities.

For example, an AI model trained on data from activity within cloud infrastructure (access logs, time anomalies, behavioral profiles of service accounts) was able to detect a planned breach during the preparation phase. Before breaching the perimeter, attackers first requested metadata from the storage and tested API responses at atypical times (for example, at night on weekends). Previously, such activity was perceived as noise. After additional training on similar cases, the system began to detect the beginning of attack chains with high accuracy—before the main incident occurred.
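
To make this concrete, here is a toy sketch in the same spirit; the account names, endpoints, and log format are invented and not taken from a real system. It builds a time-of-day profile for each service account and flags calls to sensitive endpoints at hours the account has never been active before.

```python
# A toy sketch: profile the hours each service account is normally active in,
# then flag calls to sensitive endpoints at hours it has never used before.
# Accounts, endpoints, and timestamps are invented for illustration.
from collections import defaultdict
from datetime import datetime

# Hypothetical access log: (account, ISO timestamp, endpoint).
access_log = [
    ("svc-backup", "2025-03-03T02:10:00", "/storage/metadata"),
    ("svc-backup", "2025-03-04T02:12:00", "/storage/metadata"),
    ("svc-report", "2025-03-05T09:30:00", "/api/reports"),
]

# The profile: the set of hours each account is normally active in.
profile: dict[str, set[int]] = defaultdict(set)
for account, ts, _ in access_log:
    profile[account].add(datetime.fromisoformat(ts).hour)

def looks_like_recon(account: str, ts: str, endpoint: str) -> bool:
    """Flag calls to sensitive endpoints at hours the account never used before."""
    unusual_hour = datetime.fromisoformat(ts).hour not in profile[account]
    sensitive = endpoint.startswith("/storage/") or "metadata" in endpoint
    return unusual_hour and sensitive

# A night-time metadata probe from an account that normally works at 09:30.
print(looks_like_recon("svc-report", "2025-03-08T03:45:00", "/storage/metadata"))  # True
```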

AI can attack and defend, but we still can't place all the responsibility for cybersecurity on it. Behind the loud promises of "automatic cybersecurity" often hides not a universal solution but a complex instrument that requires precise configuration and qualified support.

For example, SOAR and automated response theoretically sound like a dream: an event occurred, the system understood everything, blocked it, notified everyone, and updated the rules. In practice, it all comes down to how and by whom this AI was trained.

Let's assume the automated response model was trained on logs from a "calm" period — when there were almost no attacks, but the system regularly experienced false alarms from network instability and periodic updates. As a result, the AI learned that "anomalies are normal" and began ignoring potentially dangerous deviations. When a real phishing attack occurred with account takeover and internal scanning, the system did not respond — everything fit within its understanding of "normal." In another scenario, such an AI, trained on strict rules and "clean" logs, began aggressively blocking administrator actions, mistaking their planned operations for suspicious activity.
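
A toy illustration of this failure mode with invented numbers: when the response threshold is learned from a noisy "calm" period, it ends up so wide that a real attack still fits inside it.

```python
# A toy illustration (invented numbers): a threshold learned from a noisy
# "calm" period is so wide that a real attack still looks "normal".
from statistics import mean, stdev

# Alerts per hour during the "calm" training period: frequent false alarms
# from network instability and updates inflate the baseline.
calm_period = [2, 15, 3, 22, 4, 18, 5, 25, 3, 20]

threshold = mean(calm_period) + 3 * stdev(calm_period)
print(f"learned threshold: {threshold:.1f} alerts/hour")  # ~39 alerts/hour

attack_hour = 30  # phishing + account takeover + internal scanning
if attack_hour > threshold:
    print("respond: block the account, notify the SOC")
else:
    print("ignored: the spike fits the model's idea of 'normal'")
```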

The risks of this approach are obvious:

  • missed attacks due to a skewed understanding of normal behavior;

  • paralysis of business processes due to false positives;

  • loss of trust in the system when its actions become unpredictable.

Therefore, AI is not a "magic button," but a tool whose effectiveness depends on data quality and continuous retraining.

Example: Implementing SOAR/XDR in Bitrix24

At Bitrix24, we implement the XDR approach deliberately: step by step and without going overboard.

First — debugging logging and monitoring. Then — training models. After that — careful implementation of automated response scenarios.

This is the only way to ensure that AI truly helps, rather than creating an illusion of security.

We also regularly check how secure our infrastructure really is. We conduct penetration tests — sometimes "blind" (black box), when the testers know nothing about the internal structure, and sometimes with partial access (gray box), to see how the system behaves under pressure. We do this not only with our specialists: we involve external experts to get the most objective picture.

This practice helps us not only find vulnerabilities but also keep our finger on the pulse and constantly adapt to new threats. After all, a high level of security is not a status, but a process that needs to be maintained every day.

Where Automation Breaks: The Realities of the Cybersecurity Market

Cybersecurity systems are often presented as magical solutions. But behind the beautiful interface lie:

  • complex setup;

  • the need to manually retrain models;

  • the mandatory involvement of qualified specialists.

Without them, an AI product either won't work at all or will work incorrectly.

The main problem is the shortage of qualified personnel. According to the latest data, almost 42,000 information security vacancies were posted in the first three months of 2025, which is already almost half of the total for 2024. Salaries in this segment are growing faster than in IT overall, while the key issues are the lack of relevant skills among applicants and educational programs lagging behind the market's needs.

It is specialists at the L2–L3 level who write response rules, analyze incidents, set up playbooks, and ensure that automation really works.

The question is not whether artificial intelligence will replace humans in information security, but who has a stronger, better-trained, and more efficient AI on their team. AI will not replace a team of defense specialists, but it will significantly strengthen it, just as it is already strengthening attackers. One should not try to shift everything to neural networks; it is more important to learn how to use them faster, deeper, and smarter than the opponent.
