The rapid adoption of AI is adding complexity to the threat landscape. Attacks that once required extensive preparation and advanced technical skills can now be executed more efficiently, lowering the barrier to entry for less-experienced threat actors.
*Kaspersky signage at the company’s headquarters. Photo courtesy of Kaspersky*
Vladislav Tushkanov, group manager at Kaspersky’s AI Technology Research Center, said AI enables experienced attackers to operate faster while also equipping inexperienced actors with new capabilities. “Those already skilled are able to work faster, and those not skilled at all gain more capability. Instead of spending a month studying programming, they can just ask ChatGPT,” he said.
Attackers are increasingly using AI to generate phishing emails, synthetic voices, images and videos that are more convincing and harder to detect. Deepfakes, in particular, have emerged as a growing risk in corporate environments.
British engineering firm Arup lost about US$25 million in 2024 after an employee was deceived into transferring funds during a deepfake video call, according to The Guardian.
Beyond deepfakes, AI can support multiple stages of cyberattacks, from reconnaissance and message customization to evading detection by security systems.
Kaspersky research shows that 72% of businesses are seriously concerned about attackers’ use of AI. Traditional defenses, the company said, are increasingly challenged by threats that evolve rapidly and are difficult to predict.
At the same time, AI is becoming a critical tool for strengthening cyber defenses. It can help organizations detect threats more quickly, automate parts of the response process and enhance predictive capabilities, enabling a shift from reactive to proactive security strategies.
“To detect [AI-driven threats], and to prevent your analysts from getting bogged down in routine, in the daily analysis of an ever-increasing number of alerts, you need machine learning (ML). It copes with this perfectly – and your professionals will save time for complex or business-critical tasks,” Tushkanov said.
However, effective deployment of AI in cybersecurity requires more than technology alone. Businesses also need skilled personnel, implementation experience and a strong data foundation.
Kaspersky said it currently uses AI to analyze and classify more than 460,000 malware samples daily. Combined with proprietary data, processing methods and model training infrastructure, these capabilities underpin its approach to increasingly complex cybersecurity challenges.
Despite these advances, AI cannot yet replace human expertise in incident investigation and decision-making. Assessing risks and determining responses still rely heavily on professional judgment.
“In the future, AI systems may be able to support those kinds of decisions. But at this point, the human role remains essential,” Tushkanov said.
Kaspersky develops secure AI to address emerging threats while meeting growing expectations for transparency, ethics and compliance. Through its “True to Business” approach, the company aims to help organizations automate security processes, reduce operational burdens and maintain stable, secure operations.