- NextWave AI
- Posts
- How Hackers Are Using Artificial Intelligence to Supercharge Cyberattacks
How Hackers Are Using Artificial Intelligence to Supercharge Cyberattacks
AI Agents Are Reading Your Docs. Are You Ready?
Last month, 48% of visitors to documentation sites across Mintlify were AI agents—not humans.
Claude Code, Cursor, and other coding agents are becoming the actual customers reading your docs. And they read everything.
This changes what good documentation means. Humans skim and forgive gaps. Agents methodically check every endpoint, read every guide, and compare you against alternatives with zero fatigue.
Your docs aren't just helping users anymore—they're your product's first interview with the machines deciding whether to recommend you.
That means:
→ Clear schema markup so agents can parse your content
→ Real benchmarks, not marketing fluff
→ Open endpoints agents can actually test
→ Honest comparisons that emphasize strengths without hype
In the agentic world, documentation becomes 10x more important. Companies that make their products machine-understandable will win distribution through AI.
Cybersecurity experts are raising concerns about the growing use of artificial intelligence (AI) by cybercriminals. A recent report by Microsoft Threat Intelligence highlights how hackers are increasingly integrating AI tools into their operations, enabling them to accelerate attacks, scale malicious activities, and reduce the technical skills required to conduct sophisticated cyberattacks. As AI technology becomes more accessible, it is transforming the cyber threat landscape and making digital attacks more efficient and difficult to detect.
According to the report, attackers are using generative AI tools throughout the entire lifecycle of a cyberattack. From the initial reconnaissance phase to post-compromise operations, AI is helping cybercriminals automate tasks that previously required significant time and expertise. Generative AI models can quickly produce convincing text, code, and multimedia content, making them attractive tools for malicious actors.
One of the most common uses of AI in cybercrime is the creation of phishing campaigns. Phishing emails have long been a major method used by hackers to gain access to sensitive information. With AI, attackers can now generate highly convincing emails that closely mimic legitimate communications from trusted organizations. These emails can be customized to match the tone, language, and context of a target company or individual, increasing the likelihood that victims will fall for the scam.
Beyond phishing, AI is also being used to assist in malware development. Attackers can rely on AI coding tools to generate malicious code, debug errors, and even convert malware components into different programming languages. This capability allows cybercriminals to create new variants of malware more quickly and adapt existing tools to different environments. Some experiments have even shown early signs of AI-assisted malware that can dynamically generate scripts or modify its behavior during execution, making it more difficult for security systems to detect.
The report also identifies specific threat groups that are incorporating AI into their operations. Among them are North Korean-linked actors known as Jasper Sleet (Storm-0287) and Coral Sleet (Storm-1877). These groups are reportedly using AI as part of sophisticated remote IT worker schemes. In these operations, hackers create realistic digital identities and apply for remote jobs at Western companies. Once hired, they gain legitimate access to corporate systems and maintain long-term entry points for cyber espionage or financial theft.
AI tools play a crucial role in making these fraudulent identities believable. For example, threat actors may use AI platforms to generate lists of culturally appropriate names and realistic email address formats. They also analyze job postings for software development and IT positions using AI to extract the required skills and qualifications. This information is then used to tailor fake resumes and profiles that closely match the job requirements, increasing the chances of being hired.
In addition to identity fraud, cybercriminals are using AI to streamline the creation of malicious infrastructure. AI can help generate fake company websites, configure server infrastructure, and test deployment environments. These capabilities allow attackers to quickly set up phishing domains, command-and-control servers, or other elements needed to support large-scale cyber campaigns.
However, many AI platforms include safety mechanisms designed to prevent misuse. To bypass these protections, hackers are experimenting with “jailbreaking” techniques that manipulate AI systems into generating restricted content. By carefully crafting prompts or disguising their intentions, attackers can sometimes trick language models into producing harmful code or instructions that would normally be blocked.
Researchers are also beginning to observe early experiments with so-called “agentic AI,” which refers to AI systems capable of performing tasks autonomously and adapting their behavior based on results. While fully autonomous cyberattacks are not yet common, some threat actors are exploring ways to use AI to automate parts of their operations. For now, most attacks still rely on human decision-making, with AI acting as a tool that enhances speed and efficiency rather than replacing human control.
Security experts warn that these developments could significantly reshape the future of cybersecurity. As AI continues to improve, the barrier to entry for cybercrime may become much lower. Individuals with limited technical knowledge could potentially launch sophisticated attacks with the assistance of AI tools.
Major technology companies are already seeing evidence of this trend. Along with Microsoft, researchers from Google have reported that threat actors are experimenting with their AI systems to support cyberattacks. Meanwhile, Amazon has also observed malicious campaigns where attackers used multiple generative AI services while targeting network infrastructure and security devices.
Because many of these attacks rely on legitimate credentials or insider access, organizations must adapt their defensive strategies. Microsoft recommends that companies treat suspicious remote workers and unusual login activity as potential insider threats. Monitoring abnormal credential use, strengthening identity verification systems, and improving defenses against phishing attacks are becoming increasingly important.
In addition, organizations are encouraged to secure their own AI systems, which may become targets for manipulation or exploitation. As AI becomes deeply integrated into business operations, protecting these systems will be essential to maintaining overall cybersecurity.
The rise of AI-powered cybercrime demonstrates how rapidly emerging technologies can reshape both innovation and security risks. While AI offers enormous benefits across industries, it also provides new tools for malicious actors. Addressing this challenge will require collaboration between governments, technology companies, and cybersecurity experts to ensure that AI is developed and deployed responsibly.
Ultimately, the battle between cybersecurity defenders and cybercriminals is entering a new phase—one where artificial intelligence plays a central role on both sides. Organizations that adapt quickly and strengthen their security frameworks will be better positioned to defend against the evolving threats of the AI era.

