• NextWave AI
  • Posts
  • IDEsaster: Over 30 Critical Flaws in AI Coding Tools Expose Developers to Data Theft and Remote Code Execution

IDEsaster: Over 30 Critical Flaws in AI Coding Tools Expose Developers to Data Theft and Remote Code Execution

In partnership with

Go from AI overwhelmed to AI savvy professional

AI keeps coming up at work, but you still don't get it?

That's exactly why 1M+ professionals working at Google, Meta, and OpenAI read Superhuman AI daily.

Here's what you get:

  • Daily AI news that matters for your career - Filtered from 1000s of sources so you know what affects your industry.

  • Step-by-step tutorials you can use immediately - Real prompts and workflows that solve actual business problems.

  • New AI tools tested and reviewed - We try everything to deliver tools that drive real results.

  • All in just 3 minutes a day

Artificial intelligence has rapidly transformed modern software development. AI-powered Integrated Development Environments (IDEs) and coding assistants are now embedded into workflows across the world, helping developers write code faster, debug efficiently, and automate repetitive tasks. But with this convenience comes a new and growing attack surface. A recent discovery has revealed that AI-enabled IDEs may be far more vulnerable than previously imagined.

More than 30 security vulnerabilities have been identified across popular AI coding tools and IDE extensions—including Cursor, Windsurf, GitHub Copilot, Roo Code, Zed.dev, Junie, Cline, and several others. These flaws collectively fall under a newly defined attack class named “IDEsaster,” discovered by security researcher Ari Marzouk (MaccariTA).

These vulnerabilities show that the integration of AI agents inside IDEs creates dangerous chains of attack capability, allowing adversaries to bypass safeguards, hijack context, steal data, or even execute arbitrary code—sometimes without any user interaction.

Why IDEsaster Matters

According to Marzouk, every tested AI IDE suffered from at least one universal attack chain. The core issue lies in the way AI agents interact with IDE features. Traditional IDE features—file reading, file editing, workspace configuration, and command execution—were never designed with autonomous AI agents in mind. These assistants now have the ability to perform actions automatically, which attackers can exploit through several vectors.

IDEsaster chains rely on the combination of:

  1. Prompt Injection Hijacking
    Attackers manipulate an AI agent’s context by adding hidden instructions inside files, URLs, or invisible characters. The agent interprets these as commands.

  2. Auto-Approved Agent Actions
    Many AI assistants automatically approve file reads/writes or tool calls inside the workspace.

  3. Legitimate IDE Features Used as Weapons
    Once the AI agent is tricked, it can use built-in IDE capabilities—like editing settings files or loading external schemas—to leak data or run malicious code.

This creates an environment where a single poisoned README file, hidden Unicode instruction, malicious MCP (Model Context Protocol) output, or manipulated workspace configuration can trigger severe compromises.

Examples of the Exploit Chains

Multiple CVEs were issued to document the severity of the vulnerabilities. Some key attack paths include:

1. Data Exfiltration via Remote JSON Schema

Tools: Cursor, Roo Code, JetBrains Junie, GitHub Copilot, Kiro.dev, Claude Code

Attackers use prompt injection to read sensitive files with tools like read_file or search_project. The AI agent then writes a JSON file that references a remote schema hosted by an attacker. When the IDE fetches that schema using a GET request, sensitive data gets leaked automatically.

2. Code Execution via Manipulated Settings Files

Tools: GitHub Copilot, Cursor, Roo Code, Zed.dev, Claude Code

Prompt injection can cause the agent to edit IDE settings such as .vscode/settings.json or IntelliJ’s workspace.xml.

For example:

  • Changing php.validate.executablePath

  • Setting PATH_TO_GIT to a malicious executable

This results in arbitrary code running whenever the IDE uses those settings.

3. Remote Code Execution Through Workspace Configuration

Tools: Cursor, Roo Code, GitHub Copilot

By altering .code-workspace files, attackers can override multi-root workspace settings, leading to automatic execution of malicious routines—again, without any user interaction.

A crucial takeaway: auto-approved file writes make these vulnerabilities extremely dangerous, as many AI IDEs permit AI agents to modify workspace files freely.

Additional Vulnerabilities in AI Coding Tools

Apart from IDEsaster exploits, several other concerning flaws were disclosed:

OpenAI Codex CLI (CVE-2025-61260)

A command injection flaw arising because the CLI blindly executes commands from MCP configuration files at startup. A tampered .env or config.toml file can lead to immediate compromise.

Google Antigravity Flaws

These include:

  • Indirect prompt injections via poisoned web sources

  • Credential harvesting

  • Remote command execution

  • Persistent backdoors embedded into trusted workspaces

The browser agent inside Antigravity could be tricked into browsing attacker-controlled sites for exfiltration.

PromptPwnd: A New Class of CI/CD Attacks

This technique targets AI agents connected to GitHub Actions or GitLab pipelines. By feeding malicious prompts through issues or pull requests, attackers can cause AI agents to execute privileged CI/CD actions—leading to repository compromise or supply chain tampering.

Why AI-Driven IDEs Are at Risk

Agentic AI systems operate autonomously, carry out actions, and trust the context given to them. Unfortunately, AI models have no inherent ability to distinguish between:

  • user-given instructions

  • attacker-controlled content embedded inside files

  • poisoned MCP output

  • hidden characters or Unicode exploits

This makes them uniquely susceptible to multi-step exploit chains. As Marzouk explains, security thinking must evolve from “secure by design” to a new paradigm: “Secure for AI.” This approach considers how AI features can be misled, manipulated, or exploited as they evolve over time.

Recommendations for Developers

To reduce risk, developers using AI IDEs should follow these guidelines:

1. Use AI assistants only with trusted projects

Malicious files, READMEs, or even filenames can contain hidden prompt injections.

2. Connect only to verified MCP servers

Even legitimate servers can be breached. Continuous monitoring is essential.

3. Carefully inspect all external context sources

URLs, HTML comments, invisible Unicode characters, and CSS-hidden text may carry malicious instructions.

4. Advocate for better security in AI tools

AI IDE developers should:

  • Apply least-privilege design

  • Reduce prompt injection attack surfaces

  • Use sandboxing for command execution

  • Harden agent system prompts

  • Add path traversal and command injection tests

A Growing Warning for the AI Era

As organizations increasingly incorporate AI agents into workflows, these findings highlight a critical reality: AI expands the attack surface in unpredictable ways. Every repository or development environment using AI for triage, labeling, or code suggestions is now potentially vulnerable to prompt and command injection attacks.