
X Tightens Revenue Rules to Combat AI Deepfakes Amid Israel–Iran War

In partnership with

AI Agents Are Reading Your Docs. Are You Ready?

Last month, 48% of visitors to documentation sites hosted on Mintlify were AI agents, not humans.

Claude Code, Cursor, and other coding agents are becoming the actual customers reading your docs. And they read everything.

This changes what good documentation means. Humans skim and forgive gaps. Agents methodically check every endpoint, read every guide, and compare you against alternatives with zero fatigue.

Your docs aren't just helping users anymore—they're your product's first interview with the machines deciding whether to recommend you.

That means:
→ Clear schema markup so agents can parse your content (see the sketch after this list)
→ Real benchmarks, not marketing fluff
→ Open endpoints agents can actually test
→ Honest comparisons that emphasize strengths without hype
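
To make the schema-markup point concrete, here is a minimal, hypothetical sketch in Python that builds the kind of JSON-LD payload a docs page could embed. The product name, URL, and description are illustrative placeholders, not a prescribed format.

```python
import json

# Hypothetical JSON-LD structured data for an API reference page, using the
# schema.org APIReference type. All names and URLs below are placeholders.
api_reference = {
    "@context": "https://schema.org",
    "@type": "APIReference",
    "name": "Example Payments API",               # hypothetical product
    "url": "https://docs.example.com/payments",   # hypothetical docs URL
    "programmingModel": "REST",
    "description": "Create, capture, and refund payments.",
}

# The serialised payload would sit inside a <script type="application/ld+json">
# tag in the page head, where crawling agents can parse it unambiguously.
print(json.dumps(api_reference, indent=2))
```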

In the agentic world, documentation becomes 10x more important. Companies that make their products machine-understandable will win distribution through AI.

In the midst of escalating tensions in the Middle East, Elon Musk’s social media platform X has announced sweeping changes to its creator revenue-sharing programme, targeting the growing problem of AI-generated deepfakes. The move comes as misinformation linked to the ongoing Israel–Iran conflict circulates widely across digital platforms.

The company has revised its monetisation policies, warning creators that posting AI-generated videos of armed conflicts without proper disclosure will result in strict penalties. Under the new guidelines, users who fail to label AI-created conflict-related content will face a 90-day suspension from X’s Creator Revenue Sharing programme. Repeated violations could lead to permanent removal from the programme.

A Push for Authenticity During Wartime

The announcement was made by X’s Head of Product, Nikita Bier, who emphasised the importance of maintaining trustworthy information flows during times of war. In a public statement, Bier explained that the rapid evolution of generative AI tools has made it easier than ever to fabricate realistic yet misleading videos.

“During times of war, it is critical that people have access to authentic information on the ground,” Bier wrote. “With today’s AI technologies, it is trivial to create content that can mislead people.”

According to the revised policy, AI-generated videos depicting armed conflicts must carry a clear disclosure indicating that the material was created or modified using artificial intelligence. Content may be flagged for review through X’s Community Notes system or identified automatically via metadata and other signals associated with generative AI tools.
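
X has not published how its detection works, but one widely used provenance signal is the C2PA “Content Credentials” metadata that some generative tools embed in media files. As a rough, illustrative sketch only (not X’s actual pipeline), the following Python scans a file’s raw bytes for the JUMBF/C2PA markers such metadata leaves behind:

```python
import sys
from pathlib import Path

# Byte signatures associated with C2PA "Content Credentials" provenance
# metadata (stored in JUMBF boxes). Presence suggests the file carries an
# AI-provenance manifest; absence proves nothing, since markers are easily
# stripped by re-encoding. A toy heuristic, not a production detector.
PROVENANCE_MARKERS = (b"jumb", b"c2pa")

def has_provenance_metadata(path: Path) -> bool:
    data = path.read_bytes()
    return any(marker in data for marker in PROVENANCE_MARKERS)

if __name__ == "__main__":
    for name in sys.argv[1:]:
        verdict = ("possible AI-provenance metadata"
                   if has_provenance_metadata(Path(name))
                   else "no known markers")
        print(f"{name}: {verdict}")
```

Real systems would combine a signal like this with model-based detectors and community reports, precisely because such metadata is trivially removed.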

The company has also introduced a visible “Made with AI” label that appears on posts identified as AI-assisted. This label is designed to inform viewers about the nature of the content and promote transparency across the platform.

Rising Concerns Over Deepfakes

The policy update comes at a sensitive time. The Israel–Iran conflict has generated intense global interest, with millions turning to social media for real-time updates. However, alongside legitimate reporting, AI-generated images and videos have proliferated, blurring the line between fact and fabrication.

Deepfake technology—once considered niche—has now become accessible to ordinary users. With minimal technical expertise, creators can generate hyper-realistic footage depicting military strikes, political speeches, or battlefield scenarios. Such content, when presented without disclosure, can inflame tensions, spread panic, or distort public understanding of unfolding events.

X’s leadership appears keen to avoid becoming a hub for wartime misinformation. Musk and Bier recently highlighted record-breaking traffic on the platform, underscoring its growing influence as a real-time news source. With that influence comes heightened responsibility.

Financial Incentives and Platform Integrity

X’s Creator Revenue Sharing programme allows eligible users to earn money based on engagement with their content. By linking policy violations directly to monetisation privileges, the company is leveraging financial incentives to enforce compliance.

A 90-day suspension from revenue sharing can significantly impact creators who depend on the platform for income. The threat of permanent exclusion adds further weight to the policy. In effect, X is sending a clear message: transparency around AI-generated content is no longer optional.

This strategy reflects a broader industry trend in which platforms attempt to balance free expression with safeguards against manipulation. By targeting undisclosed AI content specifically in the context of armed conflict, X is focusing on scenarios where misinformation could have the most serious consequences.

Strengthening Spam and Automation Detection

Beyond deepfakes, X is also intensifying efforts to combat spam and automated abuse. Bier recently warned about the potential risks posed by emerging AI agent platforms, suggesting that they could overwhelm traditional communication channels such as email and messaging apps with automated spam.

In response, X has rolled out enhanced spam and automation detection systems. The company stated that accounts exhibiting signs of non-human interaction face a high risk of permanent suspension.
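
What “signs of non-human interaction” means in practice is not documented, but a classic first-pass signal is timing regularity: scheduled bots tend to post at near-constant intervals. A toy sketch of that idea, illustrative only and not X’s system:

```python
from statistics import mean, pstdev

def automation_score(post_times: list[float]) -> float:
    """Score 0.0 (human-like) to 1.0 (bot-like) from ascending Unix timestamps.

    Toy heuristic: combines how metronomic the gaps between posts are with
    raw posting volume. Real detectors would use many more signals.
    """
    if len(post_times) < 10:
        return 0.0  # not enough history to judge
    gaps = [b - a for a, b in zip(post_times, post_times[1:])]
    avg = mean(gaps)
    if avg == 0:
        return 1.0  # many posts at the same instant: clearly automated
    regularity = max(0.0, 1.0 - pstdev(gaps) / avg)  # 1.0 = perfectly periodic
    volume = min(1.0, len(post_times) / 200)         # saturates at 200 posts
    return round(regularity * volume, 3)

# An account posting exactly every 60 seconds scores high; a bursty,
# irregular human pattern scores near zero.
print(automation_score([i * 60.0 for i in range(200)]))  # 1.0
print(automation_score([0, 40, 300, 310, 1800, 1900, 4000, 4100, 7000, 9000]))  # 0.0
```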

These measures follow an earlier purge in which X reportedly removed approximately 1.7 million bot accounts that had been flooding reply sections with spam. The crackdown signals a broader campaign to improve platform quality and user trust.

Balancing Innovation and Responsibility

The new rules highlight the growing tension between technological innovation and ethical responsibility. Generative AI tools have unlocked remarkable creative possibilities, enabling users to produce sophisticated multimedia content at unprecedented speed. Yet the same tools can be weaponised to deceive.

Musk, who has been both an advocate and a critic of AI technologies, appears to be steering X toward stricter oversight in high-risk contexts. The revenue-sharing revisions suggest a pragmatic approach: rather than banning AI-generated content outright, the platform demands transparency.

Critics may argue that enforcement challenges remain. Detecting AI-generated material is not always straightforward, especially as generative models become more advanced. Nevertheless, the combination of community reporting, metadata analysis, and financial penalties could deter at least some misuse.

A Defining Moment for Digital Platforms

As geopolitical conflicts increasingly play out online, social media companies face mounting pressure to prevent their platforms from amplifying falsehoods. X’s latest policy shift reflects a recognition that AI-driven misinformation is no longer a theoretical risk—it is a present reality.

By tying monetisation privileges to responsible disclosure, X is attempting to preserve the credibility of its timeline during one of the most volatile periods in recent memory. Whether these measures prove sufficient remains to be seen, but the move marks a significant step in the evolving relationship between artificial intelligence, social media, and global conflict.

In an era where a fabricated video can travel the world in minutes, the battle for authenticity has become as critical as any conflict on the ground.