India’s New AI Governance Framework: Nasscom Welcomes Coordination Over Control
India’s technology landscape is entering a decisive new phase with the government’s release of its Artificial Intelligence (AI) governance guidelines earlier this week. The framework, which emphasizes coordination over control, has been warmly welcomed by the country’s premier IT industry body, Nasscom, as a pragmatic, principle-based approach that balances innovation with responsibility.
Rather than building a heavy, centralized regulatory structure, the guidelines aim to promote collaboration among government bodies, industry players, and policy experts. Nasscom described the move as one that encourages innovation while managing risks through evidence-based tools, establishing India’s intent to lead in the global conversation on ethical and responsible AI.
A Framework Rooted in Coordination
The newly proposed AI governance architecture is designed around three core institutions:
AI Governance Group (AIGG)
Technology and Policy Expert Committee (TPEC)
AI Safety Institute (AISI)
Together, these bodies form what the government calls a whole-of-government approach. The intention is to enable smooth coordination among ministries and departments without creating an over-centralized or rigid regulator. This approach allows flexibility for rapid technological advancement, while ensuring that safety, privacy, and accountability remain central to AI development.
Nasscom emphasized that this structure represents a shift from the older, control-oriented model of regulation toward one based on collaboration, transparency, and adaptability. “The framework reflects a deep understanding of how innovation ecosystems function — it prioritizes dialogue and shared accountability instead of rigid enforcement,” the association noted.
Eight Principles for Responsible AI
India’s AI guidelines are anchored in eight key governance principles, reflecting a global consensus on ethical AI design and deployment:
Transparency – ensuring clear information about AI systems’ development and decision-making.
Accountability – defining responsibility across the AI lifecycle.
Safety – preventing harm through robust testing and evaluation.
Privacy – protecting user data and personal information.
Fairness – mitigating bias and discrimination in AI systems.
Human-Centered Values – ensuring AI enhances, not replaces, human judgment.
Inclusive Innovation – promoting AI for all segments of society, not just the privileged few.
Digital-by-Design – integrating AI governance principles into digital infrastructure from the ground up.
Together, these principles align India’s AI governance framework with international standards such as the OECD AI Principles and the EU’s AI Act, while maintaining a strong focus on India’s unique developmental priorities.
Nasscom’s Perspective: Bridging Policy and Practice
Nasscom’s statement following the release of the guidelines highlighted that the overall alignment between the government’s goals and industry expectations is “strong.” The differences that remain, it said, are largely operational rather than philosophical — pertaining to the scope of voluntary commitments, the functioning of regulatory sandboxes, and how incident reporting mechanisms will be structured.
“These are matters of implementation rather than intent,” Nasscom observed. The organization believes that such differences can be resolved through continued consultation between industry experts and policymakers, ensuring a governance model that evolves with technological progress.
To strengthen the framework further, Nasscom proposed a few specific steps:
Unified Reporting Mechanism – Create a single reporting interface that connects privacy, cybersecurity, and safety systems. This would simplify compliance and make it easier for companies to adhere to multiple regulatory requirements through a single channel.
Clearer Pathways for Voluntary Commitments – Provide transparent conformance routes for companies that wish to adopt ethical AI practices proactively. This would help encourage self-regulation and accountability within the industry.
Concrete Plans for Regulatory Sandboxes – Develop pilot programs under AISI and TPEC to test new AI applications in controlled environments before full-scale deployment. Such sandboxes would enable innovation while managing risks effectively.
Balancing Innovation and Responsibility
What sets India’s approach apart is its agility — the framework does not attempt to freeze innovation within a rigid rulebook. Instead, it allows room for adaptation as AI technologies evolve. This principle-based structure mirrors global best practices but remains rooted in local realities.
By emphasizing coordination over control, India is signaling that AI regulation should not become a bottleneck for progress. Rather, it should serve as an enabler of trust — building confidence among consumers, developers, and international partners that AI in India is safe, ethical, and transparent.
This stance comes at a time when nations across the world are struggling to balance AI’s transformative potential with its ethical and social risks. The EU, for instance, has opted for a more prescriptive model through its AI Act, while the United States prefers a decentralized, innovation-first approach. India’s model, experts say, sits between the two extremes — combining principle-based flexibility with a focus on collective responsibility.
The Growing AI Market in India
India’s AI market currently stands at an estimated $7–10 billion and, according to industry projections, is expected to grow at a compound annual growth rate (CAGR) of 25–35 percent through 2027. This trajectory mirrors the global trend of rapid AI adoption across sectors such as healthcare, manufacturing, finance, and education.
As Indian enterprises increasingly integrate AI into their operations, the demand for strong governance frameworks will only intensify. Clear guidelines not only protect users but also create investor confidence and facilitate cross-border collaborations. Nasscom believes that with these new principles in place, India can accelerate its progress toward becoming a global hub for responsible AI innovation.
The release of these AI guidelines is not the end of a process but the beginning of one. Implementation, continuous feedback, and technological evolution will shape how effective they ultimately become. The government’s openness to collaboration, combined with industry commitment, will determine the success of this initiative.
For India, which is home to one of the world’s largest digital populations and fastest-growing technology ecosystems, responsible AI is not just a policy issue — it’s a societal necessity. As AI systems increasingly influence education, healthcare, governance, and even justice, the stakes are high.
Nasscom’s endorsement of a coordination-first approach reflects an understanding that innovation thrives in an environment of trust, accountability, and shared vision.
Conclusion
India’s AI governance guidelines mark a forward-looking milestone in the country’s digital journey. By focusing on principles rather than prescriptions, and collaboration rather than control, the government has crafted a framework that respects both technological dynamism and ethical responsibility.