AI News

Bengaluru forum urges people-first guardrails for Generative AI

Published on: Oct 31, 2025, 6:49 PM
Ethan Jung

From Bengaluru this week, a panel of researchers and civil society leaders warned that the rush to automate must be matched by a plan to protect people, setting a cautious tone for the next phase of Generative AI. At the Our Digital Futures Fest hosted by IT for Change, speakers argued that value creation means little without labor safeguards, transparency, ecological accounting and scrutiny of the data supply chain.

Sabina Dewan, founder and executive director at JustJobs Network, challenged the assumption that rapid reskilling will carry workers into the new economy anchored in automated systems. “There is this fictional idea that people can just be reskilled, and they will be able to partake in the AI-driven economy. Reality doesn’t work that way,” she said, calling for institutions that meet workers where they are through enforceable labor rules, social security and skill systems shaped by real jobs rather than the promise of Generative AI.

Anita Gurumurthy, founding member and executive director of IT for Change, who moderated the session, put the critique on an environmental and societal footing. She raised concerns about extractivist business models, the huge ecological footprint of automated systems and the risk of collective cognitive decline when those systems mediate knowledge at scale. For a market racing to productize Generative AI, that cluster of issues functions as a license-to-operate question rather than a niche ethical debate.

Brian Chen, policy director at Data & Society, urged decision-makers to unpack what tasks are being automated because the value of a system depends on the function it performs. Even when social benefits are demonstrable, he warned that the gains will likely flow upward given natural monopolies, worker subordination and resource exhaustion that define the current political economy of AI. If that pattern persists, the economics of Generative AI could tilt toward a few scale players while compliance and infrastructure costs rise for smaller rivals.

The supply side of that infrastructure came into focus through the example raised by José Renato Laranjeira de Pereira, a researcher at the University of Bonn's Sustainable AI Lab and co-founder of Laboratório de Políticas Públicas e Internet. Pointing to Brazil, he noted that the government recently granted tax exemptions for data center construction, and that facilities have since been placed on Indigenous lands, constraining communities' access to basic natural resources and, according to some reports, accompanied by attacks aimed at diverting those resources to technology companies. He also described the arrival of Starlink in the Amazon and central regions, where connectivity has been sparse, as a new frontier in data extraction, one that draws entire communities into a data colonialism dynamic tied to the reach of Generative AI.

Trust emerged as a practical challenge for software buyers in the remarks of Sai Rahul Poruri, chief executive of the FOSS United Foundation, who observed that vendors rarely disclose when chatbots stray beyond expected behavior or when systems fail. He called for self-reporting and clearer incident-disclosure practices, a shift that would shape procurement criteria, sharpen audit standards and influence investment theses as capital searches for durable businesses in Generative AI. For enterprises weighing automation, the winners are likely to be vendors that can prove their models are reliable, accountable and equitable, a signal that the next wave of corporate adoption will be built under tighter governance rather than pure hype.

By Ethan Jung (ethan.jung@aitoolsbee.com). Analyzes the latest generative AI models and cutting-edge tools, covering how technological progress is shaping new products and services, with clear insights into the fast-evolving AI tools industry.