
Google’s Gemini AI Chatbot Faces Legal Scrutiny in India

Google’s Gemini AI faces legal scrutiny in India over potential IT law violations, spotlighting the balance between technological innovation and regulation.

NEW DELHI, India (AI Reporter/News): In a move that underscores the growing tension between technological advancement and regulatory compliance, India’s Union Minister of State for Electronics and Information Technology, Rajeev Chandrasekhar, has voiced serious concerns regarding Google’s Gemini AI chatbot.

The minister’s statement comes in light of recent controversies involving the AI’s responses and image-generation capabilities, which have sparked a debate over compliance with Indian IT laws.

Social Media Sparks Legal Concerns

The controversy took center stage earlier this week when social media users on X highlighted Gemini AI’s contentious responses about Prime Minister Narendra Modi, among other political figures globally. One user’s outcry over the AI’s output drew a direct response from Minister Chandrasekhar, who flagged the incidents as violations of India’s IT rules and of several provisions of the criminal code.

“This #GeminiAI from @google is not just woke, it’s downright malicious @GoogleIndia. The GOI should take note,” the user wrote in response to a post showing Gemini AI’s responses on political figures.

Chandrasekhar replied, “These are direct violations of Rule 3(1)(b) of Intermediary Rules (IT rules) of the IT act and violations of several provisions of the Criminal code. @GoogleAI @GoogleIndia @GoI_MeitY.”

Direct violations of Rule 3(1)(b) of Intermediary Rules (IT rules) of the IT act and violations of several provisions of the Criminal code.

Rajeev Chandrasekhar, Union Minister of State for Electronics and Information Technology, India

Google’s Response to the Outcry

This development follows closely on the heels of Google’s decision to suspend Gemini AI’s ability to generate images of people. The decision was a response to the uproar caused by the AI’s generation of historically inaccurate images.

Notably, the AI was criticized for producing historically inaccurate depictions, including controversial portrayals of the US ‘Founding Fathers’ and Nazi-era German soldiers as “people of colour.”

Broader Implications and Criticisms

The issue has not only caught the attention of regulatory bodies in India but has also drawn criticism from prominent figures such as Tesla and SpaceX CEO Elon Musk, who accused Google of propagating “racist, anti-civilizational programming” through its AI models.

The Road Ahead for AI Governance

Google introduced the image generation feature on its Gemini (formerly known as Bard) AI platform earlier this month, aiming to revolutionize the way users interact with AI technologies.

However, the ensuing controversies highlight the complex challenges tech companies face in balancing innovation with legal and ethical standards.

As the situation unfolds, the dialogue between Google and Indian regulatory authorities is expected to intensify, potentially setting precedents for how AI technologies are governed and utilized globally.

The incident underscores the critical need for a collaborative approach to ensure that technological advancements do not come at the cost of violating legal frameworks or societal norms.

Google Responds Swiftly to Gemini AI Controversy in India

Tech giant Google announced on Saturday, Feb 24, 2024, that it had acted ‘quickly’ to resolve concerns surrounding its Gemini AI tool. The move came after the Indian government expressed dissatisfaction with the tool’s “biased” response to a query about Prime Minister Narendra Modi, highlighting the company’s commitment to addressing regulatory concerns promptly.

Gemini is built as a creativity and productivity tool and may not always be reliable, especially when it comes to responding to some prompts about current events, political topics, or evolving news.

Google

“We’ve worked quickly to address this issue. Gemini is built as a creativity and productivity tool and may not always be reliable, especially when it comes to responding to some prompts about current events, political topics, or evolving news. This is something that we’re constantly working on improving,” a Google spokesperson said.

(copyright@aireporter.news)