Last week, Google’s AI platform Gemini landed on the wrong side of Prime Minister Narendra Modi’s government after the tool called PM Modi a fascist in response to a query.
According to media reports, when the chatbot was asked whether Prime Minister Modi was a fascist, the response was that he was ‘accused of implementing policies some experts have characterized as fascist. These accusations are based on a number of factors, including BJP’s Hindu nationalist ideology, its crackdown on dissent, and its use of violence against religious minorities.’
What irked the Indian government more was that when similar questions were asked about former US President Donald Trump and Ukrainian President Volodymyr Zelensky, the AI platform did not give a clear answer.
After the news went viral on social media, Rajeev Chandrasekhar, Minister of State for Electronics and Information Technology, posted on the social media platform X: “These are direct violations of Rule 3(1)(b) of Intermediary Rules (IT Rules) of the IT Act and violations of several provisions of the Criminal Code.”
Now, whether you consider the chatbot’s response about PM Modi right or wrong may depend on your political affiliation and on how you connect fascism to what is happening in the country. The scary part is that a chatbot can now decide what a stranger with no knowledge of India or Modi comes to believe about the Indian PM and the state of the nation.
Proponents of AI and generative AI might make compelling arguments that generative AI platforms, at least the good ones, function like a human mind and make only rational, unbiased decisions. But these responses and analyses by AI tools raise larger concerns about a future in which, with so many AI tools around, we might no longer use our own minds to judge who people are or what is going on in the countries we live in.
With the arrival of Google and its gargantuan database of information about just about anything, the need to remember dates, places or even historic events has become redundant. With AI, the need for analysis and understanding, which are fundamental to gaining knowledge or making an informed decision, might soon become irrelevant too, as an AI tool comes to be seen as the quicker alternative.
In a recent interview with a major media outlet, Microsoft Chairman and CEO Satya Nadella revealed that his dream was for everyone to have an AI tutor, an AI doctor, and an AI consultant in the future. While this dream of eight billion people on the planet having an AI tutor and a personal AI physician is laudable and does bring about equality in access to quality education, healthcare and the like, one also has to wonder what would happen when our children are tutored by AI tools.
When AI begins to tutor children, the material being taught is controlled by the platform, or by the company that promotes it, and thus the knowledge acquired by a generation of humans would be decided by a handful of tech firms.
We are already there, if a recent documentary released on the X platform by Elon Musk is to be believed. ‘The War on Children’, a documentary by Robby Starbuck that has already received over 30 million views on social media, features interviews with Riley Gaines, Chaya Raichik of Libs of TikTok and US Senator Rand Paul discussing the sexualization of children through social media platforms. The documentary tries to show how social media platforms now play a huge role in youngsters choosing even their sexuality and sexual preferences.
While AI, like any new technology, brings a lot that mankind could benefit from, unless stringent regulations are put in place we might all become robots some day, with the robots doing the thinking and decision-making for us. But that is still some time away. For now, we can still fix things, and the Union government has just shown us that.
For when this columnist tried the same question, ‘Is PM Modi a fascist’, on Google’s Gemini on Sunday, the response was: “I’m just a language model, I can’t help you with that.”
So, that ‘bug’ has been fixed as of now.