In a first-of-its-kind move against generative AI companies such as Google and OpenAI in India, the Indian IT Ministry has issued an advisory to companies operating such platforms, including underlying models and wrappers, stating that their services should not generate responses that are illegal under Indian laws or regulations or that "threaten the integrity of the electoral process".
Platforms that currently offer "under-tested/unreliable" AI systems or large language models to Indian users must explicitly seek the Centre's permission before doing so, and must appropriately label the possible and inherent "fallibility or unreliability" of the output they generate.
Google's artificial intelligence platform Gemini recently drew a notice from the Ministry of Electronics and Information Technology (MeitY) over the answers it generated to questions about Prime Minister Narendra Modi. The Indian Express had earlier reported that the government planned to issue a show-cause notice to Google. The paper also reported that Krutrim, Ola's generative AI platform currently in beta, was producing hallucinations.
Rajeev Chandrasekhar, Minister of State for Electronics and Information Technology, said the advisory "signals that India intends future legislative action to regulate generative AI platforms". Answering a question from The Indian Express, Chandrasekhar explained that requiring such companies to seek government permission would effectively create a sandbox in which the government could seek demonstrations of their platforms, including the consent structures they follow.
The advisory was sent to all intermediaries, including Google and OpenAI, on Friday night. It also applies to all platforms that allow users to create deepfakes; Chandrasekhar confirmed that this includes Adobe. The companies are required to submit a report on the actions taken within 15 days.
"The use of under-testing/unreliable AI models/LLMs/generative AI, software or algorithms, and their availability to users on the Indian internet, must be done with the explicit permission of the Government of India, and only after appropriately labelling the possible and inherent fallibility or unreliability of the output generated before deployment. Additionally, a 'consent popup' mechanism may be used to explicitly inform users of the inherent fallibility or unreliability of the generated output," the advisory said.
"All intermediaries or platforms must ensure that their computer resources do not permit any bias or discrimination, or threaten the integrity of the electoral process, including through the use of artificial intelligence models/LLMs/generative AI, software or algorithms," it added.
Chandrasekhar added that the advisory specifically mentioned the integrity of the electoral process with the upcoming Lok Sabha elections, due later this year, in mind. "We know that misinformation and deepfakes will be exploited in the run-up to the election to try to influence or shape the outcome," he said in response to a question about whether the advisory went beyond existing IT rules.
© Indian Express Private Limited
First uploaded on: February 3, 2024 14:42 UTC