According to Reuters on March 4, the Indian government announced a new requirement for technology companies to obtain government approval before publicly releasing artificial intelligence (AI) tools that are still in development or are considered “unreliable.”
The move is part of India’s effort to manage the deployment of AI technology and improve the accuracy and reliability of tools available to citizens ahead of elections.
Artificial Intelligence Rules
According to the directive issued by the Ministry of Information Technology, any AI-based application, especially those involving generative AI, must obtain explicit authorization from the government before being introduced in the Indian market.
Additionally, these AI tools must be labeled with warnings that they may produce incorrect answers to user queries, reinforcing the government’s position on the need to clarify AI capabilities.
The regulation is in line with a global trend of countries seeking to develop guidelines for the responsible use of artificial intelligence. India’s move to strengthen regulation of artificial intelligence and digital platforms is consistent with its broader regulatory strategy to protect users’ interests in the rapidly evolving digital era.
The government’s advice also flags concerns about the impact of artificial intelligence tools on the integrity of the electoral process. With the upcoming general election in which the ruling party is expected to retain its majority, there is greater focus on ensuring that AI technology does not compromise electoral fairness.
Gemini Criticism
This follows recent criticism of Google’s Gemini AI tool, which generated a response considered unfavorable to Indian Prime Minister Narendra Modi.
Google responded to the incident by acknowledging that its AI tools have flaws, especially on sensitive topics such as current affairs and politics, and said the tool remains “unreliable.”
Deputy IT Minister Rajeev Chandrasekhar said that reliability issues do not absolve platforms of legal responsibility, and stressed the importance of meeting legal obligations on safety and trust.
By introducing these regulations, India is taking steps to establish a controlled environment for the introduction and use of AI technology.
The requirement for government approval and the emphasis on transparency around possible inaccuracies are seen as measures to balance technological innovation with social and ethical considerations, aiming to protect democratic processes and the public interest in the digital age.