Artificial intelligence companies have been at the forefront of developing transformative technologies. Now they, too, are racing to limit how artificial intelligence is used in a year marked by major elections around the world.
Last month, OpenAI, the maker of the ChatGPT chatbot, said it was working to prevent its tools from being abused in elections, in part by banning the use of the tools to create chatbots that impersonate real people or institutions. In recent weeks, Google has also said it would limit its artificial intelligence chatbot Bard from responding to certain election-related prompts to avoid inaccuracies. Meta, which owns Facebook and Instagram, has pledged to better label artificial intelligence-generated content on its platforms so voters can more easily tell which information is true and which is false.
On Friday, Anthropic, another leading artificial intelligence startup, joined its peers in banning its technology from being used in political campaigns or lobbying. The company, which makes a chatbot called Claude, said in a blog post that it will warn or suspend any user who violates its rules. It added that it was using tools trained to automatically detect and block misinformation and influence operations.
“The history of AI deployment is also full of surprises and unexpected impacts,” the company said. “We expect that AI systems in 2024 will see surprising uses, uses that their own developers did not anticipate.”
The efforts are part of a push by artificial intelligence companies to rein in a technology they popularized as billions of people go to the polls. At least 83 elections are expected to be held around the world this year, the largest concentration for at least the next 24 years, according to consultancy Anchor Change. People in Taiwan, Pakistan and Indonesia have gone to the polls in recent weeks, and India, the world’s largest democracy, is due to hold general elections in the spring.
It’s unclear how effective restrictions on artificial intelligence tools will be, especially as tech companies continue to advance increasingly complex technologies. On Thursday, OpenAI unveiled Sora, a technology that can instantly produce photorealistic videos. These tools can be used to generate words, sounds and images in political campaigns, blurring fact from fiction and raising questions about whether voters can tell what is true.
Artificial intelligence-generated content has appeared in U.S. political campaigns, sparking regulatory and legal pushback. Some state lawmakers are drafting bills to regulate political content generated by artificial intelligence.
Last month, New Hampshire residents received robocall messages discouraging them from voting in the state primary in a voice that was likely artificially generated and sounded like President Joe Biden. The Federal Communications Commission outlawed such calls last week.
“Bad actors are using AI-generated voices in unsolicited robocalls to extort vulnerable family members, impersonate celebrities and mislead voters,” FCC Chairman Jessica Rosenworcel said at the time.
AI tools have also produced misleading or deceptive depictions of politicians and political topics in Argentina, Australia, the United Kingdom and Canada. Former Prime Minister Imran Khan, whose party won the most seats in Pakistan’s elections last week, used an artificial intelligence voice to declare victory from prison.
Experts say the potential for misinformation and deception caused by artificial intelligence could be devastating to democracy during one of the most consequential election cycles in memory.
“We’re behind the eight ball here,” said Oren Etzioni, a professor at the University of Washington who specializes in artificial intelligence and is the founder of True Media, a nonprofit organization that works to identify disinformation in online political campaigns. “We need tools to respond to this issue immediately.”
Anthropic said in a statement on Friday that it is planning tests to determine how its Claude chatbot can generate biased or misleading content related to political candidates, political issues and election administration. These “red team” tests, typically used to break a technology’s safeguards to better identify its vulnerabilities, will also explore how artificial intelligence responds to harmful queries, such as asking for tips on voter suppression tactics.
In the coming weeks, Anthropic will also launch a trial designed to redirect U.S. users with voting-related queries to authoritative sources such as TurboVote from the nonpartisan nonprofit Democracy Works. The company said its artificial intelligence models were not trained frequently enough to reliably provide real-time facts about specific elections.
Similarly, OpenAI said last month that it plans to direct people who ask ChatGPT for voting information to authoritative sources and to label images generated by its artificial intelligence.
“As with any new technology, these tools bring benefits as well as challenges,” OpenAI said in a blog post. “They are also unprecedented, and as we learn more about how our tools are used, we will continuously improve our approach.”
(The New York Times sued OpenAI and its partner Microsoft in December, claiming copyright infringement on news content related to artificial intelligence systems.)
Synthesia, a startup with an artificial intelligence video generator that has been linked to disinformation campaigns, also banned the use of the technology for “news-like content,” including false, polarizing, divisive or misleading material. Alexandru Voica, director of corporate affairs and policy at Synthesia, said the company has improved its systems for detecting abuse of the technology.
Stability AI, a startup that provides image generation tools, says it prohibits the use of its technology for illegal or unethical purposes, works to prevent the generation of unsafe images, and applies imperceptible watermarks to all images.
The biggest tech companies are also getting involved. Last week, Meta said it was working with other companies on technical standards to help identify when content was generated by artificial intelligence. Ahead of EU parliamentary elections in June, TikTok said in a blog post on Wednesday that it would ban potentially misleading manipulated content and require users to label realistic artificial intelligence creations.
Google said in December it would also require video creators on YouTube and all election advertisers to disclose digitally altered or generated content. The company said it is preparing for the 2024 elections by limiting how artificial intelligence tools like Bard return responses to certain election-related queries.
“As with any emerging technology, artificial intelligence brings new opportunities and challenges,” Google said. The company added that artificial intelligence can help fight abuse, “but we are also preparing for how it can change the misinformation landscape.”