Efforts to counter AI’s threats to democracy and humanity through regulations and guardrails face many legal complexities, including the risk that new rules would conflict with free speech protections and the logistical difficulty of controlling how algorithms are used.
A surge of interest in AI-powered chatbots like ChatGPT in 2023 led experts and commentators to sound the alarm about the impact AI could have on the country and the world, warning it could spread misinformation, destabilize the job market, and even destroy humanity. The legislative and executive branches have moved quickly to craft new regulations meant to ensure that companies and individuals use the technology safely without stifling innovation. But writing rules for complex technology is not easy.
“The role of government is to encourage [AI’s] …,” Susan Aaronson, professor of artificial intelligence governance at George Washington University, told the Washington Examiner. However, “no one knows how to address the AI ecosystem in an efficient way. Managing such a rapidly evolving technology is challenging.”
Dozens of bills have been introduced in Congress to regulate artificial intelligence, and President Joe Biden issued a sweeping executive order in October setting out rules for the technology. The government clearly wants to establish how artificial intelligence models can be used safely.
But if lawmakers want to enact meaningful regulations, they need to realize that some aspects of the technology are not easily addressed through legislation.
Artificial Intelligence and Free Speech
A major obstacle to regulating artificial intelligence is that it is ultimately just code, and therefore a form of speech protected by the First Amendment. Daniel Castro, vice president of the Information Technology and Innovation Foundation, told the Washington Examiner that regulating artificial intelligence models “is much more difficult because now we are talking about software and data regulation, which is closely related to free speech and other issues.”
Courts have also treated software code as a form of speech. The Ninth Circuit Court of Appeals ruled in the 1990s that software source code is protected by the First Amendment and that the federal government could not impose restrictions on the publication of encryption code.
The issue of speech comes into play when discussing how to deal with fake images, videos, or audio generated by artificial intelligence, also known as deepfakes. Deepfakes are seen as a vehicle for spreading misinformation and committing scams at scale, but users still have the right to create images and other media with artificial intelligence. In a blog post, Esha Bhandari, deputy director of the Speech, Privacy and Technology Project at the American Civil Liberties Union (ACLU), argued that “any attempt to regulate content generated by artificial intelligence, including [AI models],” would face the same constitutional limits: the ability to lie is constitutionally protected outside of narrowly defined situations, so generative AI models must be allowed to create images.
Open-Source Software
Another aspect of artificial intelligence that is difficult to regulate is the existence of powerful open-source models. These large language models let anyone view or modify the code that powers them, and they are considered an indispensable tool for helping smaller companies compete with large closed-source AI models such as OpenAI’s GPT-4.
Open-source AI models are particularly difficult to place guardrails around because their developers may live anywhere in the world, and once a model is published, anyone with Internet access can change, update, or modify it. That open access could also be used to spread misinformation or develop biological weapons.
Members of Congress worry that malicious actors like Russia could misuse open-source models. Sens. Richard Blumenthal and Josh Hawley sent a letter to Meta CEO Mark Zuckerberg in June raising concerns about the company’s open-source large language model LLaMA, claiming the model could be abused for “spam, fraud, malware, privacy violations, harassment, and other inappropriate behavior and harm.”
Hugging Face CEO Clement Delangue, who runs one of the leading open-source artificial intelligence hubs, defended open-source models before Congress that same month, saying they are “critical to incentives and deeply consistent with American values and interests.” Regulating open-source models remains murky: while lawmakers like Hawley and Blumenthal may want to restrict them because potentially malicious actors can use them, such restrictions would also cut off the models’ benefits.
The complexity of open-source models is behind the EU’s decision to exclude most of them from its Artificial Intelligence Act, which is expected to pass this spring. The EU’s comprehensive AI framework will impose different restrictions on AI models based on “risk levels,” including vetting the foundation models used to perform a wide range of tasks, limiting high-risk technologies such as facial recognition scanners, and requiring AI developers to submit summaries of training data for regulatory review.
Regulatory Alternatives
Artificial intelligence is developing faster than agencies can create regulations. Lawmakers may have greater success by focusing on specific applications of the technology rather than the model itself.
There are already “technology-neutral laws” that “apply to how artificial intelligence is used,” Duane Pozza, a former FTC staffer and partner at Wiley Rein, told the Washington Examiner. For example, last week the Federal Communications Commission announced a ban on the use of artificial intelligence-generated voices in automated calls. The decision doesn’t create new rules, Pozza said, but it does define “how artificial intelligence fits into existing regulations on robocalls.”
That’s why Castro encourages regulating the functionality of AI models rather than the entire model. For example, the Federal Trade Commission took action Thursday to create rules to combat the use of generative artificial intelligence in scam calls. The FTC already has fraud authority, but now it’s focused on specific uses of artificial intelligence to defraud people, rather than taking any action on the software behind it.
Some experts in the field of artificial intelligence have urged lawmakers not to push legislation without understanding its impact. “The more we rush to regulate, the higher the cost will be,” Will Rinehart, a senior fellow at the American Enterprise Institute, told the Washington Examiner. Hawley tried to speed the passage of legislation last December that would remove Section 230 protections for artificial intelligence, a reform that would significantly affect speech on the Internet and online platforms. Texas Republican Sen. Ted Cruz blocked the bill, saying it needed to go through the proper committee process before being considered.
“When you want something that protects consumers and individuals but doesn’t hinder innovation and entrepreneurship, it takes some time to figure out what the real issues are,” Rinehart said.