The rapid spread of generative artificial intelligence poses a wide range of risks to society, and governments are working to develop effective strategies to address these complex challenges.
One of these risks is potential copyright infringement.
Artificial intelligence learns from the vast amounts of information available on the Internet and generates new works at a speed that exceeds human capabilities.
Concern is swirling among policymakers and regulators, as well as among authors, creators and other copyright holders, that unfair or unlawful uses of creative works and other materials may become widespread.
To address this issue, the Agency for Cultural Affairs, through the Copyright Division of the Council for Culture, last month provided partial guidelines on situations that may raise copyright law issues.
However, many gray areas remain. The fact that the agency received 25,000 public comments on the guidance attests to widespread public anxiety about the matter.
One regulatory issue concerns AI's ability to produce speech in a specific person's voice, because current copyright and other intellectual property laws do not adequately protect the "voices" of voice actors.
In addition to copyright infringement, privacy violations and disinformation are among the risks associated with the creation of artificial intelligence.
Insufficient information sharing about how artificial intelligence works and how companies intend to use the technology appears to be at the root of these concerns.
For example, the public has no way of knowing what consideration AI companies have given to people's rights and safety, what data their AI systems are trained on, or how that data is used to produce new outputs.
Unless AI technology companies properly disclose this information, rights holders and users will not be able to engage in equal dialogue and negotiation with them. Efforts need to be made to promote mutual understanding.
Still, concerns remain because the draft guidelines for artificial intelligence business developed by the Ministry of Internal Affairs and Communications and the Ministry of Economy, Trade and Industry only call on companies to take “voluntary” initiatives to ensure the safety and transparency of artificial intelligence.
When it comes to measures against online defamation, foreign AI-driven social media providers have failed to make sufficient efforts to address this serious problem, despite their overwhelming influence and dominance in cyberspace.
Since the influential AI giants are also based overseas, how to regulate them properly is a challenging issue for Japanese regulators and policymakers.
Will a voluntary approach be effective? Should more assertive measures be considered? Given international trends, the government needs to engage in discussions that directly address these questions.
History shows that with the emergence of new transformative technologies, humans have always gone through a process of trial and error.
Only a handful of companies have been leading the development of artificial intelligence, and they hold key information about the technology. The central regulatory challenge is how to prevent the inner workings of AI from becoming a black box to the public. The government should actively examine what role it can play in achieving this goal.
Now is also the time to review and reassess the overall approach to regulating IT (information technology) giants and formulate an overall roadmap for the future.
——”Asahi Shimbun”, April 6