As we enter the second full year of the artificial intelligence (AI) revolution, a clear understanding of the technology and its legal implications is critical for every general counsel (GC).
From understanding the technology itself and its limitations, to addressing legal uncertainty and establishing best practices, this alert covers the most pressing issues related to artificial intelligence in the legal field in 2024. In addition to legal considerations, such as intellectual property protection and compliance with newly effective laws like the Corporate Transparency Act (CTA), GCs will also gain a deep understanding of business use cases and the opportunities AI offers to maximize financial returns.
1. Understand the technology
Much of the current literature and discussion about AI conflates various types and subsets of AI technologies. “Artificial intelligence” broadly refers to any machine or software designed to imitate human intelligence. “Machine learning,” another widely used term, is a subset of artificial intelligence in which systems learn from data to make decisions. While artificial intelligence and machine learning have been around for decades (think IBM’s Watson), the widespread release in 2022 of generative artificial intelligence (Gen AI) tools like ChatGPT ushered in the current era of AI. Broadly speaking, Gen AI tools create new content such as text, images, songs, computer code and videos based on user prompts. Each subset of AI tools and technologies has its own unique benefits and legal risks, so understanding the differences between them is critical for businesses to achieve their goals while effectively managing risk.
2. Understand the limitations
As we delve deeper into the world of artificial intelligence, it is critical to recognize its limitations, especially in the context of this new generation of artificial intelligence. People should not assume that anything created using Gen AI is suitable for commercial use. Content generated by artificial intelligence may be inaccurate, biased or infringe the intellectual property rights of third parties, in particular copyright, trademark and publicity rights. For example, Gen AI tools have been known to replicate the likenesses of famous celebrities and create songs that sound like real musicians. A trademark or design created using Gen AI may be confusingly similar to an existing third-party trademark. In addition, software created by Gen AI may contain code that is subject to restrictive or onerous third-party or open source license agreements. Commercial entities that use such content without the necessary rights or approvals may face significant liability. Given these limitations, it is critical that GCs remain skeptical and diligent when evaluating the potential use of Gen AI tools and content.
3. Hedging against legal uncertainty
Next year may determine whether Gen AI’s current moment is remembered as the “Napster era” or the “Spotify era.” A number of pending lawsuits are challenging a core assumption of Gen AI developers: that using third-party copyrighted works to train Gen AI tools is “fair use” under U.S. copyright law. If a court concludes that this assumption is incorrect, Gen AI developers could suffer staggering damages for copyright infringement. Such a ruling could also trigger a new wave of lawsuits alleging vicarious or contributory copyright infringement. On the other hand, a “fair use” finding would bring much-needed clarity to the industry and potentially mitigate some legal risks. As we await authoritative court rulings on this issue, GCs should consider whether and how to account for this legal uncertainty when assessing the risks of specific uses that their business teams may desire.
4. Establish best practices for protecting intellectual property rights
A recent decision by the U.S. Copyright Office’s Review Board could have far-reaching consequences for some artwork created with Gen AI. On December 11, 2023, the Review Board affirmed its refusal to register an artistic work partially created by Gen AI, concluding that the work lacked the “human authorship” required for copyright protection. The decision marks the third time in recent months that the Review Board has issued a written opinion analyzing the impact of Gen AI on copyright protection, continuing a trend of courts and the Copyright Office denying copyright protection to works generated by artificial intelligence. This decision has significant implications for rights holders. If a work contains too much Gen AI content, copyright protection for the work may be lost in whole or in part. In addition, copyright applicants must disclose the inclusion of AI-generated content in their copyright applications. Failure to do so may result in the copyright registration being cancelled, barring suit in federal court and precluding statutory damages from infringers. GCs should work closely with creative teams to implement best practices and policies that help reduce the risk that any particular work product will not be protected by U.S. copyright law.
5. Help enterprises identify and evaluate potential use cases
There is no shortage of Gen AI tools on the market today covering a variety of potential uses. While business teams may ultimately decide which use cases and tools best fit their company’s needs, legal plays a key role in both prioritizing use cases and evaluating third-party tools and technologies. For example, legal can help companies identify and differentiate between “high” risk and “low” risk use cases. Legal departments should also work closely with IT departments on vendor selection and negotiations, paying particular attention to data security, intellectual property rights protection, and tool transparency to ensure that selected AI tools comply with the company’s legal and compliance framework.
6. Build your own tool or application
While “off-the-shelf” Gen AI solutions may be sufficient for many businesses, others may go a step further and build custom AI tools or “fine-tune” third-party tools to better meet their needs. As next-generation AI technologies improve and become more widely adopted, customized AI tools and applications are likely to become increasingly common. For example, the market for customized Gen AI legal tools is already booming, with tools specifically designed for contract drafting, legal research, and more. Companies that build their own tools or fine-tune existing ones can increase output, make the tools easier to use, and potentially reduce the risk of infringement. However, GCs must also grapple with key issues such as ensuring that custom tools operate in a transparent and reliable manner, securing any necessary rights and licenses for content in corporate data sets, carefully drafting supplier contracts, and adequately protecting intellectual property rights and confidential information.
7. Update model agreements and terms
As the use of Gen AI continues to increase, it is critical for companies to re-evaluate their standard contracts and agreements to ensure that Gen AI-related uses are taken into account. This may include sample pitch decks, copyright licensing agreements, employment agreements and consulting services agreements. GCs should ask: Do these agreements grant rights to use acquired content or information for machine learning model training or deployment? Does the Software as a Service (SaaS) license or agreement contain appropriate warranties, releases, work product ownership provisions, and disclaimers? Likewise, companies should revisit their terms of use to ensure they adequately address potential AI-related uses. For example, companies may need to consider broadening the scope of any licenses they obtain for user-generated content or materials. They may also want to consider prohibiting certain third-party activities on their sites, such as web scraping or other unauthorized data collection, and informing users that certain features or access to services may be restricted or modified to respond to legal or regulatory changes.
8. Prepare for increased government oversight
As we’ve examined before, federal agencies have been actively evaluating the use of AI. Over the past three months, the government has grown increasingly concerned about how U.S. data and infrastructure can be, and are being, used for artificial intelligence systems in a variety of areas, including chip processing, cloud computing access and the training of models with national security implications. In addition to the various regulatory proposals mandated by President Joe Biden’s executive order, U.S. agencies have launched workforce education and R&D funding programs, and the U.S. Department of Commerce (DOC) has proposed a rule that would require U.S. providers to report when foreign customers use their computing capabilities to train artificial intelligence models, and would require reports from companies operating large computing clusters capable of training such models. U.S. agencies are also assessing the risks AI systems pose to critical infrastructure and weighing the results of AI safety testing and other important information from developers of powerful AI systems. Companies in the defense, technology, economics or public health sectors will need to consider how to respond to further requests and plan how to protect consumer data used for training or processing by artificial intelligence systems. The U.S. Federal Trade Commission (FTC) is also increasingly concerned about the impact of the rapid development and deployment of artificial intelligence, particularly with respect to consumer products and privacy laws and how AI tools affect the work of creators. This concern is reflected in the FTC’s investigation into the role of major cloud service providers in Gen AI companies, as well as its December 2023 order banning Rite Aid from using facial recognition technology for five years. Companies offering general-purpose AI models or AI systems in the European Union (EU) must also plan to comply with the EU Artificial Intelligence Act, in addition to U.S. federal and state laws and regulations.
9. Comply with employment laws
From recording employee productivity to maximizing human resource efficiency, artificial intelligence holds huge potential for employers. However, the capabilities and potential pitfalls of artificial intelligence are testing the limits of federal and state labor laws, including the Fair Labor Standards Act (FLSA), which has defined the 40-hour work week for nearly a century. Artificial intelligence tools that monitor the keystrokes, mouse activity, and/or webcams of remote and onsite employees may undercount (or overcount) compensable time spent away from computers. Likewise, cameras and sensors used on a factory floor to monitor worker productivity may not account for compensable time away from the shop floor, such as time spent donning or doffing uniforms or equipment. Additionally, artificial intelligence tools used for employment decisions may perpetuate unlawful discrimination or bias. Before implementing any such tools, companies should carefully consider their obligations under the FLSA and the extent to which they can rely on AI tools.
10. Notes for AI startups
Are you a GC for an AI startup? If so, don’t ignore the CTA. Effective January 1, 2024, this federal law requires many companies to disclose information about the individuals who own or control them to the Financial Crimes Enforcement Network (FinCEN), a bureau of the U.S. Department of the Treasury. Covered companies must report the identity of their “beneficial owners” – typically individuals who own at least a 25% interest in the company or exercise substantial control over it. A beneficial owner must be an individual, not a corporate entity or trust. Reporting companies formed or registered in 2024 must submit an initial report to FinCEN within 90 days of formation or registration. Companies formed or registered before January 1, 2024 must submit an initial report by January 1, 2025. After filing an initial report with FinCEN, a reporting company must update its filing within 30 days of any change in its beneficial ownership or certain other reported information. If you are using artificial intelligence to assist with entity formation and reporting guidance, be aware of this new law (and others like it), especially since many Gen AI tools do not account for recent changes in legal and regulatory requirements.