Artificial intelligence is here to stay. The development and use of artificial intelligence (“AI”) in healthcare is growing rapidly and shows no signs of slowing down.
From a government perspective, many federal agencies are embracing the possibilities of artificial intelligence. The U.S. Centers for Disease Control and Prevention is exploring the ability of artificial intelligence to predict sentinel events and combat disease outbreaks, and the National Institutes of Health is using artificial intelligence in priority research areas. The Centers for Medicare & Medicaid Services is also evaluating whether the algorithms used by plans and providers to identify high-risk patients and manage costs introduce biases and limitations. In addition, as of December 2023, the U.S. Food and Drug Administration had authorized more than 690 artificial intelligence-enabled medical devices for marketing.
From a clinical perspective, payers and providers are integrating artificial intelligence into daily operations and patient care. Hospitals and payers are using artificial intelligence tools to assist with billing. Doctors are using AI to take notes, and many healthcare providers are grappling with which AI tools to use and how to deploy AI in clinical settings. With the application of artificial intelligence in clinical settings, the standard of patient care is constantly evolving, and no entity wants to be left behind.
From an industry perspective, the legal and business landscape is transforming as new national and international regulations focus on establishing the safe and effective use of artificial intelligence, and as businesses respond to those regulations. Three of these developments are top of mind: (i) President Biden's Executive Order on the Development and Use of Safe, Secure, and Trustworthy Artificial Intelligence; (ii) the U.S. Department of Health and Human Services ("HHS") Health Data, Technology, and Interoperability final rule; and (iii) the World Health Organization ("WHO") guidance on large multimodal models for generative artificial intelligence. In response to these regulations and general advances in AI, interested healthcare stakeholders, including many leading healthcare companies, have voluntarily committed to the common goal of responsible AI use.
U.S. Executive Order on the Development and Use of Safe, Secure, and Trustworthy Artificial Intelligence
On October 30, 2023, President Biden issued an Executive Order on the Development and Use of Safe, Secure, and Trustworthy Artificial Intelligence (the "Executive Order"). Although long awaited, the Executive Order is a significant development and one of the most ambitious attempts to regulate this emerging technology. The Executive Order sets out eight guiding principles and priorities: (i) safety and security; (ii) innovation and competition; (iii) commitment to the American workforce; (iv) equity and civil rights; (v) consumer protection; (vi) privacy; (vii) government use of artificial intelligence; and (viii) global leadership.
For healthcare stakeholders in particular, the Executive Order directs the National Institute of Standards and Technology to develop guidance and best practices for the development and use of artificial intelligence, and directs the U.S. Department of Health and Human Services to establish an artificial intelligence task force. The task force will develop policies and frameworks for the responsible deployment of artificial intelligence and artificial intelligence-enabled technologies in healthcare. Beyond these directives, the Executive Order also emphasizes the duality of artificial intelligence: the "promise" it brings and the "peril" it can cause. This duality is reflected in the HHS directives to establish an AI safety program and to prioritize funding that supports AI development while ensuring compliance with nondiscrimination standards.
U.S. Department of Health and Human Services Health Data, Technology, and Interoperability Rule
Following the issuance of the Executive Order, the HHS Office of the National Coordinator for Health Information Technology finalized its Health Data, Technology, and Interoperability rule, known as HTI-1, on December 13, 2023. With respect to artificial intelligence, HTI-1 establishes transparency requirements for artificial intelligence and other predictive algorithms that are part of certified health information technology. The rule also:
- Implements requirements to improve equity, innovation, and interoperability;
- Supports the access, exchange, and use of electronic health information;
- Addresses concerns about bias, data collection, and security;
- Modifies existing clinical decision support certification standards and narrows the scope of affected predictive decision support interventions; and
- Adopts health IT certification requirements with new conditions and maintenance-of-certification requirements for developers.
Voluntary commitments from leading healthcare companies to responsible use of artificial intelligence
Immediately following the release of HTI-1, leading healthcare companies made voluntary commitments to responsible AI development and deployment. On December 14, 2023, the Biden administration announced that 28 healthcare provider and payer organizations had signed commitments to the safe, secure, and trustworthy purchase and use of artificial intelligence technology. Specifically, the provider and payer organizations agree to:
- Develop artificial intelligence solutions to optimize healthcare delivery and payment;
- Work to ensure solutions are fair, appropriate, valid, effective, and safe ("FAVES");
- Deploy trust mechanisms to inform users whether content is primarily generated by artificial intelligence and has not been reviewed or edited by humans;
- Adhere to a risk management framework when using artificial intelligence; and
- Research, investigate and develop artificial intelligence quickly but responsibly.
WHO Guidance on Large Multimodal Models for Generative Artificial Intelligence
On January 18, 2024, the World Health Organization released guidance on large multimodal models ("LMMs") for generative artificial intelligence. LMMs can simultaneously process and understand multiple types of data, such as text, images, audio, and video. The 98-page WHO guidance makes more than 40 recommendations on LMMs for technology developers, providers, and governments, and identifies five potential applications of LMMs: (i) diagnosis and clinical care; (ii) patient-guided use; (iii) administrative tasks; (iv) medical education; and (v) scientific research. It also addresses liability issues that may arise from the use of LMMs.
Closely related to the WHO guidance, the European Council's agreement to move forward with the EU Artificial Intelligence Act (the "Act") marks an important milestone in the regulation of AI in the EU. As previewed in December 2023, the Act will shape how artificial intelligence is regulated across the EU, and other countries may take note and follow suit.
Conclusion
There is no doubt that artificial intelligence is here to stay. But it remains to be seen what the healthcare industry will look like when artificial intelligence is more fully integrated. As AI and the use of AI in healthcare settings change, the framework governing AI will continue to evolve. At the same time, healthcare stakeholders considering or adopting AI solutions should stay informed about AI developments to ensure compliance with applicable laws and regulations.