

Following Google, OpenAI and 13 other AI companies, leading healthcare entities have agreed to sign the Biden-Harris Administration's voluntary commitments for the safe, secure and trustworthy development and use of artificial intelligence.

Announced on Dec. 14, the commitments reflect a series of actions to pursue the “once-in-a-generation” benefits of large-scale models that can perform a variety of tasks in healthcare environments while mitigating their risks and protecting patients’ sensitive health information at the same time.

A total of 28 organizations on the demand side of healthcare operations – providers and payers who develop, purchase and implement AI-enabled technologies in their workflows – have signed these commitments. Some of the names on the list are CVS Health, Stanford Health, Boston Children’s Hospital, UC San Diego Health, UC Davis Health and WellSpan Health.

Building AI to optimize healthcare delivery and payment

Even before the rise of ChatGPT and generative AI in general, the role of artificial intelligence in healthcare was widely discussed, including use cases such as diagnosing diseases early and discovering new treatments. But alongside the benefits, many have raised questions about the safety and reliability of AI systems in healthcare settings. In a recent survey by GE Healthcare, which is not one of the signing entities here, 55% of clinicians said AI technology is not yet ready for medical use and 58% indicated that they do not trust AI data.


For clinicians who had more than 16 years of experience, the skepticism level was even higher, with 67% lacking trust in AI. 

With these voluntary commitments to the Biden administration, the 28 healthcare providers and payers want to end this skepticism and develop AI to deliver more coordinated care, improved patient experiences and reduced clinician burnout.

“We believe that AI is a once-in-a-generation opportunity to accelerate improvements to the healthcare system, as noted in the Biden Administration’s call to action for frontier models to work towards early cancer detection and prevention,” the organizations noted in their commitment document.

To build confidence in these AI systems among downstream users, such as clinicians and healthcare workers, the organizations have committed to ensuring their projects are aligned with the fair, appropriate, valid, effective and safe (FAVES) AI principles outlined by the U.S. Department of Health and Human Services (HHS). This will help them ensure that their solutions perform as intended in targeted real-world use cases, free of bias and known risks.

Then, they will work to establish trust through transparency and a risk management framework. Transparency means informing users when the content they are seeing is largely or exclusively AI-generated and has not been edited or reviewed by a human. The risk management framework will include comprehensive tracking of applications powered by frontier models, accounting for potential harms in different healthcare applications and settings, and steps to mitigate them.
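In practice, the transparency commitment amounts to attaching a disclosure to machine-generated content before it reaches a user. As a minimal illustrative sketch (the function name and label wording below are assumptions, not drawn from the commitment document):

```python
def label_ai_content(text: str, human_reviewed: bool) -> str:
    """Prepend a disclosure when content is largely or exclusively
    AI-generated and has not been edited or reviewed by a human."""
    if not human_reviewed:
        return "[AI-generated; not reviewed by a human] " + text
    return text

# A draft that skipped human review carries the disclosure;
# a reviewed draft is delivered as-is.
print(label_ai_content("Patient visit summary.", human_reviewed=False))
print(label_ai_content("Patient visit summary.", human_reviewed=True))
```

Real deployments would likely surface this as structured metadata rather than inline text, but the gating logic is the same.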

“We will establish policies and implement controls for applications of frontier models, including how data are acquired, managed and used. Our governance practices shall include maintaining a list of all applications using frontier models and setting an effective framework for risk management and governance, with defined roles and responsibilities for approving the use of frontier models and AI applications,” the companies wrote.
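The governance practice the companies describe, maintaining a list of all applications using frontier models with defined approval roles, could take the shape of a simple application registry. The sketch below is hypothetical (the class names, fields and example entries are illustrative assumptions, not any signatory's actual system):

```python
from dataclasses import dataclass

@dataclass
class FrontierModelApp:
    """One registry entry: an application built on a frontier model."""
    name: str        # application identifier
    model: str       # underlying frontier model
    owner: str       # role responsible for the application
    approver: str    # role with authority to approve frontier-model use
    risk_notes: str  # documented potential harms and mitigations
    approved: bool = False

class AppRegistry:
    """Maintains the list of all applications using frontier models."""
    def __init__(self) -> None:
        self._apps: dict[str, FrontierModelApp] = {}

    def register(self, app: FrontierModelApp) -> None:
        self._apps[app.name] = app

    def approve(self, name: str, approver: str) -> None:
        self._apps[name].approver = approver
        self._apps[name].approved = True

    def unapproved(self) -> list[str]:
        # Surface applications still awaiting governance review
        return [a.name for a in self._apps.values() if not a.approved]

registry = AppRegistry()
registry.register(FrontierModelApp(
    name="discharge-summary-drafts",
    model="example-frontier-llm",
    owner="clinical-informatics",
    approver="",
    risk_notes="Drafts must be reviewed by a clinician before entering the record.",
))
print(registry.unapproved())  # newly registered app awaits approval
```

A production registry would add audit logging and persistence, but even this skeleton captures the commitment's core: no frontier-model application runs without a named owner, documented risks and an explicit approval.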

Responsible research and innovation

While focusing on existing implementations, the organizations also said they will continue R&D on health-centric AI innovation – with guardrails in place. 

To do this, they plan to leverage non-production environments, test data and internally facing applications to prototype new applications and confirm their privacy compliance. Then, they will monitor the outcomes of these applications on an ongoing basis, ensuring that they provide fair and accurate responses in their respective use cases. This can be done with the help of a human-in-the-loop or dedicated tooling for AI evaluation and observability.
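Ongoing monitoring of this kind is often implemented as a review queue: model outputs below a confidence threshold are routed to a human reviewer, and a random sample of the rest is audited. A minimal sketch, assuming hypothetical thresholds and field names not specified in the commitments:

```python
import random

def needs_human_review(confidence: float, threshold: float = 0.8) -> bool:
    """Route low-confidence outputs to a clinician (human-in-the-loop)."""
    return confidence < threshold

def monitor(outputs: list[dict], sample_rate: float = 0.1, seed: int = 0) -> list[dict]:
    """Flag all low-confidence outputs, plus a random audit sample of the rest."""
    rng = random.Random(seed)
    flagged = []
    for out in outputs:
        if needs_human_review(out["confidence"]) or rng.random() < sample_rate:
            flagged.append(out)
    return flagged

outputs = [
    {"id": 1, "confidence": 0.95},
    {"id": 2, "confidence": 0.55},  # low confidence: always routed to review
]
print([o["id"] for o in monitor(outputs)])
```

Dedicated evaluation and observability platforms generalize this pattern, replacing the single confidence score with richer fairness and accuracy metrics tracked over time.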

Finally, the companies will also focus on mitigating the problems associated with open-source technology, wherever used, and train their workforce on safe and effective development and use of applications powered by frontier models.
