American College of Radiology Offers Suggestions for Federal Artificial Intelligence Oversight

March 23, 2020

ACR submitted 10 priorities to the White House to augment AI oversight.

As part of the continued effort to incorporate and implement artificial intelligence solutions in healthcare, the American College of Radiology (ACR) recently sent comments to the White House Office of Management and Budget (OMB). The comments will be included in a federal document intended to guide artificial intelligence (AI) oversight.

Within its comments, the ACR outlined 10 priorities in response to the OMB’s draft memorandum, “Guidance for Regulation of Artificial Intelligence.” The draft is part of the larger Executive Order 13859: “Maintaining American Leadership in Artificial Intelligence,” which focuses on both regulatory and nonregulatory approaches to AI-powered and enabled technologies and industries.

In its letter, the ACR agreed with several OMB priorities, but included additional suggestions:

1. Actions surrounding AI must ensure continued public trust. The ACR recommended the U.S. government collaborate with third parties, such as professional organizations, to develop certification measures, validation services, and real-world performance monitoring agencies.

2. The ACR seconded the OMB’s assertion that the public should play a role in federal processes that would ensure regulatory transparency and accountability.

3. Rulemaking and guidance efforts should be based in scientific integrity and information quality. In particular, the ACR wrote, they should be founded in “transparency articulating the strengths, weaknesses, intended optimizations or outcomes, bias mitigations, and appropriate use of the regulated AI applications,” adding that risk and risk mitigation should also be included.

4. The ACR agreed with the OMB that oversight approaches should be based on the application of risk assessment and management, touching on multiple agencies and technologies. It stressed that certain sectors or agencies, including healthcare, can have oversight gaps, and in these cases, third-party validation or certification can be helpful.

5. AI costs and benefits should be evaluated in the development of specific applications. Engaging national associations that represent AI end-users, including the ACR Data Science Institute, for partnership can ensure that resources are being applied to innovations that will actually be adopted and routinely implemented.

6. Regulatory bodies must have the flexibility and agility to protect health and patient safety while pivoting to support rapid changes and updates.

7. Any AI algorithms must be generalizable for multiple populations and care sites. All platforms must be trained with large datasets, as well as routinely validated and monitored to detect any unforeseen or aberrant technological evolutions.

8. The ACR also called for full disclosure and transparency of any premarket AI review, including ensuring a significant level of data traceability and insight into the training data used in creating new models.

9. The top concern for any AI used in healthcare must be patient and public safety. The ACR agreed with the OMB that cybersecurity is also a top priority.

10. Any coordination between regulatory agencies should be apparent to public stakeholders, AI developers, and users.