Unpacking the Black Box in Artificial Intelligence for Medicine

December 10, 2019

In clinics around the world, a type of artificial intelligence called deep learning is starting to supplement or replace humans in common tasks such as analyzing medical images. Already, at Massachusetts General Hospital in Boston, “every one of the 50,000 screening mammograms we do every year is processed through our deep learning model, and that information is provided to the radiologist,” says Constance Lehman, chief of the hospital’s breast imaging division.

In deep learning, a subset of machine learning, computer models essentially teach themselves to make predictions from large sets of data. The raw power of the technology has improved dramatically in recent years, and it's now used in everything from medical diagnostics to online shopping to autonomous vehicles.
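To make that self-teaching loop concrete, here is a minimal, illustrative sketch in plain NumPy; it is not drawn from the article or from any clinical system. A tiny neural network learns the XOR rule purely from labeled examples, repeatedly adjusting its internal weights to shrink its prediction error. Production medical imaging models follow the same basic loop, just with millions of weights and mammograms in place of four toy inputs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Four training examples and the answers the model must learn to predict
# (the XOR rule). The model is never told the rule, only shown examples.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

# Randomly initialized weights and biases for one hidden layer of 8 units.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(10_000):
    # Forward pass: compute the model's current predictions.
    h = sigmoid(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)

    # Backward pass: nudge every weight to reduce the prediction error.
    dp = (p - y) * p * (1 - p)
    dh = dp @ W2.T * h * (1 - h)
    W2 -= 0.5 * (h.T @ dp); b2 -= 0.5 * dp.sum(axis=0)
    W1 -= 0.5 * (X.T @ dh); b1 -= 0.5 * dh.sum(axis=0)

print(p.round(2).ravel())  # predictions approach the true labels [0, 1, 1, 0]
```

Notably, the learned behavior lives entirely in the numeric values of W1 and W2, which is exactly why a trained network can be hard to interpret: nothing in those numbers reads like a human-legible rule.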

But deep learning tools also raise worrying questions because they solve problems in ways that humans can’t always follow. If the connection between the data you feed into the model and the output it delivers is inscrutable — hidden inside a so-called black box — how can it be trusted? Among researchers, there’s a growing call to clarify how deep learning tools make decisions — and a debate over what such interpretability might demand and when it’s truly needed. The stakes are particularly high in medicine, where lives will be on the line.

The full Undark article can be viewed at this link.