From The LINK: Seeing Beneath the Surface

Megan Harris | Tuesday, December 13, 2022

New research in SCS uses convolutional neural networks to combine raw ultrasound waveforms with a black-and-white ultrasound image to segment and label tissues for anatomical, pathological or other diagnostic purposes.

Ultrasound imaging is a safe, cheap and fast diagnostic method. But among common medical imaging techniques, it arguably produces the least clear images, making it difficult for even highly trained humans to confidently interpret a scan and make a diagnosis. What if AI could do it better?

John Galeotti, a systems scientist with Carnegie Mellon University's Robotics Institute and an adjunct assistant professor in biomedical engineering, thinks it can. His most recent work, W-Net, is a novel convolutional neural network framework that combines raw ultrasound waveforms with the conventional black-and-white (B-mode) ultrasound image to segment and label tissues for anatomical, pathological or other diagnostic purposes.
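
The article doesn't spell out W-Net's internals, but the core idea it describes, two input streams feeding one per-pixel tissue labeling, can be sketched in a few lines. Below is a minimal, hypothetical PyTorch sketch, not the actual W-Net: the class name DualInputSegNet, the layer sizes, the four tissue classes and the treatment of the raw waveform data as a single-channel 2D map are all illustrative assumptions.

```python
# Hypothetical sketch of a dual-input segmentation network in PyTorch.
# This is NOT the published W-Net architecture; it only illustrates the
# general idea from the article: one encoder branch for raw ultrasound
# waveform (RF) data, one for the reconstructed B-mode image, fused
# before a head that predicts a per-pixel tissue label map.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU, a common encoder building block.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )

class DualInputSegNet(nn.Module):
    def __init__(self, num_classes=4):
        super().__init__()
        # Branch 1: raw RF waveform data, treated here as a 1-channel 2D
        # map (scan lines x time samples) -- a simplifying assumption.
        self.rf_encoder = conv_block(1, 16)
        # Branch 2: the standard black-and-white (B-mode) image.
        self.bmode_encoder = conv_block(1, 16)
        # Fuse the two feature maps by channel concatenation.
        self.fuse = conv_block(32, 32)
        # Per-pixel classification head: one logit per tissue class.
        self.head = nn.Conv2d(32, num_classes, kernel_size=1)

    def forward(self, rf, bmode):
        # Both inputs are assumed resampled to the same H x W grid.
        feats = torch.cat(
            [self.rf_encoder(rf), self.bmode_encoder(bmode)], dim=1
        )
        return self.head(self.fuse(feats))  # (N, num_classes, H, W) logits

# Usage: segment one 256x256 frame into 4 hypothetical tissue classes.
model = DualInputSegNet(num_classes=4)
rf = torch.randn(1, 1, 256, 256)     # raw waveform channel (stand-in data)
bmode = torch.randn(1, 1, 256, 256)  # B-mode image channel (stand-in data)
labels = model(rf, bmode).argmax(dim=1)  # per-pixel tissue label map
```

In the sketch, the two branches are fused by simple channel concatenation; how the real framework fuses its inputs, and its actual depths and training losses, are described in the full publication.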

By processing details in an ultrasound signal that a radiologist doesn't typically use, the AI can draw more informed conclusions. In fact, recent tests using W-Net for breast tumor detection showed that the AI outperformed established diagnostic frameworks. The team is now applying it to pulmonary applications and partnering with clinical experts to further develop a deep-learning model that can not only interpret ultrasound readings, but also present them to clinicians and patients on a screen, a capability that could be especially valuable in high-stakes settings such as ambulance rides and battlefields.

To learn more about how W-Net is leveraging AI to help improve medical diagnostic techniques, read the full article in the SCS magazine, The LINK.

For More Information

Aaron Aupperlee | 412-268-9068 | aaupperlee@cmu.edu