How Deep Learning Can Be Applied to Neuroradiology

While neurological diseases can seem sudden, striking out of nowhere, many of them are actually progressive. The brain develops such conditions over time, and the symptoms can be barely noticeable until it is too late. The secret to helping such patients in time is making friends with deep learning technology.

Various types of image analysis software, mostly based on deep learning (a subfield of artificial intelligence), are being increasingly adopted in radiology because they automate image processing and segmentation, reducing the time spent on scan interpretation. According to a recent report, the global medical image analysis market is forecast to reach $4.26 billion by 2025.

In image analysis, deep learning is mostly represented by convolutional neural networks: multi-layered artificial neural networks that are particularly successful in image classification. These models help health specialists overcome human limits in visual data processing. Providers can see the structure of the brain and grasp changes in brain activity with greater speed and accuracy, finding subtle patterns and preventing severe deterioration of patients' health.
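
To make the idea concrete, here is a minimal sketch in PyTorch of a convolutional network for slice-level classification (e.g., "lesion" vs. "no lesion"). The layer sizes, two-class output, and 128x128 single-channel input are illustrative assumptions, not a clinically validated architecture.

```python
# Minimal CNN sketch for slice-level classification (illustrative only).
import torch
import torch.nn as nn

class SliceClassifier(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)                   # (B, 64, 1, 1)
        return self.classifier(x.flatten(1))   # (B, num_classes) logits

# Example: a batch of 4 grayscale 128x128 slices.
logits = SliceClassifier()(torch.randn(4, 1, 128, 128))
print(logits.shape)  # torch.Size([4, 2])
```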

Oh, the Things Deep Learning Can Do

The best thing about deep learning is that it can be applied across the full spectrum of the neuroimaging cycle: from the initial examination, where the decision to order an MRI, CT, angiography, or myelography test is made, through image acquisition itself, to image processing and interpretation, including localization, segmentation, lesion detection, and the subsequent differential diagnosis. Each of these tasks can be improved with the help of deep learning.

Protocolling

After an examination, a health specialist orders a study, which is then manually bound to a particular neuroimaging protocol. Radiologists have to not only follow the protocol but also pay attention to the specific requests of physicians, which are most likely typed into the order history as free text.

Here, deep learning (in the form of recurrent neural networks) can streamline protocolling by applying natural language processing to the free-text portion of the order and automating its interpretation. This is relatively easy to enable, since all prior test orders protocolled by human specialists already form a huge database. That database can serve as a training set, letting neural networks learn the structure of protocols and make sense of free text, so that the algorithms can handle protocolling themselves and free up radiologists' time for acquiring and studying patient images.
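
As a rough illustration of that setup, the sketch below (PyTorch, with a placeholder vocabulary and protocol list) shows a recurrent network reading a tokenized free-text order and predicting a protocol label. A real system would be trained on the historical, human-protocolled orders mentioned above; every name and label here is an assumption for demonstration.

```python
# Sketch: recurrent network mapping a tokenized free-text order to a protocol label.
import torch
import torch.nn as nn

PROTOCOLS = ["brain_mri_with_contrast", "brain_mri_without_contrast", "ct_angiography"]  # assumed labels

class ProtocolClassifier(nn.Module):
    def __init__(self, vocab_size: int, num_protocols: int, embed_dim: int = 64, hidden: int = 128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.rnn = nn.GRU(embed_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, num_protocols)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        embedded = self.embed(token_ids)          # (B, T, embed_dim)
        _, last_hidden = self.rnn(embedded)       # (1, B, hidden)
        return self.out(last_hidden.squeeze(0))   # (B, num_protocols) logits

# Example: a toy "tokenized" order; real tokenization would map words to ids.
order = torch.tensor([[12, 45, 7, 0, 0]])  # padded sequence of token ids
model = ProtocolClassifier(vocab_size=1000, num_protocols=len(PROTOCOLS))
print(PROTOCOLS[model(order).argmax(dim=1).item()])
```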

Prioritizing Image Reviews

A really promising application of deep learning is automated queuing of image reviews depending on suspected acuity. Since the algorithms can be trained to identify critical issues on images – e.g., tumors on MRI scans – they can also be used to prioritize the radiologic review of the most concerning images.

With preliminary image processing and flagging of findings automated, radiologists can focus their attention on confirming diagnoses for patients in critical or even life-threatening condition. By cutting the time between image acquisition and interpretation, providers can deliver well-timed medical assistance to the patients who need it most urgently.
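
A minimal sketch of what such queuing might look like: each study receives a model-estimated probability of a critical finding, and the review worklist is sorted so the most concerning studies surface first. The study IDs and scores below are purely illustrative, and a real triage policy would of course be validated clinically.

```python
# Sketch: sort a radiology worklist by model-estimated probability of a critical finding.
from dataclasses import dataclass, field
import heapq

@dataclass(order=True)
class QueuedStudy:
    priority: float                      # negative probability, so highest risk pops first
    study_id: str = field(compare=False)

def build_worklist(scored_studies: dict[str, float]) -> list[str]:
    """scored_studies maps study_id -> model probability of a critical finding."""
    heap = [QueuedStudy(-prob, sid) for sid, prob in scored_studies.items()]
    heapq.heapify(heap)
    return [heapq.heappop(heap).study_id for _ in range(len(heap))]

# Example: the suspected acute finding (0.91) is reviewed before routine follow-ups.
print(build_worklist({"study_001": 0.12, "study_002": 0.91, "study_003": 0.47}))
# ['study_002', 'study_003', 'study_001']
```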

Simulating Scanning Results

Additionally, some patients may have MRI-incompatible implants or other health issues that make certain imaging tests impossible. Deep learning can help fill in the blanks for these patients by recreating the missing data points. The catch is that it requires multiple sets of medical images from patients in the same population. Given such data, a neural network can predict the expected images for patients who are unable to undergo an examination in the traditional way.
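
As an illustration of the idea, the sketch below uses a simple encoder-decoder to map an available modality (say, a CT slice) to an estimated image in a missing modality. The architecture, image size, and modality pairing are assumptions for demonstration, not a production image-synthesis model.

```python
# Sketch: encoder-decoder translating an available scan into a predicted missing-modality image.
import torch
import torch.nn as nn

class ModalityTranslator(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, available_scan: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(available_scan))

# Example: translate a 128x128 CT slice into a predicted MRI-like slice.
predicted = ModalityTranslator()(torch.randn(1, 1, 128, 128))
print(predicted.shape)  # torch.Size([1, 1, 128, 128])
```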

Lesion Localization and Segmentation

While image segmentation and lesion localization are tedious tasks for human health specialists, they are practically a dream job for deep learning technology. Lesion detection means identifying potential abnormalities, whereas segmentation allows radiologists to highlight and delineate a lesion's core and its subregions.

These processes are highly important for treatment and surgery planning, as well as for monitoring a lesion's response to an already administered therapy. Since a lesion's growth or shrinkage can amount to just 1-2 millimeters, precision of measurement is a must. By supporting human radiologists in their complex daily activities, deep learning can truly save lives.
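
To show how segmentation is typically framed for a network, here is a heavily simplified encoder-decoder (in the spirit of U-Net) that outputs a per-pixel lesion probability map. Channel counts and input size are assumptions for illustration; this is not the architecture used in the studies cited below.

```python
# Sketch: tiny encoder-decoder producing a per-pixel lesion probability map.
import torch
import torch.nn as nn

class TinySegmenter(nn.Module):
    def __init__(self):
        super().__init__()
        self.down = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.up = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),
            nn.Conv2d(32, 1, 1),    # 1x1 conv -> per-pixel lesion logit
        )

    def forward(self, slice_: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.up(self.down(slice_)))

# Example: lesion probability map for a single 128x128 MRI slice.
mask = TinySegmenter()(torch.randn(1, 1, 128, 128))
print(mask.shape)  # torch.Size([1, 1, 128, 128])
```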

For example, Stanford University researchers presented an algorithm for automated brain lesion segmentation based on convolutional neural networks. The algorithm achieved 89 percent accuracy in tumor segmentation, delineating the lesion area, edema, and enhancing and nonenhancing cores with higher precision than human radiologists, who scored 85 percent.

Qi Dou and fellow researchers, in work published by the IEEE Engineering in Medicine and Biology Society, trained convolutional neural networks to detect cerebral microbleeds on MRI scans, bringing automated identification of infarcted brain tissue. Dou et al. achieved a high sensitivity of 93.16 percent with a two-step approach, in which lesions were first localized by the network and then analyzed again to determine whether each abnormality was a true microhemorrhage or a mimic. This approach is a promising way of reducing the damage for patients who have suffered an acute stroke.
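
The sketch below mirrors the general two-step structure described above, using untrained placeholder networks: a screening stage scores candidate 3D patches, and a discrimination stage classifies each retained candidate as a true microbleed or a mimic. It is only an illustration of the pipeline shape, not the authors' implementation, and the patch size, threshold, and class convention are assumptions.

```python
# Sketch: two-stage candidate screening and discrimination, with placeholder networks.
import torch
import torch.nn as nn

screening_net = nn.Sequential(       # step 1: candidate scoring on small 3D patches
    nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(16, 1), nn.Sigmoid(),
)
discrimination_net = nn.Sequential(  # step 2: true microbleed vs. mimic
    nn.Conv3d(1, 32, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(32, 2),
)

def detect_microbleeds(patches: torch.Tensor, threshold: float = 0.5) -> list[int]:
    """patches: (N, 1, D, H, W) candidate patches; returns indices kept by both stages."""
    scores = screening_net(patches).squeeze(1)                 # (N,) candidate scores
    candidates = (scores > threshold).nonzero().squeeze(1)
    kept = []
    for idx in candidates.tolist():
        logits = discrimination_net(patches[idx : idx + 1])    # classify one candidate
        if logits.argmax(dim=1).item() == 1:                   # class 1 = true microbleed (assumed)
            kept.append(idx)
    return kept

print(detect_microbleeds(torch.randn(8, 1, 16, 16, 16)))
```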

Challenges of Deep Learning Adoption in Neuroradiology

Of course, the first thing that bothers clinical stakeholders is that if deep learning becomes the go-to technology in radiology, it could shrink the amount of work left for specialists, since many tasks that currently demand their time would be automated. Moreover, even considering the great promise of neural networks and machine learning for automating complex and tedious tasks, there have to be established processes for verifying the results.

Next, there is the black box problem of deep learning: we know what a deep network can classify, but have little understanding of how exactly it arrives at its classification decision. Therefore, prior to deep learning's widespread adoption, it is crucial to form a solid understanding of the whys and hows of a neural network's performance. Radiologists can't just accept "because" as an answer to the question of why a given brain lesion is malignant or benign, can they?

Another challenge is rooted in the annotated datasets needed for successful training of algorithms. First, these sets have to be large. Even though healthcare is all about big data, manually annotated sets need to be created for each particular task, area of application, imaging method, and even modality. Additionally, providers have to keep both datasets and algorithms up to date, as source data and practice patterns evolve over time.

So, What’s Next for Deep Learning?

With all these challenges in mind, we can still expect deep learning to be adopted across various clinical domains – and, of course, neuroradiology – in the next 5 to 10 years. It certainly won't replace human radiologists, because the technology requires continuous supervision, double-checks, and manual input from health specialists. But it is already capable of improving efficiency and accuracy in diagnosis confirmation, assisting providers in making well-timed decisions, and helping them keep their cool even in acute situations.

By Inga Shugalo

Inga Shugalo is a Healthcare Industry Analyst at Itransition. She focuses on Healthcare IT, highlighting the industry challenges and technology solutions that tackle them.
