A team of artificial intelligence researchers has developed a new deep-learning method to identify and segment tumours in medical images.  

Newswise — Montréal, February 22, 2018 – The software makes it possible to automatically analyze several medical imaging modalities. Through a supervised learning process from labeled data, inspired by the way neurons work in the brain, it automatically identifies liver tumours, delineates the contours of the prostate for radiation therapy, and counts cells at the microscopic level, with a performance similar to that of an expert human eye.

“We have developed software that could be added to visualization tools to help doctors perform advanced analyses of different medical imaging modalities,” explained Samuel Kadoury, a researcher at the CRCHUM, professor at Polytechnique Montréal and the study’s senior author. He added: “The algorithm makes it possible to automate pre-processing, detection and segmentation (delineation) tasks on images, which are currently not done because they are too time-consuming for human beings. Our model is very versatile: it works for liver CT scan images, magnetic resonance images (MRI) of the prostate and electron microscopy images of cells.”

Take the example of a patient with liver cancer. Currently, when this patient has a CT scan, the image has to be standardized and normalized before being read by the radiologist. This pre-processing step involves a bit of magic. “You have to adjust the grey shades because the image is often too dark or too pale to distinguish the tumours,” stated Dr. An Tang, a radiologist and researcher at the CRCHUM, professor at Université de Montréal and the study’s co-author. “This adjustment with computer-aided diagnosis (CAD) techniques is not perfect, and lesions can sometimes be missed or incorrectly detected. This is what gave us the idea of improving machine vision. The new deep-learning technique eliminates this pre-processing step by modelling the variability observed in a training database.”
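To make the grey-shade adjustment concrete, here is a minimal sketch in Python of the kind of fixed intensity windowing a CT workstation applies before a radiologist reads the image; the window centre and width are illustrative defaults, not values from the study. It is exactly this hand-tuned step that the new method learns from data instead.

```python
import numpy as np

def window_ct(image_hu, center=50.0, width=350.0):
    """Clip a CT image (in Hounsfield units) to a display window and
    rescale it to [0, 1]. The center/width values are illustrative;
    in practice they are tuned by hand, which is the step the learned
    normalization replaces."""
    low, high = center - width / 2, center + width / 2
    windowed = np.clip(image_hu, low, high)
    return (windowed - low) / (high - low)

# A tiny synthetic "scan" with values well outside the window
scan = np.array([[-1000.0, 0.0], [60.0, 400.0]])
print(window_ct(scan))  # air clips to 0.0, dense tissue to 1.0
```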

How did the AI engineers design this model to see abnormalities in the human body?

“We came up with the idea of combining two types of convolutional neural networks that complemented each other very nicely to produce an image segmentation. The first network takes raw biomedical data as input and learns the optimal data normalization. The second takes the output of the first model and produces segmentation maps,” summarized Michal Drozdzal, the study’s first author, formerly a postdoctoral fellow at Polytechnique and now a research scientist at Facebook AI Research in Montréal.
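As an illustration only, the two-stage design Drozdzal describes could be sketched as follows in PyTorch; the class names and layer sizes are invented for this example and do not reproduce the architecture in the paper.

```python
import torch
import torch.nn as nn

class NormalizationNet(nn.Module):
    """Stand-in for the first network: maps a raw scan to a learned,
    normalized representation (hypothetical layers, not the paper's)."""
    def __init__(self, channels=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, channels, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

class SegmentationNet(nn.Module):
    """Stand-in for the second network: maps the normalized image to
    a per-pixel lesion probability map."""
    def __init__(self, channels=1, classes=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, classes, 3, padding=1),
        )

    def forward(self, x):
        return torch.sigmoid(self.net(x))

raw = torch.randn(1, 1, 64, 64)       # fake raw scan: batch, channel, H, W
normalized = NormalizationNet()(raw)  # stage 1: learned normalization
mask = SegmentationNet()(normalized)  # stage 2: segmentation map
print(mask.shape)                     # torch.Size([1, 1, 64, 64])
```

The point of the design is that the first network replaces hand-tuned pre-processing, so the pair can be trained end to end on raw images.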

A neural network is a complex series of computer operations that allows a computer to learn on its own when it is fed a massive number of examples. Convolutional neural networks (CNNs) work a little like our visual cortex, stacking several layers of processing to produce an output result, in this case an image. They can be pictured as a pile of building blocks. There are several types of neural networks, each with a different architecture.

The researchers combined two neural networks: a fully convolutional network (FCN) and a fully convolutional residual network (FC-ResNet).
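The “residual” in FC-ResNet refers to skip connections that add a block’s input back to its output, which makes very deep networks easier to train. A minimal, hypothetical residual block in PyTorch (sizes are illustrative, not the paper’s):

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Minimal residual block: the input is added back to the output
    of the convolutions (a "skip connection"), the defining idea of
    a ResNet."""
    def __init__(self, channels=16):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.relu = nn.ReLU()

    def forward(self, x):
        out = self.relu(self.conv1(x))
        out = self.conv2(out)
        return self.relu(out + x)  # skip connection

x = torch.randn(1, 16, 32, 32)
print(ResidualBlock()(x).shape)  # torch.Size([1, 16, 32, 32])
```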

According to Professor Kadoury, who is also Canada Research Chair in Medical Imaging and Assisted Interventions, the new algorithm had to be trained to discover lesions by itself: “We fed the computer hundreds of labeled samples of lesions and healthy tissue that had been manually identified by humans. The parameters of the neural networks are adjusted to match the gold-standard annotations, so that the model can later recognize new images without any need for further supervision,” he said.
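A generic supervised training loop of the kind Kadoury describes might look like the sketch below; the placeholder model, the pixel-wise binary cross-entropy loss, and the single synthetic labeled sample are assumptions made to keep the example runnable, not details from the study.

```python
import torch
import torch.nn as nn

# Placeholder model standing in for the two-network pipeline
model = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(8, 1, 3, padding=1))
loss_fn = nn.BCEWithLogitsLoss()  # pixel-wise distance from the annotations
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One synthetic (scan, gold-standard mask) pair; real training would
# use hundreds of manually labeled samples
loader = [(torch.randn(1, 1, 64, 64),
           torch.randint(0, 2, (1, 1, 64, 64)).float())]

for epoch in range(3):
    for scan, gold_mask in loader:
        pred = model(scan)
        loss = loss_fn(pred, gold_mask)  # mismatch with the gold standard
        optimizer.zero_grad()
        loss.backward()   # compute how to adjust the parameters
        optimizer.step()  # adjust them to better match the annotations
    print(f"epoch {epoch}: loss {loss.item():.4f}")
```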

The researchers compared the results obtained by their algorithm with those of other algorithms. “From a visual analysis, we can see that our algorithm performs as well as, if not better than, other algorithms, and comes very close to what a human would do given hours to segment a large number of images. Our algorithm could potentially be used to standardize images from different hospital centres,” asserted Kadoury.

In cancer imaging, doctors would like to measure the tumour burden, which is the volume of all the tumours in a patient’s body.

“The new algorithm could be used as a tool for detecting and monitoring the tumour burden, to get a much fuller picture of the extent of the disease,” stated Dr. Tang.
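As a rough illustration of the arithmetic, tumour burden can be derived from binary segmentation masks by counting lesion voxels and multiplying by the volume of one voxel; the helper below is hypothetical, not code from the study.

```python
import numpy as np

def tumour_burden_ml(masks, voxel_volume_mm3):
    """Sum the segmented tumour voxels across all of a patient's
    masks and convert the total to millilitres (hypothetical helper)."""
    total_voxels = sum(int(m.sum()) for m in masks)
    return total_voxels * voxel_volume_mm3 / 1000.0  # mm^3 -> mL

# Two fake binary lesion masks and a 1 x 1 x 2.5 mm voxel
masks = [np.zeros((4, 4), dtype=np.uint8), np.ones((2, 2), dtype=np.uint8)]
print(tumour_burden_ml(masks, voxel_volume_mm3=2.5))  # 0.01
```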

Because this new algorithm is so versatile, it could be trained for other pathologies, such as lung or brain cancer. Training an algorithm is a lengthy process (hours, even days), but once trained, the model can analyze images in a fraction of a second and reach a level of detection and classification performance comparable to that of human beings.

However, Dr. Tang believes it will be several years before these advances in AI are transferred to practice in our hospitals: “We are at the proof-of-concept stage,” he noted. “It works on one dataset. If we take images from scans performed with different techniques or contrast doses, or from different manufacturers or hospitals, will the algorithm work as well? We still have a number of challenges to deal with before we can implement these algorithms on a large scale. We’re still at the research and development stage. We’ll have to validate the algorithm on a large population, in different image-acquisition scenarios, to confirm its robustness.”

And then there’s the big question, the one that gives rise to so much debate: will new advances in AI replace human beings in medicine? “No. On the contrary, artificial intelligence will allow us to perform tasks that cannot be done at this time because they are too time-consuming,” Dr. Tang concluded. “Artificial intelligence provides us with additional tools. We are not eliminating jobs; instead, we are adding new capabilities to analyze a large number of images.”

About this study

The article “Learning normalized inputs for iterative estimation in medical image segmentation” was published in the February edition of the journal Medical Image Analysis, and was published online on November 14, 2017. This research was funded by Imagia Inc., MITACS (grant number IT05356), MEDTEQ, the Fonds de recherche du Québec - santé (FRQS) and the Fondation de l’Association des radiologistes du Québec. The study was conducted by Samuel Kadoury (CRCHUM and Polytechnique Montréal), Michal Drozdzal and several co-authors: Gabriel Chartrand, Eugene Vorontsov, Mahsa Shakeri, Lisa Di Jorio, An Tang, Adriana Romero, Yoshua Bengio and Chris Pal. To find out more, view the study: DOI: 10.1016/j.media.2017.11.005

Media relations:

Isabelle Girard, Communications and Media Relations – CRCHUM

Phone: +1 514 233-6671 | @CRCHUM

[email protected]

Annie Touchette, Communications and Media Relations, Polytechnique Montréal

Phone: +1 514 231-8133

[email protected]

 

Definitions:

Artificial intelligence (AI): software that is able to reproduce abilities characteristic of human beings. 

Deep learning: an AI technique that allows a computer to discover by itself the optimal problem-solving strategy from a massive amount of data.

Artificial neuron: a mathematical function that can process data.

Convolutional neural network (CNN or ConvNet): a type of artificial neural network that works by stacking layers, as in the visual cortex of a human being. The layers of neurons are stacked on top of one another, each pre-processing small quantities of information, such as portions of an image.