Artash Nath and Vikas Nath

The Python Neuroimaging Workshop on fMRI and dMRI analysis, held on 13 November at BrainHack Toronto, Krembil Centre for Neuroinformatics, was very informative. Read my report, "BrainHack Toronto 2019: Learnings from the Python Neuroimaging Workshop".

The workshop introduced me to neuroimaging and neuroinformatics. I learned about the human brain, how data about the human brain is generated, the brain scan data sets available, and the privacy issues and data analysis challenges faced by those working in this sector.

To prepare myself for the hackathon, I expanded my knowledge by going over the resources provided during the workshop and posted on GitHub, as well as listening remotely to discussions happening in the science panel organized by BrainHack Toronto. I was eager to test and build upon my learnings. I believe the best way to learn something new is to do a lot of reading, listen to subject experts, and then take up a project. Undertaking projects challenges me to merge my previous knowledge and experience with new learnings.

Great discussions with Michael Joseph, Research Analyst, TIGRLab, on fMRI data

For the hackathon part of BrainHack Toronto, I decided to explore the possibilities of using machine learning on Magnetic Resonance Imaging (MRI) scans, especially functional MRI (fMRI) scans of the brain. While an MRI generates a 3D scan of the brain, an fMRI adds a time dimension to these images, making them 4D.

As a test case, I wanted to see if I could create a machine learning algorithm that would be able to estimate a person's age by looking at their functional MRI (fMRI) scan.

As analyzing data with many dimensions is hard and time-consuming for humans, the task can potentially be turned over to machine learning. Machine learning algorithms can identify patterns in multi-dimensional data and, given enough training data, can demonstrate reasonable accuracy in their predictions.

Data Set

I could access several hundred fMRI scans of the human brain from the International Neuroimaging Data-sharing Initiative (INDI).

The fMRI data comprises a 4-dimensional array. The first 3 dimensions represent the 3D image of the patient's head. The fourth is the time scale, or the time intervals over which the scan was taken. The 4D array allows us to view the movement of fluids in the brain.
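To make that structure concrete, here is a minimal sketch of how such a 4D scan can be inspected in Python, assuming the nibabel library and a hypothetical NIfTI file name:

```python
# A minimal sketch of inspecting the 4D structure of an fMRI scan.
# Assumes the nibabel library; the file name below is hypothetical.
import nibabel as nib

img = nib.load("sub-01_task-rest_bold.nii.gz")  # hypothetical file name
data = img.get_fdata()

# Shape is (x, y, z, time): three spatial dimensions plus the time axis.
print(data.shape)  # e.g. (64, 64, 40, 150)
x, y, z, n_timepoints = data.shape
print(f"{n_timepoints} volumes of size {x} x {y} x {z}")
```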

Machine Learning Models  

As the dataset consists primarily of 3D images taken over an approximately 5-minute duration, I wanted to experiment with using a Convolutional Neural Network (CNN) on it. A CNN can analyze image data very efficiently, as it works in a similar way to how humans do: it notices the main and secondary features in the pictures in the training data and calculates how they affect the output. In this case, I would be feeding a 3D video of the fMRI scan to the neural network as an input.

CNNs do not work well with Image Timeseries Data

A Convolutional Neural Network is a deep learning algorithm for image recognition and classification. It takes images as input and passes them through convolution layers that assign weights and biases to different aspects of the image in order to differentiate one aspect from another. Interestingly, the CNN model was inspired by the organization of the visual cortex region of the brain: neurons in the primary visual cortex respond to specific, simple features in the visual environment that help us identify and recognize images.
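As a rough illustration (not the model I built at the hackathon), here is a minimal sketch of a small 3D CNN in Keras that takes a single brain volume and outputs one number, such as a predicted age. All layer sizes and the input shape are illustrative assumptions:

```python
# A minimal sketch of a 3D CNN on a single fMRI volume.
# Assumes tensorflow.keras; the 64x64x40 input size is illustrative.
from tensorflow.keras import layers, models

def build_3d_cnn(input_shape=(64, 64, 40, 1)):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv3D(16, kernel_size=3, activation="relu"),  # learn simple local features
        layers.MaxPooling3D(pool_size=2),
        layers.Conv3D(32, kernel_size=3, activation="relu"),  # combine them into larger patterns
        layers.MaxPooling3D(pool_size=2),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(1),                                      # e.g. predicted age as one number
    ])
    return model

model = build_3d_cnn()
model.compile(optimizer="adam", loss="mse")
model.summary()
```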

As I started building my model using Convolutional Neural Networks (CNNs), I realized that there would be many flaws in using a CNN on the MRI data.

A CNN works well when looking for features in static images, whether 2D or even 3D. But to a CNN, the order in which you input the images does not matter. It can identify the main features in an image but does not work well at identifying how those features change over time.

RNNs work well with Image Timeseries Data but not with Image Recognition

A possible option would be to use Recurrent Neural Networks (RNNs). RNNs are neural networks where connections between nodes form a temporal sequence: the output from the previous step is fed as input to the next step, so they retain information through time. RNNs can use their internal memory to process sequences of inputs, which makes them suited to tasks where order matters, such as speech recognition and language translation.
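As a sketch of the idea, assuming Keras and purely illustrative sequence and feature sizes, an RNN (here an LSTM) that maps a sequence of feature vectors to a single output could look like this:

```python
# A minimal sketch of an RNN (LSTM) over a time series of feature vectors.
# Assumes tensorflow.keras; sequence length and feature size are illustrative.
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(150, 128)),  # 150 time steps, 128 features per step
    layers.LSTM(64),                 # internal state carries information across time steps
    layers.Dense(1),                 # single regression output
])
model.compile(optimizer="adam", loss="mse")
model.summary()
```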

While an RNN would be capable of handling time-series data, it performs poorly at image recognition, as it lacks the feature-detection capabilities found in CNNs.

Hypothetical Solution: CNN-RNN Hybrid

Since CNNs are good at image recognition and RNNs are good at classifying time-series data, why not merge them to get the best of both models?

Hackathon in Progress

So I planned out a model where each of the 3D images (of the fMRI scan) obtained in the time series would be fed to a CNN to perform feature extraction. Then these features, with their original time-series order preserved, would be fed into an RNN. The RNN would analyze this time series of features and produce an output. This should, ideally, solve the problem!
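A hedged sketch of what this hybrid could look like in Keras is below: a TimeDistributed wrapper applies a small 3D CNN to every volume in the series, and an LSTM then analyzes the resulting sequence of features. All sizes are illustrative assumptions, and this is only one way the idea could be wired up:

```python
# A sketch of the hybrid CNN-RNN idea, not a tested implementation.
# Assumes tensorflow.keras; all shapes and layer sizes are illustrative.
from tensorflow.keras import layers, models

n_timepoints, x, y, z = 150, 64, 64, 40

cnn = models.Sequential([                # per-volume feature extractor
    layers.Conv3D(8, 3, activation="relu"),
    layers.MaxPooling3D(2),
    layers.Conv3D(16, 3, activation="relu"),
    layers.MaxPooling3D(2),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
])

model = models.Sequential([
    layers.Input(shape=(n_timepoints, x, y, z, 1)),
    layers.TimeDistributed(cnn),         # apply the CNN to every 3D volume, order preserved
    layers.LSTM(32),                     # analyze how the extracted features change over time
    layers.Dense(1),                     # e.g. predicted age
])
model.compile(optimizer="adam", loss="mse")
model.summary()                          # shows the (very large) number of trainable parameters
```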

If this hybrid CNN-RNN model were to be set up, it would have millions, possibly billions of trainable parameters to adjust during training. One of the immediate constraints would be computing power: it would require an extremely powerful GPU machine to run on.

Another key constraint would be training data. As the complexity of the model and the number of parameters to be trained increase, the amount of training data needed grows. The need for more data also grows with the number of variables per data point, which is on the order of tens of millions for a functional MRI scan. The proposed model would likely require several hundred thousand data points to reach reasonable accuracy. However, in the database currently available online, there are only two to three hundred data points. This is not enough to implement the hybrid CNN-RNN model.

I really enjoyed my first BrainHack Toronto 2019. I will be back in 2020!

Big Challenges, Sparse Data

Why is the publicly available data on neuroscience so sparse? This was very much unlike my previous astronomy and machine learning projects, where data was free, available to anyone, and ran into millions of data points.

This is because of privacy and security issues. MRI scan data are personal and revealing. An MRI scan looks right into the brain, revealing a lot about the person, including age, emotional state, physical dexterity, and possible neurological problems. Thus, the availability of this data is extremely (and rightly) regulated. It is rarely revealed to anyone other than the patient and the doctors viewing it, or to verified research groups on a need-to-know basis.

Way Ahead

When data is limited, we can use tricks to squeeze the most out of the limited data set, such as data augmentation, batch normalization, or using pre-trained weights to initialize the lower layers of the neural network.
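As a sketch of two of these tricks, assuming Keras and NumPy, simple augmentation (flipping each volume to create extra samples) and batch normalization inside the network could look like this; the flip axis and layer sizes are illustrative choices:

```python
# A minimal sketch of data augmentation and batch normalization.
# Assumes tensorflow.keras and numpy; sizes are illustrative.
import numpy as np
from tensorflow.keras import layers, models

def augment(volume):
    """Create an extra training sample by flipping a 3D volume left-right."""
    return [volume, np.flip(volume, axis=0)]

model = models.Sequential([
    layers.Input(shape=(64, 64, 40, 1)),
    layers.Conv3D(16, 3, activation="relu"),
    layers.BatchNormalization(),   # helps stabilize training on small batches
    layers.MaxPooling3D(2),
    layers.Flatten(),
    layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
```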

A well-earned Cup at the end of the BrainHack Toronto 2019

We could also create a smaller and less complex model. Dimension reduction can be done on the available data to ensure that the model is able to train efficiently. While this would lead to a loss in accuracy, it would at least be a way forward until something better can be implemented.
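As an illustration, assuming scikit-learn, dimension reduction with PCA on flattened scans could look like the following sketch; the data here is only a random placeholder standing in for real fMRI volumes:

```python
# A minimal sketch of dimension reduction with PCA.
# Assumes scikit-learn and numpy; the data is a random placeholder.
import numpy as np
from sklearn.decomposition import PCA

n_scans = 200                                  # illustrative number of scans
scans = np.random.rand(n_scans, 64 * 64 * 40)  # placeholder for flattened fMRI volumes

pca = PCA(n_components=50)                     # keep only 50 components per scan
reduced = pca.fit_transform(scans)
print(reduced.shape)                           # (200, 50): far fewer inputs per data point
```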

Accuracy could be improved by taking a multimodal approach – combining brain scan data with other data such as Electroencephalography (EEG) data, eye movement, posture, or other characteristics such as facial emotion detection. This could provide information complementary to what is being visualized through the brain scan.
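One hypothetical way to wire this up in Keras is to learn features from each modality separately and concatenate them before the final output; the feature sizes below are purely illustrative assumptions:

```python
# A sketch of a multimodal model combining fMRI and EEG features.
# Assumes tensorflow.keras; input sizes are illustrative.
from tensorflow.keras import Input, layers, models

fmri_in = Input(shape=(128,), name="fmri_features")  # e.g. features from a CNN on the scan
eeg_in = Input(shape=(32,), name="eeg_features")     # e.g. summary features from EEG

x = layers.concatenate([fmri_in, eeg_in])            # combine the two sources
x = layers.Dense(64, activation="relu")(x)
out = layers.Dense(1)(x)                             # e.g. predicted age

model = models.Model(inputs=[fmri_in, eeg_in], outputs=out)
model.compile(optimizer="adam", loss="mse")
```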

I hope to continue working on this project by building up my knowledge and testing out newer machine learning models.

Resources:

Link to data: http://fcon_1000.projects.nitrc.org/indi/cmi_healthy_brain_network/sharing_neuro.html

BrainHack Toronto 2019: Learnings from the Python Neuroimaging Workshop (Report)

https://hotpoprobot.com/2019/11/14/reflections-from-brainhack-toronto/