→ MIT Master of Engineering Thesis
May 2024




Recognizing Brain Regions in
2D Images from Brain Tissue



Often, the first step in neuroimaging research is understanding which anatomical structures are present in an image. Structural MRI (sMRI) provides a clear, high-resolution visualization of the anatomy of the brain, capturing physical characteristics such as the size and shape of different brain regions or the presence of abnormalities such as tumors. While sMRI is typically acquired in vivo, the neuropathology of many neurodegenerative disorders, such as Alzheimer's disease, requires post-mortem analysis of the brain through techniques like brain dissection, necessitating the use of other imaging modalities.

Various tools and deep learning models have been developed to automatically identify anatomical structures in 3D MRI volumes. However, the only existing method to segment anatomical structures in 2D brain slices, whether 2D slices extracted from an MRI or photographs of slices from a physically dissected brain, is manual labeling by a trained neuroanatomist, which is costly, resource-intensive, and time-consuming.

In this project, we develop a new deep learning method to automatically segment 50 different regions in 2D photographs of the brain. Because a supervised image and segmentation map dataset does not exist for the photographs, we train the state-of-the-art SegFormer model on a supervised dataset of 2D MRI slices. We employ multiple data augmentation techniques to increase the variability of the training data to more closely resemble the variability seen in brain photographs, so that the model is robust enough to segment the anatomical regions in brain photographs.
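One of the augmentation strategies described above is decoupling the tissue from the background and manipulating them independently. The function below is a hypothetical, minimal sketch of that idea in NumPy (the function name, parameter ranges, and the specific transformations are illustrative assumptions, not the thesis's actual pipeline): it applies a random gamma adjustment to the tissue pixels only and replaces the background with a random intensity, so the model cannot rely on a fixed tissue–background relationship.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment_slice(image, mask):
    """Hypothetical tissue/background-decoupled augmentation (illustrative only).

    image: 2D float or int array of intensities.
    mask:  2D int array of anatomical labels; 0 marks background.
    """
    out = image.astype(np.float32).copy()
    tissue = mask > 0  # nonzero labels mark brain tissue

    # Random gamma jitter applied to the tissue pixels only.
    gamma = rng.uniform(0.7, 1.3)
    vals = out[tissue]
    lo, hi = vals.min(), vals.max()
    if hi > lo:
        norm = (vals - lo) / (hi - lo)        # normalize tissue to [0, 1]
        out[tissue] = norm ** gamma * (hi - lo) + lo

    # Fill the background with a random uniform intensity, independent of tissue.
    out[~tissue] = rng.uniform(0.0, 255.0)
    return out
```

Because the tissue and background are transformed independently, repeated applications expose the model to many tissue–background combinations from a single labeled slice.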

The SegFormer model achieved test Dice scores between 0.6 and 0.75 on the segmentation of 50 anatomical regions in 2D MRI slices, depending on which augmentations were incorporated during training. The project also demonstrated that complex augmentations, both those that forced the model to learn the segmentation task with reduced contextual information and those that decoupled the tissue and background by manipulating them independently, improved the robustness of the model, allowing it to better segment 2D photographs of the brain. Although there is much room for improvement, this project provides a set of techniques that can be extended to further improve the model's robustness so that it can be applied to other imaging modalities in the future.
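The Dice score reported above measures the overlap between a predicted and a reference segmentation for a given label. As a sketch, the per-label Dice coefficient can be computed from two label maps as follows (this is the standard formula, 2|A∩B| / (|A|+|B|), not the thesis's evaluation code):

```python
import numpy as np

def dice_score(pred, target, label):
    """Dice coefficient for one anatomical label in two segmentation maps."""
    pred_mask = pred == label
    target_mask = target == label
    intersection = np.logical_and(pred_mask, target_mask).sum()
    denom = pred_mask.sum() + target_mask.sum()
    if denom == 0:
        return 1.0  # label absent from both maps counts as perfect agreement
    return 2.0 * intersection / denom

pred = np.array([[1, 1, 0], [2, 2, 0]])
target = np.array([[1, 0, 0], [2, 2, 2]])
print(dice_score(pred, target, 1))  # 2*1 / (2+1) ≈ 0.667
```

Averaging this quantity over all 50 labels (and over the test slices) yields the aggregate scores in the 0.6–0.75 range quoted above.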


Chapter 1
Introduction
Neuroimaging data exists in a wide variety of modalities, as seen in Figure 1.1. These include, but are not limited to, in vivo sMRI scans, photographs of dissected brain tissue, and lightsheet microscopy. Each of these imaging types provides different kinds of information. For instance, in vivo sMRI scans help visualize various anatomical structures in the brain, whereas lightsheet microscopy images allow us to visualize the brain at a cellular level. This neuroimaging data is not only important in clinical scenarios and in research on the human brain, but it is also used to study the brains of other animals, such as macaque monkeys.
The human brain is made up of various anatomical regions or structures, such as the cerebellum, hippocampus, and thalamus. Each of these regions serves specific cognitive, sensory, motor, and emotional functions within the brain. For example, the cerebellum guides motor coordination and balance, while the amygdala is involved in processing emotions, particularly fear. Identifying and analyzing structural changes to these anatomical regions is an important part of understanding and recognizing various neurological disorders.
Often, the first step in neuroimaging research is understanding which anatomical structures are present in an image. An sMRI image, as seen in column 1 of Figure 1.1, provides a clear, high-resolution image of these various structures


Copyright © 2023 Sabeen Lohawala. All rights reserved.