MSBI Thesis & Abstracts 2019-2020
Casey Alcantar
Advisor: Dr. Steven Hetts, MD
Thesis Title: A Faster R-CNN Model for Detection of MR-Compatible Catheters
Abstract: This project sought to create a deep learning model to detect and track MR-compatible catheter tips under Magnetic Resonance Imaging. Interventional MRI, or iMRI, has many advantages over traditional x-ray angiography methods, yet its path toward adoption is hindered by many obstacles, including the lack of easily visualizable catheter tips. The Faster Region-based Convolutional Neural Network (Faster R-CNN) was chosen for its balance of speed and accuracy relative to other model architectures. The dataset included MR images of passive and resonant catheter tips alone, as well as passive catheter tips in an abdominal aorta phantom. The Faster R-CNN was trained over many iterations; in the best run it drew bounding boxes over the catheter tip with an overall mean average precision of 0.59 and an overall average recall of 0.66. Further optimization of training parameters will be needed to create a model that achieves a better mean average precision. This study opens the possibility of applying artificial intelligence models to iMRI methods, supporting the goal of proving the safety and efficacy of iMRI procedures. These foundational elements are critical to smoothing the adoption of iMRI for guiding endovascular procedures.
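The mean average precision and recall figures above come from matching predicted bounding boxes to ground truth at an Intersection-over-Union (IoU) threshold. A minimal sketch of that matching step (illustrative only, not the thesis implementation):

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two (x1, y1, x2, y2) boxes."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def precision_recall(preds, gts, iou_thresh=0.5):
    """Greedy one-to-one matching of predicted boxes (assumed sorted by
    descending confidence) to ground-truth boxes at a fixed IoU threshold."""
    matched, tp = set(), 0
    for p in preds:
        for i, g in enumerate(gts):
            if i not in matched and iou(p, g) >= iou_thresh:
                matched.add(i)
                tp += 1
                break
    precision = tp / len(preds) if preds else 0.0
    recall = tp / len(gts) if gts else 0.0
    return precision, recall
```

Averaging precision over confidence thresholds and IoU cutoffs, then over classes, yields the mean average precision reported above.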
Souradeep Gogol Bhattacharya
Advisor: Dr. Duygu Tosun, PhD
Thesis Title: An imaging informatics-based risk assessment for early detection of depressive symptom onset
Abstract: Co-morbid depressive symptoms commonly present in age-related neurodegenerative diseases and adversely impact patient outcomes. Currently, there exists no early warning system for alerting providers to patients at high risk of developing depressive symptoms. This project is a retrospective study that uses the archival Fluorodeoxyglucose Positron Emission Tomography (FDG-PET) dataset from the Alzheimer’s Disease Neuroimaging Initiative (ADNI) to create an informatics-based approach capable of placing patients in one of three risk groups for developing co-morbid depressive symptoms. Specifically, we propose an unsupervised technique to model different profiles of depressive symptom development from longitudinal clinical assessments spanning up to 5 years. We assessed blood and cerebrospinal fluid (CSF) biomarkers as well as FDG-PET scans as candidate biomarkers predictive of risk for cumulative depressive symptom development.
Our data-driven analysis utilizing Dynamic Time Warping (DTW) and Dynamic Barycenter Averaging (DBA) revealed 3 major risk groups. The 1st group presented with the lowest baseline depressive symptoms, as indicated by their Geriatric Depression Scale (GDS) scores (GDS = 0.53 ± 0.67), and stayed stable over the course of the ADNI study. The 2nd group presented with slightly elevated depressive symptoms at baseline (GDS = 2.04 ± 1.18). Finally, the 3rd group presented with significant depressive symptoms at baseline (GDS = 3.42 ± 1.40). Groups did not differ in prevalence of females or in years of education. Patients with cognitive impairment (CI), indicated by both the clinical diagnosis and the patient’s Clinical Dementia Rating (CDR), were more likely to have higher GDS scores and were at higher risk of developing depressive symptoms (i.e., baseline CI prevalence of 39.7% in Group 1 compared to 63.6% and 82.1% in Groups 2 and 3, respectively). Participants in the 2nd and 3rd groups were more likely to progress from Mild Cognitive Impairment (MCI) to Alzheimer’s Disease (AD) (29% and 33%, respectively). Participants in the 3rd group had the lowest baseline cerebrospinal fluid (CSF) tau and CSF p-tau with respect to the 1st and 2nd groups, and the highest plasma Neurofilament Light (NFL).
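DTW, used above to compare GDS trajectories that may differ in length and pace, is a dynamic-programming alignment. A minimal illustrative implementation (the study itself presumably used a full DTW/DBA toolchain rather than this sketch):

```python
def dtw_distance(a, b):
    """Dynamic Time Warping distance between two 1-D series via the
    classic O(len(a) * len(b)) dynamic program."""
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of: insertion, deletion, or match.
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]
```

Because DTW stretches and compresses the time axis, a trajectory with a repeated assessment aligns to its unrepeated counterpart at zero cost, while trajectories at genuinely different symptom levels accumulate distance; DBA then averages the aligned series to form each group's representative curve.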
Jacob Ellison
Advisor: Dr. Janine Lupo, PhD
Thesis Title: Improving the generalizability of convolutional neural networks for brain tumor segmentation in the post-treatment setting
Abstract: Current encoder-decoder convolutional neural networks (CNNs) used in automated glioma lesion segmentation and volumetric measurements perform well on newly diagnosed lesions that have not received any treatment. However, there are challenges in generalizability for patients after treatment, including at the time of suspected recurrence. This results in decreased translation to clinical use in the post-treatment setting where it is needed the most. A potential reason is that these deep learning models are primarily trained on a single curated dataset and demonstrate decreased performance when tested on unseen variations in disease state, scanning protocols or equipment, and operators. While using a highly curated dataset does have the benefit of standardizing comparison of models, it comes with some significant drawbacks to generalizability. The primary source of images used to train current models for glioma segmentation is the BraTS (Multimodal Brain Tumor Image Segmentation Benchmark) dataset. The image domain of the BraTS dataset is large, including high- and low-grade tumors, varying acquisition resolution, and scans from multi-center studies. Despite this, it may still lack sufficient feature representation in the target clinical imaging domain. Here we address generalizability to the disease state of post-treatment glioma. The current BraTS dataset consists entirely of images obtained from newly diagnosed patients who have not undergone surgical resection, received adjuvant treatment, or shown significant disease progression, all of which can greatly alter the characteristics of these lesions. To improve the clinical utility of deep learning models for glioma segmentation, they must accommodate variations in signal intensity that may arise as a result of resection, tissue damage (treatment-induced or otherwise), or progression.
We compared models trained on BraTS data, UCSF-acquired post-treatment glioma data, UCSF-acquired newly diagnosed glioma data, or various combinations of these, to determine the effect of including images with features unique to treated gliomas in network training on segmentation performance in the post-treatment domain. Although an absolute threshold for the fraction of post-treatment training data needed to generalize segmentation networks to post-treatment glioma patients has not been established, we found that with 200 total training volumes, models trained with at least 30% of the training images from patients with prior treatment achieved the greatest performance gains when testing in this domain. Additionally, we found that once this threshold is met, additional images from newly diagnosed patients did not negatively impact segmentation performance on patients with treated gliomas. We also developed a pre-processing pipeline and implemented a loss penalty term that incorporates cavity-to-tumor distance relationships into the weighting of a cross-entropy loss term. The aim was to bias the network weights toward morphological features of the image relevant to pathologies that are prevalent post-treatment. This may either be used as an initialization for training with a larger available dataset such as BraTS, or to fine-tune a transferred network that has not seen sufficient post-treatment glioma images during training, allowing domain adaptation with less training data from this disease state. Preliminary results show qualitatively more desirable segmentations of tumor lesions with respect to cavities and small disconnected components in selected examples, which warrant further analysis with alternate training configurations, more focused performance assessments, and larger cohorts.
Here, we will evaluate these techniques as potential solutions to improve the generalizability of CNN tumor segmentation to post-treatment glioma, as well as provide a framework for further data augmentation based on augmenting the boundary of these lesions.
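The distance-weighted loss described above can be sketched as a cross-entropy whose per-pixel weights grow near the resection cavity. The weighting form below (1 + alpha * exp(-dist / tau)) and all parameter names are hypothetical illustrations, not the thesis's actual penalty term:

```python
import numpy as np

def cavity_weighted_ce(probs, labels, cavity_dist, alpha=2.0, tau=5.0):
    """Cross-entropy with per-pixel weights that increase near the cavity.

    probs       : (H, W, C) softmax output of the segmentation network
    labels      : (H, W) integer ground-truth class map
    cavity_dist : (H, W) distance (in pixels) to the nearest cavity voxel
    Weight = 1 + alpha * exp(-dist / tau)  -- hypothetical form."""
    h, w = labels.shape
    # Probability assigned to the true class at each pixel.
    p_true = probs[np.arange(h)[:, None], np.arange(w)[None, :], labels]
    weights = 1.0 + alpha * np.exp(-cavity_dist / tau)
    return float(np.mean(-weights * np.log(np.clip(p_true, 1e-12, None))))
```

Pixels adjacent to the cavity contribute up to (1 + alpha) times more to the loss than distant pixels, biasing the optimizer toward the cavity-boundary morphology that dominates post-treatment scans.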
Anil Kemisetti
Advisor: Dr. Peder Larson, PhD
Thesis Title: Comparing the Training Performance of a Deep Neural Network for Accelerated MRI Reconstruction Using Synthesized and Realistic k-Space Data
Abstract: Magnetic Resonance Imaging (MRI) is a powerful medical imaging modality used as a diagnostic tool, and the number of imaging examinations is rising steadily. Trends from 2000 to 2016 showed that roughly 16 million to 21 million patients enrolled annually in various US health care systems, and the number of MRIs increased from 62 to 139 per 1000 patients over that period. MR images are usually stored in Picture Archiving and Communication Systems (PACS) in the Digital Imaging and Communications in Medicine (DICOM) format, which includes a header and imaging data. MRI k-space is the raw data obtained during MR signal acquisition. Because raw complex MR data files are very large, the data is generally transformed into anatomical imaging data, and the raw data is discarded rather than transferred to PACS. The abundant DICOM data has the potential to be used for training neural networks, since deep neural network models depend on extensive training datasets. However, DICOM images are magnitude images without the image phase. It is essential to understand the effect of the missing image phase information in order to use DICOM data effectively for this training task.
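The consequence of the missing phase can be illustrated with a small simulation: k-space synthesized from a magnitude-only (DICOM-like) image is conjugate-symmetric (Hermitian), a property that realistic complex acquisitions do not share. This is a sketch of the underlying math, not the thesis code:

```python
import numpy as np

rng = np.random.default_rng(0)
mag = rng.random((8, 8))                        # magnitude image, as stored in DICOM
phase = rng.uniform(-np.pi, np.pi, (8, 8))      # true image phase, discarded in DICOM

k_real = np.fft.fft2(mag * np.exp(1j * phase))  # realistic k-space (complex image)
k_synth = np.fft.fft2(mag)                      # k-space synthesized from magnitude only

def hermitian_mirror(k):
    """Return conj(F(-k)); this equals F(k) exactly when the image is real."""
    return np.conj(np.roll(np.flip(k, axis=(0, 1)), 1, axis=(0, 1)))

print(np.allclose(k_synth, hermitian_mirror(k_synth)))  # True: symmetry is imposed
print(np.allclose(k_real, hermitian_mirror(k_real)))    # False: phase breaks it
```

A network trained only on synthesized k-space therefore never sees the asymmetric, phase-carrying structure of real acquisitions, which is exactly the gap this thesis sets out to quantify.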
Chieh Te Jack Lin
Advisor: Dr. Henry Vanbrocklin, PhD
Thesis Title: Synthesis of DFO with PSMA-targeted inhibitor labeled 89Zr to evaluate prostate cancer
Abstract:
Jacob Oeding
Advisor: Dr. Drew Lansdown, PhD
Thesis Title: Automatic Three-Dimensional Bone Shape Modeling Using Clinical MRI
Abstract: Statistical shape modeling has been employed to study three-dimensional bony morphological features of the tibia and femur as potential risk factors for ACL injury and negative outcomes after ACL reconstruction. However, prior studies have been limited in size, largely due to the need for either CT imaging or high-resolution MRI with tedious manual segmentation. In this study, a deep learning model was trained to automatically segment the tibia and femur from clinical MRI scans. The model was used to infer segmentations from a large dataset (> 300 images) of preoperative and postoperative clinical MR images from patients who had undergone ACL reconstruction and had clinical, two-dimensional PD-weighted MRIs. Three-dimensional bone shape models were constructed from the inferred segmentations. Principal component analysis (PCA) was performed, and results were compared between datasets of the same knees imaged 6 months apart. Correlations between same-knee principal components were moderate to strong, and point-to-point deviations between same-knee vertices were small, indicating that reliable and repeatable statistical shape models can be obtained from clinical MRI sequences.
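The statistical shape modeling step, PCA over corresponding surface vertices, can be sketched as follows. The function and its interface are hypothetical illustrations of the general technique, not the study's pipeline:

```python
import numpy as np

def shape_pca(shapes, n_modes=2):
    """PCA on corresponding-point shape vectors.

    shapes : (n_subjects, n_vertices * 3) array; each row stacks the
             x, y, z coordinates of one subject's bone surface vertices,
             assumed already aligned and in point-to-point correspondence.
    Returns the mean shape, the top shape modes, per-subject PC scores,
    and the fraction of variance each mode explains."""
    mean = shapes.mean(axis=0)
    X = shapes - mean
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    modes = Vt[:n_modes]                    # each row is one shape mode
    scores = X @ modes.T                    # PC scores per subject
    var_explained = (S ** 2) / np.sum(S ** 2)
    return mean, modes, scores, var_explained[:n_modes]
```

Repeatability as reported above would then amount to correlating each subject's `scores` across the two imaging sessions 6 months apart.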
Jonathan Renslo
Advisor: Dr. Galateia Kazakia, PhD
Thesis Title: Distinguishing porosity and vasculature changes in Type 2 Diabetes (T2D) in distal tibia with MRI and HRqCT
Abstract:
Avantika Sinha
Advisor: Dr. Peder Larson, PhD
Thesis Title: Standardized, Open-Source Processing of Hyperpolarized 13C Data
Abstract: Magnetic Resonance Spectroscopic Imaging (MRSI) allows us to visualize metabolites within the body without radioactive tracers, such as those used in PET scans. Hyperpolarized MRI builds on this by using dissolution dynamic nuclear polarization to enhance the signal of, and thereby visualize, previously difficult-to-detect metabolic intermediates such as pyruvate. 13C is a common hyperpolarized (HP) tracer, used for pyruvate imaging, that can help with cancer detection and tracking. MRSI and HP data contain varying numbers of spatial, spectral, and temporal dimensions and are typically encoded in proprietary vendor-specific file formats, which presents challenges for post-processing and analysis with standard medical imaging software. The spectral data needs to be registered with the anatomical data, and temporal dimensions represented if necessary. The Hyperpolarized MRI Technology Resource Center (HMTRC) was created to disseminate tools that make HP MRI more accessible, one of its aims being the development of free, open-source software (FOSS). At the University of California, San Francisco, the SIVIC (Spectroscopic Imaging, Visualization, and Computing) software package was developed to aid in this process. Bruker is one such vendor, and proprietary data files from its newer 3T small animal scanner at UCSF cannot be read directly by SIVIC. This lack of standardization for spectral data causes problems across research, as each research group must manually pre-process the data before analysis. In this work, we aim to streamline this workflow by introducing a function that takes Bruker 2dseq files from an EPSI sequence as input and outputs a standardized DICOM MRS format. Ultimately, this will provide a data pipeline that enables efficient analysis of Bruker HP data, allows for greater collaboration between research groups, and aligns with the HMTRC’s aim of developing additional FOSS.
As of the time of this thesis submission, the pipeline can identify the 2dseq file and write metadata from parameter maps, but cannot fully handle spectral data.
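At its lowest level, a Bruker 2dseq file is a headerless binary block: the word type, byte order, and array dimensions must be taken from companion parameter files such as visu_pars. A minimal, hypothetical reader sketch (the pipeline above additionally handles parameter-map metadata and DICOM MRS output):

```python
import numpy as np

def read_2dseq(path, shape, dtype=np.int16, byteorder="<"):
    """Read a raw Bruker 2dseq binary block.

    The dtype and shape are not stored in the file itself; they must be
    parsed from the companion visu_pars parameter file (e.g. the
    VisuCoreWordType and VisuCoreSize entries). Hypothetical helper."""
    raw = np.fromfile(path, dtype=np.dtype(dtype).newbyteorder(byteorder))
    return raw.reshape(shape)
```

Scaling slopes and offsets (also listed in the parameter files) would still need to be applied before the values are physically meaningful.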
Bishal Upadhyaya
Advisor: Dr. Youngho Seo, PhD
Thesis Title: A Comparison of FDG and Amyloid PET for the Deep Learning Prediction of Alzheimer’s Disease in Low Income Communities
Abstract: The onset of Alzheimer’s Disease (AD) may begin up to 20 years before clinical symptoms are apparent. It is a challenge for clinicians to treat, as the best medical outcome is temporarily slowing the progression of the disease before it ultimately becomes fatal. Due to the economic and social pressure AD places on affected individuals and their families, it is particularly burdensome to low-to-middle income countries (LMICs). Fortunately, cutting-edge deep learning (DL) models are being developed to predict the diagnosis of AD before clinical symptoms manifest. These DL models most commonly use the biomarker fluorodeoxyglucose (FDG), although amyloid-PET (florbetapir) has also become prevalent. FDG is the more accessible and affordable biomarker, whereas florbetapir detects the amyloid plaque that is theorized to be directly responsible for neuronal death in AD and may be the better detector of the disease. To determine the most practical biomarker for LMICs, we utilized low-resolution FDG-PET and amyloid-PET datasets provided by the Alzheimer’s Disease Neuroimaging Initiative to train two DL models with a 3D image classification framework. The FDG-PET and amyloid-PET models achieved AUCs of 0.919 and 0.891 on their respective test sets. We conclude that the difference in DL performance between the two biomarkers trained on low-resolution PET data is negligible. Therefore, as the more modestly priced and widely available biomarker, FDG-PET is the practical option for LMICs and other financially vulnerable communities for predicting a future AD diagnosis.
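The AUC values compared above summarize ranking quality: AUC equals the probability that a randomly chosen positive case receives a higher model score than a randomly chosen negative one (the Mann-Whitney statistic). A minimal sketch of that computation, not the thesis's evaluation code:

```python
import numpy as np

def roc_auc(scores, labels):
    """ROC AUC via the Mann-Whitney U statistic: the fraction of
    (positive, negative) pairs ranked correctly, with ties counted as half."""
    scores = np.asarray(scores, float)
    labels = np.asarray(labels, int)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))
```

On this scale the reported 0.919 vs. 0.891 means the FDG-PET model ranks a random AD-positive scan above a random negative one about 2.8 percentage points more often than the amyloid-PET model does.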