publications
2024
- Cereb.Cortex: Segmentation of supragranular and infragranular layers in ultra-high resolution 7T ex vivo MRI of the human cerebral cortex. Xiangrui Zeng, Oula Puonti, Areej Sayeed, Rogeny Herisse, and 19 more authors. Cerebral Cortex, 2024
Accurate labeling of specific layers in the human cerebral cortex is crucial for advancing our understanding of neurodevelopmental and neurodegenerative disorders. Building on recent advancements in ultra-high-resolution ex vivo MRI, we present a novel semi-supervised segmentation model capable of identifying supragranular and infragranular layers in ex vivo MRI with unprecedented precision. On a dataset consisting of 17 whole-hemisphere ex vivo scans at 120 μm, we propose a Multi-resolution U-Nets framework that integrates global and local structural information, achieving reliable segmentation maps of the entire hemisphere, with Dice scores over 0.8 for supra- and infragranular layers. This enables surface modeling, atlas construction, anomaly detection in disease states, and cross-modality validation, while also paving the way for finer layer segmentation. Our approach offers a powerful tool for comprehensive neuroanatomical investigations and holds promise for advancing our mechanistic understanding of the progression of neurodegenerative diseases.
- arXiv: Neurovascular Segmentation in sOCT with Deep Learning and Synthetic Training Data. Etienne Chollet†, Yaël Balbastre†, Chiara Mauri, Caroline Magnain, and 2 more authors. arXiv preprint arXiv:2407.01419, 2024
Microvascular anatomy is known to be involved in various neurological disorders. However, understanding these disorders is hindered by the lack of imaging modalities capable of capturing the comprehensive three-dimensional vascular network structure at microscopic resolution. With a lateral resolution of ≤20 μm and the ability to reconstruct large tissue blocks up to tens of cubic centimeters, serial-section optical coherence tomography (sOCT) is well suited for this task. This method uses intrinsic optical properties to visualize the vessels and therefore does not possess a specific contrast, which complicates the extraction of accurate vascular models. The performance of traditional vessel segmentation methods is heavily degraded in the presence of substantial noise and imaging artifacts and is sensitive to domain shifts, while convolutional neural networks (CNNs) require extensive labeled data and are also sensitive to the precise intensity characteristics of the data that they are trained on. Building on the emerging field of synthesis-based training, this study demonstrates a synthesis engine for neurovascular segmentation in sOCT images. Characterized by minimal priors and high-variance sampling, our highly generalizable method, tested on five distinct sOCT acquisitions, eliminates the need for manual annotations while attaining human-level precision. Our approach comprises two phases: label synthesis and label-to-image transformation. We demonstrate the efficacy of the former by comparing it to several more realistic sets of training labels, and the latter by an ablation study of synthetic noise and artifact models.
- MIDL: A Label-Free and Data-Free Training Strategy for Vasculature Segmentation in serial sectioning OCT Data. arXiv preprint arXiv:2405.13757, 2024
Serial sectioning Optical Coherence Tomography (sOCT) is a high-throughput, label-free microscopic imaging technique that is becoming increasingly popular to study post-mortem neurovasculature. Quantitative analysis of the vasculature requires highly accurate segmentation; however, sOCT has a low signal-to-noise ratio and displays a wide range of contrasts and artifacts that depend on acquisition parameters. Furthermore, labeled data is scarce and extremely time-consuming to generate. Here, we leverage synthetic datasets of vessels to train a deep learning segmentation model. We construct the vessels with semi-realistic splines that simulate the vascular geometry and compare our model with realistic vascular labels generated by constrained constructive optimization. Both approaches yield similar Dice scores, although with very different false positive and false negative rates. This method addresses the complexity inherent in OCT images and paves the way for more accurate and efficient analysis of neurovascular structures.
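The spline-based label synthesis described above can be illustrated with a minimal sketch (an illustration only, not the released code): random control points are interpolated into a smooth centreline with SciPy, rasterised into a 3D volume, and dilated to a random radius to produce a binary vessel label.

```python
# Minimal sketch of spline-based synthetic vessel labels (illustrative only;
# the actual synthesis engine is far more elaborate). Assumes numpy and scipy.
import numpy as np
from scipy.interpolate import splprep, splev
from scipy.ndimage import distance_transform_edt

def synth_vessel_label(shape=(128, 128, 128), n_ctrl=5, max_radius=4.0, rng=None):
    """Rasterise one random spline 'vessel' into a binary 3D label volume."""
    rng = np.random.default_rng(rng)
    # Random control points inside the volume
    ctrl = rng.uniform(0, np.array(shape) - 1, size=(n_ctrl, 3))
    # Fit a smooth centreline through the control points
    tck, _ = splprep(ctrl.T, s=0, k=min(3, n_ctrl - 1))
    pts = np.stack(splev(np.linspace(0, 1, 2000), tck), axis=-1)
    # Mark centreline voxels
    centreline = np.zeros(shape, dtype=bool)
    idx = np.clip(np.round(pts).astype(int), 0, np.array(shape) - 1)
    centreline[idx[:, 0], idx[:, 1], idx[:, 2]] = True
    # Dilate the centreline to a random tube radius via a distance transform
    radius = rng.uniform(1.0, max_radius)
    return distance_transform_edt(~centreline) <= radius

# Union of several synthetic vessels forms one training label map
label = np.zeros((128, 128, 128), dtype=bool)
for _ in range(10):
    label |= synth_vessel_label()
```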
- bioRxiv: A next-generation, histological atlas of the human brain and its application to automated brain MRI segmentation. Adrià Casamitjana, Matteo Mancini, Eleanor Robinson, Loïc Peter, and 20 more authors. bioRxiv, 2024
Magnetic resonance imaging (MRI) is the standard tool to image the human brain in vivo. In this domain, digital brain atlases are essential for subject-specific segmentation of anatomical regions of interest (ROIs) and spatial comparison of neuroanatomy from different subjects in a common coordinate frame. High-resolution, digital atlases derived from histology (e.g., Allen atlas [3], BigBrain [4], Julich [5]) are currently the state of the art and provide exquisite 3D cytoarchitectural maps, but lack probabilistic labels throughout the whole brain. Here we present NextBrain, a next-generation probabilistic atlas of human brain anatomy built from serial 3D histology and corresponding highly granular delineations of five whole brain hemispheres. We developed AI techniques to align and reconstruct ∼10,000 histological sections into coherent 3D volumes, as well as to semi-automatically trace the boundaries of 333 distinct anatomical ROIs on all these sections. Comprehensive delineation on multiple cases enabled us to build an atlas with probabilistic labels throughout the whole brain. Further, we created a companion Bayesian tool for automated segmentation of the 333 ROIs in any in vivo or ex vivo brain MRI scan using the NextBrain atlas. We showcase two applications of the atlas: automated segmentation of ultra-high-resolution ex vivo MRI and volumetric analysis of brain ageing based on ∼4,000 publicly available in vivo MRI scans. We publicly release the raw and aligned data (including an online visualisation tool), probabilistic atlas, and segmentation tool. By enabling researchers worldwide to analyse brain MRI scans at a superior level of granularity without manual effort or highly specific neuroanatomical knowledge, NextBrain will accelerate our quest to understand the human brain in health and disease.
- eLife: Machine learning of dissection photographs and surface scanning for quantitative 3D neuropathology. Harshvardhan Gazula, Henry F. J. Tregidgo, Benjamin Billot, Yael Balbastre, and 22 more authors. eLife, 2024
We present open-source tools for 3D analysis of photographs of dissected slices of human brains, which are routinely acquired in brain banks but seldom used for quantitative analysis. Our tools can: (i) 3D reconstruct a volume from the photographs and, optionally, a surface scan; and (ii) produce a high-resolution 3D segmentation into 11 brain regions per hemisphere (22 in total), independently of the slice thickness. Our tools can be used as a substitute for ex vivo magnetic resonance imaging (MRI), which requires access to an MRI scanner, ex vivo scanning expertise, and considerable financial resources. We tested our tools on synthetic and real data from two NIH Alzheimer’s Disease Research Centers. The results show that our methodology yields accurate 3D reconstructions, segmentations, and volumetric measurements that are highly correlated to those from MRI. Our method also detects expected differences between post mortem confirmed Alzheimer’s disease cases and controls. The tools are available in our widespread neuroimaging suite “FreeSurfer” (https://surfer.nmr.mgh.harvard.edu/fswiki/PhotoTools).
- CVPR: Fully Convolutional Slice-to-Volume Reconstruction for Single-Stack MRI. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024
In magnetic resonance imaging (MRI), slice-to-volume reconstruction (SVR) refers to computational reconstruction of an unknown 3D magnetic resonance volume from stacks of 2D slices corrupted by motion. While promising, current SVR methods require multiple slice stacks for accurate 3D reconstruction, leading to long scans and limiting their use in time-sensitive applications such as fetal fMRI. Here, we propose an SVR method that overcomes the shortcomings of previous work and produces state-of-the-art reconstructions in the presence of extreme inter-slice motion. Inspired by the recent success of single-view depth estimation methods, we formulate SVR as a single-stack motion estimation task and train a fully convolutional network to predict a motion stack for a given slice stack, producing a 3D reconstruction as a byproduct of the predicted motion. Extensive experiments on the SVR of adult and fetal brains demonstrate that our fully convolutional method is twice as accurate as previous SVR methods. Our code is available at http://github.com/seannz/svr.
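A toy sketch of the single-stack idea (illustrative assumptions only; the paper's architecture and training differ, and the reconstruction step is omitted): a small network looks at each slice of the stack and outputs six rigid motion parameters per slice.

```python
# Hypothetical sketch of per-slice rigid motion prediction from a single slice
# stack, in the spirit of the formulation above. Assumes PyTorch.
import torch
import torch.nn as nn

class SliceMotionNet(nn.Module):
    """Predict 6 rigid parameters (3 rotations + 3 translations) per slice."""
    def __init__(self, n_feat=32):
        super().__init__()
        self.encoder = nn.Sequential(               # 2D convs shared across slices
            nn.Conv2d(1, n_feat, 3, padding=1), nn.ReLU(),
            nn.Conv2d(n_feat, n_feat, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                 # one feature vector per slice
        )
        self.head = nn.Conv1d(n_feat, 6, kernel_size=3, padding=1)  # mixes neighbouring slices

    def forward(self, stack):                        # stack: (batch, n_slices, H, W)
        b, s, h, w = stack.shape
        feat = self.encoder(stack.reshape(b * s, 1, h, w))   # (b*s, n_feat, 1, 1)
        feat = feat.reshape(b, s, -1).permute(0, 2, 1)       # (b, n_feat, n_slices)
        return self.head(feat).permute(0, 2, 1)              # (b, n_slices, 6)

motion = SliceMotionNet()(torch.randn(2, 24, 64, 64))        # -> (2, 24, 6)
```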
- CBM @ MICCAI: Diffeomorphic Multi-Resolution Deep Learning Registration for Applications in Breast MRI. Matthew G French, Gonzalo D Maso Talou, Thiranja P Babarenda Gamage, Martyn P Nash, and 5 more authors. 2024
In breast surgical planning, accurate registration of MR images across patient positions has the potential to improve the localisation of tumours during breast cancer treatment. While learning-based registration methods have recently become the state-of-the-art approach for most medical image registration tasks, these methods have yet to make inroads into breast image registration due to certain difficulties: the lack of rich texture information in breast MR images and the need for the deformations to be diffeomorphic. In this work, we propose learning strategies for breast MR image registration that are amenable to diffeomorphic constraints, together with early experimental results from in-silico and in-vivo experiments. One key contribution of this work is a registration network which produces superior registration outcomes for breast images in addition to providing diffeomorphic guarantees.
2023
- Sci.Adv.: A cellular resolution atlas of Broca’s area. Irene Costantini†, Leah Morgan†, Jiarui Yang†, Yael Balbastre†, and 35 more authors. Science Advances, 2023
Brain cells are arranged in laminar, nuclear, or columnar structures, spanning a range of scales. Here, we construct a reliable cell census in the frontal lobe of human cerebral cortex at micrometer resolution in a magnetic resonance imaging (MRI)–referenced system using innovative imaging and analysis methodologies. MRI establishes a macroscopic reference coordinate system of laminar and cytoarchitectural boundaries. Cell counting is obtained with a digital stereological approach on the 3D reconstruction at cellular resolution from a custom-made inverted confocal light-sheet fluorescence microscope (LSFM). Mesoscale optical coherence tomography enables the registration of the distorted histological cell typing obtained with LSFM to the MRI-based atlas coordinate system. The outcome is an integrated high-resolution cellular census of Broca’s area in a human postmortem specimen, within a whole-brain reference space atlas.
- Sci.Adv.: SynthSR: A public AI tool to turn heterogeneous clinical brain scans into high-resolution T1-weighted images for 3D morphometry. Science Advances, 2023
Every year, millions of brain magnetic resonance imaging (MRI) scans are acquired in hospitals across the world. These have the potential to revolutionize our understanding of many neurological diseases, but their morphometric analysis has not yet been possible due to their anisotropic resolution. We present an artificial intelligence technique, “SynthSR,” that takes clinical brain MRI scans with any MR contrast (T1, T2, etc.), orientation (axial/coronal/sagittal), and resolution and turns them into high-resolution T1 scans that are usable by virtually all existing human neuroimaging tools. We present results on segmentation, registration, and atlasing of >10,000 scans of controls and patients with brain tumors, strokes, and Alzheimer’s disease. SynthSR yields morphometric results that are very highly correlated with what one would have obtained with high-resolution T1 scans. SynthSR allows sample sizes that have the potential to overcome the power limitations of prospective research studies and shed new light on the healthy and diseased human brain.
2022
- IEEE TMI: Learn2Reg: comprehensive multi-task medical image registration challenge, dataset and evaluation in the era of deep learning. Alessa Hering, Lasse Hansen, Tony C. W. Mok, Albert C. S. Chung, and 49 more authors. IEEE Transactions on Medical Imaging, 2022
Image registration is a fundamental medical image analysis task, and a wide variety of approaches have been proposed. However, only a few studies have comprehensively compared medical image registration approaches on a wide range of clinically relevant tasks. This limits the development of registration methods, the adoption of research advances into practice, and a fair benchmark across competing approaches. The Learn2Reg challenge addresses these limitations by providing a multi-task medical image registration data set for comprehensive characterisation of deformable registration algorithms. A continuous evaluation will be possible at https://learn2reg.grand-challenge.org. Learn2Reg covers a wide range of anatomies (brain, abdomen, and thorax), modalities (ultrasound, CT, MR), availability of annotations, as well as intra- and inter-patient registration evaluation. We established an easily accessible framework for training and validation of 3D registration methods, which enabled the compilation of results of over 65 individual method submissions from more than 20 unique teams. We used a complementary set of metrics, including robustness, accuracy, plausibility, and runtime, enabling unique insight into the current state-of-the-art of medical image registration. This paper describes datasets, tasks, evaluation methods and results of the challenge, as well as results of further analysis of transferability to new datasets, the importance of label supervision, and resulting bias. While no single approach worked best across all tasks, many methodological aspects could be identified that push the performance of medical image registration to new state-of-the-art performance. Furthermore, we demystified the common belief that conventional registration methods have to be much slower than deep-learning-based methods.
- MIUA: Fitting Segmentation Networks on Varying Image Resolutions Using Splatting. In Annual Conference on Medical Image Understanding and Analysis, 2022
Data used in image segmentation are not always defined on the same grid. This is particularly true for medical images, where the resolution, field-of-view and orientation can differ across channels and subjects. Images and labels are therefore commonly resampled onto the same grid, as a pre-processing step. However, the resampling operation introduces partial volume effects and blurring, thereby changing the effective resolution and reducing the contrast between structures. In this paper we propose a splat layer, which automatically handles resolution mismatches in the input data. This layer pushes each image onto a mean space where the forward pass is performed. As the splat operator is the adjoint to the resampling operator, the mean-space prediction can be pulled back to the native label space, where the loss function is computed. Thus, the need for explicit resolution adjustment using interpolation is removed. We show on two publicly available datasets, with simulated and real multi-modal magnetic resonance images, that this model improves segmentation results compared to resampling as a pre-processing step.
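A minimal 1D illustration of the splat idea (not the paper's implementation): "pull" resamples an image at new coordinates by linear interpolation, and "splat" is its adjoint, scatter-adding values back onto the grid, which is what allows mean-space predictions to be pulled back to the native label space.

```python
# 1D pull (linear interpolation) and its adjoint, splat (scatter-add).
# Illustrative sketch of the adjoint relationship described above,
# not the authors' splat layer. Assumes numpy.
import numpy as np

def pull(img, coord):
    """Sample img at fractional coordinates coord by linear interpolation."""
    lo = np.floor(coord).astype(int)
    w = coord - lo
    lo = np.clip(lo, 0, len(img) - 2)
    return (1 - w) * img[lo] + w * img[lo + 1]

def splat(val, coord, n):
    """Adjoint of pull: scatter values at coord onto a grid of length n."""
    out = np.zeros(n)
    lo = np.floor(coord).astype(int)
    w = coord - lo
    lo = np.clip(lo, 0, n - 2)
    np.add.at(out, lo, (1 - w) * val)
    np.add.at(out, lo + 1, w * val)
    return out

# Adjoint check: <pull(x), y> == <x, splat(y)>
rng = np.random.default_rng(0)
x, y = rng.standard_normal(16), rng.standard_normal(8)
c = rng.uniform(0, 15, size=8)
assert np.isclose(np.dot(pull(x, c), y), np.dot(x, splat(y, c, 16)))
```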
- WBIR: SuperWarp: Supervised Learning and Warping on U-Net for Invariant Subvoxel-Precise Registration. In International Workshop on Biomedical Image Registration, 2022
In recent years, learning-based image registration methods have gradually moved away from direct supervision with target warps to instead use self-supervision, with excellent results in several registration benchmarks. These approaches utilize a loss function that penalizes the intensity differences between the fixed and moving images, along with a suitable regularizer on the deformation. In this paper, we argue that the relative failure of supervised registration approaches can in part be blamed on the use of regular U-Nets, which are jointly tasked with feature extraction, feature matching, and estimation of deformation. We introduce one simple but crucial modification to the U-Net that disentangles feature extraction and matching from deformation prediction, allowing the U-Net to warp the features, across levels, as the deformation field is evolved. With this modification, direct supervision using target warps begins to outperform self-supervision approaches that require segmentations, presenting new directions for registration when images do not have segmentations. We hope that our findings in this preliminary workshop paper will re-ignite research interest in supervised image registration techniques.
- MRM: Correcting inter-scan motion artifacts in quantitative R1 mapping at 7T. Magnetic Resonance in Medicine, 2022
Purpose: Inter-scan motion is a substantial source of error in R1 estimation methods based on multiple volumes, for example, variable flip angle (VFA), and can be expected to increase at 7T where B1 fields are more inhomogeneous. The established correction scheme does not translate to 7T since it requires a body coil reference. Here we introduce two alternatives that outperform the established method. Since they compute relative sensitivities they do not require body coil images.
Theory: The proposed methods use coil-combined magnitude images to obtain the relative coil sensitivities. The first method efficiently computes the relative sensitivities via a simple ratio; the second by fitting a more sophisticated generative model.
Methods: R1 maps were computed using the VFA approach. Multiple datasets were acquired at 3T and 7T, with and without motion between the acquisition of the VFA volumes. R1 maps were constructed without correction, with the proposed corrections, and (at 3T) with the previously established correction scheme. The effect of the greater inhomogeneity in the transmit field at 7T was also explored by acquiring B1+ maps at each position.
Results: At 3T, the proposed methods outperform the baseline method. Inter-scan motion artifacts were also reduced at 7T. However, at 7T reproducibility only converged on that of the no motion condition if position-specific transmit field effects were also incorporated.
Conclusion: The proposed methods simplify inter-scan motion correction of R1 maps and are applicable at both 3T and 7T, where a body coil is typically not available. The open-source code for all methods is made publicly available.
- IEEE TBE: Volumetric characterization of microvasculature in ex vivo human brain samples by serial sectioning optical coherence tomography. Jiarui Yang, Shuaibin Chang, Ichun Anderson Chen, Sreekanth Kura, and 11 more authors. IEEE Transactions on Biomedical Engineering, 2022
Objective: Serial sectioning optical coherence tomography (OCT) enables accurate volumetric reconstruction of several cubic centimeters of human brain samples. We aimed to identify anatomical features of the ex vivo human brain, such as intraparenchymal blood vessels and axonal fiber bundles, from the OCT data in 3D, using intrinsic optical contrast.
Methods: We developed an automatic processing pipeline to enable characterization of the intraparenchymal microvascular network in human brain samples.
Results: We demonstrated the automatic extraction of vessels down to 20 μm in diameter using a filtering strategy followed by a graphing representation, and characterization of the geometrical properties of the microvascular network in 3D. We also showed the ability to extend this processing strategy to extract axonal fiber bundles from the volumetric OCT image.
Conclusion: This method provides a viable tool for quantitative characterization of the volumetric microvascular network as well as axonal bundle properties in normal and pathological tissues of the ex vivo human brain.
- Front.Neurosci.: Factorisation-based image labelling. Frontiers in Neuroscience, 2022
Segmentation of brain magnetic resonance images (MRI) into anatomical regions is a useful task in neuroimaging. Manual annotation is time consuming and expensive, so having a fully automated and general purpose brain segmentation algorithm is highly desirable. To this end, we propose a patch-based label propagation approach based on a generative model with latent variables. Once trained, our Factorisation-based Image Labelling (FIL) model is able to label target images with a variety of image contrasts. We compare the effectiveness of our proposed model against the state-of-the-art using data from the MICCAI 2012 Grand Challenge and Workshop on Multi-Atlas Labelling. As our approach is intended to be general purpose, we also assess how well it can handle domain shift by labelling images of the same subjects acquired with different MR contrasts.
2021
- MedIA: Model-based multi-parameter mapping. Medical Image Analysis, 2021
Quantitative MR imaging is increasingly favoured for its richer information content and standardised measures. However, computing quantitative parameter maps, such as those encoding longitudinal relaxation rate (R1), apparent transverse relaxation rate (R2*) or magnetisation-transfer saturation (MTsat), involves inverting a highly non-linear function. Many methods for deriving parameter maps assume perfect measurements and do not consider how noise is propagated through the estimation procedure, resulting in needlessly noisy maps. Instead, we propose a probabilistic generative (forward) model of the entire dataset, which is formulated and inverted to jointly recover (log) parameter maps with a well-defined probabilistic interpretation (e.g., maximum likelihood or maximum a posteriori). The second order optimisation we propose for model fitting achieves rapid and stable convergence thanks to a novel approximate Hessian. We demonstrate the utility of our flexible framework in the context of recovering more accurate maps from data acquired using the popular multi-parameter mapping protocol. We also show how to incorporate a joint total variation prior to further decrease the noise in the maps, noting that the probabilistic formulation allows the uncertainty on the recovered parameter maps to be estimated. Our implementation uses a PyTorch backend and benefits from GPU acceleration. It is available at https://github.com/balbasty/nitorch.
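For context, the kind of nonlinear forward model being inverted here can be sketched with the standard spoiled gradient-echo signal equation (a simplification of the full generative model used in the paper): each echo acquired at echo time $TE$ with flip angle $\alpha$ and repetition time $TR$ is modelled as

```latex
% Standard spoiled gradient-echo (Ernst) signal with mono-exponential R2* decay;
% a simplified stand-in for the full generative model described in the paper.
S(\alpha, TR, TE) \;=\; S_0 \, \sin\alpha \,
  \frac{1 - e^{-R_1\, TR}}{1 - \cos\alpha \, e^{-R_1\, TR}} \;
  e^{-R_2^{*}\, TE}
```

so that the (log) parameter maps $S_0$, $R_1$ and $R_2^*$ enter the signal highly nonlinearly, which is why noise propagation and a well-conditioned Hessian matter for the fit.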
- MIDL: An MRF-UNet product of experts for image segmentation. In Medical Imaging with Deep Learning, 2021
While convolutional neural networks (CNNs) trained by back-propagation have seen unprecedented success at semantic segmentation tasks, they are known to struggle on out-of-distribution data. Markov random fields (MRFs) on the other hand, encode simpler distributions over labels that, although less flexible than UNets, are less prone to over-fitting. In this paper, we propose to fuse both strategies by computing the product of distributions of a UNet and an MRF. As this product is intractable, we solve for an approximate distribution using an iterative mean-field approach. The resulting MRF-UNet is trained jointly by back-propagation. Compared to other works using conditional random fields (CRFs), the MRF has no dependency on the imaging data, which should allow for less over-fitting. We show on 3D neuroimaging data that this novel network improves generalisation to out-of-distribution samples. Furthermore, it allows the overall number of parameters to be reduced while preserving high accuracy. These results suggest that a classic MRF smoothness prior can allow for less over-fitting when principally integrated into a CNN model. Our implementation is available at https://github.com/balbasty/nitorch.
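In simplified notation of my own (a sketch, not the paper's exact formulation), the fusion is a product of experts over the label map $z$ given image $x$, with inference by a factorised mean-field approximation:

```latex
% Product of experts between the UNet term and the MRF prior over labels z;
% the product is intractable, so a factorised (mean-field) distribution q(z)
% is updated iteratively to approximate it.
p(z \mid x) \;\propto\; p_{\text{UNet}}(z \mid x)\, p_{\text{MRF}}(z),
\qquad
q(z) \;=\; \prod_{i} q_i(z_i) \;\approx\; p(z \mid x)
```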
- Neuroimage: Joint super-resolution and synthesis of 1 mm isotropic MP-RAGE volumes from clinical MRI exams with scans of different orientation, resolution and contrast. Juan Eugenio Iglesias, Benjamin Billot, Yaël Balbastre, Azadeh Tabari, and 6 more authors. Neuroimage, 2021
Most existing algorithms for automatic 3D morphometry of human brain MRI scans are designed for data with near-isotropic voxels at approximately 1 mm resolution, and frequently have contrast constraints as well, typically requiring T1-weighted images (e.g., MP-RAGE scans). This limitation prevents the analysis of millions of MRI scans acquired with large inter-slice spacing in clinical settings every year. In turn, the inability to quantitatively analyze these scans hinders the adoption of quantitative neuroimaging in healthcare, and also precludes research studies that could attain huge sample sizes and hence greatly improve our understanding of the human brain. Recent advances in convolutional neural networks (CNNs) are producing outstanding results in super-resolution and contrast synthesis of MRI. However, these approaches are very sensitive to the specific combination of contrast, resolution and orientation of the input images, and thus do not generalize to diverse clinical acquisition protocols, even within sites. In this article, we present SynthSR, a method to train a CNN that receives one or more scans with spaced slices, acquired with different contrast, resolution and orientation, and produces an isotropic scan of canonical contrast (typically a 1 mm MP-RAGE). The presented method does not require any preprocessing, beyond rigid coregistration of the input scans. Crucially, SynthSR trains on synthetic input images generated from 3D segmentations, and can thus be used to train CNNs for any combination of contrasts, resolutions and orientations without high-resolution real images of the input contrasts. We test the images generated with SynthSR in an array of common downstream analyses, and show that they can be reliably used for subcortical segmentation and volumetry, image registration (e.g., for tensor-based morphometry), and, if some image quality requirements are met, even cortical thickness morphometry. The source code is publicly available at https://github.com/BBillot/SynthSR.
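A toy version of the synthetic-training idea (not the released SynthSR generator, which is far richer): sample an intensity per label, add noise and smoothing, then simulate thick slices by keeping only every few slices along one axis.

```python
# Toy label-to-image synthesis in the spirit of training from segmentations
# (illustration only; the real generator also simulates bias fields, spatial
# deformations and many other augmentations). Assumes numpy and scipy.
import numpy as np
from scipy.ndimage import gaussian_filter

def synth_from_labels(seg, slice_spacing=5, rng=None):
    """seg: 3D integer label map. Returns a synthetic low-resolution image."""
    rng = np.random.default_rng(rng)
    # Random mean intensity per label (a crude random GMM draw)
    means = rng.uniform(0, 1, size=int(seg.max()) + 1)
    img = means[seg] + rng.normal(0, 0.05, size=seg.shape)   # additive noise
    img = gaussian_filter(img, sigma=1.0)                    # partial-volume blur
    return img[::slice_spacing]                              # thick slices along axis 0

seg = np.random.default_rng(0).integers(0, 4, size=(60, 64, 64))
lowres = synth_from_labels(seg)   # e.g. (12, 64, 64) synthetic network input
```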
- HBM: Simultaneous voxel-wise analysis of brain and spinal cord morphometry and microstructure within the SPM framework. Michela Azzarito, Sreenath P Kyathanahally, Yaël Balbastre, Maryam Seif, and 4 more authors. Human Brain Mapping, 2021
To validate a simultaneous analysis tool for the brain and cervical cord embedded in the statistical parametric mapping (SPM) framework, we compared trauma-induced macro- and microstructural changes in spinal cord injury (SCI) patients to controls. The findings were compared with results obtained from existing processing tools that assess the brain and spinal cord separately. A probabilistic brain-spinal cord template (BSC) was generated using a generative semi-supervised modelling approach. The template was incorporated into the pre-processing pipeline of voxel-based morphometry and voxel-based quantification analyses in SPM. This approach was validated on T1-weighted scans and multiparameter maps, by assessing trauma-induced changes in SCI patients relative to controls and comparing the findings with the outcome from existing analytical tools. Consistency of the MRI measures was assessed using intraclass correlation coefficients (ICC). The SPM approach using the BSC template revealed trauma-induced changes across the sensorimotor system in the cord and brain in SCI patients. These changes were confirmed with established approaches covering brain or cord, separately. The ICC in the brain was high within regions of interest, such as the sensorimotor cortices, corticospinal tracts and thalamus. The simultaneous voxel-wise analysis of brain and cervical spinal cord was performed in a unique SPM-based framework incorporating pre-processing and statistical analysis in the same environment. Validation based on an SCI cohort demonstrated that the new processing approach based on the brain and cord is comparable to available processing tools, while offering the advantage of performing the analysis simultaneously across the neuraxis.
2020
- JCBFM: Assessment of simplified methods for quantification of [18F]-DPA-714 using 3D whole-brain TSPO immunohistochemistry in a non-human primate. Nadja Van Camp, Yaël Balbastre*, Anne-Sophie Herard*, Sonia Lavisse, and 10 more authors. Journal of Cerebral Blood Flow & Metabolism, 2020
The 18 kDa translocator protein (TSPO) is the main molecular target to image neuroinflammation by positron emission tomography (PET). However, TSPO-PET quantification is complex and none of the kinetic modelling approaches has been validated using a voxel-by-voxel comparison of TSPO-PET data with the actual TSPO levels of expression. Here, we present a single case study of binary classification of in vivo PET data to evaluate the statistical performance of different TSPO-PET quantification methods. To that end, we induced a localized and adjustable increase of TSPO levels in a non-human primate brain through a viral-vector strategy. We then performed a voxel-wise comparison of the different TSPO-PET quantification approaches providing parametric [18F]-DPA-714 PET images, with co-registered in vitro three-dimensional TSPO immunohistochemistry (3D-IHC) data. A data matrix was extracted from each brain hemisphere, containing the TSPO-IHC and TSPO-PET data for each voxel position. Each voxel was then classified as false or true, positive or negative after comparison of the TSPO-PET measure to the reference 3D-IHC method. Finally, receiver operating characteristic curves (ROC) were calculated for each TSPO-PET quantification method. Our results show that standard uptake value ratios using cerebellum as a reference region (SUVCBL) has the most optimal ROC score amongst all non-invasive approaches.
- Cell Metab.: Impairment of glycolysis-derived l-serine production in astrocytes contributes to cognitive deficits in Alzheimer’s disease. Juliette Le Douce†, Marianne Maugard†, Julien Veran†, Marco Matos†, and 32 more authors. Cell Metabolism, 2020
Alteration of brain aerobic glycolysis is often observed early in the course of Alzheimer’s disease (AD). Whether and how such metabolic dysregulation contributes to both synaptic plasticity and behavioral deficits in AD is not known. Here, we show that the astrocytic l-serine biosynthesis pathway, which branches from glycolysis, is impaired in young AD mice and in AD patients. l-serine is the precursor of d-serine, a co-agonist of synaptic NMDA receptors (NMDARs) required for synaptic plasticity. Accordingly, AD mice display a lower occupancy of the NMDAR co-agonist site as well as synaptic and behavioral deficits. Similar deficits are observed following inactivation of the l-serine synthetic pathway in hippocampal astrocytes, supporting the key role of astrocytic l-serine. Supplementation with l-serine in the diet prevents both synaptic and behavioral deficits in AD mice. Our findings reveal that astrocytic glycolysis controls cognitive functions and suggest oral l-serine as a ready-to-use therapy for AD.
- MICCAI: Flexible Bayesian modelling for nonlinear image registration. In Medical Image Computing and Computer Assisted Intervention – MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part III 23, 2020
We describe a diffeomorphic registration algorithm that allows groups of images to be accurately aligned to a common space, which we intend to incorporate into the SPM software. The idea is to perform inference in a probabilistic graphical model that accounts for variability in both shape and appearance. The resulting framework is general and entirely unsupervised. The model is evaluated at inter-subject registration of 3D human brain scans. Here, the main modeling assumption is that individual anatomies can be generated by deforming a latent ’average’ brain. The method is agnostic to imaging modality and can be applied with no prior processing. We evaluate the algorithm using freely available, manually labelled datasets. In this validation, we achieve state-of-the-art results, within reasonable runtimes, against widely used, previously state-of-the-art inter-subject registration algorithms. On the unprocessed dataset, the increase in overlap score is over 17%. These results demonstrate the benefits of using informative computational anatomy frameworks for nonlinear registration.
- MICCAI: Joint total variation ESTATICS for robust multi-parameter mapping. In Medical Image Computing and Computer Assisted Intervention – MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part II 23, 2020
Quantitative magnetic resonance imaging (qMRI) derives tissue-specific parameters – such as the apparent transverse relaxation rate R2*, the longitudinal relaxation rate R1 and the magnetisation transfer saturation – that can be compared across sites and scanners and carry important information about the underlying microstructure. The multi-parameter mapping (MPM) protocol takes advantage of multi-echo acquisitions with variable flip angles to extract these parameters in a clinically acceptable scan time. In this context, ESTATICS performs a joint loglinear fit of multiple echo series to extract R2* and multiple extrapolated intercepts, thereby improving robustness to motion and decreasing the variance of the estimators. In this paper, we extend this model in two ways: (1) by introducing a joint total variation (JTV) prior on the intercepts and decay, and (2) by deriving a nonlinear maximum a posteriori estimate. We evaluated the proposed algorithm by predicting left-out echoes in a rich single-subject dataset. In this validation, we outperformed other state-of-the-art methods and additionally showed that the proposed approach greatly reduces the variance of the estimated maps, without introducing bias.
- MIUA: Groupwise Multimodal Image Registration using Joint Total Variation. In Medical Image Understanding and Analysis: 24th Annual Conference, MIUA 2020, Oxford, UK, July 15-17, 2020, Proceedings 24, 2020
In medical imaging it is common practice to acquire a wide range of modalities (MRI, CT, PET, etc.), to highlight different structures or pathologies. As patient movement between scans or scanning sessions is unavoidable, registration is often an essential step before any subsequent image analysis. In this paper, we introduce a cost function based on joint total variation for such multimodal image registration. This cost function has the advantage of enabling principled, groupwise alignment of multiple images, whilst being insensitive to strong intensity non-uniformities. We evaluate our algorithm on rigidly aligning both simulated and real 3D brain scans. This validation shows robustness to strong intensity non-uniformities and low registration errors for CT/PET to MRI alignment. Our implementation is publicly available at https://github.com/brudfors/coregistration-njtv.
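The joint total variation cost used here (and in the JTV-ESTATICS paper above) pools the channel gradients under a single square root at every voxel, so edges are encouraged to co-occur across modalities; in simplified notation of my own,

```latex
% Joint total variation over C aligned channels f_c: gradient magnitudes are
% pooled inside one square root per voxel v, coupling edges across channels.
\mathrm{JTV}(f_1,\dots,f_C) \;=\; \sum_{v} \sqrt{\sum_{c=1}^{C} \big\lVert \nabla f_c(v) \big\rVert_2^{2}}
```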
2019
- Front.Neuroanat.: Automated individualization of size-varying and touching neurons in macaque cerebral microscopic images. Zhenzhen You, Yaël Balbastre, Clément Bouvier, Anne-Sophie Hérard, and 5 more authors. Frontiers in Neuroanatomy, 2019
In biomedical research, cell analysis is important to assess physiological and pathophysiological information. Virtual microscopy offers the unique possibility to study the composition of tissues at a cellular scale. However, images acquired at such high spatial resolution are massive, contain complex information, and are therefore difficult to analyze automatically. In this article, we address the problem of individualization of size-varying and touching neurons in optical microscopy two-dimensional (2-D) images. Our approach is based on a series of processing steps that incorporate increasingly more information. (1) After a step of segmentation of the neuron class using a Random Forest classifier, a novel min-max filter is used to enhance neurons’ centroids and boundaries, enabling a region-growing process based on a contour-based model to drive it to the neuron boundary and achieve individualization of touching neurons. (2) To take into account size-varying neurons, an adaptive multiscale procedure aiming at individualizing touching neurons is proposed. This protocol was evaluated in 17 major anatomical regions from three NeuN-stained macaque brain sections presenting diverse and comprehensive neuron densities. Qualitative and quantitative analyses demonstrate that the proposed method provides satisfactory results in most regions (e.g., caudate, cortex, subiculum, and putamen) and outperforms a baseline Watershed algorithm. Neuron counts obtained with our method show high correlation with an adapted stereology technique performed by two experts (respectively, 0.983 and 0.975 for the two experts). Neuron diameters obtained with our method ranged between 2 and 28.6 μm, matching values reported in the literature. Further work will aim to evaluate the impact of staining and interindividual variability on our protocol.
- arXiv: A tool for super-resolving multimodal clinical MRI. Mikael Brudfors, Yael Balbastre, Parashkev Nachev, and John Ashburner. arXiv preprint arXiv:1909.01140, 2019
We present a tool for resolution recovery in multimodal clinical magnetic resonance imaging (MRI). Such images exhibit great variability, both biological and instrumental. This variability makes automated processing with neuroimaging analysis software very challenging. This leaves intelligence extractable only from large-scale analyses of clinical data untapped, and impedes the introduction of automated predictive systems in clinical care. The tool presented in this paper enables such processing, via inference in a generative model of thick-sliced, multi-contrast MR scans. All model parameters are estimated from the observed data, without the need for manual tuning. The model-driven nature of the approach means that no type of training is needed for applicability to the diversity of MR contrasts present in a clinical context. We show on simulated data that the proposed approach outperforms conventional model-based techniques, and on a large hospital dataset of multimodal MRIs that the tool can successfully super-resolve very thick-sliced images. The implementation is available from https://github.com/brudfors/spm_superres.
- MedIA: An algorithm for learning shape and appearance models without annotations. John Ashburner, Mikael Brudfors, Kevin Bronik, and Yael Balbastre. Medical Image Analysis, 2019
This paper presents a framework for automatically learning shape and appearance models for medical (and certain other) images. It is based on the idea that having a more accurate shape and appearance model leads to more accurate image registration, which in turn leads to a more accurate shape and appearance model. This leads naturally to an iterative scheme, which is based on a probabilistic generative model that is fit using Gauss-Newton updates within an EM-like framework. It was developed with the aim of enabling distributed privacy-preserving analysis of brain image data, such that shared information (shape and appearance basis functions) may be passed across sites, whereas latent variables that encode individual images remain secure within each site. These latent variables are proposed as features for privacy-preserving data mining applications.
The approach is demonstrated qualitatively on the KDEF dataset of 2D face images, showing that it can align images that traditionally require shape and appearance models trained using manually annotated data (manually defined landmarks, etc.). It is applied to the MNIST dataset of handwritten digits to show its potential for machine learning applications, particularly when training data is limited. The model is able to handle “missing data”, which allows it to be cross-validated according to how well it can predict left-out voxels. The suitability of the derived features for classifying individuals into patient groups was assessed by applying it to a dataset of over 1,900 segmented T1-weighted MR images, which included images from the COBRE and ABIDE datasets.
- MICCAI-SASHIMI: Empirical Bayesian mixture models for medical image translation. In Simulation and Synthesis in Medical Imaging: 4th International Workshop, SASHIMI 2019, Held in Conjunction with MICCAI 2019, Shenzhen, China, October 13, 2019, Proceedings 4, 2019
Automatically generating one medical imaging modality from another is known as medical image translation, and has numerous interesting applications. This paper presents an interpretable generative modelling approach to medical image translation. By allowing a common model for group-wise normalisation and segmentation of brain scans to handle missing data, the model allows for predicting entirely missing modalities from one, or a few, MR contrasts. Furthermore, the model can be trained on a fairly small number of subjects. The proposed model is validated on three clinically relevant scenarios. Results appear promising and show that a principled, probabilistic model of the relationship between multi-channel signal intensities can be used to infer missing modalities – both MR contrasts and CT images.
- IPMI: Nonlinear Markov random fields learned via backpropagation. In Information Processing in Medical Imaging: 26th International Conference, IPMI 2019, Hong Kong, China, June 2–7, 2019, Proceedings 26, 2019
Although convolutional neural networks (CNNs) currently dominate competitions on image segmentation, for neuroimaging analysis tasks, more classical generative approaches based on mixture models are still used in practice to parcellate brains. To bridge the gap between the two, in this paper we propose a marriage between a probabilistic generative model, which has been shown to be robust to variability among magnetic resonance (MR) images acquired via different imaging protocols, and a CNN. The link is in the prior distribution over the unknown tissue classes, which are classically modelled using a Markov random field. In this work we model the interactions among neighbouring pixels by a type of recurrent CNN, which can encode more complex spatial interactions. We validate our proposed model on publicly available MR data, from different centres, and show that it generalises across imaging protocols. This result demonstrates a successful and principled inclusion of a CNN in a generative model, which in turn could be adapted by any probabilistic generative approach for image segmentation.
2018
- Front.Neurosci.: Voxel-based statistical analysis of 3D immunostained tissue imaging. Michel E Vandenberghe, Nicolas Souedet, Anne-Sophie Hérard, Anne-Marie Ayral, and 8 more authors. Frontiers in Neuroscience, 2018
Recently developed techniques to visualize immunostained tissues in 3D and in large samples have expanded the scope of microscopic investigations at the level of the whole brain. Here, we propose to adapt voxel-based statistical analysis to 3D high-resolution images of the immunostained rodent brain. The proposed approach was first validated with a simulation dataset with known cluster locations. Then, it was applied to characterize the effect of ADAM30, a gene involved in the metabolism of the amyloid precursor protein, in a mouse model of Alzheimer’s disease. This work introduces voxel-based analysis of 3D immunostained microscopic brain images and, therefore, opens the door to localized whole-brain exploratory investigation of pathological markers and cellular alterations.
- Data Br.: A validation dataset for Macaque brain MRI segmentation. Yaël Balbastre, Denis Rivière, Nicolas Souedet, Clara Fischer, and 8 more authors. Data in Brief, 2018
Validation data for segmentation algorithms dedicated to preclinical images is fiercely lacking, especially when compared to the large number of databases of Human brain images and segmentations available to the academic community. Not only is such data essential for validating methods, it is also needed for objectively comparing concurrent algorithms and detecting promising paths, as segmentation challenges have shown for clinical images.
The dataset we present here is a first step in this direction. It comprises 10 T2-weighted MRIs of healthy adult macaque brains, acquired on a 7 T magnet, along with corresponding manual segmentations into 17 brain anatomic labelled regions spread over 5 hierarchical levels based on a previously published macaque atlas (Calabrese et al., 2015) [1].
By giving access to this unique dataset, we hope to provide a reference needed by the non-human primate imaging community. This dataset was used in an article presenting a new primate brain morphology analysis pipeline, Primatologist (Balbastre et al., 2017) [2]. Data is available through a NITRC repository (https://www.nitrc.org/projects/mircen_macset).
- MIUA: MRI super-resolution using multi-channel total variation. In Medical Image Understanding and Analysis: 22nd Conference, MIUA 2018, Southampton, UK, July 9-11, 2018, Proceedings 22, 2018
This paper presents a generative model for super-resolution in routine clinical magnetic resonance images (MRI), of arbitrary orientation and contrast. The model recasts the recovery of high resolution images as an inverse problem, in which a forward model simulates the slice-select profile of the MR scanner. The paper introduces a prior based on multi-channel total variation for MRI super-resolution. Bias-variance trade-off is handled by estimating hyper-parameters from the low resolution input scans. The model was validated on a large database of brain images. The validation showed that the model can improve brain segmentation, that it can recover anatomical information between images of different MR contrasts, and that it generalises well to the large variability present in MR images of different subjects.
- MICCAI: Diffeomorphic brain shape modelling using Gauss-Newton optimisation. In Medical Image Computing and Computer Assisted Intervention – MICCAI 2018: 21st International Conference, Granada, Spain, September 16-20, 2018, Proceedings, Part I, 2018
Shape modelling describes methods aimed at capturing the natural variability of shapes and commonly relies on probabilistic interpretations of dimensionality reduction techniques such as principal component analysis. Due to their computational complexity when dealing with dense deformation models such as diffeomorphisms, previous attempts have focused on explicitly reducing their dimension, diminishing de facto their flexibility and ability to model complex shapes such as brains. In this paper, we present a generative model of shape that allows the covariance structure of deformations to be captured without squashing their domain, resulting in better normalisation. An efficient inference scheme based on Gauss-Newton optimisation is used, which enables processing of 3D neuroimaging data. We trained this algorithm on segmented brains from the OASIS database, generating physiologically meaningful deformation trajectories. To prove the model’s robustness, we applied it to unseen data, which resulted in equivalent fitting scores.
2017
- Neuroimage: Primatologist: a modular segmentation pipeline for macaque brain morphometry. Yaël Balbastre, Denis Rivière, Nicolas Souedet, Clara Fischer, and 8 more authors. Neuroimage, 2017
Because they bridge the genetic gap between rodents and humans, non-human primates (NHPs) play a major role in therapy development and evaluation for neurological disorders. However, translational research success from NHPs to patients requires an accurate phenotyping of the models. In patients, magnetic resonance imaging (MRI) combined with automated segmentation methods has offered the unique opportunity to assess in vivo brain morphological changes. Meanwhile, specific challenges caused by brain size and high field contrasts make existing algorithms hard to use routinely in NHPs. To tackle this issue, we propose a complete pipeline, Primatologist, for multi-region segmentation. Tissue segmentation is based on a modular statistical model that includes random field regularization, bias correction and denoising and is optimized by expectation-maximization. To deal with the broad variety of structures with different relaxation times at 7 T, images are segmented into 17 anatomical classes, including subcortical regions. Pre-processing steps ensure a good initialization of the parameters and thus the robustness of the pipeline. It is validated on 10 T2-weighted MRIs of healthy macaque brains. Classification scores are compared with those of a non-linear atlas registration, and the impact of each module on classification scores is thoroughly evaluated.
2016
- Thesis: Développement et validation d’outils pour l’analyse morphologique du cerveau de Macaque [Development and validation of tools for the morphological analysis of the macaque brain]. Yaël Balbastre. Université Paris Saclay (COmUE), 2016
- IEEE ICIP: Automated cell individualization and counting in cerebral microscopic images. Zhenzhen You, Michel E Vandenberghe, Yael Balbastre, Nicolas Souedet, and 4 more authors. In 2016 IEEE International Conference on Image Processing (ICIP), 2016
In biomedical research, cell counting is important to assess physiological and pathophysiological information. However, the automated analysis of microscopic images of tissues remains extremely challenging. We propose an automated processing protocol for proper segmentation of individual cells in microscopic images. A Gaussian filter is applied to improve the signal-to-noise ratio (SNR); then an original min-max method is proposed to produce an image in which information describing both cell centers (minima) and boundaries is enhanced. Finally, a contour-based model initialized from minima in the min-max cartography is carried out to achieve cell individualization. This method is evaluated on a NeuN-stained macaque brain section in sub-regions presenting various levels of fraction of neuron surface occupation. Comparison with several reference methods demonstrates that the performance of our method is superior. A first application to the segmentation of neurons in the hippocampus illustrates the ability of our approach to deal with massive and complex data.
- MICCAI-BrainLes: A quantitative approach to characterize MR contrasts with histology. Yaël Balbastre, Michel E Vandenberghe, Anne-Sophie Hérard, Pauline Gipchtein, and 6 more authors. In Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries: First International Workshop, Brainles 2015, Held in Conjunction with MICCAI 2015, Munich, Germany, October 5, 2015, Revised Selected Papers 1, 2016
Immunohistochemistry is widely used as a gold standard to inspect tissues, characterize their structure and detect pathological alterations. As such, the joint analysis of histological images and other imaging modalities (MRI, PET) is of major interest to interpret these physical signals and establish their correspondence with the biological constitution of the tissues. However, it is challenging to provide a meaningful characterization of the signal specificity. In this paper, we propose an integrated method to quantitatively evaluate the discriminative power of imaging modalities. This method was validated using a macaque brain dataset containing: 3 immunohistochemically stained and 1 histochemically stained series, 1 photographic volume and 1 in vivo T2-weighted MRI. First, biological regions of interest (ROIs) were automatically delineated from histological sections stained for markers of interest and mapped onto the target non-specific modalities through co-registration. These non-overlapping ROIs were considered ground truth for later classification. Voxels were evenly split into training and testing sets for a logistic regression model. The statistical significance of the resulting accuracy scores was evaluated through null-distribution simulations. Such an approach could be of major interest to assess relevant biological characteristics from various imaging modalities.
2015
- IEEE EMBC: Robust supervised segmentation of neuropathology whole-slide microscopy images. Michel E Vandenberghe, Yaël Balbastre, Nicolas Souedet, Anne-Sophie Hérard, and 3 more authors. In 2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), 2015
Alzheimer’s disease is characterized by brain pathological aggregates such as Aβ plaques and neurofibrillary tangles, which trigger neuroinflammation and participate in neuronal loss. Quantification of these pathological markers on histological sections is widely performed to study the disease and to evaluate new therapies. However, segmentation of neuropathology images presents difficulties inherent to histology (presence of debris, tissue folding, non-specific staining) as well as specific challenges (sparse staining, irregular shape of the lesions). Here, we present a supervised classification approach for the robust pixel-level classification of large neuropathology whole-slide images. We propose a weighted form of Random Forest in order to fit nonlinear decision boundaries that take into account class imbalance. Both color and texture descriptors were used as predictors, and model selection was performed via a leave-one-image-out cross-validation scheme. Our method showed superior results compared to the current state-of-the-art method when applied to the segmentation of Aβ plaques and neurofibrillary tangles in a human brain sample. Furthermore, using parallel computing, our approach easily scales up to large gigabyte-sized images. To show this, we segmented a whole-brain histology dataset of a mouse model of Alzheimer’s disease. This demonstrates our method’s relevance as a routine tool for whole-slide microscopy image analysis in clinical and preclinical research settings.