
<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml" lang="" xml:lang="">
<head>
<meta charset="utf-8" />
<meta name="generator" content="pandoc" />
<meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes" />
<title>fMRIPrep citation boilerplate</title>
<style type="text/css">
code{white-space: pre-wrap;}
span.smallcaps{font-variant: small-caps;}
span.underline{text-decoration: underline;}
div.column{display: inline-block; vertical-align: top; width: 50%;}
</style>
<!--[if lt IE 9]>
<script src="//cdnjs.cloudflare.com/ajax/libs/html5shiv/3.7.3/html5shiv-printshiv.min.js"></script>
<![endif]-->
</head>
<body>
<p>Results included in this manuscript come from preprocessing performed using <em>fMRIPrep</em> 1.2.6-1 (<span class="citation" data-cites="fmriprep1">Esteban, Markiewicz, et al. (2018)</span>; <span class="citation" data-cites="fmriprep2">Esteban, Blair, et al. (2018)</span>; RRID:SCR_016216), which is based on <em>Nipype</em> 1.1.7 (<span class="citation" data-cites="nipype1">Gorgolewski et al. (2011)</span>; <span class="citation" data-cites="nipype2">Gorgolewski et al. (2018)</span>; RRID:SCR_002502).</p>
<dl>
<dt>Anatomical data preprocessing</dt>
<dd><p>The T1-weighted (T1w) image was corrected for intensity non-uniformity (INU) using <code>N4BiasFieldCorrection</code> <span class="citation" data-cites="n4">(Tustison et al. 2010, ANTs 2.2.0)</span>, and used as T1w-reference throughout the workflow. The T1w-reference was then skull-stripped using <code>antsBrainExtraction.sh</code> (ANTs 2.2.0), using OASIS as target template. Spatial normalization to the ICBM 152 Nonlinear Asymmetrical template version 2009c <span class="citation" data-cites="mni">(Fonov et al. 2009, RRID:SCR_008796)</span> was performed through nonlinear registration with <code>antsRegistration</code> <span class="citation" data-cites="ants">(ANTs 2.2.0, RRID:SCR_004757, Avants et al. 2008)</span>, using brain-extracted versions of both T1w volume and template. Brain tissue segmentation of cerebrospinal fluid (CSF), white-matter (WM) and gray-matter (GM) was performed on the brain-extracted T1w using <code>fast</code> <span class="citation" data-cites="fsl_fast">(FSL 5.0.9, RRID:SCR_002823, Zhang, Brady, and Smith 2001)</span>.</p>
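<p>As an illustration only (not <em>fMRIPrep</em>’s actual workflow code), the INU-correction and tissue-segmentation steps above can be reproduced through their standalone <em>Nipype</em> interfaces; the file names in the sketch below are hypothetical placeholders.</p>
<pre><code># Minimal sketch, assuming standalone Nipype interface calls rather than
# fMRIPrep's internal workflow; input/output file names are hypothetical.
from nipype.interfaces.ants import N4BiasFieldCorrection
from nipype.interfaces.fsl import FAST

# INU correction with N4 (ANTs), yielding the T1w-reference
n4 = N4BiasFieldCorrection(
    dimension=3,
    input_image="sub-01_T1w.nii.gz",
    output_image="sub-01_T1w_inu.nii.gz",
)
n4.run()

# CSF/WM/GM tissue segmentation with FSL FAST on the brain-extracted T1w
fast = FAST(in_files="sub-01_T1w_brain.nii.gz", number_classes=3)
fast.run()</code></pre>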
</dd>
<dt>Functional data preprocessing</dt>
<dd><p>For the single BOLD run found per subject (across all tasks and sessions), the following preprocessing was performed. First, a reference volume and its skull-stripped version were generated using a custom methodology of <em>fMRIPrep</em>. A deformation field to correct for susceptibility distortions was estimated based on <em>fMRIPrep</em>’s <em>fieldmap-less</em> approach. The deformation field is that resulting from co-registering the BOLD reference to the same-subject T1w-reference with its intensity inverted <span class="citation" data-cites="fieldmapless1 fieldmapless2">(Wang et al. 2017; Huntenburg 2014)</span>. Registration is performed with <code>antsRegistration</code> (ANTs 2.2.0), and the process is regularized by constraining deformation to be nonzero only along the phase-encoding direction, and modulated with an average fieldmap template <span class="citation" data-cites="fieldmapless3">(Treiber et al. 2016)</span>. Based on the estimated susceptibility distortion, an unwarped BOLD reference was calculated for a more accurate co-registration with the anatomical reference. The BOLD reference was then co-registered to the T1w reference using <code>flirt</code> <span class="citation" data-cites="flirt">(FSL 5.0.9, Jenkinson and Smith 2001)</span> with the boundary-based registration <span class="citation" data-cites="bbr">(Greve and Fischl 2009)</span> cost-function. Co-registration was configured with nine degrees of freedom to account for distortions remaining in the BOLD reference. Head-motion parameters with respect to the BOLD reference (transformation matrices, and six corresponding rotation and translation parameters) are estimated before any spatiotemporal filtering using <code>mcflirt</code> <span class="citation" data-cites="mcflirt">(FSL 5.0.9, Jenkinson et al. 2002)</span>. The BOLD time-series (including slice-timing correction when applied) were resampled onto their original, native space by applying a single, composite transform to correct for head-motion and susceptibility distortions. These resampled BOLD time-series will be referred to as <em>preprocessed BOLD in original space</em>, or just <em>preprocessed BOLD</em>. The BOLD time-series were resampled to MNI152NLin2009cAsym standard space, generating a <em>preprocessed BOLD run in MNI152NLin2009cAsym space</em>. Several confounding time-series were calculated based on the <em>preprocessed BOLD</em>: framewise displacement (FD), DVARS and three region-wise global signals. FD and DVARS are calculated for each functional run, both using their implementations in <em>Nipype</em> <span class="citation" data-cites="power_fd_dvars">(following the definitions by Power et al. 2014)</span>. The three global signals are extracted within the CSF, the WM, and the whole-brain masks. Additionally, a set of physiological regressors were extracted to allow for component-based noise correction <span class="citation" data-cites="compcor">(<em>CompCor</em>, Behzadi et al. 2007)</span>. Principal components are estimated after high-pass filtering the <em>preprocessed BOLD</em> time-series (using a discrete cosine filter with 128s cut-off) for the two <em>CompCor</em> variants: temporal (tCompCor) and anatomical (aCompCor). Six tCompCor components are then calculated from the top 5% variable voxels within a mask covering the subcortical regions. This subcortical mask is obtained by heavily eroding the brain mask, which ensures it does not include cortical GM regions. For aCompCor, six components are calculated within the intersection of the aforementioned mask and the union of CSF and WM masks calculated in T1w space, after their projection to the native space of each functional run (using the inverse BOLD-to-T1w transformation). The head-motion estimates calculated in the correction step were also placed within the corresponding confounds file. All resamplings can be performed with <em>a single interpolation step</em> by composing all the pertinent transformations (i.e. head-motion transform matrices, susceptibility distortion correction when available, and co-registrations to anatomical and template spaces). Gridded (volumetric) resamplings were performed using <code>antsApplyTransforms</code> (ANTs), configured with Lanczos interpolation to minimize the smoothing effects of other kernels <span class="citation" data-cites="lanczos">(Lanczos 1964)</span>. Non-gridded (surface) resamplings were performed using <code>mri_vol2surf</code> (FreeSurfer).</p>
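<p>The head-motion estimation, BBR co-registration and single-shot resampling steps map onto well-known <em>Nipype</em> interfaces. The following minimal sketch calls those interfaces directly with hypothetical file names; it omits the per-volume head-motion matrices and the susceptibility-distortion warp that <em>fMRIPrep</em> composes into the full transform.</p>
<pre><code># Minimal sketch using standalone Nipype interfaces; file names are
# hypothetical and the per-volume motion/SDC transforms are omitted.
from nipype.interfaces.fsl import MCFLIRT, FLIRT
from nipype.interfaces.ants import ApplyTransforms

# Head-motion estimation against the BOLD reference (mcflirt)
mcflirt = MCFLIRT(
    in_file="sub-01_task-rest_bold.nii.gz",
    ref_file="sub-01_boldref.nii.gz",
    save_mats=True,
    save_plots=True,
)
mcflirt.run()

# BOLD-to-T1w co-registration with the BBR cost function and 9 DOF (flirt)
flirt = FLIRT(
    in_file="sub-01_boldref.nii.gz",
    reference="sub-01_T1w_brain.nii.gz",
    cost="bbr",
    wm_seg="sub-01_T1w_wmseg.nii.gz",
    dof=9,
)
flirt.run()

# Resampling into MNI152NLin2009cAsym space with Lanczos interpolation,
# composing the BOLD-to-T1w and T1w-to-template transforms in one step
at = ApplyTransforms(
    input_image="sub-01_task-rest_bold.nii.gz",
    reference_image="tpl-MNI152NLin2009cAsym_T1w.nii.gz",
    transforms=["t1w_to_template_Composite.h5", "bold_to_t1w.txt"],
    input_image_type=3,
    interpolation="LanczosWindowedSinc",
)
at.run()</code></pre>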
</dd>
</dl>
<p>Many internal operations of <em>fMRIPrep</em> use <em>Nilearn</em> 0.5.0 <span class="citation" data-cites="nilearn">(Abraham et al. 2014, RRID:SCR_001362)</span>, mostly within the functional processing workflow. For more details of the pipeline, see <a href="https://fmriprep.readthedocs.io/en/latest/workflows.html" title="FMRIPrep&#39;s documentation">the section corresponding to workflows in <em>fMRIPrep</em>’s documentation</a>.</p>
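<p>As a usage sketch (not part of the boilerplate itself), the confound time-series written out by <em>fMRIPrep</em> can be regressed out of the preprocessed BOLD with <em>Nilearn</em>; the file and confound column names below follow the 1.2.x-era naming scheme but should be treated as illustrative assumptions.</p>
<pre><code># Minimal sketch, assuming fMRIPrep 1.2.x-style output names; the exact
# file and confound column names are illustrative assumptions.
import pandas as pd
from nilearn.input_data import NiftiMasker

confounds = pd.read_csv("sub-01_task-rest_bold_confounds.tsv", sep="\t")
selected = confounds[
    ["FramewiseDisplacement", "aCompCor00", "aCompCor01", "aCompCor02"]
].fillna(0)  # the first FD value is undefined (NaN)

masker = NiftiMasker(standardize=True, detrend=True)
cleaned = masker.fit_transform(
    "sub-01_task-rest_bold_space-MNI152NLin2009cAsym_preproc.nii.gz",
    confounds=selected.values,
)</code></pre>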
<h3 id="references" class="unnumbered">References</h3>
<div id="refs" class="references">
<div id="ref-nilearn">
<p>Abraham, Alexandre, Fabian Pedregosa, Michael Eickenberg, Philippe Gervais, Andreas Mueller, Jean Kossaifi, Alexandre Gramfort, Bertrand Thirion, and Gael Varoquaux. 2014. “Machine Learning for Neuroimaging with Scikit-Learn.” <em>Frontiers in Neuroinformatics</em> 8. <a href="https://doi.org/10.3389/fninf.2014.00014" class="uri">https://doi.org/10.3389/fninf.2014.00014</a>.</p>
</div>
<div id="ref-ants">
<p>Avants, B.B., C.L. Epstein, M. Grossman, and J.C. Gee. 2008. “Symmetric Diffeomorphic Image Registration with Cross-Correlation: Evaluating Automated Labeling of Elderly and Neurodegenerative Brain.” <em>Medical Image Analysis</em> 12 (1): 26–41. <a href="https://doi.org/10.1016/j.media.2007.06.004" class="uri">https://doi.org/10.1016/j.media.2007.06.004</a>.</p>
</div>
<div id="ref-compcor">
<p>Behzadi, Yashar, Khaled Restom, Joy Liau, and Thomas T. Liu. 2007. “A Component Based Noise Correction Method (CompCor) for BOLD and Perfusion Based fMRI.” <em>NeuroImage</em> 37 (1): 90–101. <a href="https://doi.org/10.1016/j.neuroimage.2007.04.042" class="uri">https://doi.org/10.1016/j.neuroimage.2007.04.042</a>.</p>
</div>
<div id="ref-fmriprep2">
<p>Esteban, Oscar, Ross Blair, Christopher J. Markiewicz, Shoshana L. Berleant, Craig Moodie, Feilong Ma, Ayse Ilkay Isik, et al. 2018. “FMRIPrep.” <em>Software</em>. Zenodo. <a href="https://doi.org/10.5281/zenodo.852659" class="uri">https://doi.org/10.5281/zenodo.852659</a>.</p>
</div>
<div id="ref-fmriprep1">
<p>Esteban, Oscar, Christopher Markiewicz, Ross W Blair, Craig Moodie, Ayse Ilkay Isik, Asier Erramuzpe Aliaga, James Kent, et al. 2018. “fMRIPrep: A Robust Preprocessing Pipeline for Functional MRI.” <em>Nature Methods</em>. <a href="https://doi.org/10.1038/s41592-018-0235-4" class="uri">https://doi.org/10.1038/s41592-018-0235-4</a>.</p>
</div>
<div id="ref-mni">
<p>Fonov, VS, AC Evans, RC McKinstry, CR Almli, and DL Collins. 2009. “Unbiased Nonlinear Average Age-Appropriate Brain Templates from Birth to Adulthood.” <em>NeuroImage</em>, Organization for human brain mapping 2009 annual meeting, 47, Supplement 1: S102. <a href="https://doi.org/10.1016/S1053-8119(09)70884-5" class="uri">https://doi.org/10.1016/S1053-8119(09)70884-5</a>.</p>
</div>
<div id="ref-nipype1">
<p>Gorgolewski, K., C. D. Burns, C. Madison, D. Clark, Y. O. Halchenko, M. L. Waskom, and S. Ghosh. 2011. “Nipype: A Flexible, Lightweight and Extensible Neuroimaging Data Processing Framework in Python.” <em>Frontiers in Neuroinformatics</em> 5: 13. <a href="https://doi.org/10.3389/fninf.2011.00013" class="uri">https://doi.org/10.3389/fninf.2011.00013</a>.</p>
</div>
<div id="ref-nipype2">
<p>Gorgolewski, Krzysztof J., Oscar Esteban, Christopher J. Markiewicz, Erik Ziegler, David Gage Ellis, Michael Philipp Notter, Dorota Jarecka, et al. 2018. “Nipype.” <em>Software</em>. Zenodo. <a href="https://doi.org/10.5281/zenodo.596855" class="uri">https://doi.org/10.5281/zenodo.596855</a>.</p>
</div>
<div id="ref-bbr">
<p>Greve, Douglas N, and Bruce Fischl. 2009. “Accurate and Robust Brain Image Alignment Using Boundary-Based Registration.” <em>NeuroImage</em> 48 (1): 63–72. <a href="https://doi.org/10.1016/j.neuroimage.2009.06.060" class="uri">https://doi.org/10.1016/j.neuroimage.2009.06.060</a>.</p>
</div>
<div id="ref-fieldmapless2">
<p>Huntenburg, Julia M. 2014. “Evaluating Nonlinear Coregistration of BOLD EPI and T1w Images.” Master’s Thesis, Berlin: Freie Universität. <a href="http://hdl.handle.net/11858/00-001M-0000-002B-1CB5-A" class="uri">http://hdl.handle.net/11858/00-001M-0000-002B-1CB5-A</a>.</p>
</div>
<div id="ref-mcflirt">
<p>Jenkinson, Mark, Peter Bannister, Michael Brady, and Stephen Smith. 2002. “Improved Optimization for the Robust and Accurate Linear Registration and Motion Correction of Brain Images.” <em>NeuroImage</em> 17 (2): 825–41. <a href="https://doi.org/10.1006/nimg.2002.1132" class="uri">https://doi.org/10.1006/nimg.2002.1132</a>.</p>
</div>
<div id="ref-flirt">
<p>Jenkinson, Mark, and Stephen Smith. 2001. “A Global Optimisation Method for Robust Affine Registration of Brain Images.” <em>Medical Image Analysis</em> 5 (2): 143–56. <a href="https://doi.org/10.1016/S1361-8415(01)00036-6" class="uri">https://doi.org/10.1016/S1361-8415(01)00036-6</a>.</p>
</div>
<div id="ref-lanczos">
<p>Lanczos, C. 1964. “Evaluation of Noisy Data.” <em>Journal of the Society for Industrial and Applied Mathematics Series B Numerical Analysis</em> 1 (1): 76–85. <a href="https://doi.org/10.1137/0701007" class="uri">https://doi.org/10.1137/0701007</a>.</p>
</div>
<div id="ref-power_fd_dvars">
<p>Power, Jonathan D., Anish Mitra, Timothy O. Laumann, Abraham Z. Snyder, Bradley L. Schlaggar, and Steven E. Petersen. 2014. “Methods to Detect, Characterize, and Remove Motion Artifact in Resting State fMRI.” <em>NeuroImage</em> 84 (Supplement C): 320–41. <a href="https://doi.org/10.1016/j.neuroimage.2013.08.048" class="uri">https://doi.org/10.1016/j.neuroimage.2013.08.048</a>.</p>
</div>
<div id="ref-fieldmapless3">
<p>Treiber, Jeffrey Mark, Nathan S. White, Tyler Christian Steed, Hauke Bartsch, Dominic Holland, Nikdokht Farid, Carrie R. McDonald, Bob S. Carter, Anders Martin Dale, and Clark C. Chen. 2016. “Characterization and Correction of Geometric Distortions in 814 Diffusion Weighted Images.” <em>PLOS ONE</em> 11 (3): e0152472. <a href="https://doi.org/10.1371/journal.pone.0152472" class="uri">https://doi.org/10.1371/journal.pone.0152472</a>.</p>
</div>
<div id="ref-n4">
<p>Tustison, N. J., B. B. Avants, P. A. Cook, Y. Zheng, A. Egan, P. A. Yushkevich, and J. C. Gee. 2010. “N4ITK: Improved N3 Bias Correction.” <em>IEEE Transactions on Medical Imaging</em> 29 (6): 1310–20. <a href="https://doi.org/10.1109/TMI.2010.2046908" class="uri">https://doi.org/10.1109/TMI.2010.2046908</a>.</p>
</div>
<div id="ref-fieldmapless1">
<p>Wang, Sijia, Daniel J. Peterson, J. C. Gatenby, Wenbin Li, Thomas J. Grabowski, and Tara M. Madhyastha. 2017. “Evaluation of Field Map and Nonlinear Registration Methods for Correction of Susceptibility Artifacts in Diffusion MRI.” <em>Frontiers in Neuroinformatics</em> 11. <a href="https://doi.org/10.3389/fninf.2017.00017" class="uri">https://doi.org/10.3389/fninf.2017.00017</a>.</p>
</div>
<div id="ref-fsl_fast">
<p>Zhang, Y., M. Brady, and S. Smith. 2001. “Segmentation of Brain MR Images Through a Hidden Markov Random Field Model and the Expectation-Maximization Algorithm.” <em>IEEE Transactions on Medical Imaging</em> 20 (1): 45–57. <a href="https://doi.org/10.1109/42.906424" class="uri">https://doi.org/10.1109/42.906424</a>.</p>
</div>
</div>
</body>
</html>