\documentclass[a4paper,titlepage,openany]{article}
\usepackage{epsfig,amsmath,pifont,moreverb,multirow,multicol}
%\usepackage[scaled=.92]{helvet}
%\usepackage{newcent}
%\usepackage{bookman}
%\usepackage{utopia}
%\usepackage{avant}
%\usepackage{charter}
%\usepackage{mathpazo}
\renewcommand{\familydefault}{\sfdefault}
\usepackage[colorlinks=true,
pdfpagemode=UseOutlines,
pdftitle={SPM12 Release Notes},
pdfauthor={The SPM Developers},
pdfsubject={Statistical Parametric Mapping},
pdfkeywords={neuroimaging, MRI, PET, EEG, MEG, SPM}
]{hyperref}
\pagestyle{headings}
\bibliographystyle{plain}

\hoffset=15mm
\voffset=-5mm
\oddsidemargin=0mm
\evensidemargin=0mm
\topmargin=0mm
\headheight=12pt
\headsep=10mm
\textheight=240mm
\textwidth=148mm
\marginparsep=5mm
\marginparwidth=21mm
\footskip=10mm

\parindent 0pt
\parskip 6pt

\newcommand{\matlab}{\textsc{MATLAB}}

\begin{document}

\let\oldlabel=\label
\renewcommand{\label}[1]{{\pdfdest name {#1} fit}\oldlabel{#1}}

\newlength{\centeroffset}
\setlength{\centeroffset}{-0.5\oddsidemargin}
\addtolength{\centeroffset}{0.5\evensidemargin}
%\addtolength{\textwidth}{-\centeroffset}
\thispagestyle{empty}
\vspace*{\stretch{1}}
\noindent\hspace*{\centeroffset}\makebox[0pt][l]{
\begin{minipage}{\textwidth}
\flushright
\textbf{\Huge{SPM12 Release Notes}}
{\noindent\rule[-1ex]{\textwidth}{5pt}\\[2.5ex]}
\hfill{{\huge The FIL Methods Group} \\ {\LARGE (and honorary members)}\\}
%\vspace{20mm}
\end{minipage}
}
%\vspace{\stretch{2}}
\noindent\hspace*{\centeroffset}\makebox[0pt][l]{
\begin{minipage}{\textwidth}
\flushright
{\footnotesize
Functional Imaging Laboratory\\
Wellcome Trust Centre for Neuroimaging\\
Institute of Neurology, UCL\\
12 Queen Square, London WC1N 3BG, UK\\
\today\\
\url{http://www.fil.ion.ucl.ac.uk/spm/}}
\end{minipage}}

\vspace{10mm}

This is SPM12\footnote{\url{http://www.fil.ion.ucl.ac.uk/spm/software/spm12/}}, a major update to the SPM software, containing substantial theoretical, algorithmic, structural and interface enhancements over previous versions, as detailed in this document. Although SPM12 will read image files from previous versions of SPM, there are differences in the algorithms, templates and models used. Therefore, we recommend you use a single SPM version for any given project.

We would like to warmly thank everyone for testing, reporting bugs and giving feedback on the beta version. We are always interested to hear feedback and comments from SPM users -- please contact us at \href{mailto:fil.spm@ucl.ac.uk}{fil.spm@ucl.ac.uk}. If you happen to find any bugs, please report them at the same email address. Thank you!

SPM is free but copyright software, distributed under the terms of the GNU General Public Licence as published by the Free Software Foundation (either version 2, as given in file \texttt{LICENCE.txt}, or at your option, any later version)\footnote{\url{http://www.gnu.org/copyleft/}}. SPM is developed under the auspices of the Functional Imaging Laboratory (FIL), the Wellcome Trust Centre for Neuroimaging, in the Institute of Neurology at University College London (UCL), UK.

\vspace{10mm}

%\begin{multicols}{2}

\section{Software}

Updates will be made available from time to time at the following address and advertised on the SPM mailing list:

\qquad \url{http://www.fil.ion.ucl.ac.uk/spm/download/spm12_updates/}

A function called \texttt{spm\_update.m} can be used to detect and install updates when available. It is also accessible from the Help menu of the Graphics window.
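For example, updates can be detected and installed from the MATLAB command line; a minimal sketch, assuming network access (see the help text of \texttt{spm\_update.m} for the exact argument semantics):

\begin{verbatim}
% Report whether updates are available
spm_update

% Download and install the latest updates in place
spm_update(true)
\end{verbatim}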
As in previous versions of SPM, the file \texttt{spm\_defaults.m} contains a list of default values used by the software. If you want to customise some defaults for your installation, we recommend you do not modify this file, but rather create a file named \texttt{spm\_my\_defaults.m}, accessible from the MATLAB search path; it will automatically be picked up and used by SPM to override the corresponding default settings.

\subsection{File formats}

SPM12 uses the \href{http://nifti.nimh.nih.gov/nifti-1}{NIfTI-1} and \href{http://www.nitrc.org/projects/gifti/}{GIfTI} file formats for volumetric and surface-based data. The default format for images is now a single \texttt{.nii} file instead of a \texttt{.hdr/.img} pair. There is also preliminary support for \href{http://nifti.nimh.nih.gov/nifti-2}{NIfTI-2}.

\subsection{Batch interface}

A new folder, \texttt{batches}, contains a number of ready-made batches (i.e. pipelines). They automatically configure the batch interface to execute preprocessing for fMRI or VBM analyses, as well as first or second level statistical analyses, using consensus settings. To use them, just load them from the batch interface as you would with any other batch file you may have saved.

\subsection{MATLAB}

SPM12 is designed to work with MATLAB versions 7.4 (R2007a) to 8.3 (R2014a), and will not work with earlier versions. It is R2014b Graphics Ready and only requires core MATLAB to run (i.e. no toolboxes). It will be updated to handle future versions of MATLAB whenever necessary, until the release of the next major version of SPM.

A standalone version of SPM12 (compiled with the MATLAB Compiler) is also available\footnote{\url{http://en.wikibooks.org/wiki/SPM/Standalone}} -- it does not require a MATLAB licence and can be deployed royalty-free. It might therefore be particularly useful for teaching or for use on a computer cluster. However, it is not possible to read or modify the source code, use your own functions or install third-party toolboxes.

There is work in progress for compatibility with GNU Octave\footnote{\url{http://www.octave.org/}}. See ways you can contribute on the SPM wiki\footnote{\url{http://en.wikibooks.org/wiki/SPM/Octave}}.

\section{Temporal processing}

\subsection{Slice timing correction}

The slice timing correction implementation was extended by adding the option to use slice acquisition times instead of slice orders, and a reference time instead of a reference slice. In particular, this makes it possible to handle datasets acquired using multi-band sequences. This work was contributed by A. Hoffmann, M. Woletz and C. Windischberger from the Medical University of Vienna, Austria \cite{woletz2014st}.

\section{Spatial processing}

There have been changes to much of the functionality for spatially transforming images -- particularly with respect to inter-subject registration. This is a small step towards reducing SPM to a more manageable size \cite{ashburner2011spm}.

\subsection{Normalise}

Spatial normalisation is no longer based on minimising the mean squared difference between a template and a warped version of the image. Instead, it is now done via segmentation \cite{ashburner05}, as this provides more flexibility. For those of you who preferred the older way of spatially normalising images, this is still available via the ``Old Normalise'' Tool. However, the aim is to try to simplify SPM and eventually remove the older and less effective \cite{klein_evaluation} routines.

Deformation fields are now saved in a form that allows much more precise alignment. Rather than the old \texttt{sn.mat} format, they are now saved as \texttt{y\_*.nii} files, which contain three image volumes encoding the x, y and z coordinates (in mm) to which each voxel maps.
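Such a deformation can be inspected with the \texttt{nifti} object bundled with SPM; a minimal sketch, assuming a file \texttt{y\_subject1.nii} produced by the segmentation:

\begin{verbatim}
% Load a deformation field (filename assumed for illustration)
N = nifti('y_subject1.nii');
size(N.dat)                      % [dim1 dim2 dim3 1 3]

% Coordinate (in mm) to which the first voxel maps
xyz = squeeze(N.dat(1,1,1,1,:))
\end{verbatim}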
Note that for spatially normalising PET, SPECT and other images that have spatially correlated noise, it is a good idea to change the smoothness setting on the user interface (from 0 to about 5~mm).

\subsection{Segment}

The default segmentation has now been replaced by a slightly modified version of what was unimaginatively called ``New Segment'' in SPM8. For those of you who preferred the older way of segmenting images, this is still available via the ``Old Segment'' Tool. The aim, however, is to try to simplify SPM and eventually remove the older functionality that works less well.

Both implementations are based on the algorithm presented in \cite{ashburner05}, although the newer version makes use of additional tissue classes, allows multi-channel segmentation (e.g. of T2-weighted and PD-weighted images), and incorporates a more flexible image registration component. Changes to the SPM8 version of ``New Segment'' include different regularisation for the deformations, some different default settings, as well as re-introducing the re-scaling of the tissue probability maps (which was in the old segmentation routine, but not the new one).

In addition, the tissue probability maps were re-generated using the T2-weighted and PD-weighted scans from the IXI dataset\footnote{\url{http://www.brain-development.org/}}. This was initially done in an automated way (by enabling a hidden feature in \texttt{spm\_preproc\_run.m}, which allows the necessary sufficient statistics for re-generating the templates to be computed), with some manual editing of the results to tidy them up. Note that eyeballs are now included within the same class as CSF. Separating eyeballs from other non-brain tissue allows the nonlinear registration part to be made more flexible, but the disadvantage is that intra-cranial volumes are now fractionally more difficult to compute. However, the cleanup step (re-introduced from the old segmentation routine, and extended slightly) should allow eyeballs to be removed from the fluid tissue class.

\section{Atlas}

Maximum probability tissue labels derived from the ``MICCAI 2012 Grand Challenge and Workshop on Multi-Atlas Labeling''\footnote{\url{https://masi.vuse.vanderbilt.edu/workshop2012/index.php/Challenge_Details}} are available in files \texttt{tpm/labels\_Neuromorphometrics.\{nii,xml\}}. These data were released under the Creative Commons Attribution-NonCommercial (CC BY-NC) licence with no end date. Users should credit the MRI scans as originating from the OASIS project\footnote{\url{http://www.oasis-brains.org/}} and the labeled data as ``provided by Neuromorphometrics, Inc.\footnote{\url{http://neuromorphometrics.com/}} under academic subscription''. These references should be included in all workshop and final publications. See \texttt{spm\_templates.man} for more details about the generation of this file.
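These label maps can be read like any other NIfTI image using SPM's standard image I/O; a minimal sketch (variable names are illustrative):

\begin{verbatim}
% Read the maximum probability label volume
P = fullfile(spm('Dir'),'tpm','labels_Neuromorphometrics.nii');
V = spm_vol(P);           % image header
L = spm_read_vols(V);     % 3D array of integer region labels
% Region names and indices are listed in the accompanying XML file
\end{verbatim}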
\section{fMRI Statistics}

\subsection{Neuroimaging Data Model (NIDM)}

The Neuroimaging Data Model (NIDM)\footnote{\url{http://nidm.nidash.org/}} is a collection of specification documents that define extensions of the W3C PROV standard\footnote{\url{http://www.w3.org/TR/prov-primer/}} for the domain of human brain mapping. An implementation of the export of brain mapping statistical results using NIDM-Results \cite{Maumet2014} is available from the batch editor in SPM \textgreater\ Stats \textgreater\ Results Report \textgreater\ Print results \textgreater\ NIDM.

\subsection{Canonical Variates Analysis}

SPM's Canonical Variates Analysis (CVA) function, \texttt{spm\_cva.m}, has been updated so that it also computes Log Bayes Factors to make inferences about the number of significantly non-zero canonical vectors. It computes AIC and BIC approximations to these Log Bayes Factors. These quantities can be used in second-level analysis to make inferences about a group of subjects/sessions using SPM's random effects model selection tool (\texttt{spm\_BMS.m}).

\subsection{Regional Bayesian model comparison -- first level}

The function \texttt{spm\_vb\_regionF.m} now provides an approximation to the log model evidence of a first level fMRI model. Calling this function multiple times with different design matrices allows you to implement Bayesian model comparison. This is useful, for example, for comparing non-nested GLMs with different (sets of) parametric modulators. The approach can be applied to first level fMRI data from a local region. This is the recommended method for Bayesian comparison of General Linear Models of first level fMRI data.

\subsection{Interactive Bayesian model comparison -- second level}

Multidimensional (or two-sided) inferences about GLM parameters are now implemented using Savage-Dickey ratios. When you specify a multidimensional contrast for a GLM that has been estimated using the Bayesian option, SPM will produce log Bayes factor maps (in favour of the alternative versus the null model). This provides a Bayesian analogue of the $F$-test and allows one to implement Bayesian model comparisons in a truly interactive manner. This is the recommended method for Bayesian model comparison of second level data (fMRI or MEG/EEG). If you wish to create log Bayes factor maps in favour of the {\em null} model, you can use the MATLAB command line function \texttt{spm\_bms\_test\_null.m}.

Operationally, Bayesian inferences at the second level are implemented by first specifying a model, and then estimating it under the ``classical'' option. You must then re-estimate the model under the ``Bayesian'' option. Subsequently, when specifying new contrasts in the contrast manager, you can have some contrasts that are Bayesian (``t'' will provide inference about effect sizes, ``F'' will use the Savage-Dickey approach), and others that are classical. The first time you specify a contrast you decide which it is to be.

\subsection{Mixed-effects models}

The Mixed-Effects (MFX) approach described in \cite{karl_mixed} and implemented in \texttt{spm\_mfx.m} is now available via the batch interface. A new chapter in the SPM manual is devoted to this option. We do envisage, however, that the great majority of random-effects inferences will be implemented using the usual summary statistic (SS) approach (creating contrast images from SPM analyses of single subject data, and then entering these images as data into a group analysis). This is because the SS approach is highly computationally efficient and only becomes suboptimal when the within-subject variances differ by an order of magnitude (which could be caused, e.g., by one subject having an order of magnitude more events than another).
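As a reminder, the SS approach amounts to, for example, a one-sample t-test over per-subject contrast images; a minimal batch sketch, with placeholder paths and filenames:

\begin{verbatim}
% One-sample t-test over per-subject contrast images (paths assumed)
spm_jobman('initcfg');
scans = cellstr(spm_select('FPList','/data/group', ...
                           '^con_0001_.*\.nii$'));
matlabbatch{1}.spm.stats.factorial_design.dir = {'/data/group/stats'};
matlabbatch{1}.spm.stats.factorial_design.des.t1.scans = scans;
matlabbatch{2}.spm.stats.fmri_est.spmmat = {'/data/group/stats/SPM.mat'};
spm_jobman('run', matlabbatch);
\end{verbatim}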
\subsection{Set-level threshold-free tests}

Typically, in order to make inferences about set-level features (like the number of peaks) one has to make an arbitrary choice of threshold (to decide what constitutes a peak). This routine requires no arbitrary feature-defining threshold but is nevertheless sensitive to distributed or spatially extended patterns. It is based on comparing the intrinsic volumes (or Lipschitz-Killing curvatures, LKC) of the residual images with those of the test image \cite{Barnes2013}. In brief, we estimate the LKC for every residual field (which are assumed to be realisations of Gaussian random fields) and for the final SPM (which \emph{under the null hypothesis} is assumed to be a realisation of a central random field of $F$ or $t$ statistics). Under the null hypothesis these two measures of intrinsic volume should be the same, and so comparing these two sets of curvatures gives a direct and threshold-free test of whether the final SPM deviates from the null hypothesis. In other words, instead of using the LKC to assess the significance of an excursion set (like the number of maxima above a threshold), we assess the significance of the LKC measure per se, and evaluate its null distribution using the residual images, which have the same intrinsic volume but contain no treatment effect. Intuitively, we assess whether the numbers of peaks in the statistical field (testing for signal) and the residual fields (noise) are consistent or not. For example, if the SPM contains more maxima than it should under the null hypothesis, it would appear to have a greater intrinsic volume. This option is available from the batch editor in SPM \textgreater\ Stats \textgreater\ Set Level test.

\subsection{DCM}

\subsubsection{Reparameterisation of the bilinear model}

The bilinear model used by (single-state) DCM for fMRI has been upgraded. To summarise, a number of people have observed that Bayesian parameter averages (BPA) of self (intrinsic) connections can be positive, despite being negative for each subject or session. This is perfectly plausible behaviour -- in the sense that their prior expectation is $-1/2$ and each subject can provide evidence that the posterior is greater than the prior mean (but still negative). When this evidence is accumulated by BPA, the posterior can be `pushed' into positive values. The anomalous aspect of this behaviour rests on the fact that there is no negativity constraint on the self-connections.

In the revised code, the self-connections -- in the leading diagonal of the $A$ matrix -- now specify log scaling parameters. This means that these (and only these) parameters encode a self-connection of $-\tfrac{1}{2}\exp(A)$, where $A$ has a prior mean of $0$ and $-\tfrac{1}{2}\exp(0) = -\tfrac{1}{2}$. This re-parameterisation does not affect model inversion very much, but guarantees that the BPA is always negative, because $-\tfrac{1}{2}\exp(A) < 0$ for any $A$. The parameterisation of connection strengths in terms of log scale parameters is already used by two-state models for fMRI and all EEG (and MEG) DCMs.

\subsubsection{Bayesian model selection for group studies}

The function \texttt{spm\_BMS.m} has been updated to also return ``Protected Exceedance Probabilities'' (PXPs) \cite{Rigoux2014}. These can be interpreted as follows. We first define a null hypothesis, $H_0$, as the model frequencies being equal (if you have $K$ models, each is used in the population with a frequency of $1/K$). The alternative hypothesis, $H_1$, is that the model frequencies are different. It is then possible to compute the evidence for each hypothesis and the posterior probability that the null is true. This is known as the Bayes Omnibus Risk (BOR), and is also returned by \texttt{spm\_BMS.m}. The PXP is then given by the probability of the null times the model frequency under the null ($1/K$) plus the probability of the alternative times the XP. PXPs are more conservative than XPs and are protected against the possibility that the alternative hypothesis is not true (which is implicitly assumed in the computation of the XPs). The Bayesian Model Selection GUI now plots the PXPs and the BOR in an additional results window.
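In code, the new outputs are returned alongside the familiar ones; a minimal sketch, where \texttt{lme} is an (assumed) subjects $\times$ models matrix of log model evidences:

\begin{verbatim}
% lme: subjects x models matrix of log model evidences (assumed given)
[alpha, exp_r, xp, pxp, bor] = spm_BMS(lme);

% The protection follows from the Bayes Omnibus Risk: with K models,
%   pxp = (1 - bor)*xp + bor/K
K = size(lme,2);
max(abs(pxp - ((1 - bor)*xp + bor/K)))   % should be ~0
\end{verbatim}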
\subsubsection{Spectral DCM for resting state fMRI}

Spectral DCM is a new method developed with the intention of modelling BOLD sessions where there are no exogenous inputs (driving or modulatory), i.e. ``resting state'' sessions \cite{rsDCM2014}. That being said, the scheme is able to invert models that include driving inputs (e.g. sensory stimulation). This new DCM follows the previous framework by offering a mechanistic description of how distributed neural populations produce observed data.

The use of resting state fMRI is now widespread, particularly in attempts to characterise differences in functional connectivity between cohorts. This DCM attempts to quantify the effective connectivity in endogenous brain networks. Stochastic DCM for fMRI is similarly well suited to modelling resting state data, but makes limited assumptions about the neuronal fluctuations driving the system, creating a difficult inversion problem that is computationally intensive. Furthermore, when comparing groups of subjects, cohorts could potentially differ in coupling and/or the form of the endogenous fluctuations. Spectral DCM overcomes these problems by making stationarity assumptions and considering the cross-spectra of the observed BOLD data, as opposed to the original time series. This reduces the inversion to an estimation of the spectral density of the neuronal fluctuations ``driving'' the system (and of the observation noise). This permits a more efficient estimation (inversion taking seconds to minutes), yielding estimates of coupling (as with all DCMs), as well as potentially useful parameters describing the endogenous neuronal fluctuations. Stationarity assumptions preclude the inclusion of modulatory inputs, i.e. changes in coupling caused by experimental manipulation; for this, investigators should explore stochastic DCM for fMRI. In summary, this DCM is suited to investigators looking to compare the coupling in an endogenous (or ``resting state'') network between two groups of subjects (e.g. patients and controls).

\section{EEG/MEG}

\subsection{\texttt{@meeg} object}

The following \texttt{@meeg} methods have been removed to simplify and rationalise the object interface (usage of the replacements is sketched after this section):

\begin{itemize}
\item \texttt{pickconditions} -- replaced with \texttt{indtrial}
\item \texttt{meegchannels}, \texttt{ecgchannels}, \texttt{emgchannels}, \texttt{eogchannels} -- replaced with \texttt{indchantype}
\item \texttt{reject} -- replaced with \texttt{badtrials}
\end{itemize}

It is now possible to have \texttt{@meeg} objects without a linked data file, which made it possible to simplify the conversion code. Also, in \textsc{Prepare} one can load just the header of a raw data file and use it to prepare inputs for the full conversion batch (e.g. select channels or prepare a trial definition for epoching during conversion).
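For instance, with a dataset loaded, the replacement methods are used as follows (filename and condition label assumed for illustration):

\begin{verbatim}
D = spm_eeg_load('subject1.mat');    % assumed filename
eegchans = indchantype(D, 'EEG');    % indices of EEG channels
trials   = indtrial(D, 'Standard');  % trials of an assumed condition
bad      = badtrials(D);             % indices of trials marked as bad
\end{verbatim}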
\subsection{Preprocessing}

The preprocessing functions now use the SPM batch system and the interactive GUI elements have been removed. This should make it easy to build processing pipelines that perform complete, complicated data analyses without programming.

The use of the batch system has many advantages, but it can also complicate some operations, because a batch must be configured in advance and cannot rely on information available in the input file. For instance, the batch tool cannot know the channel names for a particular dataset and thus cannot generate a dialogue box for the user to choose the channels. To facilitate the processing steps requiring this kind of information, additional functionality has been added to the \textsc{Prepare} tool under the \textsc{Batch inputs} menu. One can now make the necessary choices for a particular dataset using an interactive GUI, save the results in a MAT-file, and use this file as an input to the batch.

It is still possible to call the preprocessing functions from scripts, and scripts can be generated using the \texttt{History2script} functionality, as in SPM8. The input structures and parameter names of the preprocessing functions have been standardised, so that e.g. the time window is always called \texttt{S.timewin} and the units of peristimulus time are always ms. Also, substructures of \texttt{S} have been removed, except where they are functionally essential.

Simple GUI functionality for converting a variable in the workspace into an SPM M/EEG dataset has been added to \textsc{Prepare} (File/Import from workspace).

For MEG systems with planar gradiometers (e.g. Neuromag) there is a new tool for combining planar channels and writing the results back into an M/EEG file. This allows further processing of combined channels, which was impossible in SPM8, e.g. baseline correction or TF rescaling.

Conversion to images was completely rewritten. It is now possible to easily create 1D images and to average across any set of dimensions to reduce the number of multiple comparisons.

\subsection{Online montage}

The ``online montage'' facility lets you apply any type of channel montage (re-referencing, subsampling of channels, mixing as in ICA, etc.) to your data without having to write the result explicitly to disk. A series of such montages can be added to your data and switched on and off at will. This can be particularly useful to visually review your data (e.g. on a subset of channels), visualise the effect of applying such a montage (e.g. two types of re-referencing) or perform specific operations on mixed data (e.g. feature detection).

\subsection{Convolution modelling}

An implementation of the recently published convolution modelling method for M/EEG power \cite{Litvak_ConvModel_2013} has been added (under the ``Specify 1st-level'' button). It is now possible to apply TF analysis to continuous data and save continuous TF datasets to be used as input to this tool.

\subsection{Beamforming}

The functions previously available in the ``Beamforming'' toolbox in SPM8 have been superseded by the new Data Analysis in Source Space (DAiSS) toolbox, which is not included in SPM by default and can be obtained from \url{http://code.google.com/p/spm-beamforming-toolbox/}.

\subsection{Surface-based statistics}

SPM's GLM implementation (in \texttt{spm\_spm.m}) has been modified to handle both volumetric and surface-based data; input files can then be specified in NIfTI or GIfTI format. So far, this is used when making inference at the source level for M/EEG data.
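Surface-based data files can be inspected with the \texttt{@gifti} object bundled with SPM; a minimal sketch (filename hypothetical):

\begin{verbatim}
g = gifti('beta_0001.gii');   % hypothetical per-vertex data file
d = g.cdata;                  % per-vertex data values
\end{verbatim}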
\subsection{DCM}

\begin{itemize}
\item The routine evaluating cross spectra (\texttt{spm\_dcm\_data\_csd}) now performs a moving window cross spectral analysis (based on an eighth-order MAR model) to remove (nonstationary) fluctuations in the cross spectra. This is achieved by performing a singular value decomposition on the time-dependent cross spectra and retaining only the principal spectral mode.
\item The data features used for inverting dynamic causal models of cross spectral density now include both the cross spectra per se and the cross covariance functions. These are simply concatenated to provide a greater latitude of data features to compute free energy gradients. Heuristically, the cross spectra inform gradients that affect low frequencies, while the covariance functions allow high frequencies to be fitted gracefully. This means that any frequency-dependent precision can be removed.
\item The inversion routines for event related potentials (\texttt{spm\_dcm\_erp}) and complex cross spectra (\texttt{spm\_dcm\_csd}) now use more precise (hyper)priors on data feature noise (with an expected log precision \texttt{hE} of eight). Effectively, this means that, a priori, we expect these data features to be essentially noiseless -- because they accumulate information from long time series, with many degrees of freedom.
\item To ensure the quantitative veracity of the hyperpriors, the data are scaled to have a maximum amplitude of one (for evoked responses) or a variance of 16 (for cross spectral density analysis). The scaling of the exogenous and endogenous input (\texttt{U}) in the equations of motion has also been adjusted, to ensure that the neural mass and mean field models used in DCM produce an ERP with a maximum height of about 1 and autospectra with about unit variance.
\item The amplitude of evoked responses, and the spectral responses (shown in terms of autospectra and covariance functions), can now be visualised -- for all models -- using the ``survey of models'' button in the \texttt{Neural\_models} demo.
\end{itemize}

\section{Utilities}

\subsection{DICOM Import}

The DICOM dictionary has been updated to reflect changes to the standard over the last decade or so. It is now based on the 2011 edition\footnote{\url{http://medical.nema.org/standard.html}}.

\subsection{Deformations}

The deformations utility was completely re-written to provide additional flexibility. This was largely to facilitate the re-write of what lies behind the ``Normalise'' button.

\subsection{De-face Images}

A simple utility is available for attempting to strip the face from images, so that individuals are more difficult to identify from surface renderings.

\section{Tools}

\subsection{Dartel}

Much of the C code (in the mex functions that do most of the work in Dartel) has been extensively re-written to make it work more effectively, and to provide a framework on which to base the ``Shoot'' and ``Longitudinal Registration'' toolboxes.

\subsection{Shoot}

This toolbox is based on the work in \cite{ashburner2011diffeomorphic}, and is a diffeomorphic registration approach similar to Dartel, although much more theoretically sound. Evaluations show that it achieves more robust solutions in situations where larger deformations are required. The eventual plan is to replace Dartel with this toolbox, although more work needs to be done in terms of user interfaces etc.
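For orientation, a compact statement of the shooting idea from \cite{ashburner2011diffeomorphic}: the deformation is the endpoint of a flow
\[
\dot{\varphi}_t = v_t(\varphi_t), \qquad \varphi_0 = \mathrm{Id},
\]
where the velocity $v_t$ evolves from its initial value $v_0$ according to the geodesic equations, so the entire diffeomorphism is parameterised by $v_0$ alone.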
\subsection{Longitudinal Registration}

SPM12 incorporates a new longitudinal registration approach \cite{ashburner2013symmetric}, which replaces the old ``high-dimensional warping'' toolbox. It now involves a group-wise intra-subject modelling framework, which combines diffeomorphic \cite{ashburner2011diffeomorphic} and rigid-body registration, incorporating a correction for the intensity inhomogeneity artefact usually seen in MRI data. Recently, systematic bias in longitudinal image registration has become a more notable issue. The aim of this toolbox is to estimate volume changes that do not depend on whether the first time point is aligned to the second, or vice versa.

\subsection{Old Segment}

The default segmentation approach in SPM12 is now based on what was known as ``New Segment'' in SPM8. This toolbox keeps the old segmentation approach available to those who wish to continue using it, although we plan to eventually phase out the older approach.

\subsection{Old Normalise}

The default spatial normalisation approach in SPM12 is now based on what was known as ``New Segment'' in SPM8. This toolbox keeps the old normalisation approach available to those who wish to continue using it, although we plan to eventually phase out the older approach. See \cite{klein_evaluation} for evidence of how poorly the old normalisation approach works. It was definitely time for it to go.

%\section{Bibliography}
\bibliography{biblio/methods_macros,biblio/methods_group,biblio/external}

%\end{multicols}

\end{document}