
MIND: Modality Independent Neighbourhood Descriptor for Multi-Modal Deformable Registration

Mattias P. Heinrich a,b,∗, Mark Jenkinson b, Manav Bhushan a,b, Tahreema Matin d, Fergus V. Gleeson d, Sir Michael Brady c, Julia A. Schnabel a

a Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, UK
b Oxford University Centre for Functional MRI of the Brain, UK
c Department of Oncology, University of Oxford, UK
d Department of Radiology, Churchill Hospital, Oxford, UK

Abstract

Deformable registration of images obtained from different modalities remains a challenging task in medical image analysis. This paper addresses this important problem and proposes a modality independent neighbourhood descriptor (MIND) for both linear and deformable multi-modal registration. Based on the similarity of small image patches within one image, it aims to extract the distinctive structure in a local neighbourhood, which is preserved across modalities. The descriptor is based on the concept of image self-similarity, which has been introduced for non-local means filtering for image denoising. It is able to distinguish between different types of features such as corners, edges and homogeneously textured regions. MIND is robust to the most considerable differences between modalities: non-functional intensity relations, image noise and non-uniform bias fields. The multi-dimensional descriptor can be efficiently computed in a dense fashion across the whole image and provides point-wise local similarity across modalities based on the absolute or squared difference between descriptors, making it applicable for a wide range of transformation models and optimisation algorithms. We use the sum of squared differences of the MIND representations of the images as a similarity metric within a symmetric non-parametric Gauss-Newton registration framework. In principle, MIND would be applicable to the registration of arbitrary modalities.
In this work, we apply and validate it for the registration of clinical 3D thoracic CT scans between inhale and exhale, as well as the alignment of 3D CT and MRI scans. Experimental results show the advantages of MIND over state-of-the-art techniques such as conditional mutual information and entropy images, with respect to clinically annotated landmark locations.
Keywords: Non-rigid registration, multi-modal similarity metric, self-similarity, non-local means, computed tomography, magnetic resonance imaging, pulmonary images

∗ Corresponding author. Email address: [email protected] (Mattias P. Heinrich). URL: http://users.ox.ac.uk/~shil3388/
Preprint submitted to Medical Image Analysis, September 29, 2014

1. Introduction

The aim of medical image registration is to find the correct spatial mapping of corresponding anatomical or functional structures between images. Patient motion, due to different positioning or breathing level, and pathological changes between scans may cause non-rigid deformations, which need to be compensated for. Advances in recent years have resulted in a number of robust and accurate methods for deformable registration of scans of the same modality, with registration accuracies close to the scan resolution (as demonstrated in an evaluation study of lung registration, Murphy et al. (2011)). However, the registration of images from different modalities remains a challenging and active area of research. Alignment of multi-modal images helps to relate clinically relevant and often complementary information from different scans. For example, it can be used in image guided interventions. Using multi-modal images can also help a clinician to make use of the complementary information present in different modalities and improve the diagnostic task. One common clinical application is the registration of computed tomography (CT) and magnetic resonance imaging (MRI), as it can combine the good spatial resolution and dense tissue contrast of CT with the better soft tissue contrast of MRI.

In addition to the geometric distortion caused by patient motion, multi-modal registration also has to deal with intensity distortions. Due to the different physical phenomena that are measured by the different modalities, there is no functional relation between the intensity mapping of corresponding anatomies. This problem can be addressed using geometric registration approaches, which aim to match a sparse set of descriptors, such as the scale invariant feature transform (SIFT) (Lowe (1999)) or gradient location and orientation histograms (GLOH) (Mikolajczyk and Schmid (2005)), which are to some extent invariant to changes of intensity (or illumination) since they rely on image gradients and local orientations. However, they have not been successfully applied to multi-modal images, where the intensity variations are more severe. Voxel-wise intensity based registration can also be used to align multi-modal images. This requires the use of a similarity metric derived from the image intensities that is robust to the non-functional intensity relation across modalities.

Mutual information (MI), first introduced by Maes et al. (1997) and Viola and Wells III (1997), is an information theoretic measure, which aims to find a statistical intensity relationship across images and thereby maximises the amount of shared information between two images. For the rigid alignment of multi-modal images, MI has been very successful and is widely used (an overview is given in Pluim et al. (2003)). Its application to deformable multi-modal registration comes with many difficulties, and several weaknesses have been identified. The main disadvantage is that MI is intrinsically a global measure and therefore its local estimation is difficult, which can lead to many false local optima in non-rigid registration. Moreover, the optimisation of mutual information for non-rigid registration is computationally complex and converges more slowly than simpler intensity metrics, such as the sum of squared differences (SSD), calculated over the intensities directly. Consequently, a new approach to deformable multi-modal registration has emerged, which uses a different scalar representation of both images based on a modality independent local quantity, such as local phase, gradient orientation or local entropy (Mellor and Brady (2005), Haber and Modersitzki (2006), Wachinger and Navab (2012)). These approaches benefit from their attractive properties for the optimisation of the cost function, since the point-wise (squared) differences can be used to minimise differences between the image representations. For challenging multi-modal scans it is, however, not always possible to find a scalar representation that is sufficiently discriminative.

In this article, we introduce a new concept for deformable multi-modal registration using a highly discriminative, multi-dimensional image descriptor, called the modality independent neighbourhood descriptor (MIND), which can be efficiently computed in a dense manner over the images and optimised using SSD. We make use of the concept of local self-similarity, which has been exploited in many different areas of image analysis, such as denoising (Buades et al. (2005)), super-resolution (Manjon et al. (2008)), image retrieval (Hörster and Lienhart (2008)), detection (Shechtman and Irani (2007)) and segmentation (Coupé et al. (2010)). It allows the formulation of an image descriptor, which is independent of the particular intensity distribution across two images and still provides a very good representation of the local shape of an image feature. It is based on the assumption that even though the intensity distribution of an anatomical structure may not correspond across modalities, it is reliable within a local neighbourhood in the same image. Therefore descriptors based on a simple intensity based metric, like SSD, can be extracted for each modality separately and then directly compared across images. The overview of our approach is schematically shown in Fig. 1. We first extract a dense set of high-dimensional image descriptors for both images independently, based on the intensity differences within a search region around each voxel in the same modality. We embed this in a standard non-rigid registration framework to optimise the transformation parameters using a single-modal similarity metric (SSD), in order to compare descriptors across the two images.

[Figure 1: Proposed concept for the use of MIND for multi-modal registration. MIND is calculated in a dense manner in CT and MRI. Three exemplary locations with different image features: homogeneous intensities (liver), corner points at one vertebra, and image gradients at the boundary between fat and non-fat tissue are shown. The corresponding descriptors (in coloured boxes, high intensities correspond to small patch distances) are independent of the respective modality and can be easily compared using the L2 norm.]

This article extends our earlier work (Heinrich et al. (2011)) by using a more principled derivation of this image descriptor, thus making it more robust to changes in local noise and contrast and therefore allowing for the use of the L2 norm to compare descriptors across modalities. We also present a more thorough evaluation including quantitative comparisons to more recent multi-modal similarity metrics.

This paper is structured as follows: Section 2 presents an overview of related work in deformable multi-modal registration, as well as examples of the use of image self-similarity in the literature. This includes a brief review of two recent techniques, conditional mutual information and entropy images, against which the proposed technique will be compared. Section 3 describes the formulation and implementation of MIND, demonstrating its sensitivity to different types of image features, such as corner points, edges and homogeneous areas, and their local orientation. Details of its efficient implementation are presented, which greatly reduces the computational complexity by using convolution filters. The rigid and deformable registration framework used in the experiments, which is based on a multi-resolution Gauss-Newton optimisation, is presented in Section 4. Section 5 shows an evaluation of the robustness and accuracy of the presented method, first for the task of landmark detection in multi-modal 3D datasets under the influence of intensity distortions, then for deformable registration of CT lung scans, and finally on the clinical application of the alignment of volumetric CT and MRI scans of patients suffering from the lung disease empyema. The method's performance is quantitatively evaluated using gold standard landmarks localised by a clinical radiologist. Finally, the results are discussed and future research directions are given.

2. Related work

2.1. Mutual information

Mutual information (MI) is derived from information theory and measures the statistical dependency of two random variables. It was first introduced to medical image registration for the rigid alignment of multi-modal scans by Maes et al. (1997) and Viola and Wells III (1997), and later used successfully in a variety of applications, including deformable registration (Rueckert et al. (1999), Meyer et al. (1997)). Studholme et al. (1999) introduced normalised mutual information (NMI) to cope with the effect of changing image overlap on MI. It is based on the assumption that a lower entropy of the joint intensity distribution corresponds to a better alignment.

An important disadvantage of mutual information for image registration is that it ignores the spatial neighbourhood of a particular voxel within one image and consequently, it does not use the spatial information shared across images. In the presence of image intensity distortions, such as a non-stationary bias field in an MRI scan, this can deteriorate the quality of the alignment, especially in the case of non-rigid registration, where the geometric constraints of the transformation are relaxed compared to rigid body alignment.
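As a concrete illustration of this histogram-based estimation, the sketch below computes MI from a discretised joint histogram. The function name, bin count and intensity normalisation are our own choices for a toy 1D example, not the implementation evaluated in this paper:

```python
import math

def mutual_information(img_a, img_b, bins=8):
    """Estimate MI of two images from their joint intensity histogram.

    img_a, img_b: flat lists of intensities normalised to [0, 1).
    """
    assert len(img_a) == len(img_b)
    joint = [[0] * bins for _ in range(bins)]
    for a, b in zip(img_a, img_b):
        joint[int(a * bins)][int(b * bins)] += 1
    n = len(img_a)
    # marginal distributions p(I) and p(J)
    p_a = [sum(row) / n for row in joint]
    p_b = [sum(col) / n for col in zip(*joint)]
    mi = 0.0
    for i in range(bins):
        for j in range(bins):
            p_ab = joint[i][j] / n
            if p_ab > 0.0:
                mi += p_ab * math.log(p_ab / (p_a[i] * p_b[j]))
    return mi
```

An identical pair of images yields the marginal entropy, while an image compared with a constant one shares no information; since MI is bounded by the marginal entropies, this also motivates the normalisation used by NMI.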
One approach to overcome this problem is to include spatial information into the joint and marginal histogram computation. In Rueckert et al. (2000) a second-order mutual information measure is defined, which extends the joint entropy estimation to the spatial neighbours of a voxel and therefore uses a 4D histogram, where the third and fourth dimensions define the probability of the spatial neighbours of a voxel having a certain intensity. A problem that arises here is the curse of dimensionality, meaning that a lot of samples are needed to populate the higher-dimensional histogram. The authors therefore limit the number of intensity bins to 16, which might again decrease the accuracy. Three more recent approaches of MI including spatial context can be found in (Yi and Soatto (2011)), (Heinrich et al. (2012)) and (Zhuang et al. (2011)).

2.1.1. Pointwise normalised mutual information

In (Hermosillo et al. (2002), Rogelj et al. (2003)), variants of mutual information that yield a pointwise similarity metric have been proposed. For the implementation of NMI as a comparison method, the approach of (Rogelj et al. (2003)) is used in this work. The joint and marginal histograms p of two images I and J are obtained in a conventional manner by summing up the contribution of all intensity pairs to one global histogram. The local contribution NMI(x) for each voxel x ∈ Ω can then be obtained as:

NMI(x) = ( log p(I(x)) + log p(J(x)) ) / log p(I(x), J(x))    (1)

Alternatively, a local joint histogram estimation could be used, which however would limit the number of samples and would require more sophisticated histogram strategies like non-parametric windows (Dowson et al. (2008)), which are computationally extremely demanding for 3D volumes. A simplified computation for this technique was recently presented by Joshi et al. (2011).

2.1.2. Conditional mutual information

A number of disadvantages of using the traditional global MI approach have been analysed by Loeckx et al. (2010), Haber and Modersitzki (2006), and Studholme et al. (2006). These lie mainly in the sensitivity of MI (or NMI) to non-uniform bias fields in MRI. These can often be explained by the lack of spatial information in the joint histogram calculation. Different approaches have been proposed to include spatial context into MI, as mentioned above. Studholme et al. (2006) introduce a third channel to the joint histogram containing a spatial or regional label. In this work, the recent approach called conditional mutual information (CMI), as introduced by Loeckx et al. (2010), is used for comparison purposes. In this technique, a third dimension is added to the joint histogram and a second dimension is added to the marginals, representing the regional location of an intensity pair. The image is subdivided into a number of overlapping regions and each intensity pair only contributes to its specific regional histograms. A number of anchor points are evenly distributed on the image grid. Each voxel in a 3D volume is then attributed to its 8 nearest anchor points, and its contribution to the regional label r(x) is weighted by the reciprocal spatial distance w(I(x), J(x), x) to it. CMI is then defined as:

CMI = Σ_{x∈Ω} w(I(x), J(x), r(x)) log( p(I(x), J(x) | r(x)) / ( p(I(x) | r(x)) p(J(x) | r(x)) ) )    (2)

In (Loeckx et al. (2010)) it was shown that this reduces the negative influence of bias fields and yields a higher registration accuracy for a small number of realistic test cases. The drawbacks lie again in the difficulty of populating this 3D histogram, and in the fact that corresponding anatomical structures, which are spatially further apart, are ignored.
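The regional-histogram idea behind CMI can be sketched as follows. For simplicity, this toy version conditions on a hard regional label per voxel rather than the distance-weighted soft assignment to the 8 nearest anchor points used by Loeckx et al. (2010); all names and parameters are illustrative:

```python
import math

def conditional_mi(img_a, img_b, labels, bins=4, n_regions=2):
    """CMI-style score: sum over regions r of p(r) * MI(A, B | r).

    labels assigns each voxel a hard regional label in [0, n_regions).
    """
    n = len(img_a)
    total = 0.0
    for r in range(n_regions):
        idx = [k for k in range(n) if labels[k] == r]
        if not idx:
            continue
        # regional joint histogram: intensity pairs only meet within a region
        joint = [[0] * bins for _ in range(bins)]
        for k in idx:
            joint[int(img_a[k] * bins)][int(img_b[k] * bins)] += 1
        m = len(idx)
        p_a = [sum(row) / m for row in joint]
        p_b = [sum(col) / m for col in zip(*joint)]
        mi = 0.0
        for i in range(bins):
            for j in range(bins):
                p_ab = joint[i][j] / m
                if p_ab > 0.0:
                    mi += p_ab * math.log(p_ab / (p_a[i] * p_b[j]))
        total += (m / n) * mi
    return total
```

Because each region keeps its own histogram, an intensity relation that flips between regions (as a bias field effectively causes) does not lower the score, unlike with a single global histogram.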
2.2. Structural representation

A very different approach to multi-modal image registration is the use of a structural representation, which is assumed to be independent of a certain modality. One can then use a simple intensity-based measure across image representations. Using image gradients directly would not be representative across modalities, but the use of the local gradient orientation is possible and has been used in (Pluim et al. (2000)) for rigid registration and in (Haber and Modersitzki (2006), Heinrich et al. (2010)) and (De Nigris et al. (2010)) for deformable registration. In (Mellor and Brady (2005)), the local phase of the image was extracted using a technique called the monogenic signal, and further used for registration. However, in their work mutual information was used between phase images, which implies that there was still no direct dependency between the representations of different modalities. Our approach is different in that not a scalar representation, but a vector-valued image descriptor is derived for each voxel.

2.2.1. Entropy images

Local patch-based entropy images have been proposed by Wachinger and Navab (2012), which were then minimised using SSD across modalities, achieving similar registration accuracy as mutual information for rigid multi-modal registration and some synthetic non-rigid experiments. The basic assumption that drives the registration based on entropy images is that intensity changes occur at the same locations in different modalities. An entropy image is produced by firstly calculating histograms of small image patches. The size p and weighting Cσ of the local patches is of great importance. The entropy value E(x) for each voxel is then obtained using a Parzen window smoothing of the histogram, from which the Shannon entropy is calculated. According to (Wachinger and Navab (2012)), the number of intensity bins for non-rigid registration should be sufficiently small to ensure a well populated local histogram, which however reduces the sensitivity to small intensity changes. A problem with this approach can be a changing level of noise within and across images, which in turn would influence the entropy calculation. The high complexity (p^d per voxel, where d is the dimension of the image) of the entropy image calculation could potentially be reduced using a convolution kernel for the contribution of each individual voxel to all neighbouring voxels within the size of a patch.

2.3. Self-similarity

Our approach uses the principle of self-similarity, a concept which was first introduced in the domain of image denoising by Buades et al. (2005). These authors make use of similar image patches across a noisy image to obtain a noise-free pixel, which is computed as a weighted average of all other pixels in the image. The weights w(i, j) used for the averaging are based on the sum of squared differences between the patch, which surrounds the pixel of interest, and all other patches in the image. The denoised pixels NL(i, J) are then calculated using the following equation:

NL(i, J) = Σ_{j∈N} w(i, j) J(j)    (3)

where N is the neighbourhood of i. The approach demonstrated a very good performance for image denoising. The use of patches to measure similarity based on the weights w(i, j) within the same image can easily capture a variety of image features, because it treats regions, edges, corners and textures in a unified way and is thus much more meaningful than using single intensities. In subsequent work, this approach was simplified to search for similar patches only within a smaller non-local search region (Coupé et al. (2006)). Figure 1 gives an example of how well the self-similarity pattern can describe the local structure around an image location. Mainly because of this property, the concept has been used later on in a variety of applications. Of particular interest is the application to object localisation by Shechtman and Irani (2007). Here, a correlation surface is extracted using colour patch distances and then stored in a log-polar histogram, which can be matched across images using the L1 norm.

3. Modality independent neighbourhood descriptor

In this section we will present the modality independent neighbourhood descriptor (MIND) and its use to define the similarity between two images based on the SSD of their descriptors. First we motivate the use of image self-similarity for the construction of an image descriptor. We will then propose the definition of self-similarity by using a Gaussian-weighted patch distance and explain the spatial capture range of the descriptor.

3.1. Motivation and Concept

Our aim is to find an image descriptor, which is independent of the modality, contrast and noise level of images from different modalities and at the same time sensitive to different types of image features. Our approach is based on the assumption that a local representation of image structure, which can be estimated through the similarity of small image patches within one modality, is shared across modalities. As mentioned before, many different features may be used to derive a similarity cost function for image registration, such as corner points, edges, gradients, textures or intensity values. Figure 1 shows some examples on two slices of a CT and an MRI volume.

Most intensity based similarity metrics employ only one of these features or need to define a specific combination of different features and a weighting between them. Image patches have been shown to be sensitive to very different types of image features including edges, points and texture. Using patches for similarity calculations also removes the need for a feature specific weighting scheme. However, they are limited to single-modal images. In our approach, a multi-dimensional image descriptor, which represents the distinctive image structure in a local neighbourhood, is extracted based on patch distances for both modalities separately and afterwards compared using simple single-modal similarity measures.
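Both non-local means filtering (Eq. 3) and the patch-distance descriptors described above rest on the same self-similarity weights. The following toy 1D sketch uses a made-up smoothing parameter h in place of a principled variance estimate; it illustrates the idea, not the filter of Buades et al.:

```python
import math

def nl_means_1d(signal, patch_radius=1, h=0.5):
    """Denoise each sample as a self-similarity-weighted average (cf. Eq. 3).

    h is an illustrative smoothing parameter controlling the decay of the
    patch-distance weights w(i, j) = exp(-SSD(i, j) / h^2).
    """
    n = len(signal)
    out = []
    for i in range(patch_radius, n - patch_radius):
        weights, values = [], []
        for j in range(patch_radius, n - patch_radius):
            # sum of squared differences between the patches around i and j
            ssd = sum((signal[i + p] - signal[j + p]) ** 2
                      for p in range(-patch_radius, patch_radius + 1))
            weights.append(math.exp(-ssd / h ** 2))
            values.append(signal[j])
        norm = sum(weights)
        out.append(sum(w * v for w, v in zip(weights, values)) / norm)
    return out
```

Samples surrounded by similar patches receive large weights and dominate the average, which is exactly the behaviour a self-similarity descriptor exploits to characterise local structure.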
for V yields a sharply decaying function, and higher values in- MIND can be generally defined by a distance Dp, a variance360 dicate a broader response. The parameter has to be related to estimate V and a spatial search region R: the amount of noise in the image. The variance of the image noise can be estimated via pseudo-residuals  calculated using MIND(I, x, r) = 1 exp − a six-neighbourhood N (see Coup´e et al. (2008)): where n is a normalisation constant (so that the maximum value is 1) and r ∈ R defines the search region. By using MIND, an image will by represented by a vector of size R at each location is averaged over the whole image domain Ω to obtain a con- stant variance measure V(I, x) = 1 P . This however in- creases the sensitivity of the image descriptors to spatially vary- 3.1.1. Patch-based distance ing noise. Therefore a locally varying function would be ben- To evaluate Eq. 4 we need to define a distance measure be-368 eficial. A better way of determining V(I, x) is to use the mean tween two voxels within the same image. As mentioned before,369 of the patch distances themselves within a six-neighbourhood image patches offer attractive properties and are sensitive to the370 n ∈ N: three main image features: points, gradients and uniformly tex- tured regions. Therefore the straightforward choice of a dis- p(x1 x2) between two voxels x1 and x2 is the sum of squared differences (SSD) of all voxels between the two371 Using this approach (Eq. 8), MIND can be automatically cal- patches P of size (2p + 1)d (with image dimension d) centred at372 culated without the need for any additional parameters.
Exemplary responses of the obtained descriptors for three different image features for both CT and MRI are shown in Fig.
p(I, x1 x2) = X(I(x1 + p) − I(x2 + p))2 1 (second and third row on the right), where a high intensity corresponds to a small patch distance. Fig. 1 demonstrates how well descriptors represent these features independent of modal- The distance value defined in Eq. 5 has to be calculated for all378 ity.
voxels x in the image I and all search positions r ∈ R. The na¨ıve solution (which is e.g. used in Coup´e et al. (2006)) would 3.3. Spatial search region require 3(2p + 1)d operations per voxel and is therefore compu- tationally very expensive.
An important issue using MIND is the spatial extent of the search region (see R in Eq. 4) for which the descriptor is cal- We propose an alternative solution to calculate the exact culated. In the original work of Buades et al. (2005), self- patch-distance very efficiently using a convolution filter C of similarity was defined across the whole image domain, thus size (2p + 1)d. First a copy of the image I0 is translated by r coining the term: "non-local filtering". For the use in object de- yielding I0(r). Then the point-wise squared difference between tection, Shechtman and Irani (2007) used a sparse ensemble of I and I0(r) is calculated. Finally, these intermediate values are self-similarity descriptors calculated with a search radius of 40 convolved with the kernel C, which effectively substitutes the pixels, which was stored in a log-polar histogram. For the use of SSD summation in Eq. 5: MIND in image registration, however, a smaller search region Dp(I, x, x + r) = C ? (I − I0(r))2 is sufficient. This can be explained by the prior knowledge of smooth deformations, which are enforced by the regularisation This procedure is now repeated for all search positions r ∈ R.391 term of most deformable registration algorithms. We will define The solution of Eq. 6 is equivalent to the one obtained using392 three different types of spatial sampling for the spatial search re- Eq. 5. Using this method it is also easily possible to include393 gion R: dense sampling, sparse sampling (rays of 45 degrees), a Gaussian weighting of the patches by using a Gaussian ker-394 and a six-neighbourhood. Figure 2 illustrates these configura- nel Cσ of size (2p + 1)d. The computational complexity per395 tions, where the red voxel in the centre is the voxel of interest,patch distance calculation is therefore reduced from (2p + 1)d and all gray voxels define R. 
The computational complexity is to d(2p + 1) for an arbitrary separable kernel and 3d for a uni-397 directly proportional to the number of sampled displacements, form patch weighting. A similar procedure has been proposed398 therefore the six-neighbourhood clearly offers the best time ef- in the context of windowed SSD aggregation by Scharstein and399 ficiency. If the neighbourhood is chosen too large, the resulting Szeliski (1996).
descriptor might be affected by non-rigid deformations.
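The naïve patch SSD of Eq. 5 and the shift-subtract-convolve computation of Eq. 6 can be compared directly in 1D. The sketch below uses uniform patch weighting (a box kernel in place of the Gaussian Cσ) and is only an illustration of the equivalence, not the MATLAB code from the electronic appendix:

```python
def patch_distance_naive(img, r, p=1):
    """Dp(I, x, x+r) via the explicit patch SSD of Eq. 5 (1D, r >= 0)."""
    n = len(img)
    out = [0.0] * n
    for x in range(p, n - p - r):
        out[x] = sum((img[x + q] - img[x + r + q]) ** 2
                     for q in range(-p, p + 1))
    return out

def patch_distance_conv(img, r, p=1):
    """The same quantity via Eq. 6: shift the image by r, take point-wise
    squared differences, then convolve with a (2p+1) box kernel."""
    n = len(img)
    # point-wise squared difference between I and its copy translated by r
    sq = [(img[x] - img[x + r]) ** 2 if 0 <= x + r < n else 0.0
          for x in range(n)]
    out = [0.0] * n
    for x in range(p, n - p):
        out[x] = sum(sq[x + q] for q in range(-p, p + 1))
    return out
```

The second form computes each squared difference once and reuses it for every patch containing it, which is where the reduction from (2p + 1)^d to a per-dimension cost comes from.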
[Figure 2: Different samplings of the search region: (a) dense, (b) sparse and (c) six-neighbourhood. The red voxel is the voxel of interest; gray voxels are being sampled, r ∈ R.]

An evaluation of the influence of both patch size (and weighting) and search region will be given in Section 5.2.1. A basic MATLAB implementation for the efficient calculation of MIND can be found in the electronic appendix.

3.4. Multi-modal similarity metric using MIND

One motivation for the use of MIND is that it allows aligning multi-modal images using a simple similarity metric across modalities. Once the descriptors are extracted for both images, yielding a vector for each voxel, the similarity metric between two images is defined as the SSD between their corresponding descriptors. Therefore efficient optimisation algorithms, which converge rapidly, can be used without further modification. We employ Gauss-Newton optimisation, which minimises the linearised error term in a least-squares sense (Madsen et al. (1999)). In order to optimise the SSD of MIND, the similarity term S(x) of two images I and J at voxel x is defined as the sum of absolute differences between descriptors:

S(x) = (1/|R|) Σ_{r∈R} | MIND(I, x, r) − MIND(J, x, r) |    (9)

This requires |R| computations to evaluate the similarity at one voxel. Some algorithms, especially discrete optimisation techniques (Glocker et al. (2008), Shekhovtsov et al. (2008)), use many cost function evaluations per voxel. In order to speed up these computations, the descriptor can be quantised to only 4 bit without significant loss of accuracy. For |R| = 6 all possible distances between descriptors can then be pre-computed and stored in a lookup table.

The similarity S yields an intuitive display of the difference image after registration. Enabling single-modal similarity metrics by using an intermediate image representation is also the motivation in (Wachinger and Navab (2012)); in contrast to our work, they reduce the alternative image representation to a single scalar value per voxel. Our new similarity metric based on MIND can be used in any registration algorithm with little need for further modification. We show in the experimental section that it can improve accuracy for both rigid and deformable registration of multi-modal images.

4. Gauss-Newton registration framework

This section describes the rigid and deformable registration framework, which will be used for all similarity metrics that are being compared in Section 5. We chose to use a Gauss-Newton optimisation scheme as it has an improved convergence compared to steepest descent methods (Zikic et al. (2010a)). For single-modal registration using SSD as similarity metric, Gauss-Newton optimisation is equivalent to the well known Horn-Schunck optical flow solution (Horn and Schunck (1981)), as shown in (Zikic et al. (2010b)).

4.1. Rigid registration

Rigid image registration aims to find the best transformation to align two images while constraining the deformation to be parameterised by a rigid body (translation and rotation, 6 parameters). Extending this model to the more general affine transformation, the transformed location x′ = (x′, y′, z′)^T of a voxel x = (x, y, z)^T can be parameterised by q = (q1, . . . , q12):

x′ = A(q) x + t(q)    (10)

where A(q) is a 3×3 matrix and t(q) a translation vector formed from the twelve parameters, and u = (u, v, w)^T = x′ − x is the displacement of x. For a quadratic image similarity function f^2, the Gauss-Newton method can be applied. It uses a linear approximation of the error term:

f(u + u_gn) ≈ f(u) + J(x) u_gn    (11)

where J(x) is the derivative of the error term with respect to the transformation and u_gn is the update step. We insert Eq. 10 into Eq. 11 and differentiate with respect to q to calculate J(x). The advantage of this method is that we can directly use the point-wise cost function derivatives with respect to u to obtain an affine transformation, so that MIND has to be computed only once per image.

Parameterising a rigid-body transformation directly is more difficult. Therefore, at each iteration the best affine matrix is first estimated and then the best rigid-body transformation is found using the solution presented in Arun et al. (1987). The Gauss-Newton step is iteratively updated while transforming the source image towards the target. In order to speed up the convergence and avoid local minima, a multi-resolution scheme (with downsampling factors of 4 and 2) is used.

4.2. Diffusion-regularised deformable registration

Within the non-rigid registration framework, we aim to minimise the following cost function with respect to the deformation field u = (u, v, w)^T, consisting of a non-linear similarity term S (dependent on u) and a diffusion regularisation term:

argmin_u Σ_{x∈Ω} [ S(I1(x), I2(x + u))^2 + α tr( ∇u(x)^T ∇u(x) ) ]    (12)

Since the objective function to be minimised is of the form f^2, we can again apply the Gauss-Newton optimisation method, where f is minimised iteratively with the update rule (J^T J) u_gn = −J^T f, with J the derivative of f with respect to u. This can be adapted to the regularised cost function. We simplify the notation to S = S(I1(x), I2(x)), ∇S = (δS/δu, δS/δv, δS/δw)^T and ∆u = ∇·(∇u(x)). The regularisation term is linear with respect to u as the differential operator is linear. The resulting update step, given an initial or previous deformation field u_prev, then becomes:

( ∇S^T ∇S − α∆ ) u_gn = −( ∇S^T S − α∆u_prev )    (13)

Equation 13 is solved using successive over-relaxation (an iterative solver). The final deformation field is calculated by the addition of the update steps u_gn. The parameter α balances the similarity term with the regulariser. The value of α has to be found empirically. This choice will be further discussed in Section 5.

4.3. Symmetric and inverse-consistent approach

For many deformable registration algorithms, there is a choice for one image to be the target and the other to be the source image. This places a bias on the registration outcome and may additionally introduce an inverse consistency error (ICE). The ICE has been defined by (Christensen and Johnson (2001)) for a forward transform A and a backward transform B to be the difference between AB^−1 and the identity. In (Avants et al. (2008)) a symmetric deformable registration is presented, which calculates a transform from both images to a common intermediate image and also ensures that the forward transform is the inverse of the backward transform. The full forward transformation is calculated by A(0.5) ◦ B(0.5)^−1, where 0.5 describes a transformation of half length (or with half the integration time, if velocity fields are used). We follow the same approach and estimate both A and B. We then use a fast iterative inversion method, as presented in (Chen et al. (2007)), to obtain A(0.5)^−1 and B(0.5)^−1. This approach helps to obtain diffeomorphic transformations, which means that no physically implausible folding of volume occurs. We use this symmetric approach in all deformable registration experiments.

5. Experiments

5.1. Landmark localisation in visible human dataset

Evaluating multi-modal image registration in a controlled manner is not a trivial task. Finding and accurately marking corresponding anatomical landmarks across modalities is a difficult task even for a clinical expert. Random deformation experiments, as they are usually performed in the literature for multi-modal registration (e.g. in D'Agostino et al. (2003), Glocker et al. (2008), Mellor and Brady (2005), Wachinger and Navab (2012)), are mostly unrealistic. In order to perform a simulated deformation on multi-modal data, an aligned scan pair must be available, which is usually only possible for brain scans. Here the number of different tissue classes is a lot smaller than for chest scans, thus these experiments do not generalise very well. Moreover, simulated deformations hardly ever capture the complexity and physical realism of patient motion. To address these problems, we perform an alternative experiment: regional landmark localisation. For this purpose, we employ the less regularly used Visible Human dataset (VHD) (Ackerman (1998)). Because the scans were taken post-mortem, no motion is present and different modalities are consequently in perfect alignment. We selected two MRI sequences, T1 and PD weighted volumes, as they offer a sufficient amount of cross-modality variations. The images are upsampled from their original resolution of 1.875 x 4 x 1.875 mm to form isotropic voxels of size 1.875 mm^3.
Fraction landmarks with lower error In this section we perform a number of challenging registra- tion experiments to demonstrate the capabilities of MIND in Landmark localisation error in mm medical image registration. We compare our new descriptor to state-of-the-art multi-modal similarity metrics: normalised mu- Figure 4: Cumulative distribution of landmark localisation error in mm for119 landmarks located in the original T1/PD MRI scan of the Visible Human tual information (NMI), conditional mutual information (CMI), dataset. MIND achieves a significantly higher localisation accuracy.
and SSD of entropy images (eSSD) within the same registration framework. We evaluate our findings based on the target reg- In our tests we automatically select a large number (119) of istration error (TRE) of anatomical landmarks. The TRE for a geometric landmarks using the 3D version of the Harris cor- given transformation u and an anatomical landmark pair (x, x0) ner detector (Rohr (2000)). Cross-sections of both sequences is defined by (Maurer et al. (1997)): are shown in Fig. 3. For each landmark of the MRI-PD scan, we perform an exhaustive calculation of the similarity metric (x + u(x) − x0)2 + (y + v(x) − y0)2 + (z + w(x) − z0)2 560 within a search window of 39x39x39 mm of the T1 image (14)561 around the respective location. Since no regularisation is used We first apply the different methods to landmark localisation562 in this experiment, we average the cost function over a local within an aligned pair of T1 and PD weighted MRI scans of the Visible Human dataset. We then perform deformable registra- tions on ten CT datasets of lung cancer patients, and finally we 1The Visible Human dataset is obtainable from http://www.nlm.nih.
register CT and MRI scans of patients with empyema.
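For illustration, the regularised update step of Eq. 13 and its solution by successive over-relaxation can be sketched in one spatial dimension. This is a toy sketch, not the authors' implementation; zero-Dirichlet boundaries, a fixed relaxation factor ω and a fixed iteration count are assumptions:

```python
import numpy as np

def gn_sor_update(grad_S, S_res, u_prev, alpha=0.1, omega=1.5, iters=400):
    """Solve (grad_S^2 - alpha*Lap) u_gn = -(grad_S*S_res - alpha*Lap u_prev)
    with SOR, a 1D analogue of the update step in Eq. 13.
    Zero-Dirichlet boundaries are assumed for the Laplacian."""
    n = len(S_res)
    lap_prev = np.convolve(u_prev, [1.0, -2.0, 1.0], mode='same')
    b = -(grad_S * S_res - alpha * lap_prev)   # right-hand side
    diag = grad_S**2 + 2.0 * alpha             # diagonal of the system matrix
    u = np.zeros(n)
    for _ in range(iters):
        for i in range(n):
            nb = (u[i - 1] if i > 0 else 0.0) + (u[i + 1] if i < n - 1 else 0.0)
            u[i] = (1.0 - omega) * u[i] + omega * (b[i] + alpha * nb) / diag[i]
    return u
```

As in the text, the final deformation field would be accumulated by adding the successive updates u_gn.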
Figure 3: Visible Human dataset used for the landmark localisation experiment. T1 and PD MRI scans of the post-mortem human are intrinsically aligned. The landmarks used for evaluation are plotted as red squares.
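The fast iterative inversion of a displacement field (Chen et al. (2007)) used in the symmetric approach of Sec. 4.3 amounts to the fixed-point iteration v(x) ← −u(x + v(x)). A minimal 1D sketch, with linear interpolation and a fixed iteration count as assumptions:

```python
import numpy as np

def invert_displacement_1d(u, iters=30):
    """Fixed-point inversion of a 1D displacement field:
    iterate v(x) <- -u(x + v(x)), sampling u by linear interpolation."""
    x = np.arange(len(u), dtype=float)
    v = np.zeros_like(u, dtype=float)
    for _ in range(iters):
        v = -np.interp(x + v, x, u)  # evaluate u at the displaced positions
    return v
```

The iteration converges whenever u is sufficiently smooth; in 3D the same idea applies with trilinear interpolation per component.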
Figure 5: Fraction of falsely located landmarks (error > 2 mm) for an increasing bias field, initial misalignment (translation), and additive Gaussian noise in the multi-modal T1/PD MRI scan pair of the Visible Human dataset (panel x-axes: maximum absolute deviation of the bias field from 1; initial displacement in mm; standard deviation of additive Gaussian noise, mean intensity = 200). The resulting localisation deteriorates for NMI and eSSD with an increased bias field. NMI, CMI and eSSD have a high localisation error for initially misaligned volumes. eSSD shows a high sensitivity to Gaussian noise.
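The TRE of Eq. 14 is simply the Euclidean distance between each landmark mapped by the deformation and its annotated correspondence. A minimal sketch (the array-based interface is an assumption):

```python
import numpy as np

def target_registration_error(x, x_prime, u_at_x):
    """TRE of Eq. 14: Euclidean norm of (x + u(x) - x') for each landmark.
    x, x_prime: (N, 3) arrays of corresponding landmarks; u_at_x: (N, 3)
    array of the displacements u evaluated at the landmarks x."""
    x = np.asarray(x, float)
    x_prime = np.asarray(x_prime, float)
    u_at_x = np.asarray(u_at_x, float)
    return np.linalg.norm(x + u_at_x - x_prime, axis=1)
```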
This local neighbourhood has a radius of 5 voxels. The optimal position (highest similarity) is calculated (up to subpixel accuracy) and compared to the known ground truth location. The Euclidean distance serves as the localisation error. If the similarity metric is sufficiently discriminative, no other local optimum should appear within the search region. The distribution of the resulting error for all compared similarity metrics is shown in Fig. 4. MIND achieves a significantly lower localisation error than all other similarity metrics. We subsequently apply a non-uniform bias field (multiplicative linear gradient in y-direction), a translation, or additive Gaussian noise to the T1 scan. The fraction of falsely located landmarks with increasing image distortion is plotted in Fig. 5. MIND clearly outperforms both NMI and eSSD by achieving a consistently lower landmark localisation error. CMI is, as expected, not affected by the non-uniform bias field; however, for an initial misalignment of the scan pair the joint histogram estimation becomes less reliable and the localisation accuracy deteriorates.

5.2. Deformable registration of inhale and exhale CT scans

We performed deformable registration on ten CT scan pairs between the inhale and exhale phases of the breathing cycle, provided by the DIR-Lab at the University of Texas (Castillo et al. (2009))². The patients were treated for esophagus cancer, and a breathing cycle CT scan of the thorax and upper abdomen was obtained, with a slice thickness of 2.5 mm and an in-plane resolution ranging from 0.97 to 1.16 mm. Even though this constitutes a single-modal registration problem, directly intensity-based similarity criteria such as SSD may fail in some cases due to the changing appearance between inhale and exhale scans. Particular challenges for these registration tasks are the changing contrast between tissue and air, because the gas density changes due to compression (Castillo et al. (2010b)); discontinuous sliding motion between the lung lobes and at the lung/rib cage interface; and large deformations of small features (lung vessels, airways). For each image, 300 anatomical landmarks are available.
²This dataset is freely available at http://www.dir-lab.com

The landmarks have been carefully annotated by thoracic imaging experts, with inter-observer errors of less than 1 mm. The maximum average landmark error before registration is 15 mm (for Case 8); the maximum displacement of a single landmark is 30 mm.

The cumulative distributions of the target registration error (TRE) for all 3000 landmarks (300 landmarks for each of the 10 cases) after registration are shown in Figure 6. MIND achieves the lowest average and median TRE among all methods; the average error of the second best metric (eSSD) is more than a third higher. The Wilcoxon rank-sum test was used to compare the TRE between the different similarity metrics, across all cases and for each case individually. We found a significant improvement for MIND compared to all other metrics. Entropy SSD could significantly improve the accuracy compared to NMI. A summary of the registration results is given in Table 1. The Jacobian values of the transformations are all positive, thus all deformation fields are free from singularities. An example of the registration outcome using our proposed method, along with the magnitude of the deformation field, is shown in Fig. 7.

Figure 6: Deformable registration of 10 cases of CT scans evaluated with 300 expert landmarks per case. Registrations are performed between maximum inhale and exhale. The plot shows the cumulative distribution of target registration error, in mm. A significant improvement using MIND compared to all other methods has been found using a Wilcoxon rank sum test (p<0.0001). The staircase effect of the TRE before registration is due to the voxel-based landmark positions.

Table 1: Target registration error in mm for deformable registration of ten CT scans between inhale and exhale. Evaluation based on 300 manual landmarks per case; the inter-observer error for landmark selection is <1 mm. A Wilcoxon rank test is performed between MIND and each comparison method, and the cases for which a significant improvement (p<0.05) was found are depicted. As an additional comparison, the results reported in the literature for two other techniques are shown.

TRE quantiles [0.25, 0.5, 0.75]: [3.11, 6.97, 12.55]
TRE quantiles [0.25, 0.5, 0.75]: [0.89, 1.44, 2.85]
TRE quantiles [0.25, 0.5, 0.75]: [0.86, 1.33, 2.33]
TRE quantiles [0.25, 0.5, 0.75]: [0.91, 1.42, 2.67]
TRE quantiles [0.25, 0.5, 0.75]: [1.00, 1.59, 2.85]
TRE quantiles [0.25, 0.5, 0.75]: [0.77, 1.16, 1.79]

Results reported in the literature: Schmidt-Richberg et al. (2012) (diffusion regularisation); Castillo et al. (2010a)*.

*These results are not directly comparable, as all frames of the 4D CT cycles are used during registration and more landmarks are evaluated.
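The Wilcoxon rank-sum comparisons reported above can be illustrated as follows. This sketch uses the normal approximation without tie correction, which is an assumption and not necessarily the exact procedure behind the paper's p-values:

```python
import numpy as np
from math import erf

def ranksum_pvalue(a, b):
    """Two-sided Wilcoxon rank-sum test via the normal approximation,
    e.g. for comparing the TRE distributions of two similarity metrics.
    Minimal sketch without tie correction."""
    a = np.asarray(a, float)
    b = np.asarray(b, float)
    n1, n2 = len(a), len(b)
    ranks = np.argsort(np.argsort(np.concatenate([a, b]))) + 1.0
    w = ranks[:n1].sum()                      # rank sum of sample a
    mu = n1 * (n1 + n2 + 1) / 2.0             # mean of w under H0
    sigma = np.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (w - mu) / sigma
    return 1.0 - erf(abs(z) / np.sqrt(2.0))   # two-sided p-value
```

Well-separated TRE samples yield a small p-value, while overlapping distributions yield a large one.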
5.2.1. Choice of parameters

We used a symmetric three-level multiresolution scheme within the presented Gauss-Newton framework for all compared methods. The best parameters were carefully chosen based on the TRE obtained for Case 5; an overview is given in Table A.4 in the electronic appendix. The regularisation was chosen sufficiently high to ensure physically plausible transformations with no singularities (negative Jacobians). For CMI, the spatial size of each regional label was set to between 25³ and 50³ voxels, as suggested in (Loeckx et al. (2010)). The computation time for each 3D registration was between 4 and 5 minutes for all methods (see Table 2). The influence of the choice of patch-size and search region for MIND has been evaluated using both single-modal and multi-modal registration tasks. Fig. 8 gives an overview of the obtained TRE. It can generally be seen that a Gaussian weighting of σ ≈ 0.5 (with a corresponding patch-size of 3×3×3), as well as a very small search region (six-neighbourhood), yields a very high accuracy. For other applications with stronger image distortion and noise (e.g. ultrasound), we expect that larger patches and search regions would provide more robustness.

Figure 7: Deformable registration result for Case 5 of the CT dataset. Left: axial, middle: sagittal and right: coronal plane. Top row: before registration; centre row: after registration, using the proposed MIND technique. The target image is displayed in magenta and the source image in green (complementary colour). The bottom row shows the magnitude of the deformation field (red for large deformations) in mm.
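To make the descriptor computation concrete with the chosen settings, the following 2D sketch uses 3×3 patches and the four-neighbourhood (the 2D analogue of the six-neighbourhood). Box weighting in place of the Gaussian patch weighting and wrap-around borders are simplifying assumptions:

```python
import numpy as np

def _patch_sum3(a):
    # 3x3 box filter: patch-wise sum via two separable 1D convolutions
    k = np.ones(3)
    a = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 0, a)
    return np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1, a)

def mind_2d(img, eps=1e-12):
    """2D sketch of MIND: Dp(x, x+r) is the patch-wise SSD to each
    neighbour r; the descriptor is exp(-Dp / V), with V the mean Dp at x
    (a local variance estimate), normalised so its maximum component is 1.
    np.roll implies wrap-around borders (an assumption)."""
    img = np.asarray(img, float)
    offsets = [(0, 1), (0, -1), (1, 0), (-1, 0)]
    dps = [_patch_sum3((img - np.roll(img, off, axis=(0, 1)))**2)
           for off in offsets]
    V = np.mean(dps, axis=0) + eps
    mind = np.stack([np.exp(-dp / V) for dp in dps])
    return mind / mind.max(axis=0, keepdims=True)
```

Since the patch distances Dp and the local normalisation V scale together, this construction is unchanged under affine intensity remappings; the multi-modal similarity is then the SSD between the MIND representations of the two images.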
Table 2: Computation time (in seconds) for the presented methods for Case 5 of the CT dataset. For all metrics, the SOR solver for the Gauss-Newton optimisation takes 92 secs. The image dimensions are 256×256×106.

Figure 9: Rigid multi-modal registration of 11 cases of CT/MRI scans of empyema patients, evaluated with 12 expert landmarks per case. The plot shows the cumulative distribution of target registration error, in mm. The manual registration error shows the residual error after a least-squares fit of a rigid transformation model to the ground truth landmark locations. MIND achieves an overall better performance than NMI.
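The singularity check used in the experiments (all Jacobian determinants of the transformation positive, i.e. a folding-free deformation field) can be sketched as follows; unit voxel spacing and numpy finite differences are assumptions:

```python
import numpy as np

def jacobian_determinant(u):
    """Determinant of the Jacobian of the transformation x + u(x) for a
    displacement field u of shape (3, D, H, W); values > 0 everywhere
    indicate a locally invertible, folding-free deformation."""
    J = np.zeros(u.shape[1:] + (3, 3))
    for i in range(3):
        gi = np.gradient(u[i])  # gi[j] = d u_i / d x_j (unit spacing)
        for j in range(3):
            J[..., i, j] = gi[j] + (1.0 if i == j else 0.0)
    return np.linalg.det(J)
```

An identity transformation gives a determinant of 1 everywhere; negative values would flag physically implausible folding.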
5.3. Multi-modal registration of CT/MRI lung scans

Deformable multi-modal registration is important for a range of clinical applications. We applied our proposed technique to a clinical dataset of eleven patients, who were scanned with both CT and MRI. Different scanning protocols were employed for these clinical datasets. The CT volumes include scans with contrast, without contrast, and a CTPA (CT Pulmonary Angiogram) protocol. For the MRI scans, both T1-weighted and T2-weighted FSE-XL sequences within a single breath-hold were employed. All patients suffered from empyema, a lung disease characterised by infection of the pleura and excess fluid within the pleural space. The extra fluid may progress into an abscess and may additionally cause the adjacent lung to collapse and/or consolidate. Both modalities are useful for detecting this pathology, but because the patients are scanned in two different sessions and at different levels of breath-hold, there are non-rigid deformations, which make it difficult for the clinician to relate the scans. The quality of the MRI scans is comparatively poor, due to motion artefacts, bias fields and a slice thickness of around 8 mm.

We asked a clinical expert to select manual landmarks for all eleven cases. Twelve corresponding landmarks were selected in all image pairs, containing both normal anatomical locations and disease-specific places. It must be noted that some of the landmarks are very challenging to locate, both due to the low scan quality and due to changes of the pathology in the diseased areas between scans. The intra-observer error has been measured to be 5.8 mm within the MRI and 3.0 mm within a CT scan.

First, a rigid registration of all cases is performed using the proposed Gauss-Newton framework with the respective similarity metrics. The resulting landmark errors are shown in Figure 9. MIND achieves a lower TRE of 9.3 mm, on average, compared to NMI (10.8 mm). We additionally calculated the optimal rigid body transformation using a least squares fit of the ground truth landmark locations. We were not able to use entropy images for this multi-modal experiment, as the structural representation is not sufficient to cope with the large variations in appearance and distortion between the CT and MRI scans, and the registration fails for most cases (increased landmark error compared to ground truth after registration).

We use the rigid transformations obtained from the linear registration as initialisation of the subsequent deformable registration. For eSSD, the rigid transformations obtained using MIND are employed as initialisation. The parameter choice for all compared methods can be found in Table A.5 in the electronic appendix.

The obtained average TRE is 7.1 mm for MIND, 8.8 mm for CMI, 9.2 mm for NMI and 10.5 mm for eSSD. Even though the error for MIND is higher than what can be expected for a CT-to-CT registration, it is lower than the spatial resolution of the MRI scans and close to the intra-observer error. The distribution of landmark errors is shown in Fig. 10. Using a Wilcoxon rank test, a statistically significant improvement of MIND compared to NMI (p=0.019) and CMI (p=0.023) was found. An overview of the registration results is given in Table 3. The Jacobian values are all positive, thus no transformations contained any singularities. An example registration outcome for MIND and NMI is shown in Figure 11.

(a) Increasing Gaussian weighting σ for the patch distance (see Sec. 3.1.1); the half-size of the patch is p = ⌈1.5σ⌉. (b) The maximum displacement rmax of the search region R (see Sec. 3.3); 6-NH stands for a six-neighbourhood.

Figure 8: Parameter variation for MIND to determine the best choice of (a) σ in the patch distance Dp and (b) the spatial search region R. The TRE is evaluated for one single-modal (3D CT) case (left y-axis) and one multi-modal (3D MRI/CT) registration (right y-axis). Based on these tests, we choose σ = 0.5 and a six-neighbourhood for all experiments.

Table 3: Target registration error in mm for deformable registration of eleven CT/MRI scan pairs of empyema patients. Evaluation based on 12 manual landmarks per case. The slice thickness of the MRI scans is 8 mm (in-plane resolution ≈ 1 mm); the intra-observer error for landmark localisation is 5.8 mm in the MRI scans. A Wilcoxon rank test has been performed between the presented methods. Significant improvements using MIND are found compared to any other method (using all 132 landmarks).

TRE quantiles [0.25, 0.5, 0.75]: [6.00, 11.18, 17.63]
TRE quantiles [0.25, 0.5, 0.75]: [5.43, 9.34, 13.79]
TRE quantiles [0.25, 0.5, 0.75]: [4.06, 6.91, 11.84]
TRE quantiles [0.25, 0.5, 0.75]: [3.84, 7.01, 11.87]
TRE quantiles [0.25, 0.5, 0.75]: [3.33, 5.68, 9.10]

Figure 10: Deformable multi-modal registration of 11 cases of CT/MRI scans of empyema patients, evaluated with 12 expert landmarks per case. The plot shows the cumulative distribution of target registration error, in mm. MIND achieves a statistically significant (p<0.023) better result than all other methods. The comparatively high residual error is due to both the low scan quality (the in-plane resolution is ≈ 1 mm, but the slice thickness is up to 8 mm) and the challenging landmark selection for the clinical expert (intra-observer error is 5.8 mm).

6. Discussion and Conclusion

We have presented a novel modality independent neighbourhood descriptor (MIND) for volumetric medical image registration. The descriptor can be efficiently computed locally across the whole image, and it allows for accurate and reliable alignment in a variety of registration tasks. Compared to mutual information, it does not rely on the assumption of a global (or regional) intensity relation. The negative influence of initial misalignment and non-uniform bias fields is massively reduced, and the difficult task of setting the correct parameters for the histogram calculation can be avoided. Apart from the regularisation parameter, a standard setting can be used for all registration tasks. The descriptor is not rotationally invariant, which might be a limitation in the case of strong rotations. However, the sensitivity of MIND to the local orientation may in fact lead to improved accuracy, as suggested by the previous work of Pluim et al. (2000) and Haber and Modersitzki (2006). The modality independent representation using a vector based on the local neighbourhood (which allows it to capture orientation) instead of a scalar value (used in entropy images) shows clear improvements for real multi-modal registration experiments. The implementation is straightforward, and the running time is comparable to other methods.

(a) CT scan of empyema patient with 4 relevant contour plots to guide the visualisation of registration results.
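The optimal rigid body transformation from a least squares fit of landmark pairs, as used for the manual registration error above, corresponds to the classical SVD solution (Arun et al. (1987)); a minimal sketch:

```python
import numpy as np

def fit_rigid(P, Q):
    """Least-squares rigid body transform (R, t) mapping point set P onto Q,
    via the SVD method of Arun et al. (1987)."""
    P = np.asarray(P, float)
    Q = np.asarray(Q, float)
    cp, cq = P.mean(0), Q.mean(0)
    H = (P - cp).T @ (Q - cq)                 # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t
```

The residual ||R x + t − x′|| over all landmark pairs then gives the best achievable rigid alignment error.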
(b) MRI scan with identical CT contour plots before registration.
(c) Identical MRI scan with CT contour plots deformed according to the non-rigid registration using NMI. The white arrows depict inaccurate registration close to one vertebra, the inner lung boundary and the gas pocket in the empyema.
(d) Identical MRI scan with CT contour plots deformed according to non-rigid registration using MIND. A visually better alignment could be achieved.
Figure 11: Deformable CT/MRI registration results for Case 11 of the empyema dataset. Left: axial, middle: sagittal and right: coronal plane. The third row shows the registration outcome using NMI. A better alignment is obtained when using MIND (fourth row).
An important advantage of MIND is that it is calculated point-wise and can therefore be adapted to almost any registration algorithm.

We performed an extensive evaluation of our proposed method and three state-of-the-art multi-modal similarity metrics: entropy images, normalised and conditional mutual information. Tables 1 and 3 summarise the deformable registration results on two very challenging datasets. The results clearly demonstrate the advantages of the proposed descriptor. MIND achieves a higher accuracy and more robust correspondences for the CT dataset. The application of deformable registration to multi-modal medical images has so far remained a less sophisticated and advanced field, with very few published results on clinically relevant data. Our proposed descriptor marks a novel contribution to this area. We verified its robustness to noise, field inhomogeneities and complex intensity relations in two experiments. First, the localisation of geometric landmarks was tested in an intrinsically aligned T1/PD MRI scan pair of the Visible Human dataset; here the high discrimination and the independence of bias fields have been demonstrated. Secondly, for the deformable registration of clinical CT and MRI scans, we found a significant improvement over all other tested metrics.

While our validation was focused on CT and MRI modalities, we believe that our approach generalises well and further use could be made of this concept in a variety of medical image registration tasks. The application of MIND to other multi-modal registration tasks, such as the registration of PET, contrast-enhanced MRI and ultrasound, and also to other anatomical regions, is subject for future work. A limitation of our approach is that it requires an anatomical feature to be present in both modalities; if this assumption is violated, the concept of mutual saliency (Ou et al. (2011)) could be incorporated to improve the robustness in these cases.

Further improvements might be possible. The use of more sophisticated deformation models could address application-specific challenges, such as slipping organ motion (Schmidt-Richberg et al. (2012)) and bladder filling or bowel gases (Foskey et al. (2005)). Employing a different optimisation scheme, such as a registration based on MRF labelling (Glocker et al. (2011)), may allow us to find better maxima in the similarity function. In future work we will investigate the potential advantages of incorporating MIND into a discrete optimisation framework.

Acknowledgements

The authors would like to thank EPSRC and Cancer Research UK for funding this work within the Oxford Cancer Imaging Centre. J.A.S. acknowledges funding from EPSRC.

References

Avants, B., Epstein, C., Grossman, M., Gee, J., 2008. Symmetric diffeomorphic image registration with cross-correlation: Evaluating automated labeling of elderly and neurodegenerative brain. Medical Image Analysis 12, 26–41.
Buades, A., Coll, B., Morel, J.M., 2005. A non-local algorithm for image denoising, in: CVPR 2005, IEEE Computer Society, pp. 60–65.
Castillo, E., Castillo, R., Martinez, J., Shenoy, M., Guerrero, T., 2010a. Four-dimensional deformable image registration using trajectory modeling. Physics in Medicine and Biology 55, 305.
Castillo, R., Castillo, E., Guerra, R., Johnson, V., McPhail, T., Garg, A., Guerrero, T., 2009. A framework for evaluation of deformable image registration spatial accuracy using large landmark point sets. Physics in Medicine and Biology 54, 1849.
Castillo, R., Castillo, E., Martinez, J., Guerrero, T., 2010b. Ventilation from four-dimensional computed tomography: density versus jacobian methods. Physics in Medicine and Biology 55, 4661.
Chen, M., Lu, W., Chen, Q., Ruchala, K.J., Olivera, G.H., 2007. A simple fixed-point approach to invert a deformation field. Medical Physics 35, 81.
Christensen, G., Johnson, H., 2001. Consistent image registration. IEEE Transactions on Medical Imaging 20, 568–582.
Coupé, P., Manjón, J.V., Fonov, V., Pruessner, J., Robles, M., Collins, D., 2010. Nonlocal patch-based label fusion for hippocampus segmentation, in: Jiang, T., Navab, N., Pluim, J., Viergever, M. (Eds.), Medical Image Computing and Computer-Assisted Intervention MICCAI 2010. Springer Berlin/Heidelberg. volume 6363 of Lecture Notes in Computer Science, pp. 129–136.
Coupé, P., Yger, P., Barillot, C., 2006. Fast non local means denoising for 3D MR images. MICCAI 2006, 33–40.
Coupé, P., Yger, P., Prima, S., Hellier, P., Kervrann, C., Barillot, C., 2008. An optimized blockwise nonlocal means denoising filter for 3-D magnetic resonance images. IEEE Transactions on Medical Imaging 27, 425–441.
D'Agostino, E., Maes, F., Vandermeulen, D., Suetens, P., 2003. A viscous fluid model for multimodal non-rigid image registration using mutual information. Medical Image Analysis 7, 565–575.
De Nigris, D., Mercier, L., Del Maestro, R., Collins, D.L., Arbel, T., 2010. Hierarchical multimodal image registration based on adaptive local mutual information, pp. 643–651.
Dowson, N., Kadir, T., Bowden, R., 2008. Estimating the joint statistics of images using nonparametric windows with application to registration using mutual information. IEEE Transactions on Pattern Analysis and Machine Intelligence 30, 1841–1857.
Foskey, M., Davis, B., Goyal, L., Chang, S., Chaney, E., Strehl, N., Tomei, S., Rosenman, J., Joshi, S., 2005. Large deformation three-dimensional image registration in image-guided radiation therapy. Physics in Medicine and Biology 50, 5869.
Glocker, B., Komodakis, N., Tziritas, G., Navab, N., Paragios, N., 2008. Dense image registration through MRFs and efficient linear programming. Medical Image Analysis 12, 731–741.
Glocker, B., Sotiras, A., Komodakis, N., Paragios, N., 2011. Deformable medical image registration: Setting the state of the art with discrete methods. Annual Review of Biomedical Engineering 13, 219–244.
Haber, E., Modersitzki, J., 2006. Intensity gradient based registration and fusion of multi-modal images. MICCAI 2006, 726–733.
Heinrich, M., Jenkinson, M., Bhushan, M., Matin, T., Gleeson, F., Brady, J., Schnabel, J., 2011. Non-local shape descriptor: A new similarity metric for deformable multi-modal registration, in: Fichtinger, G., Martel, A., Peters, T. (Eds.), Medical Image Computing and Computer-Assisted Intervention MICCAI 2011. Springer Berlin/Heidelberg. volume 6892 of Lecture Notes in Computer Science, pp. 541–548.
Heinrich, M., Jenkinson, M., Brady, M., Schnabel, J., 2012. Textural mutual information based on cluster trees for multimodal deformable registration, in: 2012 IEEE International Symposium on Biomedical Imaging: From Nano to Macro, pp. 1–4.
Heinrich, M., Schnabel, J., Gleeson, F., Brady, M., Jenkinson, M., 2010. Non-rigid multimodal medical image registration using optical flow and gradient orientation. Proc. Medical Image Analysis and Understanding, 141–145.
Hermosillo, G., Chefd'hotel, C., Faugeras, O., 2002. Variational methods for multimodal image matching. Int. J. Comput. Vision 50, 329–343.
Ackerman, M., 1998. The visible human project. Proceedings of the IEEE 86.
Horn, B., Schunck, B., 1981. Determining optical flow. Artificial Intelligence 17, 185–203.
Arun, K.S., Huang, T.S., Blostein, S.D., 1987. Least-squares fitting of two 3-D point sets. IEEE Transactions on Pattern Analysis and Machine Intelligence PAMI-9, 698–700.
Hörster, E., Lienhart, R., 2008. Deep networks for image retrieval on large-scale databases, in: Proceedings of the 16th ACM International Conference on Multimedia, ACM, New York, NY, USA, pp. 643–646.
Joshi, N., Kadir, T., Brady, S., 2011. Simplified computation for nonparametric windows method of probability density function estimation. IEEE Transactions on Pattern Analysis and Machine Intelligence 33, 1673–1680.
Loeckx, D., Slagmolen, P., Maes, F., Vandermeulen, D., Suetens, P., 2010. Nonrigid image registration using conditional mutual information. IEEE Transactions on Medical Imaging 29, 19–29.
Lowe, D., 1999. Object recognition from local scale-invariant features, in: Proceedings of the Seventh IEEE International Conference on Computer Vision, pp. 1150–1157.
Studholme, C., Drapaca, C., Iordanova, B., Cardenas, V., 2006. Deformation-based mapping of volume change from serial brain MRI in the presence of local tissue contrast change. IEEE Transactions on Medical Imaging 25.
Studholme, C., Hill, D., Hawkes, D., 1999. An overlap invariant entropy measure of 3D medical image alignment. Pattern Recognition 32, 71–86.
Madsen, K., Bruun, H., Tingleff, O., 1999. Methods for non-linear least squares problems.
Maes, F., Collignon, A., Vandermeulen, D., Marchal, G., Suetens, P., 1997. Multimodality image registration by maximization of mutual information. IEEE Transactions on Medical Imaging 16, 187–198.
Viola, P., Wells III, W., 1997. Alignment by maximization of mutual information. Int. J. Comput. Vision 24, 137–154.
Wachinger, C., Navab, N., 2012. Entropy and laplacian images: Structural representations for multi-modal registration. Medical Image Analysis 16, 1.
Manjón, J.V., Carbonell-Caballero, J., Lull, J.J., García-Martí, G., Martí, L., 2008. MRI denoising using non-local means. Medical Image Analysis 12, 514–523.
Maurer, C.R., Jr., Fitzpatrick, J., Wang, M., Galloway, R.L., Jr., Maciunas, R., Allen, G., 1997. Registration of head volume images using implantable fiducial markers. IEEE Transactions on Medical Imaging 16, 447–462.
Yi, Z., Soatto, S., 2011. Multimodal registration via spatial-context mutual information, in: Information Processing in Medical Imaging. Springer Berlin/Heidelberg. volume 6801, pp. 424–435.
Zhuang, X., Arridge, S., Hawkes, D., Ourselin, S., 2011. A nonrigid registration framework using spatially encoded mutual information and free-form deformations. IEEE Transactions on Medical Imaging 30, 1819–1828.
Mellor, M., Brady, M., 2005. Phase mutual information as a similarity measure for registration. Medical Image Analysis 9, 330–343.
Meyer, C.R., Boes, J.L., Kim, B., Bland, P.H., Zasadny, K.R., Kison, P.V., Koral, K., Frey, K.A., Wahl, R.L., 1997. Demonstration of accuracy and clinical versatility of mutual information for automatic multimodality image fusion using affine and thin-plate spline warped geometric deformations. Medical Image Analysis 1, 195–206.
Zikic, D., Baust, M., Kamen, A., Navab, N., 2010a. Generalization of deformable registration in Riemannian Sobolev spaces, in: Jiang, T., Navab, N., Pluim, J., Viergever, M. (Eds.), Medical Image Computing and Computer-Assisted Intervention MICCAI 2010. Springer Berlin/Heidelberg. volume 6362 of Lecture Notes in Computer Science, pp. 586–593.
Zikic, D., Kamen, A., Navab, N., 2010b. Revisiting Horn and Schunck: Interpretation as Gauss-Newton optimisation, in: BMVC, pp. 1–12.
Mikolajczyk, K., Schmid, C., 2005. A performance evaluation of local descriptors. IEEE Transactions on Pattern Analysis and Machine Intelligence 27, 1615–1630.
Murphy, K., van Ginneken, B., Reinhardt, J., Kabus, S., Ding, K., Deng, X., Cao, K., Du, K., Christensen, G., Garcia, V., Vercauteren, T., Ayache, N., Commowick, O., Malandain, G., Glocker, B., Paragios, N., Navab, N., Gorbunova, V., Sporring, J., de Bruijne, M., Han, X., Heinrich, M., Schnabel, J., Jenkinson, M., Lorenz, C., Modat, M., McClelland, J., Ourselin, S., Muenzing, S., Viergever, M., De Nigris, D., Collins, D., Arbel, T., Peroni, M., Li, R., Sharp, G., Schmidt-Richberg, A., Ehrhardt, J., Werner, R., Smeets, D., Loeckx, D., Song, G., Tustison, N., Avants, B., Gee, J., Staring, M., Klein, S., Stoel, B., Urschler, M., Werlberger, M., Vandemeulebroucke, J., Rit, S., Sarrut, D., Pluim, J., 2011. Evaluation of registration methods on thoracic CT: the EMPIRE10 challenge. IEEE Transactions on Medical Imaging 30, 1901–1920.
Ou, Y., Sotiras, A., Paragios, N., Davatzikos, C., 2011. Dramms: Deformable registration via attribute matching and mutual-saliency weighting. MedicalImage Analysis 15, 622 – 639. Special section on IPMI 2009.
Pluim, J., Maintz, J., Viergever, M., 2003. Mutual-information-based registra- tion of medical images: a survey. Medical Imaging, IEEE Transactions on22, 986 –1004.
Pluim, J., Maintz, J.B., Viergever, M., 2000. Image registration by maximiza- tion of combined mutual information and gradient information. MICCAI2000 , 103–129.
Rogelj, P., Kovacic, S., Gee, J.C., 2003. Point similarity measures for non-rigid registration of multi-modal data. Comput. Vis. Image Und. 92, 112 – 140.
Rohr, K., 2000. Elastic registration of multimodal medical images: A survey.
K¨unstliche Intelligenz 14, 11–17.
Rueckert, D., Clarkson, M.J., Hill, D.L.G., Hawkes, D.J., 2000. Non-rigid registration using higher-order mutual information, SPIE. pp. 438–447.
Rueckert, D., Sonoda, L., Hayes, C., Hill, D., Leach, M., Hawkes, D., 1999.
Nonrigid registration using free-form deformations: application to breastMR images. Medical Imaging, IEEE Transactions on 18, 712 –721.
Scharstein, D., Szeliski, R., 1996. Stereo matching with non-linear diffusion, in: Computer Vision and Pattern Recognition, 1996. Proceedings CVPR '96,1996 IEEE Computer Society Conference on, pp. 343 –350.
Schmidt-Richberg, A., Werner, R., Handels, H., J., E., 2012. Estimation of slipping organ motion by registration with direction-dependent regulariza-tion. Medical Image Analysis 16, 150–159.
Shechtman, E., Irani, M., 2007. Matching local self-similarities across images and videos, in: Computer Vision and Pattern Recognition, 2007. CVPR '07.
IEEE Conference on, pp. 1 –8.
Appendix A. Supplementary data

Appendix A.1. Parameters chosen for all compared methods

Table A.4: Parameter variation to obtain the best results for Case 5 of the CT dataset. The average target registration error (TRE) is given in mm (before registration, TRE = 7.10 mm). For values in brackets, the transformation resulted in some negative Jacobians. Selected (fixed) settings are in bold. [Table body not recoverable from this copy; the rows varied the regularisation parameter α and the Gaussian patch width σ for each compared method.]

Table A.5: Parameters chosen for multi-modal 3D CT/MRI registration. To account for the increased noise and the more complex intensity relations across modalities, we slightly increase the regularisation parameter α.
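Table A.4 reports two quantities for each parameter setting: the average target registration error (TRE) in mm over annotated landmarks, and whether the resulting transformation produced negative Jacobians (values in brackets indicate folding). A minimal sketch of how these two quantities are typically evaluated is given below; the function names, array layout, and voxel spacing are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def target_registration_error(fixed_pts, warped_pts, spacing=(1.0, 1.0, 1.0)):
    """Average Euclidean distance (mm) between corresponding landmarks.

    fixed_pts, warped_pts: (N, 3) voxel coordinates of annotated landmarks
    in the fixed image and in the warped moving image; spacing converts
    voxel units to millimetres.
    """
    diff = (np.asarray(fixed_pts) - np.asarray(warped_pts)) * np.asarray(spacing)
    return float(np.linalg.norm(diff, axis=1).mean())

def jacobian_determinants(disp, spacing=(1.0, 1.0, 1.0)):
    """Determinant of the Jacobian of x + u(x) for a dense 3D displacement
    field disp of shape (X, Y, Z, 3). Negative values indicate folding."""
    # grads[i][j] = d u_i / d x_j, estimated by finite differences
    grads = [np.gradient(disp[..., i], *spacing) for i in range(3)]
    jac = np.stack([np.stack(g, axis=-1) for g in grads], axis=-2)  # (..., 3, 3)
    jac = jac + np.eye(3)  # Jacobian of the full transform x + u(x)
    return np.linalg.det(jac)

# toy check: two landmarks at 1.5 mm isotropic spacing, identity deformation
fixed = [[10, 20, 30], [40, 50, 60]]
warped = [[10, 22, 30], [40, 50, 62]]
print(target_registration_error(fixed, warped, spacing=(1.5, 1.5, 1.5)))  # 3.0
print(np.any(jacobian_determinants(np.zeros((4, 4, 4, 3))) <= 0))         # False
```

A setting would be bracketed in Table A.4 whenever the second check reports folding, i.e. any voxel of the deformation has a non-positive Jacobian determinant.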
