Appendix A: Local similarity

Let $\mathbf{x}_1$ and $\mathbf{x}_2$ denote two signal vectors obtained by reshaping a 2D matrix or 3D tensor. When evaluating denoising performance, $\mathbf{x}_1$ and $\mathbf{x}_2$ are simply the signal and the noise, respectively. The simplest way to measure the similarity between two signals is to calculate the correlation coefficient,

$\displaystyle c=\frac{\mathbf{x}_1^T\mathbf{x}_2}{\parallel \mathbf{x}_1 \parallel_2 \parallel \mathbf{x}_2 \parallel_2},$ (18)

where $c$ is the correlation coefficient, $\mathbf{x}_1^T\mathbf{x}_2$ denotes the dot product between $\mathbf{x}_1$ and $\mathbf{x}_2$, and $\parallel\cdot\parallel_2$ denotes the $L_2$ norm of the input vector.
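As a minimal sketch of equation 18 (assuming NumPy is available; the function name is illustrative):

\begin{verbatim}
# Minimal sketch of equation 18; assumes NumPy.
import numpy as np

def global_correlation(x1, x2):
    # Flatten 2D/3D arrays into vectors, as described above.
    x1 = np.asarray(x1, dtype=float).ravel()
    x2 = np.asarray(x2, dtype=float).ravel()
    # c = (x1 . x2) / (||x1||_2 ||x2||_2)
    return np.dot(x1, x2) / (np.linalg.norm(x1) * np.linalg.norm(x2))
\end{verbatim}

A locally calculated correlation coefficient can be used to measure the local similarity between two signals,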

$\displaystyle c(i) = \frac{\sum_{i_w=-N_w/2}^{N_w/2} x_1(i+i_w)x_2(i+i_w)}{\sqrt{\sum_{i_w=-N_w/2}^{N_w/2} x_1(i+i_w)^2}\sqrt{\sum_{i_w=-N_w/2}^{N_w/2} x_2(i+i_w)^2}},$ (19)

where $x_1(i)$ and $x_2(i)$ denote the $i$th entries of vectors $\mathbf{x}_1$ and $\mathbf{x}_2$, respectively, $i_w$ denotes the index within a local window, and $N_w+1$ denotes the length of each local window. The windowing is sometimes troublesome: the measured similarity depends strongly on the window length, and the measured local similarity can be discontinuous because each window is calculated separately.
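As an illustration of equation 19, the following sketch (again assuming NumPy; the function name is illustrative) slides a window of $N_w+1$ samples over both signals, truncating the window at the boundaries, a detail equation 19 leaves unspecified:

\begin{verbatim}
# Minimal sketch of equation 19; assumes NumPy.
import numpy as np

def local_correlation(x1, x2, Nw):
    # Nw is assumed even; each full window holds Nw + 1 samples.
    x1 = np.asarray(x1, dtype=float).ravel()
    x2 = np.asarray(x2, dtype=float).ravel()
    n, h = len(x1), Nw // 2
    c = np.zeros(n)
    for i in range(n):
        lo, hi = max(0, i - h), min(n, i + h + 1)  # truncate at edges
        w1, w2 = x1[lo:hi], x2[lo:hi]
        denom = np.linalg.norm(w1) * np.linalg.norm(w2)
        c[i] = np.dot(w1, w2) / denom if denom > 0 else 0.0
    return c
\end{verbatim}

To avoid these drawbacks of local windowing, Fomel (2007) proposed an elegant way to calculate a smooth local similarity by solving two inverse problems. The local similarity I use to evaluate denoising performance in this paper is defined as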

$\displaystyle \mathbf{s}=\sqrt{\mathbf{s}_1\circ\mathbf{s}_2},$ (20)

where $\mathbf{s}$ is the calculated local similarity, $\circ$ denotes the Hadamard (or Schur) product, the square root is taken element-wise, and $\mathbf{s}_1$ and $\mathbf{s}_2$ come from two least-squares inverse problems:

$\displaystyle \mathbf{s}_1$ $\displaystyle =\arg\min_{\tilde{\mathbf{s}}_1}\Arrowvert \mathbf{x}_1-\mathbf{X}_2\tilde{\mathbf{s}}_1 \Arrowvert_2^2,$ (21)
$\displaystyle \mathbf{s}_2$ $\displaystyle =\arg\min_{\tilde{\mathbf{s}}_2}\Arrowvert \mathbf{x}_2-\mathbf{X}_1\tilde{\mathbf{s}}_2 \Arrowvert_2^2,$ (22)

where $\mathbf{X}_1$ and $\mathbf{X}_2$ are diagonal operators composed from the elements of $\mathbf{x}_1$ and $\mathbf{x}_2$, i.e., $\mathbf{X}_1=\operatorname{diag}(\mathbf{x}_1)$ and $\mathbf{X}_2=\operatorname{diag}(\mathbf{x}_2)$. Equations 21 and 22 are solved via shaping regularization:

$\displaystyle \mathbf{s}_1$ $\displaystyle = [\lambda_1^2\mathbf{I} + \mathcal{T}(\mathbf{X}_2^T\mathbf{X}_2-\lambda_1^2\mathbf{I})]^{-1}\mathcal{T}\mathbf{X}_2^T\mathbf{x}_1,$ (23)
$\displaystyle \mathbf{s}_2$ $\displaystyle = [\lambda_2^2\mathbf{I} + \mathcal{T}(\mathbf{X}_1^T\mathbf{X}_1-\lambda_2^2\mathbf{I})]^{-1}\mathcal{T}\mathbf{X}_1^T\mathbf{x}_2,$ (24)

where $\mathcal{T}$ is a smoothing operator, and $\lambda_1$ and $\lambda_2$ are two parameters that control the physical dimensionality and enable fast convergence when the inversion is implemented iteratively. These two parameters can be chosen as $\lambda_1 = \Arrowvert\mathbf{X}_2^T\mathbf{X}_2\Arrowvert_2$ and $\lambda_2 = \Arrowvert\mathbf{X}_1^T\mathbf{X}_1\Arrowvert_2$ (Fomel, 2007).
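As a rough numerical sketch of equations 20, 23, and 24 for short 1D signals, the snippet below assumes NumPy and SciPy, uses Gaussian smoothing for the shaping operator $\mathcal{T}$, and replaces the iterative conjugate-gradient scheme of Fomel (2007) with a direct dense solve; these choices, and the helper names, are assumptions made for brevity:

\begin{verbatim}
# Dense sketch of equations 20, 23, and 24 for short 1D signals;
# assumes NumPy and SciPy. Gaussian smoothing stands in for T, and
# a direct solve stands in for the iterative scheme of Fomel (2007).
import numpy as np
from scipy.ndimage import gaussian_filter1d

def _shaping_solve(x, d, T):
    # Solves s = [lam^2 I + T (D^T D - lam^2 I)]^{-1} T D^T x,
    # where D = diag(d), so D^T D = diag(d**2) and D^T x = d * x.
    n = len(x)
    lam = np.max(d * d)  # lam = ||D^T D||_2, the spectral norm of diag(d**2)
    A = lam**2 * np.eye(n) + T @ (np.diag(d * d) - lam**2 * np.eye(n))
    return np.linalg.solve(A, T @ (d * x))

def smooth_local_similarity(x1, x2, sigma=5.0):
    x1 = np.asarray(x1, dtype=float).ravel()
    x2 = np.asarray(x2, dtype=float).ravel()
    n = len(x1)
    # Build T densely by smoothing each column of the identity matrix.
    T = gaussian_filter1d(np.eye(n), sigma=sigma, axis=0)
    s1 = _shaping_solve(x1, x2, T)  # equation 23
    s2 = _shaping_solve(x2, x1, T)  # equation 24
    # Equation 20: element-wise square root of the Hadamard product;
    # np.abs guards against tiny negative values from numerical error.
    return np.sqrt(np.abs(s1 * s2))
\end{verbatim}

In practice, the dense solve above would be replaced by an iterative solver that applies $\mathcal{T}$ as a filter rather than a matrix, which is what makes the method feasible for 2D and 3D data.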

