Traditional MSSA by TSVD

Consider a block of 3D data $\mathbf{D}_{time}(x,y,t)$ of $N_x \times N_y \times N_t$ samples $(x=1\cdots N_x, y=1\cdots N_y, t=1\cdots N_t)$. MSSA (Oropeza and Sacchi, 2011) operates on the data as follows: first, it transforms $\mathbf{D}_{time}(x,y,t)$ into the complex-valued frequency-domain data $\mathbf{D}_{freq}(x,y,w)$ $(w=1\cdots N_w)$. Each frequency slice of the data, at a given frequency $w_0$, can be represented by the following matrix:

$\displaystyle \mathbf{D}(w_0)=\left(\begin{array}{cccc}
D(1,1) & D(1,2) & \cdots & D(1,N_x)\\
D(2,1) & D(2,2) & \cdots & D(2,N_x)\\
\vdots & \vdots & \ddots & \vdots \\
D(N_y,1) & D(N_y,2) & \cdots & D(N_y,N_x)
\end{array}\right).$ (1)

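As a concrete illustration, this first step can be sketched in NumPy; the array names and the use of a real-input FFT are our choices here, not part of the original method description:

```python
import numpy as np

# d_time: 3D data block of shape (N_y, N_x, N_t); random values stand in
# for a real data set in this sketch.
N_y, N_x, N_t = 10, 12, 64
d_time = np.random.randn(N_y, N_x, N_t)

# Transform every trace to the frequency domain along the time axis;
# rfft keeps the non-negative frequencies of a real-valued signal.
d_freq = np.fft.rfft(d_time, axis=-1)   # shape (N_y, N_x, N_w)

# One frequency slice D(w0): an N_y-by-N_x complex matrix as in equation 1.
w0 = 5
D = d_freq[:, :, w0]
```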
To avoid notational clutter we omit the argument $w_0$. Second, MSSA constructs a Hankel matrix for each row of $\mathbf{D}$; the Hankel matrix $\mathbf{R}_i$ for row $i$ of $\mathbf{D}$ is as follows:

$\displaystyle \mathbf{R}_i=\left(\begin{array}{cccc}
D(i,1) & D(i,2) & \cdots & D(i,m)\\
D(i,2) & D(i,3) & \cdots & D(i,m+1)\\
\vdots & \vdots & \ddots & \vdots \\
D(i,N_x-m+1) & D(i,N_x-m+2) & \cdots & D(i,N_x)
\end{array}\right).$ (2)

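As a sketch, the Hankel matrix of equation 2 can be built with `scipy.linalg.hankel`; the helper name `row_hankel` is ours:

```python
from scipy.linalg import hankel

def row_hankel(row, m):
    """Hankel matrix R_i of equation 2, of size (N_x - m + 1) x m.

    Its first column is row[0..N_x-m] and its last row is
    row[N_x-m..N_x-1] (0-based indices).
    """
    n_x = len(row)
    return hankel(row[:n_x - m + 1], row[n_x - m:])
```

For the frequency slice `D` above, `row_hankel(D[i], m)` gives $\mathbf{R}_{i+1}$ (Python indices are 0-based).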
Then MSSA constructs a block Hankel matrix $\mathbf{M}$ from the matrices $\mathbf{R}_i$ as:

$\displaystyle \mathbf{M}=\left(\begin{array}{cccc}
\mathbf{R}_1 & \mathbf{R}_2 & \cdots & \mathbf{R}_n\\
\mathbf{R}_2 & \mathbf{R}_3 & \cdots & \mathbf{R}_{n+1}\\
\vdots & \vdots & \ddots & \vdots \\
\mathbf{R}_{N_y-n+1} & \mathbf{R}_{N_y-n+2} & \cdots & \mathbf{R}_{N_y}
\end{array}\right).$ (3)

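Continuing the sketch, equation 3 can be assembled from the `row_hankel` helper above; `block_hankel` is an illustrative name, and the choice of $m$ and $n$ follows the convention described in the next paragraph:

```python
import numpy as np

def block_hankel(D):
    """Block Hankel matrix M of equation 3 from an N_y x N_x slice D."""
    N_y, N_x = D.shape
    m = N_x - N_x // 2   # m = N_x - floor(N_x/2), see below
    n = N_y - N_y // 2   # n = N_y - floor(N_y/2)
    R = [row_hankel(D[i], m) for i in range(N_y)]
    # Block row i stacks R_{i+1}, ..., R_{i+n} side by side (0-based: R[i+j]).
    rows = [np.hstack([R[i + j] for j in range(n)])
            for i in range(N_y - n + 1)]
    return np.vstack(rows)   # shape I x J, I=(N_x-m+1)(N_y-n+1), J=mn
```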
The size of $\mathbf{M}$ is $I\times J$, where $I=(N_x-m+1)(N_y-n+1)$ and $J=mn$. Here, $m$ and $n$ are predefined integers chosen such that the Hankel matrix $\mathbf{R}_i$ and the block Hankel matrix $\mathbf{M}$ are close to square, for example, $m=N_x-\lfloor\frac{N_x}{2}\rfloor$ and $n=N_y-\lfloor\frac{N_y}{2}\rfloor$, where $\lfloor\cdot\rfloor$ denotes the integer part of the argument. We assume that $I>J$. After the rank of $\mathbf{M}$ is reduced via TSVD, the filtered data, with the random noise attenuated, are recovered by properly averaging along the anti-diagonals of the rank-reduced matrix. Next, we briefly review the TSVD to introduce our work. In general, the matrix $\mathbf{M}$ can be represented as

$\displaystyle \mathbf{M}=\mathbf{S}+\mathbf{N},$ (4)

where $\mathbf{S}$ and $\mathbf{N}$ denote the block Hankel matrices of the signal and of the random noise, respectively. We assume that $\mathbf{M}$ and $\mathbf{N}$ have full rank, $\mathrm{rank}(\mathbf{M})=\mathrm{rank}(\mathbf{N})=J$, and that $\mathbf{S}$ is rank-deficient, $\mathrm{rank}(\mathbf{S})=K<J$. The singular value decomposition (SVD) of $\mathbf{M}$ can be represented as:

$\displaystyle \mathbf{M} = [\mathbf{U}_1^M\quad \mathbf{U}_2^M]\left[\begin{array}{cc}
\Sigma_1^M & \mathbf{0}\\
\mathbf{0} & \Sigma_2^M
\end{array}\right]\left[\begin{array}{c}
(\mathbf{V}_1^M)^H\\
(\mathbf{V}_2^M)^H
\end{array}\right],$ (5)

where $\Sigma_1^M$ ($K\times K$) and $\Sigma_2^M$ ($(I-K)\times(J-K)$) are diagonal matrices that contain, respectively, the larger and the smaller singular values. $\mathbf{U}_1^M$ ($I\times K$), $\mathbf{U}_2^M$ ($I\times(I-K)$), $\mathbf{V}_1^M$ ($J\times K$), and $\mathbf{V}_2^M$ ($J\times(J-K)$) denote the associated matrices of singular vectors. The symbol $[\cdot]^H$ denotes the conjugate transpose of a matrix. Generally, the signal is more energy-concentrated and correlated than the random noise. Thus, the larger singular values and their associated singular vectors represent the signal, while the smaller singular values and their associated singular vectors represent the random noise. We set $\Sigma_2^M$ to $\mathbf{0}$ to attenuate the random noise as follows:

$\displaystyle \tilde{\mathbf{M}} = \mathbf{U}_1^M\Sigma_1^M(\mathbf{V}_1^M)^H.$ (6)

Equation 6 is referred to as the TSVD.
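As a minimal sketch, equations 5 and 6 map directly onto NumPy's SVD routine; the function name `tsvd` is ours, and the rank $K$ is assumed to be known (in practice it must be supplied by the user):

```python
import numpy as np

def tsvd(M, K):
    """Truncated SVD of equation 6: keep the K largest singular values.

    U[:, :K], s[:K], and Vh[:K] play the roles of U_1^M, Sigma_1^M, and
    (V_1^M)^H in equation 5; discarding the remaining factors is
    equivalent to setting Sigma_2^M to zero.
    """
    U, s, Vh = np.linalg.svd(M, full_matrices=False)
    return (U[:, :K] * s[:K]) @ Vh[:K]
```

Applied to the block Hankel matrix of each frequency slice, for example as `tsvd(block_hankel(D), K)`, this yields the rank-reduced matrix $\tilde{\mathbf{M}}$.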

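The anti-diagonal averaging mentioned earlier, which maps $\tilde{\mathbf{M}}$ back to a filtered $N_y\times N_x$ frequency slice, can be sketched as follows; `antidiagonal_average` is an illustrative name, and the explicit loops trade speed for clarity:

```python
import numpy as np

def antidiagonal_average(M_tilde, N_y, N_x):
    """Average all entries of the rank-reduced block Hankel matrix that
    correspond to the same sample D(y, x) of the frequency slice."""
    m = N_x - N_x // 2
    n = N_y - N_y // 2
    ry, cx = N_x - m + 1, m                  # size of each block R_i
    D = np.zeros((N_y, N_x), dtype=M_tilde.dtype)
    count = np.zeros((N_y, N_x))
    for a in range(N_y - n + 1):             # block row index
        for b in range(n):                   # block column index
            # Block (a, b) is R_{a+b}; its entry (p, q) maps to D(a+b, p+q).
            block = M_tilde[a * ry:(a + 1) * ry, b * cx:(b + 1) * cx]
            for p in range(ry):
                for q in range(cx):
                    D[a + b, p + q] += block[p, q]
                    count[a + b, p + q] += 1
    return D / count
```

Under these assumptions, running `block_hankel`, `tsvd`, and `antidiagonal_average` over all frequency slices, followed by an inverse FFT along the frequency axis, completes the MSSA denoising flow.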
