Paper walkthrough, third-generation GCN: "Unsupervised Deep Embedding for Clustering Analysis" (5)

    $\begin{aligned}(g * x)&=U\left(U^{T} x \odot U^{T} g\right)\\&=U\left(U^{T} g \odot U^{T} x\right)\\&=g_{\theta}\left(U \Lambda U^{T}\right) x\\&=U g_{\theta}(\Lambda) U^{T} x\end{aligned}$

  P.S.: for the last two steps of the derivation, see the Courant-Fischer min-max theorem: $\underset{\operatorname{dim}(U)=k}{\min} \;\;\underset{x \in U,\|x\|=1}{\max}\; x^{H} A x= \lambda_{k}$.

  Where

Symmetric normalized Laplacian: $L^{\text{sym}}=D^{-\frac{1}{2}} L D^{-\frac{1}{2}}=D^{-\frac{1}{2}}(D-A) D^{-\frac{1}{2}}=I_{n}-D^{-\frac{1}{2}} A D^{-\frac{1}{2}}=U \Lambda U^{T}$

$U$ is the matrix of eigenvectors of the symmetric normalized Laplacian.

$\Lambda$ is the diagonal matrix of its eigenvalues.

$U^{T} x$ is the graph Fourier transform of $x$.

We can understand $g_{\theta}$ as a function of the eigenvalues of $L$, i.e. $g_{\theta}(\Lambda)$.
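To make the pipeline above concrete, here is a minimal NumPy sketch, assuming a toy 4-node path graph; the names `A`, `x`, and `g_hat` are illustrative, not from the paper. It builds $L^{\text{sym}}$, eigendecomposes it into $U \Lambda U^{T}$, and filters a signal $x$ by transforming it to the Fourier domain, scaling each component by $\hat{g}_{\theta}(\lambda_{i})$, and transforming back.

```python
import numpy as np

# Toy undirected graph: adjacency matrix of a 4-node path graph.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

# Symmetric normalized Laplacian: L_sym = I - D^{-1/2} A D^{-1/2}.
d = A.sum(axis=1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
L_sym = np.eye(len(A)) - D_inv_sqrt @ A @ D_inv_sqrt

# Eigendecomposition L_sym = U Λ U^T (eigh, since L_sym is symmetric).
lam, U = np.linalg.eigh(L_sym)

# A graph signal x (one scalar per node) and an arbitrary example filter
# ĝ_θ(λ) acting on the eigenvalues (here: a smooth low-pass choice).
x = np.array([1.0, 0.0, -1.0, 2.0])
g_hat = np.exp(-2.0 * lam)          # ĝ_θ(λ_i), the diagonal of g_θ(Λ)

# Spectral convolution: U g_θ(Λ) U^T x.
x_filtered = U @ np.diag(g_hat) @ U.T @ x
print(x_filtered)
```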

  The spectral-domain graph convolution works introduced next all build on $g_{\theta}(\Lambda)$; the parameters $\theta$ are exactly the convolution-kernel parameters the model needs to learn.

    $g_{\theta}(\Lambda)=\left[\begin{array}{cccc}\hat{g}_{\theta}\left(\lambda_{0}\right) & 0 & \cdots & 0 \\0 & \hat{g}_{\theta}\left(\lambda_{1}\right) & \cdots & 0 \\\vdots & \vdots & \ddots & \vdots \\0 & 0 & \cdots & \hat{g}_{\theta}\left(\lambda_{n-1}\right)\end{array}\right]$
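As a sketch of the most direct choice, the first-generation non-parametric filter simply assigns one learnable parameter per eigenvalue, $g_{\theta}(\Lambda)=\operatorname{diag}(\theta_{0},\ldots,\theta_{n-1})$. Continuing the variables from the snippet above (random values stand in for learned ones):

```python
# Non-parametric filter: one learnable θ_i per eigenvalue.
theta = np.random.randn(len(lam))
y = U @ np.diag(theta) @ U.T @ x   # U g_θ(Λ) U^T x
```

This direct form costs $O(n)$ parameters, is not localized in the vertex domain, and every forward pass pays the dense $O(n^{2})$ multiplications with $U$ (on top of a one-off $O(n^{3})$ eigendecomposition), which is what the polynomial parametrization in the next subsection improves.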

2.1.2 Improvement 1: Polynomial parametrization for localized filters

  The second-generation GCN adopts:

    $g_{\theta}(\Lambda)=\sum \limits_{k=0}^{K-1} \theta_{k} \Lambda^{k}$

  which, applied entrywise to the eigenvalues, gives

     $\hat{g}_{\theta}\left(\lambda_{i}\right)=\sum \limits _{k=0}^{K-1} \theta_{k} \lambda_{i}^{k}$
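Since $U \Lambda^{k} U^{T}=L^{k}$, the polynomial sum can be evaluated directly from powers of the Laplacian, so the filter needs no eigendecomposition and only mixes nodes within $K-1$ hops. A minimal sketch, reusing `L_sym`, `U`, `lam`, and `x` from the snippets above (the $\theta_{k}$ values are illustrative):

```python
K = 3
theta = np.array([0.5, -0.3, 0.1])   # θ_0 .. θ_{K-1}, illustrative values

# Spectral form: U (Σ_k θ_k Λ^k) U^T x.
g_lam = sum(theta[k] * lam**k for k in range(K))   # ĝ_θ(λ_i) per eigenvalue
y_spectral = U @ np.diag(g_lam) @ U.T @ x

# Vertex-domain form: (Σ_k θ_k L^k) x — no eigendecomposition needed.
y_vertex = sum(theta[k] * np.linalg.matrix_power(L_sym, k) @ x
               for k in range(K))

assert np.allclose(y_spectral, y_vertex)
```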
