
Probabilistic Graphical Model for Image Reconstruction
This image reconstruction algorithm, based on a probabilistic graphical model and presented by Shanrui Zhang, fills in missing pixels in images. It uses compressed sensing to achieve the effect of full sampling from few measurements, maps the signal to a lower-dimensional observation space, and estimates the signal with a hierarchical prior model.
Presentation Transcript
Image Reconstruction Algorithm Based on a Probabilistic Graphical Model. Shanrui Zhang, csc696
1. Problem 2. Approach 3. Results
Image Reconstruction: a technique that fills in the missing pixels of an image, reconstructing them from the pixel information of the surrounding background. Compressed sensing: in the signal sampling process, a small number of sampling points is used to achieve the same effect as full sampling.
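As a rough, hypothetical illustration of this problem setup (the image size and sampling ratio below are assumptions, not taken from the slides), the sketch keeps only a random subset of pixels and leaves the rest to be reconstructed:

```python
import numpy as np

# Hypothetical illustration of the missing-pixel setup: keep a random 25% of
# the pixels of a 64x64 image and mark the rest as missing. Reconstruction
# must fill the masked pixels from the observed ones plus a prior.
rng = np.random.default_rng(0)

image = rng.random((64, 64))              # stand-in for a real image
mask = rng.random(image.shape) < 0.25     # True where a pixel was sampled

observed = np.where(mask, image, np.nan)  # NaN marks the missing pixels
print(f"kept {mask.mean():.0%} of the pixels")
```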
The observation matrix maps the high-dimensional signal x to the low-dimensional space: y = Φx = ΦΨs, where
y: observation value (known, the compressed image), dimension M
Φ: observation matrix (sparse sampling)
Ψ: sparse-representation matrix (Fourier transform, signal -> frequency)
s: sparse coefficient vector (the natural signal x is not sparse itself)
x: input signal (unknown, the one we need to recover), dimension N
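A minimal sketch of this observation model follows; the sizes N, M, and the sparsity level K are illustrative assumptions. The signal x is dense, but its Fourier representation s is sparse, and Φ subsamples x down to M measurements:

```python
import numpy as np

# Sketch of the observation model y = Phi @ x = Phi @ Psi @ s
# (N, M, K are illustrative, not taken from the slides).
rng = np.random.default_rng(1)
N, M, K = 256, 64, 8                     # signal length, measurements, nonzeros

Psi = np.fft.ifft(np.eye(N), axis=0)     # sparsifying (inverse Fourier) basis
s = np.zeros(N, dtype=complex)
s[rng.choice(N, K, replace=False)] = rng.standard_normal(K)  # sparse coefficients

x = Psi @ s                                        # natural signal: not sparse itself
Phi = np.eye(N)[rng.choice(N, M, replace=False)]   # sparse-sampling observation matrix

y = Phi @ x                                        # known, compressed observation
print(x.shape, y.shape)                            # (256,) (64,)
```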
In general, the overall goal of signal representation is to estimate a vector α from the mathematical model y = Cα + w. Here y ∈ C^M is the observation vector, w is additive Gaussian white noise with covariance matrix λ^(-1) I_M, and λ is the noise precision coefficient. The matrix C ∈ C^(M×L) is a dictionary matrix with more columns than rows (M < L), and α is the vector to be estimated. Factorize the joint probability density function of y, α, and the hyperparameters, and express it with a 3-L (three-layer) hierarchical prior model; the factor graph of the 3-L model can then be obtained from this factorization. [Figure: vector graphical model (factor graph) of the 3-L hierarchical model.]
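The factorization formula itself did not survive extraction from the slide. One common form for a three-layer hierarchical sparse prior of this kind is sketched below; the Gaussian likelihood and the per-component factor choices are assumptions used for illustration, not recovered from the slide:

```latex
% One common factorization of a 3-L hierarchical prior model (illustrative;
% the slide's exact priors are not recoverable from the transcript).
\[
  p(\mathbf{y},\boldsymbol{\alpha},\boldsymbol{\gamma},\boldsymbol{\eta})
  = p(\mathbf{y}\mid\boldsymbol{\alpha})
    \prod_{l=1}^{L} p(\alpha_l\mid\gamma_l)\, p(\gamma_l\mid\eta_l)\, p(\eta_l),
\]
\[
  p(\mathbf{y}\mid\boldsymbol{\alpha})
  = \mathcal{N}\!\bigl(\mathbf{C}\boldsymbol{\alpha},\ \lambda^{-1}\mathbf{I}_M\bigr),
  \qquad
  p(\alpha_l\mid\gamma_l) = \mathcal{N}(0,\ \gamma_l).
\]
```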
Algorithm
1. Initialize the hyperparameters γ, η and the noise precision λ.
2. Compute the mean and variance of α in the first iteration: μ̂ = λ Σ C^H y, with Σ = (λ C^H C + diag(γ)^(-1))^(-1).
3. Update λ, γ, and η in subsequent iterations: λ = M / (||y - C μ̂||₂² + tr(C Σ C^H)), η_l = (1 + a) / (γ_l + b), and each γ_l is re-estimated from |μ̂_l|² + Σ_ll and η_l as the positive root of a quadratic equation.
4. After several iterations, once the algorithm converges, the estimate of α is α̂ = μ̂.
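The exact update formulas on this slide are only partially recoverable, so the following is a sketch of the overall loop under simplifying assumptions: it uses a generic sparse-Bayesian-learning style re-estimation of γ and λ, omits the third (η) layer of the hierarchy, and the function and variable names are hypothetical, not the author's.

```python
import numpy as np

def reconstruct(C, y, n_iter=50):
    """Sketch of the iterative scheme; NOT the slide's exact 3-L updates.

    Re-estimates the per-component variances gamma and the noise precision
    lam from the posterior moments of alpha, omitting the eta layer.
    """
    M, L = C.shape
    gamma = np.ones(L)          # step 1: initialize hyperparameters
    lam = 1.0                   # noise precision

    for _ in range(n_iter):
        # step 2: posterior covariance and mean of alpha
        Sigma = np.linalg.inv(lam * C.conj().T @ C + np.diag(1.0 / gamma))
        mu = lam * Sigma @ C.conj().T @ y

        # step 3: re-estimate hyperparameters from the posterior moments
        gamma = np.abs(mu) ** 2 + np.real(np.diag(Sigma))
        resid = y - C @ mu
        lam = M / (np.linalg.norm(resid) ** 2
                   + np.real(np.trace(C @ Sigma @ C.conj().T)))

    return mu                   # step 4: converged estimate of alpha
```

With the Φ, Ψ, and y from the earlier observation-model sketch, calling `reconstruct(Phi @ Psi, y)` would return an estimate of the sparse coefficient vector s, from which the image estimate is Ψ times that vector.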