I’m sorry if this question seems out of place, but I have tried asking it in 
other channels and still wasn’t able to understand the PCD implementation in 
the tutorial (the one I’m referring to is 
https://github.com/lisa-lab/DeepLearningTutorials/blob/master/code/rbm.py), so 
I would appreciate some help here.

If I understand the logic correctly, PCD uses the last Gibbs-sampled hidden 
layer from the previous minibatch (i.e. the state stored in ‘persistent’) to 
generate a reconstruction, and compares it to the input/visible layer of the 
current minibatch. How does that train the model if it is comparing data from 
different minibatches?

Or is it that the two layers are not being compared at all, but are completely 
independent of each other? (That is, the current minibatch is used to compute 
the positive phase, the data-driven hidden activations, while the chain carried 
over from the previous minibatch is used to generate the negative phase, the 
model’s ‘imagination’, continuing from wherever it last left off?)
