I'm not sure I have all the details, but I think in principle, everything 
that is an output of the scan will be stored (if needed) for the backward 
pass.
For scan checkpointing, we basically hide the explicit outputs inside another 
level of scan, which forces recomputation during backprop.
To go the other way around, adding explicit outputs should make those values 
available during the backward pass without recomputation.
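
To make that concrete, here is a minimal sketch (not from this thread; the 
step function, variable names and n_steps are purely illustrative) of 
unrolling SGD with theano.scan so that only the updated parameters are 
explicit outputs, and hence the only per-step values kept for the backward 
pass:

import numpy as np
import theano
import theano.tensor as T

x = T.matrix('x')    # inputs for the inner objective
y = T.vector('y')    # targets
w0 = T.vector('w0')  # initial parameters of the inner model
lr = T.scalar('lr')  # inner-loop learning rate

def step(w, x, y, lr):
    # One SGD step on a simple least-squares objective.
    pred = T.dot(x, w)
    loss = T.mean((pred - y) ** 2)
    grad = T.grad(loss, w)
    w_new = w - lr * grad
    # Only w_new is an explicit output, so only the sequence of updated
    # parameters is stored; pred/loss/grad stay internal to the step.
    return w_new

ws, updates = theano.scan(step,
                          outputs_info=[w0],
                          non_sequences=[x, y, lr],
                          n_steps=5)

# Outer ("learning-to-learn") loss on the final inner parameters; gradients
# with respect to w0 and lr flow back through all unrolled steps.
final_loss = T.mean((T.dot(x, ws[-1]) - y) ** 2)
outer_grads = T.grad(final_loss, [w0, lr])
f = theano.function([x, y, w0, lr], [final_loss] + outer_grads,
                    updates=updates)

# Example call (shapes are arbitrary):
floatX = theano.config.floatX
xv = np.random.randn(8, 3).astype(floatX)
yv = np.random.randn(8).astype(floatX)
loss_val, gw0, glr = f(xv, yv, np.zeros(3, dtype=floatX),
                       np.asarray(0.1, dtype=floatX))

If your Theano version provides theano.scan_checkpoints, the same step 
function should in principle work with it (using its save_every_N argument) 
to trade memory for recomputation of the inner steps during backprop instead.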

On Friday, July 21, 2017 at 8:43:11 PM UTC-4, Alexander Botev wrote:
>
> So the scan checkpointing seems very interesting from the perspective 
> that it can be used for things like learning-to-learn.
> However, my question is: can we tell Theano which parts of each N-th 
> iteration to store and which not? For instance, in the learning-to-learn 
> framework where we unroll SGD, the optimal would be to store only the 
> "updated" parameters which get passed to the next time step, rather than 
> the whole computation. Is it possible to achieve something like that? 
>
