Thanks a lot for the suggestions. By "freeze", the authors mean:
"Refreshment may be done by propagating all the
elements of the new training set in the tree structure and associating to a
terminal leaf the average
output value of the elements having reached this leaf."
In the most naive form, it amounts to pushing every element of the new training set through the fixed tree and replacing the value stored at each terminal leaf with the mean output of the elements that reach it.
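For illustration, here is a minimal sketch of that naive refresh, assuming a fitted scikit-learn DecisionTreeRegressor and its apply method to find the leaf each sample reaches (refresh_leaf_values is just an illustrative helper name, not scikit-learn API):

import numpy as np
from sklearn.tree import DecisionTreeRegressor

def refresh_leaf_values(tree, X_new, y_new):
    # propagate the new training set through the frozen tree structure
    leaf_ids = tree.apply(X_new)
    # associate to each terminal leaf the mean output of the samples reaching it
    return {leaf: y_new[leaf_ids == leaf].mean() for leaf in np.unique(leaf_ids)}

rng = np.random.RandomState(0)
X0, y0 = rng.rand(100, 2), rng.rand(100)
tree = DecisionTreeRegressor(max_depth=3).fit(X0, y0)   # structure is fixed here
X1, y1 = rng.rand(100, 2), rng.rand(100)                # the new training set
new_leaf_values = refresh_leaf_values(tree, X1, y1)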
Hi Pierre-Luc,
In addition to Andy's suggestion, you might have a look at the GBRT
code, which implements a function to update the values stored at
leaves (== "terminal regions")
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/ensemble/gradient_boosting.py#L198
Is that what you are looking for?
I think you can just implement a new estimator on top of the tree, by
using the apply function to get the leaf a sample ends up in.
Then you can update your class estimates or learn something else on top
of that.
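Something along these lines, for instance (a rough sketch only; FrozenTreeRegressor and its fallback for leaves unseen during the refresh are illustrative choices, not scikit-learn API):

import numpy as np

class FrozenTreeRegressor(object):
    """Reuse a fitted tree's structure, only refreshing its leaf values."""

    def __init__(self, fitted_tree):
        self.tree_ = fitted_tree        # structure stays frozen

    def fit(self, X, y):
        # apply() gives the index of the leaf each sample ends up in
        leaves = self.tree_.apply(X)
        self.leaf_values_ = {leaf: y[leaves == leaf].mean()
                             for leaf in np.unique(leaves)}
        return self

    def predict(self, X):
        leaves = self.tree_.apply(X)
        # fall back to the original tree for leaves unseen during the refresh
        fallback = self.tree_.predict(X)
        return np.array([self.leaf_values_.get(leaf, default)
                         for leaf, default in zip(leaves, fallback)])

You would fit a DecisionTreeRegressor once to fix the structure, then call FrozenTreeRegressor(tree).fit(X_new, y_new) at each iteration with the regenerated targets.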
On 02/18/2015 01:31 PM, Pierre-Luc Bacon wrote:
In the field of reinforcement learning (RL), the Fitted-Q algorithm of
Ernst 2005 (http://www.jmlr.org/papers/volume6/ernst05a/ernst05a.pdf)
relies on the ability to fix the tree structure to ensure convergence (see
p. 515 of the JMLR paper).
The warm_start option is useful, but does not fully allow this: it grows the ensemble with additional trees rather than refreshing the values stored at the leaves of an already-fitted tree.