Hi,

I am trying to implement the AdaBoost.M1 algorithm as described in
"The Elements of Statistical Learning", p. 301.
I am not using Dettling's "boost" library because:
  - I don't understand the difference between LogitBoost and L2Boost;
  - I would like to use larger trees than stumps.
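For reference, here is a minimal sketch of AdaBoost.M1 (ESL, Algorithm 10.1) with rpart as the weak learner. The function name adaboost_m1, the maxdepth parameter, and the rescaling of the weights to sum to n are my own choices for illustration, not anything from rpart itself; treat it as a sketch, not a tested implementation.

```r
## Sketch of AdaBoost.M1 using rpart trees of bounded depth as weak learners.
library(rpart)

adaboost_m1 <- function(x, y, M = 50, maxdepth = 2) {
  n <- nrow(x)
  w <- rep(1 / n, n)                 # step 1: uniform initial weights
  trees <- vector("list", M)
  alpha <- numeric(M)
  dat <- data.frame(x, y = factor(y))
  for (m in seq_len(M)) {
    ## Rescale weights so they sum to n; rpart's count-based control
    ## parameters seem to behave better with count-like weights (assumption).
    fit <- rpart(y ~ ., data = dat, weights = w * n, method = "class",
                 control = rpart.control(maxdepth = maxdepth, cp = 0))
    pred <- predict(fit, dat, type = "class")
    miss <- as.numeric(pred != dat$y)          # indicator of misclassification
    err  <- sum(w * miss) / sum(w)             # step 2b: weighted error
    if (err == 0 || err >= 0.5) break          # weak learner no longer useful
    alpha[m] <- log((1 - err) / err)           # step 2c: classifier weight
    w <- w * exp(alpha[m] * miss)              # step 2d: upweight mistakes
    w <- w / sum(w)
    trees[[m]] <- fit
  }
  keep <- which(alpha != 0)
  list(trees = trees[keep], alpha = alpha[keep])
}
```

Prediction would then be a weighted vote of the stored trees, each weighted by its alpha.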

When I set the weights option to (1/n, 1/n, ..., 1/n) in the rpart or
tree function, the resulting tree is trivial (just the root, no splits),
whereas without weights, or with every weight > 1, the trees are fine.
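A small self-contained example reproducing the behaviour described above (the data here are synthetic and the variable names illustrative). One plausible explanation, which should be checked against ?rpart.control, is that rpart treats case weights like frequencies, so weights summing to 1 fall below count-based thresholds such as minsplit; rescaling the same weights to sum to n would then restore the splits.

```r
library(rpart)

set.seed(1)
n <- 200
d <- data.frame(x = rnorm(n))
d$y <- factor(ifelse(d$x > 0, "a", "b"))

## Identical relative weights, different totals:
fit_small <- rpart(y ~ x, data = d, weights = rep(1 / n, n))  # may be root only
fit_unit  <- rpart(y ~ x, data = d, weights = rep(1, n))      # normal tree

## Compare the number of nodes grown in each case:
nrow(fit_small$frame)
nrow(fit_unit$frame)
```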

So here is my question: how are weights taken into account when the
optimal tree is grown?
Has anyone implemented a boosting algorithm in R?

Regards,

Olivier Celhay   -  Student  -  Paris, France

______________________________________________
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
