My experience is that nnet needs a lot of tuning, not only in terms of
the number of layers, but also in terms of the other parameters. My first
results with nnet, where I kept most of the default parameter values,
were very bad, as bad as the ones you describe. (But as Brian Ripley
already wrote, it is not straightforward to explain over the net how to
do it better.)
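For illustration, here is a minimal sketch of the kind of tuning I mean.
The iris data, the 100/50 split and the size/decay grid are placeholders
of mine, not anything specific to your problem, and a real comparison
would tune on a validation set or by cross-validation rather than on the
test set:

library(nnet)

set.seed(1)
idx   <- sample(nrow(iris), 100)
train <- iris[idx, ]
test  <- iris[-idx, ]

## nnet is sensitive to input scale, so map the predictors to [0,1]
rng <- apply(train[, 1:4], 2, range)
scale01 <- function(x, r) (x - r[1]) / (r[2] - r[1])
for (j in 1:4) {
  train[, j] <- scale01(train[, j], rng[, j])
  test[, j]  <- scale01(test[, j],  rng[, j])
}

## small grid over the number of hidden units and the weight decay
best <- NULL
for (size in c(2, 5, 10)) {
  for (decay in c(0, 0.01, 0.1)) {
    fit <- nnet(Species ~ ., data = train, size = size, decay = decay,
                maxit = 500, trace = FALSE)
    err <- mean(predict(fit, test, type = "class") != test$Species)
    cat("size =", size, "decay =", decay, "error =", err, "\n")
    if (is.null(best) || err < best$err)
      best <- list(size = size, decay = decay, err = err)
  }
}
best

With the default settings (size has no default you can skip, decay = 0,
maxit = 100, unscaled inputs) the results can be much worse than with a
tuned fit, which may be part of what you are seeing.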

Apart from that, such a large difference in classification accuracy
between methods is strange, but possible in principle. Data sets can have
very different structures (which means, again, that nobody can assess
your problem without knowing the data).

Christian

On Sat, 13 Mar 2004, Albedo wrote:

> I was wondering if anybody has ever tried to compare the classification
> accuracy of nnet with that of other models (rpart, tree, bagging). From
> what I know, there is no reason to expect a significant difference in
> classification accuracy between these models, yet in my particular case
> I get about a 10% error rate for the tree, rpart and bagging models and
> an 80% error rate for nnet, applied to the same data.
> 
> Thanks.
> 

***********************************************************************
Christian Hennig
Fachbereich Mathematik-SPST/ZMS, Universitaet Hamburg
[EMAIL PROTECTED], http://www.math.uni-hamburg.de/home/hennig/
#######################################################################
I recommend www.boag-online.de

