Maggie Wang wrote:
> Hi,
> 
> I run the following tuning function for svm. It's very strange that every
> time I run this function, best.parameters gives different values.
> 
> [A]
> 
>> svm.tune <- tune(svm, train.x, train.y,
>                   validation.x = train.x, validation.y = train.y,
>                   ranges = list(gamma = 2^(-1:2),
>                                 cost = 2^(-3:2)))
> 
> 
> # where train.x and train.y are matrices
> 
> # output commands:
> 
>> svm.tune$best.parameters$cost
>> svm.tune$best.parameters$gamma
> 
> result:
> 
>  cost gamma
>  0.25  4.00
> 
> run A again:
> 
>  cost gamma
>     1     4
> 
> again:
> 
>  cost gamma
>  0.25  4.00
> 
> The result is so unstable; if it varies this much, why do we need to
> tune at all? Do you know if this behavior is normal? Can we trust
> best.parameters for prediction?

I guess you do not have very many observations in your dataset. In that
case, which parameters come out best depends heavily on the particular
cross-validation folds, and therefore you get quite different results.
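
For what it's worth, fixing the RNG seed makes the cross-validation
folds (and hence best.parameters) reproducible, and repeating the cross
validation averages out the fold-to-fold noise. A minimal sketch using
your train.x / train.y (the seed and nrepeat values are just examples).
Note also that, if I read ?tune correctly, validation.x / validation.y
are only used with tune.control(sampling = "fix"), so they have no
effect under the default cross validation:

library(e1071)

## Fixing the seed makes the fold assignment reproducible,
## so repeated calls return the same best.parameters:
set.seed(1)  # illustrative seed value
svm.tune <- tune(svm, train.x, train.y,
                 ranges = list(gamma = 2^(-1:2),
                               cost  = 2^(-3:2)))

## Alternatively, repeat the 10-fold cross validation several
## times and aggregate the error, which stabilizes the choice
## of parameters (nrepeat = 5 is just an example):
svm.tune2 <- tune(svm, train.x, train.y,
                  ranges = list(gamma = 2^(-1:2),
                                cost  = 2^(-3:2)),
                  tunecontrol = tune.control(cross = 10, nrepeat = 5))

svm.tune2$best.parameters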

Uwe Ligges



> 
> Thank you so much for helping out!!
> 
> Best Regards,
> Maggie

______________________________________________
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.
