Hi Saeed,

It worked this time.

Thanks, I appreciate it very much!



On Thu, Apr 29, 2010 at 5:23 PM, Saeed Abu Nimeh <sabun...@gmail.com> wrote:

> In svm.roc <- prediction(attributes(svm.pred)$decision.values, valid),
> valid should be the output variable (the labels) of the validation set,
> maybe valid[,1], assuming it is in the first column. I think this is a
> typo in my example :)
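
A minimal sketch of the corrected call under that fix, assuming the
validation labels sit in a column named "out" (a guess based on the
as.factor(out) formula quoted below); use valid[,1] instead if they are in
the first column:

  library(ROCR)
  # the second argument must be the true label vector,
  # not the whole validation data frame
  svm.roc <- prediction(attributes(svm.pred)$decision.values, valid$out)
  svm.auc <- performance(svm.roc, 'tpr', 'fpr')  # ROC: TPR against FPR
  plot(svm.auc)

If the resulting curve sits below the diagonal, the sign of the decision
values is tied to the other factor level; negating them flips the curve.
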
>
> On Thu, Apr 29, 2010 at 5:13 PM, Changbin Du <changb...@gmail.com> wrote:
> >
> >
> > Hi Saeed,
> >
> > Thanks so much for the help. I ran your code and ran into the following
> > problem; do you have any comments or suggestions?
> >
> >> svm.p<-svm(as.factor(out) ~ ., data=train[,c( 2:18, 20:21, 24, 27:32)],
> >> probability=TRUE, method="C-classification",
> > + kernel="radial", cost=bestc, gamma=bestg, cross=10)
> >>
> >> svm.pred<-predict(svm.p, valid, decision.values = TRUE, probability =
> >> TRUE)
> >
> >>  library(ROCR)
> >>  svm.roc <- prediction(attributes(svm.pred)$decision.values, valid)
> > Error in prediction(attributes(svm.pred)$decision.values, valid) :
> >   Number of cross-validation runs must be equal for predictions and labels.
> >
> >> length(svm.pred)
> > [1] 943
> >> dim(valid)
> > [1] 943  32
> >
> > On Thu, Apr 29, 2010 at 4:49 PM, Saeed Abu Nimeh <sabun...@gmail.com> wrote:
> >>
> >>  svm.model <- svm(y~.,data=dataset,probability=TRUE)
> >>  svm.pred<-predict(svm.model, test.set, decision.values = TRUE,
> >> probability = TRUE)
> >>  library(ROCR)
> >>  svm.roc <- prediction(attributes(svm.pred)$decision.values, test.set)
> >>  svm.auc <- performance(svm.roc, 'tpr', 'fpr')
> >>  plot(svm.auc)
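
A side note on the NULL probabilities reported further down: e1071's svm()
chooses its mode from the class of y (the relevant argument is type, not
method), so a numeric 0/1 response gives eps-regression, and regression
predictions come back with no "probabilities" attribute, hence the NULL.
A minimal sketch of the factor fix, assuming the binary outcome column is
named out and leaving out the column selection used above for brevity:

  library(e1071)
  # coerce the 0/1 response to a factor so svm() fits C-classification,
  # which is what lets predict() return class probabilities
  train$out <- as.factor(train$out)
  valid$out <- as.factor(valid$out)
  svm.model <- svm(out ~ ., data = train, probability = TRUE)
  svm.pred  <- predict(svm.model, valid, decision.values = TRUE, probability = TRUE)
  attr(svm.pred, "probabilities")[1:4, ]  # per-class probabilities, no longer NULL
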
> >>
> >>
> >> On Thu, Apr 29, 2010 at 4:17 PM, Changbin Du <changb...@gmail.com> wrote:
> >> >> x <- train[,c( 2:18, 20:21, 24, 27:31)]
> >> >> y <- train$out
> >> >>
> >> >> svm.pr <- svm(x, y, probability = TRUE, method="C-classification",
> >> > kernel="radial", cost=bestc, gamma=bestg, cross=10)
> >> >>
> >> >> pred <- predict(svm.pr, valid[,c( 2:18, 20:21, 24, 27:31)],
> >> > decision.values = TRUE, probability = TRUE)
> >> >>      attr(pred, "decision.values")[1:4,]
> >> >        16         23         43         52
> >> > 1.08157648 0.51241842 0.06234319 1.20656580
> >> >>      attr(pred, "probabilities")[1:4,]
> >> > NULL
> >> >
> >> >
> >> > Hi David and the R community,
> >> >
> >> > I am trying to print out the probabilities and set a threshold to
> >> > make an ROC curve. I don't know why the probabilities show up as NULL.
> >> >
> >> > y <- train$out consists of binary 0 and 1 values.
> >> >
> >> > Can you help me with this?
> >> >
> >> > Thanks so much!
> >> >
> >> >
> >> >
> >> > --
> >> > Sincerely,
> >> > Changbin
> >> >
> >
> > --
> > Sincerely,
> > Changbin
>



-- 
Sincerely,
Changbin
