_predict.sum()
9077

>>> nb = NaiveBayes.train(lp)
>>> nb_predict = nb.predict(predict_feat)
>>> nb_predict.sum()
10287.0

>>> rf = RandomForest.trainClassifier(lp, numClasses=2, categoricalFeaturesInfo={}, numTrees=100, seed=422)
>>> rf_predict = rf.predict(predict_feat)
>>> rf_predict.sum()
0.0

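A quick follow-up check, not part of the run above, would be to score the
training features themselves and dump the learned trees (this assumes rf
and lp are still in scope from the session):

>>> # Not in the original run: if this also sums to 0.0, the forest is
>>> # predicting 0.0 for everything, not just for predict_feat.
>>> train_feat = lp.map(lambda p: p.features)
>>> rf.predict(train_feat).sum()
>>>
>>> # Dump the trees to see whether they degenerated into single-leaf
>>> # stumps that always predict 0.0.
>>> print(rf.toDebugString())
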
This code was all run back to back, so I didn't change anything in between.
Does anybody have a possible explanation for this?
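
In case it helps anyone reproduce this, here is a minimal self-contained
sketch of the same comparison (NaiveBayes and RandomForest only, since those
are the two models fully visible above). The post doesn't show how lp and
predict_feat were actually built, so the synthetic data below is purely an
assumption for illustration:

import random
from pyspark import SparkContext
from pyspark.mllib.regression import LabeledPoint
from pyspark.mllib.classification import NaiveBayes
from pyspark.mllib.tree import RandomForest

sc = SparkContext(appName="rf-comparison-sketch")

# Hypothetical data: label is 1.0 when the first feature exceeds 0.5, so the
# task is easily learnable and a healthy model should predict plenty of 1s.
def make_point(_):
    x = [random.random() for _ in range(5)]
    return LabeledPoint(1.0 if x[0] > 0.5 else 0.0, x)

lp = sc.parallelize(range(20000)).map(make_point).cache()
predict_feat = lp.map(lambda p: p.features)

nb = NaiveBayes.train(lp)
print(nb.predict(predict_feat).sum())

rf = RandomForest.trainClassifier(lp, numClasses=2,
                                  categoricalFeaturesInfo={},
                                  numTrees=100, seed=422)
print(rf.predict(predict_feat).sum())
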
Thanks!

--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Extremely-poor-predictive-performance-with-RF-in-mllib-tp24112.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.