Github user srowen commented on a diff in the pull request:

    https://github.com/apache/spark/pull/653#discussion_r12314729

--- Diff: docs/mllib-naive-bayes.md ---
```
@@ -58,29 +67,36 @@ optionally smoothing parameter `lambda` as input, and output a
 can be used for evaluation and prediction.

 {% highlight java %}
+import org.apache.spark.api.java.JavaPairRDD;
+import org.apache.spark.api.java.JavaRDD;
+import org.apache.spark.api.java.function.Function;
 import org.apache.spark.mllib.classification.NaiveBayes;
+import org.apache.spark.mllib.classification.NaiveBayesModel;
+import org.apache.spark.mllib.regression.LabeledPoint;
+import scala.Tuple2;

 JavaRDD<LabeledPoint> training = ... // training set
 JavaRDD<LabeledPoint> test = ... // test set

-NaiveBayesModel model = NaiveBayes.train(training.rdd(), 1.0);
+final NaiveBayesModel model = NaiveBayes.train(training.rdd(), 1.0);

-JavaRDD<Double> prediction = model.predict(test.map(new Function<LabeledPoint, Vector>() {
```
--- End diff --

Yeah, being Java, there is not the usual Vector import issue that you see in Scala, if that's what you mean. It wouldn't begin to compile without that import. Here's the touched-up version, which I couldn't make work:

```
JavaRDD<Double> prediction = model.predict(test.map(new Function<LabeledPoint, Vector>() {
  public Vector call(LabeledPoint p) {
    return p.features();
  }
}).rdd()).toJavaRDD();
```

The returned `RDD` and `JavaRDD` both have generic type `Object` instead of `Double`, and I could not see why, given that the `predict()` method plainly returns `RDD[Double]`. Maybe you see something I am missing.
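One way to sidestep the `RDD<Object>` problem entirely is to avoid the bulk `predict(RDD[Vector])` overload from Java and instead call the per-vector `predict(Vector)` overload (which `NaiveBayesModel` does expose) inside a `map`. This is a hedged sketch, not necessarily the fix the PR adopted; it assumes a live `SparkContext` and the `model`/`test` values from the diff above, so it is a fragment rather than a complete program:

```java
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.function.Function;
import org.apache.spark.mllib.classification.NaiveBayesModel;
import org.apache.spark.mllib.linalg.Vector;
import org.apache.spark.mllib.regression.LabeledPoint;

// `model` must be final (or effectively final) to be captured by the
// anonymous Function; `test` is the JavaRDD<LabeledPoint> test set above.
JavaRDD<Double> prediction =
  test.map(new Function<LabeledPoint, Double>() {
    @Override public Double call(LabeledPoint p) {
      // predict(Vector) returns a primitive double, which Java auto-boxes
      // to Double, so the resulting RDD keeps the Double element type.
      return model.predict(p.features());
    }
  });
```

Because the element type is fixed on the Java side by the `Function<LabeledPoint, Double>` signature, no Scala-to-Java type translation is involved, and the `Object` element type seen with `predict(rdd).toJavaRDD()` (an artifact of `scala.Double` being a primitive that erases to `Object` in the Scala `RDD[Double]`'s generic signature) never arises.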