numFeatures is used in data loading:

https://github.com/apache/spark/blob/master/examples/src/main/scala/org/apache/spark/examples/mllib/SparseNaiveBayes.scala#L76
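
Concretely, the example passes numFeatures (together with minPartitions) to
MLUtils.loadLibSVMFile when it reads the LIBSVM input, before the model is
trained. A minimal sketch of that loading step (condensed from the linked
file, with names shortened), assuming the four-argument
MLUtils.loadLibSVMFile(sc, path, numFeatures, minPartitions) overload:

    import org.apache.spark.SparkContext
    import org.apache.spark.mllib.util.MLUtils

    def loadData(sc: SparkContext, params: Params) = {
      // Fall back to the context's default when no minPartitions
      // was given on the command line.
      val minPartitions =
        if (params.minPartitions > 0) params.minPartitions
        else sc.defaultMinPartitions
      // numFeatures fixes the vector size up front; if it is <= 0,
      // loadLibSVMFile makes an extra pass over the data to infer it.
      MLUtils.loadLibSVMFile(sc, params.input, params.numFeatures, minPartitions)
    }

Passing numFeatures on the command line therefore saves that extra pass over
the input when the feature dimension is already known.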

On Thu, Aug 7, 2014 at 12:47 AM, SK <skrishna...@gmail.com> wrote:
> I followed the example in
> examples/src/main/scala/org/apache/spark/examples/mllib/SparseNaiveBayes.scala.
>
> In this file, Params is defined as follows:
>
> case class Params (
>     input: String = null,
>     minPartitions: Int = 0,
>     numFeatures: Int = -1,
>     lambda: Double = 1.0)
>
> In the main function, the option parser accepts numFeatures as an option.
> But I looked at the code in more detail just now and found the following:
>
>   val model = new NaiveBayes().setLambda(params.lambda).run(training)
>
> So it looks like only the lambda parameter is used when the model is
> created. Perhaps the example needs to be cleaned up in the next release.
> I am currently using Spark version 1.0.1.
>
>
> thanks
>
> --
> View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/Naive-Bayes-parameters-tp11592p11623.html
> Sent from the Apache Spark User List mailing list archive at Nabble.com.
>
