Does anyone know anything about it? Or should I move this topic to an
MLlib-specific mailing list? Any information is appreciated! Thanks!
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/How-to-use-K-fold-validation-in-spark-1-0-tp8142p8172.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
Thanks Evan! I think it works!
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/How-to-use-K-fold-validation-in-spark-1-0-tp8142p8188.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
Hello,
I noticed there are some discussions about implementing K-fold validation in
MLlib on Spark, and I believe it should be in Spark 1.0 now. However, there
isn't any documentation or example of how to use it in training.
While I am reading the code to find out, does anyone use it already?
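For reference, MLlib in Spark 1.0 exposes this as `MLUtils.kFold`, which takes an RDD and returns an array of (training, validation) RDD pairs. A minimal plain-Scala sketch of the same splitting idea (on a `Seq` rather than an RDD, so it runs without a cluster; `MLUtils.kFold` uses randomized splits rather than the modulo assignment shown here):

```scala
// Sketch of k-fold splitting on a plain Seq. Each fold's validation set is
// disjoint from the others, and training = everything not in validation.
def kFold[T](data: Seq[T], numFolds: Int): Seq[(Seq[T], Seq[T])] = {
  val indexed = data.zipWithIndex
  (0 until numFolds).map { fold =>
    val validation = indexed.collect { case (x, i) if i % numFolds == fold => x }
    val training   = indexed.collect { case (x, i) if i % numFolds != fold => x }
    (training, validation)
  }
}

val folds = kFold((1 to 10).toSeq, 3)
// folds(0) trains on the elements outside validation fold 0, and so on
```

With MLlib you would instead loop over the pairs returned by `MLUtils.kFold(rdd, numFolds, seed)`, training on the first RDD of each pair and evaluating on the second.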
I used some standard Java IO libraries to write files directly to the
cluster. It is a little bit trivial, though:
import org.apache.hadoop.fs.{FileSystem, Path}
import org.apache.spark.deploy.SparkHadoopUtil

val sc = getSparkContext
val hadoopConf = SparkHadoopUtil.get.newConfiguration
val hdfsPath = "hdfs://your/path"
val fs = FileSystem.get(hadoopConf)
val path = new Path(hdfsPath)
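The write-then-close pattern that follows from a `FileSystem` handle is the same as with plain `java.io`. A self-contained sketch using a local temp file (hypothetical content; against HDFS you would obtain the stream from `fs.create(path)` instead of a `FileWriter`):

```scala
import java.io.{BufferedWriter, File, FileWriter}

// Hypothetical local file standing in for an HDFS path; the open/write/close
// sequence is identical once you have an output stream or writer.
val file = File.createTempFile("demo", ".txt")
val writer = new BufferedWriter(new FileWriter(file))
writer.write("hello from the driver\n")
writer.close()

val contents = scala.io.Source.fromFile(file).mkString
```

Remember to close the stream; with HDFS, data may not be visible to readers until the output stream is closed.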