Hi, I have the same exception. Can you tell me how you fixed it? Thank you!
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/wholeTextFiles-not-working-with-HDFS-tp7490p7665.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
Thanks. Now I know how to broadcast the dataset, but I still wonder how, after
broadcasting it, I can apply my algorithm to train the model on the workers. To
describe my question in detail: the following code is used to train an LDA
(Latent Dirichlet Allocation) model with JGibbLDA on a single machine.
Someone suggested that I use Mahout, but I'm not familiar with it, and in that
case using Mahout would add complexity to my program. I'd like to run the
algorithm in Spark. I'm a beginner; can you give me some suggestions?
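One common pattern for this is: broadcast the shared dataset with `sc.broadcast(...)`, then run one independent training job per partition inside `rdd.mapPartitions(...)`, where each task reads the broadcast value and calls the single-machine trainer. Below is a minimal Spark-free sketch of that pattern in plain Java, so the data flow is visible without a cluster; `trainOnPartition` is a hypothetical stand-in for a JGibbLDA run, and in real Spark code the loop would be the `mapPartitions` closure.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class BroadcastSketch {
    public static void main(String[] args) {
        // Shared dataset: in Spark this would be
        //   Broadcast<List<String>> bc = sc.broadcast(docs);
        final List<String> sharedDocs = Arrays.asList("doc a", "doc b", "doc c");

        // Stand-in for the RDD's partitions: per-partition parameters,
        // e.g. different random seeds for independent LDA runs.
        List<Integer> seeds = Arrays.asList(1, 2, 3);

        // In Spark: seedRdd.mapPartitions(iter -> { ... bc.value() ... })
        List<String> models = new ArrayList<>();
        for (int seed : seeds) {
            // Each "worker" reads the broadcast dataset and trains its own model.
            models.add(trainOnPartition(sharedDocs, seed));
        }
        System.out.println(models);
    }

    // Hypothetical per-partition training step; a real version would
    // invoke the single-machine JGibbLDA trainer here.
    static String trainOnPartition(List<String> docs, int seed) {
        return "model(seed=" + seed + ", docs=" + docs.size() + ")";
    }
}
```

The key point is that each task only reads the broadcast value; the trained models are what the tasks return, and the driver collects them.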
Thank you for your reply. I don't quite understand how to do one-vs-all
manually for multiclass training. As for the second question: my algorithm is
implemented in Java and designed for a single machine. How can I broadcast the
dataset to each worker and train models on the workers? Thank you very much.
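Manual one-vs-all just means K separate binary problems: for each class c, relabel every example as 1.0 if its label equals c and 0.0 otherwise, train a binary classifier (e.g. MLlib's SVMWithSGD) on the relabeled data, and at prediction time pick the class whose classifier gives the highest score. A small Spark-free sketch of the two mechanical pieces, relabeling and the argmax, with the classifier scores left as inputs:

```java
public class OneVsAllSketch {
    // Step 1: relabel a multiclass label into the binary label seen by
    // classifier c. In Spark you would map each LabeledPoint this way
    // before training the c-th binary model.
    static double relabel(double label, double positiveClass) {
        return label == positiveClass ? 1.0 : 0.0;
    }

    // Step 2: predict by taking the argmax over the K classifiers' scores
    // (e.g. the raw margins of K one-vs-rest SVMs).
    static int argmax(double[] scores) {
        int best = 0;
        for (int c = 1; c < scores.length; c++) {
            if (scores[c] > scores[best]) best = c;
        }
        return best;
    }

    public static void main(String[] args) {
        // An example with true label 2.0, as seen by classifiers 0 and 2:
        System.out.println(relabel(2.0, 0.0)); // 0.0
        System.out.println(relabel(2.0, 2.0)); // 1.0
        // Hypothetical margins from three one-vs-rest classifiers:
        System.out.println(argmax(new double[]{-0.3, 1.2, 0.4})); // 1
    }
}
```

For the margin-based argmax to work with MLlib's SVM, clear the default threshold on each trained model so `predict` returns the raw score rather than a 0/1 label.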
Hi All,
As we know, in MLlib the SVM is used for binary classification. I wonder how
to train an SVM model for multiclass classification in MLlib. In addition, how
can I apply a machine learning algorithm in Spark if the algorithm isn't
included in MLlib? Thank you.
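For background on what MLlib's binary SVM trainer is doing under the hood: it minimizes the hinge loss with (stochastic) gradient descent. This is not MLlib's actual code, just a tiny self-contained sketch of the subgradient update on toy 2-D data, with labels in {-1, +1} and no bias or regularization:

```java
public class HingeSgdSketch {
    // One pass of subgradient descent on the hinge loss max(0, 1 - y * w.x).
    // Only points that are misclassified or inside the margin update w.
    static void epoch(double[][] xs, int[] ys, double[] w, double lr) {
        for (int i = 0; i < xs.length; i++) {
            double margin = ys[i] * dot(w, xs[i]);
            if (margin < 1.0) {
                for (int j = 0; j < w.length; j++) {
                    w[j] += lr * ys[i] * xs[i][j];
                }
            }
        }
    }

    static double dot(double[] a, double[] b) {
        double s = 0.0;
        for (int j = 0; j < a.length; j++) s += a[j] * b[j];
        return s;
    }

    public static void main(String[] args) {
        // Linearly separable toy data: class +1 on the right, -1 on the left.
        double[][] xs = {{2.0, 1.0}, {3.0, 0.5}, {-2.0, -1.0}, {-3.0, 0.0}};
        int[] ys = {1, 1, -1, -1};
        double[] w = {0.0, 0.0};
        for (int t = 0; t < 50; t++) epoch(xs, ys, w, 0.1);
        // After training, sign(w . x) separates the two classes.
        System.out.println(dot(w, xs[0]) > 0); // true
        System.out.println(dot(w, xs[2]) < 0); // true
    }
}
```

Extending this binary classifier to multiple classes is exactly where the one-vs-all scheme discussed in this thread comes in: train one such model per class and predict with the highest-scoring one.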