e LDA implementation?
>
> For reference, my prototype code can be found here:
> https://github.com/marko-asplund/tech-protos/blob/master/mllib-lda/src/main/scala/fi/markoa/proto/mllib/LDADemo.scala
>
>
> thanks,
> marko
--
Carsten Schnober
Doctoral Researcher
Ubiquitous Knowledge Processing (UKP) Lab
FB 20 / Computer Science Department
... (Long, Vector)] = ...
sc.parallelize(v1).zipWithIndex
  .map { v => (computeDistances(v._1, v2), v._2) }
Is there any good practice to approach problems like this?
Thanks!
Carsten
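The zipWithIndex pattern from the message above can be sketched without a cluster, using plain Scala collections in place of an RDD (note the RDD method is zipWithIndex, not zipWithIndexes). computeDistance below is a hypothetical Euclidean distance standing in for the computeDistances helper in the message; substitute your own metric.

```scala
object DistanceDemo {
  type Vec = Array[Double]

  // Hypothetical metric: plain Euclidean distance between two vectors.
  def computeDistance(a: Vec, b: Vec): Double =
    math.sqrt(a.zip(b).map { case (x, y) => (x - y) * (x - y) }.sum)

  // Mirrors sc.parallelize(v1).zipWithIndex.map { ... } on a local Seq:
  // pair each vector with its index, then map to (distance, index).
  def distancesWithIndex(v1: Seq[Vec], v2: Vec): Seq[(Double, Long)] =
    v1.zipWithIndex.map { case (v, i) => (computeDistance(v, v2), i.toLong) }

  def main(args: Array[String]): Unit = {
    val vectors = Seq(Array(0.0, 0.0), Array(3.0, 4.0))
    val query   = Array(0.0, 0.0)
    distancesWithIndex(vectors, query).foreach(println)
  }
}
```

The same closure then transfers to the RDD version unchanged, since RDD.zipWithIndex also yields (element, index) pairs.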
ed in Spark,
has it? I just wonder whether I am interpreting the current situation
correctly.
Thanks!
Carsten
[1] https://issues.apache.org/jira/browse/SPARK-2510
As a Spark newbie, I've come across this thread. I'm playing with Word2Vec on
our Hadoop cluster, and here's my issue with classic Java serialization of
the model: I don't have SSH access to the cluster's master node.
Here's my code for computing the model:
val input = sc.textFile("README.md").
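One way to avoid Java-serializing the model to the master's local disk (and thus needing SSH access) is MLlib's built-in persistence: since Spark 1.4, Word2VecModel provides save and load, which write to any Hadoop-compatible filesystem such as HDFS. A sketch under those assumptions — sc is a running SparkContext and the HDFS path is a placeholder:

```scala
import org.apache.spark.mllib.feature.{Word2Vec, Word2VecModel}

// Tokenize the corpus: Word2Vec.fit expects an RDD of token sequences.
val input = sc.textFile("README.md").map(line => line.split(" ").toSeq)

val model: Word2VecModel = new Word2Vec().fit(input)

// Persist to HDFS rather than the master's local filesystem;
// the path below is a placeholder.
model.save(sc, "hdfs:///user/me/word2vec-model")

// Later, from any driver with cluster access:
val restored = Word2VecModel.load(sc, "hdfs:///user/me/word2vec-model")
```

This keeps the model on the cluster's shared storage, so any node (or a later job) can reload it without touching the master node directly.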