[ https://issues.apache.org/jira/browse/SPARK-13289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15176296#comment-15176296 ]

Qi Dai commented on SPARK-13289:
--------------------------------

I tried "build/mvn -Pyarn -Phadoop-2.6 -Dhadoop.version=2.6.0 -Phive 
-Phive-thriftserver -DskipTests clean package". I looks successful but I can't 
run it in yarn-client mode. Then I turned to "./make-distribution.sh --name 
spark210 --tgz -Psparkr -Phadoop-2.6 -Phive -Phive-thriftserver -Pyarn" but it 
can't go through. I also tried the compiled one at: 
http://people.apache.org/~pwendell/spark-nightly/spark-master-bin/latest/ and 
it also can't run with yarn-client mode. It showed some error related with yarn:

16/03/02 14:22:52 ERROR SparkContext: Error initializing SparkContext.
java.lang.ClassNotFoundException: org.apache.spark.deploy.yarn.history.YarnHistoryService
        at scala.reflect.internal.util.AbstractFileClassLoader.findClass(AbstractFileClassLoader.scala:62)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
        at java.lang.Class.forName0(Native Method)
        at java.lang.Class.forName(Class.java:348)
        at org.apache.spark.util.Utils$.classForName(Utils.scala:174)
        at org.apache.spark.scheduler.cluster.SchedulerExtensionServices$$anonfun$start$5$$anonfun$apply$4.apply(SchedulerExtensionService.scala:111)
        at org.apache.spark.scheduler.cluster.SchedulerExtensionServices$$anonfun$start$5$$anonfun$apply$4.apply(SchedulerExtensionService.scala:110)
        at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245)
        at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245)
        at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
        at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
        at scala.collection.TraversableLike$class.map(TraversableLike.scala:245)
        at scala.collection.mutable.ArrayOps$ofRef.map(ArrayOps.scala:186)
        at org.apache.spark.scheduler.cluster.SchedulerExtensionServices$$anonfun$start$5.apply(SchedulerExtensionService.scala:110)
        at org.apache.spark.scheduler.cluster.SchedulerExtensionServices$$anonfun$start$5.apply(SchedulerExtensionService.scala:108)
        at scala.Option.map(Option.scala:146)
        at org.apache.spark.scheduler.cluster.SchedulerExtensionServices.start(SchedulerExtensionService.scala:108)
        at org.apache.spark.scheduler.cluster.YarnSchedulerBackend.start(YarnSchedulerBackend.scala:80)
        at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:61)
        at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:143)
        at org.apache.spark.SparkContext.<init>(SparkContext.scala:501)
        at org.apache.spark.repl.Main$.createSparkContext(Main.scala:98)
        at $line3.$read$$iw$$iw.<init>(<console>:12)
        at $line3.$read$$iw.<init>(<console>:22)
        at $line3.$read.<init>(<console>:24)
        at $line3.$read$.<init>(<console>:28)
        at $line3.$read$.<clinit>(<console>)
        at $line3.$eval$.$print$lzycompute(<console>:7)
        at $line3.$eval$.$print(<console>:6)
        at $line3.$eval.$print(<console>)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:497)
        at scala.tools.nsc.interpreter.IMain$ReadEvalPrint.call(IMain.scala:784)
        at scala.tools.nsc.interpreter.IMain$Request.loadAndRun(IMain.scala:1039)
        at scala.tools.nsc.interpreter.IMain$WrappedRequest$$anonfun$loadAndRunReq$1.apply(IMain.scala:636)
        at scala.tools.nsc.interpreter.IMain$WrappedRequest$$anonfun$loadAndRunReq$1.apply(IMain.scala:635)
        at scala.reflect.internal.util.ScalaClassLoader$class.asContext(ScalaClassLoader.scala:31)
        at scala.reflect.internal.util.AbstractFileClassLoader.asContext(AbstractFileClassLoader.scala:19)
        at scala.tools.nsc.interpreter.IMain$WrappedRequest.loadAndRunReq(IMain.scala:635)
        at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:567)
        at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:563)
        at scala.tools.nsc.interpreter.ILoop.reallyInterpret$1(ILoop.scala:802)
        at scala.tools.nsc.interpreter.ILoop.interpretStartingWith(ILoop.scala:836)
        at scala.tools.nsc.interpreter.ILoop.command(ILoop.scala:694)
        at scala.tools.nsc.interpreter.ILoop.processLine(ILoop.scala:404)
        at org.apache.spark.repl.SparkILoop$$anonfun$initializeSpark$1.apply$mcZ$sp(SparkILoop.scala:39)
        at org.apache.spark.repl.SparkILoop$$anonfun$initializeSpark$1.apply(SparkILoop.scala:38)
        at org.apache.spark.repl.SparkILoop$$anonfun$initializeSpark$1.apply(SparkILoop.scala:38)
        at scala.tools.nsc.interpreter.IMain.beQuietDuring(IMain.scala:213)
        at org.apache.spark.repl.SparkILoop.initializeSpark(SparkILoop.scala:38)
        at org.apache.spark.repl.SparkILoop.loadFiles(SparkILoop.scala:95)
        at scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply$mcZ$sp(ILoop.scala:922)
        at scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply(ILoop.scala:911)
        at scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply(ILoop.scala:911)
        at scala.reflect.internal.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:97)
        at scala.tools.nsc.interpreter.ILoop.process(ILoop.scala:911)
        at org.apache.spark.repl.Main$.doMain(Main.scala:64)
        at org.apache.spark.repl.Main$.main(Main.scala:47)
        at org.apache.spark.repl.Main.main(Main.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:497)
        at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:734)
        at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
        at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
16/03/02 14:22:52 WARN YarnSchedulerBackend$YarnSchedulerEndpoint: Attempted to request executors before the AM has registered!
16/03/02 14:22:52 WARN MetricsSystem: Stopping a MetricsSystem that is not running
java.lang.ClassNotFoundException: org.apache.spark.deploy.yarn.history.YarnHistoryService
  at scala.reflect.internal.util.AbstractFileClassLoader.findClass(AbstractFileClassLoader.scala:62)
  at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
  at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
  at java.lang.Class.forName0(Native Method)
  at java.lang.Class.forName(Class.java:348)
  at org.apache.spark.util.Utils$.classForName(Utils.scala:174)
  at org.apache.spark.scheduler.cluster.SchedulerExtensionServices$$anonfun$start$5$$anonfun$apply$4.apply(SchedulerExtensionService.scala:111)
  at org.apache.spark.scheduler.cluster.SchedulerExtensionServices$$anonfun$start$5$$anonfun$apply$4.apply(SchedulerExtensionService.scala:110)
  at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245)
  at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245)
  at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
  at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
  at scala.collection.TraversableLike$class.map(TraversableLike.scala:245)
  at scala.collection.mutable.ArrayOps$ofRef.map(ArrayOps.scala:186)
  at org.apache.spark.scheduler.cluster.SchedulerExtensionServices$$anonfun$start$5.apply(SchedulerExtensionService.scala:110)
  at org.apache.spark.scheduler.cluster.SchedulerExtensionServices$$anonfun$start$5.apply(SchedulerExtensionService.scala:108)
  at scala.Option.map(Option.scala:146)
  at org.apache.spark.scheduler.cluster.SchedulerExtensionServices.start(SchedulerExtensionService.scala:108)
  at org.apache.spark.scheduler.cluster.YarnSchedulerBackend.start(YarnSchedulerBackend.scala:80)
  at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:61)
  at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:143)
  at org.apache.spark.SparkContext.<init>(SparkContext.scala:501)
  at org.apache.spark.repl.Main$.createSparkContext(Main.scala:98)
  ... 48 elided
java.lang.NullPointerException
  at org.apache.spark.sql.SQLContext$.createListenerAndUI(SQLContext.scala:1033)
  at org.apache.spark.sql.hive.HiveContext.<init>(HiveContext.scala:88)
  at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
  at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
  at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
  at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
  at org.apache.spark.repl.Main$.createSQLContext(Main.scala:108)
  ... 48 elided
<console>:13: error: not found: value sqlContext
       import sqlContext.implicits._
              ^
<console>:13: error: not found: value sqlContext
       import sqlContext.sql
              ^
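
The stack trace shows SchedulerExtensionServices.start() instantiating every 
class name listed under the spark.yarn.services configuration key, and 
org.apache.spark.deploy.yarn.history.YarnHistoryService looks like a vendor 
(HDP) extension service that is not shipped in upstream Apache Spark builds. 
Assuming a cluster-wide spark-defaults.conf is what sets that key here, one 
possible workaround is to clear it at launch, e.g. "spark-shell --master 
yarn-client --conf spark.yarn.services=", or equivalently in a standalone app:

    import org.apache.spark.{SparkConf, SparkContext}

    // Sketch only, assuming spark.yarn.services is the offending setting on
    // this cluster: override the cluster-wide default with an empty value so
    // SchedulerExtensionServices has no extension classes to instantiate.
    val conf = new SparkConf()
      .setAppName("yarn-client-smoke-test")
      .setMaster("yarn-client")
      .set("spark.yarn.services", "")
    val sc = new SparkContext(conf)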
That said, it's probably better to wait for the next release.

> Word2Vec generate infinite distances when numIterations>5
> ---------------------------------------------------------
>
>                 Key: SPARK-13289
>                 URL: https://issues.apache.org/jira/browse/SPARK-13289
>             Project: Spark
>          Issue Type: Bug
>          Components: MLlib
>    Affects Versions: 1.6.0
>         Environment: Linux, Scala
>            Reporter: Qi Dai
>              Labels: features
>
> I recently ran some word2vec experiments on a cluster with 50 executors on a 
> large text dataset, but found that when the number of iterations is larger 
> than 5, the distances between words are all infinite. My code looks like 
> this:
> val text = sc.textFile("/project/NLP/1_biliion_words/train").map(_.split(" ").toSeq)
> import org.apache.spark.mllib.feature.{Word2Vec, Word2VecModel}
> val word2vec = new Word2Vec().setMinCount(25).setVectorSize(96).setNumPartitions(99).setNumIterations(10).setWindowSize(5)
> val model = word2vec.fit(text)
> val synonyms = model.findSynonyms("who", 40)
> for((synonym, cosineSimilarity) <- synonyms) {
>   println(s"$synonym $cosineSimilarity")
> }
> The results are: 
> to Infinity
> and Infinity
> that Infinity
> with Infinity
> said Infinity
> it Infinity
> by Infinity
> be Infinity
> have Infinity
> he Infinity
> has Infinity
> his Infinity
> an Infinity
> ) Infinity
> not Infinity
> who Infinity
> I Infinity
> had Infinity
> their Infinity
> were Infinity
> they Infinity
> but Infinity
> been Infinity
> I tried many different datasets and many different query words for finding synonyms.
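
For isolating the Infinity results, a hedged experiment, assuming the 
divergence comes from the optimizer rather than from bad input: MLlib's 
Word2Vec shares one set of vectors across numPartitions concurrent updaters 
and starts from a default learning rate of 0.025, so a smaller initial rate 
and fewer partitions may keep the vectors finite at numIterations=10. The 
learning-rate and partition values below are illustrative guesses, not 
recommended settings:

    import org.apache.spark.mllib.feature.Word2Vec

    // Same pipeline as the report, but with a gentler step size and fewer
    // concurrent updates to the shared vectors.
    val text = sc.textFile("/project/NLP/1_biliion_words/train").map(_.split(" ").toSeq)
    val word2vec = new Word2Vec()
      .setMinCount(25)
      .setVectorSize(96)
      .setNumPartitions(10)      // was 99
      .setNumIterations(10)
      .setWindowSize(5)
      .setLearningRate(0.01)     // below the 0.025 default
    val model = word2vec.fit(text)
    for ((synonym, cosineSimilarity) <- model.findSynonyms("who", 40))
      println(s"$synonym $cosineSimilarity")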


