Re: Driver hangs on running mllib word2vec

2015-01-05 Thread Eric Zhen
…ends on the vocabSize. Even without overflow, there are still other bottlenecks; for example, syn0Global and syn1Global each have vocabSize * vectorSize elements. Thanks. Zhan Zhang — On Jan 5, 2015, at 7:47 PM, Eric Zhen wrote: Hi X…
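A minimal sketch of the arithmetic behind this bottleneck (the vocabSize and vectorSize values below are illustrative assumptions, not figures from the thread): each of syn0Global and syn1Global is a flat Array[Float] of vocabSize * vectorSize elements, so the element count can overflow a 32-bit array index, and even without overflow the two arrays alone can consume a large share of driver memory.

```scala
object Word2VecSizing {
  def main(args: Array[String]): Unit = {
    val vocabSize  = 1000000   // hypothetical vocabulary size
    val vectorSize = 100       // hypothetical embedding dimension

    // Each of syn0Global and syn1Global holds vocabSize * vectorSize floats.
    val elements = vocabSize.toLong * vectorSize

    // A JVM array is indexed by a signed Int, so the product must stay
    // below Int.MaxValue (~2.1e9) or allocation/indexing breaks.
    val overflowsInt = elements > Int.MaxValue

    // 4 bytes per Float, and two arrays of this size live on the driver.
    val bytesPerArray = elements * 4L
    val totalGiB = 2.0 * bytesPerArray / (1024.0 * 1024 * 1024)

    println(s"elements per array: $elements")
    println(s"overflows Int index: $overflowsInt")
    println(f"driver memory for both arrays: $totalGiB%.2f GiB")
  }
}
```

With these assumed numbers the arrays fit in an Int index, but a vocabulary in the tens of millions would not; trimming the vocabulary (or reducing vectorSize) shrinks both the index range and the driver-side memory footprint.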

Re: Driver hangs on running mllib word2vec

2015-01-05 Thread Eric Zhen
…ary size? -Xiangrui — On Sun, Jan 4, 2015 at 11:18 PM, Eric Zhen wrote: Hi, When we run mllib word2vec (spark-1.1.0), the driver gets stuck at 100% CPU usage. Here is the jstack output: "main" prio=10 tid=0x…

Driver hangs on running mllib word2vec

2015-01-04 Thread Eric Zhen
Hi, When we run mllib word2vec (spark-1.1.0), the driver gets stuck at 100% CPU usage. Here is the jstack output: "main" prio=10 tid=0x40112800 nid=0x46f2 runnable [0x4162e000] java.lang.Thread.State: RUNNABLE at java.io.ObjectOutputStream$BlockDataOutputStream.drain(Object…
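The hot frame in this jstack is Java serialization draining its output buffer, which is consistent with the driver spending its CPU serializing very large objects (such as the word2vec weight arrays). A self-contained sketch of that cost, using a hypothetical 10-million-element array as a stand-in for the real driver-side state:

```scala
import java.io.{ByteArrayOutputStream, ObjectOutputStream}

object SerializationCost {
  def main(args: Array[String]): Unit = {
    // Hypothetical stand-in for one large word2vec weight array (~40 MB).
    val big = new Array[Float](10000000)

    val bytes = new ByteArrayOutputStream()
    val out   = new ObjectOutputStream(bytes)

    val start = System.nanoTime()
    // This writeObject call is where BlockDataOutputStream.drain
    // appears on the stack while the buffer is flushed.
    out.writeObject(big)
    out.close()
    val millis = (System.nanoTime() - start) / 1e6

    println(s"serialized size: ${bytes.size()} bytes")
    println(f"serialization time: $millis%.1f ms")
  }
}
```

A single pass over 40 MB is quick, but serializing multi-gigabyte arrays repeatedly (e.g. once per partition or per iteration) can keep a driver core pegged for a long time while appearing "hung".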

Re: SparkSQL exception on spark.sql.codegen

2014-11-18 Thread Eric Zhen
…n't have the resources to investigate backporting a fix. However, if you can reproduce the problem in Spark 1.2 then please file a JIRA. — On Mon, Nov 17, 2014 at 9:37 PM, Eric Zhen wrote: Yes, it always appears on a part of the whole tasks in a stage (i.e. 1…

Re: SparkSQL exception on spark.sql.codegen

2014-11-17 Thread Eric Zhen
…17, 2014 at 7:04 PM, Eric Zhen wrote: Hi Michael, We use Spark v1.1.1-rc1 with jdk 1.7.0_51 and scala 2.10.4. — On Tue, Nov 18, 2014 at 7:09 AM, Michael Armbrust wrote: What version of Spark SQL?…

Re: SparkSQL exception on spark.sql.codegen

2014-11-17 Thread Eric Zhen
Hi Michael, We use Spark v1.1.1-rc1 with jdk 1.7.0_51 and scala 2.10.4. — On Tue, Nov 18, 2014 at 7:09 AM, Michael Armbrust wrote: What version of Spark SQL? — On Sat, Nov 15, 2014 at 10:25 PM, Eric Zhen wrote: Hi all, We run SparkS…

SparkSQL exception on spark.sql.codegen

2014-11-15 Thread Eric Zhen
Hi all, We run SparkSQL on the TPCDS benchmark Q19 with spark.sql.codegen=true, and we got the exceptions below; has anyone else seen these before? java.lang.ExceptionInInitializerError at org.apache.spark.sql.execution.SparkPlan.newProjection(SparkPlan.scala:92) at org.apache.spark.sql.ex…
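Since the stack trace implicates the code-generated projection path, a common first step while isolating such a failure is to rerun the query with code generation turned off. A minimal config fragment, not a fix (it assumes a spark-shell-style session where `sqlContext` is an existing `org.apache.spark.sql.SQLContext`; the property name is the one from the subject line):

```scala
// Config fragment: disable SparkSQL runtime code generation so the same
// query runs through the interpreted projection path instead.
// Assumes an existing SQLContext named sqlContext (e.g. in spark-shell).
sqlContext.setConf("spark.sql.codegen", "false")
```

If the query succeeds with codegen off, that narrows the failure to the generated-code path, which is useful detail for the JIRA the thread asks for.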