ends on the vocabSize. Even without overflow, there
> are still other bottlenecks, for example syn0Global and syn1Global, each
> of which has vocabSize * vectorSize elements.
>
> Thanks.
>
> Zhan Zhang
>
>
>
> On Jan 5, 2015, at 7:47 PM, Eric Zhen wrote:
>
> Hi X
ary size? -Xiangrui
>
> On Sun, Jan 4, 2015 at 11:18 PM, Eric Zhen wrote:
Hi,
When we run mllib word2vec (spark-1.1.0), the driver gets stuck with 100% CPU
usage. Here is the jstack output:
"main" prio=10 tid=0x40112800 nid=0x46f2 runnable
[0x4162e000]
java.lang.Thread.State: RUNNABLE
at
java.io.ObjectOutputStream$BlockDataOutputStream.drain(Object
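The jstack above stops inside ObjectOutputStream, which fits Zhan's point: the driver is busy serializing syn0Global/syn1Global, and both the memory footprint and the Int-indexed size vocabSize * vectorSize grow with the vocabulary. A minimal sketch of the arithmetic (the sizes below are hypothetical, not taken from this report):

```scala
// Sketch of how vocabSize * vectorSize overflows a 32-bit Int and how large
// the two Float arrays get. The sizes are hypothetical examples.
object Word2VecSizeSketch {
  val vocabSize  = 10000000 // 10M distinct words (hypothetical)
  val vectorSize = 300      // a typical embedding width

  val intElems  = vocabSize * vectorSize        // Int multiply: wraps negative
  val longElems = vocabSize.toLong * vectorSize // correct element count: 3e9

  // syn0Global and syn1Global are each a Float array of this size,
  // so each one needs roughly longElems * 4 bytes (~11 GiB here).
  val bytesEach = longElems * 4L
}
```

Even when the product stays below Int.MaxValue, two arrays of this size must fit in driver memory and be serialized, which matches the 100% CPU in the serialization path.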
n't have the resources
> to investigate backporting a fix. However, if you can reproduce the
> problem in Spark 1.2 then please file a JIRA.
>
> On Mon, Nov 17, 2014 at 9:37 PM, Eric Zhen wrote:
>
>> Yes, it always appears on a part of the tasks in a stage (i.e. 1
17, 2014 at 7:04 PM, Eric Zhen wrote:
>
>> Hi Michael,
>>
>> We use Spark v1.1.1-rc1 with jdk 1.7.0_51 and scala 2.10.4.
>>
>> On Tue, Nov 18, 2014 at 7:09 AM, Michael Armbrust > > wrote:
>>
>>> What version of Spark SQL?
>>>
> On Sat, Nov 15, 2014 at 10:25 PM, Eric Zhen wrote:
Hi all,
We run SparkSQL on the TPC-DS benchmark Q19 with spark.sql.codegen=true, and we
got the exceptions below; has anyone else seen these before?
java.lang.ExceptionInInitializerError
at
org.apache.spark.sql.execution.SparkPlan.newProjection(SparkPlan.scala:92)
at
org.apache.spark.sql.ex
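Since the error only shows up with spark.sql.codegen=true, one quick way to isolate it is to flip the flag on the same SQLContext and re-run the query. This is a diagnostic sketch, not a confirmed fix; `queryText` is a placeholder for the Q19 SQL string:

```scala
// Sketch for spark-shell on Spark 1.1: toggle the experimental codegen flag
// to check whether the ExceptionInInitializerError is codegen-specific.
// `sqlContext` is the SQLContext bound in spark-shell; `queryText` is a
// placeholder for the TPC-DS Q19 query string.
sqlContext.setConf("spark.sql.codegen", "false")
sqlContext.sql(queryText).collect() // re-run Q19 without generated code
```

If the query succeeds with the flag off, the initializer failure is coming from the code-generation path rather than the query itself.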