depends on the vocabSize. Even without overflow, there
are still other bottlenecks; for example, syn0Global and syn1Global
each have vocabSize * vectorSize elements.
Thanks.
Zhan Zhang
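
To make the overflow and memory concerns above concrete, here is a minimal Scala sketch (the vocabulary and vector sizes are hypothetical, not from this thread): each of syn0Global and syn1Global holds vocabSize * vectorSize Float elements, and that product overflows 32-bit Int arithmetic well before the arrays themselves become unmanageable.

```scala
// Sketch only: sizes below are hypothetical, not from this thread.
object Word2VecSizeCheck {
  def main(args: Array[String]): Unit = {
    val vocabSize  = 10000000  // hypothetical 10M-word vocabulary
    val vectorSize = 300

    // Int arithmetic silently wraps past 2^31 - 1 elements
    val intElems  = vocabSize * vectorSize
    // Widening to Long gives the true element count
    val longElems = vocabSize.toLong * vectorSize

    println(s"Int product:  $intElems (overflowed: ${intElems < 0})")
    println(s"Long product: $longElems")
    // syn0Global and syn1Global are each a Float array of this length,
    // so 4 bytes per element:
    println(s"~${longElems * 4 / (1L << 30)} GiB per array")
  }
}
```

At these hypothetical sizes the Int product goes negative, and each of the two arrays alone would need roughly 11 GiB on the driver.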
On Jan 5, 2015, at 7:47 PM, Eric Zhen zhpeng...@gmail.com wrote:
Hi Xiangrui,
Our dataset is about
What is the vocabulary size? -Xiangrui
On Sun, Jan 4, 2015 at 11:18 PM, Eric Zhen zhpeng...@gmail.com wrote:
Hi,
When we run mllib word2vec (spark-1.1.0), the driver gets stuck with 100% CPU
usage. Here is the jstack output:
"main" prio=10 tid=0x40112800 nid=0x46f2 runnable
[0x4162e000]
java.lang.Thread.State: RUNNABLE
at
won't have the resources
to investigate backporting a fix. However, if you can reproduce the
problem in Spark 1.2 then please file a JIRA.
On Mon, Nov 17, 2014 at 9:37 PM, Eric Zhen zhpeng...@gmail.com wrote:
Yes, it always appears on a subset of the tasks in a stage (i.e. 100/100
(65
at 7:04 PM, Eric Zhen zhpeng...@gmail.com wrote:
Hi Michael,
We use Spark v1.1.1-rc1 with jdk 1.7.0_51 and scala 2.10.4.
On Tue, Nov 18, 2014 at 7:09 AM, Michael Armbrust mich...@databricks.com
wrote:
What version of Spark SQL?
On Sat, Nov 15, 2014 at 10:25 PM, Eric Zhen zhpeng...@gmail.com wrote:
Hi all,
We run SparkSQL on TPCDS benchmark Q19 with spark.sql.codegen=true, and we got
the exceptions below; has anyone else seen these before?
java.lang.ExceptionInInitializerError
at
org.apache.spark.sql.execution.SparkPlan.newProjection(SparkPlan.scala:92)
at
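
For context on the spark.sql.codegen flag mentioned above: in Spark 1.1 it is an experimental setting, off by default, and can be toggled per SQLContext. A minimal sketch, assuming an existing SparkContext named sc (this is a configuration fragment, not a complete program, and is not presented as a fix for the exception):

```scala
import org.apache.spark.sql.SQLContext

// Sketch only: assumes a live SparkContext named `sc`.
val sqlContext = new SQLContext(sc)
// Enable runtime bytecode generation for expression evaluation
// (the setting under which the exception above was observed):
sqlContext.setConf("spark.sql.codegen", "true")
// Reverting to the default avoids the codegen path entirely:
// sqlContext.setConf("spark.sql.codegen", "false")
```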