This is a judgement that you will have to make.
Regards,
Skanda
On Sun, Mar 13, 2016 at 11:23 PM, trung kien <kient...@gmail.com> wrote:
> Thanks all for actively sharing your experience.
>
> @Chris: using something like Redis is something I am trying to figure out.
> I have lots of
)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:189)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:110)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Regards,
Skanda
On Wed, Jan 28, 2015 at 4:59 PM, Pala M Muthaia mchett...@rocketfuelinc.com wrote:
Hi
I was using the wrong version of the spark-hive jar. I downloaded the
right version of the jar from the cloudera repo and it works now.
Thanks,
Skanda
On Fri, May 22, 2015 at 2:36 PM, Skanda skanda.ganapa...@gmail.com wrote:
Hi All,
I'm facing the same problem with Spark 1.3.0 from
Hi,
My spark-env.sh has the following entries with respect to classpath:
export SPARK_CLASSPATH=$SPARK_CLASSPATH:/usr/lib/hive/lib/*:/etc/hive/conf/
-Skanda
On Sun, Feb 1, 2015 at 11:45 AM, guxiaobo1982 guxiaobo1...@qq.com wrote:
Hi Skanda,
How do set up your SPARK_CLASSPATH?
I add
This happened to me as well, putting hive-site.xml inside conf doesn't seem to
work. Instead I added /etc/hive/conf to SPARK_CLASSPATH and it worked. You can
try this approach.
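A minimal sketch of that approach, assuming a CDH-style layout where /etc/hive/conf holds hive-site.xml (adjust the path for your distribution); these lines would go in conf/spark-env.sh:

```shell
# Append the Hive config dir to Spark's classpath so hive-site.xml is
# picked up at startup (instead of copying it into Spark's conf dir).
export SPARK_CLASSPATH="$SPARK_CLASSPATH:/etc/hive/conf"
echo "$SPARK_CLASSPATH"
```

Note that on newer Spark versions SPARK_CLASSPATH is deprecated in favour of spark.driver.extraClassPath / spark.executor.extraClassPath, but it matches the setup shown above.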
-Skanda
-Original Message-
From: guxiaobo1982 guxiaobo1...@qq.com
Sent: 25-01-2015 13:50
To: user
);
}
})
.saveAsNewAPIHadoopFile("/mllib/data/clusteroutput_seq",
Text.class, IntWritable.class, SequenceFileOutputFormat.class);
Regards,
Skanda
not be directly related to the serialization problem, I suspect it is. Your
function is not serializable, since it holds references to those cached
writables. I think removing them fixes both problems.
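The mechanism behind that advice can be sketched without Spark at all. Spark ships your function object to executors via Java serialization, and Hadoop Writables (Text, IntWritable) do not implement java.io.Serializable, so a function that caches one as a field fails to serialize. This is an illustrative sketch with a hypothetical FakeWritable standing in for a Hadoop Writable:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.NotSerializableException;
import java.io.ObjectOutputStream;

public class ClosureSerialization {

    // Stand-in for a Hadoop Writable, which is NOT java.io.Serializable.
    static class FakeWritable {}

    // A "function" that caches a writable as a field: serialization fails,
    // because ObjectOutputStream must also serialize the cached field.
    static class BadFn implements java.io.Serializable {
        FakeWritable cached = new FakeWritable();
    }

    // The fix: keep no non-serializable state; construct writables
    // inside the call method instead of caching them as fields.
    static class GoodFn implements java.io.Serializable {
        String apply(String s) { return s; }
    }

    // Returns true iff the object survives Java serialization.
    static boolean serializes(Object o) {
        try (ObjectOutputStream out =
                 new ObjectOutputStream(new ByteArrayOutputStream())) {
            out.writeObject(o);
            return true;
        } catch (NotSerializableException e) {
            return false;
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(serializes(new BadFn()));   // false
        System.out.println(serializes(new GoodFn()));  // true
    }
}
```

The same check is what Spark's ClosureCleaner effectively runs into when it serializes your function for shipping to executors.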
On Jan 22, 2015 9:42 AM, Skanda skanda.ganapa...@gmail.com wrote:
Hi All,
I'm using