Does Spark 1.3.1 support Hive 1.0? If not, which version of Spark will
start supporting Hive 1.0?
--
Kannan
Ignore the question. There was a Hadoop setting that needed to be set to
get it working.
--
Kannan
On Wed, Apr 1, 2015 at 1:37 PM, Kannan Rajah kra...@maprtech.com wrote:
Running a simple word count job in standalone mode as a non root user from
spark-shell. The spark master, worker services are running as root user.
The problem is that the _temporary directory under /user/krajah/output2/_temporary/0
is being created with root permissions even when the job is run as a non-root user.
SparkConf.scala logs a warning saying SPARK_CLASSPATH is deprecated and we
should use spark.executor.extraClassPath instead. But the online
documentation states that spark.executor.extraClassPath is only meant for
backward compatibility.
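The usual way to silence that warning is to move the classpath entries out of the SPARK_CLASSPATH environment variable and into spark-defaults.conf. A minimal sketch (the jar paths below are illustrative, not from this thread):

```properties
# spark-defaults.conf -- replaces the deprecated SPARK_CLASSPATH env var.
# Paths are examples; point them at wherever the jars live on your nodes.
spark.executor.extraClassPath  /opt/hbase/lib/*
spark.driver.extraClassPath    /opt/hbase/lib/*
```

Setting both the executor and driver properties covers the two places SPARK_CLASSPATH used to apply.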
Vanzin van...@cloudera.com wrote:
On Thu, Feb 26, 2015 at 5:12 PM, Kannan Rajah kra...@maprtech.com wrote:
Also, I would like to know if there is a localization overhead when we use
spark.executor.extraClassPath. Again, in the case of HBase, these jars would
typically be available on all nodes.
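On the localization question: spark.executor.extraClassPath only prepends local paths to each executor's classpath, so nothing is copied at job start; jars listed in spark.jars (or passed via --jars) are shipped to every executor instead. A hedged sketch of the two configurations, with illustrative HBase paths:

```properties
# Option 1: jars already installed on every node -- no copy overhead,
# each executor just reads the local paths.
spark.executor.extraClassPath  /opt/hbase/lib/*

# Option 2: ship the jars with the application -- copied to each executor
# at startup, which is the localization overhead asked about.
spark.jars  /opt/hbase/lib/hbase-client.jar,/opt/hbase/lib/hbase-common.jar
```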
at 2:43 PM, Kannan Rajah kra...@maprtech.com wrote:
SparkConf.scala logs a warning saying SPARK_CLASSPATH is deprecated and we
should use spark.executor.extraClassPath instead. But the online
documentation states that spark.executor.extraClassPath is only meant for
backward compatibility.
The default value of mapred.map.tasks is 2
(https://hadoop.apache.org/docs/r1.0.4/mapred-default.html). You may see
that the Spark SQL result can be divided into two sorted parts from the
middle.
Cheng
On 2/19/15 10:33 AM, Kannan Rajah wrote:
According to the Hive documentation, SORT BY is supposed to order rows only
within each reducer, not produce a globally sorted output.
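Cheng's point about the two sorted parts can be illustrated without Spark: SORT BY sorts each partition independently, so with mapred.map.tasks = 2 the output is two locally sorted runs, not one globally ordered list. A minimal Python sketch of that behavior (plain lists stand in for partitions; this is not Spark code):

```python
# Simulate SORT BY semantics: rows are partitioned, each partition is
# sorted independently, and no global ordering is guaranteed.
def sort_by(rows, num_partitions=2):
    # Hash-partition the rows, mimicking a shuffle into reducers.
    partitions = [[] for _ in range(num_partitions)]
    for row in rows:
        partitions[hash(row) % num_partitions].append(row)
    # Sort within each partition only -- this is all SORT BY promises.
    return [sorted(p) for p in partitions]

def order_by(rows):
    # ORDER BY: a single total sort across all rows.
    return sorted(rows)

rows = [5, 1, 4, 2, 3, 6]
parts = sort_by(rows)
# Each part is sorted on its own...
assert all(p == sorted(p) for p in parts)
# ...but their concatenation is generally not globally sorted,
# matching the "two sorted parts from the middle" observation.
print(parts)
print(order_by(rows))  # [1, 2, 3, 4, 5, 6]
```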