Hi,
if the Spark Thrift JDBC server is started in non-secure mode, it works
fine. In secured mode with pluggable authentication, I placed the
authentication class configuration in conf/hive-site.xml:
<property>
  <name>hive.server2.authentication</name>
  <value>CUSTOM</value>
</property>
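With hive.server2.authentication=CUSTOM, HiveServer2 expects a provider class registered via hive.server2.custom.authentication.class. A minimal sketch of such a provider follows; in a real deployment it would implement org.apache.hive.service.auth.PasswdAuthenticationProvider, so the interface below is only a stand-in to keep the sketch self-contained, and the user map is purely illustrative.

```java
import javax.security.sasl.AuthenticationException;
import java.util.HashMap;
import java.util.Map;

// Stand-in for org.apache.hive.service.auth.PasswdAuthenticationProvider,
// declared here only so the sketch compiles on its own.
interface PasswdAuthenticationProvider {
    void Authenticate(String user, String password) throws AuthenticationException;
}

class SampleAuthenticator implements PasswdAuthenticationProvider {
    // Illustrative credential store; a real provider would consult
    // LDAP, PAM, a database, etc.
    private final Map<String, String> users = new HashMap<String, String>();

    SampleAuthenticator() {
        users.put("alice", "secret");
    }

    @Override
    public void Authenticate(String user, String password) throws AuthenticationException {
        if (!password.equals(users.get(user))) {
            throw new AuthenticationException("Invalid credentials for " + user);
        }
    }
}
```

The fully qualified name of the real class would then go into the hive.server2.custom.authentication.class property next to the CUSTOM setting above.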
On Tue, Aug 12, 2014 at 12:53 PM, Yin Huai huaiyin@gmail.com wrote:
Hi Jenny,
Have you copied hive-site.xml to the spark/conf directory? If not, can you
put it in conf/ and try again?
Thanks,
Yin
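For reference, the copy step suggested above is just (paths depend on your installation):

```shell
cp $HIVE_HOME/conf/hive-site.xml $SPARK_HOME/conf/
```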
On Mon, Aug 11, 2014 at 8:57 PM, Jenny Zhao linlin200...@gmail.com
wrote:
Thanks, Yin.
Hi Jenny,
How's your metastore configured for both Hive and Spark SQL? Which
metastore mode are you using (based on
https://cwiki.apache.org/confluence/display/Hive/AdminManual+MetastoreAdmin
)?
Thanks,
Yin
On Mon, Aug 11, 2014 at 6:15 PM, Jenny Zhao linlin200...@gmail.com
wrote:
Hi,
I am able to run my HQL query in yarn-cluster mode when connecting to the
default Hive metastore defined in hive-site.xml.
However, if I want to switch to a different database, like:
hql("use other-database")
it only works in yarn-client mode, but fails in yarn-cluster mode with the
Hi,
For running Spark SQL, the datanucleus*.jar files are automatically added to the
classpath. This works fine for Spark standalone mode and yarn-client mode;
however, for yarn-cluster mode, I have to explicitly pass these jars using the
--jars option when submitting the job, otherwise the job fails. Why is that?
My guess would be that the DB2 JDBC drivers are not being
correctly included. How are you trying to add them to the classpath?
Michael
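For what it's worth, the yarn-cluster workaround described above looks roughly like this (jar versions, paths, and the application class name are illustrative, not from the thread):

```shell
spark-submit \
  --master yarn-cluster \
  --class com.example.MyApp \
  --jars lib/datanucleus-api-jdo-3.2.1.jar,lib/datanucleus-core-3.2.2.jar,lib/datanucleus-rdbms-3.2.1.jar \
  myapp.jar
```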
On Tue, Jun 17, 2014 at 1:29 AM, Jenny Zhao linlin200...@gmail.com
wrote:
Hi,
my Hive configuration uses DB2 as its metastore database, I have built
I finally got it to work: I mimicked how Spark adds the datanucleus jars in
compute-classpath.sh and added the db2jcc*.jar files to the classpath; it works
now.
Thanks!
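A minimal sketch of that compute-classpath.sh change, assuming the DB2 driver jars live under /opt/ibm/db2/java (the actual locations will differ per installation):

```shell
# Append the DB2 JDBC driver jars to CLASSPATH, mirroring how the script
# already appends the datanucleus jars. Jar paths are illustrative.
for jar in /opt/ibm/db2/java/db2jcc.jar /opt/ibm/db2/java/db2jcc_license_cu.jar; do
  CLASSPATH="${CLASSPATH:+$CLASSPATH:}$jar"
done
echo "$CLASSPATH"
```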
On Tue, Jun 17, 2014 at 10:50 AM, Jenny Zhao linlin200...@gmail.com wrote:
Thanks Michael!
Since I run it using spark-shell, I added
Hi,
my Hive configuration uses DB2 as its metastore database. I have built
Spark with the extra step sbt/sbt assembly/assembly to include the
dependency jars, and copied HIVE_HOME/conf/hive-site.xml under spark/conf.
When I ran:
hql("CREATE TABLE IF NOT EXISTS src (key INT, value STRING)")
got
We experienced a similar issue in our environment; below is the whole stack
trace. It works fine in local mode, but if we run it in cluster mode
(even with the Master and one worker on the same node), we hit this
serialVersionUID issue. We use Spark 1.0.0 compiled with JDK 6.
Here is a link about
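A serialVersionUID mismatch like this is commonly worked around by pinning the UID explicitly on the serializable classes involved, so the serialized form stays stable across compilers and JDK versions. A hedged sketch, not from the thread (class and field names are illustrative):

```java
import java.io.*;

class Record implements Serializable {
    // Pinned explicitly so different compilers/JDKs agree on the UID.
    private static final long serialVersionUID = 1L;
    final int key;
    final String value;

    Record(int key, String value) {
        this.key = key;
        this.value = value;
    }
}

class SerDemo {
    // Round-trips an object through Java serialization to show it survives.
    static Object roundTrip(Serializable obj) {
        try {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            ObjectOutputStream out = new ObjectOutputStream(bytes);
            out.writeObject(obj);
            out.close();
            ObjectInputStream in = new ObjectInputStream(
                    new ByteArrayInputStream(bytes.toByteArray()));
            return in.readObject();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```

Alternatively, deploying the exact same build (same JDK, same Spark assembly) on the driver and every worker avoids the mismatch in the first place.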
Hi,
I have installed spark 1.0 from the branch-1.0, build went fine, and I have
tried running the example on Yarn client mode, here is my command:
/home/hadoop/spark-branch-1.0/bin/spark-submit
/home/hadoop/spark-branch-1.0/examples/target/scala-2.10/spark-examples-1.0.0-hadoop2.2.0.jar
--master
Hi all,
I have been able to run LR in local mode, but I am facing problems running
it in cluster mode. Below are the source script and the stack trace when
running it in cluster mode. I used sbt package to build the project; not sure
what it is complaining about?
Another question I have is for
the fat
jar you have created for your code.
libraryDependencies += "org.apache.spark" % "spark-mllib_2.9.3" % "0.8.1-incubating"
Thanks,
Jagat Singh
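Note that sbt package alone does not bundle dependencies into the jar; a fat jar is usually produced with the sbt-assembly plugin, e.g. (plugin version illustrative for that era):

```scala
// project/plugins.sbt
addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.11.2")
```

and then building with sbt assembly instead of sbt package.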
On Thu, Apr 10, 2014 at 8:05 AM, Jenny Zhao linlin200...@gmail.com wrote:
Hi all,
I have been able to run LR in local mode, but I am facing