Caused by: java.lang.NoClassDefFoundError: Could not initialize class
org.apache.derby.jdbc.EmbeddedDriver
It should usually be included in the assembly jar, so I'm not sure what's
wrong. But can you try adding the derby jar to the driver classpath and try again?
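A minimal sketch of doing that with spark-submit (the jar path, main class,
and application jar are placeholders):

    # --driver-class-path prepends the given jar to the driver JVM's classpath,
    # so the Derby driver class can be loaded at startup.
    spark-submit \
      --master yarn-client \
      --driver-class-path /path/to/derby.jar \
      --class com.example.MyApp \
      my-assembly.jar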
Rusty,
I am very thankful for your help. Actually, I am facing difficulty with
objects. My plan is this: I have a list containing User objects. After
parallelizing it through the Spark context, I apply a comparator on
user.getUserName(). As the usernames are sorted, their related User object
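The message is cut off above, but here is a minimal sketch of sorting an RDD
of such objects by username; the User class is a stand-in for the poster's
own, and sortBy takes the place of a hand-rolled comparator:

    import java.util.Arrays;
    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaRDD;
    import org.apache.spark.api.java.JavaSparkContext;
    import org.apache.spark.api.java.function.Function;

    public class SortUsers {
        // Stand-in for the poster's User class; it must be serializable so
        // instances can be shipped to executors.
        public static class User implements java.io.Serializable {
            private final String userName;
            public User(String userName) { this.userName = userName; }
            public String getUserName() { return userName; }
        }

        public static void main(String[] args) {
            JavaSparkContext sc =
                    new JavaSparkContext(new SparkConf().setAppName("sort-users"));
            JavaRDD<User> rdd = sc.parallelize(Arrays.asList(
                    new User("carol"), new User("alice"), new User("bob")));

            // sortBy extracts a key per element (here the username) and
            // returns a new RDD sorted by that key.
            JavaRDD<User> sorted = rdd.sortBy(new Function<User, String>() {
                public String call(User u) { return u.getUserName(); }
            }, true, rdd.partitions().size());

            sorted.foreach(u -> System.out.println(u.getUserName()));
            sc.stop();
        }
    }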
We tried --master yarn-client, with no difference in the result.
Sorry, ignore my last reply.
Hi,
I am using the new experimental Direct Stream API. Everything is working
fine, but when it comes to fault tolerance, I am not sure how to achieve it.
Presently my Kafka config map looks like this:

configMap.put("zookeeper.connect", "192.168.51.98:2181");
configMap.put("group.id",
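Not from the original thread, but one common approach with the 1.x direct
stream is checkpoint-based recovery. A rough sketch, where the checkpoint
directory, broker address, and topic name are all placeholders (note the
direct stream takes metadata.broker.list rather than zookeeper.connect):

    import java.util.Arrays;
    import java.util.HashMap;
    import java.util.HashSet;
    import java.util.Map;
    import kafka.serializer.StringDecoder;
    import org.apache.spark.SparkConf;
    import org.apache.spark.streaming.Durations;
    import org.apache.spark.streaming.api.java.JavaPairInputDStream;
    import org.apache.spark.streaming.api.java.JavaStreamingContext;
    import org.apache.spark.streaming.api.java.JavaStreamingContextFactory;
    import org.apache.spark.streaming.kafka.KafkaUtils;

    public class DirectStreamRecovery {
        // Placeholder checkpoint location; must survive driver restarts.
        private static final String CHECKPOINT_DIR = "hdfs:///tmp/direct-stream-ckpt";

        public static void main(String[] args) throws Exception {
            JavaStreamingContextFactory factory = new JavaStreamingContextFactory() {
                public JavaStreamingContext create() {
                    SparkConf conf = new SparkConf().setAppName("direct-stream");
                    JavaStreamingContext jssc =
                            new JavaStreamingContext(conf, Durations.seconds(5));
                    jssc.checkpoint(CHECKPOINT_DIR);

                    // The direct stream talks to the brokers, not ZooKeeper.
                    Map<String, String> kafkaParams = new HashMap<String, String>();
                    kafkaParams.put("metadata.broker.list", "192.168.51.98:9092");

                    JavaPairInputDStream<String, String> stream =
                            KafkaUtils.createDirectStream(
                                    jssc, String.class, String.class,
                                    StringDecoder.class, StringDecoder.class,
                                    kafkaParams,
                                    new HashSet<String>(Arrays.asList("mytopic")));
                    stream.print(); // placeholder action
                    return jssc;
                }
            };

            // On a clean start this calls create(); after a failure it rebuilds
            // the context, including consumed offsets, from the checkpoint data.
            JavaStreamingContext jssc =
                    JavaStreamingContext.getOrCreate(CHECKPOINT_DIR, factory);
            jssc.start();
            jssc.awaitTermination();
        }
    }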
I've recompiled Spark without the -XX:OnOutOfMemoryError=kill declaration,
but I am still getting a SIGTERM!
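Not a confirmed diagnosis, but on YARN the SIGTERM often comes from the
NodeManager killing a container that exceeded its memory limit, independently
of the JVM's OnOutOfMemoryError handler. One thing worth trying is raising
the off-heap overhead (value in MB; 1024 is just an example, and the app jar
and class are placeholders):

    spark-submit \
      --master yarn-client \
      --conf spark.yarn.executor.memoryOverhead=1024 \
      --class com.example.MyApp \
      my-assembly.jar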
Is this normal?
15/07/07 15:27:04 INFO Remoting: Remoting started; listening on addresses
:[akka.tcp://sparkExecutor@cruncher02.stratified:50063]
15/07/07 15:27:04 INFO util.Utils: Successfully started service
'sparkExecutor' on port 50063.
15/07/07 15:27:04 INFO
Hi,
the problem is not solved yet. Compiling Spark myself is not an option; I
don't have the permissions or the skills for that. Could someone please
explain what exactly is causing the problem? If Spark is distributed as
pre-compiled versions, why not add the corresponding JDBC driver jars?
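For what it's worth, the usual workaround without recompiling is to point
Spark at an external driver jar via spark-defaults.conf (the path is a
placeholder):

    # spark-defaults.conf: put the JDBC driver on both driver and executor classpaths
    spark.driver.extraClassPath    /path/to/derby.jar
    spark.executor.extraClassPath  /path/to/derby.jar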