Ok, it just seems to be an issue with the syntax of the spark-submit command.
It should be:
spark-submit --queue default \
--class com.my.Launcher \
--deploy-mode cluster \
--master yarn-cluster \
--driver-java-options "-Dfile.encoding=UTF-8" \
--jars
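For example, a complete command might look like this (the jar paths and names
below are hypothetical placeholders; --jars takes a comma-separated list):

spark-submit --queue default \
  --class com.my.Launcher \
  --deploy-mode cluster \
  --master yarn-cluster \
  --driver-java-options "-Dfile.encoding=UTF-8" \
  --jars /path/to/aws-java-sdk-s3-1.11.155.jar,/path/to/other-dep.jar \
  /path/to/my-app-assembly.jar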
I've included that in my build file for the fat jar already.
libraryDependencies += "com.amazonaws" % "aws-java-sdk" % "1.11.155"
libraryDependencies += "com.amazonaws" % "aws-java-sdk-s3" % "1.11.155"
libraryDependencies += "com.amazonaws" % "aws-java-sdk-core" % "1.11.155"
Not sure if I need
Ensure com.amazonaws.services.s3.AmazonS3ClientBuilder is on your classpath,
which includes your application jar and the jars attached to executors.
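In case it helps, a minimal sketch of the usual pattern: build the client
inside mapPartitions so it is constructed on the executor instead of being
serialized from the driver (the bucket name and RDD here are made up):

import com.amazonaws.services.s3.AmazonS3ClientBuilder
import org.apache.spark.rdd.RDD

def fetchAll(keysRdd: RDD[String]): RDD[String] =
  keysRdd.mapPartitions { iter =>
    // created once per partition, on the executor; the client is not serializable
    val s3 = AmazonS3ClientBuilder.standard().withRegion("us-east-1").build()
    iter.map(key => s3.getObjectAsString("my-bucket", key))
  }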
2017-07-20 6:12 GMT+08:00 Noppanit Charassinvichai :
> I have this Spark job which uses an S3 client in mapPartitions, and I get
> this
Thanks Marcelo.
Problem solved.
Best,
Carlo
Hi Marcelo,
Thank you for your help.
Problem solved as you suggested.
Best Regards,
Carlo
> On 5 Aug 2016, at 18:34, Marcelo Vanzin wrote:
>
> On Fri, Aug 5, 2016 at 9:53 AM, Carlo.Allocca
> wrote:
I have also executed:
mvn dependency:tree | grep log
[INFO] | | +- com.esotericsoftware:minlog:jar:1.3.0:compile
[INFO] +- log4j:log4j:jar:1.2.17:compile
[INFO] +- org.slf4j:slf4j-log4j12:jar:1.7.16:compile
[INFO] | | +- commons-logging:commons-logging:jar:1.1.3:compile
and the POM
On Fri, Aug 5, 2016 at 9:53 AM, Carlo.Allocca wrote:
>
> <dependency>
>   <groupId>org.apache.spark</groupId>
>   <artifactId>spark-core_2.10</artifactId>
>   <version>2.0.0</version>
>   <type>jar</type>
> </dependency>
>
> <dependency>
>   <groupId>org.apache.spark</groupId>
>   <artifactId>spark-sql_2.10</artifactId>
>   <version>2.0.0</version>
> </dependency>
>
Please Sean, could you detail the version mismatch?
Many thanks,
Carlo
On 5 Aug 2016, at 18:11, Sean Owen wrote:
You also seem to have a
version mismatch here.
One option is to clone the class in your own project.
Experts may have a better solution.
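A minimal sketch of such a clone, assuming slf4j is on the classpath (it is
pulled in by Spark itself):

import org.slf4j.{Logger, LoggerFactory}

// drop-in replacement for the now-private org.apache.spark.Logging
trait Logging {
  @transient private var _log: Logger = null
  protected def log: Logger = {
    if (_log == null) {
      _log = LoggerFactory.getLogger(getClass.getName.stripSuffix("$"))
    }
    _log
  }
  protected def logInfo(msg: => String): Unit = if (log.isInfoEnabled) log.info(msg)
  protected def logWarning(msg: => String): Unit = if (log.isWarnEnabled) log.warn(msg)
  protected def logError(msg: => String): Unit = if (log.isErrorEnabled) log.error(msg)
}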
Cheers
On Fri, Aug 5, 2016 at 10:10 AM, Carlo.Allocca
wrote:
> Hi Ted,
>
> Thanks for the prompt answer.
> It is not yet clear to me what I should do.
>
> How to fix it?
>
>
Hi Ted,
Thanks for the prompt answer.
It is not yet clear to me what I should do.
How to fix it?
Many thanks,
Carlo
On 5 Aug 2016, at 17:58, Ted Yu wrote:
private[spark] trait Logging {
In 2.0, Logging became private:
private[spark] trait Logging {
FYI
On Fri, Aug 5, 2016 at 9:53 AM, Carlo.Allocca
wrote:
> Dear All,
>
> I would like to ask for your help about the following issue:
> java.lang.ClassNotFoundException:
> org.apache.spark.Logging
>
> I
Can you try running the example like this:
./bin/run-example sql.RDDRelation
I know there are some jars in the example folders, and running them this
way adds them to the classpath
On Jul 7, 2016 3:47 AM, "kevin" wrote:
> hi,all:
> I build spark use:
>
>
Thanks Jakob,
I've looked into the source code and found that I was missing this property:
spark.repl.class.uri
Setting it solved the problem.
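For anyone who hits the same thing, it boiled down to one line (how you
obtain the class-server URI depends on how you embed the interpreter, so
treat the variable below as an assumption):

// replClassServerUri is assumed to come from the embedded REPL interpreter
sparkConf.set("spark.repl.class.uri", replClassServerUri)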
Cheers
2016-03-17 18:14 GMT-03:00 Jakob Odersky :
> The error is very strange indeed; however, without code that reproduces
> it,
The error is very strange indeed; however, without code that reproduces
it, we can't really provide much help beyond speculation.
One thing that stood out to me immediately is that you say you have an
RDD of Any where every Any should be a BigDecimal, so why not specify
that type information?
When
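For example, something along these lines (a sketch; toDecimals and its
argument are made-up names for the RDD[Any] in question):

import org.apache.spark.rdd.RDD

// pin down the element type up front instead of carrying RDD[Any] around
def toDecimals(anyRdd: RDD[Any]): RDD[BigDecimal] =
  anyRdd.map(_.asInstanceOf[BigDecimal])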
Hi Ted, thanks for answering.
The map is just that: whatever I do inside the map throws this
ClassNotFoundException, even map(f => f).
What is bothering me is that when I do a take or a first it returns the
result, which makes me conclude that the previous code
bq. $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$anonfun$1
Do you mind showing more of your code involving the map() ?
On Thu, Mar 17, 2016 at 8:32 AM, Dirceu Semighini Filho <
dirceu.semigh...@gmail.com> wrote:
> Hello,
> I found a strange behavior after executing a prediction with MLIB.
> My code
You need to make sure this class is accessible to all nodes: since this is
cluster mode, the driver can run on any of the worker nodes.
On Fri, Dec 25, 2015 at 5:57 PM, Saiph Kappa wrote:
> Hi,
>
> I'm submitting a spark job like this:
>
>
I found out that by commenting out this line in the application code:
sparkConf.set("spark.executor.extraJavaOptions", " -XX:+UseCompressedOops
-XX:+UseConcMarkSweepGC -XX:+AggressiveOpts -XX:FreqInlineSize=300
-XX:MaxInlineSize=300 ")
the exception does not occur anymore. Not entirely sure why, but
I'm not 100% sure, but I don't think a jar within a jar will work without a
custom class loader. You can perhaps try to use "maven-assembly-plugin" or
"maven-shade-plugin" to build your uber/fat jar. Both of these will build a
flattened single jar.
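For reference, a minimal maven-shade-plugin setup looks roughly like this
(the version number is just an example):

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <version>2.4.1</version>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>shade</goal>
      </goals>
    </execution>
  </executions>
</plugin>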
--
Ali
On Nov 26, 2015, at 2:49 AM, Marc de
It turned out to be a problem with `SerializationUtils` from Apache Commons
Lang. There is an open issue where the class will throw a
`ClassNotFoundException` even if the class is in the classpath in a
multiple-classloader environment:
https://issues.apache.org/jira/browse/LANG-1049
We moved away
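A common workaround (a sketch, not necessarily what we ended up with) is to
deserialize with the thread context classloader instead:

import java.io.{ByteArrayInputStream, InputStream, ObjectInputStream, ObjectStreamClass}

// resolves classes via the context classloader rather than the caller's loader
class ContextAwareObjectInputStream(in: InputStream) extends ObjectInputStream(in) {
  override def resolveClass(desc: ObjectStreamClass): Class[_] =
    Class.forName(desc.getName, false, Thread.currentThread().getContextClassLoader)
}

def deserialize[T](bytes: Array[Byte]): T = {
  val ois = new ContextAwareObjectInputStream(new ByteArrayInputStream(bytes))
  try ois.readObject().asInstanceOf[T] finally ois.close()
}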
Where is the exception thrown (full stack trace)? How are you running your
application, via spark-submit or spark-shell?
On Tue, Nov 3, 2015 at 1:43 AM, hveiga wrote:
> Hello,
>
> I am facing an issue where I cannot run my Spark job in a cluster
> environment (standalone or
Now I am running up against some other problem while trying to schedule tasks:
15/05/01 22:32:03 ERROR Executor: Exception in task 0.0 in stage 0.0 (TID 0)
java.lang.IllegalStateException: unread block data
at
bq. Caused by: java.lang.ClassNotFoundException: com.example.Schema$MyRow
So the above class is in the jar which was in the classpath?
Can you tell us a bit more about Schema$MyRow ?
On Fri, May 1, 2015 at 8:05 AM, Akshat Aranya aara...@gmail.com wrote:
Hi,
I'm getting a
Yes, this class is present in the jar that was loaded in the classpath
of the executor Java process -- it wasn't even lazily added as a part
of the task execution. Schema$MyRow is a protobuf-generated class.
After doing some digging around, I think I might be hitting up against
SPARK-5470, the
I cherry-picked the fix for SPARK-5470 and the problem has gone away.
On Fri, May 1, 2015 at 9:15 AM, Akshat Aranya aara...@gmail.com wrote:
Yes, this class is present in the jar that was loaded in the classpath
of the executor Java process -- it wasn't even lazily added as a part
of the task
Hi Kevin,
Yes, I can test it. Does that mean I have to build Spark from the git repository?
Ralph
On 17.03.15 at 02:59, Kevin (Sangwoo) Kim wrote:
Hi Ralph,
It seems like the https://issues.apache.org/jira/browse/SPARK-6299 issue,
which I'm working on.
I submitted a PR for it; would you test it?
Hi Ralph,
It seems like the https://issues.apache.org/jira/browse/SPARK-6299 issue,
which I'm working on.
I submitted a PR for it; would you test it?
Regards,
Kevin
On Tue, Mar 17, 2015 at 1:11 AM Ralph Bergmann ra...@dasralph.de wrote:
Hi,
I want to try the JavaSparkPi example[1] on a
Thanks for the notification!
For now, I'll use the Kryo serializer without registering classes until the
bug fix has been merged into the next version of Spark (I guess that will
be 1.3, right?).
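(For the record, skipping registration is just a config change; a minimal
sketch, with a placeholder app name:)

import org.apache.spark.SparkConf

val conf = new SparkConf()
  .setAppName("my-app")
  .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
// spark.kryo.registrationRequired defaults to false, so nothing needs registering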
arun
On Sun, Feb 1, 2015 at 10:58 PM, Shixiong Zhu zsxw...@gmail.com wrote:
It's a bug that has
It's a bug that has been fixed in https://github.com/apache/spark/pull/4258
but not yet been merged.
Best Regards,
Shixiong Zhu
2015-02-02 10:08 GMT+08:00 Arun Lists lists.a...@gmail.com:
Here is the relevant snippet of code in my main program:
===
Can you make sure the class SimpleApp$$anonfun$1 is included in your app
jar?
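For example, with a placeholder jar name:

jar tf target/simple-app_2.10-1.0.jar | grep -F 'SimpleApp$$anonfun'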
2014-11-20 18:19 GMT+01:00 Benoit Pasquereau [via Apache Spark User List]
ml-node+s1001560n19391...@n3.nabble.com:
Hi Guys,
I’m having an issue in standalone mode (Spark 1.1, Hadoop 2.4, Windows
Server 2008).
Hi,
Yes, the error still occurs when we replace the lambdas with named
functions:
(same error traces as in previous posts)
Note that running a simple map+reduce job on the same HDFS files with the
same installation works fine:
Did you call collect() on the totalLength? Otherwise nothing has actually
executed.
Oh, I'm sorry... reduce is also an action
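To make the distinction concrete, a sketch (the path is made up; sc is
assumed to be the SparkContext):

val lineLengths = sc.textFile("hdfs:///data/input.txt").map(_.length.toLong)
// map is lazy; reduce is an action, so this line triggers the computation
val totalLength = lineLengths.reduce(_ + _)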
On Wed, Jul 16, 2014 at 3:37 PM, Michael Armbrust mich...@databricks.com
wrote:
Note that running a simple map+reduce job on the same HDFS files with the
same installation works fine:
Did you call collect() on the totalLength? Otherwise
Hi Michael,
Thanks for your reply. Yes, the reduce triggered the actual execution, I got
a total length (totalLength: 95068762, for the record).
Hmm, it could be some weirdness with classloaders / Mesos / Spark SQL?
I'm curious if you would hit an error if there were no lambda functions
involved. Perhaps if you load the data using jsonFile or parquetFile.
Either way, I'd file a JIRA. Thanks!
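Something like this would take the lambdas out of the picture (the path is
made up; assuming a version where sqlContext.jsonFile is available):

val data = sqlContext.jsonFile("hdfs:///data/events.json")
data.count()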
On Jul 16, 2014 6:48 PM, Svend
Hi Tobias,
Regarding my comment on closure serialization:
I was discussing it with my fellow Sparkers here and I totally overlooked
the fact that you need the class files to de-serialize the closures (or
whatever) on the workers, so you always need the jar file delivered to the
workers in order
Hi Tobias,
On Wed, May 21, 2014 at 5:45 PM, Tobias Pfeiffer t...@preferred.jp wrote:
first, thanks for your explanations regarding the jar files!
No prob :-)
On Thu, May 22, 2014 at 12:32 AM, Gerard Maas gerard.m...@gmail.com
wrote:
I was discussing it with my fellow Sparkers here and I
Here's the 1.0.0rc9 version of the docs:
https://people.apache.org/~pwendell/spark-1.0.0-rc9-docs/running-on-mesos.html
I refreshed them with the goal of steering users more towards prebuilt
packages rather than relying on compiling from source, plus improving overall
formatting and clarity, but not
Hi Andrew,
Thanks for the current doc.
I'd almost gotten to the point where I thought that my custom code needed
to be included in the SPARK_EXECUTOR_URI but that can't possibly be
correct. The Spark workers that are launched on Mesos slaves should start
with the Spark core jars and then
I just ran into the same problem. I will respond if I find out how to fix it.