You have to provide your own Hadoop distro and all its dependencies. This
build is intended for use on a Hadoop cluster, really. If you're running
stand-alone, you should not be using it. Use a 'normal' distribution that
bundles Hadoop libs.
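(For reference, the "without Hadoop" builds read the Hadoop classes from the
SPARK_DIST_CLASSPATH environment variable; the usual wiring is a line like
export SPARK_DIST_CLASSPATH=$(hadoop classpath) in conf/spark-env.sh, assuming
the hadoop command from your own installation is on the PATH.)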
On Wed, Aug 24, 2022 at 9:35 AM FLORANCE Grégory
Hi,
I've downloaded the Spark 3.2.2 "without Hadoop" distribution in
order to test it in a Hadoop-free context.
I tested the version with Hadoop and it worked well.
When I wanted to start the master, I
Hello Spark Community,
I have a Spark-SQL problem where I am receiving a NoClassDefFoundError
for org.apache.spark.sql.catalyst.util.RebaseDateTime$. This happens for any
query with a filter on a Timestamp column when the query is first run
programmatically but not when the query
org.bdgenomics.adam is one of the components of GATK, and I just
downloaded the release version from its GitHub website. However, when I build
a new Docker image with Spark 2.4.5 and Scala 2.12.4, it works well, which
makes me confused.
root@master2:~# pyspark
Python 2.7.17 (default,
How are you depending on that org.bdgenomics.adam library? Maybe you're
pulling the 2.11 version of that.
Spark 3 supports only Scala 2.12. This actually sounds like the third-party
library is compiled for 2.11 or something.
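For illustration (the coordinates below are a hypothetical sketch, check the
real ones): in sbt, the %% operator resolves the artifact whose _2.x suffix
matches scalaVersion, which avoids exactly this kind of mismatch.

scalaVersion := "2.12.10"
// %% appends the Scala binary version, i.e. adam-core-spark3_2.12;
// a hard-coded _2.11 suffix here would reproduce the error on Spark 3
libraryDependencies += "org.bdgenomics.adam" %% "adam-core-spark3" % "0.32.0"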
On Fri, Jun 5, 2020 at 11:11 PM charles_cai <1620075...@qq.com> wrote:
> Hi Pol,
>
> thanks for your suggestion, I am going to use Spark-3.0.0 for GPU
> acceleration,so I update the
Hi Pol,
thanks for your suggestion. I am going to use Spark 3.0.0 for GPU
acceleration, so I updated Scala to *version 2.12.11* and the latest
*2.13*, but the error is still there. By the way, the Spark version is
*spark-3.0.0-preview2-bin-without-hadoop*
Caused by:
> Hi,
>
> I run the GATK MarkDuplicates in Spark mode and it throws a
> *NoClassDefFoundError: scala/Product$class*. The GATK version is 4.1.7 and
> 4.0.0, the environment is: spark-3.0.0, scala-2.11.12
>
> *GATK commands:*
>
> gatk MarkDuplicatesSpark \
> -I h
Hi,
I run the GATK MarkDuplicates in Spark mode and it throws a
*NoClassDefFoundError: scala/Product$class*. The GATK version is 4.1.7 and
4.0.0, the environment is: spark-3.0.0, scala-2.11.12
*GATK commands:*
gatk MarkDuplicatesSpark \
-I hdfs://master2:9000/Drosophila/output
Hi All,
I have simply added exception handling in my code in Scala. I am
getting a NoClassDefFoundError. Any leads would be appreciated.
Thanks
Kind Regards,
Sachit Murarka
Error in the highlighted line. Code, error, and pom.xml are included below.
code:
final Session session = connector.openSession();
final PreparedStatement prepared = session.prepare("INSERT INTO
spark_test5.messages JSON?");
JavaStreamingContext ssc = new
> --conf
> "spark.driver.extraJavaOptions=-XX:MaxPermSize=6G -XX:+UseConcMarkSweepGC"
> --conf "spark.executor.extraJavaOptions=-XX:+UseConcMarkSweepGC -verbose:gc
> -XX:+PrintGCDetails -XX:+PrintGCTimeStamps" --class MY_DRIVER
> ~/project-assembly-0.0.1-SNAPSHOT.j
--conf "spark.driver.extraJavaOptions=-XX:MaxPermSize=6G -XX:+UseConcMarkSweepGC"
--conf "spark.executor.extraJavaOptions=-XX:+UseConcMarkSweepGC -verbose:gc
-XX:+PrintGCDetails -XX:+PrintGCTimeStamps" --class MY_DRIVER
~/project-assembly-0.0.1-SNAPSHOT.jar
==
Is there anything I am missing here? I understand that NoClass
SQLContext class. my maven project doesn't have any problem during
> compile and packaging phase. but when I send .jar of project to sjs and run
> it "NoClassDefFoundError" will be issued. the trace of exception is :
>
>
> job-server[ERROR] Exception in thread "pool-20-thread
Hello
I've extended the JavaSparkJob (job-server-0.6.2) and created an object
of the SQLContext class. My Maven project doesn't have any problems during
the compile and packaging phases, but when I send the project's .jar to sjs
and run it, a "NoClassDefFoundError" is issued. The trace of
I've gotten a little further along. It now submits the job via YARN, but
the jobs exit immediately with the following error:
Exception in thread "main" java.lang.NoClassDefFoundError:
org/apache/spark/Logging
at java.lang.ClassLoader.defineClass1(Native Method)
at
Not sure why your code is looking for the Logging class under
org/apache/spark; it should be "org/apache/spark/internal/Logging", and it
changed a long time ago.
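For background: org.apache.spark.Logging was public API in Spark 1.x and was
removed in 2.0; its replacement, org.apache.spark.internal.Logging, is private
to Spark. So this error almost always means some jar on the classpath was
compiled against Spark 1.x, and the fix is a Spark 2.x build of that jar
rather than a code change. A hypothetical sketch of the pattern that breaks
(it compiles against Spark 1.x only):

class MyOldJob extends org.apache.spark.Logging {
  def run(): Unit = logInfo("hello")  // logInfo is inherited from the 1.x trait
}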
On Sun, Oct 16, 2016 at 3:25 AM, Brad Cox wrote:
> I'm experimenting with Spark 2.0.1 for the first time and hitting a
I'm experimenting with Spark 2.0.1 for the first time and hitting a problem
right out of the gate.
My main routine starts with this, which I think is the standard idiom.
SparkSession sparkSession = SparkSession
.builder()
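For anyone landing here from a search, the full builder idiom the snippet
truncates, as a minimal Scala sketch with an assumed app name and master:

import org.apache.spark.sql.SparkSession

val spark = SparkSession
  .builder()
  .appName("example")   // assumed name
  .master("local[*]")   // assumption: local run; omit when launching via spark-submit
  .getOrCreate()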
Thanks Ben
The thing is, I am using Spark 2 and no stack from CDH!
Is this approach to reading/writing to HBase specific to Cloudera?
Dr Mich Talebzadeh
LinkedIn:
https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
Mich,
I know up until CDH 5.4 we had to add the HTrace jar to the classpath to make
it work using the command below. But after upgrading to CDH 5.7, it became
unnecessary.
echo "/opt/cloudera/parcels/CDH/jars/htrace-core-3.2.0-incubating.jar" >>
/etc/spark/conf/classpath.txt
Hope this helps.
Trying a bulk load using HFiles in Spark as in the example below:
import org.apache.spark._
import org.apache.spark.rdd.NewHadoopRDD
import org.apache.hadoop.hbase.{HBaseConfiguration, HTableDescriptor}
import org.apache.hadoop.hbase.client.HBaseAdmin
import
Which version of Java 8 do you use? AFAIK, it's recommended to use Java
1.8.0_66 or later.
On Fri, Jul 22, 2016 at 8:49 PM, Jacek Laskowski wrote:
> On Fri, Jul 22, 2016 at 6:43 AM, Ted Yu wrote:
> > You can use this command (assuming log aggregation is turned
On Fri, Jul 22, 2016 at 6:43 AM, Ted Yu wrote:
> You can use this command (assuming log aggregation is turned on):
>
> yarn logs --applicationId XX
I don't think it's gonna work for an already-running application (and I
wish I were mistaken, since I needed it just yesterday) and
what's the easiest way to get the Classpath for the spark application
itself?
On Thu, Jul 21, 2016 at 9:37 PM Ted Yu <yuzhih...@gmail.com> wrote:
> Might be classpath issue.
>
> Mind pastebin'ning the effective class path ?
>
> Stack trace of NoClassDefFoundError may also h
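One way to get it from inside the application itself, as a small Scala sketch
(assumes sc is the running SparkContext and a plain JVM classpath rather than
a custom classloader):

// effective JVM classpath of the current process
println(System.getProperty("java.class.path"))
// whatever extra classpath Spark itself was configured with, if any
sc.getConf.getOption("spark.driver.extraClassPath").foreach(println)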
Might be classpath issue.
Mind pastebin'ning the effective class path ?
Stack trace of NoClassDefFoundError may also help provide some clue.
On Thu, Jul 21, 2016 at 8:26 PM, Ilya Ganelin <ilgan...@gmail.com> wrote:
> Hello - I'm trying to deploy the Spark TimeSeries library
Hello - I'm trying to deploy the Spark TimeSeries library in a new
environment. I'm running Spark 1.6.1 submitted through YARN in a cluster
with Java 8 installed on all nodes but I'm getting the NoClassDef at
runtime when trying to create a new TimeSeriesRDD. Since ZonedDateTime is
part of Java 8
> at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
>
> at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
>
> at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
>
> ... 10 more
>
> 15/11/18 17:15:15 INFO util.Utils: Shutdown hook called
>
Subject: Re: spark with breeze error of NoClassDefFoundError
The simplest way is to remove all "provided" entries in your pom, then run
'sbt assembly' to build your final package, then get rid of '--jars' because
the assembly already includes all dependencies.
On Nov 18, 2015, at 2:15 PM, Jack Yang
<j...@uow.ed
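A build.sbt sketch of that approach (version numbers assumed for the era; one
common refinement is to keep Spark itself "provided", since spark-submit
supplies it at runtime, while libraries such as breeze must not be, so that
sbt assembly folds them into the final jar):

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % "1.5.2" % "provided",
  "org.scalanlp" %% "breeze" % "0.11.2"  // no "provided": bundled by sbt assembly
)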
15/11/19 10:28:29 INFO util.Utils: Shutdown hook called
>
> Meanwhile, I will prefer to use maven to compile the jar file rather than
> sbt, although it is indeed another option.
>
> Best regards,
> Jack
>
>
>
> From: Fengdong Yu [mailto:fengdo...@everstring.com
Yu; user@spark.apache.org
Subject: Re: spark with breeze error of NoClassDefFoundError
Dear Ted,
I just looked at the link you provided, it is great!
From my understanding, I could also directly use other Breeze parts (except
the Spark MLlib linalg package) in a Spark (Scala or Java) program after
Looking in local maven repo, breeze_2.10-0.7.jar contains DefaultArrayValue
:
jar tvf
/Users/tyu/.m2/repository//org/scalanlp/breeze_2.10/0.7/breeze_2.10-0.7.jar
| grep !$
jar tvf
/Users/tyu/.m2/repository//org/scalanlp/breeze_2.10/0.7/breeze_2.10-0.7.jar
| grep DefaultArrayValue
369 Wed Mar
using scala
2.10.4, and spark was compiled against scala 2.10.x. Perhaps I’m missing
something here.
Also, the NoClassDefFoundError presents itself when debugging in eclipse,
but running directly via the jar, the following error appears:
Exception in thread main
version that
I'm using. However, that's not the case for me. I'm using scala 2.10.4, and
spark was compiled against scala 2.10.x. Perhaps I'm missing something here.
Also, the NoClassDefFoundError presents itself when debugging in eclipse, but
running directly via the jar, the following
@spark.apache.org
Subject: Re: NoClassDefFoundError: scala/collection/GenTraversableOnce$class
You can generate dependency tree using:
mvn dependency:tree
and grep for 'org.scala-lang' in the output to see if there is any clue.
Cheers
On Wed, Jul 29, 2015 at 5:14 PM, Benjamin Ross
br...@lattice
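(dependency:tree can also do the filtering itself, e.g.
mvn dependency:tree -Dincludes=org.scala-lang, which saves the grep.)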
I think this is a bug in Spark SQL that dates back to at least 1.1.0.
The json_tuple function is implemented as
org.apache.hadoop.hive.ql.udf.generic.GenericUDTFJSONTuple. The
ClassNotFoundException should complain with the class name rather than
the UDTF function name.
The problematic line
Filed https://issues.apache.org/jira/browse/SPARK-6708 to track this.
Cheng
On 4/4/15 10:21 PM, Cheng Lian wrote:
I think this is a bug in Spark SQL that dates back to at least 1.1.0.
The json_tuple function is implemented as
org.apache.hadoop.hive.ql.udf.generic.GenericUDTFJSONTuple. The
How did you build Spark? Which version of Spark do you have? Doesn't
this thread already explain it?
https://www.mail-archive.com/user@spark.apache.org/msg25505.html
Thanks
Best Regards
On Thu, Apr 2, 2015 at 11:10 PM, Todd Nist tsind...@gmail.com wrote:
Hi Akhil,
Tried your suggestion
I placed it there. It was downloaded from the MySQL site.
On Fri, Apr 3, 2015 at 6:25 AM, ÐΞ€ρ@Ҝ (๏̯͡๏) deepuj...@gmail.com wrote:
Akhil
you mentioned /usr/local/spark/lib/mysql-connector-java-5.1.34-bin.jar .
how come you got this lib into spark/lib folder.
1) did you place it there ?
2) What
Hi Deepujain,
I did include the jar file, I believe it is hive-exec.jar, through the
--jars option:
./bin/spark-shell --master spark://radtech.io:7077
--total-executor-cores 2 --driver-class-path
/usr/local/spark/lib/mysql-connector-java-5.1.34-bin.jar --jars
I think you need to include, through the --jars option, the jar file that
contains the Hive definition (code) of the UDF json_tuple. That should solve
your problem.
On Fri, Apr 3, 2015 at 3:57 PM, Todd Nist tsind...@gmail.com wrote:
I placed it there. It was downloaded from MySql site.
On Fri, Apr 3,
Started the Spark shell with the one jar from Hive that was suggested:
./bin/spark-shell --master spark://radtech.io:7077
--total-executor-cores 2 --driver-class-path
/usr/local/spark/lib/mysql-connector-java-5.1.34-bin.jar --jars
/opt/apache-hive-0.13.1-bin/lib/hive-exec-0.13.1.jar
Results in the same
Copy pasted his command in the same thread.
Thanks
Best Regards
On Fri, Apr 3, 2015 at 3:55 PM, ÐΞ€ρ@Ҝ (๏̯͡๏) deepuj...@gmail.com wrote:
Akhil
you mentioned /usr/local/spark/lib/mysql-connector-java-5.1.34-bin.jar .
how come you got this lib into spark/lib folder.
1) did you place it there
Akhil
you mentioned /usr/local/spark/lib/mysql-connector-java-5.1.34-bin.jar .
how come you got this lib into spark/lib folder.
1) did you place it there ?
2) What is download location ?
On Fri, Apr 3, 2015 at 3:42 PM, Todd Nist tsind...@gmail.com wrote:
Started the spark shell with the one
Try adding all the jars in your $HIVE/lib directory. If you want the
specific jar, you could look for the jackson or json serde jars in it.
Thanks
Best Regards
On Thu, Apr 2, 2015 at 12:49 AM, Todd Nist tsind...@gmail.com wrote:
I have a feeling I’m missing a Jar that provides the support or could this
Hi Akhil,
Tried your suggestion to no avail. I actually do not see any jackson or
json serde jars in the $HIVE/lib directory. This is Hive 0.13.1 and
Spark 1.2.1.
Here is what I did:
I have added the lib folder to the --jars option when starting the
spark-shell,
but the job fails. The
Hi,
I am using CDH 5.3.2 packages installation through Cloudera Manager 5.3.2
I am trying to run one spark job with following command
PYTHONPATH=~/code/utils/ spark-submit --master yarn --executor-memory 3G
--num-executors 30 --driver-memory 2G --executor-cores 2 --name=analytics
InputSplit is in hadoop-mapreduce-client-core jar
Please check that the jar is in your classpath.
Cheers
On Mon, Mar 23, 2015 at 8:10 AM, , Roy rp...@njit.edu wrote:
Hi,
I am using CDH 5.3.2 packages installation through Cloudera Manager 5.3.2
I am trying to run one spark job with
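(Following the jar tvf pattern used elsewhere in this thread, something like
jar tf hadoop-mapreduce-client-core-*.jar | grep InputSplit, run against the
jar's actual location, confirms which jar provides the class.)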
Do you assemble the uber jar? You can use sbt assembly to build the jar and
then run it. That should fix the issue.
Adding a hadoop-2.6 profile is not necessary. Use hadoop-2.4, which
already exists and is intended for 2.4+. In fact this declaration is
missing things that Hadoop 2 needs.
On Thu, Dec 18, 2014 at 3:46 AM, Kyle Lin kylelin2...@gmail.com wrote:
Hi there
The following is my steps. And got the
Spark 1.3 does not exist. Spark 1.2 hasn't been released just yet. Which
version of Spark did you mean?
Also, from what I can see in the docs
http://spark.apache.org/docs/1.1.1/building-with-maven.html#specifying-the-hadoop-version,
I believe the latest version of Hadoop that Spark supports is
Spark works fine with 2.4 *and later*. The docs don't mean to imply
2.4 is the last supported version.
On Wed, Dec 17, 2014 at 10:19 AM, Nicholas Chammas
nicholas.cham...@gmail.com wrote:
Spark 1.3 does not exist. Spark 1.2 hasn't been released just yet. Which
version of Spark did you mean?
Thanks for the correction, Sean. Do the docs need to be updated on this
point, or is it safer for now just to note 2.4 specifically?
On Wed Dec 17 2014 at 5:54:53 AM Sean Owen so...@cloudera.com wrote:
Spark works fine with 2.4 *and later*. The docs don't mean to imply
2.4 is the last
Thanks for your replies.
I was building spark from trunk.
Daniel
On 17 Dec 2014, at 19:49, Nicholas Chammas nicholas.cham...@gmail.com
wrote:
Thanks for the correction, Sean. Do the docs need to be updated on this
point, or is it safer for now just to note 2.4 specifically?
On Wed
Hi there
The following are my steps, and I got the same exception as Daniel's.
Another question: how can I build a tgz file like the pre-built file I
downloaded from the official website?
1. download trunk from git.
2. add the following lines in pom.xml
+ <profile>
+   <id>hadoop-2.6</id>
+
I also got the same problem.
2014-12-09 22:58 GMT+08:00 Daniel Haviv danielru...@gmail.com:
Hi,
I've built spark 1.3 with hadoop 2.6 but when I startup the spark-shell I
get the following exception:
14/12/09 06:54:24 INFO server.AbstractConnector: Started
Hi,
I've built Spark 1.3 with Hadoop 2.6, but when I start up the spark-shell I
get the following exception:
14/12/09 06:54:24 INFO server.AbstractConnector: Started
SelectChannelConnector@0.0.0.0:4040
14/12/09 06:54:24 INFO util.Utils: Successfully started service 'SparkUI'
on port 4040.
14/12/09
Hi everyone,
I am new to Spark and encountered a problem.
I want to use an external library in a java project and compiling
works fine with maven, but during runtime (locally) I get a
NoClassDefFoundError.
Do I have to put the jars somewhere, or tell spark where they are?
I can send the pom.xml
project and compiling
works fine with maven, but during runtime (locally) I get a
NoClassDefFoundError.
Do I have to put the jars somewhere, or tell spark where they are?
I can send the pom.xml and my imports or source code, if this helps you.
Best regards
Julius Kolbe
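(The usual answers, per the rest of this thread: pass the jars to spark-submit
with --jars, point spark.driver.extraClassPath at them, or build an uber jar
with the maven-shade or sbt-assembly plugin so there is nothing extra to
locate at runtime.)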
thrift server fail with NoClassDefFoundError on
Guava
Thanks Judy. While this is not directly caused by a Spark issue, it is likely
other users will run into this. This is an unfortunate consequence of the way
that we've shaded Guava in this release; we rely on byte-code shading of Hadoop
itself
Subject: Re: latest Spark 1.2 thrift server fail with NoClassDefFoundError on
Guava
Just to double check - I looked at our own assembly jar and I confirmed that
our Hadoop configuration class does use the correctly shaded version of
Guava. My best guess here is that somehow a separate Hadoop library
, November 24, 2014 11:50 PM
To: Cheng Lian; u...@spark.incubator.apache.org
Subject: RE: latest Spark 1.2 thrift server fail with NoClassDefFoundError on
Guava
This is what I got from jar tf:
org/spark-project/guava/common/base/Preconditions.class
org/spark-project/guava/common/math
AM
To: Judy Nash; u...@spark.incubator.apache.org
Subject: Re: latest Spark 1.2 thrift server fail with NoClassDefFoundError on
Guava
Oh so you're using Windows. What command are you using to start the Thrift
server then?
On 11/25/14 4:25 PM, Judy Nash wrote:
Made progress but still blocked.
After
with NoClassDefFoundError on
Guava
To determine if this is a Windows vs. other configuration, can you just try to
call the Spark-class.cmd SparkSubmit without actually referencing the Hadoop or
Thrift server classes?
On Tue Nov 25 2014 at 5:42:09 PM Judy Nash
judyn
SparkContext unsuccessfully.
Let me know if you need anything else.
*From:*Cheng Lian [mailto:lian.cs@gmail.com]
*Sent:* Friday, November 21, 2014 8:02 PM
*To:* Judy Nash; u...@spark.incubator.apache.org
*Subject:* Re: latest Spark 1.2 thrift server fail with
NoClassDefFoundError on Guava
Hi
...@spark.incubator.apache.org
Subject: Re: latest Spark 1.2 thrift server fail with NoClassDefFoundError on
Guava
Hm, I tried exactly the same commit and the build command locally, but couldn’t
reproduce this.
Usually this kind of error is caused by classpath misconfiguration. Could you
please try
Hi,
Thrift server is failing to start for me on latest spark 1.2 branch.
I got the error below when I start thrift server.
Exception in thread main java.lang.NoClassDefFoundError:
com/google/common/base/Preconditions
at org.apache.hadoop.conf.Configuration$DeprecationDelta.<init>(Configur
Hi Judy, could you please provide the commit SHA1 of the version you're
using? Thanks!
On 11/22/14 11:05 AM, Judy Nash wrote:
Hi,
Thrift server is failing to start for me on latest spark 1.2 branch.
I got the error below when I start thrift server.
Exception in thread main
I just built the 1.2 snapshot current as of commit 76386e1a23c using:
$ ./make-distribution.sh --tgz --name my-spark --skip-java-test -DskipTests
-Phadoop-2.4 -Phive -Phive-0.13.1 -Pyarn
I drop my Hive configuration files into the conf directory, launch
spark-shell, and then create my
Hi Terry
I think the issue you mentioned will be resolved by the following PR.
https://github.com/apache/spark/pull/3072
- Kousuke
(2014/11/03 10:42), Terry Siu wrote:
I just built the 1.2 snapshot current as of commit 76386e1a23c using:
$ ./make-distribution.sh --tgz --name my-spark
terry@smartfocus.com, user@spark.apache.org
user@spark.apache.org
Subject: Re: NoClassDefFoundError encountered in Spark 1.2-snapshot build
with hive-0.13.1 profile
Hi Terry
I think the issue you mentioned will be resolved by the following PR.
https://github.com/apache/spark/pull/3072
Hi,
I am trying to get my Spark application to run on YARN and by now I have
managed to build a fat jar as described on
http://markmail.org/message/c6no2nyaqjdujnkq (which is the only really
usable manual on how to get such a jar file). My code runs fine using sbt
test and sbt run, but when
Hi again,
On Thu, Oct 30, 2014 at 11:50 AM, Tobias Pfeiffer t...@preferred.jp wrote:
Spark assembly has been built with Hive, including Datanucleus jars on
classpath
Exception in thread main java.lang.NoClassDefFoundError:
com/typesafe/scalalogging/slf4j/Logger
It turned out scalalogging
I had an offline discussion with Akhil, but this issue is still not resolved.
2014-10-24 0:18 GMT-07:00 Akhil Das ak...@sigmoidanalytics.com:
Make sure the guava jar
http://mvnrepository.com/artifact/com.google.guava/guava/12.0 is
present in the classpath.
Thanks
Best Regards
On Thu, Oct 23, 2014
I have checked out from master, cleaned/rebuilt on the command line in Maven,
then cleaned/rebuilt in IntelliJ many times. This error persists through it
all. Anyone have a solution?
2014-10-23 1:43 GMT-07:00 Stephen Boesch java...@gmail.com:
After having checked out from master/head the
Make sure the guava jar
http://mvnrepository.com/artifact/com.google.guava/guava/12.0 is present
in the classpath.
Thanks
Best Regards
On Thu, Oct 23, 2014 at 2:13 PM, Stephen Boesch java...@gmail.com wrote:
After having checked out from master/head the following error occurs when
attempting
After having checked out from master/head the following error occurs when
attempting to run any test in Intellij
Exception in thread main java.lang.NoClassDefFoundError:
com/google/common/util/concurrent/ThreadFactoryBuilder
at org.apache.spark.util.Utils$.<init>(Utils.scala:648)
There appears to
FYI, in case anybody else has this problem, we switched to Spark 1.1
(outside CDH) and the same Spark application worked first time (once
recompiled with Spark 1.1 libs of course). I assume this is because Spark
1.1 is compiled with Hive.
On 29 September 2014 17:41, Patrick McGloin
Hi,
I have an error when submitting a Spark SQL application to our Spark
cluster:
14/09/29 16:02:11 WARN scheduler.TaskSetManager: Loss was due to
java.lang.NoClassDefFoundError
*java.lang.NoClassDefFoundError: org/apache/hadoop/mapred/JobConf*
at
By the way, for anyone using elasticsearch-hadoop, there is a fix for this
here: https://github.com/elasticsearch/elasticsearch-hadoop/issues/239
Ryan - using the nightly snapshot build of 2.1.0.BUILD-SNAPSHOT fixed this
for me.
On Thu, Aug 7, 2014 at 3:58 PM, Nick Pentreath
I'm also getting this - Ryan we both seem to be running into this issue
with elasticsearch-hadoop :)
I tried spark.files.userClassPathFirst true on the command line and that
doesn't work.
If I put that line in spark/conf/spark-defaults it works, but now I'm
getting:
java.lang.NoClassDefFoundError:
In case you still have issues with duplicate files in the uber jar, here is a
reference sbt file with the assembly plugin that deals with duplicates:
https://github.com/databricks/training/blob/sparkSummit2014/streaming/scala/build.sbt
On Fri, Jul 11, 2014 at 10:06 AM, Bill Jay
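The usual shape of that fix, as a build.sbt sketch (exact syntax varies across
sbt-assembly versions, and the match patterns depend on which files actually
collide):

assemblyMergeStrategy in assembly := {
  case PathList("META-INF", xs @ _*) => MergeStrategy.discard  // drop colliding manifests/signatures
  case _                             => MergeStrategy.first    // otherwise keep the first copy
}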
Also, the reason spark-streaming-kafka is not included in the Spark
assembly is that we do not want dependencies of external systems like Kafka
(which itself probably has a complex dependency tree) to cause conflicts
with core Spark's functionality and stability.
TD
On Sun, Jul 13, 2014
The easiest fix would be adding the Kafka jars to the SparkContext while
creating it.
Thanks
Best Regards
On Fri, Jul 11, 2014 at 4:39 AM, Dilip dilip_ram...@hotmail.com wrote:
Hi,
I am trying to run a program with spark streaming using Kafka on a stand
alone system. These are my details:
Hi Akhil,
Can you please guide me through this? Because the code I am running
already has this in it:
[java]
SparkContext sc = new SparkContext();
sc.addJar("/usr/local/spark/external/kafka/target/scala-2.10/spark-streaming-kafka_2.10-1.1.0-SNAPSHOT.jar");
Is there something I am
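An equivalent that avoids the post-hoc addJar call, as a Scala sketch passing
the same jar (path assumed unchanged) through the conf instead:

import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setAppName("kafka-streaming")  // assumed name
  .setJars(Seq("/usr/local/spark/external/kafka/target/scala-2.10/spark-streaming-kafka_2.10-1.1.0-SNAPSHOT.jar"))
val sc = new SparkContext(conf)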
I have met similar issues. The reason is probably that spark-streaming-kafka
is not included in the Spark assembly. Currently, I am using Maven to
generate a shaded package with all the dependencies. You may try to use sbt
assembly to include the dependencies in your jar file.
Bill
On Thu, Jul
A simple
sbt assembly
is not working. Is there any other way to include particular jars with
the assembly command?
Regards,
Dilip
On Friday 11 July 2014 12:45 PM, Bill Jay wrote:
I have met similar issues. The reason is probably because in Spark
assembly, spark-streaming-kafka is not
You may try to use this one:
https://github.com/sbt/sbt-assembly
I had an issue of duplicate files in the uber jar file. But I think this
library will assemble dependencies into a single jar file.
Bill
On Fri, Jul 11, 2014 at 1:34 AM, Dilip dilip_ram...@hotmail.com wrote:
A simple
sbt