$YARN_CONF_DIR
export SPARK_HOME=/hadoop/user/ooxpdeva/spark151
echo $SPARK_HOME
$SPARK_HOME/bin/spark-submit --verbose --class anubhav.Main --master yarn-client \
  --num-executors 7 --driver-memory 6g --executor-memory 6g \
  --executor-cores 8 --queue deva --conf "spark.executor.extraJavaOp
System.out.println(">>>>>>>> starting " + sctx.isLocal());
JavaSparkContext ctx = new JavaSparkContext(sctx);
HiveContext hiveCtx = new HiveContext(ctx.sc());
DataFrame df = hiveCtx.sql("show tables");
System.out.println(">>>>>>>>");
Hi,
I am able to fetch data, create tables, and put data into Hive from the
spark-shell (Scala command line), but when I write Java code to do the same
and submit it through spark-submit I get *"Initial job has not accepted any
resources; check your cluster UI to ensure that workers are registered and
have sufficient resources"*.
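Not the thread's resolution, but for context: this message means no executors
were ever granted, which often happens when the requested executor memory or
cores exceed what the cluster or queue can offer. A minimal sketch of one way
to narrow it down is resubmitting with smaller requests; the reduced numbers
and the jar name below are illustrative, not from the original post:

$SPARK_HOME/bin/spark-submit --verbose --class anubhav.Main --master yarn-client \
  --num-executors 2 --driver-memory 2g --executor-memory 2g --executor-cores 2 \
  --queue deva my-app.jar   # my-app.jar is a placeholder for the real application jar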
or this.
>
> Best Regards,
>
> Jerry
>
> On Sat, Oct 3, 2015 at 12:50 PM, Burak Yavuz <brk...@gmail.com> wrote:
>
>> Hi Jerry,
>>
>> The --packages feature doesn't support private repositories right now.
>> However, in the case of s3, maybe it might work. Could you please try using
>> the --repositories flag and provide the address:
>> `$ spark-submit --packages my:awesome:package --repositories
>> s3n://$aws_ak:$aws_sak@bucket/path/to/repo`
>>
>> If that doesn't work, could you please file a JIRA?
>>
>> Best,
>> Burak
Hi Jerry,
The --packages feature doesn't support private repositories right now.
However, in the case of s3, maybe it might work. Could you please try using
the --repositories flag and provide the address:
`$ spark-submit --packages my:awesome:package --repositories
s3n://$aws_ak:$aws_sak@bucket/path/to/repo`
Hi,
We have python2.6 (default) on the cluster and we have also installed
python2.7.
I was looking for a way to set the Python version in spark-submit.
Does anyone know how to do this?
Thanks
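The archive cuts off before any reply, but the usual knob is the PYSPARK_PYTHON
environment variable (plus PYSPARK_DRIVER_PYTHON for the driver). A minimal
sketch, assuming python2.7 is on the PATH of every node and test.py stands in
for the real script:

export PYSPARK_PYTHON=python2.7          # interpreter used by the executors
export PYSPARK_DRIVER_PYTHON=python2.7   # interpreter used by the driver
spark-submit test.py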
Hi spark users and developers,
I'm trying to use spark-submit --packages against private s3 repository.
With sbt, I'm using fm-sbt-s3-resolver with proper aws s3 credentials. I
wonder how I can add this resolver to spark-submit so that --packages
can resolve dependencies from the private repo.
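Burak's suggestion earlier in this digest, spelled out as a full invocation; a
sketch only, with placeholder package coordinates, bucket path, and jar name:

spark-submit \
  --packages my:awesome:package \
  --repositories s3n://$aws_ak:$aws_sak@bucket/path/to/repo \
  my-app.jar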
Hello all,
Goal: I want to use APIs from the HttpClient library 4.4.1. I am using the
Maven shade plugin to generate the JAR.
Findings: When I run my program as a Java application within Eclipse,
everything works fine. But when I run the program using spark-submit I get
the following error:
URL content Could not initialize class
org.apache.http.conn.ssl.SSLConnectionSocketFactory
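The resolution isn't shown here, but this pattern (works in the IDE, fails
under spark-submit) usually means the older httpclient bundled with
Spark/Hadoop wins on the executor classpath. Two common workarounds, sketched
with placeholder class and jar names: relocate org.apache.http inside the
shaded jar, or ask Spark to prefer the application's classes:

spark-submit --class com.example.Main \
  --conf spark.driver.userClassPathFirst=true \
  --conf spark.executor.userClassPathFirst=true \
  my-shaded-app.jar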
Setting it programmatically doesn't work either:
sparkConf.setIfMissing("class", "...Main")
In my current setting, moving main to another package requires propagating the
change to deploy scripts. It doesn't matter, I will find some other way. Petr
On Fri, Sep 25, 2015 at 4:40 PM, Petr Novak
Otherwise it seems it tries to load from a checkpoint which I have deleted
and cannot be found. Or it should work and I have something else wrong.
Documentation doesn't mention option with jar manifest, so I assume it
doesn't work this way.
Many thanks,
Petr
I'm sorry. Both approaches actually work. It was something else wrong with
my cluster. Petr
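For reference, the two standard ways to point spark-submit at the main class,
sketched with placeholder names (the thread confirms the manifest route works
even though the docs don't mention it):

# explicit --class flag
spark-submit --class com.example.Main my-app.jar

# no --class: spark-submit falls back to the Main-Class entry in the jar manifest
spark-submit my-app.jar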
On Fri, Sep 25, 2015 at 4:53 PM, Petr Novak wrote:
> Either setting it programatically doesn't work:
> sparkConf.setIfMissing("class", "...Main")
>
> In my current setting moving
spark-submit.
Does anyone know what the solution to this is?
best,
/Shahab
Hi,
I am facing a strange issue while using Chronos: the job is not able to find
the Main class when invoking spark-submit through Chronos.
The issue I identified is a "colon" in the task name.
Env: Chronos-scheduled job on Mesos
/tmp/mesos/slaves/20150911-070325-218147008-5050-30275-S4/
I load the CSV data source as a DataFrame, using the
header for column names:
val df = sqlContext.load("com.databricks.spark.csv", Map("path" ->
"sfpd.csv", "header" -> "true"))
Now, I want to do the above as part of my package using spark-submit.
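The excerpt is cut before the command, but the usual way to make spark-csv
available to spark-submit is --packages with its Maven coordinates; a sketch,
where the version and the application names are illustrative:

spark-submit --packages com.databricks:spark-csv_2.10:1.2.0 \
  --class com.example.Main my-app.jar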
>
> "spark-submit.py -v test.py"
>
> I see that my "spark.files" default option has been replaced with
> "spark.files test.py", basically spark-submit is overwriting
> spark.files with the name of the script.
>
> Is this a bug or is ther
> > So a bit more investigation shows that:
> >
> > if I have configured spark-defaults.conf with:
> >
> > "spark.files library.py"
> >
> > then if I call
> >
> > "spark-submit.py -v test.py"
> >
> > I see that my "spark.files" default option has been replaced with
> > "spark.files test.py".
in my spark-defaults.conf I have:
spark.files file1.zip, file2.py
spark.master spark://master.domain.com:7077
If I execute:
bin/pyspark
I can see it adding the files correctly.
However if I execute
bin/spark-submit test.py
where test.py relies on the file1.zip, I get an error.
> If I execute:
> bin/pyspark
>
> I can see it adding the files correctly.
>
> However if I execute
>
> bin/spark-submit test.py
>
> where test.py relies on the file1.zip, I get an error.
>
> If I instead execute
>
> bin/spark-submit --py-files file1.zip test.py
So a bit more investigation shows that:
if I have configured spark-defaults.conf with:
"spark.files library.py"
then if I call
"spark-submit.py -v test.py"
I see that my "spark.files" default option has been replaced with
"spark.files test.py".
ing Spark 1.3.1 ) Create a
> command
> string that uses "spark-submit" in it ( with my Class file etc ), and i
> store this string in a temp file somewhere as a shell script Using
> Runtime.exec, i execute this script and wait for its completion, using
> pro
> thx for the inputs Igor, I am actually building an Analytics layer ('As
> a service' model using Spark as the backend engine) and hence I am
> implementing it this way... Initially, I was opening the spark-context in
> the JVM that I had spawned (without even using spark-submit) and add
thx for the inputs Igor, I am actually building an Analytics layer ('As a
service' model using Spark as the backend engine) and hence I am implementing
it this way... Initially, I was opening the spark-context in the JVM that I
had spawned (without even using spark-submit) and adding all
s immediately
exits.
From: Igor Berman <igor.ber...@gmail.com>
Sent: Monday, August 31, 2015 12:41 PM
To: Pranay Tonpay
Cc: user
Subject: Re: spark-submit issue
Might be you need to drain the stdout/stderr of the subprocess... otherwise the
subprocess can deadlock:
http://stackoverflow.com/quest
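One shell-level way to act on that advice, sketched here with placeholder
names: have the generated wrapper script redirect spark-submit's output to a
file, so the parent JVM that spawned it never has to drain the pipes itself.

#!/bin/sh
# wrapper script executed via Runtime.exec; output goes to a log file, not a pipe
spark-submit --class com.example.Main my-app.jar > /tmp/spark-job.log 2>&1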
1. Think once again about whether you want to call spark-submit in such a
way... I'm not sure why you do it, but please consider just opening the Spark
context inside your JVM (you need to add the Spark jars to the classpath).
2. Use https://commons.apache.org/proper/commons-exec/ with
PumpStreamHandler.
On 31 August 2015
, 2015 11:02 AM
To: Pranay Tonpay
Cc: user@spark.apache.org
Subject: Re: spark-submit issue
You can also add a System.exit(0) after the sc.stop.
On 30 Aug 2015 23:55, "Pranay Tonpay"
<pranay.ton...@impetus.co.in> wrote:
yes, the context is being closed at the end.
5 9:18 PM
To: Pranay Tonpay
Cc: Igor Berman; user@spark.apache.org
Subject: Re: spark-submit issue
Can you not use the spark jobserver instead? Just submit your job to the job
server, which already has the SparkContext initialized in it; it would make it
much easier, I think.
Thanks
Best Regards
.
--
*From:* Akhil Das ak...@sigmoidanalytics.com
*Sent:* Sunday, August 30, 2015 9:03 AM
*To:* Pranay Tonpay
*Cc:* user@spark.apache.org
*Subject:* Re: spark-submit issue
Did you try putting a sc.stop at the end of your pipeline?
Thanks
Best Regards
On Thu, Aug 27, 2015 at 6:41 PM
yes, the context is being closed at the end.
From: Akhil Das ak...@sigmoidanalytics.com
Sent: Sunday, August 30, 2015 9:03 AM
To: Pranay Tonpay
Cc: user@spark.apache.org
Subject: Re: spark-submit issue
Did you try putting a sc.stop at the end of your pipeline
*To:* Pranay Tonpay
*Cc:* user@spark.apache.org
*Subject:* Re: spark-submit issue
Did you try putting a sc.stop at the end of your pipeline?
Thanks
Best Regards
On Thu, Aug 27, 2015 at 6:41 PM, pranay pranay.ton...@impetus.co.in
wrote:
I have a java program that does this - (using Spark 1.3.1
Did you try putting a sc.stop at the end of your pipeline?
Thanks
Best Regards
On Thu, Aug 27, 2015 at 6:41 PM, pranay pranay.ton...@impetus.co.in wrote:
I have a java program that does this - (using Spark 1.3.1 ) Create a
command
string that uses spark-submit in it ( with my Class file etc
I have a java program that does this - (using Spark 1.3.1 ) Create a command
string that uses spark-submit in it ( with my Class file etc ), and i
store this string in a temp file somewhere as a shell script Using
Runtime.exec, i execute this script and wait for its completion, using
This worked for me locally:
spark-1.4.1-bin-hadoop2.4/bin/spark-submit --conf
spark.executor.extraClassPath=/.m2/repository/ch/qos/logback/logback-core/1.1.2/logback-core-1.1.2.jar:/.m2/repository/ch/qos/logback/logback-classic/1.1.2/logback-classic-1.1.2.jar
--conf
spark.driver.extraClassPath
On Tue, Aug 25, 2015 at 10:48 AM, Utkarsh Sengar utkarsh2...@gmail.com wrote:
Now I am going to try it out on our mesos cluster.
I assumed spark.executor.extraClassPath takes a comma-separated list of jars
the way --jars does, but it should be ':'-separated like a regular classpath.
Ah, yes, those options
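A side-by-side sketch of the two separators (paths abbreviated; the "..."
stands for the rest of the command):

# --jars takes a comma-separated list
spark-submit --jars /path/logback-core-1.1.2.jar,/path/logback-classic-1.1.2.jar ...

# extraClassPath is a regular classpath, so entries are colon-separated
spark-submit --conf spark.executor.extraClassPath=/path/logback-core-1.1.2.jar:/path/logback-classic-1.1.2.jar ...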
So do I need to manually copy these 2 jars on my spark executors?
On Tue, Aug 25, 2015 at 10:51 AM, Marcelo Vanzin van...@cloudera.com
wrote:
On Tue, Aug 25, 2015 at 10:48 AM, Utkarsh Sengar utkarsh2...@gmail.com
wrote:
Now I am going to try it out on our mesos cluster.
I assumed
On Tue, Aug 25, 2015 at 1:50 PM, Utkarsh Sengar utkarsh2...@gmail.com wrote:
So do I need to manually copy these 2 jars on my spark executors?
Yes. I can think of a way to work around that if you're using YARN,
but not with other cluster managers.
On Tue, Aug 25, 2015 at 10:51 AM, Marcelo
Looks like I am stuck then, I am using Mesos.
Adding these 2 jars to all executors might be a problem for me, I will
probably try to remove the dependency on the otj-logging lib then and just
use log4j.
On Tue, Aug 25, 2015 at 2:15 PM, Marcelo Vanzin van...@cloudera.com wrote:
On Tue, Aug 25, 2015
<artifactId>slf4j-log4j12</artifactId>
</exclusion>
</exclusions>
</dependency>
Now, when I run my job from Intellij (which sets the classpath), things
work perfectly.
But when I run my job via spark-submit:
~/spark-1.4.1-bin-hadoop2.4/bin/spark-submit --class
<groupId>org.slf4j</groupId>
<artifactId>slf4j-log4j12</artifactId>
</exclusion>
</exclusions>
</dependency>
Now, when I run my job from Intellij (which sets the classpath), things work
perfectly.
But when I run my job via spark-submit:
~/spark
<artifactId>slf4j-log4j12</artifactId>
</exclusion>
</exclusions>
</dependency>
The SparkRunner class works fine (from IntelliJ) but when I build a jar and
submit it to spark-submit:
I get this error:
Caused by: java.lang.ClassCastException
</dependency>
And no exclusions from my logging lib.
And I submit this task: spark-1.4.1-bin-hadoop2.4/bin/spark-submit --class
runner.SparkRunner --conf
spark.driver.extraClassPath=/.m2/repository/ch/qos/logback/logback-classic/1.1.2/logback-classic-1.1.2.jar
--conf
spark.executor.extraClassPath=/.m2
Hi Utkarsh,
A quick look at slf4j's source shows it loads the first
StaticLoggerBinder in your classpath. How are you adding the logback
jar file to spark-submit?
If you use spark.driver.extraClassPath and
spark.executor.extraClassPath to add the jar, it should take
precedence over the log4j
git:(bulkrunner) ✗ spark-1.4.1-bin-hadoop2.4/bin/spark-submit
--class runner.SparkRunner --jars
/.m2/repository/ch/qos/logback/logback-classic/1.1.2/logback-classic-1.1.2.jar,/.m2/repository/ch/qos/logback/logback-core/1.1.2/logback-core-1.1.2.jar
--conf spark.executor.userClassPathFirst=true --conf
On Mon, Aug 24, 2015 at 3:58 PM, Utkarsh Sengar utkarsh2...@gmail.com wrote:
That didn't work since extraClassPath flag was still appending the jars at
the end, so it's still picking the slf4j jar provided by Spark.
Out of curiosity, how did you verify this? The extraClassPath
options are
submit this task: spark-1.4.1-bin-hadoop2.4/bin/spark-submit --class
runner.SparkRunner --conf
spark.driver.extraClassPath=/.m2/repository/ch/qos/logback/logback-classic/1.1.2/logback-classic-1.1.2.jar
--conf
spark.executor.extraClassPath=/.m2/repository/ch/qos/logback/logback-classic/1.1.2/logback
Is there any possibility to run a standalone Scala program via spark-submit?
Or do I always have to put it in a package and build it with Maven (or sbt)?
What if I have just a simple program, like the example word counter?
Could anyone please show it on this simple test file, Greeting.scala?
I haven't tried it, but scala-shell should work if you give it a scala
script file, since it's basically a wrapper around the Scala REPL.
dean
On Thursday, August 20, 2015, MasterSergius master.serg...@gmail.com
wrote:
Is there any possibility to run standalone scala program via spark submit
Hi Satish,
The problem is that `--jars` accepts a comma-delimited list of jars! E.g.
spark-submit ... --jars lib1.jar,lib2.jar,lib3.jar main.jar
where main.jar is your main application jar (the one that starts a
SparkContext), and lib*.jar refer to additional libraries that your main
Please notice the 'jars: null' in the verbose output.
I don't know why you put '///', but I would propose you just use normal
absolute paths.
dse spark-submit --master spark://10.246.43.15:7077 --class HelloWorld
--jars /home/missingmerch/postgresql-9.4-1201.jdbc41.jar
/home/missingmerch/dse.jar
/home/missingmerch
*HI,*
Please let me know if i am missing anything in the command below
*Command:*
dse spark-submit --master spark://10.246.43.15:7077 --class HelloWorld
--jars ///home/missingmerch/postgresql-9.4-1201.jdbc41.jar
///home/missingmerch/dse.jar
///home/missingmerch/spark-cassandra-connector
11, 2015 at 2:44 PM, satish chandra j jsatishchan...@gmail.com
wrote:
HI ,
I have used --jars option as well, please find the command below
*Command:*
dse spark-submit --master spark://10.246.43.15:7077 --class HelloWorld
*--jars* ///home/missingmerch/postgresql-9.4-1201.jdbc41.jar
///home
,*
Please let me know if i am missing anything in the command below
*Command:*
dse spark-submit --master spark://10.246.43.15:7077 --class HelloWorld
--jars ///home/missingmerch/postgresql-9.4-1201.jdbc41.jar
///home/missingmerch/dse.jar
///home/missingmerch/spark-cassandra-connector-java_2.10
HI,
Please find the log details below:
dse spark-submit --verbose --master local --class HelloWorld
etl-0.0.1-SNAPSHOT.jar --jars
file:/home/missingmerch/postgresql-9.4-1201.jdbc41.jar
file:/home/missingmerch/dse.jar
file:/home/missingmerch/postgresql-9.4-1201.jdbc41.jar
Using properties file
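The thread doesn't show the fix, but note that spark-submit treats everything
after the primary application jar as arguments to the application, so the
--jars flag in the command above never reaches spark-submit, and --jars expects
a single comma-separated list. Assuming dse spark-submit passes its options
through unchanged, a corrected sketch of the same command would be:

dse spark-submit --verbose --master local --class HelloWorld \
  --jars file:/home/missingmerch/postgresql-9.4-1201.jdbc41.jar,file:/home/missingmerch/dse.jar \
  etl-0.0.1-SNAPSHOT.jar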
command line to spark-submit:
bin/spark-submit --verbose --master local[2] --class
org.yardstick.spark.SparkCoreRDDBenchmark
/shared/ysgood/target/yardstick-spark-uber-0.0.1.jar
Here is the output:
NOTE: SPARK_PREPEND_CLASSES is set, placing locally compiled Spark classes
ahead of assembly
Did you try this way?
/usr/local/spark/bin/spark-submit --master mesos://mesos.master:5050 --conf
spark.mesos.executor.docker.image=docker.repo/spark:latest --class
org.apache.spark.examples.SparkPi --jars
hdfs://hdfs1/tmp/spark-examples-1.4.1-hadoop2.6.0-cdh5.4.4.jar 100
Thanks
Best
/deanwampler
http://polyglotprogramming.com
On Sun, Aug 9, 2015 at 4:30 AM, Akhil Das ak...@sigmoidanalytics.com
wrote:
Did you try this way?
/usr/local/spark/bin/spark-submit --master mesos://mesos.master:5050
--conf spark.mesos.executor.docker.image=docker.repo/spark:latest --class
Did you try this way?
/usr/local/spark/bin/spark-submit --master mesos://mesos.master:5050 --conf
spark.mesos.executor.docker.image=docker.repo/spark:latest --class
org.apache.spark.examples.SparkPi --jars
hdfs://hdfs1/tmp/spark-examples-1.4.1-hadoop2.6.0-cdh5.4.4.jar 100
I did, and got
-applications.html
[2]
http://apache-spark-user-list.1001560.n3.nabble.com/Spark-submit-not-working-when-application-jar-is-in-hdfs-td21840.html
Given the following command line to spark-submit:
bin/spark-submit --verbose --master local[2] --class
org.yardstick.spark.SparkCoreRDDBenchmark
/shared/ysgood/target/yardstick-spark-uber-0.0.1.jar
Here is the output:
NOTE: SPARK_PREPEND_CLASSES is set, placing locally compiled Spark classes
Hi Guru,
I am executing this on a DataStax Enterprise Spark node and the ~/.dserc file
exists, which contains the Cassandra credentials, but I am still getting the error.
Below is the given command
dse spark-submit --master spark://10.246.43.15:7077 --class HelloWorld
--jars ///home/missingmerch/postgresql-9.4
spark-submit Spark error: exception in thread "main" java.io.IOException:
Invalid Request Exception (why you have not logged in)
Note: submitting on a DataStax Spark node
Please let me know if anybody has a solution for this issue.
Regards,
Satish Chandra
this on a DataStax Enterprise Spark node and the ~/.dserc file
exists, which contains the Cassandra credentials, but I am still getting the error.
Below is the given command
dse spark-submit --master spark://10.246.43.15:7077 --class HelloWorld --jars
///home/missingmerch
running on a cluster. Deployment to YARN is not supported directly by
SparkContext. Please use spark-submit
This is the Java code I tried on a single-node cluster:
SparkConf sparkConf = new
SparkConf().setAppName("Hive").setMaster("local").setSparkHome(path);
JavaSparkContext ctx = new JavaSparkContext(sparkConf);
HI,
I have submitted a Spark Job with options jars,class,master as *local* but
i am getting an error as below
*dse spark-submit Spark error: exception in thread "main" java.io.IOException:
Invalid Request Exception (why you have not logged in)*
*Note: submitting datastax spark node*
please let me
:
I saw such example in docs:
--conf spark.driver.extraJavaOptions=-Dlog4j.configuration=file://$path_to_file
but, unfortunately, it does not work for me.
On 30.07.2015 05:12, canan chen wrote:
Yes, that should work. What I mean is: is there any option in the spark-submit
command that I can specify for the log level?
I saw such example in docs:
--conf
spark.driver.extraJavaOptions=-Dlog4j.configuration=file://$path_to_file
but, unfortunately, it does not work for me.
On 30.07.2015 05:12, canan chen wrote:
Yes, that should work. What I mean is is there any option in
spark-submit command that I can specify
If you run it on yarn with kerberos setup. You authenticate yourself by kinit
before launching the job.
Thanks.
Zhan Zhang
On Jul 28, 2015, at 8:51 PM, Anh Hong
hongnhat...@yahoo.com.INVALIDmailto:hongnhat...@yahoo.com.INVALID wrote:
Hi,
I'd like to remotely run spark-submit from a local
Anyone know how to set log level in spark-submit ? Thanks
Put a log4j.properties file in conf/. You can copy
log4j.properties.template as a good base
On Wednesday, July 29, 2015, canan chen ccn...@gmail.com wrote:
Anyone know how to set log level in spark-submit ? Thanks
Hi Zhan, I'm running a standalone Spark cluster and execute spark-submit from a
local host outside the cluster. Besides Kerberos, do you know any other
existing method? Is there any JIRA opened on this enhancement request?
Regards, Anh.
On Wednesday, July 29, 2015 4:15 PM, Zhan Zhang zzh
Yes, that should work. What I mean is: is there any option in the spark-submit
command that I can specify for the log level?
On Thu, Jul 30, 2015 at 10:05 AM, Jonathan Coveney jcove...@gmail.com
wrote:
Put a log4j.properties file in conf/. You can copy
log4j.properties.template as a good base
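Jonathan's suggestion, written out as a short sketch (the WARN level and the
application names are just examples):

cd $SPARK_HOME/conf
cp log4j.properties.template log4j.properties
# edit the root logger line, e.g. change
#   log4j.rootCategory=INFO, console
# to
#   log4j.rootCategory=WARN, console
spark-submit --class com.example.Main my-app.jar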
El
Hi, I'd like to remotely run spark-submit from a local machine to submit a job
to a Spark cluster (cluster mode).
What method do I use to authenticate myself to the cluster? For example, how do
I pass a user id, password, or private key to the cluster?
Any help is appreciated.
Hi,
I'm using spark-submit cluster mode to submit a job from a local machine to a
Spark cluster. There are input files, output files, and job log files that I
need to transfer in and out between the local machine and the Spark cluster.
Are there any recommended methods for transferring the files? Is there any future
-shell for exploration and I have a runner class that
executes some tasks with spark-submit. I used to run against
1.4.0-SNAPSHOT. Since then 1.4.0 and 1.4.1 were released so I tried to
switch to the official release. Now, when I run the program as a shell,
everything works but when I try
that is pretty odd -- toMap not being there would be from scala...but what
is even weirder is that toMap is positively executed on the driver machine,
which is the same when you do spark-shell and spark-submit...does it work
if you run with --master local[*]?
Also, you can try to put a set -x
The problem should be toMap, as I tested that val maps2=maps.collect
runs ok. When I run spark-shell, I run with --master
mesos://cluster-1:5050 parameter which is the same with spark-submit.
Confused here.
2015-07-22 20:01 GMT-05:00 Yana Kadiyska yana.kadiy...@gmail.com:
Is it complaining
for exploration and I have a runner class that
executes some tasks with spark-submit. I used to run against
1.4.0-SNAPSHOT. Since then 1.4.0 and 1.4.1 were released so I tried to
switch to the official release. Now, when I run the program as a shell,
everything works but when I try to run
Hi,
I have a simple test spark program as below, the strange thing is that it
runs well under spark-shell, but gets a runtime error of
java.lang.NoSuchMethodError
under spark-submit, which indicates that the line:
val maps2=maps.collect.toMap
has a problem. But why the compilation has
Is it complaining about collect or toMap? In either case this error is
indicative of an old version usually -- any chance you have an old
installation of Spark somehow? Or scala? You can try running spark-submit
with --verbose. Also, when you say it runs with spark-shell do you run
spark shell
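A sketch of the checks suggested in this thread (class and jar names are
placeholders); --verbose prints the resolved classpath entries and Spark
properties, which usually exposes a stale installation, and --master local[*]
rules out the cluster side:

spark-submit --verbose --master local[*] \
  --class com.example.Main my-app.jar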
I have a spark program that uses dataframes to query hive and I run it both
as a spark-shell for exploration and I have a runner class that executes
some tasks with spark-submit. I used to run against 1.4.0-SNAPSHOT. Since
then 1.4.0 and 1.4.1 were released so I tried to switch to the official
Thanks for the reply.
Actually, I don't think excluding spark-hive from spark-submit --packages
is a good idea.
I don't want to recompile spark by assembly for my cluster, every time a
new spark release is out.
I prefer using binary version of spark and then adding some jars for job
execution
/bin/spark-submit \
--class mgm.tp.bigdata.ma_spark.SparkMain \
--master yarn-cluster \
--executor-memory 9G \
--total-executor-cores 16 \
ma-spark.jar \
1000
Maybe my configuration is not optimal?
best regards,
paul
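Not raised in the thread, but worth noting: --total-executor-cores only applies
to standalone and Mesos deployments; on YARN the parallelism is set with
--num-executors and --executor-cores. A sketch of the equivalent YARN
submission (the 4x4 split is illustrative, not a recommendation):

/bin/spark-submit \
  --class mgm.tp.bigdata.ma_spark.SparkMain \
  --master yarn-cluster \
  --executor-memory 9G \
  --num-executors 4 \
  --executor-cores 4 \
  ma-spark.jar \
  1000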
when I do run this command:
ashutosh@pas-lab-server7:~/spark-1.4.0$ ./bin/spark-submit \
--class org.apache.spark.graphx.lib.Analytics \
--master spark://172.17.27.12:7077 \
assembly/target/scala-2.10/spark-assembly-1.4.0-hadoop2.2.0.jar \
pagerank soc-LiveJournal1.txt --numEPart=100 --nverts
do run this command:
ashutosh@pas-lab-server7:~/spark-1.4.0$ ./bin/spark-submit \
--class org.apache.spark.graphx.lib.Analytics \
--master spark://172.17.27.12:7077 \
assembly/target/scala-2.10/spark-assembly-1.4.0-hadoop2.2.0.jar \
pagerank soc-LiveJournal1.txt --numEPart=100 --nverts
...@gmail.com wrote:
I'm trying to submit a spark job from a different server outside of my
Spark
Cluster (running spark 1.4.0, hadoop 2.4.0 and YARN) using the
spark-submit
script :
spark/bin/spark-submit --master yarn-client --executor-memory 4G
myjobScript.py
The thing is that my application
I'm trying to submit a spark job from a different server outside of my Spark
Cluster (running spark 1.4.0, hadoop 2.4.0 and YARN) using the spark-submit
script :
spark/bin/spark-submit --master yarn-client --executor-memory 4G
myjobScript.py
The thing is that my application never passes from
a different server outside of my
Spark
Cluster (running spark 1.4.0, hadoop 2.4.0 and YARN) using the
spark-submit
script :
spark/bin/spark-submit --master yarn-client --executor-memory 4G
myjobScript.py
The thing is that my application never passes from the accepted state; it is
stuck on it:
15/07/08
I want to add spark-hive as a dependence to submit my job, but it seems that
spark-submit can not resolve it.
$ ./bin/spark-submit \
→ --packages
org.apache.spark:spark-hive_2.10:1.4.0,org.postgresql:postgresql:9.3-1103-jdbc3,joda-time:joda-time:2.8.1
\
→ --class
, but it seems
that
spark-submit can not resolve it.
$ ./bin/spark-submit \
→ --packages
org.apache.spark:spark-hive_2.10:1.4.0,org.postgresql:postgresql:9.3-1103-jdbc3,joda-time:joda-time:2.8.1
\
→ --class fr.leboncoin.etl.jobs.dwh.AdStateTraceDWHTransform \
→ --master spark://localhost:7077 \
Ivy
)
... 9 more
From: Akhil Das [ak...@sigmoidanalytics.com]
Sent: 29 June 2015 09:43
To: Hisham Mohamed
Cc: user@spark.apache.org
Subject: Re: spark-submit in deployment mode with the --jars option
Can you paste the stacktrace? Looks like you are missing few
. Have you got the rights
to execute it?
On Sun., 28.06.2015 at 04:53, Ashish Soni asoni.le...@gmail.com
wrote:
Not sure what is the issue but when i run the spark-submit or spark-shell
i am getting below error
/usr/bin/spark-class: line 24: /usr/bin/load-spark-env.sh: No such file
I assume that /usr/bin/load-spark-env.sh exists. Have you got the rights to
execute it?
On Sun., 28.06.2015 at 04:53, Ashish Soni asoni.le...@gmail.com
wrote:
Not sure what is the issue but when i run the spark-submit or spark-shell
i am getting below error
/usr/bin/spark-class
Hi,
I want to deploy my application on a standalone cluster.
Spark submit acts in a strange way. When I deploy the application in
*client* mode, everything works well and my application can see the
additional jar files.
Here is the command:
spark-submit --master spark://1.2.3.4:7077 --deploy
Not sure what is the issue but when i run the spark-submit or spark-shell i
am getting below error
/usr/bin/spark-class: line 24: /usr/bin/load-spark-env.sh: No such file or
directory
Can some one please help
Thanks,
Try to add them in the SPARK_CLASSPATH in your conf/spark-env.sh file
Thanks
Best Regards
On Thu, Jun 25, 2015 at 9:31 PM, Bin Wang binwang...@gmail.com wrote:
I am trying to run the Spark example code HBaseTest from command line
using spark-submit instead run-example, in that case, I can
I am trying to run the Spark example code HBaseTest from command line using
spark-submit instead run-example, in that case, I can learn more how to run
spark code in general.
However, it told me CLASS_NOT_FOUND about htrace since I am using CDH5.4. I
successfully located the htrace jar file but I
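A sketch of Akhil's suggestion using the jar the poster says he located (the
CDH path below is a guess at a typical CDH 5.4 layout, and the example jar and
table names are placeholders):

# conf/spark-env.sh
export SPARK_CLASSPATH=/opt/cloudera/parcels/CDH/jars/htrace-core-3.1.0-incubating.jar

# or pass it at submit time
spark-submit --jars /opt/cloudera/parcels/CDH/jars/htrace-core-3.1.0-incubating.jar \
  --class org.apache.spark.examples.HBaseTest spark-examples.jar <table_name>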