This can help us solve the immediate issue; ideally, though, one should
submit the jars at the beginning of the job.
The purpose of broadcast variables is different.
@Malveeka, could you please explain your use case and issue?
If the fat/uber jar does not contain the required dependent jars, the Spark
job will fail at the start itself.
What is your scenario in which you want to add new jars?
Also, what do you mean by
In the case of already running jobs, you can make use of broadcast variables,
which will broadcast the jars to the workers; if you want to change them on
the fly, you can rebroadcast. You can explore broadcast variables a bit more
to make use of them.
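A minimal sketch of that rebroadcast pattern, assuming the dependency can be
expressed as data (broadcast variables ship serialized values to executors,
not classpath jars); all names here are illustrative:

import org.apache.spark.broadcast.Broadcast
import org.apache.spark.sql.SparkSession

object RebroadcastSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("rebroadcast-sketch").getOrCreate()
    val sc = spark.sparkContext

    // A job that reads whatever the current broadcast holds.
    def lookupSum(bc: Broadcast[Map[String, Int]]): Double =
      sc.parallelize(Seq("a", "b")).map(k => bc.value.getOrElse(k, 0)).sum()

    // Broadcast an initial value and use it in tasks.
    var lookup = sc.broadcast(Map("a" -> 1))
    println(lookupSum(lookup))

    // "Change it on the fly": release the old broadcast, then broadcast anew.
    // Jobs submitted after this point see the new value.
    lookup.unpersist(blocking = true)
    lookup = sc.broadcast(Map("a" -> 1, "b" -> 2))
    println(lookupSum(lookup))

    spark.stop()
  }
}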
Regards,
Kedar Dixit
Data Science at Persistent Systems Ltd.
Hi
With spark-submit we can start a new Spark job, but it will not add new
jar files to an already running job.
~Sushil
On Wed, May 23, 2018, 17:28 kedarsdixit
wrote:
Hi,
You can add dependencies in spark-submit as below:
./bin/spark-submit \
  --class <main-class> \
  --master <master-url> \
  --deploy-mode <deploy-mode> \
  --conf <key>=<value> \
  --jars <comma-separated-list-of-jars> \
  ... # other options
  <application-jar> \
  [application-arguments]
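The same dependency list can also be supplied programmatically when the
session is built up front, via the spark.jars configuration; this only takes
effect at session creation, and the paths below are hypothetical:

import org.apache.spark.sql.SparkSession

// Hedged equivalent of --jars, set when the session is first created;
// the jar paths are placeholder examples.
val spark = SparkSession.builder()
  .appName("my-app")
  .config("spark.jars", "/path/to/dep1.jar,/path/to/dep2.jar")
  .getOrCreate()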
Hope this helps.
Regards,
Kedar Dixit
Data Science at Persistent Systems Ltd
Hi.
Can I add jars to the Spark executor classpath in a running context?
Basically, if I have a running Spark session and I edit spark.jars in
the middle of the code, will it pick up the changes?
If not, is there any way to add new dependent jars to a running Spark
context?
We’re using Livy
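For reference, the closest built-in hook for this (with caveats) is
SparkContext.addJar: it ships a jar to the executors for tasks submitted
afterwards, but it does not change the driver's classpath, so classes the
driver has already loaded are unaffected. A minimal sketch, with a
hypothetical path:

import org.apache.spark.sql.SparkSession

// Add a jar to an already running context; this affects future tasks on
// executors only, not the driver's own classpath.
val spark = SparkSession.builder().appName("addjar-sketch").getOrCreate()
spark.sparkContext.addJar("/path/to/extra-dep.jar")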
In my production setup, Spark always takes 40 seconds between these steps,
as if a fixed counter were set. In my local lab the same steps take exactly 1
second. I am not able to find the root cause of this behaviour. My
Spark application is running on the Hortonworks platform in YARN client mode.
Can
By the way, this issue happens only with classes needed for the input format; if
the input format is org.apache.hadoop.mapred.TextInputFormat and the SerDe
is from an additional jar, it works just fine.
I don't want to upgrade CDH for this. Also, if it should work on CDH 5.5, why
is that? What patch fixes
Upgrade to CDH 5.5 for Spark. It should work.
On Sat, Jan 9, 2016 at 12:17 AM, Ophir Etzion wrote:
It didn't work, assuming I did the right thing.
In the properties you could see
You cannot 'add jar' input formats and SerDes. They need to be part of
your auxlib.
On Fri, Jan 8, 2016 at 12:19 PM, Ophir Etzion wrote:
I tried now; still getting:
16/01/08 16:37:34 ERROR exec.Utilities: Failed to load plan:
hdfs://hadoop-alidoro-nn-vip/tmp/hive/hive/c2af9882-38a9-42b0-8d17-3f56708383e8/hive_2016-01-08_16-36-41_370_3307331506800215903-3/-mr-10004/3c90a796-47fc-4541-bbec-b196c40aefab/map.xml:
Did you try the --jars property in spark-submit? If your jar is of huge size,
you can pre-load the jar on all executors in a commonly available directory
to avoid network IO.
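One hedged reading of that pre-loading suggestion: if the jars are already
staged at the same local path on every node (by tooling outside Spark), the
executor classpath can point at that directory via
spark.executor.extraClassPath. This must be set before the executors launch,
not on a running context, and the directory below is a hypothetical example:

import org.apache.spark.sql.SparkSession

// Point executors at a directory of pre-staged jars instead of shipping
// them per job; /opt/shared-jars must already exist on every node.
val spark = SparkSession.builder()
  .appName("preloaded-jars-sketch")
  .config("spark.executor.extraClassPath", "/opt/shared-jars/*")
  .getOrCreate()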
On Thu, Jan 7, 2016 at 4:03 PM, Ophir Etzion wrote:
I'm trying to add jars before running a query using Hive on Spark on CDH
5.4.3.
I've tried applying the patch in
https://issues.apache.org/jira/browse/HIVE-12045 (manually, as the patch is
for a different Hive version) but still haven't succeeded.
Did anyone manage to do ADD JAR successfully?
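As a point of comparison only (current Spark SQL, rather than the
Hive-on-Spark setup this thread is about): Spark SQL accepts ADD JAR as a
runtime SQL statement, making the jar's classes visible to later queries. The
path and table below are hypothetical:

import org.apache.spark.sql.SparkSession

// Spark SQL accepts ADD JAR at runtime as a SQL statement.
val spark = SparkSession.builder()
  .appName("addjar-sql-sketch")
  .enableHiveSupport()
  .getOrCreate()

spark.sql("ADD JAR hdfs:///tmp/custom-serde.jar")
spark.sql("SELECT * FROM events LIMIT 10").show()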