> I have to call an Oracle sequence using Spark.
You could use JDBC and write your own library in Scala.
I did something similar for Postgres
(https://framagit.org/parisni/spark-etl/tree/master/spark-postgres);
see sqlExecWithResultSet.
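A minimal sketch of the JDBC approach for Oracle — the URL, credentials, and sequence name below are placeholders, not from the thread:

```scala
import java.sql.DriverManager

// Fetch the next value of an Oracle sequence over plain JDBC.
// jdbcUrl, user, password and seqName are illustrative placeholders.
def nextSequenceValue(jdbcUrl: String, user: String,
                      password: String, seqName: String): Long = {
  val conn = DriverManager.getConnection(jdbcUrl, user, password)
  try {
    val stmt = conn.createStatement()
    // Oracle exposes sequences via SEQNAME.NEXTVAL; DUAL is the dummy table.
    val rs = stmt.executeQuery(s"SELECT $seqName.NEXTVAL FROM dual")
    rs.next()
    rs.getLong(1)
  } finally conn.close()
}
```

You could wrap calls like this in your own helper library, as the spark-postgres project above does for Postgres.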
On Thu, Aug 15, 2019 at 10:58:11PM +0530, rajat kumar wrote:
> Hi
This is what you're looking for:
Handle large corrupt shuffle blocks
https://issues.apache.org/jira/browse/SPARK-26089
So until 3.0 the only way I can think of is to reduce the shuffle block
size, i.e. split your job into many more, smaller partitions.
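One way to split the work into smaller shuffle blocks — a sketch, assuming Spark SQL and an illustrative partition count and join key:

```scala
import org.apache.spark.sql.functions.col

// Raise the number of shuffle partitions so each block stays small
// (the default is 200; 2000 is an illustrative value, tune for your data).
spark.conf.set("spark.sql.shuffle.partitions", "2000")

// Or repartition an oversized DataFrame explicitly before the
// shuffle-heavy stage; "join_key" is a placeholder column name.
val repartitioned = largeDf.repartition(2000, col("join_key"))
```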
On Thu, Aug 15, 2019 at 4:47 PM Mikhail Pryakhin
wrote:
> Hello, Spark community!
>
>
Hi guys,
Has anyone been using Spark (spark-submit) in YARN mode, pulling
images from a private Docker repository/registry?
How do you pass in the Docker config.json that includes the auth tokens?
Or is there an environment variable that can be added to the system environment
to make it
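For what it's worth, a sketch of how this is typically wired up with Hadoop's Docker container runtime — the image name, the HDFS path to the config.json, and the exact client-config environment variable are assumptions here, so check the docs for your Hadoop version:

```shell
# Illustrative spark-submit for Docker-on-YARN; paths and image are placeholders.
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --conf spark.executorEnv.YARN_CONTAINER_RUNTIME_TYPE=docker \
  --conf spark.executorEnv.YARN_CONTAINER_RUNTIME_DOCKER_IMAGE=registry.example.com/spark:latest \
  --conf spark.executorEnv.YARN_CONTAINER_RUNTIME_DOCKER_CLIENT_CONFIG=hdfs:///user/me/config.json \
  my-app.jar
```

The idea is to upload a config.json containing the registry auth tokens to HDFS and point the container runtime at it, rather than baking credentials into the image or the node environment.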
Hi, All.
Spark 2.3.3 was released six months ago (15th February 2019):
http://spark.apache.org/news/spark-2-3-3-released.html. And about 18
months have passed since Spark 2.3.0 was released (28th
February 2018).
As of today (16th August), there are 103 commits (69 JIRAs) in
Try increasing your driver memory to 12g.
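For example (the overhead value is illustrative, not from the thread):

```shell
# Pass the driver memory on submit; memoryOverhead can also help on YARN.
spark-submit \
  --driver-memory 12g \
  --conf spark.driver.memoryOverhead=1g \
  my-app.jar
```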
On Thursday, August 15, 2019, Dennis Suhari
wrote:
> Hi community,
>
> I am using Spark on YARN. When submitting a job, after a long time I get an
> error message and a retry.
>
> It happens when I want to store the dataframe to a table.
>
>
Thanks Tianlang. I saw the DAG on YARN, but what really solved my problem
was adding intermediate steps and evaluating them eagerly to find out where
the bottleneck was.
My process now runs in 6 min. :D
Thanks for the help.
[]s
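A sketch of the eager-evaluation trick described above — the DataFrame names and transformations are illustrative, not the poster's actual job:

```scala
import org.apache.spark.sql.functions.col

// Materialize each intermediate DataFrame so its cost shows up separately
// when you time the counts, instead of one opaque lazily-evaluated chain.
val step1 = rawDf.filter(col("status") === "active").cache()
println(s"step1 rows: ${step1.count()}")  // forces evaluation of step1

val step2 = step1.join(lookupDf, "id").cache()
println(s"step2 rows: ${step2.count()}")  // forces the join; time this step
```

Timing each count() call reveals which stage dominates the runtime; remember to unpersist the cached DataFrames once you have found the bottleneck.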
On Thu, 15 Aug 2019 at 07:25, Tianlang
wrote:
> Hi,
>
> Maybe you