Hi All,
I came across different parameters in spark-submit:
--jars, and the spark.executor.extraClassPath and spark.driver.extraClassPath configuration properties (set via --conf).
What are the differences between them? When should each be used? Does the behavior
differ between the following?
--master yarn --deploy-mode client
--master yarn --deploy-mode cluster
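As a rough sketch of the distinction (the jar names and paths below are hypothetical): --jars ships the listed jars to the cluster and adds them to both the driver and executor classpaths, whereas the extraClassPath properties only prepend entries to the classpath and do not ship any files, so the jar must already exist at that path on every node.

```shell
# --jars: ships dep.jar to the cluster and adds it to both the
# driver and executor classpaths (paths are hypothetical).
spark-submit \
  --master yarn --deploy-mode client \
  --jars /local/path/dep.jar \
  --class com.example.Main app.jar

# extraClassPath: only prepends an existing path to the classpath;
# the file must already be present at that path on each node.
spark-submit \
  --master yarn --deploy-mode cluster \
  --conf spark.driver.extraClassPath=/opt/libs/dep.jar \
  --conf spark.executor.extraClassPath=/opt/libs/dep.jar \
  --class com.example.Main app.jar
```

One deploy-mode wrinkle: in client mode the driver JVM has already started by the time application code runs, so spark.driver.extraClassPath must be supplied on the command line (or via --driver-class-path), not set programmatically.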
And a plug for the Graph Processing track -
a talk comparing the various Spark graph options (GraphX, GraphFrames, CAPS),
or the ongoing work in SPARK-25994 (Property Graphs, Cypher Queries, and
Algorithms), would be great!
From: Felix Cheung
Please note that limit drops the partitions to 1.
If it is only 100 records, you might be able to fit it in one executor, so a
limit followed by a write is okay.
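The advice above can be sketched as follows (a minimal sketch; the SparkSession setup and the input and output paths are hypothetical):

```scala
import org.apache.spark.sql.SparkSession

object LimitThenWrite {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("limit-then-write")
      .getOrCreate()

    // Hypothetical input path.
    val df = spark.read.parquet("/data/input")

    // limit(100) collapses the result down to a single partition,
    // so the 100 rows are written by one task -- fine for a small
    // sample, but not something to do on a large result.
    df.limit(100)
      .write
      .mode("overwrite")
      .parquet("/data/sample") // hypothetical output path

    spark.stop()
  }
}
```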
From: Brandon Geise
Sent: Sunday, April 14, 2019 9:54 AM
To: Chetan Khatri
Cc: Nuthan Reddy ; user
Subject: Re: How to
Use .limit on the DataFrame followed by .write.
On Apr 14, 2019, at 5:10 AM, Chetan Khatri wrote:
Nuthan,
Thank you for the reply. The solution proposed will give everything; for me it
is like one DataFrame show(100) in 3000 lines of Scala Spark code.
However, yarn logs --applicationId <applicationId> > 1.log also gives all
stdout and stderr.
Thanks
On Sun, Apr 14, 2019 at 10:30 AM Nuthan Reddy
wrote:
> Hi