This may be due to constraints in your YARN setup; you can look at your YARN configuration parameters.
On 7/25/2019 20:23, Amit Sharma wrote:
I have a cluster with 26 nodes, each with 16 cores. I am running a Spark job
with 20 cores, but I do not understand why my application gets only 1-2 cores on
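For reference, a few standard Spark and YARN configuration properties commonly cap how many cores an application is granted; the property names below are real settings from the Spark and Hadoop documentation, but the values are only illustrative:

```
# spark-defaults.conf (or equivalent spark-submit flags) -- example values
spark.executor.cores        4
spark.executor.instances    5

# yarn-site.xml -- per-node and per-container vcore limits, example values
yarn.nodemanager.resource.cpu-vcores          16
yarn.scheduler.maximum-allocation-vcores      16
```

Note also that with the capacity scheduler's default, memory-only resource calculator, the YARN UI reports each container as using 1 vcore regardless of spark.executor.cores, which can make it look as if the application only received 1-2 cores.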
I also have this problem and hope it can be solved here. Thank you.
On 12/14/2018 10:38, lk_spark wrote:
hi, all:
I want to generate some test data containing about one hundred
million rows.
I created a dataset with ten rows, and I call df.union in a 'for'
loop, but
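One thing worth noting about growing a dataset by union: appending the original ten rows on every iteration needs millions of unions (and a very deep query plan), while unioning the frame with itself doubles the row count each time. A minimal sketch in plain Python arithmetic (standing in for `df = df.union(df)`; the doubling strategy is my suggestion, not code from the original mail):

```python
# Repeatedly unioning a DataFrame with itself doubles the row count,
# so reaching ~1e8 rows from 10 rows takes only ~24 doublings, versus
# ~1e7 unions when appending the original 10 rows per iteration.
rows = 10
doublings = 0
while rows < 100_000_000:
    rows *= 2          # models df = df.union(df)
    doublings += 1
print(doublings, rows)  # 24 doublings -> 167,772,160 rows
```

For pure synthetic data, `spark.range(100_000_000)` with derived columns is usually simpler still, since it generates the rows directly instead of building a union plan.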
I think you can add executor memory.
On 12/11/2018 08:28, lsn24 wrote:
Hello,
I have a requirement where I need to get the total count of rows and the total
count of failedRows based on a grouping.
The code looks like below
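The actual code from the mail is not shown here, but the aggregation being described has a simple shape: per group, count all rows and count the rows flagged as failed. A hypothetical sketch in plain Python (the `group` and `failed` field names are my assumptions; in Spark this would correspond to something like `groupBy(...).agg(count("*"), sum(when(col("failed"), 1).otherwise(0)))`):

```python
from collections import defaultdict

# Toy stand-in for the real data; field names are assumptions.
rows = [
    {"group": "a", "failed": True},
    {"group": "a", "failed": False},
    {"group": "b", "failed": True},
]

# Per group: total row count and count of failed rows.
totals = defaultdict(lambda: {"rows": 0, "failedRows": 0})
for r in rows:
    g = totals[r["group"]]
    g["rows"] += 1
    g["failedRows"] += int(r["failed"])

print(dict(totals))
# {'a': {'rows': 2, 'failedRows': 1}, 'b': {'rows': 1, 'failedRows': 1}}
```

Doing both counts in a single grouped aggregation avoids scanning the data twice, which matters at Spark scale.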