Hi Spark team,
We have the cluster-wide property spark.kubernetes.executor.deleteOnTermination
set to true.
During a long-running job, some executors that held shuffle data got deleted.
Because of this, in the subsequent stage we get a lot of
Spark shuffle fetch failed exceptions.
Please let me know how this can be avoided.
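For reference, a hedged sketch of how that property is typically passed at submit time; the master URL, image, and application jar below are placeholders, not values from this job. Setting it to false keeps terminated executor pods around, which can help in inspecting why executors holding shuffle data died:

```shell
# Placeholder values throughout; the deleteOnTermination --conf line is the point.
spark-submit \
  --master k8s://https://example-apiserver:6443 \
  --deploy-mode cluster \
  --conf spark.kubernetes.container.image=example/spark:latest \
  --conf spark.kubernetes.executor.deleteOnTermination=false \
  local:///opt/spark/examples/jars/spark-examples.jar
```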
Hello
May I know from what version of Spark the RDD syntax can be shortened like
this?
>>> rdd.groupByKey().mapValues(lambda x: len(x)).collect()
[('b', 2), ('d', 1), ('a', 2)]
>>> rdd.groupByKey().mapValues(len).collect()
[('b', 2), ('d', 1), ('a', 2)]
I know in Scala the syntax is: xxx(x => x.len)
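As far as I know this is not tied to a Spark version: mapValues accepts any callable, and in Python built-in functions like len are first-class objects, so passing len directly has always been equivalent to wrapping it in a lambda. A minimal plain-Python sketch (the grouped dict below is a made-up stand-in for an RDD after groupByKey, not Spark output):

```python
# len is a first-class function, so mapValues(len) and
# mapValues(lambda x: len(x)) pass equivalent callables.
grouped = {'b': [1, 2], 'd': [3], 'a': [4, 5]}  # stand-in for groupByKey output

via_lambda = {k: (lambda x: len(x))(v) for k, v in grouped.items()}
via_len = {k: len(v) for k, v in grouped.items()}

assert via_lambda == via_len == {'b': 2, 'd': 1, 'a': 2}
```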