Thanks a lot for the answers, folks.
It turned out that Spark was just IOPS starved. Using better disks solved
my issue, so it was nothing related to Kubernetes at all.
Have a nice weekend, everyone.
On Fri, Sep 30, 2022 at 4:27 PM Artemis User wrote:
The reduce phase is always more resource-intensive than the map phase.
Couple of suggestions you may want to consider:
1. Setting the number of partitions to 18K may be way too high (the
default number is only 200). You may want to just use the default and
let the scheduler handle the partitioning automatically (a minimal
config sketch follows below).
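
For illustration, here is a minimal Scala sketch of suggestion 1, assuming
Spark 3.x and the DataFrame API (the app name and the toy aggregation are
placeholders): leave spark.sql.shuffle.partitions at its default and let
adaptive query execution coalesce shuffle partitions at runtime instead of
hard-coding 18K.

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.col

object ShufflePartitionsSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("shuffle-partitions-sketch")            // placeholder name
      .config("spark.sql.shuffle.partitions", "200")   // the default; no need to force 18K
      .config("spark.sql.adaptive.enabled", "true")    // let AQE tune shuffle partitions
      .config("spark.sql.adaptive.coalescePartitions.enabled", "true")
      .getOrCreate()

    // Toy aggregation: the groupBy triggers a shuffle, and with AQE enabled
    // Spark coalesces the resulting shuffle partitions at runtime.
    val counts = spark.range(0, 1000000L)
      .toDF("id")
      .groupBy((col("id") % 100).as("bucket"))
      .count()

    counts.show(5)
    spark.stop()
  }
}

Whether AQE is enough depends on the Spark version and the workload, but it
usually does better than a fixed, very large partition count for a
shuffle-heavy reduce stage.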
Hi Sungwoo,
I tend to agree - for a new system, I would probably not go that route, as
Spark on Kubernetes is getting there and can do a lot already. The issue I
mentioned before can be fixed with proper node fencing - it is a typical
StatefulSet problem Kubernetes has without fencing: when a node goes down,
Kubernetes cannot tell whether the pod is really gone, so the StatefulSet
pod is not rescheduled until the node is fenced or deleted.