vinothchandar commented on pull request #2673:
URL: https://github.com/apache/hudi/pull/2673#issuecomment-801263227
Makes sense to explicitly `unpersist()`. Do you think the issue is a leak of
some sort, where we are hanging onto the writeStatus RDDs? Or is it simply
Spark's automatic cleaning not keeping up?
https://spark.apache.org/docs/latest/rdd-programming-guide.html#removing-data
I ask because it seems like you don't use `blocking=true`, so it's all async
unpersisting anyway.
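To illustrate the distinction being discussed: a minimal sketch of the async-vs-blocking `unpersist()` call on an RDD. This is not the PR's actual code; `writeStatusRdd` here is a hypothetical stand-in for the writeStatus RDDs mentioned above, and the local SparkSession setup is assumed for demonstration only. Note that the default value of `blocking` has varied across Spark versions (it was flipped to `false` for RDDs in Spark 3.0), so relying on the default is version-sensitive.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.storage.StorageLevel

object UnpersistSketch {
  def main(args: Array[String]): Unit = {
    // Local session purely for illustration; in Hudi this would be the
    // engine-provided context.
    val spark = SparkSession.builder()
      .appName("unpersist-sketch")
      .master("local[*]")
      .getOrCreate()
    val sc = spark.sparkContext

    // Hypothetical stand-in for a writeStatus RDD.
    val writeStatusRdd = sc.parallelize(1 to 100).persist(StorageLevel.MEMORY_ONLY)
    writeStatusRdd.count() // force materialization so blocks are actually cached

    // Async unpersist: the call returns immediately and cached blocks are
    // removed in the background by the block manager.
    writeStatusRdd.unpersist(blocking = false)

    // Blocking unpersist: the call does not return until all cached blocks
    // have been removed from every executor.
    // writeStatusRdd.unpersist(blocking = true)

    spark.stop()
  }
}
```

With `blocking = false`, memory may not be freed by the time the next stage needs it, which is one way "explicit unpersist" can still fail to relieve pressure.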
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org