Github user tejasapatil commented on the pull request:

    https://github.com/apache/spark/pull/13042#issuecomment-218540631
  
    @podwhitehawk I agree with @srowen about a single argument being passed.
    
    Generally speaking, there is no maximum path length on Unix, but there may be
    restrictions on the length of each individual filename / dirname (possibly
    filesystem-dependent) [0]. Now it's possible that the input `dir` is itself an
    extremely long string, which could cause `rm -rf` to fail because the resulting
    command is too long. I would argue that even `find .. -delete` would be a victim
    of the same limit. Worst case, we are not able to clean up the dir using the
    Unix command, but we still have the backup of using Java IO (which I guess will
    also hit some limit, since it ultimately boils down to a system call).
    
    [0]: https://www.quora.com/Why-does-UNIX-system-have-maximum-path-of-108-bytes
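    
    Just to illustrate the fallback pattern I have in mind, here is a minimal
    sketch (not the actual Spark utility; `deleteRecursivelyWithFallback` and
    `deleteViaJavaIO` are hypothetical names): try the Unix command first, and
    fall back to plain Java IO if it fails.
    
    ```scala
    import java.io.File
    import scala.sys.process._
    
    object CleanupSketch {
      // Hypothetical helper: try the Unix command first, fall back to Java IO
      // if the command fails (e.g. because the argument is too long).
      def deleteRecursivelyWithFallback(dir: File): Unit = {
        val exitCode =
          try Seq("rm", "-rf", dir.getAbsolutePath).!  // run the external command
          catch { case _: Exception => -1 }
        if (exitCode != 0) deleteViaJavaIO(dir)
      }
    
      // Backup path: plain Java IO recursive delete.
      private def deleteViaJavaIO(f: File): Unit = {
        if (f.isDirectory) Option(f.listFiles()).toSeq.flatten.foreach(deleteViaJavaIO)
        f.delete()
      }
    }
    ```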
