Hi Arkadiusz,

You wrote:

> rm -r while deleting a directory that's longer than PATH_MAX walks it in 
> a way to avoid hitting max limit
> 
> $ (for i in `seq 1 2000`; do mkdir 
> 1234567890123456789012345678901234567890; cd 
> 1234567890123456789012345678901234567890; done)
> $ rm -r 1234567890123456789012345678901234567890
>
> 
> but cp doesn't do that:
> 
> $ (for i in `seq 1 2000`; do mkdir 
> 1234567890123456789012345678901234567890; cd 
> 1234567890123456789012345678901234567890; done)
> $ cp -a 1234567890123456789012345678901234567890 2
> cp: cannot stat '[long filename snipped]': File name too long

An easier way to test this is (assuming your system supports it):

    $ mkdir -p `python3 -c 'print("./" + "a/" * 32768)'`
    $ cp -r a b
    cp: cannot stat '[long file name snipped]': File name too long
    $ rm -rf a b

> I wonder (+ report as an enhancement request) why cp isn't made to do the
> same smart thing and avoid hitting ENAMETOOLONG?

This is a known limitation, and it has been mentioned in the TODO file for
a very long time:

    cp --recursive: use fts and *at functions to perform directory
    traversals in source and destination hierarchy rather than forming
    full file names. The latter (current) approach fails unnecessarily
    when the names become very long, and requires space and time that is
    quadratic in the depth of the hierarchy.
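
To illustrate what the TODO entry is getting at, here is a minimal sketch
(not coreutils code, and not necessarily how the eventual fix will look;
the file and function names are just for illustration) of a traversal
built on openat()/fstatat() relative to directory file descriptors.  Since
no full path name is ever concatenated, depth alone can never produce
ENAMETOOLONG:

    /* countat.c -- count regular files under a directory without ever
       building a full path name.  Sketch only; keeps one descriptor open
       per level of depth, so very deep trees can still exhaust the
       per-process descriptor limit.  */
    #include <dirent.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/stat.h>
    #include <unistd.h>

    static long
    count_files_at (int dirfd)
    {
      long n = 0;
      DIR *dir = fdopendir (dirfd);
      if (!dir)
        {
          close (dirfd);
          return 0;
        }
      struct dirent *ent;
      while ((ent = readdir (dir)))
        {
          if (!strcmp (ent->d_name, ".") || !strcmp (ent->d_name, ".."))
            continue;
          struct stat st;
          /* Stat relative to the open directory, not via a full path.  */
          if (fstatat (dirfd, ent->d_name, &st, AT_SYMLINK_NOFOLLOW) != 0)
            continue;
          if (S_ISDIR (st.st_mode))
            {
              int sub = openat (dirfd, ent->d_name,
                                O_RDONLY | O_DIRECTORY | O_NOFOLLOW);
              if (sub >= 0)
                n += count_files_at (sub);  /* sub is closed in the call */
            }
          else if (S_ISREG (st.st_mode))
            n++;
        }
      closedir (dir);  /* also closes dirfd */
      return n;
    }

    int
    main (int argc, char **argv)
    {
      int fd = open (argc > 1 ? argv[1] : ".", O_RDONLY | O_DIRECTORY);
      if (fd < 0)
        {
          perror ("open");
          return 1;
        }
      printf ("%ld\n", count_files_at (fd));
      return 0;
    }

Note the descriptor-per-level caveat above: fts manages its descriptors so
that very deep trees still work, which is part of why the TODO entry points
at fts rather than a hand-rolled walk like this.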

I suppose very few people have run into the limit in real usage.

I agree it should be fixed though. I'll have a look at it.

Thanks,
Collin
