Hi there,
The new tar buffers the filenames read from stdin
(cat <some_filenames> | tar c -T- | ...).
There are two problems with that.
1. tar does not check the available RAM or address space. On a 32-bit
system tar crashes when the file list is too large.
2. On a 64-bit system it uses up all available RAM (a quick way to
reproduce this is sketched below).
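A rough way to see the memory growth, assuming the new tar really reads
the whole list before processing it as described above (the names here
are made up and do not have to exist, tar will just complain about them
afterwards):
seq 1 100000000 | sed 's|^|/nonexistent/|' | tar c -T- > /dev/null 2> /dev/null
Watching the tar process in top while it is still reading the list shows
its resident memory growing with every name.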
We used tar to copy a lot of files (billions) over the network like this:
zcat large_filelist.gz | tar c -T- | netcat ip port
on the other side:
netcat -lp port | tar x
It was the fastest way to copy a large number of files, and this is no
longer possible.
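The same pattern matters even more when the file list is generated on the
fly instead of coming from a pre-built list, because then there is no
upper bound on its size at all. A sketch with placeholder host, port and
path:
find /data -type f | tar c -T- | netcat receiver.example.com 9999
on the other side:
netcat -lp 9999 | tar x
With the old streaming tar the first archive bytes go out as soon as the
first name arrives; with the new buffering, tar has to hold every single
name in memory before it writes anything.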
For now we have downgraded to the old tar, but will there be a switch to
turn this new behaviour off?
The GNU tools have always been incredibly reliable (thanks for that), and
now the streaming capability and reliability of tar are being thrown away
for an IMHO unnecessary feature?
Best regards,
Christian Wetzel