Mordy Ovits <[EMAIL PROTECTED]> wrote:
> On Wednesday 21 April 2004 03:35 pm, Jim Meyering wrote:
>> If you want to continue using cut, you'll have better
>> luck with the latest:
>>
>> ftp://ftp.gnu.org/gnu/coreutils/coreutils-5.2.1.tar.gz
>> ftp://ftp.gnu.org/gnu/coreutils/coreutils-5.2.1.tar.bz2
>
> Excellent. Will do.
>
>> Or, just use head and tail with their --bytes=N options.
>>
>> e.g., head --bytes=N < FILE | tail --bytes=412569600
>>
>> where N is chosen so that the head command outputs everything
>> in the file up to and including the desired range of bytes.
>
> Sure, but that reads the whole file and pushes it through the pipe. That's
> far more IO than is strictly necessary. This is especially true with a 9GB
> file.
dd is probably the best choice, then.

Using two separate processes is best, unless you can find a reasonably
large input block size that evenly divides both the initial offset and
the number of bytes you want to output.

In the two-process form below, the first dd uses lseek to skip past the
initial 1*N_SKIP bytes (count=0, so it copies nothing), and the second
dd then copies the desired range:

  ( dd ibs=1 skip=N_SKIP count=0 && dd ibs=4096 count=100725 ) < BIG > out

since 4096 * 100725 == 412569600.
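In the other case, where the starting offset is itself a multiple of the
block size, a single dd suffices, since skip= is counted in ibs-sized
blocks and dd will still lseek past them on a seekable input. A sketch,
using a made-up start offset of 409600000 (== 100000 * 4096) purely for
illustration:

  # skip 100000 blocks of 4096 bytes (409600000 bytes), then copy
  # 100725 blocks of 4096 bytes == 412569600 bytes
  dd ibs=4096 skip=100000 count=100725 < BIG > out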
