Jim Meyering wrote:

> Pádraig Brady wrote:
> ...
>> How about the attached?
>
>> From: Cojocaru Alexandru <xo...@gmx.com>
>> Date: Thu, 6 Dec 2012 03:03:41 +0100
>> Subject: [PATCH] cut: avoid a redundant heap allocation
>>
>> * src/cut.c (set_fields): Don't allocate memory for
>> `printable_field' if there are no finite ranges.
>> The extra allocation was introduced via commit v8.10-3-g2e636af.
> ...
>
> Thanks to both of you.
> That's a fine bug fix, actually.
> Consider that before, this would fail on my 64-bit system:
>
>     $ : | cut -b999999999999999999-
>     cut: memory exhausted
>     [Exit 1]
>     $
>
> Now, it no longer tries to allocate all that memory, so completes normally:
>
>     $ : | cut -b999999999999999999-
>     $
Note that on a 32-bit system it fails the same way in both cases:

    $ : | src/cut -b999999999999999999-
    src/cut: byte offset '999999999999999999' is too large
    [Exit 1]

And even with 2^32-1, it'll probably just allocate the memory, assuming
500MB doesn't cause trouble.  To differentiate portably, we probably have
to use ulimit.  Limit VM to 22000k, but make the old cut require about
10x that:

    $ (ulimit -v 22000; :| cut -b$(echo 2^31|bc)-)
    cut: memory exhausted
    [Exit 1]

The new, just-built one passes:

    $ (ulimit -v 22000; :| ./cut -b$(echo 2^31|bc)-)
    $

If someone feels like turning the above into a test case, please do (and
update NEWS).  Otherwise, I'll get to it over the weekend.
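[A standalone sketch of what that test might look like; the file name, the `$CUT` variable, and the skip-if-no-ulimit behavior are assumptions, and the real coreutils test would use the helpers under tests/ rather than plain sh.]

```shell
#!/bin/sh
# Hypothetical standalone version of the test described above.
cut=${CUT:-cut}   # assumed: points at the just-built src/cut

# A huge open-ended range must complete without a huge allocation.
: | "$cut" -b999999999999999999- || { echo 'FAIL: huge offset'; exit 1; }

# Where ulimit -v works, cap VM at ~22 MB; the pre-patch cut needed
# roughly 10x that for a bit array covering bytes 1..2^31.
if (ulimit -v 22000) 2>/dev/null; then
  (ulimit -v 22000; : | "$cut" -b2147483648-) \
    || { echo 'FAIL: under ulimit'; exit 1; }
fi

echo PASS
```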