Hi all,

I like 'less' very much for viewing data, so its lack of speed
when viewing larger amounts of data coming from a pipe was irritating
me.
Unfortunately, the time less spends processing data from stdin is
proportional to the square of the data size. The proper fix would
be to improve the algorithm used in less, but that is more than the
one-line patch I have made. Here it is:
----------------------------------------------------------------
diff -ru less-346/ch.c less-346.bld02/ch.c
--- less-346/ch.c       Fri Nov  5 02:47:33 1999
+++ less-346.bld02/ch.c Wed Dec 12 15:26:00 2001
@@ -29,7 +29,7 @@
  * in order from most- to least-recently used.
  * The circular list is anchored by the file state "thisfile".
  */
-#define LBUFSIZE       1024
+#define LBUFSIZE       102400
 struct buf {
        struct buf *next, *prev;  /* Must be first to match struct filestate */
        long block;

----------------------------------------------------------------
Try the following (on a machine with 128 MB or more of RAM):

time perl -e '$n=800000; $o=0;
for ($i=1;$i<= $n; ++$i) {
  $l=sprintf "%s line %8d, offset %10d\n", "="x16, $i, $o;
  print $l;
  $o += length $l; }
' | lless | wc

where 'lless' is 'less' with the above patch applied,
and then run the same command with the standard less.
You will see a 10-fold or better speed improvement.

The price of some extra memory consumption for small amounts of data
is one I am most happy to pay for this extra speed.
Of course, one might want to use a different value for
LBUFSIZE, or, with some more work, make it configurable (passed through
a parameter or from the environment).
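One sketch of the environment-variable approach. Everything here is hypothetical and not part of less: the variable name LESSBUFSIZE, the function, and the idea that the buffer size could be read once at startup instead of being a compile-time #define:

```c
#include <stdlib.h>

#define LBUFSIZE_DEFAULT 102400   /* the patched value above */

/* Hypothetical: let a LESSBUFSIZE environment variable override the
 * compiled-in buffer size; fall back to the default when it is unset
 * or not a positive number. */
static long lbufsize(void)
{
    const char *s = getenv("LESSBUFSIZE");  /* hypothetical variable name */
    if (s != NULL) {
        long v = atol(s);   /* non-numeric input yields 0 and is ignored */
        if (v > 0)
            return v;
    }
    return LBUFSIZE_DEFAULT;
}
```

less itself would then allocate its buffers with lbufsize() instead of the LBUFSIZE macro; a command-line option would work the same way, just parsed from argv instead of the environment.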

Best regards,

Wojtek
--------------------
Wojtek Pilorz
[EMAIL PROTECTED]




_______________________________________________
Redhat-devel-list mailing list
[EMAIL PROTECTED]
https://listman.redhat.com/mailman/listinfo/redhat-devel-list