I'm not quite sure what this test is supposed to show?

Compressing random data is the perfect way to generate heat.
After all, compression only works when the input entropy is low,
and good random generators are characterized by the opposite: high output entropy. Run even a good compressor over a good random generator's output and it will burn cycles without reducing the data size at all.

Hence, is the request here for the compression module to 'adapt', i.e. do a first-pass check on the input data to see whether it's sufficiently low-entropy to warrant a compression attempt? Something along the lines of the sketch below.

If not, then what?
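
For illustration only, such a first-pass check could look roughly like this plain C sketch (nothing that exists in ZFS; the function name and the 7.5 bits/byte cutoff are made up): estimate the Shannon entropy of a block from its byte histogram and skip the compression attempt when it's close to 8 bits per byte.

=================================
#include <math.h>
#include <stddef.h>

/*
 * Hypothetical first-pass filter: returns 1 if the block looks
 * compressible enough to be worth handing to gzip/lzjb at all.
 */
int
worth_compressing(const unsigned char *buf, size_t len)
{
        size_t count[256] = { 0 };
        double entropy = 0.0;
        size_t i;

        if (len == 0)
                return (0);

        /* Byte-value histogram of the block. */
        for (i = 0; i < len; i++)
                count[buf[i]]++;

        /* Shannon entropy in bits per byte. */
        for (i = 0; i < 256; i++) {
                if (count[i] != 0) {
                        double p = (double)count[i] / (double)len;
                        entropy -= p * (log(p) / log(2.0));
                }
        }

        /* ~8 bits/byte means essentially random data; don't bother. */
        return (entropy < 7.5);
}
=================================

Of course that's extra work per block as well, so whether it's a net win is exactly the question.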

FrankH.

On Thu, 3 May 2007, Jürgen Keil wrote:

The reason you are busy computing SHA1 hashes is that you are using
/dev/urandom.  The implementation of drv/random uses SHA1 for mixing;
strictly speaking it is the swrand provider that does that part.

Ahh, ok.

So, instead of using dd to read from /dev/urandom all the time,
I've now used this quick C program to write a single /dev/urandom block
over and over to the gzip-compressed zpool:

=================================
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>

int
main(int argc, char **argv)
{
        int fd;
        char buf[128*1024];

        if (argc != 2) {
                fprintf(stderr, "usage: %s <file-on-compressed-pool>\n",
                    argv[0]);
                exit(1);
        }

        /* Fill the buffer once with 128 KB of random data. */
        fd = open("/dev/urandom", O_RDONLY);
        if (fd < 0) {
                perror("open /dev/urandom");
                exit(1);
        }
        if (read(fd, buf, sizeof(buf)) != sizeof(buf)) {
                perror("fill buf from /dev/urandom");
                exit(1);
        }
        close(fd);

        /* Write the same random block over and over until the write fails. */
        fd = open(argv[1], O_WRONLY|O_CREAT, 0666);
        if (fd < 0) {
                perror(argv[1]);
                exit(1);
        }
        for (;;) {
                if (write(fd, buf, sizeof(buf)) != sizeof(buf)) {
                        break;
                }
        }
        close(fd);
        exit(0);
}
=================================
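
(Compiled with e.g. "cc -o writerand writerand.c" and pointed at a file on the gzip-compressed pool, say "./writerand /gzpool/junk" - program and path names here are just placeholders - it keeps writing the same incompressible 128K block until the write fails.)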


Avoiding the reads from /dev/urandom makes the effect even
more noticeable; the machine now "freezes" for 10+ seconds.
mpstat output while the program is running:

CPU minf mjf xcal  intr ithr  csw icsw migr smtx  srw syscl  usr sys  wt idl
 0    0   0 3109  3616  316  196    5   17   48   45   245    0  85   0  15
 1    0   0 3127  3797  592  217    4   17   63   46   176    0  84   0  15
CPU minf mjf xcal  intr ithr  csw icsw migr smtx  srw syscl  usr sys  wt idl
 0    0   0 3051  3529  277  201    2   14   25   48   216    0  83   0  17
 1    0   0 3065  3739  606  195    2   14   37   47   153    0  82   0  17
CPU minf mjf xcal  intr ithr  csw icsw migr smtx  srw syscl  usr sys  wt idl
 0    0   0 3011  3538  316  242    3   26   16   52   202    0  81   0  19
 1    0   0 3019  3698  578  269    4   25   23   56   309    0  83   0  17

# lockstat -kIW -D 20 sleep 30

Profiling interrupt: 6080 events in 31.341 seconds (194 events/sec)

Count indv cuml rcnt     nsec Hottest CPU+PIL        Caller
-------------------------------------------------------------------------------
2068  34%  34% 0.00     1767 cpu[0]                 deflate_slow
1506  25%  59% 0.00     1721 cpu[1]                 longest_match
1017  17%  76% 0.00     1833 cpu[1]                 mach_cpu_idle
 454   7%  83% 0.00     1539 cpu[0]                 fill_window
 215   4%  87% 0.00     1788 cpu[1]                 pqdownheap
 152   2%  89% 0.00     1691 cpu[0]                 copy_block
  89   1%  90% 0.00     1839 cpu[1]                 z_adler32
  77   1%  92% 0.00    36067 cpu[1]                 do_splx
  64   1%  93% 0.00     2090 cpu[0]                 bzero
  62   1%  94% 0.00     2082 cpu[0]                 do_copy_fault_nta
  48   1%  95% 0.00     1976 cpu[0]                 bcopy
  41   1%  95% 0.00    62913 cpu[0]                 mutex_enter
  27   0%  96% 0.00     1862 cpu[1]                 build_tree
  19   0%  96% 0.00     1771 cpu[1]                 gen_bitlen
  17   0%  96% 0.00     1744 cpu[0]                 bi_reverse
  15   0%  97% 0.00     1783 cpu[0]                 page_create_va
  15   0%  97% 0.00     1406 cpu[1]                 fletcher_2_native
  14   0%  97% 0.00     1778 cpu[1]                 gen_codes
  11   0%  97% 0.00      912 cpu[1]+6               ddi_mem_put8
   5   0%  97% 0.00     3854 cpu[1]                 fsflush_do_pages
-------------------------------------------------------------------------------


It seems the same problem can be observed with "lzjb" compression,
but the pauses with lzjb are much shorter and the kernel consumes
less system CPU time (which is expected, I think, since lzjb is much
cheaper than gzip).
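
(For anyone repeating the comparison: switching between the two algorithms is just a dataset property change, e.g. "zfs set compression=lzjb <dataset>" vs. "zfs set compression=gzip <dataset>" - the dataset name is whatever you are testing on.)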


This message posted from opensolaris.org
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
