I re-ran a variant of the program I posted on 4 Jun 1998
under 2.2.5 and 2.3.38. The program consists of two
processes writing to each other across a pipe and performing
computation. I append a shell script to compile and run the
program.

Here are the results for 2.2.5 and 2.3.38. Each row is
labelled with the size of each write (which also sets the
delay between writes, since the computation scales with the
write size). The columns correspond to increasing amounts of
computation per byte between writes (the "think" values 1,
2, 4, 16, 32, 64, 128, 256, 1024, and 2048 in the script
below). The values are the %CPU usage on a dual 450 MHz
PII; 200% means both processors are fully busy.

2.2.5:
   1k: 198% 198% 198% 176% 177% 185% 191% 195% 200% 197%
   2k: 199% 200% 199% 162% 157% 191% 196% 193% 198% 198%
   4k: 160% 198% 199% 144% 171% 192% 186% 198% 194% 199%
   8k: 112% 140% 183% 134% 172% 143% 191% 169% 193% 196%
  16k: 100% 100% 124% 147% 156% 190% 189% 176% 113% 124%
  32k: 100% 100% 100% 101% 102% 100% 173% 177% 100% 127%
  64k: 101% 100% 101% 100% 101% 101% 101% 112%  99% 151%
 128k: 100% 100% 101% 102% 104% 103% 107% 101% 127% 174%
 256k: 100% 100% 100% 101% 101% 123% 113% 126% 186% 188%
 512k:  99%  99% 100% 101%  99% 105% 102% 154% 184% 174%

2.3.38:
   1k: 190% 190% 196% 173% 172% 187% 190% 197% 195% 196%
   2k: 181% 189% 194% 162% 173% 183% 195% 191% 199% 196%
   4k: 190% 191% 195% 146% 184% 174% 197% 186% 196% 196%
   8k: 162% 169% 187% 163% 142% 194% 189% 193% 196% 196%
  16k: 173% 173% 185% 180% 180% 185% 193% 196% 196% 196%
  32k: 156% 170% 179% 163% 182% 189% 194% 193% 196% 197%
  64k: 177% 175% 183% 170% 189% 189% 193% 197% 197% 199%
 128k: 146% 168% 182% 184% 189% 194% 193% 197% 198% 199%
 256k: 156% 175% 178% 182% 190% 197% 189% 193% 199% 199%
 512k: 149% 170% 179% 186% 192% 191% 195% 197% 199% 199%

You can see that for certain combinations of the parameters,
2.2.5 fails to run the two processes in parallel: CPU usage
drops to around 100%, meaning only one processor is busy at
a time. 2.3.38 is much better, although not perfect by any
means.

Regards,

Alan

#!/bin/sh

cat <<EOF >pipe.c
#include <unistd.h>
#include <stdlib.h>
#include <string.h>

size_t total_write_size;
size_t write_size;
unsigned char *buffer;
long think_size;

void
write_or_read(char *which)
{
  size_t done = 0;
  ssize_t n;
  while (done < write_size) {
    if (strcmp(which, "write") == 0)
      n = write(1, buffer + done, write_size - done);
    else
      n = read(0, buffer + done, write_size - done);
    if (n <= 0)
      exit(1); /* error or unexpected EOF */
    done += n;
  }
}

void
think(void)
{
  long i, j;
  volatile unsigned int v = 1;
  for (j = 0; j < write_size; ++j)
    for (i = 0; i < think_size; ++i)
      v *= 3;
}

int
main(int argc, char **argv)
{
  size_t i;

  if (argc != 4)
    return 1;
  think_size = atol(argv[2]);
  write_size = atol(argv[3]) * 1024;

  total_write_size = (64L * 1024L * 1024L) / think_size;
  if (total_write_size < write_size)
    total_write_size = write_size;

  buffer = malloc(write_size);
  if (buffer == NULL)
    return 1;

  for (i = 0; i < total_write_size / write_size; ++i) {
    write_or_read(argv[1]);
    think();
  }

  return 0;
}
EOF

gcc -O4 -o pipe pipe.c

for write in 1 2 4 8 16 32 64 128 256 512
do
    printf "%4dk:" $write
    for think in 1 2 4 16 32 64 128 256 1024 2048
    do
        set -- `
            time sh -c "./pipe write $think $write | ./pipe read $think $write" 2>&1
        `
        elapsed=`
            echo "$3" |
            sed 's/elapsed//' |
            awk -F: '{ printf "%.2f", $1 * 60 + $2; }'
        `
        percent=`
            echo "$4" |
            sed 's/CPU//'
        `
        #printf " %5.2f/%4s" $elapsed $percent
        printf " %4s" $percent
    done
    printf "\n"
done


-- 
Dr Alan Watson
Instituto de Astronomía UNAM
-
Linux SMP list: FIRST see FAQ at http://www.irisa.fr/prive/dmentre/smp-howto/
To Unsubscribe: send "unsubscribe linux-smp" to [EMAIL PROTECTED]
