On Mon, 28 Oct 2002 00:54:57 -0800 (PST),
  Matthew Dillon <[EMAIL PROTECTED]> said:

dillon>     I can demonstrate the issue with a simple test.  Create a large file
dillon>     with dd, larger than physical memory:

dillon>     dd if=/dev/zero of=test bs=1m count=4096    # create a 4G file.

dillon>     Then dd (read) portions of the file and observe the performance.
dillon>     Do this several times to get stable numbers.

dillon>     dd if=test of=/dev/null bs=1m count=16      # repeat several times
dillon>     dd if=test of=/dev/null bs=1m count=32      # etc...

dillon>     You will find that read performance will drop in two significant
dillon>     places:  (1) When the data no longer fits in the buffer cache and
dillon>     the buffer cache is forced to teardown wirings and rewire other
dillon>     pages from the VM page cache.  Still no physical I/O is being done.
dillon>     (2) When the data no longer fits in the VM page cache and the system
dillon>     is forced to perform physical I/O.
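
(For reference, the timing boils down to reading the first N megabytes
of the file sequentially and measuring the elapsed time.  The small
program below is my own throwaway sketch of an equivalent measurement,
not part of the patch; the dd runs above report the same numbers.)

/*
 * read_bench.c -- roughly equivalent to timing the dd reads above:
 * read the first "count" megabytes of the test file in 1MB chunks and
 * report the elapsed time and throughput.
 * Usage: ./read_bench test 32
 */
#include <sys/time.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define CHUNK (1024 * 1024)

int
main(int argc, char **argv)
{
	static char buf[CHUNK];
	struct timeval t0, t1;
	double elapsed;
	long i, count;
	int fd;

	if (argc != 3) {
		fprintf(stderr, "usage: %s file megabytes\n", argv[0]);
		return (1);
	}
	count = strtol(argv[2], NULL, 10);
	if ((fd = open(argv[1], O_RDONLY)) == -1) {
		perror("open");
		return (1);
	}
	gettimeofday(&t0, NULL);
	for (i = 0; i < count; i++) {
		if (read(fd, buf, CHUNK) != CHUNK) {
			perror("read");
			return (1);
		}
	}
	gettimeofday(&t1, NULL);
	elapsed = (t1.tv_sec - t0.tv_sec) +
	    (t1.tv_usec - t0.tv_usec) / 1e6;
	printf("%ldMB in %.3f sec (%.1f MB/sec)\n",
	    count, elapsed, count / elapsed);
	close(fd);
	return (0);
}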

I tried that on the same PC as my last benchmark.  The PC has 160MB
RAM, so I created a file of 256MB.

One pre-read (in order to stabilize the buffer cache) and four read
tests were run consecutively for each of six distinct read sizes just
after boot.  The average read times (in seconds) and speeds (in
MB/sec) are shown below:


                without my patch        with my patch
read size       time    speed           time    speed
32MB            .497    65.5            .471    69.0
64MB            1.02    63.6            .901    72.1
96MB            2.24    50.5            5.52    18.9
128MB           20.7    6.19            16.5    7.79
192MB           32.9    5.83            32.9    5.83
256MB           42.5    6.02            43.0    5.95


dillon>     It's case (1) that you are manipulating with your patch, and as you can
dillon>     see it is entirely dependent on the number of wired pages that the
dillon>     system is able to maintain in the buffer cache.

That likely explains the 128MB-read results.

The 96MB read gave interesting results.  Since vfs_unwirepages()
passes buffer pages to vm_page_dontneed(), the page scanner seems to
reclaim buffer cache pages too aggressively.
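
For readers without the patch in front of them, the code in question
is essentially an unwiring loop over the buffer's backing pages.  The
sketch below is a simplified rendering, not the actual patch text:
locking and the bogus_page case are omitted, the function name and the
vm_page_unwire() call are just shorthand for the surrounding code, and
only the vm_page_dontneed() call is the part discussed above.

/*
 * Simplified sketch of the unwiring path, not the actual patch text.
 * Locking and the bogus_page case are omitted.
 */
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/buf.h>
#include <vm/vm.h>
#include <vm/vm_page.h>

static void
vfs_unwirepages_sketch(struct buf *bp)
{
	vm_page_t m;
	int i;

	for (i = 0; i < bp->b_npages; i++) {
		m = bp->b_pages[i];
		vm_page_unwire(m, 0);	/* drop the buffer's wiring */
		/*
		 * The call in question: it tells the VM the page is not
		 * likely to be needed soon, pushing it toward the
		 * inactive/cache queues, where the page scanner may
		 * reclaim it much earlier than if it stayed active.
		 */
		vm_page_dontneed(m);
	}
}

The variant measured below simply leaves that last call out.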

The table below shows the results with my patch modified so that
vfs_unwirepages() does not call vm_page_dontneed().


read size       time    speed
32MB            .503    63.7
64MB            .916    70.5
96MB            4.57    27.1
128MB           17.0    7.62
192MB           35.8    5.36
256MB           46.0    5.56


The 96MB-read results were a little better, although the reads of
larger sizes became slower.  The unwired buffer pages may be putting
pressure on user process pages and the page scanner.

-- 
Seigo Tanimura <[EMAIL PROTECTED]>
