At 18:35 -0200 on 23.01.2003, Dyego Souza do Carmo wrote:
>  /dev/hda1:20Gnewraw
>  /dev/raw1:20Gnewraw
>
>  Is the speed faster in the latter case, or not?

In the tests I did half a year ago with mysql 3.X, it depended on the usage pattern.

The advantage of using hda1 (the buffered block device) is that linux
(you're talking about linux, right?) handles the buffering
dynamically. So if you frequently access more data than you have
configured in innodb_buffer_pool, in the hda1 case linux will deliver
it from its own cache (provided linux has enough RAM to buffer it)
and thus will be faster than the raw1 case, where the data has to be
fetched from disk again.

If, on the other hand, you configure innodb_buffer_pool as big as,
say, 3/4 of all available RAM, linux will not be able to deliver from
RAM either, since its own buffers are then too small, and the raw1
case starts to be faster because of its lower overhead.

So on a dedicated mysql machine where practically nothing besides
innodb is running, the raw1 case will (probably) gain you some
advantage. The difference is not very big, though. In other words, if
you configure your setup carefully you should gain something, but if
you don't, you stand to lose much more.
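
For completeness, the two variants would look roughly like this in
my.cnf (a sketch using the sizes from your question; note that a
partition is initialized with 'newraw' on the very first start and
switched to 'raw' afterwards):

   # OS-buffered case: innodb opens the block device directly
   #   first start only (innodb initializes the partition):
   innodb_data_file_path = /dev/hda1:20Gnewraw
   #   after a clean shutdown, switch to normal use:
   # innodb_data_file_path = /dev/hda1:20Graw

   # unbuffered case: bind a raw character device to the partition
   # (with the 'raw' tool) and point innodb at that instead:
   # innodb_data_file_path = /dev/raw/raw1:20Graw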

I thought I had sent my results to the list, but I just realized that
I never sent that mail. I think that's because it's tough to write
down correctly what you have seen without drawing wrong conclusions,
and also because just after writing it I did some more tests, and the
test scripts were (and still are) not prepared for publication
either. Maybe it's still worth something; see below.

Christian.


Date: Mon, 22 Jul 2002 23:59:59 +0100
To: "Heikki Tuuri" <[EMAIL PROTECTED]>, <[EMAIL PROTECTED]>
From: Christian Jaeger <[EMAIL PROTECTED]>
Subject: Re: Innodb and unbuffered raw io on linux?

> I'll test it more thoroughly in the next days.

mysql 3.23.51 with your patches worked without problems in my tests.

But performance generally seemed (a bit) worse than with OS-buffered
IO (= using /dev/sdaX).

- Batches of 1000 inserts of ~2kb data plus a commit took about 8%
more time with rawio (/dev/raw/raw1) than buffered (/dev/sdaX).
I must say, though, that I used a different partition on the same
disk for this measurement (it seems that the 'raw' tool, at least
the one from Debian, cannot unmap a raw device once it has been
mapped onto a block device, so I had to use two partitions of
equal size, one mapped to /dev/raw/raw1, the other not mapped),
so the disk itself could account for this difference.
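  (The binding itself went roughly like this -- a sketch, the name
  of the second, raw-mapped partition is made up:

     # map the character raw device onto one of the two partitions
     raw /dev/raw/raw1 /dev/sda11

     # query the current bindings
     raw -qa

  Once bound like this, I found no way to unmap raw1 again short of
  rebooting.)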

- The biggest difference was (as expected) when doing full table
scans on tables bigger than what seemed to fit into the innodb
buffer pool. The first scanning query took the same amount of time
in both cases (~10 seconds); with OS-buffered IO the subsequent
scans took only about 6 seconds, whereas unbuffered IO still took
10 seconds for each run. After increasing innodb_buffer_pool_size
from 90MB to 290MB, the subsequent queries profited from cached
data with rawio as well (the test machine has 512MB RAM, runs
linux 2.4.17, one Seagate SCSI disk, 1GHz PentiumIII):

               Query time for the first runs after a fresh mysqld start [s] *)
90MB buffer pool:
        /dev/raw/raw1   10.60  11.09  11.01
           /dev/sda10   10.67   6.82   6.80
290MB buffer pool:
        /dev/raw/raw1    9.53   4.18   4.17   3.96**)
           /dev/sda10    9.48   3.98   3.96

*) note that kernel 2.4 drops its own buffered pages when the last
application closes the block device, so it's enough to restart
mysqld; it's not necessary to reboot the machine between test runs
(no cache is left over).
**) this last number is from a different run
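
The procedure per run was essentially (a sketch; the actual table
and query are assumptions, since the test scripts were never
published):

   # restart mysqld so neither innodb nor the kernel has pages cached
   /etc/init.d/mysql restart

   # time the same full table scan three times in a row
   for i in 1 2 3; do
       time mysql test -e 'SELECT COUNT(*) FROM big_table'
   done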

(The OS buffer cache here acts as an auto-adapting second-level cache.)

So if I conclude correctly, this means (no big news):

1. if there are no applications besides mysql running on the
machine, it's important to make the innodb buffer pool just as big
as possible, regardless of whether rawio is used or not. In this
scenario there's no compelling reason for rawio either.

2. if there *are* other applications besides mysql, then there are
two possible strategies (see the sketch after this list):
a) set innodb_buffer_pool_size to exactly the size of the working
set, and use rawio. That way innodb keeps everything in its own
memory, and the OS doesn't spend cache memory on useless duplicates,
which leaves more memory for the other apps.
b) set innodb_buffer_pool_size smaller, but use normal buffered
/dev/sd* devices (or files in the filesystem), letting the kernel
buffer what doesn't fit into the innodb buffer pool. Some data will
be buffered twice, though.
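
As a my.cnf sketch (sizes made up for a 512MB machine, using the old
set-variable syntax of mysql 3.23):

   # strategy a) pool sized to the working set, unbuffered raw device
   innodb_data_file_path = /dev/raw/raw1:20Graw
   set-variable          = innodb_buffer_pool_size=290M

   # strategy b) smaller pool, OS-buffered block device (or files)
   # innodb_data_file_path = /dev/sda10:20Graw
   # set-variable          = innodb_buffer_pool_size=90M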

I haven't run tests with other applications competing with mysql for RAM.

I also haven't tested using raw partitions or unbuffered IO for the
logs. Would this work, or make sense? Are the logs cached in innodb
as well? I've noticed some memory pressure (at least emacs had been
swapped out) when using a 290MB buffer pool and 2*125M logs, so with
big logs one should maybe shrink the buffer pool to compensate (or
use rawio for the logs as well).

Putting the logs on ext2 instead of reiserfs made operations 2-9%
faster.

What I have NOT tested:
- real world in the sense that apache and an app server are running
in parallel to mysql, although my test program should not be that
far from that.
- using block devices or unbuffered IO for the logs (my tests used
either reiserfs or ext2 files). Would this work, or make sense?
- RAID
- separate disks for log and data
- the patches from the SGI people. Maybe their rawio code is faster.
- multiple clients in parallel

Thanks,
Christian.



PS. some random observations (with mysql 3.23.51 + your patches,
built from the Debian source package):

- in some tests (1000 inserts of ~2kb each plus one commit, in a
loop running about once per second) the CPU is >40% idle even when
using /dev/sda10 (OS-buffered) for the data and
innodb_flush_log_at_trx_commit=0 (I would have expected it to use
all CPU, since the disk shouldn't be the limit?). At first I had
put the logs on reiserfs, but using ext2 didn't change anything.
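  The loop was roughly of this shape (a sketch; the table name and
  row layout are made up):

     # one batch: 1000 inserts of ~2kb each, then a single commit
     ( echo "SET AUTOCOMMIT=0;"
       for i in `seq 1000`; do
           echo "INSERT INTO t (payload) VALUES (REPEAT('x', 2048));"
       done
       echo "COMMIT;"
     ) | mysql test

     # ...repeated about once per second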

- mysql takes a long time (at least a minute) to shut down shortly
after 300'000 rows (of ~2kb each) have been deleted, in spite of
innodb_fast_shutdown=1 and innodb_flush_log_at_trx_commit=0 in
my.cnf. Mysql takes about 20% cpu during this phase. It seems like
there's cleanup work going on in the background that can't be
interrupted even by fast_shutdown, correct? (I assume it isn't
problematic when mysql is terminated by init issuing SIGKILL.)
  I've read the www.innodb.com/bench.html page now, which mentions
the same phenomenon in the "other market-leading db" test; otherwise
I haven't seen this mentioned. It's just a bit confusing if one puts
the fast shutdown option into the config and mysql still doesn't
shut down fast :), maybe a note in the paragraph about
innodb_fast_shutdown would be useful.

(- compared with a mysql 3.23.49 installation on an identical
machine (with identical setup and identical data, both without any
other load, both using /dev/sdaX with OS buffering), 3.23.51 took
longer for the same queries involving smaller table scans (0.26 vs.
0.17 seconds in the mysql client). Not sure what the reason is.)

