Thanks, guys.

The reason I was looking for raw devices / AIO is mostly non-blocking writes.
What I mean is: even though a single write is no faster on a raw device, if
MySQL supported raw devices with asynchronous I/O it could submit a write
request to disk without waiting for the previous write to complete, and then
submit the next one right away. In that case the commit (write) throughput
could be increased greatly without blocking users or keeping them waiting. In
our current test we are using a SAN with a huge cache, and each single write
takes only 0.3ms (yes, very fast, close to a ramdisk I guess), but the
sequential, blocking fsync call is the bottleneck, and it cannot be
parallelized.

That's the reason I was asking about such an option; a small sketch of the
idea follows.
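
To show what I mean by non-blocking writes, here is a minimal sketch using
POSIX AIO on Linux (the file name, block size, and batch size are all made up
for illustration and have nothing to do with MySQL internals):

/* Minimal sketch of overlapped writes with POSIX AIO (illustration only).
 * Instead of write()+fsync() per request, several writes are queued at once
 * and the caller waits for the whole batch, so one slow write no longer
 * blocks the submission of the next. */
#include <aio.h>
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define NWRITES 4
#define BLKSZ   8192

int main(void)
{
    int fd = open("binlog.test", O_WRONLY | O_CREAT, 0600);  /* made-up file */
    if (fd < 0) { perror("open"); return 1; }

    static char buf[NWRITES][BLKSZ];      /* pretend these hold log records */
    struct aiocb cb[NWRITES];

    for (int i = 0; i < NWRITES; i++) {
        memset(&cb[i], 0, sizeof cb[i]);
        cb[i].aio_fildes = fd;
        cb[i].aio_buf    = buf[i];
        cb[i].aio_nbytes = BLKSZ;
        cb[i].aio_offset = (off_t)i * BLKSZ;
        if (aio_write(&cb[i]) != 0) { perror("aio_write"); return 1; }
        /* aio_write() returns immediately: the next request can be queued
         * while the previous ones are still in flight */
    }

    /* wait for each request to finish and collect its result */
    for (int i = 0; i < NWRITES; i++) {
        const struct aiocb *const one[1] = { &cb[i] };
        while (aio_error(&cb[i]) == EINPROGRESS)
            aio_suspend(one, 1, NULL);
        if (aio_return(&cb[i]) != BLKSZ)
            fprintf(stderr, "write %d failed or was short\n", i);
    }

    close(fd);
    return 0;
}

Compile with -pthread (plus -lrt on older glibc). The point is only that
submitting a write and waiting for it are decoupled, which is exactly what a
blocking write()+fsync() path cannot do.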

I was an Oracle DBA before, and Oracle has this kind of capability (AIO
writes), so lgwr can sustain very high throughput (tens of thousands of
commits per second), and it does group commit. A sample trace is below,
followed by a rough sketch of the group-commit idea.

Sample trace of an Oracle lgwr process on Unix:
/1:     semtimedop(105, 0xFFFFFFFF7FFFC914, 1, 0xFFFFFFFF7FFFC900) = 0
/1:     kaio(AIOWRITE, 261, 0x390D3CE00, 8704, 0x0F5FB0007BB2B218) = 0
/1:     kaio(AIOWRITE, 261, 0x390C80000, 253952, 0x0F5FD2007BB2B4A8) = 0
/1:     kaio(AIOWRITE, 261, 0x390D60400, 211456, 0x0F63B2007BB2B738) = 0
/1:     kaio(AIOWRITE, 261, 0x390E8EC00, 182272, 0x0F66EC007BB2B9C8) = 0
/1:     kaio(AIOWRITE, 261, 0x390F10A00, 230912, 0x0F69B4007BB2BC58) = 0
/1:     kaio(AIOWRITE, 261, 0x391024A00, 91648, 0x0F6D3A007BB2BEE8) = 0
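
And the group-commit part works roughly like this (a hand-written sketch of
the general technique, not Oracle or MySQL code; every name in it is
invented): whichever session reaches the log first does one fsync on behalf
of everybody queued behind it, so many commits share a single disk wait.

/* Hand-written sketch of group commit (illustration only):
 * commits that arrive while a flush is in progress queue up, and the next
 * single fsync() makes the whole group durable at once. */
#include <fcntl.h>
#include <pthread.h>
#include <unistd.h>

static pthread_mutex_t log_mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  flushed   = PTHREAD_COND_INITIALIZER;
static long next_lsn     = 0;   /* last log position handed out         */
static long flushed_upto = 0;   /* highest log position known durable   */
static int  flush_busy   = 0;   /* is some session already fsync()ing?  */
static int  log_fd       = -1;  /* made-up log file, opened in main()   */

/* Called by each session at COMMIT time. */
static void commit_one(void)
{
    pthread_mutex_lock(&log_mutex);
    /* (copy this session's log record into the shared log buffer here) */
    long my_lsn = ++next_lsn;

    while (flushed_upto < my_lsn) {
        if (!flush_busy) {
            /* become the group leader: flush everything queued so far */
            flush_busy = 1;
            long target = next_lsn;
            pthread_mutex_unlock(&log_mutex);
            fsync(log_fd);               /* one fsync for the whole group */
            pthread_mutex_lock(&log_mutex);
            flushed_upto = target;
            flush_busy = 0;
            pthread_cond_broadcast(&flushed);
        } else {
            /* someone else is flushing; wait to be covered by their fsync */
            pthread_cond_wait(&flushed, &log_mutex);
        }
    }
    pthread_mutex_unlock(&log_mutex);
}

static void *session(void *arg)
{
    (void)arg;
    commit_one();
    return NULL;
}

int main(void)
{
    log_fd = open("grouplog.test", O_WRONLY | O_CREAT, 0600); /* made-up name */
    pthread_t t[8];
    for (int i = 0; i < 8; i++)
        pthread_create(&t[i], NULL, session, NULL);
    for (int i = 0; i < 8; i++)
        pthread_join(t[i], NULL);
    close(log_fd);
    return 0;
}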

Thx


On Fri, Mar 18, 2011 at 6:00 AM, Karen Abgarian <a...@apple.com> wrote:

> Hi,
>
> For the actual question, I agree with the points Johan mentioned.  MySQL,
> to my knowledge, does not have an option to use raw devices for binary logs.
> Even if it had one, it would not provide the benefits Chao is seeking.  There
> is indeed a tradeoff between losing transactions and performance.  If the
> goal is performance, the raw device would be slower, since every write would
> have to actually complete instead of leaving the block in the OS cache.
> The best result is probably achieved with a battery-backed cache: the server
> could be configured not to lose transactions and at the same time perform the
> work fast.
>
> For the question of tweaking sync_binlog, I find it difficult to use values
> other than 0 and 1.  With 0, it just skips fsyncs, and the number of
> transactions lost is at the mercy of the OS cache.  With 1, all transactions
> will always be on disk before returning to the user.  I cannot make sense of
> the documentation's remark that this would lose 'at most one transaction',
> and I assume it is a mistake.
>
> With a value of 10, say, what I expect to happen is that the server will
> attempt an fsync every 10 transactions.  Say 10 transactions are in the
> binary log buffer and the server does an fsync.  What is supposed to happen
> with the other transactions that keep coming?  If they commit in memory and
> return, the claim that sync_binlog syncs every 10 transactions is false.  If
> they wait, the wait would be as long as the wait for the disk write, and the
> result is that all transactions end up waiting for disk writes.
>
> If somebody can shed more light on this, I would like to hear it.
>
> Tx
> Karen.
>
>
> On Mar 17, 2011, at 12:14 AM, Johan De Meersman wrote:
>
> > ----- Original Message -----
> >> From: "Chao Zhu" <zhuc...@gmail.com>
> >>
> >>   One Q: Can the MySQL binlog use a raw device on Linux?
> >
> > Mmm, good question. Don't really know; but I'm not convinced you'll get
> > huge benefits from it, either. Modern filesystems tend to perform pretty
> > close to raw throughput.
> >
> > From a just-thinking-it-through point of view, I'd guess no - mysqld never
> > seems to open binlogs for append, it always opens a new one. This may have
> > something to do with the way replication works; not to mention the question
> > of what'll happen if the log is full - it's not a circular buffer.
> >
> >> Can we use async I/O for binlog writing? Sequential, non-AIO fsync is
> >> slowing our throughput...
> >
> > Mmm... Theoretically, yes, you could use an async device (even NFS over UDP
> > if you're so inclined), but async means that you're going to be losing some
> > transactions if the server crashes.
> >
> > You can also tweak
> > http://dev.mysql.com/doc/refman/5.0/en/replication-options-binary-log.html#sysvar_sync_binlog
> > - basically, this controls how often the binlog fsyncs. The same caveat
> > applies, obviously: set this to ten and you'll have ten times fewer fsyncs,
> > but you risk losing ten transactions in a crash.
> >
> > If your binlogs are async, then you also risk having slaves out of sync if
> > your master crashes.
> >
> >
> > Personally, if your binlogs are slowing you down, I would recommend putting
> > them on faster storage. Multiple small, fast disks in RAID10 are going to be
> > very fast, or you could invest in solid state disks - not all that expensive
> > anymore, really. Maybe even just a RAM disk - you'll lose data when the
> > machine crashes (and need an initscript for save/load of the data on that
> > disk), but not if just the mysqld crashes.
> >
> >
> > Weigh the benefits of each option very, very carefully against the risk of
> > losing data before you go through with this.
> >
> >
> > --
> > Beer with grenadine
> > Is like mustard with wine
> > She who drinks it is a prude
> > He who drinks it is soon a donkey
> >
>
>


-- 
Regards
Zhu Chao
