I just managed to get two identical test servers running, both slaves of my production system, replicating a few databases including two of the heavy-use tables. One server uses HEAP tables; on the other I changed the table format to InnoDB.

I've had some problems with replication, but now it seems like everything is running - although I still don't know what the problem was (or is). I hope I'll be able to do some testing over the next few days... I'll give more feedback later this week. Thanks for the help!

Jan



sheeri kritzer wrote:
I can confirm that using a large buffer pool, putting all the hot data
in there, setting the log files large, etc. works in the real world --
that's what we do, and all our important data resides in memory. The
wonder of transactions, foreign keys, etc., with the speed of memory
tables.

-Sheeri

On 2/5/06, Heikki Tuuri <[EMAIL PROTECTED]> wrote:
Jan,

if you make the InnoDB buffer pool big enough to hold all your data, or at
least all the 'hot' data, and set the ib_logfiles large as recommended at
http://dev.mysql.com/doc/refman/5.0/en/innodb-configuration.html, then
InnoDB performance should be quite close to MEMORY/HEAP performance for
small SQL queries. If all the data is in the buffer pool, then InnoDB is
essentially a main-memory database. It even builds adaptive hash indexes
automatically.
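
A minimal my.cnf sketch along those lines; the sizes below are only
placeholders and depend on the machine's RAM and write volume:

    [mysqld]
    # assumption: enough RAM to keep the hot data cached; on a
    # dedicated server the pool is often sized to a large fraction
    # of physical memory
    innodb_buffer_pool_size = 2048M
    # larger logs mean less checkpoint flushing; in MySQL 4.1/5.0
    # the combined size of all log files must stay below 4 GB
    innodb_log_file_size = 256M
    innodb_log_files_in_group = 2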

This assumes that you do not bump into extensive deadlock issues. Deadlocks
can occur even with single row UPDATEs if you update indexed columns.
Setting innodb_locks_unsafe_for_binlog will reduce deadlocks, but read the
caveats about it.
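
If deadlocks do occur, the most recent one can be inspected directly on
the server (MySQL 5.0 syntax), and the option mentioned above goes into
my.cnf:

    -- prints a "LATEST DETECTED DEADLOCK" section, among others
    SHOW ENGINE INNODB STATUS;

    [mysqld]
    # read the caveats in the manual before enabling this
    innodb_locks_unsafe_for_binlog = 1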

Best regards,

Heikki

Oracle Corp./Innobase Oy
InnoDB - transactions, row level locking, and foreign keys for MySQL

InnoDB Hot Backup - a hot backup tool for InnoDB which also backs up MyISAM
tables
http://www.innodb.com/order.php
----- Original Message -----
From: "Jan Kirchhoff" <[EMAIL PROTECTED]>
Newsgroups: mailing.database.myodbc
Sent: Tuesday, January 31, 2006 1:09 PM
Subject: Re: Performance of MEMORY/HEAP-tables compared to mysql-cluster?


Hi,

I am currently experiencing trouble getting my new MySQL 5 servers
running as slaves of my old 4.1.13 master.
It looks like I'll have to dump the whole 30 GB database and import it
on the new servers :( At the moment I don't see any opportunity to do
this before the weekend, since the longest time I can block any of our
production systems is only 2-3 hours between midnight and 2am :(
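
A dump/import along these lines should work (host names below are
placeholders; --master-data records the binlog coordinates in the dump
so the new slave can be pointed at the right position):

    # on the 4.1.13 master
    mysqldump --master-data --all-databases > fulldump.sql
    # import on each new 5.0 slave, then START SLAVE
    mysql -h new-slave-host < fulldump.sql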

I am still curious whether InnoDB could handle the load of my updates
on the heavy-traffic tables, since it is disk-bound and does
transactions.

What I would probably need is an in-memory table without any kind of
locking - at least no table locks! But there is no such engine in
MySQL. If a cluster can handle that (although it has the transaction
overhead), it would probably be perfect for us, since it even adds
high availability in a very easy way...

Jan

Jan Kirchhoff wrote:
sheeri kritzer wrote:
No problem:

Firstly, how are you measuring your updates on a single table? I took
a few binary logs, grepped for statements that changed the table,
counted the lines (using wc), and then divided by the number of seconds
the binary logs covered. The average for one table was 108 updates
per second.
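
(That counting can be scripted roughly like this; the log file and
table names are made up for illustration:)

    # decode a binary log, count the statements touching one table,
    # then divide by the number of seconds the log covers
    mysqlbinlog binlog.000042 | grep -ci 'update.*mytable'
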
I'm very intrigued as to how you came up with 200-300 updates per
second for one table... did you do it that way? If not, how did you
do it? (We are a VERY heavily trafficked site, with 18,000 people
online and active, and that accounts for the 108 updates per second.
So if you have more traffic than that... wow!)

Thanks for your hardware/database information. I will look at it
closely tomorrow, since I want to go home for today - it's already 9 pm
over here... I need beer ;)

We are not running a web service here (actually we do that too, but on
other systems). This is part of our database holding data from major
stock exchanges worldwide, for which we deliver realtime data.
Currently there are around 900,000 quotes, and during trading hours
they change all the time... We have far more updates than selects on
the main database.
Our application that receives the data stream writes blocks (INSERT ...
ON DUPLICATE KEY UPDATE ...) with all records that changed since the
last write. It gives me debug output like "[timestamp] Wrote 19427
rows in 6 queries" every 30 seconds - and those are numbers I can
rely on.
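
(Such a batched write looks roughly like this; the table and column
names are made up for illustration:)

    INSERT INTO quotes (symbol, price, updated)
    VALUES ('AAA', 10.50, NOW()),
           ('BBB', 11.00, NOW())
    ON DUPLICATE KEY UPDATE
           price = VALUES(price), updated = VALUES(updated);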

Jan


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:    http://lists.mysql.com/[EMAIL PROTECTED]
