Re: Performance of MEMORY/HEAP-tables compared to mysql-cluster?

2006-02-06 Thread sheeri kritzer
I can confirm that using a large buffer pool, putting all the hot data
in there, setting the logfiles large, etc. works in the real world --
that's what we do, and all our important data resides in memory:
the wonder of transactions, foreign keys, etc., with the speed of
memory tables.

-Sheeri




Re: Performance of MEMORY/HEAP-tables compared to mysql-cluster?

2006-02-06 Thread Jan Kirchhoff
I just managed to get two identical test servers running, both being
slaves of my production system, replicating a few databases including two
of the heavy-use tables.
One server uses HEAP tables; on the other one I changed the table format
to InnoDB.
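
The conversion itself was just the usual engine switch, roughly like
this (table name made up here; note that ALTER TABLE rebuilds the whole
table and locks it while it runs, so it takes a while on a big one):

ALTER TABLE quotes ENGINE=InnoDB;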


I've had some problems with the replication, but now it seems like
everything is running - although I still don't know what the problem was/is.
I hope I'll be able to do some testing over the next few days... I'll give
more feedback later this week. Thanks for the help!


Jan






Re: Performance of MEMORY/HEAP-tables compared to mysql-cluster?

2006-02-05 Thread Heikki Tuuri

Jan,

if you make the InnoDB buffer pool big enough to hold all your data, or at 
least all the 'hot data', and set ib_logfiles large as recommended at 
http://dev.mysql.com/doc/refman/5.0/en/innodb-configuration.html, then 
InnoDB performance should be quite close to MEMORY/HEAP performance for 
small SQL queries. If all the data is in the buffer pool, then InnoDB is 
essentially a 'main-memory' database. It even uses automatically built hash 
indexes.
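
To make that concrete, a minimal sketch of the relevant my.cnf settings
(the values are just illustrative assumptions for a box with ~8G of RAM;
tune them to your own data set):

[mysqld]
# big enough to keep the whole hot data set in memory
innodb_buffer_pool_size = 6G
# large redo logs reduce checkpoint flushing under a heavy update load
innodb_log_file_size = 250M
innodb_log_files_in_group = 2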


This assumes that you do not bump into extensive deadlock issues. Deadlocks 
can occur even with single row UPDATEs if you update indexed columns. 
Setting innodb_locks_unsafe_for_binlog will reduce deadlocks, but read the 
caveats about it.
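
As a sketch of how two single-row UPDATEs can deadlock (hypothetical
table; the two sessions take row locks in opposite order):

-- session 1:
BEGIN;
UPDATE quotes SET price = 10 WHERE symbol = 'AAA';
-- session 2:
BEGIN;
UPDATE quotes SET price = 20 WHERE symbol = 'BBB';
-- session 1 now blocks, waiting for session 2's row lock:
UPDATE quotes SET price = 10 WHERE symbol = 'BBB';
-- session 2 closes the lock cycle; InnoDB detects the deadlock and
-- rolls one of the transactions back (error 1213):
UPDATE quotes SET price = 20 WHERE symbol = 'AAA';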


Best regards,

Heikki

Oracle Corp./Innobase Oy
InnoDB - transactions, row level locking, and foreign keys for MySQL

InnoDB Hot Backup - a hot backup tool for InnoDB which also backs up MyISAM 
tables

http://www.innodb.com/order.php



Re: Performance of MEMORY/HEAP-tables compared to mysql-cluster?

2006-01-31 Thread Jan Kirchhoff

Hi,

I am currently experiencing trouble getting my new MySQL 5 servers
running as slaves of my old 4.1.13 master.
Looks like I'll have to dump the whole 30GB database and import it on
the new servers :( At the moment I do not see any opportunity to do this
before the weekend, since the longest time I can block any of our
production systems is only 2-3 hours between midnight and 2am :(
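
The dump/reload itself would be something along these lines (a sketch
with hypothetical paths, not the exact commands; --master-data records
the master's binlog position in the dump so the new slaves know where
to start, and --single-transaction only gives a lock-free consistent
snapshot for InnoDB tables, which is part of why I need a window):

mysqldump --all-databases --master-data --single-transaction \
    > /backup/full-dump.sql
mysql < /backup/full-dump.sql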

I am still curious whether InnoDB could handle the load of my updates on
the heavy-traffic tables, since it is disk-bound and does transactions.

What I would probably need is an in-memory table without any kind of
locking - at least no table locks! But there is no such engine in MySQL.
If a cluster can handle that (even with the transaction overhead), it
would probably be perfect for us, since it even adds high availability
in a very easy way...

Jan




Re: Performance of MEMORY/HEAP-tables compared to mysql-cluster?

2006-01-27 Thread Kishore Jalleda
A cluster would not necessarily give you speed, but it would give you
scalability: basically it increases the concurrency at which you can
service clients. In your case the lockups are occurring for the obvious
reason that the threads are competing for system resources, so a
cluster may be a good option. But you could also use replication, have
multiple slaves, and distribute the load across them, if you have the
resources to do that.
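
Attaching an additional slave boils down to restoring a dump from the
master and then pointing the slave at the master's binlog coordinates,
something like this sketch (host, credentials, and coordinates are all
placeholders):

CHANGE MASTER TO
    MASTER_HOST = 'master.example.com',
    MASTER_USER = 'repl',
    MASTER_PASSWORD = 'secret',
    MASTER_LOG_FILE = 'mysql-bin.000123',
    MASTER_LOG_POS = 4;
START SLAVE;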

Kishore Jalleda

On 1/27/06, Jan Kirchhoff [EMAIL PROTECTED] wrote:
 Hi,

 Did anybody ever benchmark heap-tables against a cluster?
 I have a table with 900,000 rows (40 fields; CHARs, INTs and DOUBLEs;
 Avg_row_length=294) that gets around 600 updates/sec (grouped into about 12
 extended inserts a minute, each inserting/updating 3000 rows).
 This is currently a HEAP table (and it gets replicated onto a slave, too). I
 experience locking problems on both the master and the slave: queries that
 usually respond within 0.0x seconds suddenly hang and take 10 seconds or
 sometimes even longer.
 I wonder if a cluster setup would give me any speedup here? I will be
 doing some benchmarking myself next week, but it would be very helpful if
 anybody could share experiences with me so I don't have to start from
 scratch... It is difficult and very time-consuming to set up a test suite
 comparable to our production systems... Any tips will help! Thanks!

 regards
 Jan




Re: Performance of MEMORY/HEAP-tables compared to mysql-cluster?

2006-01-27 Thread sheeri kritzer
Why are you using a heap table?

My company has tables with much more data than that, and they get
updated much more frequently.  We use InnoDB tables with very large
buffer sizes, and we have tweaked which queries use the query cache and
which don't, on a system with lots of RAM (10GB).  Basically we've set
it up so everything is in memory anyway.
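
The query-by-query cache control is done with cache hints: with
query_cache_type = 2 (on-demand mode, as in the my.cnf later in this
thread) only queries that ask for it are cached. A sketch with a
made-up table:

-- cached, because the hint asks for it
SELECT SQL_CACHE symbol, price FROM quotes WHERE exchange = 'NYSE';
-- never cached
SELECT SQL_NO_CACHE COUNT(*) FROM quotes;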

Perhaps a similar setup would help for you?

Sincerely,

Sheeri Kritzer




Re: Performance of MEMORY/HEAP-tables compared to mysql-cluster?

2006-01-27 Thread Jan Kirchhoff

sheeri kritzer wrote:

Why are you using a heap table?
We started out with a MyISAM table years ago, when the table was much
smaller and less frequently updated. We tried InnoDB about 2 or 3 years
ago and couldn't get a satisfying result. We then changed it to HEAP and
everything was fine.
Now we are getting locking problems as the number of updates and selects
constantly increases, and we need to upgrade our server hardware anyway.
I like the scalability of clusters for load-balancing and HA, and every
2-3 months we have had problems with our MySQL replication on the
heavy-load servers (total > 2000 updates/sec on average) that we couldn't
reproduce. Other replication setups with less throughput have run stable
for years (same kernel, same mysqld). I'd get rid of all my replication
problems if I put the most frequently updated tables into a cluster...

My company has tables with much more data than that, and they get
updated much more frequently.  We use InnoDB tables with very large
buffer sizes, and we have tweaked which queries use the query cache and
which don't, on a system with lots of RAM (10GB).  Basically we've set
it up so everything is in memory anyway.

Perhaps a similar setup would help for you?
That sounds interesting, since we couldn't get good performance using
InnoDB in our case - but that was a few years ago; things may have
changed? I'll definitely give it a try next week, too.
Could you give me more information on your system? Hardware, size of the
table, average number of updates/sec?


Thanks for your suggestions,
Jan




RE: Performance of MEMORY/HEAP-tables compared to mysql-cluster?

2006-01-27 Thread Jimmy Guerrero
Hello,

Another consideration besides the performance aspects is the difference
in characteristics between the MEMORY and NDB storage engines. (You'll
be gaining or losing functionality depending on how you look at it.)

Briefly:

MEMORY - in memory, table locks, hash & B-tree indexes, no disk I/O or
persistence
NDB - in memory, supports transactions, persistence, row-level locks,
hash & T-tree indexes
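
Either way the engine is chosen per table, e.g. (hypothetical table;
NDB additionally needs a running cluster of data nodes):

CREATE TABLE quotes (
    symbol CHAR(12) NOT NULL PRIMARY KEY,
    price  DOUBLE
) ENGINE=MEMORY;

-- or, to move the same table into the cluster:
ALTER TABLE quotes ENGINE=NDBCLUSTER;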

Also, moving to Cluster means more machines, and as stated by Kishore,
Cluster really buys you scalability, not necessarily performance right
off the bat (unless you plan on using the NDB API to access data).

As Sheeri suggests, another storage engine might be a better play here.

Jimmy Guerrero, Senior Product Manager
MySQL Inc, www.mysql.com
Houston, TX




Re: Performance of MEMORY/HEAP-tables compared to mysql-cluster?

2006-01-27 Thread sheeri kritzer
No problem:

Firstly, how are you measuring your updates on a single table?  I took
a few binary logs, grepped for the statements that changed the table,
counted the lines (using wc), and divided by the number of seconds the
binary logs covered.  The average for one table was 108 updates per
second.

I'm very intrigued as to how you came up with 200-300 updates per
second for one table... did you do it that way?  If not, how did you do
it?  (We are a VERY heavily trafficked site, with 18,000 people online
and active, and that accounts for the 108 updates per second.  So if
you have more traffic than that... wow!)

my.cnf:

[mysqld]
old-passwords
tmpdir  = /tmp/
datadir = /var/lib/mysql
socket  = /var/lib/mysql/mysql.sock
port= 3306
key_buffer  = 320M
max_allowed_packet  = 16M
table_cache = 1024
thread_cache= 80
ft_min_word_len = 3

# Log queries taking longer than long_query_time seconds
long_query_time = 4
log-slow-queries = /var/lib/mysql/slow-queries.log
log-error = /var/lib/mysql/mysqld.err

# Try number of CPU's*2 for thread_concurrency
thread_concurrency = 12

interactive_timeout = 28800
wait_timeout = 30

max_connections = 2200
max_connect_errors  = 128

# Replication Master Server (default)
# binary logging is required for replication
log-bin
server-id   = 15
binlog-do-db= manhunt
binlog-do-db= phpAdsNew
binlog-do-db= mobile
max_binlog_size = 2G

# InnoDB tables
innodb_data_home_dir = /var/lib/mysql/
innodb_data_file_path = ibdata1:3G;ibdata2:3G;
innodb_log_group_home_dir = /var/lib/mysql/
innodb_log_files_in_group = 2
innodb_log_arch_dir = /var/lib/mysql/
innodb_buffer_pool_size = 5G
innodb_additional_mem_pool_size = 40M
innodb_log_file_size = 160M
innodb_log_buffer_size = 80M
innodb_flush_log_at_trx_commit = 0
innodb_lock_wait_timeout = 50
innodb_thread_concurrency = 8
innodb_file_io_threads = 4

# Query Cache Settings
query_cache_size = 32M
query_cache_type = 2
--
table info for the table in question:
Name: Sessions
Engine: InnoDB
Version: 9
Row_format: Dynamic
Rows: 10600
Avg_row_length: 792
Data_length: 8404992
Max_data_length: NULL
Index_length: 24297472
Data_free: 0
Auto_increment: NULL
Create_time: 2005-12-01 15:04:52
Update_time: NULL
Check_time: NULL
Collation: latin1_swedish_ci
Checksum: NULL
Create_options:
Comment: InnoDB free: 317440 kB
--

We're running MySQL Version 4.1.12 on Fedora Core 3 x86_64.
-
The hardware is a Dell 2850 with 4 Intel(R) Xeon(TM) CPU 3.40GHz
processors, 1024 KB cache size.  Total RAM:  8199448 kB (8G)
-

We're quite beefy because of the number of queries per second (3000)
and updates.  We are not aware of any locking issues.  Our customers
are quite vocal (there are 8 folks in IT, and over 20 in Customer
Service!), so when things are slow we know about it.

Sincerely,

Sheeri Kritzer




Re: Performance of MEMORY/HEAP-tables compared to mysql-cluster?

2006-01-27 Thread Jan Kirchhoff

Thanks for your hardware/database information. I will look at it
closely tomorrow, since I want to go home for today - it's already 9 pm
over here... I need beer ;)


We are not running a webservice here (actually we do, too, but that's
on other systems). This is part of our database holding data from major
stock exchanges worldwide that we deliver realtime data for.
Currently there are around 900,000 quotes, and during trading hours they
change all the time... We have many more updates than selects on the
main database.
Our application that receives the datastream writes blocks (INSERT ...
ON DUPLICATE KEY UPDATE ...) with all records that changed since the
last write. It gives me debug output like "[timestamp] Wrote 19427 rows
in 6 queries" every 30 seconds - and those are numbers that I can rely on.
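
One of those write blocks looks roughly like this (hypothetical
columns; a single multi-row statement that updates rows whose key
already exists and inserts the rest):

INSERT INTO quotes (symbol, price, ts)
VALUES ('AAA', 10.1, NOW()),
       ('BBB', 20.2, NOW()),
       ('CCC', 30.3, NOW())
ON DUPLICATE KEY UPDATE
    price = VALUES(price),
    ts    = VALUES(ts);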


Jan

