Harrison, thanks for your mail,

I think mysql uses way too much memory (overhead) to store my data.


How much overhead do you think it is using? Each row is 61 bytes in geldbrief, which is *exactly* the amount needed for the datatypes you have.

[...]

Now if you take 61 bytes * 2449755 rows, you get 149435055 bytes used:

157468096 Real amount
149435055 Data size
---------
  8033041 Overhead
     5.1% Total overhead of data

I don't see how you could get it much smaller than that. Even with zero overhead it would only be 5% smaller, which would still require *a lot* of memory to store it all. The primary key itself uses only about 8 bytes of memory per row (because it is a hashed index; a btree would be much larger), which is also very compact.
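You can pull those numbers straight out of the server if you want to re-check them. A quick sketch, assuming the table is still called geldbrief (Data_length should be close to the real amount above):

    SHOW TABLE STATUS LIKE 'geldbrief';

    -- relevant columns:
    --   Rows            2449755
    --   Avg_row_length  61
    --   Data_length     memory used for the row data
    -- overhead = Data_length - (Rows * Avg_row_length)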

With your own in-memory database, do you use some sort of compression algorithm? That is the only way I could see it taking up much less space. MySQL is pretty close to as efficient as you can get without compression.

I did that calculation after my last post, too. I should have done that earlier ;) There is compression in our old solution, but I never thought it was very efficient. I was definitely wrong: we fit around 20% more data into our old database. That is what I assumed MySQL's overhead had to be, since I didn't take the compression too seriously.



With all of that being said, I would just go with InnoDB, which can buffer the data in memory as well. In a later email you mention that you need to delete a lot of rows per hour. HEAP wouldn't work all that well for that, since it uses table-level locks: if it took 5 seconds to delete a large portion of the rows, the table would be locked for that whole duration. InnoDB, with its row-level locking, would be much better for that purge process. If you set innodb_flush_log_at_trx_commit to 0 (which would most likely be OK in your case), most of the inserting would be done in memory anyway and only written to disk in batches.
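Something like this in my.cnf would be a starting point for that setup. Just a sketch; the sizes here are guesses that you would tune to your 2GB (and later 6GB) boxes:

    [mysqld]
    # cache as much of the table in memory as the box allows
    innodb_buffer_pool_size = 1024M
    # don't flush the log to disk at every commit; writes are
    # batched about once per second (a crash can lose that second)
    innodb_flush_log_at_trx_commit = 0
    # larger log files mean less frequent checkpoints under heavy inserts
    innodb_log_file_size = 256M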


The other option is, as you said before, using MyISAM MERGE tables. That makes dropping old records easier if you partition the data into separate per-day tables. The only thing I would be worried about in that scenario is inserts coming from multiple threads; in that case you could still run into locking issues, with the inserts blocking each other.
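A sketch of how the per-day partitioning could look. The column list here is made up just to show the mechanics; all underlying tables must have the identical structure:

    -- one MyISAM table per day (hypothetical columns)
    CREATE TABLE geldbrief_20040510 (
        symbol CHAR(12) NOT NULL,
        ts     DATETIME NOT NULL,
        geld   DOUBLE,
        brief  DOUBLE
    ) ENGINE=MyISAM;

    -- the MERGE table unions the per-day tables;
    -- new rows go into the last table in the list
    CREATE TABLE geldbrief_all (
        symbol CHAR(12) NOT NULL,
        ts     DATETIME NOT NULL,
        geld   DOUBLE,
        brief  DOUBLE
    ) ENGINE=MERGE UNION=(geldbrief_20040509, geldbrief_20040510)
      INSERT_METHOD=LAST;

    -- dropping a day is then a union redefinition plus a DROP
    -- instead of a big DELETE
    ALTER TABLE geldbrief_all UNION=(geldbrief_20040510);
    DROP TABLE geldbrief_20040509;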


I'm away next week, but I guess I'll just give all the options a try on our development system afterwards. I assume that if something works with 2GB of RAM, it should also work with 6GB (Opteron). Or does MySQL's behaviour change with very big HEAP tables or key_buffers?
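One thing I will check first: HEAP tables are capped by max_heap_table_size (only 16M by default), and the MyISAM index cache by key_buffer_size, so the big box would need something like this (rough sketch, the numbers are made up):

    [mysqld]
    # let a single in-memory table grow to a few GB (64-bit box)
    max_heap_table_size = 4096M
    # MyISAM index cache, relevant for the merge-table variant
    key_buffer_size = 1024M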

Jan
