Philippe Poelvoorde wrote:

Maybe you should try to normalize your table:
'symbol' could have its own table, which would reduce data and index size.
Then try to reduce the size of your rows; bidsize and asksize should be integers, I think, or maybe 'float' would be enough.
What does the 'quelle' column represent?
Is kurszeit necessary in your primary key?
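If I understand the normalization suggestion correctly, it would look roughly like this (just a sketch; the table and column names are made up, and the tick table would then store the small symbol_id instead of the symbol text):

    CREATE TABLE symbols (
        symbol_id SMALLINT UNSIGNED NOT NULL AUTO_INCREMENT,
        symbol    CHAR(12) NOT NULL,     -- e.g. "IBM"
        PRIMARY KEY (symbol_id),
        UNIQUE KEY (symbol)
    );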


I changed a few columns: bidsize and asksize are integers now, and I changed ticknumber to smallint unsigned.
At first I used the tick numbers from the feed server; now I count up to 65,000 and then reset the counter back to 0. I need that additional column to handle multiple ticks within one second.
Now I have a row_length of 41 instead of 61 on the "geldbrief" table, but there is still just way too much memory usage.
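Roughly, the column changes look like this (a sketch, not the exact statement; the NOT NULL parts are assumptions):

    ALTER TABLE geldbrief
        MODIFY bidsize    INT NOT NULL,
        MODIFY asksize    INT NOT NULL,
        MODIFY ticknumber SMALLINT UNSIGNED NOT NULL;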


quelle is the stock exchange ("Quelle" is German for "source").

The table should store trades and bids/asks from stock exchanges, so the primary key has to include the following (see the sketch after the list):

symbol (e.g. "IBM")
quelle (numeric code for the stock exchange)
date and time (kurszeit)
ticknumber (in order to be able to handle multiple ticks per second)
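Stripped down, the table looks roughly like this (only a sketch: the data columns and their types are approximate, not the exact definition, and ENGINE=MEMORY stands in for my in-memory setup):

    CREATE TABLE geldbrief (
        symbol      CHAR(12)          NOT NULL,  -- e.g. "IBM"
        quelle      TINYINT UNSIGNED  NOT NULL,  -- numeric exchange code
        kurszeit    DATETIME          NOT NULL,  -- date and time of the tick
        ticknumber  SMALLINT UNSIGNED NOT NULL,  -- counter for ticks within one second
        bid         DOUBLE,
        ask         DOUBLE,
        bidsize     INT,
        asksize     INT,
        PRIMARY KEY (symbol, quelle, kurszeit, ticknumber)
    ) ENGINE=MEMORY;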

any more suggestions?

Maybe I'll test how an InnoDB table with a huge innodb_buffer_pool_size would work. But since I'll have to do big deletes once every hour (to kick out old records), I have no idea whether that would work out on a table with much more than 100,000,000 rows and inserts coming in all the time...
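If I go that route, I'd probably purge in small batches instead of one huge DELETE, so each transaction stays short (a sketch only; the one-hour cutoff, the batch size and the secondary index on kurszeit are just examples):

    -- an index on kurszeit so the purge does not scan the whole table
    CREATE INDEX idx_kurszeit ON geldbrief (kurszeit);

    -- repeat until no more rows are affected
    DELETE FROM geldbrief
     WHERE kurszeit < NOW() - INTERVAL 1 HOUR
     LIMIT 10000;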
Another idea is to use a bunch of MyISAM tables (four or more for each day) and a MERGE table on top. To delete the old rows I could then lock out the writers, take the oldest MyISAM table out of the MERGE definition, truncate it, and unlock again, along the lines of the sketch below.
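Something like this, assuming four partitions named geldbrief_part1..4 under a MERGE table geldbrief_merge (the names are made up, and the flush/lock step to keep concurrent inserts out while the UNION list changes is left out here):

    -- take the oldest partition out of the MERGE definition;
    -- queries go through geldbrief_merge, so its rows disappear from view
    ALTER TABLE geldbrief_merge
        UNION=(geldbrief_part2, geldbrief_part3, geldbrief_part4);

    -- the detached partition can now be emptied cheaply and reused later
    TRUNCATE TABLE geldbrief_part1;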
I don't think the disk-based storage engines can respond quickly enough and handle all the inserts at the same time... but I might give it a try next week.


Does anybody have comments on those two ideas, in case my in-memory concept doesn't work out?

thanks
Jan



