At 02:53 PM 9/26/2006, Michael Gargiullo wrote:

-----Original Message-----
From: mos [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, September 26, 2006 3:40 PM
To: mysql@lists.mysql.com
Subject: Re: Partition Help

At 02:03 PM 9/26/2006, you wrote:
>I'm working on a project in which we'd like to convert from Oracle to
>MySQL. We need to partition our data for speed concerns. Currently in
>Oracle I create 8 three-hour partitions for each day (currently running
>450M-750M record inserts/day). I was looking for matching functionality in
>MySQL, but it seems daily partitions are as close as I'm going to come.
>
>We're running 5.1.10 and I'm having a bit of trouble creating partitions
>in both new tables and altering old tables. Below is one example of
>what I've tried.
>
>Can anyone shed some light on this subject?
>
>-Mike


Mike,
         How is this table being updated?

a) From one source like a batch job?
b) Or from hundreds of users concurrently?

If a), then why not just create 1 table per day (or 3 tables per day), and
when you want to reference (the entire day or) a week, just create a Merge
table?
http://dev.mysql.com/doc/refman/5.1/en/merge-storage-engine.html

If b), then you need to use InnoDB tables, because InnoDB has row-level
locks compared to MyISAM's table locks.

Mike


We're using LOAD DATA INFILE to load the data generated by another
process. We do not do updates, but occasionally need to either walk the
table or run a query against it. In Oracle, we currently need 3-hour
partitions to keep the 5 indexes current.

This system handles 450-750 million inserted rows per day, with 5 fields
being indexed. We've been told that number will be closer to 2 billion
records/day by spring 2007.

For example, I diverted the full flow of data to MySQL for 15 minutes
and inserted 9 million records, with loader files backing up. I need to
speed this up. Unfortunately, the table structure and indexes are static
and cannot be changed.

-Mike


Mike,
I've done a lot of Load Data with large tables and, as you've no doubt discovered, insert speed decreases as the number of rows in the table increases. This is due to the extra effort of maintaining the indexes as rows are being loaded: the larger an index grows, the longer each update to it takes. This is true of any database. MyISAM tables are going to be faster than InnoDB in this case.

You can speed it up by:
1) Add as much memory as possible to the machine, because building the index is much faster with plenty of RAM.

2) Modify your my.cnf so key_buffer_size=1500M or more (assuming you have 3 GB or more installed). This allocates memory for building the index.

3) If the table is empty before you add any rows to it, Load Data will run much faster because it builds the index *after* all rows have been loaded. But if you have as few as 1 row in the table before running Load Data, the index has to be maintained as the rows are inserted, and this slows down Load Data considerably.

4) Try throwing an exclusive lock on the table before loading the data. I'm not sure, but this might help.

5) If your table already has rows in it before running Load Data, and the table has indexes defined, it is much faster to disable the table's indexes before running Load Data and enable them again after Load Data has completed. See ALTER TABLE ... DISABLE KEYS / ENABLE KEYS for more info (a sketch of items 2 and 5 follows this list).

6) If you are using Alter Table to add indexes after the table has data, make sure you add all of the indexes in one Alter Table statement, because MySQL copies the table each time Alter Table is run.
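For example, here is a minimal sketch of items 2 and 5 combined. The table name flow_log and the file path are placeholders, not anything from your setup:

# my.cnf -- assumes the machine has 3 GB or more of RAM installed
[mysqld]
key_buffer_size = 1500M

-- Loading into a MyISAM table that already contains rows.
-- DISABLE KEYS skips maintenance of the non-unique indexes during the load;
-- ENABLE KEYS then rebuilds them in one pass, which is much faster.
ALTER TABLE flow_log DISABLE KEYS;
LOAD DATA INFILE '/FW_load/flow_20060926.txt' INTO TABLE flow_log;
ALTER TABLE flow_log ENABLE KEYS;

Note that DISABLE KEYS only suspends non-unique indexes; unique indexes are still updated row by row during the load.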

If you are going to be adding 2 billion rows per day, you might want to try 1 table per hour, which keeps each table under 100 million rows and may be more manageable (assuming a 24-hour day). You can then create a Merge table over the 24 hourly tables so you can traverse them as one. You can of course also create a Merge table for just the morning hours, afternoon hours, evening hours, etc. Name each table like 20060925_1400 for 2 PM on 9/25/2006. Of course you may also want to summarize this data into a separate table so you don't need to keep all of this raw data lying around.
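A rough sketch of what that could look like, reusing the column names from your example below (the real definitions would have to match your fixed table structure):

-- One identical MyISAM table per hour, named YYYYMMDD_HHMM
CREATE TABLE `20060925_1400` (
  c1 INT,
  c2 VARCHAR(30),
  c3 DATETIME,
  KEY idx_c3 (c3)
) ENGINE=MyISAM;

-- ...same definition for `20060925_1500`, `20060925_1600`, and so on...

-- A Merge table over the afternoon hours. All underlying tables must have
-- identical column and index definitions.
CREATE TABLE afternoon_20060925 (
  c1 INT,
  c2 VARCHAR(30),
  c3 DATETIME,
  KEY idx_c3 (c3)
) ENGINE=MERGE UNION=(`20060925_1400`, `20060925_1500`, `20060925_1600`)
  INSERT_METHOD=NO;

The Merge table holds no data itself, so you can drop it and recreate it with a different UNION list at any time without touching the hourly tables.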

Hope this helps.

Mike






>create table t1 (c1 int default NULL, c2 varchar(30) default NULL, c3
>datetime default NULL) engine=myisam PARTITION BY RANGE(to_days(c3))
>       PARTITION p0 VALUES LESS THAN (to_days('2006-09-24'))(
>         SUBPARTITION s0a
>           DATA DIRECTORY = '/FW_data1'
>           INDEX DIRECTORY = '/FW_indx1'
>       ),
>      PARTITION p1 VALUES LESS THAN (to_days('2006-09-26'))(
>         SUBPARTITION s1a
>           DATA DIRECTORY = '/FW_data2'
>           INDEX DIRECTORY = '/FW_indx2'
>       )
>       PARTITION p2 VALUES LESS THAN (to_days('2006-09-28'))(
>         SUBPARTITION s2a
>           DATA DIRECTORY = '/FW_data3'
>           INDEX DIRECTORY = '/FW_indx3'
>       )
>);
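For what it's worth, the CREATE TABLE above looks like it is tripping over syntax rather than a partitioning limitation: the partition list appears to be missing its opening parenthesis after RANGE(to_days(c3)), p1 and p2 are not separated by a comma, and I believe 5.1 requires a SUBPARTITION BY clause when subpartitions are listed explicitly. Something along these lines (untested on 5.1.10) should be closer:

CREATE TABLE t1 (
  c1 INT DEFAULT NULL,
  c2 VARCHAR(30) DEFAULT NULL,
  c3 DATETIME DEFAULT NULL
) ENGINE=MyISAM
PARTITION BY RANGE (TO_DAYS(c3))
SUBPARTITION BY HASH (TO_DAYS(c3))  -- needed when SUBPARTITIONs are defined explicitly
(
  PARTITION p0 VALUES LESS THAN (TO_DAYS('2006-09-24')) (
    SUBPARTITION s0a
      DATA DIRECTORY = '/FW_data1'
      INDEX DIRECTORY = '/FW_indx1'
  ),
  PARTITION p1 VALUES LESS THAN (TO_DAYS('2006-09-26')) (
    SUBPARTITION s1a
      DATA DIRECTORY = '/FW_data2'
      INDEX DIRECTORY = '/FW_indx2'
  ),
  PARTITION p2 VALUES LESS THAN (TO_DAYS('2006-09-28')) (
    SUBPARTITION s2a
      DATA DIRECTORY = '/FW_data3'
      INDEX DIRECTORY = '/FW_indx3'
  )
);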
