On Thursday 01 June 2006 03:05, Mathieu Bruneau wrote:
> Harish TM wrote:
> > hi...
> >       I need to store something like a couple of million rows in a MySQL
> > table. Is that ok, or do I have to split them up? I intend to index each
> > of the columns that I will need to access, so as to speed up access.
>
> Many people have multi-million-row tables and it's all fine. The only
> thing is that MySQL builds MyISAM tables with a default maximum size of
> 4GB. It's possible to go beyond that, but you need to plan for it if you
> think your table may grow bigger than that!

This can be handled for all table creations by setting MAX_ROWS in the CREATE 
TABLE DDL, or by altering the myisam_data_pointer_size server variable.  
Certain releases of the 4.1 series had a bug with respect to that variable, 
but recent versions have fixed it as far as I know.  It can also be fixed 
after the fact with an ALTER TABLE and myisamchk, but that equates to 
downtime.
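
Roughly, it looks like this (the table and column names are just made up for
illustration; double-check the variable's allowed range for your release):

    -- At create time: raise the limit for this one table
    CREATE TABLE big_log (
        id      BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
        logged  DATETIME NOT NULL,
        msg     VARCHAR(255),
        PRIMARY KEY (id),
        KEY (logged)
    ) ENGINE=MyISAM
      MAX_ROWS = 1000000000
      AVG_ROW_LENGTH = 100;

    -- Or server-wide, for tables created afterwards (needs SUPER)
    SET GLOBAL myisam_data_pointer_size = 7;

    -- After the fact: rebuilds the table, i.e. the downtime mentioned above
    ALTER TABLE big_log MAX_ROWS = 1000000000 AVG_ROW_LENGTH = 100;

    -- Check the resulting limit in the Max_data_length column
    SHOW TABLE STATUS LIKE 'big_log';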

If you have time to wait for 5.1 to mature, its partitioning, plus the 
promised 'MySQL will detect which partitions to search (not yet done)' 
behaviour, would possibly give you a speed boost.
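
For what it's worth, the 5.1 preview syntax looks something like this (again
a made-up table, and the details could still change before it goes GA):

    CREATE TABLE big_log (
        id      BIGINT UNSIGNED NOT NULL,
        logged  DATETIME NOT NULL,
        msg     VARCHAR(255),
        -- the partitioning column has to be part of every unique key
        PRIMARY KEY (id, logged)
    ) ENGINE=MyISAM
    PARTITION BY RANGE (YEAR(logged)) (
        PARTITION p2004 VALUES LESS THAN (2005),
        PARTITION p2005 VALUES LESS THAN (2006),
        PARTITION pmax  VALUES LESS THAN MAXVALUE
    );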

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:    http://lists.mysql.com/[EMAIL PROTECTED]
