Mark Kozikowski wrote:
Hello all,
I have been using MySQL for about 5 years now in a company project.
I store a lot of data, very rapidly into the database.
Presently, I am having a problem where the MySQL server appears to
deny new connections once the database reaches a size of about
10 billion bytes.
I am running a mostly default installation on Fedora Core 4.
We raised the maximum blob size to 1 million bytes for a special
case, but for the failing operations the blobs are only about 1.5K.
I am storing about 3 million records per hour, each averaging
1.5K.
The DB is a single table; four of its columns are integers and
one is a blob.
I am using RHFC 4 with ext3 file system.
After running for about 2.5 hours, MySQL drops the connection and
refuses to allow any further connections to the specified database.
Are there any configuration settings I can adjust, or things I can
look at, that would let my DB grow to hold more data?
Mark Kozikowski
Hi Mark,
First, if you are on a 32-bit architecture, MySQL by default uses
32-bit row pointers for dynamic tables. That creates a "false" but
effective limit of about 4 GB of data per table. I'm not sure whether
that's applicable to your case, but you can easily check by running
SHOW TABLE STATUS and comparing Data_length against Max_data_length.
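As a quick sketch of that check (the table name `log_data` is just a placeholder for your actual table):

```sql
-- Compare the table's current size against its pointer-imposed ceiling.
-- If Max_data_length is near 4294967295 (2^32 - 1), the table is using
-- 4-byte row pointers and will stop growing around 4 GB of data.
SHOW TABLE STATUS LIKE 'log_data';
```

The interesting columns in the output are Data_length (current size) and Max_data_length (the hard ceiling for this table).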
If that's the case, there are a few possible solutions. First, you
could split the table so the blob column lives in a separate table,
but I don't know whether that would be efficient; it depends on your
application, I guess :) Second, you could create the table with
generous MAX_ROWS and AVG_ROW_LENGTH values to tell MySQL to use
bigger pointers. I don't know what impact that has on performance,
but at least you won't be limited to 4 GB of data anymore!
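For example, sized for your workload of roughly 1.5K per row (again assuming a placeholder table name `log_data`; rebuilding an existing large table this way copies all the data and can take a long time):

```sql
-- Hint at ~1 billion rows averaging 1500 bytes so MySQL picks a
-- row pointer wide enough for well beyond 4 GB of table data.
ALTER TABLE log_data
    MAX_ROWS = 1000000000
    AVG_ROW_LENGTH = 1500;
```

The same MAX_ROWS / AVG_ROW_LENGTH options can be given on CREATE TABLE if you are starting fresh, which avoids the rebuild entirely.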
Regards,
--
Mathieu Bruneau
aka ROunofF
===
GPG keys available @ http://rounoff.darktech.org
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]