What host OS are you running? And which file system? MySQL is always
limited by the file size that the host file system can handle.
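
Assuming the table is MyISAM (and 'mytable' below is just a stand-in for
your real table name), one quick check is to compare Data_length against
Max_data_length:

        mysql> SHOW TABLE STATUS LIKE 'mytable'\G

If Max_data_length is the smaller of the two ceilings, the limit is the
table's own data pointer size rather than the file system.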

- michael dykman


On 3/15/07, JP Hindin <[EMAIL PROTECTED]> wrote:

Greetings all;

I have a quandary regarding table limits, and clearly I am not
understanding how this all works together. I have a test database which
needs to keep long-term historical data. The total dataset in this one
table is probably about 5.5GB in size, although since I keep running into
a 4GB table limit that I can't seem to shake, I'm not entirely sure yet.

First off, I'm running 4.1.11-Debian_4sarge7-log. I'm not sure whether you
MySQL chaps are willing to help me with this distribution build, but I
imagine the primary question is fairly version-independent. The OS is
obviously Debian Sarge, running on a recent x86 machine (so it does include
large file support in the kernel).

So, when I first received a 'Table is full' error I looked up the MySQL
documentation and found the section on altering max_rows for a table.
Nice and simple. I ran the following on my DB:
        mysql> ALTER TABLE mytable max_rows=200000000000;
And some four days later when I looked at it, this was on the screen:
        Segmentation fault

I checked the table status, and Max_data_length had not changed. I thought
perhaps I was being too pushy with the max_rows, so I dropped a zero and
tried again - with the same result. About four days in, seg fault. So I
figured perhaps it was getting bent out of shape with a 4.0GB table
already in place, so I removed all rows, optimised the table, and tried
the first query again. Success immediately! SHOW TABLE STATUS gave this:
             Row_format: Dynamic
                   Rows: 0
         Avg_row_length: 0
            Data_length: 0
        Max_data_length: 281474976710655
           Index_length: 1024
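
In case it matters, "tried the first query again" on the emptied table
boiled down to roughly this sequence (the row removal may have been a
TRUNCATE just as easily, and the OPTIMIZE may be overkill):

        mysql> DELETE FROM mytable;
        mysql> OPTIMIZE TABLE mytable;
        mysql> ALTER TABLE mytable max_rows=200000000000;
        mysql> SHOW TABLE STATUS LIKE 'mytable'\G

which is what produced the status above.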

Looks good. Nice high Max_data_length - so I loaded all the data into the
table. Again four days passed for the bulk INSERT to complete, and I ran
SHOW TABLE STATUS again:
             Row_format: Dynamic
                   Rows: 18866709
         Avg_row_length: 224
            Data_length: 4236151548
        Max_data_length: 4294967295
           Index_length: 1141235712
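
If I'm reading those two Max_data_length values right, they're
suspiciously round: 4294967295 is 2^32 - 1 and 281474976710655 is
2^48 - 1, which look to me like the ceilings of a 4-byte and a 6-byte
MyISAM data pointer respectively (that's my interpretation, not something
I've found spelled out in the docs). The arithmetic at least checks out:

        mysql> SELECT POW(2,32)-1 AS four_byte_limit, POW(2,48)-1 AS six_byte_limit;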

And suddenly I'm back to square one. Now I suspect that Max_data_length is
determined by a combination of factors, and that avg_row_length plays into
it. The documentation suggests setting avg_row_length in the ALTER TABLE;
however, it also says:
"You have to specify avg_row_length only for tables with BLOB or TEXT
columns", so I didn't bother, as this table is a combination of INTs,
VARCHARs and DATETIMEs.
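
If that pointer theory is anywhere near right, then - and this is me
guessing at the mechanics, not quoting the manual - the pointer size would
be chosen from max_rows * avg_row_length at CREATE/ALTER time, and my
numbers are well past the 32-bit mark:

        mysql> SELECT 200000000000 * 224 AS target_bytes, POW(2,32)-1 AS four_byte_limit;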

I wanted to check with you wizened lot before I set another query going.
I'm going to assume that running an ALTER with the data already in the DB
is only going to garner me another wasted week and a seg fault, so I think
what I should probably do is clean the table again and run the following:
        mysql> ALTER TABLE mytable max_rows=200000000000 avg_row_length=224;
And then reload all my data and see if that helps.
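
On the theory that a quick check is cheaper than another four-day load,
I'll also verify that Max_data_length has actually moved on the emptied
table before reloading anything:

        mysql> SHOW TABLE STATUS LIKE 'mytable'\G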

Can someone tell me whether my guess is right that avg_row_length is a
factor in the table's Max_data_length, and whether the query above is
going to release me from this hovering 4GB table limit?
Has anyone seen this blasted segfault issue before?

I appreciate any help I can get with this one; I'm obviously missing
something, so flame away. Many thanks.

 - JP



--
- michael dykman
- [EMAIL PROTECTED]

- All models are wrong.  Some models are useful.
