Hi,

We have given up trying to understand why the problem (please see below)
occurs and have moved on to finding the best workaround. We have three
different suggestions. Alternatives 1 and 2 below rest on the assumption
that setting the max_rows parameter to 10,000,000,000 caused the problem.


1. Regenerate the table without max_rows set.

As described below, we have a table with 20G of data (76 million records)
and a set of indexes, of which only one is unique. We create a new table
without max_rows set (using the default) and with only the unique index,
and fill it with all the data. The next step will be to add the other
indexes with one ALTER TABLE statement.
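
For illustration, a rough sketch of what we mean, with made-up table and
column names ("articles" and "id" are only placeholders; "id" stands in
for the column behind the unique index):

  CREATE TABLE articles_new (
    id BIGINT NOT NULL,
    body TEXT,
    created DATETIME,
    UNIQUE KEY (id)
  ) TYPE=MyISAM;                     -- no MAX_ROWS option, default is used

  INSERT INTO articles_new SELECT * FROM articles;

  ALTER TABLE articles_new
    ADD INDEX (created),
    ADD INDEX (body(100));           -- all remaining indexes in one statement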

2. Regenerate the table with max_rows set to 300 million.

This limit is a little less than twice what we expect the maximum to be.
Other than that, this is a copy of number 1 above.
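
The only difference from the sketch under number 1 would be an explicit
MAX_ROWS table option, roughly:

  CREATE TABLE articles_new (
    id BIGINT NOT NULL,
    body TEXT,
    created DATETIME,
    UNIQUE KEY (id)
  ) TYPE=MyISAM MAX_ROWS=300000000;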

3. Dump the table and reload it into a new table with all indexes in place.

We dump the table with the -e parameter set and pipe it into a new table
that already has all indexes in place.
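
Roughly like this, if I remember the mysqldump flags right (-e is
--extended-insert, and -t / --no-create-info should keep the dump from
trying to recreate the table; database and table names are placeholders):

  # the new table, with all its indexes, is created in newdb first
  mysqldump -e -t olddb articles | mysql newdb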



I would be very happy if you could let us know any advantages or
disadvantages you see with these three alternatives.

Best regards

Gunnar Lunde

-------------------------------------
Hello!

I hope someone can help us with a problem we have:

We had a table that was 35G with an index file of about 16G (about 120
million records). We ran into a problem where the database produced a "no
more room in index file" error.

To fix this we regenerated the table and set max_rows to 10,000,000,000
(MySQL changed max_rows to 4,294,967,295, which I guess is an upper limit?).
We stripped the table of all indexes except one unique index and inserted
the data. We used the opportunity to remove old, unneeded data, leaving us
with a table of 20G and an index file of 4G.
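
As a side note, 4,294,967,295 is 2^32 - 1, so it does look like a hard
upper limit for the option. What the server actually stored for the table
can be checked with SHOW TABLE STATUS (the table name is a placeholder):

  SHOW TABLE STATUS LIKE 'articles';
  -- the Rows, Max_data_length and Create_options columns reflect what the
  -- table was effectively created with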

Now, our next step was to add the other indexes. We added them all with one
ALTER TABLE statement. MySQL started with "copy to tmp table" and after a
few days it changed to "repair by keycache", which is odd because we have
set the MySQL variables pretty high (see below), so we had anticipated
"repair by sorting".

When finished, the index file should be about 10G, but when the temporary
MYI file reaches 4.5G it slows down, and when it reaches 5.2G it stops
growing. There is no error message. The CPU usage is minimal and the I/O is
also minimal compared to what we have seen the disk system perform earlier.
(It's a 1+0 RAID with 10 disks, IIRC. There is also plenty of disk space.)
Could it be that setting max_rows to such a high number and then adding the
indexes makes the index tree balancing go astray? Any pointer to a solution
would be very welcome!

All tables are MyISAM. (I know that it might not be the smartest type to
use.)
MySQL version: 3.23.41
OS: Red Hat 7.1, Linux kernel 2.4.9-31enterprise (Red Hat Linux 7.1 2.96-98)
Disk: the disk set is a 1+0 RAID with about 10 15k rpm disks (plenty of
space)


+---------------------------------+-------+
| Variable_name | Value |
+---------------------------------+-------+
| back_log | 50 
| basedir | /usr/ 
| bdb_cache_size | 8388600 
| bdb_log_buffer_size | 262144 
| bdb_home | /data/mysql/ 
| bdb_max_lock | 10000 
| bdb_logdir | 
| bdb_shared_data | OFF 
| bdb_tmpdir | /data/mysql/ 
| bdb_version | Sleepycat Software: Berkeley DB 3.2.9a: (August 11, 2001) 
| binlog_cache_size | 32768 
| character_set | latin1 
| character_sets | latin1 big5 cp1251 cp1257 croat czech danish dec8 dos estonia euc_kr gb2312 gbk german1 greek hebrew hp8 hungarian koi8_ru koi8_ukr latin2 latin5 swe7 usa7 win1250 win1251 win1251ukr ujis sjis tis620 
| concurrent_insert | ON 
| connect_timeout | 5 
| datadir | /data/mysql/ 
| delay_key_write | ON 
| delayed_insert_limit | 100 
| delayed_insert_timeout | 300 
| delayed_queue_size | 1000 
| flush | OFF 
| flush_time | 0 
| have_bdb | YES 
| have_gemini | NO 
| have_innodb | NO 
| have_isam | YES 
| have_raid | NO 
| have_ssl | NO 
| init_file | 
| interactive_timeout | 28800 
| join_buffer_size | 131072 
| key_buffer_size | 402649088 
| language | /usr/share/mysql/english/ 
| large_files_support | ON 
| locked_in_memory | OFF 
| log | OFF 
| log_update | OFF 
| log_bin | ON 
| log_slave_updates | OFF 
| log_long_queries | OFF 
| long_query_time | 10 
| low_priority_updates | OFF 
| lower_case_table_names | 0 
| max_allowed_packet | 1047552 
| max_binlog_cache_size | 4294967295 
| max_binlog_size | 1073741824 
| max_connections | 100 
| max_connect_errors | 10 
| max_delayed_threads | 20 
| max_heap_table_size | 16777216 
| max_join_size | 4294967295 
| max_sort_length | 1024 
| max_user_connections | 0 
| max_tmp_tables | 32 
| max_write_lock_count | 4294967295 
| myisam_recover_options | 0 
| myisam_max_extra_sort_file_size | 2000 
| myisam_max_sort_file_size | 2097152000 
| myisam_sort_buffer_size | 1572864000 
| net_buffer_length | 16384 
| net_read_timeout | 30 
| net_retry_count | 10 
| net_write_timeout | 60 
| open_files_limit | 0 
| pid_file | /data/mysql/dlp-news02.pid 
| port | 3306 
| protocol_version | 10 
| record_buffer | 2093056 
| record_rnd_buffer | 2093056 
| query_buffer_size | 0 
| safe_show_database | OFF 
| server_id | 2 
| slave_net_timeout | 3600 
| skip_locking | ON 
| skip_networking | OFF 
| skip_show_database | OFF 
| slow_launch_time | 2 
| socket | /var/lib/mysql/mysql.sock 
| sort_buffer | 2097144 
| sql_mode | 0 
| table_cache | 512 
| table_type | MYISAM 
| thread_cache_size | 8 
| thread_stack | 65536 
| transaction_isolation | READ-COMMITTED 
| timezone | CEST 
| tmp_table_size | 33554432 
| tmpdir | /data/mysql/ 
| version | 3.23.41-log 
| wait_timeout | 28800 




Mvh / Best Regards

Gunnar Lunde


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]
