Re: Slave I/O thread dies, fatal error 1236

2004-09-08 Thread matt ryan
Remigiusz Sokoowski wrote:
matt ryan wrote:
Tobias Asplund wrote:
On Tue, 7 Sep 2004, matt ryan wrote:
I forgot, did you have multiple slaves on multiple machines? If so, 
do they
have identical hardware/drivers?


Multiple slaves on same machine, one works fine
Have you tried distributing replication to other machines? Is that an 
option for you?

Remigiusz

I will set up a separate server as a slave and see if it works; that 
would narrow it down to the master being the problem.

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: What's Faster? MySQL Queries or PHP Loops?

2004-09-08 Thread matt ryan
I use FoxPro to do similar loops.
I've found that I get about 10 queries per second on large tables when 
connecting once and issuing individual SELECT statements via ODBC.

It is much faster if you can narrow the recordset into an array within 
PHP and spool through that; unfortunately I deal with 250+ million rows, 
so hitting the table by an index one row at a time is faster.
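The trade-off above (many single-row lookups versus one set-based query spooled through an array) can be sketched as follows. This is a toy illustration only: Python's sqlite3 stands in for MySQL-over-ODBC, and the table and column names are invented.

```python
import sqlite3

# Toy stand-in: sqlite3 instead of MySQL-over-ODBC, invented table name.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE stat (id INTEGER PRIMARY KEY, qty INTEGER)")
conn.executemany("INSERT INTO stat VALUES (?, ?)",
                 [(i, i * 2) for i in range(1000)])

wanted = [5, 42, 900]

# Row-by-row: one round trip per key (the slow pattern when each lookup
# crosses the network).
per_row = [conn.execute("SELECT qty FROM stat WHERE id = ?", (k,)).fetchone()[0]
           for k in wanted]

# Set-based: one query returns the whole recordset, which the client then
# spools through in memory.
marks = ",".join("?" * len(wanted))
rows = conn.execute(
    f"SELECT id, qty FROM stat WHERE id IN ({marks}) ORDER BY id",
    wanted).fetchall()
set_based = [qty for _, qty in rows]

print(per_row == set_based)  # True
```

Which approach wins depends on how much of the table qualifies: a few indexed lookups beat pulling millions of rows across the wire, as the post notes.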

Matt


Re: Slave I/O thread dies, fatal error 1236

2004-09-07 Thread matt ryan
Still haven't got this fixed, and I'm all out of ideas; the slave has 
been reloaded again today.

gerald_clark wrote:
We have no idea what you are running, or what you are running it on.
matt ryan wrote:
040901 18:36:21  Error reading packet from server: binlog truncated 
in the middle of event (server_errno=1236)
040901 18:36:21  Got fatal error 1236: 'binlog truncated in the 
middle of event' from master when reading data from binary log
040901 18:36:21  Slave I/O thread exiting, read up to log 
'FINANCE-bin.186', position 7517914

2 slaves on one box
both masters have identical config
one slave will not stay running for more than an hour, it seems that 
it will run until it catches up with the master, then it dies.

Any suggestions?  All servers have plenty of free drive space
Oops, sorry:
Windows 2000, MySQL 4.0.20d.
Here's the config on the affected server; it is identical to the 
server that works fine, except for the port number and base/data dirs.

It's interesting to note that if I wait a few minutes after the 
thread dies and issue "start slave", it'll usually start right up and 
run until it's caught up to the master server again.

Matt
skip-locking
set-variable= key_buffer_size=1500M
set-variable=join_buffer_size=512M
set-variable= max_allowed_packet=300M
set-variable= table_cache=512
set-variable=delay_key_write=ALL
set-variable= sort_buffer_size=256M
set-variable= record_buffer=300M
set-variable= thread_cache=8
set-variable=myisam_sort_buffer_size=256M
myisam-recover=BACKUP,FORCE
set-variable=read_buffer_size=300M
set-variable=interactive_timeout=7200
set-variable=wait_timeout=7200
set-variable=thread_concurrency=4
server-id=5
#log-bin
master-host=192.168.1.168
master-port=3306
master-user=repl
master-password=Daredevil22
master-connect-retry=60




Re: Slave I/O thread dies, fatal error 1236

2004-09-07 Thread matt ryan
Tobias Asplund wrote:
On Tue, 7 Sep 2004, matt ryan wrote:
I forgot, did you have multiple slaves on multiple machines? If so, do 
they
have identical hardware/drivers?


Multiple slaves on same machine, one works fine


Slave I/O thread dies, fatal error 1236

2004-09-02 Thread matt ryan
040901 18:36:21  Error reading packet from server: binlog truncated in 
the middle of event (server_errno=1236)
040901 18:36:21  Got fatal error 1236: 'binlog truncated in the middle 
of event' from master when reading data from binary log
040901 18:36:21  Slave I/O thread exiting, read up to log 
'FINANCE-bin.186', position 7517914

2 slaves on one box
both masters have identical config
one slave will not stay running for more than an hour; it seems that it 
will run until it catches up with the master, then it dies.

Any suggestions?  All servers have plenty of free drive space


Re: Slave I/O thread dies, fatal error 1236

2004-09-02 Thread matt ryan
gerald_clark wrote:
We have no idea what you are running, or what you are running it on.
matt ryan wrote:
040901 18:36:21  Error reading packet from server: binlog truncated 
in the middle of event (server_errno=1236)
040901 18:36:21  Got fatal error 1236: 'binlog truncated in the 
middle of event' from master when reading data from binary log
040901 18:36:21  Slave I/O thread exiting, read up to log 
'FINANCE-bin.186', position 7517914

2 slaves on one box
both masters have identical config
one slave will not stay running for more than an hour, it seems that 
it will run until it catches up with the master, then it dies.

Any suggestions?  All servers have plenty of free drive space
Oops, sorry:
Windows 2000, MySQL 4.0.20d.
Here's the config on the affected server; it is identical to the server 
that works fine, except for the port number and base/data dirs.

It's interesting to note that if I wait a few minutes after the thread 
dies and issue "start slave", it'll usually start right up and run until 
it's caught up to the master server again.

Matt
skip-locking
set-variable= key_buffer_size=1500M
set-variable=join_buffer_size=512M
set-variable= max_allowed_packet=300M
set-variable= table_cache=512
set-variable=delay_key_write=ALL
set-variable= sort_buffer_size=256M
set-variable= record_buffer=300M
set-variable= thread_cache=8
set-variable=myisam_sort_buffer_size=256M
myisam-recover=BACKUP,FORCE
set-variable=read_buffer_size=300M
set-variable=interactive_timeout=7200
set-variable=wait_timeout=7200
set-variable=thread_concurrency=4
server-id=5
#log-bin
master-host=192.168.1.168
master-port=3306
master-user=repl
master-password=Daredevil22
master-connect-retry=60



Re: 1 day 28 min insert

2004-09-01 Thread matt ryan
Mikhail Entaltsev wrote:
You _could_ try adding an identical primary key to the stat_in table as
you have on the 321st_stat table. However, since we need all of the rows
from stat_in in the results, I am not sure that it will help speed up the
join (because it's a left join). Even though I think the index would be
ignored, it is worth a try to see if it would make a difference with a new
EXPLAIN.
   

Tried this: set up a matching index on the temp table. It took 17 hours 
to load the input file into the temp stat_in table, so it's definitely 
not going to save me any time.

I've used enable/disable keys before, but there are two problems: one, it 
was only 5% faster, and two, I will have primary-key violations when I 
enable the primary key, so it won't enable it; at least that's my 
understanding of it.

Matt


Re: 1 day 28 min insert

2004-08-27 Thread matt ryan
Mikhail Entaltsev wrote:
Hi,
insert into 321st_stat select * from stat_in group by primary key fields
from 321st_stat table;
did you try to use this query?
Best regards,
Mikhail.
 

Ran it; it took at least 24 hours. It finished but never gave me the 
total time: when I checked the server, mysql had dropped me back to the 
command prompt, with no time or number of records :(

Matt


Re: 1 day 28 min insert

2004-08-27 Thread matt ryan
Mikhail Entaltsev wrote:
Could you execute show create table 321st_stat and show create table
stat_in
and send results back?
 

I have no keys on the temp table, stat_in. Do you think adding keys on 
the whole primary key would be faster?

I wasn't sure if you could join on MySQL keys; the key is called "primary 
key", so would it be a.primary key = b.primary key?


mysql> explain select a.* from stat_in a left outer join 321st_stat b on
a.don=b.don and a.dic=b.dic and a.niin=b.niin and a.sta=b.sta and
a.qty=b.qty and a.fr_ric=b.fr_ric and a.suf=b.suf and a.dte_txn=b.dte_txn
where isnull(b.don);

| id | select_type | table | type   | possible_keys                | key     | key_len | ref                                                                                                | rows     | Extra                   |
|  1 | SIMPLE      | a     | ALL    | NULL                         | NULL    | NULL    | NULL                                                                                               | 77269086 |                         |
|  1 | SIMPLE      | b     | eq_ref | PRIMARY,don,niin,dic,dte_txn | PRIMARY | 39      | finlog.a.dic,finlog.a.niin,finlog.a.fr_ric,finlog.a.don,finlog.a.suf,finlog.a.dte_txn,finlog.a.sta | 1        | Using where; Not exists |
2 rows in set (0.11 sec)

| 321st_stat | CREATE TABLE `321st_stat` (
 `dic` char(3) NOT NULL default '',
 `fr_ric` char(3) NOT NULL default '',
 `niin` char(11) NOT NULL default '',
 `ui` char(2) NOT NULL default '',
 `qty` char(5) NOT NULL default '',
 `don` char(14) NOT NULL default '',
 `suf` char(1) NOT NULL default '',
 `dte_txn` char(5) NOT NULL default '',
 `ship_to` char(3) NOT NULL default '',
 `sta` char(2) NOT NULL default '',
 `lst_sos` char(3) NOT NULL default '',
 `esd` char(4) NOT NULL default '',
 `stor` char(3) NOT NULL default '',
 `d_t` char(4) NOT NULL default '',
 `ctasc` char(10) NOT NULL default '',
 PRIMARY KEY  (`dic`,`niin`,`fr_ric`,`don`,`suf`,`dte_txn`,`sta`),
 KEY `don` (`don`),
 KEY `niin` (`niin`),
 KEY `stor` (`stor`),
 KEY `dic` (`dic`),
 KEY `ctasc` (`ctasc`),
 KEY `dte_txn` (`dte_txn`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1 MAX_ROWS=900,000,000 PACK_KEYS=1 |
1 row in set (0.03 sec)
| stat_in | CREATE TABLE `stat_in` (
 `dic` char(3) NOT NULL default '',
 `fr_ric` char(3) NOT NULL default '',
 `niin` char(11) NOT NULL default '',
 `ui` char(2) NOT NULL default '',
 `qty` char(5) NOT NULL default '',
 `don` char(14) NOT NULL default '',
 `suf` char(1) NOT NULL default '',
 `dte_txn` char(5) NOT NULL default '',
 `ship_to` char(3) NOT NULL default '',
 `sta` char(2) NOT NULL default '',
 `lst_sos` char(3) NOT NULL default '',
 `esd` char(4) NOT NULL default '',
 `stor` char(3) NOT NULL default '',
 `d_t` char(4) NOT NULL default '',
 `ctasc` char(10) NOT NULL default ''
) ENGINE=MyISAM DEFAULT CHARSET=latin1 MAX_ROWS=99,999,999 PACK_KEYS=1 |


1 day 28 min insert

2004-08-19 Thread matt ryan
I think Oracle parallel query is calling me.
110,832,565 stat records
77,269,086 on the weekly update. I get small daily files, but daily SQLs 
don't work very well and miss records; in this case it missed 563 records.

mysql> update stat_in set ctasc='321ST';
Query OK, 77269086 rows affected (24 min 17.60 sec)
Rows matched: 77269086  Changed: 77269086  Warnings: 0

mysql> insert ignore into 321st_stat select * from stat_in;
Query OK, 563 rows affected (1 day 28 min 35.95 sec)
Records: 77269086  Duplicates: 77268523  Warnings: 0

I just can't deal with speeds this slow; an insert into a table with a 
primary key that tosses out almost all records shouldn't take this long.



Re: 1 day 28 min insert

2004-08-19 Thread matt ryan

One alternative, since you know you don't have many records to insert,
is to pull the IDs of the missing records and insert just those.  Do an
outer join on the two tables (joining on the primary key), get a list
of the IDs of records in stat_in but not in 321st_stat, and add a
WHERE id IN (list,of,ids) clause to the end of your INSERT ... SELECT
statement.  If you're running 4.1, you can use a subquery and embed the
first query directly in the INSERT.
 

Running 4.0.x.
Something like: select a.*, b.* from a left outer join b on 
a.col1=b.col1 and a.col2=b.col2 and a.col3=b.col3 where b.col1 is null, 
into a temp table,

then insert from the temp table into table a.
I think I tried this once, but it ran all day 

Is there a way to join on an index, instead of on each column?   The 
primary key is 6-8 columns, I forget
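The anti-join pattern described above can be illustrated like this. This is a toy sketch only: Python's sqlite3 stands in for MySQL 4.0, a two-column key stands in for the real 6-8 column primary key, and the table names are invented.

```python
import sqlite3

# Illustration only: sqlite3 instead of MySQL 4.0, a two-column key in
# place of the real 6-8 column primary key, invented table names.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE target (c1 TEXT, c2 TEXT, qty INTEGER, "
             "PRIMARY KEY (c1, c2))")
conn.execute("CREATE TABLE staging (c1 TEXT, c2 TEXT, qty INTEGER)")
conn.executemany("INSERT INTO target VALUES (?, ?, ?)",
                 [("a", "1", 10), ("b", "2", 20)])
conn.executemany("INSERT INTO staging VALUES (?, ?, ?)",
                 [("a", "1", 10), ("b", "2", 20), ("c", "3", 30)])

# Anti-join: LEFT JOIN on every key column, keep rows with no match.
missing = conn.execute("""
    SELECT s.c1, s.c2, s.qty
    FROM staging s
    LEFT JOIN target t ON s.c1 = t.c1 AND s.c2 = t.c2
    WHERE t.c1 IS NULL
""").fetchall()

# Insert only the missing rows -- the "temp table" step from the post.
conn.executemany("INSERT INTO target VALUES (?, ?, ?)", missing)
print(len(missing),
      conn.execute("SELECT COUNT(*) FROM target").fetchone()[0])  # 1 3
```

There is no way to join "on the index" directly; the join must name each key column, but the optimizer can still use the composite primary key for the lookup, as the EXPLAIN earlier in the thread shows.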



Re: 1 day 28 min insert

2004-08-19 Thread matt ryan
Andrew Pattison wrote:
I'm guessing that you have indexes on the 321st_stat table? If this is 
the case, try dropping them before you do the insert, then rebuilding 
them. MySQL is known to be slow at doing bulk inserts on indexed 
tables. Also, updates are much faster than inserts, since with inserts 
there are many more disk I/Os required.

Cheers
The table has a large primary key to keep duplicates out, so I can't drop 
the index.

Also, reindexing the table takes all day, so dropping is not an option.
Would I be better off doing a REPLACE on all 17M new records, 
or only inserting the 500 new records using INSERT IGNORE?



Re: 1 day 28 min insert

2004-08-19 Thread matt ryan
The table is 9 gig, and the index is 8 gig.
Unfortunately the primary unique key is almost every column; if I were 
to make it one column using concat, it would be huge.

I tried making those fields a hash, but it did not work; I had duplicate 
hashes for non-duplicate records!!
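For what it's worth, the duplicate-hash symptom is consistent with using AES_ENCRYPT(), which is keyed encryption rather than a digest. A hedged sketch of a deterministic key hash (MD5 over delimiter-joined key columns; the key values below are invented):

```python
import hashlib

# AES_ENCRYPT() is keyed encryption, not a digest, so it is the wrong
# tool for a hash column. A real hash (MD5 here) over the key columns is
# deterministic. The delimiter matters: without it, ("ab", "c") and
# ("a", "bc") would concatenate to the same string and collide.
def key_hash(*cols: str) -> str:
    joined = "\x1f".join(cols)  # unit-separator delimiter between columns
    return hashlib.md5(joined.encode("latin-1")).hexdigest()

# Invented key values, roughly shaped like the dic/niin/fr_ric columns:
h1 = key_hash("A0A", "12345678901", "B17")
h2 = key_hash("A0A", "12345678901", "B17")
h3 = key_hash("A0A", "1234567890", "1B17")  # same characters, split differently
print(h1 == h2, h1 == h3)  # True False
```

The same idea works server-side with MySQL's MD5() function over CONCAT of the key columns with a separator, though a hash column can still only detect candidate duplicates, not replace the unique key.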

Matt


Re: Help, slave wont stay running!

2004-08-17 Thread matt ryan
Still no solution on this.
Anybody have any ideas?
It's not the network or the hard drive; it's got to be some type of bug 
in my config files, attached in the original email.

The master is on 4.0.20a and the slave is on 4.0.20a



Re: Replace delayed locks table

2004-08-13 Thread matt ryan
Replace deletes and inserts.
?
what do you mean?


Re: Help, slave wont stay running!

2004-08-13 Thread matt ryan
Anybody else have any ideas?
I can't keep the slave up.
The only thing I have not tried is upgrading to 4.0.20; however, nothing 
changed to cause this problem.



Re: Help, slave wont stay running!

2004-08-13 Thread matt ryan
I reset the master (flush logs, reset master); show master status
shows FINANCE-bin.186 at position 79,
so I started the slave:
CHANGE MASTER TO
MASTER_HOST='192.168.1.168',
MASTER_USER='repl',
MASTER_PASSWORD='Daredevil22',
MASTER_LOG_FILE='FINANCE-bin.186',
MASTER_LOG_POS=79;
start slave;
and I get this error after a few seconds..
040813  8:55:15  Slave SQL thread initialized, starting replication in 
log 'FINANCE-bin.186' at position 79, relay log 
'.\databasebackup-relay-bin.001' position: 4
040813  8:55:15  Slave I/O thread: connected to master 
'[EMAIL PROTECTED]:3306',  replication started in log 'FINANCE-bin.186' 
at position 79
040813  8:55:39  Error reading packet from server: binlog truncated in 
the middle of event (server_errno=1236)
040813  8:55:39  Got fatal error 1236: 'binlog truncated in the middle 
of event' from master when reading data from binary log
040813  8:55:39  Slave I/O thread exiting, read up to log 
'FINANCE-bin.186', position 79



Re: Help, slave wont stay running!

2004-08-13 Thread matt ryan
More info..
I don't see anything wrong with the binlog the slave has:

E:\mysql\data> mysqlbinlog databasebackup-relay-bin.001
/*!40019 SET @@session.max_insert_delayed_threads=0*/;
# at 4
#691231 19:00:00 server id 1  log_pos 0 Rotate to 
FINANCE-bin.186  pos:79
# at 46
#691231 19:00:00 server id 1  log_pos 0 Rotate to 
FINANCE-bin.186  pos:79

E:\mysql\data


Re: Replace delayed locks table

2004-08-13 Thread matt ryan

Replace does a delete followed by an insert.
Ahh, I'm testing innodb on our tables with this problem


Re: Help, slave wont stay running!

2004-08-13 Thread matt ryan
[EMAIL PROTECTED] wrote:
Have you considered that a proxy server may be in the way. I have been 
watching this thread but I can't remember if you said anything about 
your network connectivity (sorry!). I have seen several programs make 
what they thought was a connection then fail because they don't know 
they are connecting through a proxy and not the real server. Also if 
your proxy is dropping your session, it could cause the same 
interrupted behavior.

my 2 cents
Both servers are connected to the same switch, no proxy servers between 
them.

The slave will connect and will process all the way up to the current 
event; as soon as it hits the current event it dies. All I have to do is 
wait 5 minutes for more events to build up, then start the slave, and it 
takes off again.
Matt


Re: Replace delayed locks table

2004-08-13 Thread matt ryan
matt ryan wrote:

Replace does a delete followed by an insert.
Ahh, I'm testing innodb on our tables with this problem
I've switched to InnoDB, but performance isn't very good.
While the insert runs, here's what I get for performance:

select count(*) from rondon;
1 row in .13 sec
select count(*) from rondon;
1 row in 21.88 sec
select count(*) from rondon;
1 row in 42.47 sec
select count(*) from rondon;
1 row in 1 min 47.69 sec

Not sure why the first was so fast; the rest SUCK.


Re: Replace delayed locks table

2004-08-13 Thread matt ryan
matt ryan wrote:

Replace does a delete followed by an insert.
Ahh, I'm testing innodb on our tables with this problem
D'oh, another problem:
InnoDB has no MERGE option. I have too much data, and the only way to 
deal with it is to partition the data and then tie it together with 
MERGE tables.

Unfortunately InnoDB will not work for me :(
Anybody know if SQL Server Desktop supports what I need? I know Oracle 
does, but the cost is an issue; MaxDB costs too much too.



Help, slave wont stay running!

2004-08-12 Thread matt ryan
I can't keep the slave up for more than 10 minutes;
I'm constantly getting these errors:
040812 10:32:25  Error reading packet from server: binlog truncated in 
the middle of event (server_errno=1236)
040812 10:32:25  Got fatal error 1236: 'binlog truncated in the middle 
of event' from master when reading data from binary log
040812 10:32:25  Slave I/O thread exiting, read up to log 
'FINANCE-bin.185', position 284963878

both servers have plenty of free space
Here is the master setup..
skip-locking
set-variable= key_buffer_size=1000M
set-variable=bulk_insert_buffer_size=256M
set-variable=delay_key_write=ALL
set-variable=join_buffer_size=256M
set-variable= max_allowed_packet=256M
set-variable= table_cache=512
set-variable= sort_buffer_size=256M
set-variable=tmp_table_size=256M
set-variable= record_buffer=256M
set-variable= thread_cache=8
set-variable=myisam_sort_buffer_size=256M
myisam-recover=BACKUP,FORCE
set-variable=read_buffer_size=256M
set-variable=interactive_timeout=7200
set-variable=wait_timeout=7200
here is the slave setup
skip-locking
set-variable= key_buffer_size=1500M
set-variable=bulk_insert_buffer_size=512M
set-variable=delay_key_write=ALL
set-variable=join_buffer_size=512M
set-variable= max_allowed_packet=384M
set-variable= table_cache=512
set-variable= sort_buffer_size=256M
set-variable=tmp_table_size=400M
set-variable= record_buffer=384M
set-variable= thread_cache=8
set-variable=myisam_sort_buffer_size=256M
myisam-recover=BACKUP,FORCE
set-variable=read_buffer_size=384M
set-variable=interactive_timeout=7200
set-variable=wait_timeout=7200
I'm out of ideas. I've played with buffer sizes and packet sizes, but I 
still get the same error.

My other master/slave has no problems at all; the slave is the same 
server (one box that's a slave for two sites).



Re: Help, slave wont stay running!

2004-08-12 Thread matt ryan
Victor Pendleton wrote:
Can you reset the slave to read the next event in its relay log? If this is not
possible, is refreshing the data from the master a viable option?
 

I can start the slave, and it runs a little while, then stops again.
I can refresh the data from the master; I've done it 25 times at least, 
3 times a week. The database is 90 gig, so it's not very fun!

Every time I resync them, I'll reset the master first, resync, set the 
slave to start on the new master info, and then start the slave; boom, 
it fails in 15 minutes.



Re: Help, slave wont stay running!

2004-08-12 Thread matt ryan
I deleted every table off the slave and reloaded them; I do this twice 
a week because it won't replicate.

The master server has a check and optimize every Sunday.
I had a similar situation one week ago. Found one of the tables (MyISAM) had a 
corrupt index. After fixing it, everything was fine again.

Regards.
 




Replace delayed locks table

2004-08-12 Thread matt ryan
The SQL is: replace delayed into table (c1, c2, c3) select c1, c2, c3 
from the temp table.

This takes 2 hours to complete; the temp table is rather large.
The table being updated is locked the whole time; all web requests are 
blocked and the pages time out.

Is there any way to get this to run without locking the whole table? I 
thought with MyISAM it would only lock a table if you delete records; 
when you insert records, it locks it to fill the gaps.

If I need to switch to another table type, that's an option; having 
locked tables is NOT an option.
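One hedged idea: break the two-hour REPLACE into small batches so the table lock is released between chunks and web reads can interleave. This is a sketch only; sqlite3 and all table/column names here are stand-ins for illustration.

```python
import sqlite3

# Hedged sketch: copy the staging rows in small batches, committing
# between chunks so the lock is dropped and readers can get in.
# sqlite3 and all names here are stand-ins for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE live (c1 TEXT PRIMARY KEY, c2 TEXT, c3 TEXT)")
conn.execute("CREATE TABLE staging (c1 TEXT, c2 TEXT, c3 TEXT)")
conn.executemany("INSERT INTO staging VALUES (?, ?, ?)",
                 [(f"k{i}", "x", "y") for i in range(10_000)])

CHUNK = 1_000
rows = conn.execute("SELECT c1, c2, c3 FROM staging").fetchall()
for start in range(0, len(rows), CHUNK):
    # REPLACE keeps the delete-then-insert semantics discussed in the
    # thread, but each batch holds the lock only briefly.
    conn.executemany("REPLACE INTO live VALUES (?, ?, ?)",
                     rows[start:start + CHUNK])
    conn.commit()  # lock released here; web reads can interleave

print(conn.execute("SELECT COUNT(*) FROM live").fetchone()[0])  # 10000
```

The total job takes as long or longer, but readers are only ever blocked for the duration of one chunk instead of the whole two hours.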

Thanks in advance Matt


Re: Help, slave wont stay running!

2004-08-12 Thread matt ryan
Check it out:
mysql> start slave;
Query OK, 0 rows affected (0.00 sec)
mysql> start slave;
ERROR 1198: This operation cannot be performed with a running slave, run 
SLAVE STOP first
mysql> start slave;
ERROR 1198: This operation cannot be performed with a running slave, run 
SLAVE STOP first
mysql> start slave;
Query OK, 0 rows affected (0.00 sec)

mysql> start slave;
Query OK, 0 rows affected (0.00 sec)

I can start the slave over and over; it does one event, stops, I start 
it and it does one event, then stops, over and over and over.

I just keep running "start slave" really fast to get through the updates.


Re: Large 30 GB Database

2004-07-28 Thread matt ryan
Should I even attempt this using mysql?
Has anyone played with this much data in mysql?
I've got two 100-gig databases in MySQL, and slave replication on both 
of them; the only time I have a problem is table scans, that much data 
will be slow.



binlog truncated in the middle of event

2004-07-28 Thread matt ryan
One of my slaves has decided to stop replicating.
Every time I reset it, I get this:
040728  8:46:46  Error reading packet from server: binlog truncated in 
the middle of event (server_errno=1236)
040728  8:46:46  Got fatal error 1236: 'binlog truncated in the middle 
of event' from master when reading data from binary log
040728  8:46:46  Slave I/O thread exiting, read up to log 
'FINANCE-bin.185', position 419216838

All slaves share the same config, but this one refuses to work. The 
master server has plenty of drive space. I have made changes to the 
buffer sizes; I thought one of those could play into it.

Anyone have a suggestion for a fix?
Both are 4.0.17 (found bugs in the current version); the master is on 
Windows 2000, the slave is on Server 2003.

From the main server..
+---------------------------------+------------+
| Variable_name                   | Value      |
+---------------------------------+------------+
| binlog_cache_size               | 32768      |
| bulk_insert_buffer_size         | 536870912  |
| delayed_queue_size              | 1000       |
| innodb_additional_mem_pool_size | 1048576    |
| innodb_buffer_pool_size         | 8388608    |
| innodb_log_buffer_size          | 1048576    |
| innodb_log_file_size            | 5242880    |
| join_buffer_size                | 536866816  |
| key_buffer_size                 | 1572864000 |
| max_binlog_cache_size           | 4294967295 |
| max_binlog_size                 | 1073741824 |
| max_heap_table_size             | 16777216   |
| max_join_size                   | 4294967295 |
| max_relay_log_size              | 0          |
| myisam_max_extra_sort_file_size | 268435456  |
| myisam_max_sort_file_size       | 2147483647 |
| myisam_sort_buffer_size         | 268435456  |
| query_alloc_block_size          | 8192       |
| query_cache_size                | 0          |
| query_prealloc_size             | 8192       |
| range_alloc_block_size          | 2048       |
| read_buffer_size                | 268431360  |
| read_rnd_buffer_size            | 262144     |
| sort_buffer_size                | 268435448  |
| thread_cache_size               | 8          |
| tmp_table_size                  | 419430400  |
| transaction_alloc_block_size    | 8192       |
| transaction_prealloc_size       | 4096       |
+---------------------------------+------------+
From the slave server..
+---------------------------------+------------+
| Variable_name                   | Value      |
+---------------------------------+------------+
| binlog_cache_size               | 32768      |
| bulk_insert_buffer_size         | 8388608    |
| delayed_queue_size              | 1000       |
| innodb_additional_mem_pool_size | 1048576    |
| innodb_buffer_pool_size         | 8388608    |
| innodb_log_buffer_size          | 1048576    |
| innodb_log_file_size            | 5242880    |
| join_buffer_size                | 536866816  |
| key_buffer_size                 | 1572864000 |
| max_binlog_cache_size           | 4294967295 |
| max_binlog_size                 | 1073741824 |
| max_heap_table_size             | 16777216   |
| max_join_size                   | 4294967295 |
| max_relay_log_size              | 0          |
| myisam_max_extra_sort_file_size | 268435456  |
| myisam_max_sort_file_size       | 2147483647 |
| myisam_sort_buffer_size         | 268435456  |
| query_alloc_block_size          | 8192       |
| query_cache_size                | 0          |
| query_prealloc_size             | 8192       |
| range_alloc_block_size          | 2048       |
| read_buffer_size                | 314568704  |
| read_rnd_buffer_size            | 262144     |
| sort_buffer_size                | 268435448  |
| thread_cache_size               | 8          |
| tmp_table_size                  | 33554432   |
| transaction_alloc_block_size    | 8192       |
| transaction_prealloc_size       | 4096       |
+---------------------------------+------------+


Re: binlog truncated in the middle of event

2004-07-28 Thread matt ryan
Update on this, I found that when the slave stops, all I have to do is 
start the slave and it's good again

Here's what the log shows.. the only thing I did was start slave and 
it picked right back up

040728  9:25:13  Error reading packet from server: binlog truncated in 
the middle of event (server_errno=1236)
040728  9:25:13  Got fatal error 1236: 'binlog truncated in the middle 
of event' from master when reading data from binary log
040728  9:25:13  Slave I/O thread exiting, read up to log 
'FINANCE-bin.186', position 171309530
040728  9:29:40  Slave I/O thread: connected to master 
'[EMAIL PROTECTED]:3306',  replication started in log 'FINANCE-bin.186' 
at position 171309530
repeats removed
040728  9:29:56  Error reading packet from server: Lost connection to 
MySQL server during query (server_errno=2013)
040728  9:29:56  Slave I/O thread: Failed reading log event, 
reconnecting to retry, log 'FINANCE-bin.186' position 171309530
040728  9:29:58  Error reading packet from server: Lost connection to 
MySQL server during query (server_errno=2013)
040728  9:29:58  Slave I/O thread: Failed reading log event, 
reconnecting to retry, log 'FINANCE-bin.186' position 171309530
040728  9:30:26  Slave: load data infile on table 'e47wk_in' at log 
position 979433898 in log 'FINANCE-bin.185' produced 601027 warning(s). 
Default database: 'finance'



Re: Mysql growing pains, 4 days to create index on one table!

2004-07-22 Thread matt ryan

Split the myisam table into seperate tables.  We will
call each table a bucket.
Create a MERGE table of all of them. For selecting the
data.
When inserting, use a hash function on your primary
key values to determine which bucket to insert into. 
If you almost always select by primary key, then you
can optimize your select queries to only look at the
correct bucket as well, and to only use the merge
table when you aren't selecting by primary key.

This will speed your inserts because instead of 258M
rows to search through for each insert there are only
8M if you use 32 buckets.  The biggest benefit is that
you could also insert in parallel using multiple mysql
connections, because you could calculate the bucket as
an additional column in your temporary table, then do
the inserts for all the buckets the same time.
 

I hit duplicate hashes for unique records, not sure why; I think I used 
aes_encrypt. How do you recommend creating a hash column via SQL?
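The bucketing idea from the quoted advice can be sketched like this, using MD5 modulo the bucket count. The bucket count and key values below are invented for illustration.

```python
import hashlib

# Pick a bucket (one underlying table of the MERGE set) from a hash of
# the primary-key columns. Bucket count and key values are invented.
N_BUCKETS = 32

def bucket_for(*key_cols: str) -> int:
    digest = hashlib.md5("\x1f".join(key_cols).encode("latin-1")).digest()
    return int.from_bytes(digest[:4], "big") % N_BUCKETS

b = bucket_for("A0A", "12345678901", "B17")
print(0 <= b < N_BUCKETS)                            # True
print(b == bucket_for("A0A", "12345678901", "B17"))  # True: same key, same bucket
```

A real digest like MD5 is deterministic, so the duplicate-hash problem seen with aes_encrypt (which is keyed encryption, not a hash) should not recur, and every insert only has to probe one bucket's index.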

I already split the data into separate tables and use a MERGE; it 
really didn't speed things up that much.



Re: Fw: Tuning MySQL for Large Database

2004-07-22 Thread matt ryan

I went over your data. This is what I noticed first:
| Select_full_join | 0|
| Select_full_range_join   | 0|
| Select_range | 1|
| Select_range_check   | 0|
| Select_scan  | 301  |
 

What command will provide this data?


Re: Fw: Tuning MySQL for Large Database

2004-07-22 Thread matt ryan
Hmm
I'm guessing my stats aren't too good: lots of full table scans, but this 
is to be expected; my users can query any table by any column, and I 
can't index all column combinations.

Variable_name            Value
Select_full_join         0
Select_full_range_join   24
Select_range             145321
Select_range_check       0
Select_scan              29402
Sort_Scan                15360
Key_reads                37811885

and on the other big db..

Variable_name            Value
Select_full_join         535
Select_full_range_join   0
Select_range             1098
Select_range_check       0
Select_scan              10443
Sort_Scan                2464
Key_reads                20282002



Re: Fw: Tuning MySQL for Large Database

2004-07-22 Thread matt ryan
Resend; Firefox did not send it the way it looked when I typed it!
I'm guessing my stats aren't too good: lots of full table scans, but this
is to be expected; my users can query any table by any column, and I
can't index all column combinations.

Variable_name            Value
Select_full_join         0
Select_full_range_join   24
Select_range             145321
Select_range_check       0
Select_scan              29402
Sort_Scan                15360
Key_reads                37811885

and on the other big db..

Variable_name            Value
Select_full_join         535
Select_full_range_join   0
Select_range             1098
Select_range_check       0
Select_scan              10443
Sort_Scan                2464
Key_reads                20282002

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: Fw: Tuning MySQL for Large Database

2004-07-22 Thread matt ryan
I've found the slow query log is useless to me; it's 50 meg right now.
Is there a tool that will identify common queries? I could probably
come up with some SQL if I load it into a table, but it would take
quite a while to sort out.

I posted a request on the MySQL bugtraq to move it to a table instead of
that raw file, but they closed the request; guess they didn't like that idea.
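The load-it-into-a-table idea might look roughly like this (the path and layout are guesses on my part; the slow log is multi-line per entry, so this is crude and needs cleanup):

```sql
-- Pull raw slow-log lines into a table, then group repeated statements.
-- Caveat: the default field terminator is tab, so lines containing tabs
-- would be truncated at the tab.
CREATE TABLE slow_log_raw (line TEXT NOT NULL);
LOAD DATA INFILE '/path/to/mysql-slow.log'
  INTO TABLE slow_log_raw
  LINES TERMINATED BY '\n' (line);
SELECT line, COUNT(*) AS hits
  FROM slow_log_raw
 WHERE LOWER(line) LIKE 'select%'
 GROUP BY line
 ORDER BY hits DESC
 LIMIT 20;
```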

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: How do I import a .dmp file?

2004-07-22 Thread matt ryan

David
Did you look at MYSQL LOAD DATA INFILE ???
doc is available at
http://dev.mysql.com/doc/mysql/en/LOAD_DATA.html
regards,
LOAD DATA INFILE only works with a text file going into a table; if the
data is in another format, like raw Oracle or EBCDIC, it won't work, and
you'll need to convert it to ASCII first.
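Once it's ASCII, the load itself is one statement; the delimiters below are just an assumption about how the converted file is laid out, and the names are placeholders.

```sql
-- Assumes a comma-delimited ASCII file with one record per line.
LOAD DATA INFILE '/data/converted.txt'
  INTO TABLE target_table
  FIELDS TERMINATED BY ','
  LINES TERMINATED BY '\n';
```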

Matt
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: How do I import a .dmp file?

2004-07-22 Thread matt ryan

OK, so if I can convert it into ascii, then it will be
a text file, which I can import using the instructions
at http://dev.mysql.com/doc/mysql/en/LOAD_DATA.html,
right?
Thanks.
 

Yep, just have the table structure match the ascii file and load it in
Matt
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: Fw: Tuning MySQL for Large Database

2004-07-22 Thread matt ryan
There is a perl script that comes with MySQL called mysqldumpslow.  
You can just run it on your slow log and it will output summary 
statistics about the slow log.

I saw that in the docs, but I definitely don't want to install Perl on a
production server, and I never looked to see if I could run it offline. I
only have MS boxes, no Linux here; none of our techs know Linux well
enough to move to it.

Matt
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: Mysql growing pains, 4 days to create index on one table!

2004-07-21 Thread matt ryan
I load all the data into a table with no keys,
then I insert this data into a table with 225 million records; this
large table has the primary key, and this step is what takes a LONG time.
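In SQL the flow looks roughly like this (table and file names are placeholders):

```sql
-- Step 1: bulk-load the daily file into a keyless staging table.
TRUNCATE TABLE temp_load;
LOAD DATA INFILE '/data/daily_extract.txt' INTO TABLE temp_load;

-- Step 2: merge into the big keyed table; rows that violate the
-- primary key are silently skipped. This second step is the slow part.
INSERT IGNORE INTO history_table SELECT * FROM temp_load;
```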

Matt
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: Mysql growing pains, 4 days to create index on one table!

2004-07-21 Thread matt ryan
Lachlan Mulcahy wrote:
MySQL version: 4.0.18
Server OS: Windows 2000 or 2003
Memory: 2 gig
CPU(s): dual 2.6-3 GHz Xeon, 500-2mb cache (CPU load is low)
Disks (RAIDs and independent disk speed/types): 8x72 gig 15,000 rpm SCSI
U320 RAID 5, Dell PERC setup


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: Mysql growing pains, 4 days to create index on one table!

2004-07-21 Thread matt ryan
Lopez David E-r9374c wrote:
Since you have a temp table created (no keys I assume), use the command
 mysqldump -v -e -n -t dbname tablename > filename.sql
 

This creates a file that inserts the records back into the same table,
and it also does not do an insert ignore.
I need the records to go into the historical table, with an insert ignore.
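If the dump file could be rewritten on the way in, something like this might work (untested on a live server; all db and table names are placeholders):

```shell
# Sketch only: dump the keyless temp table as extended inserts, rewrite
# them into INSERT IGNORE statements against the history table, replay:
#
#   mysqldump -v -e -n -t dbname temp_table \
#     | sed 's/INSERT INTO `temp_table`/INSERT IGNORE INTO `history_table`/' \
#     | mysql -u matt -p dbname
#
# The sed rewrite itself, demonstrated on one sample statement:
echo 'INSERT INTO `temp_table` VALUES (1,2),(3,4);' \
  | sed 's/INSERT INTO `temp_table`/INSERT IGNORE INTO `history_table`/'
```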
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: Mysql growing pains, 4 days to create index on one table!

2004-07-21 Thread matt ryan
Do you ever delete from this table?
The temp table is truncated before the EBCDIC file is loaded.
Have you removed the unnecessary duplicate key on the first column of your primary key?
Have not touched the DIC index yet; I need a backup server to change
indexes, as it would take the main server down for too long, and it won't be
that big an increase since it's only a 3-character index. I also do joins on
that field to other tables, so I was hesitant to remove that index.
Can you post the results from show variables?
Nope, the list won't let me send an email that big.
I did put it on the web though; here are the results from show variables:
http://www.geekopolis.com/Query_Result.txt
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: Mysql growing pains, 4 days to create index on one table!

2004-07-16 Thread matt ryan
Donny Simonton wrote:
Matt,
I've been reading this thread for a while and at this point, I would say
that you would need to provide the table structures and queries that you are
running.
For example, we have one table that has 8 billion rows in it and is close to
100 gigs, and we can hammer it all day long without any problems.  It really
depends on how you are doing things.
But as far as you mentioning about mysql not using multiple indexes, it
does.  You just have to create an index on multiple fields at one time.
I've got tables with 10 fields in one index, now the trick with mysql is
that you must use all top 10 fields in your where clause for mysql to really
take advantage of the index.
But I would definitely send the list your table structure with your indexes
and some of your selects and inserts.  You can always change the names of
things if you don't want people to know the names of everything.
Just my 2 cents.
 

Original email has the table structure; query speed is not an issue (it
is, but I will deal with that later).

The issue is insert speed. I get 150k-1M records daily; of these, only
5-10% are new records. I load the EBCDIC file into a temp table and
then do insert ignore into historytable select * from temp table.

And I can't create multiple-field indexes; I would need 100 indexes on a
table, and the index size already exceeds the table size. I tried to add an
index on date, but gave up because it ran for 2 days and was not done yet.


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: Mysql growing pains, 4 days to create index on one table!

2004-07-16 Thread matt ryan
Since you have a temp table created (no keys I assume), use the command
 mysqldump -v -e -n -t dbname tablename > filename.sql
This should create insert statements with many values in a single
insert. Then use the client program to insert them into your db.
 mysql -u matt -p dbname < filename.sql
This is a very fast way to insert rows. It speeds up insertion by at
least 10x on my large tables (11 million rows). I noticed someone
with 1000x more rows w/o problems.
BTW, by forcing the table to have packed keys, the docs say it
will slow your insertion. Maybe not that much, I don't know.
David
 

Isn't the text file it creates going to insert the records back into the
temp table when I load it back in?

Does this do insert ignore or insert replace? I need to control that;
on some tables I do insert ignore, on others I do insert replace.

Almost all of the speed issue is read related; the disk writes are
nearly 0, and the reads are as fast as the drive can run, reading to see if
the record violates the primary key, I assume.

About 3 gig seems to be the magic number; less than that is lightning
fast, more than that is extremely slow.




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: Mysql growing pains, 4 days to create index on one table!

2004-07-15 Thread matt ryan
Justin Swanhart wrote:
Indexes can generate vast amounts of random i/o. 
Because of rotational latency, random i/o can really
slow you down, especially if you are using IDE or SATA
disks because they can't do tagged queueing like SCSI
disks can.

If you have the budget for it, I would consider
getting some solid state disks.  Because they have
extremely low latency you will be able to get full i/o
bandwidth on your reads.  If you can't afford those,
consider adding more disks to your RAID array so that
you can spread the reads over more spindles, which
will help performance.
 

Using 8x72 gig 15,000 rpm U320 SCSI drives in RAID 5 now; that should be
a fast-read RAID config.

No more will fit in the server, and solid-state disks are $70,000; it's out of
our budget.

I optimize the tables every weekend.
Any other suggestions?
Would it help to defrag? The only way I can do it is to back up every
file, wipe out the server, and then restore the files; there's not
enough free space to do a proper defrag.

Matt
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: Mysql growing pains, 4 days to create index on one table!

2004-07-15 Thread matt ryan
Tim Brody wrote:
You may find that the 'dic' KEY isn't necessary, as it's the first part of
your PRIMARY KEY.
I've found better performance for multi-column keys by putting the columns
in order of least variance first, e.g. for a list of dates:
1979-04-23
1979-07-15
1980-02-04
1980-06-04
You want a key on (YEAR-MONTH-DAY)
If you can you could put the index/data on different disks - not sure how
you would do that in Windows (INDEX DIRECTORY option for CREATE TABLE?).
You should definitely put the binary log file on another disk, but again not
something I've used.
I've found MySQL to be a royal pain working with multi-GB tables (my biggest
is 12GB+13GB index). I've learnt that MySQL is a bit like a confused puppy -
it doesn't know why it wet the floor, but it expects you to know that pained
expression means you need to move it somewhere else ...
 

I need the DIC in the key to keep the record unique; I have thousands of
records with everything identical except the DIC.

I was confused on the multi-column index issue; I thought it would seek
faster if I put the most unique field up front, which I do on most
tables, though I did not on this one. I have one large RAID array now,
so I can't split the data or put the binary log on another disk.

I found MySQL was great up to about 3 gig, then everything hit the
brakes and got really, really, really slow.

I'm scared of joins; every time I join on indexed fields in MySQL the
performance is horrible, because when the where clause is not on a
field that's in the join, performance is poor.

I wish MySQL could use multiple indexes like Oracle to narrow down the
results; I've got some simple queries that take hours due to
single-index use, even though every query field is indexed.
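The workaround I know of is a composite index per common column pair, something like the following; the column names are borrowed from the hood_stat table elsewhere in this thread, and whether this exact pair would help is a guess.

```sql
-- A single composite index serves queries that filter on both columns.
ALTER TABLE hood_stat ADD INDEX niin_stor (niin, stor);
```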


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: Mysql growing pains, 4 days to create index on one table!

2004-07-15 Thread matt ryan
Consider replicating to some slave servers and dividing reads among them.
I already replicate to slaves, and sites do read-only queries off
these slaves.

99.9% of the tables are read-only anyway; the only tables we update or
insert into are very, very small and fast.

These big tables are daily extracts from IBM DB2 sites, in EBCDIC
format; we archive the data, and users then query our site, which is
faster, unless they start using multiple query options, and then things get
slow.

If you query only one field it's FAST, but if you query two fields it's
slow, very slow; MySQL needs support for multiple keys per query.

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: Mysql growing pains, 4 days to create index on one table!

2004-07-15 Thread matt ryan

You might be out of luck with MySQL ... sorry.
You may need to switch to a database that has a parallel query 
facility. Then - every query becomes a massive table scan but gets 
divided into multiple concurrent subqueries - and overall the job 
finishes in a reasonable amount of time. The epitome of brute force.
It's hard to rationalize initially, but after a while you see it's the
only way to go. Remember: indexes are no longer required.

We have a billion row 100GB table the users search any and every way. 
Response time is less than a minute.

We are anxiously waiting to see this technology added to MySQL. Maybe 
one day we'll have some money to contribute to the effort. Parallel 
query is not trivial. That's why these databases are expensive.

I can send you more details privately if you are interested.

I've used it, with Oracle, but Oracle's index searches are better: hit
the best one first, then the 2nd best, then the 3rd. But I really don't
want to go to Oracle; it's too complicated for my techs.

MySQL, by contrast, hits the best index first and uses no other.

Query time is a non-issue at this point; it's load time. Load the daily
file into a temp table, then insert ignore into the main table; on key
violation the violating record is ignored.

Load time is the issue; the server loads files 15 hours a day, and that
big primary key makes loading any table over 2-3 gig VERY slow.

I thought it was a bug; everything was great until you get up to about
3 or 4 gig, then it gets SLOW.

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: Mysql growing pains, 4 days to create index on one table!

2004-07-14 Thread matt ryan
Lopez David E-r9374c wrote:
matt
1) Inserts using this format are much faster:
INSERT INTO table (col1, col2) VALUES (val1,val2), (val3,val4)
   is much faster than a single-row insert. My experience is
   2.5 hrs vs. 36 hrs.
2) The PACK_KEYS=1 may be hurting you. I've never used it.
3) There may be a cache somewhere that's to small. You'll
  have to do some digging in this area.
4) dup key ignore - what does that mean exactly?
5) what is your OS  rev, mysql rev.
Please post any suggestions that you find valuable so we 
can all learn..

david
 

If I understand it, pack_keys should help if you're disk-IO limited vs
CPU limited; CPU is usually near idle.

I increased the cache sizes; it helped a little, but not much:
delay_key_write=ALL
key_buffer_size=1000M
read_buffer_size=512M
record_buffer=512M
What would the syntax for that type of insert be?
I have a table with 30,000 records that I need to insert into the main
table with millions of records; I thought that insert into table select
* from 2nd table would be the fastest way.
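Something like this, I assume (values and columns are made up; the rest of the hood_stat columns fall back to their defaults):

```sql
-- Extended insert: many rows per statement means far fewer round trips
-- than one INSERT per row.
INSERT IGNORE INTO hood_stat (dic, fr_ric, niin)
VALUES ('A01','XR1','00000000001'),
       ('A02','XR1','00000000002'),
       ('A03','XR2','00000000003');
```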

Insert ignore will not insert a record if it violates the primary key;
I do this to keep duplicate records out of the system.

Windows 2003 and 2000 servers, mysql-nt 4.0.16. I tried the newer
versions but found bugs on all of them, which I submitted to the bug system.

I believe my bottleneck is reading the data to ensure the primary key is
not violated; I see lots of read IO but little write IO.

Matt
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: Mysql growing pains, 4 days to create index on one table!

2004-07-14 Thread matt ryan
Egor Egorov wrote:
Are you running this under Microsoft Windows? 
 

Yes, windows 2k and 2003, mysql-nt 4.0.16
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: Mysql growing pains, 4 days to create index on one table!

2004-07-14 Thread matt ryan
[EMAIL PROTECTED] wrote:
You may want more indexes but you might be getting killed because you already have too 
many.
To test - try loading into a table without indexes and see if it makes a difference.
At the very least - check to see if the primary index which starts with 'dic' can make 
your special 'dic' index superfluous.
If write speed is a bottleneck you might consider Raid-1 instead of Raid-5.
Reading lots of rows via index is a killer. Depending on your hardware it may be 
cheaper to table scan 50 rows than to read 1 via index. However, this requires 
partitioning of the data based on some column which appears in every query and acts as 
an initial filter. If you are lucky enough to be in that situation - consider a MERGE 
table.
 

These tables are merged; the total table size is huge. This particular
table is 45,449,534 rows, but all the merge tables combined are
258,840,305 records.

Perhaps I should reorder the primary key, putting the longest, most
unique column up front and the least unique at the end; would that
speed up the key check? I can tell that almost everything is read IO,
with very little write IO.
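If I did try reordering, I assume it would be a drop-and-re-add like the following; the column order is just a guess at which columns are most selective, and the rebuild itself would be slow.

```sql
-- Hypothetical: rebuild the primary key with the most selective
-- columns first. Expect this to rewrite the whole index.
ALTER TABLE hood_stat
  DROP PRIMARY KEY,
  ADD PRIMARY KEY (don, niin, dte_txn, dic, fr_ric, suf, sta);
```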

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Mysql growing pains, 4 days to create index on one table!

2004-06-30 Thread matt ryan
Rebuilding the index takes 3 1/2 days!!!  Growing pains with MySQL..
I've got 2 dedicated servers, each with a slave; all run 32 gig 15k rpm
RAID 5 on U320 PERC RAID cards, Dell 2600/4600s with single-channel
backplanes (new ones will have dual channel).
All have 2 gig of RAM, but I've never seen MySQL use more than 600 MB
of RAM.
The servers handle huge loads: each day 30 1-2 gig files are loaded
into large tables. Total MySQL data size is 96 gig; the large
tables are 2-6 gig.
Inserts are done with dupe-key ignore; this takes hours on the large
files and barely keeps up with the input files.
At the bottom of this post I've got the mysql.ini config lines; any
suggestions are welcome. I'm already beyond the "huge" sample config
they used to include with the program.
A sample table that I load is below.
Each day I get 40% new records in the text file. The input file is
normally 20 MB, but once a week I get one that's 1-2 gig; these take all
day to load.
I need more multiple-column indexes, as some queries return millions of
rows that must be scanned, but the index size already exceeds the
table size, and the combinations I need would result in an MYI that's
5x larger than the data itself.
Here's an example of the speed problem. The index was corrupt, so I
dropped all the indexes and recreated them rather than doing a myisam
repair. I think 3 days is a little excessive for a table that's only
3.428 gig, with a 2.729 gig index. I can't remove the primary key, as it
keeps duplicates out of the system; the input files are from old
databases, and we use MySQL to store the data for the web frontend,
mostly done in ASP. Most queries take less than a second, but
unfortunately we have big queries that take way more than the IIS
timeout setting all the time, with no way around it; I can't add more
indexes without making it even slower :(

I can't tell if it's MySQL that's the problem, or the hardware. Here's a
screenshot of the disk IO: if I copy a file while MySQL is doing the
index build, the IO shoots way up, which tells me MySQL is NOT maxing
out the drives, and it's also not maxing out the memory.

Unless it's doing lots and lots of seeks on the drive, which is harder
to test using perfmon. Are there any MySQL test setups that would help
identify where the bottleneck is?

screenshot of disk io usage
http://www.geekopolis.com/pics/diskio.jpg
I'm all out of ideas, other than switching to another DB, with the
tables and indexes split across drives: maybe a 2-channel setup, 4
drives per channel, each set of 4 a separate RAID 5, one holding data
and one holding indexes. Can't do this with MySQL though.

mysql> alter table hood_stat add primary key
(dic,niin,fr_ric,don,suf,dte_txn,sta), add index `don` (`don`), add
index `niin` (`niin`), add index `stor` (`stor`), add index `dic`
(`dic`), add index `ctasc` (`ctasc`);
Query OK, 45449534 rows affected (3 days 19 hours 6 min 34.94 seconds)
Records: 45449534  Duplicates: 0  Warnings: 0
CREATE TABLE `hood_stat` (
`dic` char(3) NOT NULL default '',
`fr_ric` char(3) NOT NULL default '',
`niin` char(11) NOT NULL default '',
`ui` char(2) NOT NULL default '',
`qty` char(5) NOT NULL default '',
`don` char(14) NOT NULL default '',
`suf` char(1) NOT NULL default '',
`dte_txn` char(5) NOT NULL default '',
`ship_to` char(3) NOT NULL default '',
`sta` char(2) NOT NULL default '',
`lst_sos` char(3) NOT NULL default '',
`esd` char(4) NOT NULL default '',
`stor` char(3) NOT NULL default '',
`d_t` char(4) NOT NULL default '',
`ctasc` char(10) NOT NULL default '',
PRIMARY KEY (`dic`,`niin`,`fr_ric`,`don`,`suf`,`dte_txn`,`sta` ),
KEY `don` (`don`),
KEY `niin` (`niin`),
KEY `stor` (`stor`),
KEY `dic` (`dic`),
KEY `ctasc` (`ctasc`)
) TYPE=MyISAM MAX_ROWS=10 PACK_KEYS=1
skip-locking
set-variable=delay_key_write=ALL
set-variable= key_buffer_size=1500M
set-variable=join_buffer=512M
set-variable= max_allowed_packet=256M
set-variable= table_cache=512
set-variable= sort_buffer=256M
set-variable=tmp_table_size=400M
set-variable= record_buffer=512M
set-variable= thread_cache=8
set-variable=myisam_sort_buffer_size=256M
myisam-recover=BACKUP,FORCE
set-variable=read_buffer_size=512M
set-variable=interactive_timeout=7200
set-variable=wait_timeout=7200
log-bin
server-id=1
replicate-do-db=finlog
set-variable=open-files-limit=500
set-variable=table-cache=400
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]