Hi,

Putting on a unique index like you suggest is fine for this table, but this
table is just the top level of a hierarchy.

table a has 1 record
table b has hundreds of records linked to 1 table a record
table c has hundreds of records linked to 1 table b record

All the records in tables b and c would need to be updated/deleted for a new
record. I think this would be very time-consuming, and the clients that are
inserting are public internet users, so I'd rather not slow them down.


Under load, iostat -x 1 gives me this:

avg-cpu:  %user   %nice    %sys   %idle
          38.50    0.00   18.00   43.50

Device:    rrqm/s wrqm/s   r/s   w/s  rsec/s  wsec/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
/dev/hda   104.00 552.00 31.00 39.00 1088.00 4728.00   544.00  2364.00    83.09    62.20 1174.29 141.43  99.00
/dev/hda1    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
/dev/hda2  104.00 552.00 31.00 39.00 1088.00 4728.00   544.00  2364.00    83.09    82.20 1174.29  75.71  53.00
/dev/hda3    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
/dev/hda5    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00

avg-cpu:  %user   %nice    %sys   %idle
          44.50    0.00   16.50   39.00

Device:    rrqm/s wrqm/s   r/s   w/s  rsec/s  wsec/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
/dev/hda     6.00 838.00  1.00 58.00   64.00 7168.00    32.00  3584.00   122.58     3.30  393.22 169.49 100.00
/dev/hda1    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
/dev/hda2    6.00 838.00  1.00 58.00   64.00 7168.00    32.00  3584.00   122.58    23.30  393.22  23.73  14.00
/dev/hda3    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
/dev/hda5    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00

avg-cpu:  %user   %nice    %sys   %idle
           2.00    0.00    0.00   98.00

Device:    rrqm/s wrqm/s   r/s   w/s  rsec/s  wsec/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
/dev/hda   195.00 162.00 58.00  8.00 2080.00 1392.00  1040.00   696.00    52.61    44.40  740.91 128.79  85.00
/dev/hda1    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
/dev/hda2  195.00 162.00 58.00  8.00 2080.00 1392.00  1040.00   696.00    52.61    64.40  740.91 151.52 100.00
/dev/hda3    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
/dev/hda5    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00

avg-cpu:  %user   %nice    %sys   %idle
           8.00    0.00    3.00   89.00

Device:    rrqm/s wrqm/s   r/s   w/s  rsec/s  wsec/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
/dev/hda   174.00   0.00 60.00  5.00 1856.00    8.00   928.00     4.00    28.68    50.00 1235.38 147.69  96.00
/dev/hda1    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
/dev/hda2  174.00   0.00 60.00  5.00 1856.00    8.00   928.00     4.00    28.68    70.00 1235.38 153.85 100.00
/dev/hda3    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
/dev/hda5    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00

avg-cpu:  %user   %nice    %sys   %idle
          29.50    0.00   16.50   54.00

Device:    rrqm/s wrqm/s   r/s   w/s  rsec/s  wsec/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
/dev/hda   102.00  71.00 40.00  6.00 1088.00  616.00   544.00   308.00    37.04     5.60  671.74 193.48  89.00
/dev/hda1    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
/dev/hda2  102.00  71.00 40.00  6.00 1088.00  616.00   544.00   308.00    37.04    25.60  671.74 163.04  75.00
/dev/hda3    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
/dev/hda5    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00

avg-cpu:  %user   %nice    %sys   %idle
          57.50    0.00   20.00   22.50

Device:    rrqm/s wrqm/s   r/s   w/s  rsec/s  wsec/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
/dev/hda     0.00 398.00  0.00 28.00    0.00 3408.00     0.00  1704.00   121.71 42949657.76  171.43 357.14 100.00
/dev/hda1    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
/dev/hda2    0.00 398.00  0.00 28.00    0.00 3408.00     0.00  1704.00   121.71     4.80  171.43  14.29   4.00
/dev/hda3    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
/dev/hda5    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00

avg-cpu:  %user   %nice    %sys   %idle
          39.00    0.00    9.50   51.50

Device:    rrqm/s wrqm/s   r/s   w/s  rsec/s  wsec/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
/dev/hda    48.00 810.00 11.00 57.00  512.00 6936.00   256.00  3468.00   109.53    19.80  538.24 147.06 100.00
/dev/hda1    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
/dev/hda2   48.00 810.00 11.00 57.00  512.00 6936.00   256.00  3468.00   109.53    39.80  538.24  63.24  43.00
/dev/hda3    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
/dev/hda5    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00

avg-cpu:  %user   %nice    %sys   %idle
          14.00    0.00   13.00   73.00

Device:    rrqm/s wrqm/s   r/s   w/s  rsec/s  wsec/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
/dev/hda   213.00 112.00 63.00  3.00 2208.00  960.00  1104.00   480.00    48.00    60.10 1086.36 151.52 100.00
/dev/hda1    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
/dev/hda2  213.00 112.00 63.00  3.00 2208.00  960.00  1104.00   480.00    48.00    80.10 1086.36 151.52 100.00
/dev/hda3    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
/dev/hda5    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00

avg-cpu:  %user   %nice    %sys   %idle
          59.00    0.00   25.50   15.50

Device:    rrqm/s wrqm/s   r/s   w/s  rsec/s  wsec/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
/dev/hda   126.00   0.00 35.00  6.00 1248.00    8.00   624.00     4.00    30.63     5.10  895.12 209.76  86.00
/dev/hda1    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
/dev/hda2  126.00   0.00 35.00  6.00 1248.00    8.00   624.00     4.00    30.63    25.10  895.12 139.02  57.00
/dev/hda3    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
/dev/hda5    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00

avg-cpu:  %user   %nice    %sys   %idle
          79.50    0.00   16.50    4.00

Device:    rrqm/s wrqm/s   r/s   w/s  rsec/s  wsec/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
/dev/hda     0.00 557.00  0.00 41.00    0.00 4784.00     0.00  2392.00   116.68 42949661.76  214.63 241.46  99.00
/dev/hda1    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
/dev/hda2    0.00 557.00  0.00 41.00    0.00 4784.00     0.00  2392.00   116.68     8.80  214.63  12.20   5.00
/dev/hda3    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
/dev/hda5    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00



I have switched innodb_flush_log_at_trx_commit to 0 but still do not see any
difference in performance.


Marvin.

-----Original Message-----
From: Mechain Marc [mailto:[EMAIL PROTECTED]
Sent: 20 April 2004 13:02
To: Marvin Wright; Dathan Vance Pattishall; [EMAIL PROTECTED]
Subject: RE: InnoDB Load Problem


> This query is to see if there are duplicate records, I'm not sure how else
> I could do this and it only runs once every 10 minutes.

Why don't you put a unique index on locale,ggd,from,to,supplier,date? Then you
won't have to run that query any more ...
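For illustration, here is a minimal sketch of that idea, using SQLite's in-memory engine in place of InnoDB. The table and column names are invented (the real columns from/to/date are reserved words and would need back-quoting in MySQL), and SQLite's INSERT OR IGNORE stands in for MySQL's INSERT IGNORE:

```python
import sqlite3

# Sketch: with a unique index in place, duplicates are rejected at insert
# time, so a periodic GROUP BY ... HAVING sweep is no longer needed.
# Table/column names are invented for the demo.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cache "
             "(locale TEXT, ggd TEXT, origin TEXT, dest TEXT, "
             "supplier TEXT, day TEXT)")
conn.execute("CREATE UNIQUE INDEX cache_uniq "
             "ON cache (locale, ggd, origin, dest, supplier, day)")

row = ("en", "g1", "LHR", "JFK", "acme", "2004-04-20")
conn.execute("INSERT OR IGNORE INTO cache VALUES (?, ?, ?, ?, ?, ?)", row)
conn.execute("INSERT OR IGNORE INTO cache VALUES (?, ?, ?, ?, ?, ?)", row)

count = conn.execute("SELECT count(*) FROM cache").fetchone()[0]
print(count)  # 1 -- the duplicate insert was silently skipped
```

In MySQL the inserting clients would use INSERT IGNORE (drop the duplicate) or REPLACE (overwrite it) so a duplicate-key error never reaches the application.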

> How can I determine if the problem is disk bound ?

iostat -x 1

The rightmost column of the display (%util) gives you the busy state of the
disk, in percent.
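If you want to log that figure over time as evidence, %util is the last field of each device line, so it can be pulled out with awk. A minimal sketch against one captured line (the sample line is pasted in, not re-run):

```shell
# %util is the last column of each "iostat -x" device line; extract it
# with awk so it can be logged or graphed over time.
line="/dev/hda2 104.00 552.00 31.00 39.00 1088.00 4728.00 544.00 2364.00 83.09 82.20 1174.29 75.71 53.00"
util=$(echo "$line" | awk '{print $NF}')
echo "$util"   # 53.00
```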

> If I can get some hard evidence of this ...

Try setting innodb_flush_log_at_trx_commit to 0 (the default value is 1).

Marc.

-----Original Message-----
From: Marvin Wright [mailto:[EMAIL PROTECTED]
Sent: Tuesday 20 April 2004 12:45
To: Dathan Vance Pattishall; Marvin Wright; [EMAIL PROTECTED]
Subject: RE: InnoDB Load Problem


Hi,

Thanks Dathan for your response.

So far I have upgraded MySQL to 4.0.18, which supports O_DIRECT; my 4.0.13 did
not. I also increased my buffer pool by another 256 MB, and so far I have not
seen any change in performance.

I've looked at the SHOW INNODB STATUS output (pasted further down); my buffer
pool hit rate is constantly at 1000 / 1000. What does this tell you?

Regarding your other points, here is a slow query. It takes only 1 second,
but you mentioned count(*) as bad.

# Time: 040420 11:10:09
# [EMAIL PROTECTED]: web[web] @  [10.168.78.207]
# Query_time: 1  Lock_time: 0  Rows_sent: 3310  Rows_examined: 185723
select locale,ggd,from,to,supplier,date,count(*) as count
from cache
group by locale,ggd,from,to,supplier,date
having count > 1;

This query is to see if there are duplicate records; I'm not sure how else I
could do this, and it only runs once every 10 minutes.
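The shape of that duplicate check can be sketched on a toy table (SQLite in-memory for the demo, two of the six grouping columns, made-up values):

```python
import sqlite3

# Toy version of the duplicate-detection query: group on the candidate
# key and keep only groups that occur more than once.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cache (locale TEXT, supplier TEXT)")
conn.executemany("INSERT INTO cache VALUES (?, ?)",
                 [("en", "s1"), ("en", "s1"), ("fr", "s2")])

dupes = conn.execute(
    "SELECT locale, supplier, count(*) AS n "
    "FROM cache GROUP BY locale, supplier HAVING n > 1").fetchall()
print(dupes)  # [('en', 's1', 2)]
```

Note that the cost of the real query comes from grouping over every row (185,723 examined in the slow-log entry above), which is why preventing duplicates at insert time scales better than detecting them afterwards.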

The indexes look fine on all other queries.

The disk layout is probably not good; unfortunately these are standard-build
single-drive machines from our tech services department, and it's too big a
hassle for them to do something different for me :(

The disk layout is:

Filesystem           1k-blocks      Used Available Use% Mounted on
/dev/hda2             34218636  22979948   9500456  71% /
/dev/hda1               101089     19894     75976  21% /boot
none                   2005632         0   2005632   0% /dev/shm

The data being on hda2.

How can I determine if the problem is disk-bound?
If I can get some hard evidence of this, then I can go to our tech department
and get them to build me a decent box.

Many Thanks.

Marvin.


=====================================
040420 11:15:32 INNODB MONITOR OUTPUT
=====================================
Per second averages calculated from the last 15 seconds
----------
SEMAPHORES
----------
OS WAIT ARRAY INFO: reservation count 245688, signal count 208343
Mutex spin waits 10498150, rounds 63549544, OS waits 69764
RW-shared spins 113294, OS waits 50819; RW-excl spins 83135, OS waits 26235
------------
TRANSACTIONS
------------
Trx id counter 0 464704697
Purge done for trx's n:o < 0 464704447 undo n:o < 0 0
Total number of lock structs in row lock hash table 0
LIST OF TRANSACTIONS FOR EACH SESSION:
---TRANSACTION 0 0, not started, process no 22455, OS thread id 19320851
MySQL thread id 38508, query id 2467002 localhost root
SHOW INNODB STATUS
---TRANSACTION 0 0, not started, process no 17652, OS thread id 36874
MySQL thread id 1, query id 2431662 192.168.35.181 web
---TRANSACTION 0 464704694, ACTIVE 0 sec, process no 27189, OS thread id
38580247 inserting, thread declared inside InnoDB 500
mysql tables in use 1, locked 1
3 lock struct(s), heap size 320, undo log entries 39
MySQL thread id 73569, query id 2467003 10.168.77.231 web update
insert into negotiated_classes_cache set id=108245613219642041, route_id=3,
segment_id=1, class='C', num='4'
--------
FILE I/O
--------
I/O thread 0 state: waiting for i/o request (insert buffer thread)
I/O thread 1 state: waiting for i/o request (log thread)
I/O thread 2 state: waiting for i/o request (read thread)
I/O thread 3 state: waiting for i/o request (write thread)
Pending normal aio reads: 0, aio writes: 0,
 ibuf aio reads: 0, log i/o's: 0, sync i/o's: 0
Pending flushes (fsync) log: 0; buffer pool: 0
4831 OS file reads, 41891 OS file writes, 40144 OS fsyncs
0.00 reads/s, 0 avg bytes/read, 20.53 writes/s, 18.13 fsyncs/s
-------------------------------------
INSERT BUFFER AND ADAPTIVE HASH INDEX
-------------------------------------
Ibuf for space 0: size 1, free list len 0, seg size 2,
364 inserts, 364 merged recs, 5 merges
Hash table size 5312557, used cells 469072, node heap has 483 buffer(s)
43.86 hash searches/s, 3380.24 non-hash searches/s
---
LOG
---
Log sequence number 295 378143163
Log flushed up to   295 378138460
Last checkpoint at  295 42216323
0 pending log writes, 0 pending chkp writes
39690 log i/o's done, 17.27 log i/o's/second
----------------------
BUFFER POOL AND MEMORY
----------------------
Total memory allocated 1469851048; in additional pool allocated 4584832
Buffer pool size   81920
Free buffers       56299
Database pages     25138
Modified db pages  18737
Pending reads 0 
Pending writes: LRU 0, flush list 0, single page 0
Pages read 5113, created 20025, written 7489
0.00 reads/s, 9.27 creates/s, 22.80 writes/s
Buffer pool hit rate 1000 / 1000
--------------
ROW OPERATIONS
--------------
0 queries inside InnoDB, 0 queries in queue
Main thread process no. 17648, id 28680, state: sleeping
Number of rows inserted 2183633, updated 1997, deleted 945139, read 2006850
1482.37 inserts/s, 1.20 updates/s, 0.00 deletes/s, 653.56 reads/s
----------------------------
END OF INNODB MONITOR OUTPUT
============================


-----Original Message-----
From: Dathan Vance Pattishall [mailto:[EMAIL PROTECTED]
Sent: 19 April 2004 23:35
To: 'Marvin Wright'; [EMAIL PROTECTED]
Subject: RE: InnoDB Load Problem




> -----Original Message-----
> From: Marvin Wright [mailto:[EMAIL PROTECTED]
> Sent: Monday, April 19, 2004 8:55 AM
> To: [EMAIL PROTECTED]
> Subject: InnoDB Load Problem
> 
> Hi,
> 
> Apologies in advance for the large post but I've tried to explain my
> problem
> as much as I can.
> 
> 
> 
> I've read the InnoDB tuning page (
> http://dev.mysql.com/doc/mysql/en/InnoDB_tuning.html ), there are a few
> things I could try but I'd like to make sure that I'm on the right track.
> 
>   * Firstly I could put more queries into a single transaction.

        That will help. Also increase your innodb_log_file_size and change
your flush method to O_DIRECT; this avoids the OS double-buffering the writes.
You can verify the benefit of the change by looking at the LOG section of SHOW
INNODB STATUS.
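The first suggestion above (more queries per transaction) can be sketched like this; the table is invented and SQLite stands in for InnoDB, but the pattern is the same, with one commit (and hence one log flush) per batch instead of one per row:

```python
import sqlite3

# Batch many small inserts into a single transaction so the log is
# flushed once per batch rather than once per statement.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cache_rows (id INTEGER, payload TEXT)")

rows = [(i, "data") for i in range(1000)]
with conn:  # one BEGIN ... COMMIT around the whole batch
    conn.executemany("INSERT INTO cache_rows VALUES (?, ?)", rows)

n = conn.execute("SELECT count(*) FROM cache_rows").fetchone()[0]
print(n)  # 1000 rows inserted under a single commit
```

With innodb_flush_log_at_trx_commit at its default of 1, each commit costs an fsync, so cutting the number of commits directly cuts the fsync rate visible in the FILE I/O section of SHOW INNODB STATUS.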

Also, look at your buffer pool hit ratio. If it's close to 100%, then there is
no need to raise the allocation of the buffer pool. Next, make sure your disk
layout is okay and that you're not saturating on I/O. Also make sure you're
not doing queries like SELECT count(*) FROM <table>; on InnoDB that is a full
table scan.

Finally, look at which queries are slow and make sure that your indexes are
proper.

One last note: if you're doing many, many small queries, think about tweaking
your innodb_thread_concurrency setting. Ignore innodb_thread_io_concurrency
unless you want to change code.
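For reference, the variables discussed in this thread all live in the [mysqld] section of my.cnf; a sketch with placeholder values (illustrative, not tuned recommendations for this workload):

```
[mysqld]
innodb_buffer_pool_size        = 512M
innodb_log_file_size           = 256M
innodb_flush_method            = O_DIRECT
# 0 trades some crash durability for speed; the default is 1
innodb_flush_log_at_trx_commit = 0
innodb_thread_concurrency      = 8
```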

--
Dathan Vance Pattishall
Sr. Database Engineer / Sr. Programmer 
Friendster Inc.





________________________________________________________________________
This e-mail has been scanned for all viruses by Star Internet. The
service is powered by MessageLabs. For more information on a proactive
anti-virus service working around the clock, around the globe, visit:
http://www.star.net.uk
________________________________________________________________________


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]



