Run SHOW INNODB STATUS and look at this section of the output:

----------------------
BUFFER POOL AND MEMORY
----------------------
Total memory allocated 1299859045; in additional pool allocated 6113152
Buffer pool size   71936
Free buffers       59
Database pages     70898
Modified db pages  57113
Pending reads 1 
Pending writes: LRU 0, flush list 0, single page 0
Pages read 379011342, created 2581822, written 233133461
58.62 reads/s, 0.12 creates/s, 61.24 writes/s
==> Buffer pool hit rate 981 / 1000
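
The hit rate is reported per thousand page requests, so 981 / 1000
above means about 98.1% of reads were satisfied from the buffer pool.
To see it, run (from the mysql client):

  SHOW INNODB STATUS\G

and read the BUFFER POOL AND MEMORY section. As a rough sketch, on
servers that export the Innodb_buffer_pool_* status counters (newer
versions only) you can also compute it yourself:

  -- hit rate ~= 1 - Innodb_buffer_pool_reads (disk reads)
  --               / Innodb_buffer_pool_read_requests (logical reads)
  SHOW STATUS LIKE 'Innodb_buffer_pool_read%';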


> -----Original Message-----
> From: Emmett Bishop [mailto:[EMAIL PROTECTED]
> Sent: Friday, April 23, 2004 8:01 AM
> To: Dathan Vance Pattishall
> Subject: RE: InnoDB Load Problem
> 
> I've been keeping tabs on this thread and would just
> like to know how to tell what the buffer pool ratio
> is. What is it a ratio of? What command do I run to
> take a look at it?
> 
> Thanks,
> 
> Tripp
> 
> --- Dathan Vance Pattishall <[EMAIL PROTECTED]>
> wrote:
> > Look at your fsync stat and your buffer pool ratio.
> > You may get better performance out of using O_DIRECT,
> > since it does not double-buffer your data writes in
> > the OS cache.
> >
> > Next, make sure your buffer pool ratio is close to 1
> > (100%); if not, raise your buffer pool if you can.
> > Additionally, make sure your transaction logs are
> > large, e.g. 1/2 your buffer pool. Also note that if
> > you're doing many fast small queries, set
> > innodb_thread_concurrency high: (CPUs + number of
> > disks) * 2.
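> >
> > For example, something along these lines in my.cnf
> > (an untested sketch; the numbers are only illustrative
> > and must be sized to your own RAM, disks, and CPUs):
> >
> >   [mysqld]
> >   # biggest single win: fit the working set so the
> >   # hit rate stays near 1
> >   innodb_buffer_pool_size = 1024M
> >   # large logs, e.g. combined ~1/2 the buffer pool
> >   # (remove the old ib_logfiles before resizing)
> >   innodb_log_file_size = 256M
> >   innodb_log_files_in_group = 2
> >   # avoid double-buffering data writes in the OS cache
> >   innodb_flush_method = O_DIRECT
> >   # (CPUs + disks) * 2, e.g. (2 + 1) * 2
> >   innodb_thread_concurrency = 6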
> >
> >
> > For the hardware portion, you might need to use
> > elvtune to get better throughput from your hard
> > drive, or move to a kernel that interacts better
> > with your hardware.
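> >
> > For example, on a 2.4 kernel (the numbers are only a
> > starting point; run elvtune /dev/hda alone first to
> > print the current settings):
> >
> >   # lower the elevator latencies so reads are not
> >   # starved behind a long write queue
> >   elvtune -r 1024 -w 2048 /dev/hda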
> >
> > This all assumes that your queries are already
> > optimized.
> >
> > --
> > DVP
> >
> > > -----Original Message-----
> > > From: Marvin Wright
> > [mailto:[EMAIL PROTECTED]
> > > Sent: Tuesday, April 20, 2004 5:13 AM
> > > To: Mechain Marc; Marvin Wright; Dathan Vance
> > Pattishall;
> > > [EMAIL PROTECTED]
> > > Subject: RE: InnoDB Load Problem
> > >
> > > Hi,
> > >
> > > Putting the unique index on as you suggest is fine
> > > for this table, but this table is just the top level
> > > of a hierarchy:
> > >
> > > table a has 1 record
> > > table b has 100s of records linked to 1 table a record
> > > table c has 100s of records linked to 1 table b record
> > >
> > > All the records in tables b and c would need to be
> > > updated/deleted for a new record. I think this would
> > > be very time consuming, and the clients that are
> > > inserting are public internet users, so I'd rather
> > > not slow them down.
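> > >
> > > Roughly like this, to sketch it (table and column
> > > names here are just for illustration, not the real
> > > schema):
> > >
> > >   CREATE TABLE a (
> > >     id INT NOT NULL PRIMARY KEY
> > >   ) TYPE=InnoDB;
> > >
> > >   -- 100s of b rows hang off each a row
> > >   CREATE TABLE b (
> > >     id   INT NOT NULL PRIMARY KEY,
> > >     a_id INT NOT NULL,
> > >     INDEX (a_id),
> > >     FOREIGN KEY (a_id) REFERENCES a (id)
> > >   ) TYPE=InnoDB;
> > >
> > >   -- 100s of c rows hang off each b row
> > >   CREATE TABLE c (
> > >     id   INT NOT NULL PRIMARY KEY,
> > >     b_id INT NOT NULL,
> > >     INDEX (b_id),
> > >     FOREIGN KEY (b_id) REFERENCES b (id)
> > >   ) TYPE=InnoDB;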
> > >
> > >
> > > Under load, iostat -x 1 gives me this:
> > >
> > > avg-cpu:  %user   %nice    %sys   %idle
> > >           38.50    0.00   18.00   43.50
> > >
> > > Device:    rrqm/s wrqm/s   r/s   w/s  rsec/s  wsec/s   rkB/s   wkB/s avgrq-sz avgqu-sz   await  svctm  %util
> > > /dev/hda   104.00 552.00 31.00 39.00 1088.00 4728.00  544.00 2364.00    83.09    62.20 1174.29 141.43  99.00
> > > /dev/hda1    0.00   0.00  0.00  0.00    0.00    0.00    0.00    0.00     0.00     0.00    0.00   0.00   0.00
> > > /dev/hda2  104.00 552.00 31.00 39.00 1088.00 4728.00  544.00 2364.00    83.09    82.20 1174.29  75.71  53.00
> > > /dev/hda3    0.00   0.00  0.00  0.00    0.00    0.00    0.00    0.00     0.00     0.00    0.00   0.00   0.00
> > > /dev/hda5    0.00   0.00  0.00  0.00    0.00    0.00    0.00    0.00     0.00     0.00    0.00   0.00   0.00
> > >
> > > avg-cpu:  %user   %nice    %sys   %idle
> > >           44.50    0.00   16.50   39.00
> > >
> > > Device:    rrqm/s wrqm/s   r/s   w/s  rsec/s  wsec/s   rkB/s   wkB/s avgrq-sz avgqu-sz   await  svctm  %util
> > > /dev/hda     6.00 838.00  1.00 58.00   64.00 7168.00   32.00 3584.00   122.58     3.30  393.22 169.49 100.00
> > > /dev/hda1    0.00   0.00  0.00  0.00    0.00    0.00    0.00    0.00     0.00     0.00    0.00   0.00   0.00
> > > /dev/hda2    6.00 838.00  1.00 58.00   64.00 7168.00   32.00 3584.00   122.58    23.30  393.22  23.73  14.00
> > > /dev/hda3    0.00   0.00  0.00  0.00    0.00    0.00    0.00    0.00     0.00     0.00    0.00   0.00   0.00
> > > /dev/hda5    0.00   0.00  0.00  0.00    0.00    0.00    0.00    0.00     0.00     0.00    0.00   0.00   0.00
> > >
> > > avg-cpu:  %user   %nice    %sys   %idle
> > >            2.00    0.00    0.00   98.00
> > >
> > > Device:    rrqm/s wrqm/s   r/s   w/s  rsec/s  wsec/s   rkB/s   wkB/s avgrq-sz avgqu-sz   await  svctm  %util
> > > /dev/hda   195.00 162.00 58.00  8.00 2080.00 1392.00 1040.00  696.00    52.61    44.40  740.91 128.79  85.00
> > > /dev/hda1    0.00   0.00  0.00  0.00    0.00    0.00    0.00    0.00     0.00     0.00    0.00   0.00   0.00
> > > /dev/hda2  195.00 162.00 58.00  8.00 2080.00 1392.00 1040.00  696.00    52.61    64.40  740.91 151.52 100.00
> > > /dev/hda3    0.00   0.00  0.00  0.00    0.00    0.00    0.00    0.00     0.00     0.00    0.00   0.00   0.00
> > > /dev/hda5    0.00   0.00  0.00  0.00    0.00    0.00    0.00    0.00     0.00     0.00    0.00   0.00   0.00
> > >
> > > avg-cpu:  %user   %nice    %sys   %idle
> > >            8.00    0.00    3.00   89.00
> > >
> > > Device:    rrqm/s wrqm/s   r/s   w/s  rsec/s  wsec/s   rkB/s   wkB/s avgrq-sz avgqu-sz   await  svctm  %util
> > > /dev/hda   174.00   0.00 60.00  5.00 1856.00    8.00  928.00    4.00    28.68    50.00 1235.38 147.69  96.00
> > > /dev/hda1    0.00   0.00  0.00  0.00    0.00    0.00    0.00    0.00     0.00     0.00    0.00   0.00   0.00
> > > /dev/hda2  174.00   0.00 60.00  5.00 1856.00    8.00  928.00    4.00    28.68    70.00 1235.38 153.85 100.00
> > > /dev/hda3    0.00   0.00  0.00  0.00    0.00    0.00    0.00    0.00     0.00     0.00    0.00   0.00   0.00
> > > /dev/hda5    0.00   0.00  0.00  0.00    0.00    0.00    0.00    0.00     0.00     0.00    0.00   0.00   0.00
> > >
> > > avg-cpu:  %user   %nice    %sys   %idle
> > >           29.50    0.00   16.50   54.00
> > >
> > > Device:    rrqm/s wrqm/s   r/s   w/s  rsec/s  wsec/s   rkB/s   wkB/s avgrq-sz avgqu-sz   await  svctm  %util
> > > /dev/hda   102.00  71.00 40.00  6.00 1088.00  616.00  544.00  308.00    37.04     5.60  671.74 193.48  89.00
> > > /dev/hda1    0.00   0.00  0.00  0.00    0.00    0.00    0.00    0.00     0.00     0.00    0.00   0.00   0.00
> > > /dev/hda2  102.00  71.00 40.00  6.00 1088.00  616.00  544.00  308.00    37.04    25.60  671.74 163.04  75.00
> > > /dev/hda3    0.00   0.00  0.00  0.00    0.00    0.00    0.00    0.00     0.00     0.00    0.00   0.00   0.00
> > > /dev/hda5    0.00   0.00  0.00  0.00    0.00    0.00    0.00    0.00     0.00     0.00    0.00   0.00   0.00
> > >
> > > avg-cpu:  %user   %nice    %sys   %idle
> > >           57.50    0.00   20.00   22.50
> > >
> > > Device:    rrqm/s wrqm/s   r/s   w/s  rsec/s  wsec/s   rkB/s   wkB/s avgrq-sz avgqu-sz   await  svctm  %util
> > > /dev/hda     0.00 398.00  0.00 28.00    0.00 3408.00    0.00 1704.00   121.71 42949657.76  171.43 357.14 100.00
> > > /dev/hda1    0.00   0.00  0.00  0.00    0.00    0.00    0.00    0.00     0.00     0.00    0.00   0.00   0.00
> > > /dev/hda2    0.00 398.00  0.00 28.00    0.00 3408.00    0.00 1704.00   121.71     4.80  171.43  14.29   4.00
> > > /dev/hda3    0.00   0.00  0.00  0.00    0.00    0.00    0.00    0.00     0.00     0.00    0.00   0.00   0.00
> > > /dev/hda5    0.00   0.00  0.00  0.00    0.00    0.00    0.00    0.00     0.00     0.00    0.00   0.00   0.00
> > >
> > > avg-cpu:  %user   %nice    %sys   %idle
> > >           39.00    0.00    9.50   51.50
> > >
> > > Device:    rrqm/s wrqm/s   r/s   w/s  rsec/s  wsec/s   rkB/s   wkB/s avgrq-sz avgqu-sz   await  svctm  %util
> >
> === message truncated ===
> 


