Re: Indexes larger than RAM (was: Do you know who can answer this question I posted yesterday please?)

2010-07-30 Thread Joerg Bruehe
Hi!


I am no InnoDB or tuning expert, so I had intended to stay away from
this question. OK, I'll give some general remarks:


Nunzio Daveri wrote:
 [[...]]
 
 All, I was running slamdb against one of our QA boxes and noticed that the
 InnoDB database is 190 GB in size BUT the worrying issue is that the indexes
 are 30 GB in size!!!  When I hit this server hard, it tanks on memory but
 still performs, slower of course ;-)

Having indexes which are larger than RAM is (in itself) not critical.
IMO, it becomes bad only when accesses to these indexes are spread so
wide that even the index pages become subject to frequent IO.
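(As a quick check of where those gigabytes actually sit, a query like
the following against information_schema works on 5.0 and later; the
lengths are estimates for InnoDB, and 'mydb' is only a placeholder:)

   -- Per-table data and index volume, largest indexes first.
   SELECT table_name,
          ROUND(data_length  / 1024 / 1024 / 1024, 1) AS data_gb,
          ROUND(index_length / 1024 / 1024 / 1024, 1) AS index_gb
   FROM   information_schema.tables
   WHERE  table_schema = 'mydb'
   ORDER  BY index_length DESC;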

 Any suggestions on what I should do?  I am 
 thinking of doing one of these:

Whether any action is needed, and which, depends on the problem you
experience:

- If the system as a whole (both CPU and disk) has a significant idle
  percentage, it isn't yet maxed out, and I don't expect that adding
  resources would improve performance significantly.

- If your CPUs show a significant "waiting for I/O" percentage, then
  data accesses need a speedup. This could be done with faster disks,
  but I would expect more benefit from adding RAM for larger caches.
  This holds especially if your disk throughput is close to the
  possible maximum.
  (Assuming your bulk work is read/select. If it is insert/update, then
  *removing* indexes might reduce the workload, as there are fewer
  indexes to maintain.)

- If your CPUs are busy with real work, then I don't expect any
  increase in caching would help. (A quick way to tell these cases
  apart is sketched below.)
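To see which case you are in, watch the idle and I/O-wait columns of
something like "vmstat 5" while under load, and check how often InnoDB
has to go to disk. These counters exist since 5.0 and are cumulative
since server start:

   -- Innodb_buffer_pool_read_requests = all logical reads;
   -- Innodb_buffer_pool_reads         = those that had to hit disk.
   -- If the second grows fast relative to the first, the buffer pool
   -- is too small and you are in the I/O-bound case.
   SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%';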

 
 1. Remove all indexes, run for a few days, look at the slow query logs and
 then find those queries that really need them and index those specifically
 for performance.

Makes sense (only) if you have indexes which aren't really helpful for
accesses, so they just add maintenance load. If you do few
inserts/updates, an unused index should simply be paged out and not do
much harm. This comes at the cost of reduced performance during the
test period and the need to rebuild the essential indexes afterwards,
but it does get rid of indexes that only cause maintenance load.
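On 5.1 or later the slow log can even be switched on at runtime,
roughly like this (the one-second threshold is an arbitrary starting
point; on 5.0 it must be configured in my.cnf before startup):

   -- MySQL 5.1+: enable the slow query log without a restart.
   SET GLOBAL slow_query_log = 'ON';
   SET GLOBAL long_query_time = 1;  -- seconds; define what "slow" means

mysqldumpslow can then aggregate the resulting log by query pattern.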

 2. Split the single server into two servers, both with 16 GB and 2
 quad-core CPUs. One master, the other a slave.

Makes sense if your CPUs are busy, *and* you can distribute the read
accesses to the two servers (= most accesses are select). If most load
is insert/update, I don't expect a real improvement.
Biggest cost in hardware and admin effort, so I would do this only after
a decent analysis. OTOH, it gives you some (hardware) fault tolerance,
this could be an important argument depending on your requirements.
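If you go that route, a minimal sketch of the classic setup
(statement-based replication, as in 5.0/5.1) is small; host name and
account below are made up:

   # master my.cnf (fragment): binary log on, unique server id
   [mysqld]
   server-id = 1
   log-bin   = mysql-bin

   # slave my.cnf (fragment)
   [mysqld]
   server-id = 2

Then, on the slave, after loading a consistent snapshot of the data:

   CHANGE MASTER TO
     MASTER_HOST     = 'master.example.com',  -- hypothetical host
     MASTER_USER     = 'repl',                -- hypothetical account
     MASTER_PASSWORD = 'secret',
     MASTER_LOG_FILE = 'mysql-bin.000001',    -- from SHOW MASTER STATUS
     MASTER_LOG_POS  = 4;
   START SLAVE;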

 3. Just add another 16 GB (32 GB total) and that should take care of the
 indexing issue.

Makes sense if the disks are the bottleneck (CPUs mostly waiting for
I/O), so that larger caches will avoid disk accesses.
Assumes your machine supports that amount of RAM (many mainboards have a
limit at 16 GB, AIUI).
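More RAM only helps InnoDB if the buffer pool is allowed to use it; a
common rule of thumb for a dedicated database box is to give it the
bulk of the memory. A sketch, the numbers are examples only:

   # my.cnf fragment for a hypothetical dedicated 32 GB box.
   # In the 5.0/5.1 servers, changing this requires a restart.
   [mysqld]
   innodb_buffer_pool_size = 24G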

 
 Anyone had this problem before???
 
 Oh, this is a single box, 100% MySQL only, and it talks to 3 front-end
 iPlanet web servers that hit it with a few hundred queries per second.

For a specific answer, the distribution of accesses between read and
write is needed, as well as information about which resource is close
to its limit.
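Something like the following shows that read/write mix; the values are
cumulative since server start, so sample twice some minutes apart and
compare:

   -- Com_select vs. the write counters gives the statement mix.
   SHOW GLOBAL STATUS LIKE 'Com_select';
   SHOW GLOBAL STATUS LIKE 'Com_insert';
   SHOW GLOBAL STATUS LIKE 'Com_update';
   SHOW GLOBAL STATUS LIKE 'Com_delete';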


HTH,
Jörg

-- 
Joerg Bruehe,  MySQL Build Team,  joerg.bru...@oracle.com
   (+49 30) 417 01 487
ORACLE Deutschland B.V. & Co. KG,   Komturstrasse 18a,   D-12099 Berlin
Geschaeftsfuehrer: Juergen Kunz, Marcel v.d. Molen, Alexander v.d. Ven
Amtsgericht Muenchen: HRA 95603


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:    http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Indexes larger than RAM (was: Do you know who can answer this question I posted yesterday please?)

2010-07-30 Thread Nunzio Daveri
Thanks again :-)

Nunzio






Re: Indexes larger than RAM (was: Do you know who can answer this question I posted yesterday please?)

2010-07-30 Thread mos

Nunzio Daveri,
 Joerg Bruehe gave you a lot of good tips to try to speed things up.
A few hundred queries per second seems a relatively small load to make
the server crawl. I don't have the rest of your thread, but can you
publish some of the slow queries (see the Slow Query Log) and the table
structure?
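mysqldumpslow can summarize the slow log itself; for each offending
query, output along these lines would help (table and query are made
up, just to show the shape):

   SHOW CREATE TABLE mydb.orders;   -- structure and existing indexes
   EXPLAIN SELECT * FROM mydb.orders
           WHERE customer_id = 42;  -- the plan MySQL picks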


Mike





--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:    http://lists.mysql.com/mysql?unsub=arch...@jab.org