> Jeff wrote:
> > 
> > Ugh...
> > 
> > mysqladmin -uroot -ptelaurus processlist | grep -c "Sleep"
> > 
> > And it returned 200 sleeping connections, all persistent
> > connections from our app servers, and 4 threads_running
> > 
> > Also a show status gave me a max_used_connections of 236.
> > 
> > If that's the case then I can probably only set it to about
> > 250, which means if I set my innodb_buffer_pool_size = 100M and
> > drop my key_buffer_size to 250M, I'll need 1884M of RAM according
> > to the formula above, which is dangerously close to the 2G limit
> > specified in the warning on the link above.
> > 
> > Currently the Key_reads to Key_read_requests ratio is about
> > 1:1970 with the key_buffer_size of 384M, so I guess I can safely
> > drop this to 250M.
> > 
> > Even if I changed the entire DB over to InnoDB and pushed the
> > key_buffer_size down really low, it wouldn't drop the total memory
> > usage below 1600M.
> > 
> > So what is this telling me?  I need more RAM or fewer
> > connections, or I should just stay with MyISAM?
> > 
> > Thanks,
> > 
> > Jeff
> > 
> 
> I would suggest taking a hard look at why your application servers
> are creating 200 sleeping connections, and whether that is necessary.
> You may also be able to reduce sort_buffer_size and read_buffer_size
> to 1M each, but I couldn't tell you how that might affect your
> application, so you may not want to do that. (Does anyone on the
> list have experience modifying these?)
> 
> I think the biggest issue will be the system's RAM - the 2G limit on
> MySQL's total allocated RAM is a per-process hard limit on 32-bit
> architectures, but most 32-bit systems benefit greatly from having
> more than 2G total RAM (the OS may use the rest for disk caching,
> etc.). If, say, your server had 4G RAM, then you could safely
> configure MySQL to use very close to 2G, and performance should fly.
> With only 2G in the system, setting MySQL to use as much RAM as
> possible would leave next to nothing for the OS or other processes,
> and that is the problem (as I see it).
> 
> However, that said, more RAM is not always the answer. You may get
> much more of a performance increase by modifying your application
> code so that it doesn't "waste" so many connections (thus allowing
> you to allocate plenty of RAM to the innodb_buffer_pool).
> 
> Of course, you can do both (just to play it safe, right?).  ;)
> 

Well, the applications with persistent connections are a touchy
subject.  Our apps send and receive data over satellite links, which
are very expensive.  The shorter the duration of the link, the less it
costs us.  So the prevailing theory is that with persistent
connections the apps will spend less time reconnecting to and
disconnecting from the db.  Even fractions of a second count when
you're talking about thousands of connections a day and we are charged
by the second for airtime.  That's the whole driving force behind
wanting to switch over to InnoDB.  The thought is it would give us
faster writes when we have a hundred apps trying to write at or very
near the same time, because of InnoDB's row-level locking as opposed
to MyISAM's table-level locking during writes and updates.
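As a back-of-the-envelope illustration of why those fractions of a
second matter (all three input figures below are hypothetical, not our
actual rates or tariff):

```shell
# Sketch: airtime cost avoided per day by keeping connections open.
# All three inputs are made-up placeholders; plug in real figures.
CONNECTS_PER_DAY=5000
SECONDS_SAVED=0.5      # assumed reconnect overhead avoided per session
COST_PER_SECOND=0.02   # assumed airtime charge, in dollars

awk -v n="$CONNECTS_PER_DAY" -v s="$SECONDS_SAVED" -v c="$COST_PER_SECOND" \
    'BEGIN { printf "saved per day: $%.2f\n", n * s * c }'
```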

Now, the question is: if we need to drop the persistent connections in
order to move to the InnoDB engine, will the speed benefit of
row-level locking outweigh what is lost by not having persistent
connections?

That being said, having just looked at our connections for the past 60
minutes during what is roughly our peak time, I only see about 350,
which is roughly one every 10 seconds, with a rough average connection
time of about 28 seconds, most of which is transfer of data and not db
reads/writes/updates.  So I believe I can make an educated guess from
that information that MyISAM table locking is not the real bottleneck
here, and therefore it's probably not going to do us a lot of good to
switch to InnoDB, especially with our current hardware and application
behavior.  Thoughts?

At some point, however, as our traffic grows we will probably hit a
point where db reads/writes/updates start to become a bottleneck, and
we'll need to look at moving to a 64-bit architecture, more than 2G of
RAM, and the InnoDB engine.  What status variables should I be looking
at to see if we have a lot of reads/writes/updates being delayed?
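For my own notes, one check I'm assuming is relevant here (corrections
from the list welcome) is the ratio of Table_locks_waited to
Table_locks_immediate from "mysqladmin extended-status".  A sketch that
parses that output - the heredoc stands in for real server output, and
the counter values in it are made up:

```shell
#!/bin/sh
# Sketch: compute the fraction of table-lock requests that had to wait.
# The heredoc below is a stand-in for:
#   mysqladmin extended-status | grep Table_locks
# with made-up counter values.
awk -F'|' '
/Table_locks_immediate/ { imm = $3 }
/Table_locks_waited/    { wait = $3 }
END { printf "waited ratio: %.1f%%\n", 100 * wait / (imm + wait) }
' <<'EOF'
| Table_locks_immediate   | 98000 |
| Table_locks_waited      | 2000  |
EOF
```

A ratio that keeps climbing during peak hours would be the signal that
table-level locking has actually become the bottleneck.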


Thanks,

Jeff



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:    http://lists.mysql.com/[EMAIL PROTECTED]
