> "Jeff" <[EMAIL PROTECTED]> wrote on 09/29/2005 08:47:52 AM:
> 
> > 
> > > Jeff wrote:
> > > > 
> > > > Ugh...
> > > > mysqladmin -uroot -ptelaurus processlist | grep -c "Sleep"
> > > > And it returned 200 sleeping connections, all persistent
> > > > connections from our app servers, and 4 threads_running.
> > > > Also, a SHOW STATUS gave me a max_used_connections of 236. If
> > > > that's the case, then I can probably only set max_connections to
> > > > about 250, which means that if I set innodb_buffer_pool_size = 100M
> > > > and drop my key_buffer_size to 250M, I'll need 1884M of RAM
> > > > according to the formula above, which is dangerously close to the
> > > > 2G limit specified in the warning on the link above.
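> > > > 
> > > > Spelling the arithmetic out (the per-connection figure below is
> > > > only illustrative, not my exact my.cnf values):
> > > > 
> > > >     innodb_buffer_pool_size                      100M
> > > >   + key_buffer_size                              250M
> > > >   + ~250 connections * ~6M of per-connection
> > > >     buffers (sort, read, thread stack, etc.)   ~1500M
> > > >   -------------------------------------------------
> > > >                                           ~1850-1900M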
> > > > 
> > > > Currently the Key_reads to Key_read_requests ratio is about
> > > > 1:1970 with a key_buffer_size of 384M, so I guess I can safely
> > > > drop this to 250M.
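> > > > 
> > > > (That ratio comes straight out of SHOW STATUS, e.g.:
> > > > 
> > > >   mysql> SHOW STATUS LIKE 'Key_read%';
> > > > 
> > > > Key_reads divided by Key_read_requests -- the usual rule of thumb
> > > > I've seen is to keep it well under 1:100, so 1:1970 leaves plenty
> > > > of headroom.)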
> > > > 
> > > > Even if I changed the entire DB over to InnoDB and pushed the
> > > > key_buffer_size down really low, it wouldn't drop the total
> > > > memory usage below 1600M.
> > > > 
> > > > So what is this telling me? I need more RAM, or fewer
> > > > connections, or I should just stay with MyISAM?
> > > > 
> > > > Thanks,
> > > > 
> > > > Jeff
> > > > 
> > > 
> > > I would suggest taking a hard look at why your application servers
> > > are creating 200 sleeping connections, and whether that is
> > > necessary. You may also be able to reduce sort_buffer_size and
> > > read_buffer_size to 1M each, but I couldn't tell you how that might
> > > affect your application, so you may not want to do that. (Does
> > > anyone on the list have experience modifying these?)
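> > > 
> > > If you do experiment with it, it's only a couple of lines in my.cnf
> > > (these are just starting values, not something I've tested against
> > > your workload):
> > > 
> > > [mysqld]
> > > sort_buffer_size = 1M
> > > read_buffer_size = 1M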
> > > 
> > > I think the biggest issue will be the system's RAM. The 2G limit on
> > > MySQL's total allocated RAM is a per-process hard limit on 32-bit
> > > architectures, but most 32-bit systems benefit greatly from having
> > > more than 2G total RAM (the OS may use the rest for disk caching,
> > > etc.). If, say, your server had 4G RAM, then you could safely
> > > configure MySQL to use very close to 2G, and performance should
> > > fly. With only 2G in the system, setting MySQL to use as much RAM
> > > as possible would leave next to nothing for the OS or other
> > > processes, and that is the problem as I see it.
> > > 
> > > However, that said, more RAM is not always the answer. You may get
> > > much more of a performance increase by modifying your application
> > > code so that it doesn't "waste" so many connections (thus allowing
> > > you to allocate plenty of RAM to the innodb_buffer_pool).
> > > 
> > > Of course, you can do both (just to play it safe, right?).  ;)
> > > 
> > 
> > Well, the applications with persistent connections are a touchy
> > subject. Our apps send and receive data over satellite links, which
> > are very expensive. The shorter the duration of the link, the less
> > it costs us. So the prevailing theory is that with persistent
> > connections the apps will spend less time reconnecting to and
> > disconnecting from the db. Even fractions of a second count when
> > you're talking about thousands of connections a day and we are
> > charged by the second for airtime.
> 
> And all of those sleeping connections are costing you how much in
> unused air time?

I think there's a misunderstanding here.  The applications run on
servers in our datacenter and wait for client connections to call in.
The "client", which is another server on the other end of the sat link,
transfers data to our apps, and our apps send data to it (depending on
whether or not data is waiting for it; one of the db queries tells the
local app this), and the local apps in turn write the connection
information to the database.

> Compared with many other databases, the cost (time and data) of making
> and breaking a MySQL connection is cheap. Try a small set of test cases
> and see for yourself. Maybe you could move 10 of your normal clients
> from using your persistent connections into a connect-as-needed model
> and see what that does to your air time, sleeping connection counts,
> and total throughput.
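> 
> Even a crude timing run from one of the app servers would tell you a
> lot. Something like the following (host, user, and password are
> placeholders, and your numbers will vary):
> 
> # time 100 connect / query / disconnect cycles with the stock client
> time for i in `seq 1 100`; do
>   mysql -h dbhost -u appuser -pXXXX -e 'SELECT 1' > /dev/null
> done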
> 
> The only way to know for certain is to try it in your environment, but
> I know that in the world of web development (where connections are
> also precious and throughput is king) being connected only when
> necessary usually works much better than trying to stay connected all
> of the time. By minimizing the communications overhead imposed on the
> server by maintaining unused open connections, the server should be
> able to respond better. You should not only have less "dead air", but
> each connection itself will take less time as the server will be more
> responsive.
> 
> Remember, I recommend making and breaking connections around blocks of
> execution, not per statement. Let's say you have a "lookup" routine
> that uses 6 queries and massages the data into something useful
> client-side. It makes no sense to flip a connection 6 times for those
> 6 queries, as they are all part of one larger process. Prepare your
> SQL statements as much as possible, make one connection, run the 6
> queries, cache the results, drop the connection, then process the
> results from cache. Another trick to maximizing connection usage is to
> make as few trips to the server as possible. Using the same scenario I
> just described, if 4 of those queries did not contain data useful to
> the user but were used primarily to build the results of the final 2
> queries, you may be able to cache the results of the first queries
> server-side, minimizing the # of frames sent across your satellite
> link.
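> 
> In MySQL terms, a TEMPORARY table is one easy way to park those
> intermediate results on the server for the life of the connection
> (table and column names here are made up, of course):
> 
> -- build the intermediate result server-side
> CREATE TEMPORARY TABLE tmp_pending
>   SELECT file_id FROM queue WHERE client_id = 42 AND sent = 0;
> -- only this final result set goes back to the client
> SELECT f.* FROM files f JOIN tmp_pending p ON p.file_id = f.file_id;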
> 

Again, there are no "over the sat link" queries going on here.  The
local apps do all the db work.  If data files come in from the remote
systems, the local apps collect the files and then write records into
the database to record the information about those files so that we can
bill for them later.  Likewise, when a remote connects and identifies
itself, the local apps query the database to see if there are any data
files waiting to be sent to the remote system.  Then once they are sent,
the local apps again update the database with the appropriate
information about the transaction so we can bill.  So basically, the
local apps just stand there "at the ready" to receive a connection and
then try to get the data transfers done as quickly as possible.  Part of
the transaction is reading and writing to the local DB.  

> > That's the whole driving force behind wanting to switch over to
> > InnoDB. The thought is it would give us faster writes when we have a
> > hundred apps trying to write at or very near the same time, because
> > of the record-level locking as opposed to the MyISAM table-level
> > locking during writes and updates.
> 
> It sounds as though you do a lot of "burst" processing. Your client
> apps collect information from the user, then interact with the
> database and wait for more user input. It's that "dead air" time
> (waiting on the live people to do something) that is costing you a
> small fortune in unused connection time and consuming valuable
> server-side resources.
> 

No live people here, just server-to-server transactions.  Moving data
over a sat link cost-effectively is tricky.  You can't use packet
acknowledgements like with TCP/IP because it just uses too much time.
We're talking connections of 2400 bps max here.

> > 
> > Now, the question is, if we need to drop the persistent connections
> > in order to move to the InnoDB engine, will the speed benefit of
> > record-level locking outweigh what is lost by not having persistent
> > connections?
> 
> Dropping the persistent connections is not necessary to move to
> InnoDB. I think many of us believe that 200 sleeping connections out
> of 236 total are worrisome no matter which storage engine you are
> using.
> 

I agree here but convincing our developers of this is another story.

> > 
> > That being said, having just looked at our connections for the past
> > 60 minutes during what is roughly our peak time, I only see about
> > 350, which is roughly one every 10 seconds, with a rough average
> > connection time of about 28 seconds, most of which is transfer of
> > data and not db reads/writes/updates.  So, I believe, from that
> > information I can make an educated guess that MyISAM table locking
> > is not the real bottleneck here, and therefore it's probably not
> > going to do us a lot of good to switch to InnoDB, especially with
> > our current hardware and application behavior.  Thoughts?
> 
> The three primary benefits of InnoDB are row-level locking (more
> concurrency for tables with heavy writes or updates), transactional
> support (for action atomicity, data consistency, and process
> isolation), and foreign keys (also for consistency). If you need any
> of those features for your application design, that would also be a
> compelling reason to change engines.
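> 
> One quick way to sanity-check the table-locking question before you
> commit to anything:
> 
> mysql> SHOW STATUS LIKE 'Table_locks%';
> 
> If Table_locks_waited stays tiny next to Table_locks_immediate, then
> MyISAM's table locks probably aren't your bottleneck yet.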
> 
> > 
> > At some point, however, as our traffic grows we will probably hit a
> > point where the db reads/writes/updates will start to become a
> > bottleneck and we'll need to look at moving to a 64-bit arch, >2 GB
> > of RAM, and the InnoDB engine.  What status variables should I be
> > looking at to see if we have a lot of reads/writes/updates being
> > delayed?
> > 
> > 
> > Thanks,
> > 
> > Jeff
> > 
> 
> Shawn Green
> Database Administrator
> Unimin Corporation - Spruce Pine
> 

