<SNIP SNIP>On Jan 28, 2004, at 12:01 PM, Bruce Dembecki wrote this wonderful stuff:
So.. My tips for you:
1) Consider a switch to InnoDB, the performance hit was dramatic, and it's
about SO much more than transactions (which we still don't do)!
Consider it switched! ...as soon as I find the way to do so :) Are there any changes necessary to his code/queries to use InnoDB?
No changes needed to code/queries, except probably the daily table optimize/repair can go away... As far as how to do it...
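To make the "how to do it" part concrete: the usual route is a per-table ALTER plus some InnoDB settings in my.cnf. A minimal sketch, assuming a hypothetical table named `items` and a MySQL 4.0-era server (table name and sizes are illustrative, not from this thread):

```sql
-- Convert one MyISAM table in place; repeat per table. This locks and
-- rewrites the table, so run it during a quiet window.
ALTER TABLE items TYPE = InnoDB;   -- newer servers spell this ENGINE = InnoDB
```

In my.cnf you would also want something like `innodb_buffer_pool_size = 1024M` (a large share of RAM on a dedicated DB box) and a sensible `innodb_log_file_size`; the exact numbers depend on the machine, so treat these as starting points, not a recipe.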
<SNIP>Unfortunately, while we would love to have similar down times to that, we obviously have overloads on a daily basis, bringing the machine down, or bogging it down, for large parts of the day. This is only recent (the past 2-3 months); before that it bogged down only a couple of times a day, and the slowness was at least functional, though that's a very relative term, especially when dealing with users who may have shorter attention spans.

How are you spanning your queries over multiple DB servers? And are all writes being done on one master server, or have you found a way to do two-way replication? Currently we have only the main appserver (apache2/php/thttpd), and are trying a secondary apache2/php server (dual PIII/850, 2G) which doesn't appear to be handling the load well at all. We're looking, obviously, at adding several front end appservers, though we want to ensure the G5 will be able to handle its job if there are 5-6+ frontends on it.

Here are a few stats from today. If you see anything oddball, please let me know. I see that Qcache_free_memory is awfully large; I assume that means it's being tremendously wasted.

The stats say that the Query Cache is WAY big, but I predicted that. As for the slowness, I can't address the application side, but MySQL shouldn't be the cause of problems with this sort of hardware and server load. On the rest of the stats, my comments are: the number of queries isn't that high, and this server, properly tuned, should be more than enough.
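Since the query cache is oversized, the check and the fix can be sketched as below (the 32M figure is an illustrative guess, not a recommendation from the thread):

```sql
-- See how much of the cache is actually in use:
SHOW STATUS LIKE 'Qcache%';
-- If Qcache_free_memory stays close to query_cache_size, the cache is
-- oversized; shrink it and give the RAM back to key buffers, e.g.:
SET GLOBAL query_cache_size = 33554432;  -- 32M, illustrative value
```

Make the matching change in my.cnf as well so it survives a restart.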
Number of connections seems high; that's an Application thing, and I can only assume it's efficient in how it manages its database connections etc.? My stats show 47K connections on an uptime of a week; yours is at 242K connections in less than a day. My server typically has a little under 400 connections actually present at any time, and if I have 10 threads running (including replication etc.) I start to get nervous and look for a problem (and pagers go off if we get over 20 threads running for more than 30 seconds). The number of times your connections change database seems high too; there are as many change-database commands as selects. Again this is likely an Application thing, but it might be an area that could be improved.
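One common source of change-database commands with the old PHP mysql extension is calling mysql_select_db() before every query, even on a reused persistent connection. Whether that is what the client's code does is an assumption; the credentials and names below are placeholders, not from this thread:

```php
<?php
// Placeholder host/user/db names -- purely illustrative.
$link = mysql_pconnect('dbhost', 'appuser', 'secret');

// Select the database ONCE per (persistent) connection, not before
// every query; each mysql_select_db() call sends a change-db command
// to the server, which is what inflates that counter.
mysql_select_db('shopdb', $link);

// Subsequent queries reuse the already-selected database:
$result = mysql_query('SELECT id, name FROM items LIMIT 10', $link);
?>
```

Note also that with pconnects every Apache child pins its own link, so the connection count scales with MaxClients times the number of frontends.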
While we prepare for the InnoDB switch, I would like to see if there is some idea as to why we are seeing so many DB changes and connections. Apache is set to allow unlimited persistent connections (prefork MPM... worker is still apparently unable to be used with PHP), and php.ini is set to allow unlimited persistent connections. I am unsure, yet, exactly what calls are made in the client's code, but I do know his mysql module uses pconnects. Is there a timeout setting I should be looking for, or something else? I will include some of the more important apache/php/sysctl settings below. The G5 has no odd settings, though it is running OS X, not OS X Server (10.3.2).
I have felt that the app server is a primary cause of the slowdowns. Every page includes debug info reporting the timing of every SQL call on the page (if the debug bit is set in the URL when requested), and it doesn't always match the page load times. The pages may load slowly (time until data starts being sent to the browser, not the time the browser takes to completely receive it plus images, etc.) while the SQL info says the queries took no time to complete.
The overall generation time of the page is also calculated and reported for every page, and shows the same discrepancies at times. In general, however, the SQL times added together form 85-95% of the total page generation time, and that total is roughly the amount of time it takes for the page to start rendering. But the larger select queries which form the big/heavy portions of the pages (item lists within their categories) can show just plain wild times, with seemingly no reason: the same query done over and over will sometimes jump from under 1 sec to 5, 30, or 60 sec, and then back down again. The mysql server may not appear heavily loaded at these times, either; it will appear mostly normal.
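For reference, per-query debug timing of the kind described above is usually a thin wrapper like the following. This is a hedged sketch of the general technique, not the client's actual instrumentation:

```php
<?php
// Minimal per-query timing wrapper (assumed similar in spirit to the
// client's debug code -- not taken from it).
function timed_query($sql, $link) {
    $start = microtime(true);   // microtime(true) needs PHP 5; on PHP 4,
                                // split the "msec sec" string instead
    $result = mysql_query($sql, $link);
    $elapsed = microtime(true) - $start;
    // This measures the round trip as seen from PHP, so it includes
    // network latency and any wait for a free connection, not only
    // server-side execution time -- one possible source of the
    // query-time vs. page-time discrepancy.
    error_log(sprintf("%.4fs  %s", $elapsed, $sql));
    return $result;
}
?>
```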
While we have nice SystemStats on our front end boxes (load/process counts/bandwidth/file sizes, etc.) we do not yet have anything like it on the G5. The same stats calls do not run on the G5, or their output is unrecognized. I am trying to find a good stats grapher for OSX, particularly a non-console app (I use SystemStats2 from de-neef.net, a simple php/rrd poller that generates nice graphs in web page format). My client has very recently created a page load time graph, which I will shortly be correlating to our other graphs to see if I can find any points to look at.
(Only odd/important settings included, all on app servers)
php.ini: output_buffering = Off
zlib.output_compression = Off
max_execution_time = 120
max_input_time = 90
memory_limit = 40M (was 16M then 24M... )
register_globals = On
default_socket_timeout = 60
sql.safe_mode = Off
mysql.allow_persistent = On
mysql.max_persistent = -1 (was 230, apache max clients was set to 230 once, changed now)
mysql.max_links = 1200 (not -1, to prevent runaway)
mysql.connect_timeout = 60 (try higher... or -1 ? )
mysql.trace_mode = Off
session.save_handler = files
session.use_cookies = 1
session.auto_start = 0
From apache2 conf:
Timeout 160
KeepAlive On
MaxKeepAliveRequests 0
KeepAliveTimeout 10
ListenBacklog 150
SendBufferSize 1024
StartServers 40
MinSpareServers 40
MaxSpareServers 100
ServerLimit 512
MaxClients 450
MaxRequestsPerChild 1000
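With unlimited persistent connections, every Apache child can pin one MySQL link, so the worst case per frontend equals MaxClients. A back-of-envelope check (the 5-6 frontend figure is from above; MySQL's historical max_connections default of 100 should be verified against your actual setting):

```
450 MaxClients/frontend x 5 frontends = 2250 potential persistent links
vs. MySQL max_connections (default 100)
```

If max_connections on the G5 isn't raised well above the combined frontend capacity, pconnects from several frontends will start failing with "Too many connections".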
From sysctl:
net.ipv4.tcp_ecn = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.tcp_fin_timeout = 20
net.ipv4.tcp_keepalive_time = 2000
net.ipv4.tcp_sack = 0
net.ipv4.tcp_timestamps = 0
vm.bdflush = 50 1000 64 256 1000 3000 60 20 0
net.ipv4.tcp_wmem = 4096 16384 262143
vm.max-readahead = 512
vm.min-readahead = 10
-- Adam Goldstein White Wolf Networks http://whitewlf.net
-- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]