The primary server (dual Athlon) has several U160 SCSI disks, 10K and 15K RPM... Approximately half the full-size images are on one 73GB U160 drive, the other half on another (about 120GB of large images alone; I am trying to get him to abandon/archive old or unused images). The system and logs run on a 36GB 10K drive, MySQL used to run on another 36GB 15K, and /home (with thumbnails and PHP files) is on another 36GB 10K... There is also a 250GB U133 drive for archives/backups. Apache 2.0.47 / PHP 4.3.4.

We are going to upgrade the rest of the 10K RPM drives to 15K RPM, but that does not (yet) help the G5... it is a full tower unit at the moment, though we are now looking at replacing it with a G5 Xserve. The desktop unit can only hold two SATA drives internally, and we do not have an external RAID/SCSI/FC enclosure to use with it... yet. My thought when setting this up was to rely more on RAM cache than on disk for the DB. The entire DB is about 5.5GB total, currently, and resides on its own partition on its own disk.
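Something like this my.cnf fragment is the direction I mean; the values are illustrative assumptions for a MySQL 4.x / MyISAM-era server with several GB of RAM, not tuned recommendations:

```ini
# Sketch only -- illustrative values, not tuned recommendations.
# Goal: cache as much of a ~5.5GB MyISAM database in RAM as possible.
[mysqld]
key_buffer = 1024M        # MyISAM index cache
table_cache = 512
query_cache_type = 1
query_cache_size = 64M    # query cache (available in MySQL 4.0.1+)
sort_buffer_size = 4M     # per-connection
read_buffer_size = 2M     # per-connection
```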

The G5 is using standard HFS+ on all disks, but the Athlon/Linux server is using ReiserFS on most disks.

I will relay the HEAP/EXPLAIN info to my client, as I do not work on that portion of the system... he does the code, I keep the systems up and running. We are trying to implement load balancing and, eventually, failover redundancy... The initial thought was for the G5 and dual Athlon to be cooperative/redundant machines... but it is looking like we will need several frontends, with the G5 and dual Athlon as backends...

All of this needs to be done on the tightest budget and in the shortest time possible... we are looking at adding 3-5 1U frontend machines, but only if we can make sure the G5 and dual Athlon boxes can handle it. Obviously some larger changes are needed, but we want to avoid throwing hardware and money at the problem without reason.

We also have a second 'frontend' machine temporarily in use, a dual PIII/850 with 2GB RAM and 4 SCSI drives. It seems strangely unable to handle much user load at all... Initially I tried simple DNS load balancing, but that was quickly discarded in favor of subdomain/site-topic separation. It seems to handle only about 20% of the main server's user load. (PHP files reside locally on it; all images are served via the main server/thttpd; some dynamic includes are done via an NFS mount to the main server.)

--
Adam Goldstein
White Wolf Networks
http://whitewlf.net


On Jan 26, 2004, at 4:39 PM, Brent Baisley wrote:


Have you tried reworking your queries a bit? I try to avoid using "IN" as much as possible. What does EXPLAIN say about how the long queries are executed? If I have to match something against a lot of values, I select the values into a HEAP table and then do a join, especially if you are going to be reusing the values within the current session.
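A minimal sketch of that HEAP-table pattern, using the category lookup from the queries quoted below (MySQL 4.x syntax; `tmp_cats` is a hypothetical name, and the real schema may differ):

```sql
-- Build a small in-memory table of the category ids once per session,
-- then join against it instead of repeating IN (...) in each query.
CREATE TEMPORARY TABLE tmp_cats (
    id_categorie INT NOT NULL,
    PRIMARY KEY (id_categorie)
) TYPE=HEAP;

INSERT INTO tmp_cats VALUES (4204);   -- add further ids as needed

SELECT e.id_enchere, e.titre, e.date_fin
FROM enchere e
INNER JOIN tmp_cats c ON c.id_categorie = e.id_categorie
WHERE e.date_fin >= '2004-01-26 14:41:59'
ORDER BY e.date_fin ASC
LIMIT 0, 80;
```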
Are you storing the images themselves (img1, img2, img3) in the database? I would recommend against that in a high-load database: it bloats the database size, forcing the database to use a lot more RAM for caching, and it prevents you from creating tables with fixed-length records. Keeping the images as files pushes the serving of the images out to the file system and web server.
What kind of RAID setup do you have? You just said you had 73GB 10K disks. Why didn't you go with 15k disks? Cost?


On Jan 26, 2004, at 3:42 PM, Adam Goldstein wrote:

Yes, I saw this post before... I am not sure why I cannot allocate more RAM on this box; it is a clean 10.3 install with the 10.3.2 update. I got this box because I love OS X, and have always loved Apple, but this is not working out well. Much less powerful (and less expensive) machines can do a better job of this (the entire site used to run on ONE dual Athlon box with 3GB RAM, and moving MySQL to the dedicated G5 seems to have made -NO- difference).

Obviously, there is something wrong somewhere, and I need to find where. My client (the site creator) is depending on me to help him boost the site's ability to handle more users, but we've always had to do it on a light budget. I need to know where to look first, as we are running out of time... His users are a fickle bunch, and will likely migrate off to other sites if this slowness continues (it has been degrading for the past 3-4 months, from slow at peak to dead for all peak hours).

These are example queries from the heavier pages:

1: SELECT E AS name, id_parent, fvc_parent_path, fvc_parent_path_E AS fvc_parent_path_txt, fvc_childs, items FROM categories WHERE id = 4204 LIMIT 0,1
Time: 0.0004551410675 sec / Type: Buffered
2: SELECT e.id_enchere, e.id_vendeur, e.titre, e.img1, e.img2, e.img3, e.prix_depart, e.price_present, e.fnb_buyitnow_price, e.date_debut, e.date_debut_origine, e.date_fin, e.fin_bids, e.fvc_active FROM enchere e WHERE e.id_categorie IN (4204) AND (e.date_fin >= '2004-01-26 14:41:59') ORDER BY date_fin ASC LIMIT 0, 80
Time: 37.60733294 sec / Type: Buffered
3: SELECT COUNT(e.id_enchere) AS nbre FROM enchere e WHERE e.id_categorie IN (4204) AND (e.date_fin >= '2004-01-26 14:41:59')
Time: 0.9267110825 sec / Type: Buffered
4: INSERT INTO td_loadtime SET fvc_page = '/liste.php', fvc_query = 'language=E&cat=4204&sql_log=Y', fdt_date = '2004-01-26', fdt_time = '14:42:38', fnb_seconds = 39.22
Time: 0.005410909653 sec / Type: Buffered


making the page take > 40 seconds to load.
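Query 2, at 37.6 seconds, is the obvious candidate for EXPLAIN; a composite index covering both the filter and the sort column is one hypothetical fix (column names are taken from the query above; whether it actually helps depends on the real schema and existing indexes):

```sql
-- See how MySQL resolves the slow listing query; watch for
-- 'Using filesort' and a large 'rows' estimate in the output.
EXPLAIN
SELECT e.id_enchere, e.titre, e.date_fin
FROM enchere e
WHERE e.id_categorie IN (4204)
  AND e.date_fin >= '2004-01-26 14:41:59'
ORDER BY e.date_fin ASC
LIMIT 0, 80;

-- Hypothetical composite index: lets MySQL filter on category and read
-- rows already ordered by date_fin, potentially avoiding the filesort.
ALTER TABLE enchere ADD INDEX idx_cat_fin (id_categorie, date_fin);
```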

--
Brent Baisley
Systems Architect
Landover Associates, Inc.
Search & Advisory Services for Advanced Technology Environments
p: 212.759.6400/800.759.0577


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:    http://lists.mysql.com/[EMAIL PROTECTED]


