Basil Hussain wrote:
> 
> Hi,
> 
> > > We currently have a dedicated server for MySQL. The server is a dual
> > > pentium III (1Ghz) with 2GB of RAM in it. It also has 2 18GB 10,000
> > > RPM drives in it arranged in a Raid 1 configuration (mirror).
> > > Sometime in the next 3-6 months we will be maxing out its
> > > capacity. (We were maxed out a few days ago, but we added 1GB of RAM
> > > and cached some query results). The system is currently running
> > > RedHat Linux 6.2.
> > >
> > > While there are some non-optimal queries and maybe some variable tuning
> > > that we can and should do, we will need to upgrade at some point and it's
> > > not obvious to me what the upgrade path is.
> >
> > Until we have more of an idea where your system is stressed, it's hard
> > to say.  Are the CPUs maxed?  I/O channels?  RAM?
> 
> If your system is getting stressed with disk I/O, then a good first step
> could be to move to a different RAID configuration. As you're running RAID
> 1, when writing data, each bit of data has to be written to both drives. If
> your environment involves a lot of INSERT queries, then it may be worth
> adding another disk and moving up to RAID 5.

Strange. My understanding was that RAID 5 is good for read bandwidth,
but that keeping the parity data up to date slows it down for write
bandwidth.

> Alternatively, you might
> consider forsaking redundancy and going down to RAID 0. Also, are you
> running hardware or software RAID? 

AMI Megaraid hardware.

> If software, getting a dedicated RAID
> card will lessen the load on your CPUs.
> 
> > > The axes of expansion I see are:
> > >     1) CPU speed (2 GHz processors?)
> > >     2) # of CPUs (quad processor, 8 processors?)
> > >     3) Multiple machines (replication)
> > >     4) More memory (current system maxes out at 4GB)
> > >     5) Different CPUs (SPARC, Alpha, IBM Power, HP-PA, Itanium)
> > >     6) Faster disks (15,000 RPM)
> > >     7) More disks (striping, different databases/tables on different disks, MySQL striping)
> > >     8) Switch some high contention tables to InnoDB, BDB or Gemini to avoid lock contention
> > >     9) Optimize server variables
> > >
> > > Which approach or combination of approaches is likely to double
> > > (quadruple?) our throughput at the best price performance?  I have
> > > attached some info to help characterize our usage.
> >
> > Replication.  You can do it with less expensive hardware.  You'll get
> > good performance and probably be able to scale farther with it.  Of
> > course, you'll want to look at #9 before spending any money.  And try
> > to get an idea of where your contention for resources is today.
> 
> Yes, you should definitely look at option #9 first. Here are a few pointers to
> some things that immediately spring off the screen at me:
> 
> | Open_tables              | 1296       |
> | Open_files               | 2180712    |
> | Open_streams             | 0          |
> | Opened_tables            | 1277057    |
> | table_cache              | 2024       |
> 
> Your Opened_tables figure is quite large, which means you are incurring
> extra I/O penalties as tables have to be constantly opened and closed. You
> should try increasing your table_cache size.
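
For the archives, I believe that on 3.23 this is just a my.cnf change
along the lines below (4096 is a number plucked from the air, and as far
as I know it only takes effect on a server restart):

    [mysqld]
    # cache more open table handlers so tables aren't constantly
    # being opened and closed
    set-variable = table_cache=4096

Each cached handler holds file descriptors, so the per-process
open-files limit (ulimit -n) probably needs to go up along with it.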
> 
> | Slow_launch_threads      | 1          |
> | Threads_cached           | 0          |
> | Threads_created          | 346157     |
> | thread_cache_size        | 0          |
> 
> Slow_launch_threads should never be more than zero. And, seeing as your
> configured slow launch yardstick time is 2 seconds, this indicates you may
> be starting to have a bottleneck here. You should try setting a
> thread_cache_size of something like 32 - maybe higher.
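
Presumably that's also just a my.cnf entry (32 is Basil's ballpark, not
a measured figure):

    [mysqld]
    # keep finished connection threads around for reuse instead of
    # spawning a new thread for every incoming connection
    set-variable = thread_cache_size=32

After the change, Threads_created should grow much more slowly relative
to the Connections counter.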
> 
> | Table_locks_immediate    | 27157119   |
> | Table_locks_waited       | 58498      |
> | Key_read_requests        | 1535872968 |
> | Key_reads                | 5560163    |
> 
> This is good. Table locks that had to be waited for are less than 1% of
> total locks. You don't seem to have too much of a problem with lock
> contention. Also, your ratio of key reads/requests is way less than 0.01, so
> no general problems with index usage on your queries.
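
For the record, the arithmetic behind those two ratios:

    Table_locks_waited / Table_locks_immediate = 58498 / 27157119 ~ 0.002 (0.2%)
    Key_reads / Key_read_requests = 5560163 / 1535872968 ~ 0.0036 (0.36%)

Both comfortably under the thresholds Basil mentions.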
> 
> | Created_tmp_disk_tables  | 415975     |
> | tmp_table_size           | 2097144    |
> 
> Created_tmp_disk_tables could probably be a little lower. Try increasing
> your tmp_table_size memory figure to lessen the number of temp tables
> written to disk - 2MB is probably quite small if you're shuffling large
> amounts of data.
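
Something like this, I'd guess (32M is an arbitrary figure - it's a
per-temporary-table ceiling, so be careful with it if you have lots of
concurrent connections building big temp tables):

    [mysqld]
    # let in-memory (HEAP) temporary tables grow to 32M before MySQL
    # converts them to on-disk MyISAM tables
    set-variable = tmp_table_size=32M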
> 
> As for replication, there could be a couple of sticking points with this
> strategy that you may need to overcome. The first is whether your client
> applications (be they web scripts, custom apps, whatever) can easily be
> re-programmed to support distributing their SQL query load amongst several
> servers. Secondly, if you are chucking large amounts of data around and your
> servers are replicating it all, your networking may not be up to scratch. If
> you go for replication you should make sure you're running at least 100Mbps
> between your MySQL servers. (BTW, if in a closed environment, running
> 100Mbps to the clients might help also.)
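
In case it saves someone a trip to the manual, the basic 3.23
replication setup is roughly the following (hostname, user and password
are made up; the slave also needs an account to connect with and a copy
of the master's data to start from - the manual covers both):

    # master my.cnf
    [mysqld]
    log-bin
    server-id=1

    # slave my.cnf
    [mysqld]
    server-id=2
    master-host=db-master.example.com
    master-user=repl
    master-password=secret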
> 
> Looking at option #8, you may see quite a large performance boost if you go
> for InnoDB tables and your query load consists of large quantities of small
> queries, as this is where row-level locking, etc. will help most.
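
If anyone does go that route: on 3.23 InnoDB first needs a tablespace
defined in my.cnf (the path and size below are only examples):

    [mysqld]
    # define an InnoDB tablespace big enough for the converted tables
    innodb_data_home_dir = /var/lib/mysql/
    innodb_data_file_path = ibdata1:2000M

and then each table is moved over with something like (table name is
just an example):

    ALTER TABLE some_busy_table TYPE=InnoDB;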
> 
> Regards,
> 
> Basil Hussain
> ---------------------------------------
> Internet Developer, Kodak Weddings
> E-Mail: [EMAIL PROTECTED]
