I have run many MySQL instances on one machine before; the most important
thing is the my.cnf.
Each instance needs its own individual my.cnf file.
Since your total memory is 32GB, you can divide it among the instances accordingly.
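As a rough illustration of "each instance has its own settings" (all ports, paths, and sizes below are assumptions for the example, not values from this thread), a mysqld_multi-style my.cnf keeps each instance in its own `[mysqldN]` group:

```ini
# Hypothetical /etc/my.cnf for two instances managed by mysqld_multi.
# Ports, paths, and buffer sizes are illustrative; tune for your hardware.
[mysqld_multi]
mysqld     = /usr/bin/mysqld_safe
mysqladmin = /usr/bin/mysqladmin

[mysqld1]
port                    = 3306
socket                  = /var/run/mysqld/mysqld1.sock
datadir                 = /data/mysql1
innodb_buffer_pool_size = 12G   # e.g. split ~24G of the 32G between the two

[mysqld2]
port                    = 3307
socket                  = /var/run/mysqld/mysqld2.sock
datadir                 = /data/mysql2
innodb_buffer_pool_size = 12G
```

Instances can then be started and stopped individually (e.g. `mysqld_multi start 1`, `mysqld_multi stop 2`), leaving some headroom for the OS and filesystem cache.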

On Fri, Nov 21, 2008 at 3:40 AM, Claudio Nanni <[EMAIL PROTECTED]> wrote:

> <quote>
> we are going to be
> setting up a 3 to 4 node MySQL replication cluster (1 master-rw and 2
> slaves-ro)...each having 16 to 32 GB of RAM.
> </quote>
>
>
> If what you wrote is still true, you need separate installations.
> Of course, running the master and slave on the same host is only useful as
> an online backup solution, preferably with the data partitions on separate
> storage; either way it adds little to high availability.
> But if your only concern is to test a master/slave configuration, I would go
> for multiple instances on the same host.
> If you need a complete description of how to do it, contact me.
> Sorry if I repeat myself, but for reliable tests you should have the same
> architecture in both production and preproduction.
>
>
>
>
> Claudio Nanni
>
>
>
>
>
> Shain Miley wrote:
>
>> Ok...based on the responses that I received so far...it seems like maybe I
>> should be leaning toward a non-virtualized solution.
>>
>> What I am wondering now is...
>>
>> 1)    would it be better to have one MySQL instance running and have the
>> developers each have their own DB inside that one instance?
>> or
>> 2)   would it be better to have each developer have their own MySQL
>> instance on the same machine?
>> or
>> 3)   some combination of the above...maybe have the developers split
>> between 2 or 3 MySQL instances on the same machine...
>>
>> Any thoughts?
>>
>> Thanks again,
>>
>> Shain
>>
>> Simon J Mudd wrote:
>>
>>> [EMAIL PROTECTED] (Shain Miley) writes:
>>>
>>>
>>>
>>>> I am looking into the idea of setting up 10 - 15 virtualized instances
>>>> of MySQL.  The reason for this is as follows...we are going to be
>>>> setting up a 3 to 4 node MySQL replication cluster (1 master-rw and 2
>>>> slaves-ro)...each having 16 to 32 GB of RAM.
>>>>
>>>> In order for our development team to do their work...they must have
>>>> access to some MySQL resources that are close to the production
>>>> environment.  I am not currently in a position to provide each
>>>> developer two MySQL servers (one master and one slave with 16 to 32 GB
>>>> of RAM) for testing...for obvious reasons...mainly cost ;-)
>>>>
>>>> So I have been thinking about how best to provide such resources; at
>>>> this point I am thinking that I can use OpenVZ to help me out a bit.
>>>>
>>>> I was wondering if anyone had any thoughts on this issue...should I
>>>> just run 10 instances of MySQL on the same server...are there other
>>>> options?
>>>>
>>>> I am concerned with trying to ensure that the metrics, resources,
>>>> workloads, etc. from these development servers have some sort of
>>>> relevance to our production environment...otherwise we are testing
>>>> apples and oranges...which the dev team will clearly point out...and
>>>> in a way I know we are...but I would like to minimize the effects....
>>>>
>>>>
>>>
>>> My only concern would be that busy mysql instances will interfere with
>>> each other. We used to run a couple of busy mysqld processes on the same
>>> Linux server, only to find that overall performance was worse than half
>>> of what each instance achieved on a separate server. Both mysqld
>>> instances were busy and so fought each other for I/O and for CPU, often
>>> at the same time. If this might be an issue for you, virtual servers may
>>> not be an ideal solution, as most of the free virtualisation options
>>> don't sufficiently control the hardware resources allocated to each
>>> virtual machine.
>>>
>>> YMMV.
>>>
>>> Simon
>>>
>>>
>>>
>>
>>
>>
>
> --
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:
> http://lists.mysql.com/[EMAIL PROTECTED]
>
>


-- 
I'm a MySQL DBA in China.
For more about me, just visit:
http://yueliangdao0608.cublog.cn
