Thanks, Ian.
W.
On 4/4/2012 4:02 AM, Ian wrote:
> On 04/04/2012 01:11, Wes Modes wrote:
>> On 4/3/2012 3:04 AM, Ian wrote:
>>> On 03/04/2012 00:47, Wes Modes wrote:
Thanks again for sharing your knowledge. I do believe the answers I've
been receiving, but since I have requirements that I cannot easily alter, I'm
also gently pushing my expert advisers here to look beyond their own
preferences and direct experience.
On 4/3/2012 3:04 AM, Ian wrote:
> On 03/04/2012 00:47, Wes Modes wrote:
>> Thanks again for sharing your knowledge. I do believe the answers I've
>> been receiving, but since I have requirements that I cannot easily alter, I'm
>> also gently pushing my expert advisers here to look beyond their own
>> preferences and direct experience.
Am I right in thinking that if you can split reads and writes without the
application having to be replication-aware, you do not need multiple
masters? You can simply have standard MySQL replication, yes?
For us, we were only interested in multiple masters so that we could
read or write from any…
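The question above — splitting reads and writes without the application being replication-aware — is usually answered at a routing tier. A minimal sketch of the idea (hostnames and the routing class are hypothetical, not from this thread; in practice MySQL Proxy or a connector-level splitter plays this role):

```python
# Minimal read/write splitter: route each statement to the single master
# or to a round-robin pool of read slaves based on the leading SQL verb.
# Hostnames are illustrative placeholders.
import itertools

WRITE_VERBS = {"INSERT", "UPDATE", "DELETE", "REPLACE", "CREATE", "ALTER", "DROP"}

class SplitRouter:
    def __init__(self, master, slaves):
        self.master = master
        self._slaves = itertools.cycle(slaves)  # round-robin over read slaves

    def route(self, sql):
        # First word of the statement decides write (master) vs read (slave).
        verb = sql.lstrip().split(None, 1)[0].upper()
        return self.master if verb in WRITE_VERBS else next(self._slaves)

router = SplitRouter("db-master", ["db-slave-1", "db-slave-2"])
print(router.route("UPDATE items SET title = 'x'"))  # -> db-master
print(router.route("SELECT * FROM items"))           # -> db-slave-1
```

With this shape the application sees one connection point, the topology stays plain master-plus-slaves, and no multi-master setup is needed for read scaling.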
- Original Message -
> From: "Ian"
>
> with each master having any number of slaves. Set MySQL Proxy to
> send writes to the masters and reads to the slaves.
Yes, except when you have replication delays (asynchronous, remember?), like
someone else recently posted, your application may write to a master and then
read stale data from a slave that has not yet caught up.
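The usual mitigation for that stale-read window is lag-aware routing: only send reads to a slave whose `Seconds_Behind_Master` (as reported by MySQL's `SHOW SLAVE STATUS`) is under a threshold, and fall back to the master otherwise. A sketch, with the lag probe stubbed out as plain data:

```python
# Lag-aware read routing: prefer a slave only when its replication lag
# is under a threshold; otherwise read from the master so the application
# sees its own writes. The slave_lags dict stands in for a real probe of
# Seconds_Behind_Master on each slave (None means replication is broken).
MAX_LAG_SECONDS = 2

def pick_read_host(master, slave_lags):
    for host, lag in slave_lags.items():
        if lag is not None and lag <= MAX_LAG_SECONDS:
            return host
    return master  # every slave is lagging or broken

print(pick_read_host("db-master", {"db-slave-1": 40, "db-slave-2": 1}))
# -> db-slave-2
```

This does not remove the asynchrony, it only bounds how stale a routed read can be; truly read-your-own-writes semantics still require pinning those reads to the master.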
On 03/04/2012 00:47, Wes Modes wrote:
> Thanks again for sharing your knowledge. I do believe the answers I've
> been receiving, but since I have requirements that I cannot easily alter, I'm
> also gently pushing my expert advisers here to look beyond their own
> preferences and direct experience.
>
>
- Original Message -
From: "Wes Modes"
To: mysql@lists.mysql.com
Sent: Monday, April 2, 2012 7:47:18 PM
Subject: Re: HA & Scalability w MySQL + SAN + VMWare: Architecture Suggestion
Wanted
Thanks again for sharing your knowledge. I do believe the answers I've
been receiving, but since I have requirements that I cannot easily alter, I'm
also gently pushing my expert advisers here to look beyond their own
preferences and direct experience.
RE: Shared storage. I can easily let go of the…
Hello Wes,
On 4/2/2012 4:05 PM, Wes Modes wrote:
Thanks Shawn and Karen, for the suggestions, even given my vague
requirements.
To clarify some of my requirements.
*Application: *We are using an open-source application called Omeka,
which is a "free, flexible, and open source web-publishing platform for
the display of library, museum, archiv…"
DRBD, SAN, etc. Sure, they are highly redundant. Sure they are
reliable. But they do not handle the building being in a
flood/earthquake/tornado/etc. If you want HA, you have to start with
having two (or more) copies of all the data sitting in geographically
distinct flood plains, etc. If…
Thanks Shawn and Karen, for the suggestions, even given my vague
requirements.
To clarify some of my requirements.
*Application: *We are using an open-source application called Omeka,
which is a "free, flexible, and open source web-publishing platform for
the display of library, museum, archiv…"
Hello Wes,
On 3/29/2012 9:23 PM, Wes Modes wrote:
First, thank you in advance for good solid suggestions you can offer. I
suppose someone has already asked this, but perhaps you will view it as
a fun challenge to meet my many criteria with your suggested MySQL
architecture.
I am working at a University on a high-profile database-driven project…
Hi,
First, it is kind of funny to advise on something that is unknown. The devil
of such systems is in the details. A small detail might cancel the whole big
idea of using, say, sharding, clustering, etc. So any discussion on this will
be quite general and can only be applied to your project…
Caution: You are not going to like my answers.
> and VMWare
> shared storage
Why? Seems like scalability should plan on having dedicated hardware.
> replication
The best choice
> multi-mastering
Dual-Master gives good HA
> DRBD
Partially solves one subset of HA; don't bother with it; set your…
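On "Dual-Master gives good HA": the standard precaution in a dual-master setup is MySQL's `auto_increment_increment` and `auto_increment_offset` settings, so the two masters hand out disjoint auto-increment keys and concurrent inserts never collide. A small simulation of why those settings work (the function is illustrative, not part of MySQL):

```python
# With auto_increment_increment=2 and auto_increment_offset set to 1 on
# one master and 2 on the other, each master generates its own arithmetic
# sequence of keys; the two sequences are disjoint, so writes accepted on
# either master cannot produce duplicate primary keys.
def key_stream(offset, increment, n):
    return [offset + i * increment for i in range(n)]

master_a = key_stream(1, 2, 5)   # [1, 3, 5, 7, 9]
master_b = key_stream(2, 2, 5)   # [2, 4, 6, 8, 10]
assert not set(master_a) & set(master_b)  # no primary-key collisions
print(master_a, master_b)
```

Key collisions are only one failure mode, of course; conflicting updates to the same row still need to be avoided at the application or routing layer.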
First, thank you in advance for good solid suggestions you can offer. I
suppose someone has already asked this, but perhaps you will view it as
a fun challenge to meet my many criteria with your suggested MySQL
architecture.
I am working at a University on a high-profile database-driven project…