Re: Questions about mysql-proxy...

2012-04-05 Thread Wes Modes
No one here has any experience with it? 

W.

On 4/4/2012 2:17 PM, Claudio Nanni wrote:
> Wes,
> Thanks for these questions about this 'ghost' of the MySQL world; it
> seems more a legend than a real thing!
> I am sorry I do not have the answers but I would love to hear some.
>
> All I can say is that MySQL Proxy is currently (still) in Alpha
> http://dev.mysql.com/downloads/mysql-proxy/
> https://launchpad.net/mysql-proxy
>
> so it is unlikely to be used in production.
>
> Such a shame that it was not developed further; this and other
> features (like online backups) are really missing in MySQL, in my opinion.
>
> Cheers
>
> Claudio
>  
> 2012/4/4 Wes Modes <wmo...@ucsc.edu>
>
> I asked these questions in context of my clustering enquiries, but
> here
> it is more specific to mysql-proxy:
>
>  1. First, what is the best place to ask specific questions about
>mysql-proxy?
>
>  2. Does the proxy sit on a separate server and route all MySQL
>requests, or is it installed on each of the MySQL nodes and
>re-shuffle MySQL requests to the appropriate place?
>
>  3. Can multiple proxies be run in concert to provide redundancy and
>scalability as well as eliminate SPoF and bottlenecks?
>
>  4. In 2007 when RW Splitting was new, there were a few problems and
>limitations. What is the current status of development of this
>important feature? Thanks!
>
> Wes
>
> --
> Wes Modes
> Systems Designer, Developer, and Administrator
> University Library ITS
> University of California, Santa Cruz
>
>
>
>
> -- 
> Claudio

-- 
Wes Modes
Systems Designer, Developer, and Administrator
University Library ITS
University of California, Santa Cruz



Re: HA & Scalability w MySQL + SAN + VMWare: Architecture Suggestion Wanted

2012-04-04 Thread Wes Modes
Thanks, Ian.

W.

On 4/4/2012 4:02 AM, Ian wrote:
> On 04/04/2012 01:11, Wes Modes wrote:
>> On 4/3/2012 3:04 AM, Ian wrote:
>>> On 03/04/2012 00:47, Wes Modes wrote:
>>>> Thanks again for sharing your knowledge.  I do believe the answers I've
>>>> been receiving, but since I have requirements that I cannot easily alter, I'm
>>>> also gently pushing my expert advisers here to look beyond their own
>>>> preferences and direct experience.
>>>>
>>>> RE: Shared storage.  I can easily let go of the preference to take
>>>> advantage of shared storage.  I understand duplicated databases are the
>>>> essence of database redundancy.  You make good points.
>>>>
>>>> In terms of the acceptability of a small fraction of users being
>>>> temporarily unable to access services:  rather than sharding, which
>>>> again requires more control over the application than we have, I was
>>>> more envisioning that would be the fraction of users who hit the one
>>>> peer MySQL server that is temporarily unavailable due to h/w or s/w
>>>> failure or DB corruption while its fail over is powered up.
>>>>
>>>> Does MySQL Cluster seem like it will address my requirements to allow us
>>>> to horizontally scale a number of MySQL nodes as peers without
>>>> separating reads and writes, or slaves and masters?
>>>>
>>>> Wes
>>> Hi Wes,
>>>
>>> If you can't alter the application to split reads and writes, why not
>>> let MySQL Proxy to do it for you?
>>>
>>> http://forge.mysql.com/wiki/MySQL_Proxy
>>>
>>> Combine this with haproxy and you could build a multi-master environment
>>> with each master having any number of slaves.  Set MySQL Proxy to send
>>> writes to the masters and reads to the slaves.
>>>
>>> Regards
>>>
>>> Ian
>> Ian, what is the best place to ask specific questions about mysql-proxy? 
>>
>> In general, here are my questions:
>>
>>  1. Does the proxy sit on a separate server and route all MySQL
>> requests, or is it installed on each of the MySQL nodes and
>> re-shuffle MySQL requests to the appropriate place?
>>
>>  2. Can multiple proxies be run in concert to provide redundancy and
>> scalability as well as eliminate SPoF and bottlenecks?
>>
>>  3. In 2007 when RW Splitting was new, there were a few problems and
>> limitations.  What is the current status of development of this
>> important feature?
>>
>> Thanks!
>>
>> And again I appreciate the brainstorming that many have done here to
>> find a solution that fits my requirements.
>>
>> Wes
> Hi Wes,
>
> I'm afraid I can't help you with your questions as I have never actually
> used MySQL Proxy!  I was aware of its existence, so I brought it up in case
> it fitted your needs.
>
> I sympathize with your position and hope you find the answers you need.
>
> Regards
>
> Ian

-- 
Wes Modes
Systems Designer, Developer, and Administrator
University Library ITS
University of California, Santa Cruz


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:    http://lists.mysql.com/mysql



Re: HA & Scalability w MySQL + SAN + VMWare: Architecture Suggestion Wanted

2012-04-03 Thread Wes Modes
On 4/3/2012 3:04 AM, Ian wrote:
> On 03/04/2012 00:47, Wes Modes wrote:
>> Thanks again for sharing your knowledge.  I do believe the answers I've
>> been receiving, but since I have requirements that I cannot easily alter, I'm
>> also gently pushing my expert advisers here to look beyond their own
>> preferences and direct experience.
>>
>> RE: Shared storage.  I can easily let go of the preference to take
>> advantage of shared storage.  I understand duplicated databases are the
>> essence of database redundancy.  You make good points.
>>
>> In terms of the acceptability of a small fraction of users being
>> temporarily unable to access services:  rather than sharding, which
>> again requires more control over the application than we have, I was
>> more envisioning that would be the fraction of users who hit the one
>> peer MySQL server that is temporarily unavailable due to h/w or s/w
>> failure or DB corruption while its fail over is powered up.
>>
>> Does MySQL Cluster seem like it will address my requirements to allow us
>> to horizontally scale a number of MySQL nodes as peers without
>> separating reads and writes, or slaves and masters?
>>
>> Wes
> Hi Wes,
>
> If you can't alter the application to split reads and writes, why not
> let MySQL Proxy to do it for you?
>
> http://forge.mysql.com/wiki/MySQL_Proxy
>
> Combine this with haproxy and you could build a multi-master environment
> with each master having any number of slaves.  Set MySQL Proxy to send
> writes to the masters and reads to the slaves.
>
> Regards
>
> Ian
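[Editor's note: Ian's suggested combination (MySQL Proxy doing the read/write routing, haproxy in front for redundancy) can be sketched as an haproxy TCP front end. Everything below is illustrative: host names, IPs, and ports are invented, not taken from the thread; MySQL Proxy listened on port 4040 by default in this era.]

```
# Hypothetical haproxy front end balancing two MySQL Proxy instances.
listen mysql-proxy-pool
    bind 0.0.0.0:3306
    mode tcp
    balance roundrobin
    server proxy1 10.0.0.11:4040 check
    server proxy2 10.0.0.12:4040 check
```

With two or more proxies behind haproxy (haproxy itself made redundant with keepalived or similar), no single proxy instance is a single point of failure, which is roughly what the questions that follow ask about.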

Ian, what is the best place to ask specific questions about mysql-proxy? 

In general, here are my questions:

 1. Does the proxy sit on a separate server and route all MySQL
requests, or is it installed on each of the MySQL nodes and
re-shuffle MySQL requests to the appropriate place?

 2. Can multiple proxies be run in concert to provide redundancy and
scalability as well as eliminate SPoF and bottlenecks?

 3. In 2007 when RW Splitting was new, there were a few problems and
limitations.  What is the current status of development of this
important feature?

Thanks!

And again I appreciate the brainstorming that many have done here to
find a solution that fits my requirements.

Wes

-- 
Wes Modes
Systems Designer, Developer, and Administrator
University Library ITS
University of California, Santa Cruz



Re: HA & Scalability w MySQL + SAN + VMWare: Architecture Suggestion Wanted

2012-04-03 Thread Wes Modes
Am I right in seeing that if you can split reads and writes without the
application having to be replication-aware, one does not need multiple
masters?  One can simply have standard MySQL replication, yes?

For us, we were only interested in multiple masters so that we could
read or write from any of the available MySQL nodes.

W.

On 4/3/2012 3:04 AM, Ian wrote:
> On 03/04/2012 00:47, Wes Modes wrote:
>> Thanks again for sharing your knowledge.  I do believe the answers I've
>> been receiving, but since I have requirements that I cannot easily alter, I'm
>> also gently pushing my expert advisers here to look beyond their own
>> preferences and direct experience.
>>
>> RE: Shared storage.  I can easily let go of the preference to take
>> advantage of shared storage.  I understand duplicated databases are the
>> essence of database redundancy.  You make good points.
>>
>> In terms of the acceptability of a small fraction of users being
>> temporarily unable to access services:  rather than sharding, which
>> again requires more control over the application than we have, I was
>> more envisioning that would be the fraction of users who hit the one
>> peer MySQL server that is temporarily unavailable due to h/w or s/w
>> failure or DB corruption while its fail over is powered up.
>>
>> Does MySQL Cluster seem like it will address my requirements to allow us
>> to horizontally scale a number of MySQL nodes as peers without
>> separating reads and writes, or slaves and masters?
>>
>> Wes
> Hi Wes,
>
> If you can't alter the application to split reads and writes, why not
> let MySQL Proxy to do it for you?
>
> http://forge.mysql.com/wiki/MySQL_Proxy
>
> Combine this with haproxy and you could build a multi-master environment
> with each master having any number of slaves.  Set MySQL Proxy to send
> writes to the masters and reads to the slaves.
>
> Regards
>
> Ian

-- 
Wes Modes
Systems Designer, Developer, and Administrator
University Library ITS
University of California, Santa Cruz





Re: Replication rings/maatkit (was: HA & Scalability w MySQL + SAN + VMWare)

2012-04-02 Thread Wes Modes

> Replication rings are possible but you must design your application to
> take special care to NOT update the same row in multiple nodes of the
> ring at the same time. This is even harder to design and code for than
> splitting writes/reads to master/slaves.
>
> Also the loss of one node of a replication ring is not as easy to
> recover from as simply promoting one slave to become the new master of
> a replication tree (demoting the recovered former-master to become yet
> another slave) as there may be pending events in the relay logs of the
> lost node that have not yet been relayed to the downstream node.
>
> I may not have every answer, but I have seen nearly every kind of
> failure.  Everyone else is encouraged to add their views to the
> discussion.
>

Has anyone used maatkit or the Percona tools to set up circular replication?  How
does it affect this system's reliability and robustness?  Do the tools
help to deal with fail over?
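[Editor's note: whether or not Maatkit is in the picture, recovering a ring usually comes down to re-pointing the surviving node's replication at the next live upstream master. A rough MySQL 5.x-era sketch; the host name, credentials, and binlog coordinates below are invented for illustration.]

```sql
-- Node B of an A -> B -> C -> A ring has failed; point C at A directly.
-- All values are hypothetical.
STOP SLAVE;
CHANGE MASTER TO
    MASTER_HOST     = 'node-a.example.edu',
    MASTER_USER     = 'repl',
    MASTER_PASSWORD = '...',
    MASTER_LOG_FILE = 'mysql-bin.000042',
    MASTER_LOG_POS  = 107;
START SLAVE;
-- Verify both replication threads are running:
SHOW SLAVE STATUS\G
```

As the quoted reply warns, events still sitting in the lost node's relay log may never reach the downstream node, so a consistency check is prudent after any ring failover.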

W.

-- 
Wes Modes
Systems Designer, Developer, and Administrator
University Library ITS
University of California, Santa Cruz





Re: HA & Scalability w MySQL + SAN + VMWare: Architecture Suggestion Wanted

2012-04-02 Thread Wes Modes
Thanks again for sharing your knowledge.  I do believe the answers I've
been receiving, but since I have requirements that I cannot easily alter, I'm
also gently pushing my expert advisers here to look beyond their own
preferences and direct experience.

RE: Shared storage.  I can easily let go of the preference to take
advantage of shared storage.  I understand duplicated databases are the
essence of database redundancy.  You make good points.

In terms of the acceptability of a small fraction of users being
temporarily unable to access services:  rather than sharding, which
again requires more control over the application than we have, I was
more envisioning that would be the fraction of users who hit the one
peer MySQL server that is temporarily unavailable due to h/w or s/w
failure or DB corruption while its fail over is powered up.

Does MySQL Cluster seem like it will address my requirements to allow us
to horizontally scale a number of MySQL nodes as peers without
separating reads and writes, or slaves and masters?

Wes

On 4/2/2012 2:25 PM, shawn green wrote:
> Hello Wes,
>
> On 4/2/2012 4:05 PM, Wes Modes wrote:
>> Thanks Shawn and Karen, for the suggestions, even given my vague
>> requirements.
>>
>> To clarify some of my requirements.
>>
>> *Application:  *We are using an open-source application called Omeka,
>> which is a "free, flexible, and open source web-publishing platform for
>> the display of library, museum, archives, and scholarly collections and
>> exhibitions."  Without getting into how free (or scalable) free software
>> really is, we can view it as one aspect we cannot change, having been
>> written into the grant requirements we received for the project.
>> Experienced Omeka developers and our own developer have suggested
>> that/it is not feasible to separate database writes from reads in the
>> application/ (given time and resources).
>>
>
> That's a shame. Sounds like you are back to one big server or several
> smaller servers with in-program sharding.
>
>> *SAN: *The SAN is a Dell EqualLogic 6100 which has redundant everything,
>> including multiple NICs, controllers, and power.  So we are less
>> concerned about the SAN being a SPoF.  On the other hand, if we have a
>> single big MySQL server that fails, we could bring up another copy of it
>> via VMWare, but until the server came up, the application would be dead
>> in the water.  If the database is corrupted, service will be interrupted
>> for a considerable time.
>>
>
> Again, each MySQL instance needs its own copy of the data. Having
> only one big powerful disk system means that each instance you fire up
> must both share spindles and networking to access its data. Just like
> a freeway at rush hour, you may find the traffic into and out of this
> one device crawling to a halt exactly when you don't want it to.
>
>> *High Availability:*  It sounds like there is some debate over how to
>> provide HA best, but do people really disagree on the desired results?
>> Without getting into the many meanings of this buzz word, here's what we
>> mean: /We desire to maintain high availability of service, allowing a
>> small fraction of users to experience outage for only seconds at a
>> time.  We desire to provide this through horizontal scaling, redundancy,
>> failover planning, and external monitoring.  /
>>
>
> "Small fraction of users" - this implies data sharding. Multiple MySQL
> instances each with enough data to operate independently for one slice
> of your most important data and an application smart enough to know
> which shard to go to for each slice of data.
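[Editor's note: the application-side routing Shawn describes can be illustrated with a toy sketch; nothing here is from the thread, and the host names and hash choice are arbitrary.]

```python
# Toy sketch of application-side shard routing: each slice of data lives
# on exactly one MySQL instance, and the application derives the instance
# from the record key via a stable hash. Host names are made up.
import hashlib

SHARDS = ["db1.example.edu", "db2.example.edu", "db3.example.edu"]

def shard_for(key: str) -> str:
    """Map a record key to the MySQL host that owns it."""
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]

# The same key always routes to the same shard:
assert shard_for("user:1001") == shard_for("user:1001")
```

Any stable hash works; the important property is that every key maps to exactly one instance, so reads and writes for that key never need to be split across servers.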
>
> "For a few seconds at a time" - you do not want a shared disk. Should
> the active MySQL die, its data will be in an inconsistent state. Once
> you fire up the passive daemon it will need to perform a recovery
> restart. This down time is more than likely not going to take only a
> few seconds. The more data you have, the longer the checks will take. 
> An independent copy maintained by a slave instance, provides a
> logically consistent copy of the master's data as it will only
> replicate complete transactions.
>
> "horizontal scaling" - one master, multiple slaves. This requires the
> separation of writes and reads.
>
>
>> *Scalability:  *Again, seems like there are lots of applications and
>> implementation, but people agree on the general concept.  Here's what we
>> mean for this project:  /We desire to  scale our services so that a
>> usage surge does not cause unavailability of the services for some
>> users.  We prefer to horizontally increase scalability using

MySQL Multi-Master Replication

2012-04-02 Thread Wes Modes
Howdy all.

I am looking for a MySQL solution that allows us to horizontally scale a
number of MySQL nodes as peers without separating reads and writes, or
slaves and masters.  This may not be ideal, but the application we are
using is an unchangeable aspect of the project.

I ran into this post by Giuseppe Maxia
(http://onlamp.com/pub/a/onlamp/2006/04/20/advanced-mysql-replication.html)
that details our concern exactly:

MySQL Cluster... is a complex architecture to achieve high
availability and performance. One of the advantages of MySQL Cluster
is that each node is a peer to the others, whereas in a normal
replicating system you have a master and many slaves, and
applications must be careful to write only to the master...

There are some cases where the MySQL Cluster is the perfect
solution, but for the vast majority, replication is still the best
choice.

Replication, too, has its problems, though:

  * There is a fastidious distinction between master and slaves.
Your applications must be replication-aware, so that they will
write on the master and read from the slaves. It would be so
nice to have a replication array where you could use all the
nodes in the same way, and every node could be at the same time
master and slave.
  * There is the fail-over problem. When the master fails, it's true
that you have the slaves ready to replace it, but the process of
detecting the failure and acting upon it requires the
administrator's intervention.

Fixing these two misfeatures is exactly the purpose of this article.
Using features introduced in MySQL 5.0 and 5.1, it is possible to
build a replication system where all nodes act as master and slave
at the same time, with a built-in fail-over mechanism.

The article goes on to talk about setting up a Multimaster Replication
System.
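[Editor's note: the core of the article's setup is plain MySQL 5.0+ replication configured symmetrically, with staggered auto-increment settings so that co-masters never generate colliding keys. A hypothetical two-node my.cnf fragment; server ids and log names are illustrative.]

```
# node 1
[mysqld]
server-id                = 1
log-bin                  = mysql-bin
log-slave-updates
auto_increment_increment = 2
auto_increment_offset    = 1

# node 2
[mysqld]
server-id                = 2
log-bin                  = mysql-bin
log-slave-updates
auto_increment_increment = 2
auto_increment_offset    = 2
```

With an increment of 2 and offsets of 1 and 2, node 1 generates odd ids and node 2 even ids, so simultaneous inserts on both masters cannot collide.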

At one point, I was lured into using the Multi-Master Replication
Manager for MySQL (MMM), since it was said to be a set of scripts that
made this process easier, but I found that it, like standard replication,
made a distinction between masters and slaves, so I'm back to the
original Giuseppe article.

Anyone have experience setting up MySQL Multi-Master Replication?  Or is
there a better list to ask this question on?

Wes

-- 
Wes Modes
Systems Designer, Developer, and Administrator
University Library ITS
University of California, Santa Cruz



Re: HA & Scalability w MySQL + SAN + VMWare: Architecture Suggestion Wanted

2012-04-02 Thread Wes Modes
Thanks Shawn and Karen, for the suggestions, even given my vague
requirements. 

To clarify some of my requirements. 

*Application:  *We are using an open-source application called Omeka,
which is a "free, flexible, and open source web-publishing platform for
the display of library, museum, archives, and scholarly collections and
exhibitions."  Without getting into how free (or scalable) free software
really is, we can view it as one aspect we cannot change, having been
written into the grant requirements we received for the project. 
Experienced Omeka developers and our own developer have suggested
that/it is not feasible to separate database writes from reads in the
application/ (given time and resources).

*SAN: *The SAN is a Dell EqualLogic 6100 which has redundant everything,
including multiple NICs, controllers, and power.  So we are less
concerned about the SAN being a SPoF.  On the other hand, if we have a
single big MySQL server that fails, we could bring up another copy of it
via VMWare, but until the server came up, the application would be dead
in the water.  If the database is corrupted, service will be interrupted
for a considerable time.

*High Availability:*  It sounds like there is some debate over how to
provide HA best, but do people really disagree on the desired results? 
Without getting into the many meanings of this buzz word, here's what we
mean: /We desire to maintain high availability of service, allowing a
small fraction of users to experience outage for only seconds at a
time.  We desire to provide this through horizontal scaling, redundancy,
failover planning, and external monitoring.  /

*Scalability:  *Again, seems like there are lots of applications and
implementation, but people agree on the general concept.  Here's what we
mean for this project:  /We desire to  scale our services so that a
usage surge does not cause unavailability of the services for some
users.  We prefer to horizontally increase scalability using
load-balancing strategies to treat clusters of servers as single logical
units./ 

The application may have not been designed with great scalability in
mind, but if multiple application instances are accessing multiple
database servers treated as one logical unit, that may not be too
relevant. 

I am responsible for creating an architecture upon which this project
will run.  I am not responsible for redesigning the application.   So
far, no one has suggested anything that approached meeting our
requirements, even our vague ones.  Perhaps I am asking the wrong list? 

Does anyone have any experience with MySQL Multi-Master Replication? 
Perhaps that should be a separate post.

Wes

On 3/30/2012 3:56 PM, shawn green wrote:
> Hello Wes,
>
> On 3/29/2012 9:23 PM, Wes Modes wrote:
>> First, thank you in advance for good solid suggestions you can offer. I
>> suppose someone has already asked this, but perhaps you will view it as
>> a fun challenge to meet my many criteria with your suggested MySQL
>> architecture.
>>
>> I am working at a University on a high-profile database driven project
>> that we expect to be slammed within the first few months. Since this is
>> a new project and one that we expect to be popular, we don't know what
>> kind of usage to expect, but we want to be prepared. Therefore, we are
>> building in extra capacity.
>>
>> Our top goals are scalability and high availability, provided, we hope,
>> through multiple MySQL nodes and VMWare functionality. I've been
>> surprised that there are not more MySQL architects trying to meet these
>> high-level goals using virtualization and shared storage (or at least
>> they do not seem to be writing about it).
>>
>> I've looked at replication, multi-mastering, DRBD, clustering,
>> partitioning, and sharding.
>>
>> Here's what we got, and some of our constraints:
>>
>> * We are concerned that One Big Database instance won't be enough to
>> handle all of the queries, plus it is a single point of failure.
>> Therefore, multiple nodes are desirable.
>>
>> * With the primary application that will be using the database, writes
>> and reads cannot be split off from each other. This limitation alone
>> rules out replication, MMM, and a few other solutions.
>>
>> * We do not expect to be especially write-heavy.
>>
>> * We have shared storage in the form of an iSCSI SAN. We'd like to
>> leverage the shared storage, if possible.
>>
>> * We have VMWare HA which already monitors hosts and brings them up
>> within minutes elsewhere if we lose a host. So some of the suggested HA
>> solutions are redundant.
>>
>> * We expect to have another instance of our system running in the Amazon
>> cloud for the first few months while the traffic is high, so we may take
>> advantage of RDS, though an exact duplicate of our local system will
>> save us development work.

Re: Percona: Contact Details - a word on poaching

2012-04-02 Thread Wes Modes
Hi, I received a suggestion from Baron Schwartz that I consider your
company for consulting advice as a solution to an enquiry I made to the
MySQL list.  I did not respond to Baron Schwartz and now I receive this
email from an account executive.  I do not think I am alone in believing
that poaching the list is unethical.

Please feel free to have your experts offer suggestions and advice, but
having those experts directly passing contacts on to sales execs does
not feel okay.  Using the MySQL lists as a honey pot for sales
opportunities is not why the list exists.

W.

On 4/2/2012 4:59 AM, Ryan Walsh wrote:
> Hello Wes,
>
> I hope this finds you well.
>
> I am following up on recent contact you had with Baron Schwartz. I am
> wondering if you are available at 11:00 am PDT tomorrow to speak? I
> wanted to establish if there may be areas that Percona can help with
> the release of your new application.
>
> If this time does not work, let me know when may be better.
>
> Regards,
>
> Ryan Walsh
> Corporate Account Executive, Percona Inc.
> +1-888-401-3401 ext: 512
> +1-647-345-1095
> ryan.wa...@percona.com
> http://www.percona.com
>
>
> Percona Live MySQL Conference April 10-12 Santa Clara
> http://www.percona.com/live/mysql-conference-2012/
>

-- 
Wes Modes
Systems Designer, Developer, and Administrator
University Library ITS
University of California, Santa Cruz



HA & Scalability w MySQL + SAN + VMWare: Architecture Suggestion Wanted

2012-03-29 Thread Wes Modes
First, thank you in advance for good solid suggestions you can offer. I
suppose someone has already asked this, but perhaps you will view it as
a fun challenge to meet my many criteria with your suggested MySQL
architecture.

I am working at a University on a high-profile database driven project
that we expect to be slammed within the first few months. Since this is
a new project and one that we expect to be popular, we don't know what
kind of usage to expect, but we want to be prepared. Therefore, we are
building in extra capacity.

Our top goals are scalability and high availability, provided, we hope,
through multiple MySQL nodes and VMWare functionality. I've been
surprised that there are not more MySQL architects trying to meet these
high-level goals using virtualization and shared storage (or at least
they do not seem to be writing about it).

I've looked at replication, multi-mastering, DRBD, clustering,
partitioning, and sharding.

Here's what we got, and some of our constraints:

* We are concerned that One Big Database instance won't be enough to
handle all of the queries, plus it is a single point of failure.
Therefore, multiple nodes are desirable.

* With the primary application that will be using the database, writes
and reads cannot be split off from each other. This limitation alone
rules out replication, MMM, and a few other solutions.

* We do not expect to be especially write-heavy.

* We have shared storage in the form of an iSCSI SAN. We'd like to
leverage the shared storage, if possible.

* We have VMWare HA which already monitors hosts and brings them up
within minutes elsewhere if we lose a host. So some of the suggested HA
solutions are redundant.

* We expect to have another instance of our system running in the Amazon
cloud for the first few months while the traffic is high, so we may take
advantage of RDS, though an exact duplicate of our local system will
save us development work.

Thanks for any advice you can give.

Wes Modes

-- 
Wes Modes
Systems Designer, Developer, and Administrator
University Library ITS
University of California, Santa Cruz

