RE: Looking for consultant

2012-07-19 Thread Stillman, Benjamin
MySQL Cluster 7.2 introduces geographic clustering:
https://blogs.oracle.com/MySQL/entry/synchronously_replicating_databases_across_data
http://dev.mysql.com/tech-resources/articles/mysql-cluster-7.2.html (section 
titled: Enhancing Cross Data Center Scalability: Multi-Site Clustering)

Data nodes can be located at multiple data centers. They've had geographic 
replication for a while, but this makes it even easier. Obviously performance 
depends on your network setup. I believe they suggest latency under 20ms and 
bandwidth between the datacenters of 1Gbit or faster. Redundant management and 
SQL nodes can be split across the datacenters also.



-Original Message-
From: Howard Hart [mailto:h...@ooma.com]
Sent: Wednesday, July 18, 2012 8:26 PM
To: mysql@lists.mysql.com
Subject: Re: Looking for consultant

You could write to an InnoDB frontend with master/master replication at each 
site, and slave your local cluster off the local InnoDB server at each site.

That would make your writes limited by your InnoDB server performance and remote 
replication speed, but reads would run at cluster speeds and be a bit more 
bulletproof.

That could also potentially work around the foreign key constraint limitation in 
cluster since, last I checked, it doesn't support them--may have changed 
recently--I don't know. The foreign key constraint checks in this case would be 
handled by the InnoDB frontend before the data is pushed to the cluster.

It also looks like the latest MySQL Cluster release supports asynchronous binlog 
style replication per the link below, so I guess that's a possibility too now.

http://dev.mysql.com/doc/refman/5.1/en/mysql-cluster-replication.html


On 07/18/2012 04:45 PM, Rick James wrote:
 Keep in mind that all cluster solutions are vulnerable to a single power 
 failure, earthquake, flood, tornado, etc.

 To protect against such an event, you need a hot backup located remotely from 
 the live setup.  This introduces latency that will kill performance -- all 
 cluster solutions depend on syncing, heartbeats, etc., which cannot afford long 
 latencies.

 You may choose to ignore that issue.  But, before going forward, you need to 
 make that decision.

 -Original Message-
 From: Antonis Kopsaftis [mailto:ak...@edu.teiath.gr]
 Sent: Wednesday, July 18, 2012 9:09 AM
 To: Carl Kabbe
 Cc: mysql@lists.mysql.com
 Subject: Re: Looking for consultant

 Hello,

 As far as I can understand from your post, you need a high-availability
 MySQL cluster with large capacity.
 To have high availability you need something that can give you
 multi-master replication between two or more MySQL servers.

 To my knowledge there are three solutions that can give you
 multi-master replication:

 1. Official MySQL Cluster
 It's an enterprise-class solution, very complicated, but it is fully
 multi-master. I was using one for about two years, but I don't
 recommend it because (at least in my setup) it did not have very good
 performance.
 It uses its own storage engine (NDB), which has a number of
 limitations.

 2. Tungsten Replicator
 It's a relatively new product. It supports multi-master replication
 between different types of databases, and it seems very promising.
 It's Java based. I haven't tested it, but you can read a lot about it on:
 http://datacharmer.blogspot.com

 3. Percona XtraDB Cluster
 It's also a relatively new product. It also supports multi-master
 replication, and it seems to have very good performance. Over the last 3
 weeks I have installed a 3-node cluster of the Percona software and I'm
 testing it. It seems to work OK, and after some optimization it has
 better performance than my production MySQL setup (simple primary-slave
 replication) on the same hardware (virtual machines). If I don't find any
 serious problem by September, I will use it for production.


 Now, for your application to communicate with the two MySQL master
 nodes there are several solutions:
 1. Design your app to use both MySQL servers. With this solution you
 can even split writes to one server and reads to the other. It's
 up to you to do whatever you want.

 2. Set up a simple heartbeat solution and a floating virtual IP
 between your MySQL servers. If one of the MySQL servers (I mean the whole
 OS) crashes, the floating IP will be attached to the second server.

 3. On each app server, install TCP load balancer software like
 HAProxy and balance the MySQL TCP connections between your app
 servers and the MySQL servers.

 Regards,
 akops


 On 18/7/2012 6:11 μμ, Carl Kabbe wrote:
 We are actually facing both capacity and availability issues at the
 same time.
 Our current primary server is a Dell T410 (single processor, 32 GB
 memory) with a Dell T310 (single processor, 16GB memory) as backup.
 Normally, the backup server is running as a slave to the primary
 server and we manually switch it over when the primary server fails
 (which it did last Saturday morning at 2:00AM.)  The switch over
 process takes
 10-15 minutes although I am reducing that to about five minutes with
 some

Re: Looking for consultant

2012-07-18 Thread Johnny Withers
Would you consider a service like www.xeround.com?

Sent from my iPad

On Jul 17, 2012, at 7:23 PM, Carl Kabbe c...@etrak-plus.com wrote:

 On Monday, I asked if there were consultants out there who could help set up 
 an NDB high availability system.  As I compared our needs to NDB, it became 
 obvious that NDB was not the answer and more obvious that simply adding high 
 availability processes to our existing Innodb system was.

 So, I am back asking if there are consultants lurking on this list that could 
 help with this project.

 Thanks,

 Carl

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Looking for consultant

2012-07-18 Thread Shawn Green

On 7/17/2012 8:22 PM, Carl Kabbe wrote:

On Monday, I asked if there were consultants out there who could help set up an 
NDB high availability system.  As I compared our needs to NDB, it became 
obvious that NDB was not the answer and more obvious that simply adding high 
availability processes to our existing Innodb system was.

So, I am back asking if there are consultants lurking on this list that could 
help with this project.



As has been discussed on this list many times before, there are many 
ways to measure 'high availability'. Most of them deal with what kind of 
disaster you want to survive or return to service from.  If all you are 
looking for is additional production capacity then the terms you may 
want to investigate are 'scale out', 'partitioning', and 'replication'. 
All high-availability solutions require at least some level of hardware 
redundancy. Sometimes they require multiple layers in multiple locations.


Several of those features of MySQL also help with meeting some 
high-availability goals.


Are you willing to discuss your specific desired availability thresholds 
in public?


--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Looking for consultant

2012-07-18 Thread Carl Kabbe
We are actually facing both capacity and availability issues at the same time.

Our current primary server is a Dell T410 (single processor, 32 GB memory) with 
a Dell T310 (single processor, 16GB memory) as backup.  Normally, the backup 
server is running as a slave to the primary server and we manually switch it 
over when the primary server fails (which it did last Saturday morning at 
2:00AM.)  The switch over process takes 10-15 minutes although I am reducing 
that to about five minutes with some scripting (the changeover is a little more 
complex than you might think because we have a middle piece, also MySQL, that 
we use to determine where the real data is.)  Until six months ago, the time 
delay was not a problem because the customer processes could tolerate such a 
delay.  However, we now have a couple of water parks using our system at their 
gate, in their gift shops and in their concessions so we need to now move the 
changeover time to a short enough period that they really don't notice.  Hence, 
the need I have described as 'high availability'.
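
(For illustration only: the core of a scripted changeover like that usually 
boils down to a check like the sketch below before repointing the application.  
Host names, credentials and the promotion steps are placeholders, not our 
actual script, which also has to update the middle MySQL piece mentioned above.)

// Simplified sketch: wait until the backup slave has applied everything it has
// received from the primary, then open it for writes.  All names are placeholders.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class PromoteBackupWhenCaughtUp {
    public static void main(String[] args) throws Exception {
        Connection c = DriverManager.getConnection(
                "jdbc:mysql://backup-db.example.com:3306/", "admin", "secret");
        Statement s = c.createStatement();
        while (true) {
            ResultSet rs = s.executeQuery("SHOW SLAVE STATUS");
            rs.next();
            // The relay log is drained when the SQL thread has executed up to the
            // position the IO thread last read from the master.
            boolean drained =
                    rs.getString("Master_Log_File").equals(rs.getString("Relay_Master_Log_File"))
                    && rs.getLong("Read_Master_Log_Pos") == rs.getLong("Exec_Master_Log_Pos");
            rs.close();
            if (drained) {
                break;
            }
            Thread.sleep(1000);
        }
        s.execute("STOP SLAVE");
        s.execute("SET GLOBAL read_only = 0");  // allow application writes
        s.close();
        c.close();
    }
}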

The T410 is normally reasonably capable of processing our transactions, i.e., 
the customers are comfortable with the latency.  However, we have been on the 
T310 since last Saturday and it is awful, basically barely able to keep up and 
producing unacceptable latency.  Further, our load will double in the next six 
months and double again in the following six months.

So, my thought was that since we have to deal with the changeover-time issue, 
which will cause us to restructure the servers, we should also deal with the 
capacity issue.  I think a couple of Dell T620's will provide the capacity 
we need (the servers we have spec'ed should be around 8X faster than the T410) 
but I have no experience evaluating or setting up HA systems (I have worked 
with MySQL for 12 years and am reasonably comfortable with it and I have read 
everything I can find about HA options and their implementations.)  Hence, my 
post asking for help (which we are willing to pay for.)

The web app is primarily JSP's for the administration side and Flash for the 
operators and other people doing transactions.  The server side code is about 
1.25 million lines of code and there are about 750 JSP's.  The data is 950 
tables with heavy use of foreign key constraints.  The container is Tomcat 
which runs on separate servers (the data servers only run MySQL.)

Any ideas or help in any way are always welcome.

Thanks,

Carl



On Jul 18, 2012, at 9:42 AM, Shawn Green wrote:

 On 7/17/2012 8:22 PM, Carl Kabbe wrote:
 On Monday, I asked if there were consultants out there who could help set up 
 an NDB high availability system.  As I compared our needs to NDB, it became 
 obvious that NDB was not the answer and more obvious that simply adding high 
 availability processes to our existing Innodb system was.
 
 So, I am back asking if there are consultants lurking on this list that 
 could help with this project.
 
 
 As has been discussed on this list many times before, there are many ways to 
 measure 'high availability'. Most of them deal with what kind of disaster you 
 want to survive or return to service from.  If all you are looking for is 
 additional production capacity then the terms you may want to investigate are 
 'scale out', 'partitioning', and 'replication'. All high-availability 
 solutions require at least some level of hardware redundancy. Sometimes they 
 require multiple layers in multiple locations.
 
 Several of those features of MySQL also help with meeting some 
 high-availability goals.
 
 Are you willing to discuss your specific desired availability thresholds in 
 public?
 
 -- 
 Shawn Green
 MySQL Principal Technical Support Engineer
 Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
 Office: Blountville, TN
 
 
 


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Looking for consultant

2012-07-18 Thread Adrian Fita
On 18/07/12 18:11, Carl Kabbe wrote:
 We are actually facing both capacity and availability issues at the
 same time.
 
 Our current primary server is a Dell T410 (single processor, 32 GB
 memory) with a Dell T310 (single processor, 16GB memory) as backup.
 Normally, the backup server is running as a slave to the primary
 server and we manually switch it over when the primary server fails
 (which it did last Saturday morning at 2:00AM.)  The switch over
 process takes 10-15 minutes although I am reducing that to about five
 minutes with some scripting (the changeover is a little more complex
 than you might think because we have a middle piece, also MySQL, that
 we use to determine where the real data is.)  Until six months ago,
 the time delay was not a problem because the customer processes could
 tolerate such a delay.  However, we now have a couple of water parks
 using our system at their gate, in their gift shops and in their
 concessions so we need to now move the changeover time to a short
 enough period that they really don't notice.  Hence, the need I have
 described as 'high availability'.

Hello. May I direct you to these guys: http://www.hastexo.com/ ? They do
High Availability consulting and implementation. They seem to know their
stuff and I'm certain they could help you.

 The T410 is normally reasonably capable of processing our
 transactions, i.e., the customers are comfortable with the latency.
 However, we have been on the T310 since last Saturday and it is
 awful, basically barely able to keep up and producing unacceptable
 latency.  Further, our load will double in the next six months and
 double again in the following six months.
 
 So, my thought was that since we have to deal with the changeover-time
 issue, which will cause us to restructure the servers, we should also
 deal with the capacity issue.  I think a couple of Dell
 T620's will provide the capacity we need (the servers we have spec'ed
 should be around 8X faster than the T410) but I have no experience
 evaluating or setting up HA systems (I have worked with MySQL for 12
 years and am reasonably comfortable with it and I have read
 everything I can find about HA options and their implementations.)
 Hence, my post asking for help (which we are willing to pay for.)
 
 The web app is primarily JSP's for the administration side and Flash
 for the operators and other people doing transactions.  The server
 side code is about 1.25 million lines of code and there are about 750
 JSP's.  The data is 950 tables with heavy use of foreign key
 constraints.  The container is Tomcat which runs on separate servers
 (the data servers only run MySQL.)
 
 Any ideas or help in any way are always welcome.
 
 Thanks,
 
 Carl
 
 
 
 On Jul 18, 2012, at 9:42 AM, Shawn Green wrote:
 
 On 7/17/2012 8:22 PM, Carl Kabbe wrote:
 On Monday, I asked if there were consultants out there who could
 help set up an NDB high availability system.  As I compared our
 needs to NDB, it became obvious that NDB was not the answer and
 more obvious that simply adding high availability processes to
 our existing Innodb system was.
 
 So, I am back asking if there are consultants lurking on this
 list that could help with this project.
 
 
 As has been discussed on this list many times before, there are
 many ways to measure 'high availability'. Most of them deal with
 what kind of disaster you want to survive or return to service
 from.  If all you are looking for is additional production capacity
 then the terms you may want to investigate are 'scale out',
 'partitioning', and 'replication'. All high-availability solutions
 require at least some level of hardware redundancy. Sometimes they
 require multiple layers in multiple locations.
 
 Several of those features of MySQL also help with meeting some
 high-availability goals.
 
 Are you willing to discuss your specific desired availability
 thresholds in public?

-- 
Adrian Fita

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Looking for consultant

2012-07-18 Thread Shawn Green

Hello Carl,

On 7/18/2012 11:11 AM, Carl Kabbe wrote:

We are actually facing both capacity and availability issues at the same time.
...


It sounds to me like you need a combination of sharding (one master per 
client or set of clients) and multiple slaves (one for backups only). If 
you share read queries between master and slave already, then you can 
continue with this. By using the one slave for backups only, it only needs 
to process the replication stream, so it will stay the most up to date. 
This would be the machine you switch to in the event of a failover. All 
machines (masters and slaves) need to have the same capacity.
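
As a rough illustration (all host names, client IDs and the mapping below are 
invented placeholders; in practice the mapping would live in a small lookup 
table or config file, not be hard-coded), the routing layer for this kind of 
per-client sharding can be as simple as:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.HashMap;
import java.util.Map;

public class ShardRouter {

    // Which master holds which client's data (placeholder values).
    private static final Map<String, String> CLIENT_TO_SHARD = new HashMap<String, String>();
    static {
        CLIENT_TO_SHARD.put("client-a", "jdbc:mysql://shard1.example.com:3306/appdb");
        CLIENT_TO_SHARD.put("client-b", "jdbc:mysql://shard1.example.com:3306/appdb");
        CLIENT_TO_SHARD.put("client-c", "jdbc:mysql://shard2.example.com:3306/appdb");
    }

    public static Connection connectionFor(String clientId, String user, String password)
            throws SQLException {
        String url = CLIENT_TO_SHARD.get(clientId);
        if (url == null) {
            throw new SQLException("No shard configured for client " + clientId);
        }
        return DriverManager.getConnection(url, user, password);
    }
}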


Separating your clients to multiple machines will help with uptime and 
throughput. If you lose one, only some of your clients lose their 
connection while you fail over. Also, because each master does not need 
to handle ALL of your clients at one time (just some of them), you can 
use much cheaper hardware to handle the load. The other advantage is 
disk usage. By sharing your traffic over multiple disks (not just one 
big RAID array or SAN or NAS for ALL of your clients at once) you are 
actually providing more capacity for transactions than you would with a 
single large array.


Yes, this may make maintenance a little more interesting but this way 
you won't need to invest in such huge servers and you gain the 
redundancy you need to meet the HA goals you stated. Backups will be 
more numerous but they will be smaller (and possibly client specific). 
Backups can also happen in parallel (from multiple sources) which will 
make your maintenance windows smaller. Heavy traffic from one client 
will not drag down the performance of another (with the exception of 
clogging your network pipes). It's a win-win.


Go simple, not bigger. Divide and conquer is what I believe is your best 
approach.


--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Looking for consultant

2012-07-18 Thread Antonis Kopsaftis

Hello,

As far as I can understand from your post, you need a high-availability 
MySQL cluster with large capacity.
To have high availability you need something that can give you 
multi-master replication between two or more MySQL servers.


To my knowledge there are three solutions that can give you multi-master 
replication:


1. Official MySQL Cluster
It's an enterprise-class solution, very complicated, but it is fully 
multi-master. I was using one for about two years, but I don't recommend 
it because (at least in my setup) it did not have very good performance. 
It uses its own storage engine (NDB), which has a number of limitations.


2. Tungsten Replicator
It's a relatively new product. It supports multi-master replication between 
different types of databases, and it seems very promising. It's Java 
based. I haven't tested it, but you can read a lot about it on: 
http://datacharmer.blogspot.com


3. Percona XtraDB Cluster
It's also a relatively new product. It also supports multi-master 
replication, and it seems to have very good performance. Over the last 3 
weeks I have installed a 3-node cluster of the Percona software and I'm 
testing it. It seems to work OK, and after some optimization it has 
better performance than my production MySQL setup (simple primary-slave 
replication) on the same hardware (virtual machines). If I don't find any 
serious problem by September, I will use it for production.



Now, for your application to communicate with the two MySQL master nodes 
there are several solutions:
1. Design your app to use both MySQL servers. With this solution you can 
even split writes to one server and reads to the other. It's up to you to 
do whatever you want. (A rough sketch of this option follows below.)


2. Set up a simple heartbeat solution and a floating virtual IP between 
your MySQL servers. If one of the MySQL servers (I mean the whole OS) 
crashes, the floating IP will be attached to the second server.


3. On each app server, install TCP load balancer software like HAProxy 
and balance the MySQL TCP connections between your app servers and the 
MySQL servers.
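
For option 1, the application-side piece can be very small: keep two connection 
sources and decide per statement which one to use. A minimal sketch in Java 
(host names, database name and credentials are made-up placeholders; a real 
application would use a connection pool instead of DriverManager):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class ReadWriteRouter {

    // Placeholder URLs: writes go to one master, reads to the other.
    private static final String WRITE_URL = "jdbc:mysql://master-a.example.com:3306/appdb";
    private static final String READ_URL  = "jdbc:mysql://master-b.example.com:3306/appdb";
    private static final String USER = "app";
    private static final String PASSWORD = "secret";

    /** Use for INSERT/UPDATE/DELETE statements. */
    public static Connection writeConnection() throws SQLException {
        return DriverManager.getConnection(WRITE_URL, USER, PASSWORD);
    }

    /** Use for SELECT statements. */
    public static Connection readConnection() throws SQLException {
        return DriverManager.getConnection(READ_URL, USER, PASSWORD);
    }
}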


Regards,
akops


On 18/7/2012 6:11 μμ, Carl Kabbe wrote:

We are actually facing both capacity and availability issues at the same time.

Our current primary server is a Dell T410 (single processor, 32 GB memory) with 
a Dell T310 (single processor, 16GB memory) as backup.  Normally, the backup 
server is running as a slave to the primary server and we manually switch it 
over when the primary server fails (which it did last Saturday morning at 
2:00AM.)  The switch over process takes 10-15 minutes although I am reducing 
that to about five minutes with some scripting (the changeover is a little more 
complex than you might think because we have a middle piece, also MySQL, that 
we use to determine where the real data is.)  Until six months ago, the time 
delay was not a problem because the customer processes could tolerate such a 
delay.  However, we now have a couple of water parks using our system at their 
gate, in their gift shops and in their concessions so we need to now move the 
changeover time to a short enough period that they really don't notice.  Hence, 
the need I have described as 'high availability'.

The T410 is normally reasonably capable of processing our transactions, i.e., 
the customers are comfortable with the latency.  However, we have been on the 
T310 since last Saturday and it is awful, basically barely able to keep up and 
producing unacceptable latency.  Further, our load will double in the next six 
months and double again in the following six months.

So, my thought was that since we have to deal with the changeover-time issue, 
which will cause us to restructure the servers, we should also deal with the 
capacity issue.  I think a couple of Dell T620's will provide the capacity 
we need (the servers we have spec'ed should be around 8X faster than the T410) 
but I have no experience evaluating or setting up HA systems (I have worked 
with MySQL for 12 years and am reasonably comfortable with it and I have read 
everything I can find about HA options and their implementations.)  Hence, my 
post asking for help (which we are willing to pay for.)

The web app is primarily JSP's for the administration side and Flash for the 
operators and other people doing transactions.  The server side code is about 
1.25 million lines of code and there are about 750 JSP's.  The data is 950 
tables with heavy use of foreign key constraints.  The container is Tomcat 
which runs on separate servers (the data servers only run MySQL.)

Any ideas or help in any way are always welcome.

Thanks,

Carl



On Jul 18, 2012, at 9:42 AM, Shawn Green wrote:


On 7/17/2012 8:22 PM, Carl Kabbe wrote:

On Monday, I asked if there were consultants out there who could help set up an 
NDB high availability system.  As I compared our needs to NDB, it became 
obvious that NDB was not the answer and more obvious that simply adding high 
availability processes to our existing Innodb system 

RE: Looking for consultant

2012-07-18 Thread Rick James
Keep in mind that all cluster solutions are vulnerable to a single power 
failure, earthquake, flood, tornado, etc.

To protect against such an event, you need a hot backup located remotely from the 
live setup.  This introduces latency that will kill performance -- all cluster 
solutions depend on syncing, heartbeats, etc., which cannot afford long latencies.

You may choose to ignore that issue.  But, before going forward, you need to 
make that decision.

 -Original Message-
 From: Antonis Kopsaftis [mailto:ak...@edu.teiath.gr]
 Sent: Wednesday, July 18, 2012 9:09 AM
 To: Carl Kabbe
 Cc: mysql@lists.mysql.com
 Subject: Re: Looking for consultant
 
 Hello,
 
 As far as I can understand from your post, you need a high-availability
 MySQL cluster with large capacity.
 To have high availability you need something that can give you
 multi-master replication between two or more MySQL servers.

 To my knowledge there are three solutions that can give you
 multi-master replication:

 1. Official MySQL Cluster
 It's an enterprise-class solution, very complicated, but it is fully
 multi-master. I was using one for about two years, but I don't recommend
 it because (at least in my setup) it did not have very good
 performance.
 It uses its own storage engine (NDB), which has a number of
 limitations.

 2. Tungsten Replicator
 It's a relatively new product. It supports multi-master replication
 between different types of databases, and it seems very promising. It's
 Java based. I haven't tested it, but you can read a lot about it on:
 http://datacharmer.blogspot.com

 3. Percona XtraDB Cluster
 It's also a relatively new product. It also supports multi-master
 replication, and it seems to have very good performance. Over the last 3
 weeks I have installed a 3-node cluster of the Percona software and I'm
 testing it. It seems to work OK, and after some optimization it has
 better performance than my production MySQL setup (simple primary-slave
 replication) on the same hardware (virtual machines). If I don't find any
 serious problem by September, I will use it for production.


 Now, for your application to communicate with the two MySQL master nodes
 there are several solutions:
 1. Design your app to use both MySQL servers. With this solution you
 can even split writes to one server and reads to the other. It's
 up to you to do whatever you want.

 2. Set up a simple heartbeat solution and a floating virtual IP
 between your MySQL servers. If one of the MySQL servers (I mean the whole
 OS) crashes, the floating IP will be attached to the second server.

 3. On each app server, install TCP load balancer software like
 HAProxy and balance the MySQL TCP connections between your app
 servers and the MySQL servers.
 
 Regards,
 akops
 
 
 On 18/7/2012 6:11 μμ, Carl Kabbe wrote:
  We are actually facing both capacity and availability issues at the
 same time.
 
  Our current primary server is a Dell T410 (single processor, 32 GB
 memory) with a Dell T310 (single processor, 16GB memory) as backup.
 Normally, the backup server is running as a slave to the primary server
 and we manually switch it over when the primary server fails (which it
 did last Saturday morning at 2:00AM.)  The switch over process takes
 10-15 minutes although I am reducing that to about five minutes with
 some scripting (the changeover is a little more complex than you might
 think because we have a middle piece, also MySQL, that we use to
 determine where the real data is.)  Until six months ago, the time
 delay was not a problem because the customer processes could tolerate
 such a delay.  However, we now have a couple of water parks using our
 system at their gate, in their gift shops and in their concessions so
 we need to now move the changeover time to a short enough period that
 they really don't notice.  Hence, the need I have described as 'high
 availability'.
 
  The T410 is normally reasonably capable of processing our
 transactions, i.e., the customers are comfortable with the latency.
 However, we have been on the T310 since last Saturday and it is awful,
 basically barely able to keep up and producing unacceptable latency.
 Further, our load will double in the next six months and double again
 in the following six months.
 
 So, my thought was that since we have to deal with the changeover-time
 issue, which will cause us to restructure the servers, we should also
 deal with the capacity issue.  I think a couple of Dell
  T620's will provide the capacity we need (the servers we have spec'ed
  should be around 8X faster than the T410) but I have no experience
  evaluating or setting up HA systems (I have worked with MySQL for 12
  years and am reasonably comfortable with it and I have read
 everything
  I can find about HA options and their implementations.)  Hence, my
  post asking for help (which we are willing to pay for.)
 
  The web app is primarily JSP's for the administration side and Flash
  for the operators and other people doing

Re: Looking for consultant

2012-07-18 Thread Howard Hart
You could write to an InnoDB frontend with master/master replication at 
each site, and slave your local cluster off the local InnoDB server 
at each site.


That would make your writes limited by your InnoDB server performance and 
remote replication speed, but reads would run at cluster speeds and be a 
bit more bulletproof.


That could also potentially work around the foreign key constraint limitation 
in cluster since, last I checked, it doesn't support them--may have changed 
recently--I don't know. The foreign key constraint checks in this case would 
be handled by the InnoDB frontend before the data is pushed to the cluster.


It also looks like the latest MySQL Cluster release supports asynchronous 
binlog-style replication per the link below, so I guess that's a possibility 
too now.


http://dev.mysql.com/doc/refman/5.1/en/mysql-cluster-replication.html
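
To make the second leg concrete: pointing a cluster SQL node at the local 
InnoDB frontend is essentially ordinary MySQL replication setup.  A rough 
sketch (host names, credentials and binlog coordinates below are placeholders, 
and the same statements can of course be run from the mysql client instead of 
JDBC):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class SlaveClusterOffInnodbFrontend {
    public static void main(String[] args) throws Exception {
        // Connect to one of the cluster's SQL nodes (placeholder host/credentials).
        Connection c = DriverManager.getConnection(
                "jdbc:mysql://cluster-sqlnode1.example.com:3306/", "root", "secret");
        Statement s = c.createStatement();
        s.execute("STOP SLAVE");
        // Replicate from the local InnoDB master; the coordinates are placeholders
        // read from SHOW MASTER STATUS on the frontend.
        s.execute("CHANGE MASTER TO"
                + " MASTER_HOST='innodb-frontend.example.com',"
                + " MASTER_USER='repl', MASTER_PASSWORD='repl-password',"
                + " MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=4");
        s.execute("START SLAVE");
        s.close();
        c.close();
    }
}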


On 07/18/2012 04:45 PM, Rick James wrote:

Keep in mind that all cluster solutions are vulnerable to a single power 
failure, earthquake, flood, tornado, etc.

To protect against such an event, you need a hot backup located remotely from the 
live setup.  This introduces latency that will kill performance -- all cluster 
solutions depend on syncing, heartbeats, etc., which cannot afford long latencies.

You may choose to ignore that issue.  But, before going forward, you need to 
make that decision.


-Original Message-
From: Antonis Kopsaftis [mailto:ak...@edu.teiath.gr]
Sent: Wednesday, July 18, 2012 9:09 AM
To: Carl Kabbe
Cc: mysql@lists.mysql.com
Subject: Re: Looking for consultant

Hello,

As far as I can understand from your post, you need a high-availability
MySQL cluster with large capacity.
To have high availability you need something that can give you
multi-master replication between two or more MySQL servers.

To my knowledge there are three solutions that can give you
multi-master replication:

1. Official MySQL Cluster
It's an enterprise-class solution, very complicated, but it is fully
multi-master. I was using one for about two years, but I don't recommend
it because (at least in my setup) it did not have very good
performance.
It uses its own storage engine (NDB), which has a number of
limitations.

2. Tungsten Replicator
It's a relatively new product. It supports multi-master replication
between different types of databases, and it seems very promising. It's
Java based. I haven't tested it, but you can read a lot about it on:
http://datacharmer.blogspot.com

3. Percona XtraDB Cluster
It's also a relatively new product. It also supports multi-master
replication, and it seems to have very good performance. Over the last 3
weeks I have installed a 3-node cluster of the Percona software and I'm
testing it. It seems to work OK, and after some optimization it has
better performance than my production MySQL setup (simple primary-slave
replication) on the same hardware (virtual machines). If I don't find any
serious problem by September, I will use it for production.


Now, for your application to communicate with the two MySQL master nodes
there are several solutions:
1. Design your app to use both MySQL servers. With this solution you
can even split writes to one server and reads to the other. It's
up to you to do whatever you want.

2. Set up a simple heartbeat solution and a floating virtual IP
between your MySQL servers. If one of the MySQL servers (I mean the whole
OS) crashes, the floating IP will be attached to the second server.

3. On each app server, install TCP load balancer software like
HAProxy and balance the MySQL TCP connections between your app
servers and the MySQL servers.

Regards,
akops


On 18/7/2012 6:11 μμ, Carl Kabbe wrote:

We are actually facing both capacity and availability issues at the
same time.

Our current primary server is a Dell T410 (single processor, 32 GB
memory) with a Dell T310 (single processor, 16GB memory) as backup.
Normally, the backup server is running as a slave to the primary server
and we manually switch it over when the primary server fails (which it
did last Saturday morning at 2:00AM.)  The switch over process takes
10-15 minutes although I am reducing that to about five minutes with
some scripting (the changeover is a little more complex than you might
think because we have a middle piece, also MySQL, that we use to
determine where the real data is.)  Until six months ago, the time
delay was not a problem because the customer processes could tolerate
such a delay.  However, we now have a couple of water parks using our
system at their gate, in their gift shops and in their concessions so
we need to now move the changeover time to a short enough period that
they really don't notice.  Hence, the need I have described as 'high
availability'.

The T410 is normally reasonably capable of processing our
transactions, i.e., the customers are comfortable with the latency.
However, we have been on the T310 since last Saturday and it is awful,
basically barely able to keep up and producing unacceptable latency.
Further, our load will double

Looking for consultant

2012-07-17 Thread Carl Kabbe
On Monday, I asked if there were consultants out there who could help set up an 
NDB high availability system.  As I compared our needs to NDB, it became 
obvious that NDB was not the answer and more obvious that simply adding high 
availability processes to our existing Innodb system was.  

So, I am back asking if there are consultants lurking on this list that could 
help with this project.

Thanks,

Carl
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Looking for consultant

2012-07-16 Thread Carl Kabbe
We are looking at installing an NDB cluster and are looking for someone to 
assist us in setting it up.

Thanks,

Carl
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



OFF TOPIC - Looking for Consultant

2011-11-13 Thread Carl
We are looking for a consultant to set up Master - Master replication between 
two sites (both in US.)  Both sites run MySQL version 5.5 (Innodb) in Slackware 
Linux.  Local traffic at each site is on the low side of moderate and is from a 
Java based web application.  There is a VPN between the sites and the remote 
site is currently running as a slave.

We are looking for a consultant to do this as our staff simply does not have 
the time.

Reply directly to me c...@etrak-plus.com.

Thanks,

Carl