If you need high select performance but only moderate update/insert/delete rates,
then an easy and robust way is to:
- cluster the master DB as an active-hot standby (with RHCS)
- use mysql-proxy to split off the selects and run many slaves in a
load-balanced pool behind
one VIP
- use binlog replication from the master
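A minimal sketch of the read/write-split piece, assuming mysql-proxy with its
bundled rw-splitting.lua script; host names, port and script path are
hypothetical and will differ per install:

# run mysql-proxy on (or behind) the VIP; writes go to the master,
# reads get spread across the read-only backends
mysql-proxy \
  --proxy-address=0.0.0.0:3306 \
  --proxy-backend-addresses=db-master.example.com:3306 \
  --proxy-read-only-backend-addresses=db-slave1.example.com:3306 \
  --proxy-read-only-backend-addresses=db-slave2.example.com:3306 \
  --proxy-lua-script=/usr/share/mysql-proxy/rw-splitting.lua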
Tiago Cruz wrote:
> I'm researching MySQL clustering.
> What do you suggest?
Last time I tried using MySQL Cluster it was unacceptably slow. It
wasn't really a workable solution. MySQL replication was a much better
option.
> I need to be able to write on all nodes of the cluster, so I can't use
> DRBD7 + Heartbeat.
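For the replication option mentioned above, a rough sketch of the usual
master/slave setup; host names, the repl user/password and the log
file/position are hypothetical placeholders:

# on the master: my.cnf needs  server-id=1  and  log-bin=mysql-bin , then:
mysql -e "GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%' IDENTIFIED BY 'secret';"
mysql -e "SHOW MASTER STATUS;"          # note the File and Position values

# on each slave: my.cnf needs a unique server-id (2, 3, ...), then:
mysql -e "CHANGE MASTER TO MASTER_HOST='db-master.example.com',
          MASTER_USER='repl', MASTER_PASSWORD='secret',
          MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=98;
          START SLAVE;"
mysql -e "SHOW SLAVE STATUS\G"          # Slave_IO_Running / Slave_SQL_Running should say Yes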
Hello guys,
I'm researching MySQL clustering.
What do you suggest?
I need to be able to write on all nodes of the cluster, so I can't use DRBD7 +
Heartbeat.
Maybe I should use DRBD8 and GFS to do this?
Or one proxy application to separate Insert/Update from Select?
Well... thanks for your time! :)
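For the DRBD8 + GFS idea: dual-primary mode is only a couple of lines in the
DRBD 8 resource definition. A minimal /etc/drbd.conf sketch with hypothetical
host names, devices and addresses (fencing and the GFS/cluster side are left out):

resource r0 {
  protocol C;
  startup { become-primary-on both; }
  net {
    allow-two-primaries;
    after-sb-0pri discard-zero-changes;
    after-sb-1pri discard-secondary;
    after-sb-2pri disconnect;
  }
  on node1 { device /dev/drbd0; disk /dev/sdb1; address 10.0.0.1:7788; meta-disk internal; }
  on node2 { device /dev/drbd0; disk /dev/sdb1; address 10.0.0.2:7788; meta-disk internal; }
}

Once the resource is up, "drbdadm primary r0" on both nodes makes both sides
writable, and GFS then goes on top of /dev/drbd0.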
In the message dated Thu, 24 Apr 2008 17:02:54 EDT, the pithy ruminations from
Lon Hohberger were:
=> On Thu, 2008-04-24 at 15:29 -0400, Lon Hohberger wrote:
=> > On Thu, 2008-04-24 at 15:23 -0400, Lon Hohberger wrote:
=> >
=> > I'm writing a step-by-step wiki article now. ;)
=> >
=> > [Note: it will need to be expanded for non-Red Hat/Fedora distributions]
You work fast. Actually, Leo was on to something: my personal user ssh
files were the same
because of our automounted home directories. However, I copied over the files
in /etc/ssh to all three
nodes in my cluster and ssh works great. (I can ssh from any node on our
network to the VIP
addresses the
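A rough sketch of that copy, assuming the stock OpenSSH host-key file names
under /etc/ssh and hypothetical node names; sshd has to be restarted so it
re-reads the keys:

# push node1's host keys to the other cluster members, then restart sshd there
for n in node2 node3; do
    scp -p /etc/ssh/ssh_host_*key* root@$n:/etc/ssh/
    ssh root@$n 'service sshd restart'
done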
On Thu, 2008-04-24 at 15:29 -0400, Lon Hohberger wrote:
> On Thu, 2008-04-24 at 15:23 -0400, Lon Hohberger wrote:
>
> I'm writing a step-by-step wiki article now. ;)
>
> [Note: it will need to be expanded for non-Red Hat/Fedora distributions]
Here's how to do it so that each service has a differen
On Apr 24, 2008, at 6:12 AM, Alain Moulle wrote:
Apr 24 11:47:02 [EMAIL PROTECTED] ccsd[11099]: Local version # : 1
Apr 24 11:47:02 [EMAIL PROTECTED] ccsd[11099]: Remote version #: 1
Apr 24 11:47:02 [EMAIL PROTECTED] ccsd[11099]: Remote copy of cluster.conf is from quorate node.
Is there
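For reference, those version numbers are the config_version of cluster.conf
that ccsd compares between members. On RHEL4/5-era cluster suite the usual way
to bump and propagate it is roughly this (a sketch, not specific to this log):

# edit /etc/cluster/cluster.conf and increase config_version="N" first, then:
ccs_tool update /etc/cluster/cluster.conf   # push the new copy to the other members via ccsd
cman_tool version -r N                      # tell cman about the new config version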
the key is on both nodes. I use NIS for my accounts and automounted
home directories.
Leo Pleiman wrote:
You have to install the same ssh key on each machine.
Bennie Thomas wrote:
I have a 3-node cluster set up as two nodes active and one passive. I
have assigned 2 IP aliases
to fail over.
On Thu, 2008-04-24 at 15:23 -0400, Lon Hohberger wrote:
I'm writing a step-by-step wiki article now. ;)
[Note: it will need to be expanded for non-Red Hat/Fedora distributions]
-- Lon
--
Linux-cluster mailing list
Linux-cluster@redhat.com
https://www.redhat.com/mailman/listinfo/linux-cluster
You have to install the same ssh key on each machine.
Bennie Thomas wrote:
I have a 3-node cluster set up as two nodes active and one passive. I
have assigned 2 IP aliases
to fail over. The problem I am having is: when I ssh to the IP
aliases the first time, it works fine;
I then fail over the IP
On Thu, 2008-04-24 at 14:10 -0500, Bennie Thomas wrote:
> I have a 3-node cluster set up as two nodes active and one passive. I have
> assigned 2 IP aliases
> to fail over. The problem I am having is: when I ssh to the IP aliases
> the first time, it works fine;
> I then fail over the IP alias service
I have a 3-node cluster set up as two nodes active and one passive. I have
assigned 2 IP aliases
to fail over. The problem I am having is: when I ssh to the IP aliases
the first time, it works fine;
I then fail over the IP alias service to the backup node, then try
ssh'ing to the alias, and it fails w
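The usual cause is the host-key mismatch discussed earlier in the thread: the
backup node presents a different host key for the same alias. A quick way to
confirm it from a client (the alias address and node names are hypothetical):

# drop the cached key for the failed-over alias, then look at what each node serves
ssh-keygen -R 192.168.1.50
ssh-keyscan -t rsa node1 node2 node3 2>/dev/null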
On Thu, 2008-04-24 at 13:12 +0200, Alain Moulle wrote:
> when testing a two-node cluster with a quorum disk, when
> I power off node1, node2 fences node1 fine and
> fails over the service, but in the log of node2 I have, before and after
> the fence-success messages, many messages like this:
>
On Wed, 2008-04-23 at 16:11 -0400, Charlie Brady wrote:
> On Fri, 18 Apr 2008, Harri Päiväniemi wrote:
>
> > The qdisk man page says at least 1 heuristic is required.
> >
> > Is it?
> >
> > I have (accidentally) tested it, and to my mind it worked fine without
> > heuristics. I gave 1 vote to quorumd and no -
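For comparison, the heuristic the man page asks for is a <heuristic> child of
the <quorumd> block in cluster.conf. A minimal sketch (label, intervals and the
ping target are hypothetical; with two nodes plus qdisk, cman's expected_votes
would normally be 3):

  <quorumd interval="1" tko="10" votes="1" label="myqdisk">
    <heuristic program="ping -c1 -w1 192.168.1.254" score="1" interval="2"/>
  </quorumd>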
Hi,
I'm facing a problem:
when testing a two-node cluster with a quorum disk, when
I power off node1, node2 fences node1 fine and
fails over the service, but in the log of node2 I have, before and after
the fence-success messages, many messages like this:
Apr 24 11:30:04 [EMAIL PROTECTED] qd