On Fri, 2008-08-15 at 14:25 -0400, Bruce Momjian wrote:
> Simon Riggs wrote:
> > > > Implementation would be to make PQreset() try secondary connection if
> > > > the primary one fails to reset. Of course you can program this manually,
> > > > but the feature is that you wouldn't need to, nor would you need to
> > > > request changes to 27 different interfaces either.
Simon Riggs wrote:
> > > Implementation would be to make PQreset() try secondary connection if
> > > the primary one fails to reset. Of course you can program this manually,
> > > but the feature is that you wouldn't need to, nor would you need to
> > > request changes to 27 different interfaces either.
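
For concreteness, "programming this manually" with current libpq amounts to wrapping PQreset() yourself, roughly as in the sketch below; the helper name and the secondary conninfo argument are illustrative, not an existing API:

    /* Sketch: manual fallback around PQreset(). The conninfo string is a
     * placeholder for whatever the application already knows. */
    #include <stdio.h>
    #include <libpq-fe.h>

    static PGconn *
    reset_or_failover(PGconn *conn, const char *secondary_conninfo)
    {
        PQreset(conn);                          /* try the primary again */
        if (PQstatus(conn) == CONNECTION_OK)
            return conn;

        fprintf(stderr, "primary reset failed: %s", PQerrorMessage(conn));
        PQfinish(conn);

        conn = PQconnectdb(secondary_conninfo); /* fall back to the standby */
        if (PQstatus(conn) != CONNECTION_OK)
        {
            fprintf(stderr, "secondary failed too: %s", PQerrorMessage(conn));
            PQfinish(conn);
            return NULL;
        }
        return conn;
    }

The proposal is essentially to move that fallback inside PQreset() itself, so every interface built on libpq picks it up without code changes.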
On Fri, 2008-08-15 at 12:24 -0400, Bruce Momjian wrote:
> Simon Riggs wrote:
> > When primary server fails, it would be good if the clients connected to
> > the primary knew to reconnect to the standby servers automatically.
> >
> > We might want to specify that centrally and then send the redirection
> > address to the client when it connects.
Simon Riggs wrote:
> When primary server fails, it would be good if the clients connected to
> the primary knew to reconnect to the standby servers automatically.
>
> We might want to specify that centrally and then send the redirection
> address to the client when it connects. Sounds like lots of work though.
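
One possible reading of "send the redirection address to the client when it connects" is a client that asks for its fallback target right after connecting and caches it for later. A sketch; the standby_addresses relation queried here is purely hypothetical, no server provides it today:

    /* Sketch: cache a failover target advertised by the server at connect
     * time. The standby_addresses relation is hypothetical. */
    #include <string.h>
    #include <libpq-fe.h>

    static char fallback_conninfo[256];

    static void
    remember_fallback(PGconn *conn)
    {
        PGresult   *res = PQexec(conn, "SELECT conninfo FROM standby_addresses");

        if (res != NULL &&
            PQresultStatus(res) == PGRES_TUPLES_OK &&
            PQntuples(res) > 0)
            strncpy(fallback_conninfo, PQgetvalue(res, 0, 0),
                    sizeof(fallback_conninfo) - 1);
        PQclear(res);
    }

Whether and when the client may actually act on that cached address is the part the rest of the thread argues about.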
Hi,
Dimitri Fontaine wrote:
> That's not exactly it, I want to prevent any of the database servers from
> erroring out whenever a network failure happens. Sync is not an answer here.
So, you want your base data to remain readable on the slaves, even if it
loses connection to the master, right?
On Tuesday 05 August 2008, Markus Wanner wrote:
> I do not understand that reasoning. Synchronous replication is
> certainly *more* resilient to network failures, as it does *not* lose
> any data on failover.
>
> However, you are speaking about "logs" and "stats". That certainly
> sounds like da
Hi,
(sorry... I'm typing too fast and hitting the wrong keys... continuing
the previous mail now...)
Dimitri Fontaine wrote:
> Now, this configuration needs to be resistant to network failure of any node,
Yeah, increasing availability is the primary purpose of doing replication.
> central one
Hi,
Dimitri Fontaine wrote:
Redirecting writing transactions from slaves to the master node solves
another problem. Being able to 'rescue' such forwarded connections in
case of a failure of the master is just a nice side effect. But it
doesn't solve the problem of connection losses between a cli
On Tuesday 05 August 2008, Markus Wanner wrote:
> Dimitri Fontaine wrote:
> > I'm thinking in terms of a single-master, multiple-slaves scenario...
> > In the single-master case, each slave only needs to know who the current
> > master is and whether it can itself process read-only queries (locally) or not.
>
> I d
Hi,
Dimitri Fontaine wrote:
> I'm thinking in terms of a single-master, multiple-slaves scenario...
> In the single-master case, each slave only needs to know who the current master is
> and whether it can itself process read-only queries (locally) or not.
I don't think that's as trivial as you make it sound. I'd
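
To make "each slave only needs to know who the current master is" concrete, here is a sketch of a client-side probe; the cluster_status relation is an assumption that some replication layer would have to maintain, not anything PostgreSQL ships:

    /* Sketch: ask one node which server is currently the master.
     * The cluster_status relation is hypothetical. */
    #include <string.h>
    #include <libpq-fe.h>

    /* Returns 1 and fills master_conninfo if the node gave an answer. */
    static int
    who_is_master(const char *node_conninfo, char *master_conninfo, size_t len)
    {
        PGconn     *conn = PQconnectdb(node_conninfo);
        PGresult   *res;
        int         ok = 0;

        if (PQstatus(conn) != CONNECTION_OK)
        {
            PQfinish(conn);
            return 0;
        }

        res = PQexec(conn, "SELECT master_conninfo FROM cluster_status");
        if (res != NULL &&
            PQresultStatus(res) == PGRES_TUPLES_OK &&
            PQntuples(res) > 0)
        {
            strncpy(master_conninfo, PQgetvalue(res, 0, 0), len - 1);
            master_conninfo[len - 1] = '\0';
            ok = 1;
        }
        PQclear(res);
        PQfinish(conn);
        return ok;
    }

The non-trivial part Markus is pointing at is keeping that answer consistent on every node during and after a failover, which is the group-membership ("view") problem mentioned later in the thread.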
On Tuesday 05 August 2008, Markus Wanner wrote:
> I've thought about that as well, but think about it this way: to protect
> against N failing nodes, you need to forward *every* request through N
> living nodes, before actually hitting the node which processes the
> query. To me, that sounds like an
Hi,
Dimitri Fontaine wrote:
> If slave nodes were able to accept connections and redirect them to the master,
> the client wouldn't need to care about connecting to master or slave, just to
> connect to a live node.
I've thought about that as well, but think about it this way: to protect
against N failing nodes, you need to forward *every* request through N living
nodes, before actually hitting the node which processes the query.
On Tuesday 05 August 2008, Markus Wanner wrote:
> > (Think network partition.)
>
> Uh... well, yeah, of course the servers themselves need to exchange
> their state and make sure they only accept clients if they are up and
> running (as seen by the cluster). That's what the 'view' of a GCS is all
>
Hi,
Simon Riggs wrote:
> On Tue, 2008-08-05 at 11:50 +0300, Hannu Krosing wrote:
> > I guess having the title "Automatic Client Failover" suggests to most
> > readers that you are trying to solve the client side separately from
> > the server.
> Yes, that's right: separately. Why would anybody presume I meant "
Hi,
Greg Stark wrote:
> a cwnrallu
What is that?
Regards
Markus Wanner
Hi,
Tom Lane wrote:
> Huh? The pgpool is on the server, not on the client side.
Not necessarily. Having pgpool on the client side works just as well.
> There is one really bad consequence of the oversimplified failover
> design that Simon proposes, which is that clients might try to fail over
> for reasons other than a primary server failure. (Think network partition.)
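
The failure mode being described is easy to state in code: from the client's side, "the primary died" and "my network path to the primary died" produce the same error. A deliberately naive sketch of the pattern Tom is warning against; the connection strings are illustrative:

    /* Anti-pattern sketch: treating any connection failure as proof that
     * the primary is dead. Under a network partition the primary may still
     * be alive and serving other clients, so blindly moving writes to a
     * promoted standby can end up with two masters. */
    #include <libpq-fe.h>

    static PGconn *
    naive_failover(PGconn *conn, const char *standby_conninfo)
    {
        if (PQstatus(conn) == CONNECTION_OK)
            return conn;                /* connection is fine, keep it */

        PQfinish(conn);
        /* Wrong in general: "I cannot reach it" != "it is down". */
        return PQconnectdb(standby_conninfo);
    }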
On Tue, 2008-08-05 at 11:50 +0300, Hannu Krosing wrote:
> On Tue, 2008-08-05 at 07:52 +0100, Simon Riggs wrote:
> > On Mon, 2008-08-04 at 22:56 -0400, Tom Lane wrote:
> > > Josh Berkus <[EMAIL PROTECTED]> writes:
> > > > I think the proposal was for an extremely simple "works 75% of the
> > > > t
On Tue, 2008-08-05 at 07:52 +0100, Simon Riggs wrote:
> On Mon, 2008-08-04 at 22:56 -0400, Tom Lane wrote:
> > Josh Berkus <[EMAIL PROTECTED]> writes:
> > > I think the proposal was for an extremely simple "works 75% of the time"
> > > failover solution. While I can see the attraction of that, th
On Tuesday 05 August 2008, Tom Lane wrote:
> Huh? The problem case is that the primary server goes down, which would
> certainly mean that a pgbouncer instance on the same machine goes with
> it. So it seems to me that integrating pgbouncer is 100% backwards.
With all due respect, it seems to me
Greg
On 5-Aug-08, at 12:15 AM, "Tom Lane" <[EMAIL PROTECTED]> wrote:
> There is one really bad consequence of the oversimplified failover
> design that Simon proposes, which is that clients might try to fail over
> for reasons other than a primary server failure. (Think network
> partition.) You
On Mon, 2008-08-04 at 22:56 -0400, Tom Lane wrote:
> Josh Berkus <[EMAIL PROTECTED]> writes:
> > I think the proposal was for an extremely simple "works 75% of the time"
> > failover solution. While I can see the attraction of that, the
> > consequences of having failover *not* work are pretty severe.
Josh Berkus <[EMAIL PROTECTED]> writes:
> I think the proposal was for an extremely simple "works 75% of the time"
> failover solution. While I can see the attraction of that, the
> consequences of having failover *not* work are pretty severe.
Exactly. The point of failover (or any other HA fe
On Mon, Aug 04, 2008 at 05:17:59PM -0400, Jonah H. Harris wrote:
> On Mon, Aug 4, 2008 at 5:08 PM, Simon Riggs <[EMAIL PROTECTED]> wrote:
> > When primary server fails, it would be good if the clients connected to
> > the primary knew to reconnect to the standby servers automatically.
>
> This wou
Tom,
> Failover that actually works is not something we can provide with
> trivial changes to Postgres.
I think the proposal was for an extremely simple "works 75% of the time"
failover solution. While I can see the attraction of that, the
consequences of having failover *not* work are pretty severe.
Dimitri Fontaine <[EMAIL PROTECTED]> writes:
> On 5 August 08 at 01:13, Tom Lane wrote:
>> There is one really bad consequence of the oversimplified failover
>> design that Simon proposes, which is that clients might try to fail
>> over for reasons other than a primary server failure. (Think network partition.)
Hi,
On 5 August 08 at 01:13, Tom Lane wrote:
> There is one really bad consequence of the oversimplified failover
> design that Simon proposes, which is that clients might try to fail over
> for reasons other than a primary server failure. (Think network partition.)
On Mon, 2008-08-04 at 22:08 +0100, Simon Riggs wrote:
> When primary server fails, it would be good if the clients connected to
> the primary knew to reconnect to the standby servers automatically.
>
> We might want to specify that centrally and then send the redirection
> address to the client wh
"Jonah H. Harris" <[EMAIL PROTECTED]> writes:
> On Mon, Aug 4, 2008 at 5:39 PM, Josh Berkus <[EMAIL PROTECTED]> wrote:
>> Well, it's less simple, but you can already do this with pgPool on the
>> client machine.
> Yeah, but if you have tens or hundreds of clients, you wouldn't want
> to be install
On Mon, Aug 4, 2008 at 5:39 PM, Josh Berkus <[EMAIL PROTECTED]> wrote:
> Well, it's less simple, but you can already do this with pgPool on the
> client machine.
Yeah, but if you have tens or hundreds of clients, you wouldn't want
to be installing/managing a pgpool on each. Similarly, I think an
On Monday 04 August 2008 14:08, Simon Riggs wrote:
> When primary server fails, it would be good if the clients connected to
> the primary knew to reconnect to the standby servers automatically.
>
> We might want to specify that centrally and then send the redirection
> address to the client when i
On Mon, Aug 4, 2008 at 5:08 PM, Simon Riggs <[EMAIL PROTECTED]> wrote:
> When primary server fails, it would be good if the clients connected to
> the primary knew to reconnect to the standby servers automatically.
This would be a nice feature which many people I've talked to have
asked for. In O
When primary server fails, it would be good if the clients connected to
the primary knew to reconnect to the standby servers automatically.
We might want to specify that centrally and then send the redirection
address to the client when it connects. Sounds like lots of work though.
Seems fairly s
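
For the client half of this, the simplest approximation available today is to walk a list of candidate servers at connect time; a sketch, with the list coming from the application rather than being "specified centrally" as proposed:

    /* Sketch: try candidate conninfo strings in order until one accepts. */
    #include <stdio.h>
    #include <libpq-fe.h>

    static PGconn *
    connect_first_available(const char **candidates, int ncandidates)
    {
        int         i;

        for (i = 0; i < ncandidates; i++)
        {
            PGconn     *conn = PQconnectdb(candidates[i]);

            if (PQstatus(conn) == CONNECTION_OK)
                return conn;

            fprintf(stderr, "could not connect to \"%s\": %s",
                    candidates[i], PQerrorMessage(conn));
            PQfinish(conn);
        }
        return NULL;
    }

Called with something like { "host=db1 dbname=app", "host=db2 dbname=app" }, this only covers the initial-connect case; the hard questions raised in the thread are what happens to already-open connections, and who decides that failing over is actually safe.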