Re: [HACKERS] contrib/snapshot

2011-01-01 Thread Joel Jacobson
2010/12/31 Simon Riggs si...@2ndquadrant.com

 Please call it something other than snapshot. There are already about 3
 tools called something similar and a couple of different meanings of the
 term in the world of Postgres.


Renamed the entire github project as well:
https://github.com/gluefinance/fsnapshot

-- 
Best regards,

Joel Jacobson
Glue Finance


Re: RIGHT/FULL OUTER hash joins (was Re: [HACKERS] small table left outer join big table)

2011-01-01 Thread Simon Riggs
On Thu, 2010-12-30 at 10:45 -0500, Tom Lane wrote:

 Comments?

Thanks for working on this. I love the reuse of tuple flags; I can't
help feeling that opens up doors, just not sure how yet...

-- 
 Simon Riggs   http://www.2ndQuadrant.com/books/
 PostgreSQL Development, 24x7 Support, Training and Services
 




Re: [HACKERS] Sync Rep Design

2011-01-01 Thread Simon Riggs
On Fri, 2010-12-31 at 22:18 +0100, Hannu Krosing wrote:
 On 31.12.2010 13:40, Heikki Linnakangas wrote:
 
  Sounds good.
 
  I still don't like the synchronous_standbys='' and 
  synchronous_replication=on combination, though. IMHO that still 
  amounts to letting the standby control the behavior on master, and it 
  makes it impossible to temporarily add an asynchronous standby to the mix.
 A sync standby _will_ have the ability to control the master anyway by
 simply being there or not.
 
 What is currently proposed is having dual power lines / dual UPSes and
 working happily on when one of them fails.
 Requiring both of them to be present defeats the original purpose of
 doubling them.
 
 So following Simon's design of 2 standbys and only one required to ACK a
 commit, you get 2X the reliability of a single standby.
...

Yes, working out the math is a good idea. Things are much clearer if we
do that.

Let's assume we have 98% availability on any single server.

1. If we have one primary and 2 standbys, either of which can
acknowledge, and we never lock up if both standbys fail, then we will
have 99.9992% server availability. (So PostgreSQL hits 5 nines, with
data guarantees.) (Maximised availability)

2. If we have one primary and 2 standbys, either of which can
acknowledge, and we lock up if both standbys fail in order to protect
the data, then we will have 99.996% availability. Slightly less
availability, but we never put data at risk, since any commit is always
covered by at least 2 servers. (Maximised protection)

3. If we have a primary and a single standby which must acknowledge, and
we choose to lock up if the standby fails, then we will have only 96.04%
availability.

4. If we have a primary and two standbys (named or otherwise), both of
which must acknowledge or we lock up the master, then we have an awesome
94.12% availability.

On the last two, there is also an increased likelihood of administrative
cock-ups because of more specific and complex config requirements.
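
A minimal sketch of the arithmetic behind scenarios 1, 3 and 4, assuming
each server is independently available 98% of the time (scenario 2
depends on the exact lock-up/failover model, so it is left out):

    # Availability arithmetic for the scenarios above, with p = 0.98
    # per server and failures assumed independent.
    p = 0.98

    # 1. Primary + 2 standbys, any one may ACK, no lock-up: the service
    #    survives as long as at least one of the three servers is up.
    scenario_1 = 1 - (1 - p) ** 3      # 0.999992 -> "5 nines"

    # 3. Primary + 1 mandatory standby, lock up if the standby fails:
    #    both servers must be up at the same time.
    scenario_3 = p ** 2                # 0.9604

    # 4. Primary + 2 mandatory standbys: all three must be up.
    scenario_4 = p ** 3                # 0.941192

    for name, avail in (("1", scenario_1), ("3", scenario_3), ("4", scenario_4)):
        print("scenario %s: %.4f%%" % (name, avail * 100))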

-- 
 Simon Riggs   http://www.2ndQuadrant.com/books/
 PostgreSQL Development, 24x7 Support, Training and Services
 




Re: [HACKERS] pl/python refactoring

2011-01-01 Thread Jan Urbański
On 01/01/11 01:00, Peter Eisentraut wrote:
 On tor, 2010-12-23 at 14:41 +0100, Jan Urbański wrote:
 It does some architectural changes to PL/Python that make it easier to
 implement other features, like for instance a validator function. The
 full list of changes in the patch is:
 
 I would review this and the following patches, but I'd really prefer it
 if you could split this particular patch into about 11 single-purpose
 patches.  I think most of the changes here are not interrelated.

OK, I'll split this patch into even smaller chunks.

Jan



Re: [HACKERS] pl/python refactoring

2011-01-01 Thread Peter Eisentraut
On lör, 2011-01-01 at 13:24 +0100, Jan Urbański wrote:
 On 01/01/11 01:00, Peter Eisentraut wrote:
  On tor, 2010-12-23 at 14:41 +0100, Jan Urbański wrote:
  It does some architectural changes to PL/Python that make it easier to
  implement other features, like for instance a validator function. The
  full list of changes in the patch is:
  
  I would review this and the following patches, but I'd really prefer it
  if you could split this particular patch into about 11 single-purpose
  patches.  I think most of the changes here are not interrelated.
 
 OK, I'll split this patch into even smaller chunks.

Thanks.  Just attach them all to a single mail message.  Don't create
new CF entries or something.




Re: [HACKERS] Sync Rep Design

2011-01-01 Thread Jeff Janes
On 12/31/10, Simon Riggs si...@2ndquadrant.com wrote:
 On Fri, 2010-12-31 at 09:27 +0100, Stefan Kaltenbrunner wrote:

 Maybe it has been discussed but I still don't see why it makes any
 sense. If I declare a standby a sync standby I better want it sync - not
 maybe sync. Consider the case of 1 master and two identical sync
 standbys - one sync standby is in the same datacenter the other is in a
 backup location say 15km away.
 Given there is a small constant latency to the second box (even if you
 have fast networks) the end effect is that the second standby will NEVER
 be sync (because the local one will always be faster) and you end up
 with an async slave that cannot be used per your business rules?

 Your picture above is a common misconception. I will add something to
 the docs to explain this.

 1. sync is a guarantee about how we respond to the client when we
 commit. If we wait for more than one response that slows things down,
 makes the cluster more fragile, complicates the code and doesn't
 appreciably improve the guarantee.

Whether it is more fragile depends on whether you look at up-time fragility
or durability fragility.  I think it can appreciably improve the
guarantee.


 2. sync does not guarantee that the updates to the standbys are in any
 way coordinated. You can run a query on one standby and get one answer
 and at the exact same time run the same query on another standby and get
 a different answer (slightly ahead/behind). That also means that if the
 master crashes one of the servers will be ahead or behind. You can use
 pg_last_xlog_receive_location() to check which one that is.

If at least one of the standbys is in the same smoking crater as the
primary, then pg_last_xlog_receive_location on it is unlikely to
respond.

The guarantee goes away precisely when it is needed.

Cheers,

Jeff



Re: [HACKERS] Sync Rep Design

2011-01-01 Thread Stefan Kaltenbrunner

On 12/31/2010 08:15 PM, Simon Riggs wrote:

On Fri, 2010-12-31 at 14:40 +0200, Heikki Linnakangas wrote:

On 31.12.2010 13:48, Simon Riggs wrote:


I see significant real-world issues with configuring replication using
multiple named servers, as described in the link above:


All of these points only apply to specifying *multiple* named servers in
the synchronous_standbys='...' list.


Unfortunately, some of the points apply to using named servers at all,
even if there is only one.


  That's certainly a more complicated
scenario, and the configuration is more complicated as a result.
With your proposal, it's not possible in the first place.

Multiple synchronous standbys probably aren't needed by most people, so
I'm fine with leaving that out for now, keeping the design the same
otherwise. I included it in the proposal because it easily falls out of
the design. So, if you're worried about the complexities of multiple
synchronous standbys, let's keep the UI exactly the same as what I
described in the link above, but only allow one name in the
synchronous_standbys setting, instead of a list.


The best usage recommendation is still to have 2+ standbys, *any* of
which can be used to provide sync rep. That gives the best performance,
best availability and easiest configuration that I know of. That best
usage is not achievable with uniquely named servers; using non-unique
names defeats the point of having names in the first place.


I disagree with that usage recommendation. If we ask for sync we should
get sync; your definition is more like having fsync=on only do
fsync sometimes and still claiming it is safe. Also it is very much
possible to do that semisync-style replication feature with named
servers (see my post about the design I would like to see as a DBA) and
STILL keep the flexibility to do what other people (like me) in that
thread want (at least from a UI perspective).
As I said before, I would very much prefer to have us restricted to
exactly ONE sync-capable standby and x async ones if we cannot agree on
a reasonable interface :(




I accept that the best usage is a general case and there may be
circumstances that make the difficulties of named servers worth the
trouble.

So replicating to multiple synchronous standbys is definitely needed in
this release. *Confirming* replication to multiple named sync standbys
is the thing we don't need in this release.


well you keep saying that but to be honest I cannot really even see a
usecase for me - where only a random one of a set of servers is sync
at any time and I don't really know which one.
My usecases would all involve 2 sync standbys and 1 or more async ones,
but the second sync one would be in a different datacenter and I NEED to
protect against a datacenter failure, which your proposal says I cannot
do :(


Stefan



Re: [HACKERS] Sync Rep Design

2011-01-01 Thread Stefan Kaltenbrunner

On 01/01/2011 02:13 PM, Jeff Janes wrote:

On 12/31/10, Simon Riggs si...@2ndquadrant.com wrote:

On Fri, 2010-12-31 at 09:27 +0100, Stefan Kaltenbrunner wrote:


Maybe it has been discussed but I still don't see why it makes any
sense. If I declare a standby a sync standby I better want it sync - not
maybe sync. Consider the case of 1 master and two identical sync
standbys - one sync standby is in the same datacenter the other is in a
backup location say 15km away.
Given there is a small constant latency to the second box (even if you
have fast networks) the end effect is that the second standby will NEVER
be sync (because the local one will always be faster) and you end up
with an async slave that cannot be used per your business rules?


Your picture above is a common misconception. I will add something to
the docs to explain this.

1. sync is a guarantee about how we respond to the client when we
commit. If we wait for more than one response that slows things down,
makes the cluster more fragile, complicates the code and doesn't
appreciably improve the guarantee.


Whether it is more fragile depends on whether you look at up-time fragility
or durability fragility.  I think it can appreciably improve the
guarantee.



2. sync does not guarantee that the updates to the standbys are in any
way coordinated. You can run a query on one standby and get one answer
and at the exact same time run the same query on another standby and get
a different answer (slightly ahead/behind). That also means that if the
master crashes one of the servers will be ahead or behind. You can use
pg_last_xlog_receive_location() to check which one that is.


If at least one of the standbys is in the same smoking crater as the
primary, then pg_last_xlog_receive_location on it is unlikely to
respond.

The guarantee goes away precisely when it is needed.


that is exactly my point - if you have no guarantee that your SYNC standby
is actually sync, there is no use in it for business cases
that require sync replication.
If we cannot support that usecase I would like to see us either
restrict to only one sync-capable standby or put a big CAVEAT
into the docs saying that sync replication in pg is only a hint, not
a guarantee, and might or might not be honored in the case of more than
one standby.




Stefan



Re: [HACKERS] Sync Rep Design

2011-01-01 Thread Robert Haas
On Sat, Jan 1, 2011 at 9:03 AM, Stefan Kaltenbrunner
ste...@kaltenbrunner.cc wrote:
 that is exactly my point - if you have no guarantee that your SYNC standby is
 actually sync, there is no use in it for business cases that
 require sync replication.
 If we cannot support that usecase I would like to see us either restrict
 to only one sync-capable standby or put a big CAVEAT into the docs
 saying that sync replication in pg is only a hint, not a guarantee, and
 might or might not be honored in the case of more than one standby.

I think it's clear that different people want different things.  I
understand Simon's point, but I think the point Stefan and Jeff are
making is equally valid.  I think the solution is:

- Simon gets to implement his version first because he's writing the
code.  If someone else writes the code then they get to pick.

- Whoever wants to make the other thing work can write a patch for that after.

- The docs should not allege that either setup is preferable to the
other, because there is not now and will never be consensus that this
is in fact true.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] and it's not a bunny rabbit, either

2011-01-01 Thread Guillaume Lelarge
On 01/01/2011 06:05, Robert Haas wrote:
 On Fri, Dec 31, 2010 at 8:48 AM, Peter Eisentraut pete...@gmx.net wrote:
 On tor, 2010-12-30 at 11:03 -0500, Robert Haas wrote:
 No, quite the opposite.  With the other approach, you needed:

 constraints cannot be used on views
 constraints cannot be used on composite types
 constraints cannot be used on TOAST tables
 constraints cannot be used on indexes
 constraints cannot be used on foreign tables

 With this, you just need:

 constraints can only be used on tables

 At the beginning of this thread you said that the error messages should
 focus on what you tried to do, not what you could do instead.
 
 Yeah, and I still believe that.  I'm having difficulty coming up with
 a workable approach, though.  It would be simple enough if we could
 write:
 
 /* translator: first %s is a feature, second %s is a relation type */
 %s cannot be used on %s
 
 ...but I think this is likely to cause some translation headaches.
 

Actually, this is simply not translatable in some languages. We had the
same issue on pgAdmin, and we resolved it by having quite a big number
of new strings to translate. More work up front for the translator, but
it results in a much better experience for the user.

 Also, in this particular case, the user could very well assume that a
 TOAST table or a foreign table is a table.
 
 There's a limited amount we can do about confused users, but it is
 true that the negative phrasing is better for that case.
 

It's at least better for the translator.


-- 
Guillaume
 http://www.postgresql.fr
 http://dalibo.com



Re: [HACKERS] and it's not a bunny rabbit, either

2011-01-01 Thread Robert Haas
On Sat, Jan 1, 2011 at 9:53 AM, Guillaume Lelarge
guilla...@lelarge.info wrote:
 On 01/01/2011 06:05, Robert Haas wrote:
 On Fri, Dec 31, 2010 at 8:48 AM, Peter Eisentraut pete...@gmx.net wrote:
 On tor, 2010-12-30 at 11:03 -0500, Robert Haas wrote:
 No, quite the opposite.  With the other approach, you needed:

 constraints cannot be used on views
 constraints cannot be used on composite types
 constraints cannot be used on TOAST tables
 constraints cannot be used on indexes
 constraints cannot be used on foreign tables

 With this, you just need:

 constraints can only be used on tables

 At the beginning of this thread you said that the error messages should
 focus on what you tried to do, not what you could do instead.

 Yeah, and I still believe that.  I'm having difficulty coming up with
 a workable approach, though.  It would be simple enough if we could
 write:

 /* translator: first %s is a feature, second %s is a relation type */
 %s cannot be used on %s

 ...but I think this is likely to cause some translation headaches.

 Actually, this is simply not translatable in some languages. We had the
 same issue on pgAdmin, and we resolved it by having quite a big number
 of new strings to translate. More work up front for the translator, but
 it results in a much better experience for the user.

Is it any better if we write one string per feature, like this:

constraints cannot be used on %s
triggers cannot be used on %s

...where %s is a plural object type (views, foreign tables, etc.).

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] Sync Rep Design

2011-01-01 Thread Stefan Kaltenbrunner

On 01/01/2011 03:15 PM, Robert Haas wrote:

On Sat, Jan 1, 2011 at 9:03 AM, Stefan Kaltenbrunner
ste...@kaltenbrunner.cc  wrote:

that is exactly my point - if you have no guarantee that your SYNC standby is
actually sync, there is no use in it for business cases that
require sync replication.
If we cannot support that usecase I would like to see us either restrict
to only one sync-capable standby or put a big CAVEAT into the docs
saying that sync replication in pg is only a hint, not a guarantee, and
might or might not be honored in the case of more than one standby.


I think it's clear that different people want different things.  I
understand Simon's point, but I think the point Stefan and Jeff are
making is equally valid.  I think the solution is:

- Simon gets to implement his version first because he's writing the
code.  If someone else writes the code then they get to pick.


fair point ;)



- Whoever wants to make the other thing work can write a patch for that after.


yeah, but I still would like to get a statement on why Simon thinks that
the design Heikki and others have proposed won't work - a design
supporting multiple sync standbys that are actually sync (and which also
supports semi-sync, as his patch does, which I consider a degraded case
of full sync).


if you take the synchronous_standbys=list thing as an example, what
about considering it as:


foo: sync capable standby
bar: sync capable standby
baz: sync capable standby

with

synchronous_standbys=standbyname:sync_required(bool)

synchronous_standbys=foo,bar,baz - you get semi-sync: whatever standby
replies first causes the master to return as well (as in what Simon's
patch does)
synchronous_standbys=foo:true,bar:true,baz - require at least foo and bar
to reply before the master returns


** the syntax chosen is just a random example and could be anything **


that one could as well be used for other per-standby configuration
(timeouts, wait behaviour etc.), or not only be a
synchronous_standbys=list thing but more a standby_list=list thingy
that also includes async slaves (defaulting to * or whatever, so
everything is async with default settings unless anything else is
specified) - see the sketch below
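
As a sanity check of those proposed semantics - the syntax and standby
names are only Stefan's illustrative example, not an agreed interface -
here is a minimal sketch of how a master could decide when a commit may
be acknowledged:

    # Hypothetical sketch of the semantics of the illustrative
    # synchronous_standbys=foo:true,bar:true,baz example above.
    # Names and syntax are placeholders, not a PostgreSQL interface.
    def parse_standby_list(setting):
        """Parse 'name[:required]' items into (name, required) pairs."""
        entries = []
        for item in setting.split(","):
            name, _, flag = item.partition(":")
            entries.append((name.strip(), flag.strip().lower() == "true"))
        return entries

    def commit_can_return(setting, acked):
        """A commit may return once every 'required' standby has ACKed
        and at least one listed standby has ACKed (the semi-sync case)."""
        entries = parse_standby_list(setting)
        required = {name for name, req in entries if req}
        listed = {name for name, _ in entries}
        return required <= set(acked) and bool(listed & set(acked))

    # semi-sync: any one reply releases the commit
    assert commit_can_return("foo,bar,baz", {"bar"})
    # foo and bar both required: baz alone is not enough
    assert not commit_can_return("foo:true,bar:true,baz", {"baz"})
    assert commit_can_return("foo:true,bar:true,baz", {"foo", "bar"})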




- The docs should not allege that either setup is preferable to the
other, because there is not now and will never be consensus that this
is in fact true.


well, I should think we need to clearly spell out everything that affects
reliability, and if we only support semi-sync for more than 1 standby we
only have that setup :)

Anyway as long as sync rep is disabled by default I'm fine with that.


Stefan



Re: [HACKERS] and it's not a bunny rabbit, either

2011-01-01 Thread Guillaume Lelarge
On 01/01/2011 16:00, Robert Haas wrote:
 On Sat, Jan 1, 2011 at 9:53 AM, Guillaume Lelarge
 guilla...@lelarge.info wrote:
 On 01/01/2011 06:05, Robert Haas wrote:
 On Fri, Dec 31, 2010 at 8:48 AM, Peter Eisentraut pete...@gmx.net wrote:
 On tor, 2010-12-30 at 11:03 -0500, Robert Haas wrote:
 No, quite the opposite.  With the other approach, you needed:

 constraints cannot be used on views
 constraints cannot be used on composite types
 constraints cannot be used on TOAST tables
 constraints cannot be used on indexes
 constraints cannot be used on foreign tables

 With this, you just need:

 constraints can only be used on tables

 At the beginning of this thread you said that the error messages should
 focus on what you tried to do, not what you could do instead.

 Yeah, and I still believe that.  I'm having difficulty coming up with
 a workable approach, though.  It would be simple enough if we could
 write:

 /* translator: first %s is a feature, second %s is a relation type */
 %s cannot be used on %s

 ...but I think this is likely to cause some translation headaches.

 Actually, this is simply not translatable in some languages. We had the
 same issue on pgAdmin, and we resolved it by having quite a big number
 of new strings to translate. More work up front for the translator, but
 it results in a much better experience for the user.
 
 Is it any better if we write one string per feature, like this:
 
 constraints cannot be used on %s
 triggers cannot be used on %s
 
 ...where %s is a plural object type (views, foreign tables, etc.).
 

If %s were a singular object, it would be an issue for French. But for
the plural form, it won't be an issue. Not sure it would be the same in
other languages. IIRC from my student years, German could have an issue
here.


-- 
Guillaume
 http://www.postgresql.fr
 http://dalibo.com



[HACKERS] ALTER TABLE .. SET SCHEMA lock strength

2011-01-01 Thread Robert Haas
While reviewing the SQL/MED patch, I happened to notice that
ExecAlterObjectSchemaStmt calls AlterTableNamespace with a lock mode
argument of AccessExclusiveLock. Does anyone see a reason why
ShareUpdateExclusiveLock would be insufficient?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] Sync Rep Design

2011-01-01 Thread Dimitri Fontaine
Stefan Kaltenbrunner ste...@kaltenbrunner.cc writes:
 well you keep saying that but to be honest I cannot really even see a
 usecase for me - where only a random one of a set of servers is sync at
 any time and I don't really know which one.

It looks easy enough to get to know which one it is.  Surely the primary
knows and could update something visible through a system view for
users?  This has been asked for before and I was thinking there was a
consensus on this.

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support



Re: [HACKERS] Sync Rep Design

2011-01-01 Thread Simon Riggs
On Sat, 2011-01-01 at 05:13 -0800, Jeff Janes wrote:
 On 12/31/10, Simon Riggs si...@2ndquadrant.com wrote:
  On Fri, 2010-12-31 at 09:27 +0100, Stefan Kaltenbrunner wrote:
 
  Maybe it has been discussed but I still don't see why it makes any
  sense. If I declare a standby a sync standby I better want it sync - not
  maybe sync. Consider the case of 1 master and two identical sync
  standbys - one sync standby is in the same datacenter the other is in a
  backup location say 15km away.
  Given there is a small constant latency to the second box (even if you
  have fast networks) the end effect is that the second standby will NEVER
  be sync (because the local one will always be faster) and you end up
  with an async slave that cannot be used per your business rules?
 
  Your picture above is a common misconception. I will add something to
  the docs to explain this.
 
  1. sync is a guarantee about how we respond to the client when we
  commit. If we wait for more than one response that slows things down,
  makes the cluster more fragile, complicates the code and doesn't
  appreciably improve the guarantee.
 
 Whether it is more fragile depends on whether you look at up-time fragility
 or durability fragility.  I think it can appreciably improve the
 guarantee.

Yes, agreed. That is why I proposed quorum commit earlier in 2010, as a
way to improve the durability guarantee. That was bogged down by the
requirement for named servers, which I see as unnecessary.

  2. sync does not guarantee that the updates to the standbys are in any
  way coordinated. You can run a query on one standby and get one answer
  and at the exact same time run the same query on another standby and get
  a different answer (slightly ahead/behind). That also means that if the
  master crashes one of the servers will be ahead or behind. You can use
  pg_last_xlog_receive_location() to check which one that is.
 
 If at least one of the standbys is in the same smoking crater as the
 primary, then pg_last_xlog_receive_location on it is unlikely to
 respond.
 
 The guarantee goes away precisely when it is needed.

Fairly obviously, I would not be advocating anything that forced you to
use a server in the same smoking crater. I can't see any guarantee
that goes away precisely when it is needed.

Perhaps you could explain the issue you see, because your comments seem
unrelated to my point above.

-- 
 Simon Riggs   http://www.2ndQuadrant.com/books/
 PostgreSQL Development, 24x7 Support, Training and Services
 




Re: [HACKERS] Sync Rep Design

2011-01-01 Thread Stefan Kaltenbrunner

On 01/01/2011 05:28 PM, Dimitri Fontaine wrote:

Stefan Kaltenbrunner ste...@kaltenbrunner.cc writes:

well you keep saying that but to be honest I cannot really even see a
usecase for me - where only a random one of a set of servers is sync at
any time and I don't really know which one.


It looks easy enough to get to know which one it is.  Surely the primary
knows and could update something visible through a system view for
 users?  This has been asked for before and I was thinking there was a
consensus on this.


well, as Jeff Janes already said - anything that requires the master to
still exist is not useful for a disaster. Consider the now often
mentioned 2-sync-standby scenario with one standby in the same location
and one in a secondary location.
If you have a disaster (fire, water, explosion, admin fail, ...) at the
primary location and you have no access to either the master or the
standby, you will never be sure that the standby at the secondary
location is actually in sync - it could be, but you will never know if
you lost that $1B invoice just committed on the master and the closeby
standby and therefore confirmed to the client...
Most of my use cases have very hard requirements on the integrity of
the data, very high requirements on read-only availability and
somewhat high requirements on the availability of a master for writes,
but data integrity will always trump that.




Stefan



Re: [HACKERS] Sync Rep Design

2011-01-01 Thread Simon Riggs
On Sat, 2011-01-01 at 16:12 +0100, Stefan Kaltenbrunner wrote:

 I still would like to get a statement on why Simon thinks that
 the design Heikki and others have proposed

I've explained in huge detail why I think what I think, and have not
avoided any technical issue.

It appears to me there has been substantial confusion over alternatives,
because of a misunderstanding about how synchronisation works. Requiring
confirmation that standbys are in sync is *not* the same thing as them
actually being in sync. Every single proposal made by anybody here on
hackers that supports multiple standby servers suffers from the same
issue: when the primary crashes you need to work out which standby
server is ahead.


 - The docs should not allege that either setup is preferable to the
  other, because there is not now and will never be consensus that this
  is in fact true.

I remain hopeful that people will read what I have written and understand
it. Having taken the trouble to do that publicly, my conscience is clear
that I've done the very best to explain things and make it easy for
users to avoid error. If I am prevented from putting sound advice into
the docs, I'll not worry too much.


 well, I should think we need to clearly spell out everything that affects
 reliability, and if we only support semi-sync for more than 1 standby we
 only have that setup :)

You can use sync rep with 1 or more standby servers.

At the end of the day, I can't stop anyone from saying "What an idiot,
he designed something that gave the same durability and availability as
Oracle and MySQL do, yet with additional performance management
features."

-- 
 Simon Riggs   http://www.2ndQuadrant.com/books/
 PostgreSQL Development, 24x7 Support, Training and Services
 




Re: [HACKERS] Sync Rep Design

2011-01-01 Thread Simon Riggs
On Sat, 2011-01-01 at 17:37 +0100, Stefan Kaltenbrunner wrote:
 On 01/01/2011 05:28 PM, Dimitri Fontaine wrote:
  Stefan Kaltenbrunner ste...@kaltenbrunner.cc writes:
  well you keep saying that but to be honest I cannot really even see a
  usecase for me - where only a random one of a set of servers is sync at
  any time and I don't really know which one.
 
  It looks easy enough to get to know which one it is.  Surely the primary
  knows and could update something visible through a system view for
  users?  This has been asked for before and I was thinking there was a
  consensus on this.
 
 well, as Jeff Janes already said - anything that requires the master to
 still exist is not useful for a disaster.

Nobody has suggested that the master needs to still exist after a
disaster.

  Consider the now often
 mentioned 2-sync-standby scenario with one standby in the same location
 and one in a secondary location.
 If you have a disaster (fire, water, explosion, admin fail, ...) at the
 primary location and you have no access to either the master or the
 standby, you will never be sure that the standby at the secondary
 location is actually in sync - it could be, but you will never know if
 you lost that $1B invoice just committed on the master and the closeby
 standby and therefore confirmed to the client...

I've never suggested you configure your systems like that. It would of
course be stupid.

This is not a sensible technical discussion. I'll go back to coding.

-- 
 Simon Riggs   http://www.2ndQuadrant.com/books/
 PostgreSQL Development, 24x7 Support, Training and Services
 




Re: [HACKERS] Sync Rep Design

2011-01-01 Thread Stefan Kaltenbrunner

On 01/01/2011 05:55 PM, Simon Riggs wrote:

On Sat, 2011-01-01 at 16:12 +0100, Stefan Kaltenbrunner wrote:


I still would like to get a statement on why Simon thinks that
the design Heikki and others have proposed


I've explained in huge detail why I think what I think, and have not
avoided any technical issue.

It appears to me there has been substantial confusion over alternatives,
because of a misunderstanding about how synchronisation works. Requiring
confirmation that standbys are in sync is *not* the same thing as them
actually being in sync. Every single proposal made by anybody here on
hackers that supports multiple standby servers suffers from the same
issue: when the primary crashes you need to work out which standby
server is ahead.


aaah that was exactly what I was after - so the problem is that when you 
have a sync standby it will technically always be in front of the 
master (because it needs to fsync/apply/whatever before the master).
In the end the question boils down to what is the bigger problem in 
the case of a lost master:


a) a transaction that was confirmed on the master but might not be on 
any of the surviving sync standbys (or you will never know if it is) - 
this is how I understand the proposal so far
b) a transaction that was not yet confirmed on the master but might have
been applied on the surviving standby before the disaster - this is what I
understand "confirm from all sync standbys" could result in.


Spelled out more clearly like that, it makes me reconsider a bit what I
said before, but I still wonder if ultimately we will have to provide
both modes to satisfy different business requirements (a might provide
the more accurate answer on average, but b might at least provide a way
to identify the wild transaction by looking at additional data)







- The docs should not allege that either setup is preferable to the
other, because there is not now and will never be consensus that this
is in fact true.


I remain hopeful that people will read what I have written and understand
it. Having taken the trouble to do that publicly, my conscience is clear
that I've done the very best to explain things and make it easy for
users to avoid error. If I am prevented from putting sound advice into
the docs, I'll not worry too much.



well, I should think we need to clearly spell out everything that affects
reliability, and if we only support semi-sync for more than 1 standby we
only have that setup :)


You can use sync rep with 1 or more standby servers.

At the end of the day, I can't stop anyone from saying "What an idiot,
he designed something that gave the same durability and availability as
Oracle and MySQL do, yet with additional performance management
features."


ok


Stefan



Re: [HACKERS] ALTER TABLE .. SET SCHEMA lock strength

2011-01-01 Thread Simon Riggs
On Sat, 2011-01-01 at 11:06 -0500, Robert Haas wrote:

 While reviewing the SQL/MED patch, I happened to notice that
 ExecAlterObjectSchemaStmt calls AlterTableNamespace with a lock mode
 argument of AccessExclusiveLock. Does anyone see a reason why
 ShareUpdateExclusiveLock would be insufficient?

It seemed unsafe to me to do that while an object was being accessed,
since it effectively changes the search_path, which is dangerous.

Seems like a good change, if it really is viable.

-- 
 Simon Riggs   http://www.2ndQuadrant.com/books/
 PostgreSQL Development, 24x7 Support, Training and Services
 




Re: [HACKERS] Sync Rep Design

2011-01-01 Thread Simon Riggs
On Sat, 2011-01-01 at 18:13 +0100, Stefan Kaltenbrunner wrote:
 On 01/01/2011 05:55 PM, Simon Riggs wrote:
 
  It appears to me there has been substantial confusion over alternatives,
  because of a misunderstanding about how synchronisation works. Requiring
  confirmation that standbys are in sync is *not* the same thing as them
  actually being in sync. Every single proposal made by anybody here on
  hackers that supports multiple standby servers suffers from the same
  issue: when the primary crashes you need to work out which standby
  server is ahead.
 
 aaah that was exactly what I was after - so the problem is that when you 
 have a sync standby it will technically always be in front of the 
 master (because it needs to fsync/apply/whatever before the master).
 In the end the question boils down to what is the bigger problem in 
 the case of a lost master:

 a) a transaction that was confirmed on the master but might not be on 
 any of the surviving sync standbys (or you will never know if it is) - 
 this is how I understand the proposal so far

No that cannot happen, the current situation is that we will fsync WAL
on the master, then fsync WAL on the standby, then reply to the master.
The standby is never ahead of the master, at any point.

 b) a transaction that was not yet confirmed on the master but might have
 been applied on the surviving standby before the disaster - this is what I
 understand "confirm from all sync standbys" could result in.

Yes, that is described in the docs changes I published.

(a) was discussed, but ruled out, since it would require any crash/immed
shutdown of the master to become a failover, or have some kind of weird
back channel to give the missing data back.

There hasn't been any difference of opinion in this area, that I am
aware of. All proposals have offered (b).

-- 
 Simon Riggs   http://www.2ndQuadrant.com/books/
 PostgreSQL Development, 24x7 Support, Training and Services
 




Re: [HACKERS] Sync Rep Design

2011-01-01 Thread Simon Riggs
On Sat, 2011-01-01 at 17:28 +0100, Dimitri Fontaine wrote:
 something visible through a system view for
 users?  This has been asked for before and I was thinking there was a
 consensus on this. 

Yes, it will be there.

-- 
 Simon Riggs   http://www.2ndQuadrant.com/books/
 PostgreSQL Development, 24x7 Support, Training and Services
 




Re: [HACKERS] Sync Rep Design

2011-01-01 Thread Stefan Kaltenbrunner

On 01/01/2011 06:29 PM, Simon Riggs wrote:

On Sat, 2011-01-01 at 18:13 +0100, Stefan Kaltenbrunner wrote:

On 01/01/2011 05:55 PM, Simon Riggs wrote:


It appears to me there has been substantial confusion over alternatives,
because of a misunderstanding about how synchronisation works. Requiring
confirmation that standbys are in sync is *not* the same thing as them
actually being in sync. Every single proposal made by anybody here on
hackers that supports multiple standby servers suffers from the same
issue: when the primary crashes you need to work out which standby
server is ahead.


aaah that was exactly what I was after - so the problem is that when you
have a sync standby it will technically always be in front of the
master (because it needs to fsync/apply/whatever before the master).
In the end the question boils down to what is the bigger problem in
the case of a lost master:



a) a transaction that was confirmed on the master but might not be on
any of the surviving sync standbys (or you will never know if it is) -
this is how I understand the proposal so far


No that cannot happen, the current situation is that we will fsync WAL
on the master, then fsync WAL on the standby, then reply to the master.
The standby is never ahead of the master, at any point.


hmm, maybe my "surviving standbys" (the case I'm wondering about is
whole-datacenter failures, which might take out more than just the
master) was not clear - consider three boxes, one master and two
standbys, and semisync replication (i.e. any one of the standbys is
enough to reply).


1. master fsyncs WAL
2. standby #1 fsyncs and replies
3. master confirms commit
4. disaster strikes and destroys master and standby #1 while standby #2
never had time to apply the change (IO/CPU load, latency, whatever)
5. now you have a sync standby that is missing something that was
committed on the master and confirmed to the client and no way to verify
that this thing happened (same problem with more than two standbys - as
long as you lose ONE standby and the master at the same time you will
never be sure)




what is it that I'm missing here?
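
For concreteness, that timeline condenses into a minimal simulation
(the names and structure are illustrative only, not PostgreSQL code):

    # One master, two standbys, "any one ACK is enough" (semi-sync).
    master   = {"txn-1"}          # 1. master fsyncs WAL for txn-1
    standby1 = {"txn-1"}          # 2. standby #1 fsyncs and replies
    standby2 = set()              #    standby #2 lags (load, latency)
    confirmed_to_client = "txn-1" # 3. master confirms the commit

    # 4. disaster destroys the master and standby #1
    surviving = standby2

    # 5. the surviving "sync" standby lacks a transaction that the
    #    (now destroyed) master had confirmed to the client.
    assert confirmed_to_client in master
    assert confirmed_to_client not in surviving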





b) a transaction that was not yet confirmed on the master but might have
been applied on the surviving standby before the disaster - this is what I
understand "confirm from all sync standbys" could result in.


Yes, that is described in the docs changes I published.

(a) was discussed, but ruled out, since it would require any crash/immed
shutdown of the master to become a failover, or have some kind of weird
back channel to give the missing data back.

There hasn't been any difference of opinion in this area, that I am
aware of. All proposals have offered (b).


hmm, I'm confused now - any chance you mixed up a & b here? Because in a)
no backchannel is needed, as the standby could just fetch the
missing data from the master.
If that is the case I agree that it would be hard to get the replication
up again after a crash of the master with a standby that is ahead, but in
the end it would be a business decision (as in conflict resolution) on
what to do - take the ahead standby's data and use that, or destroy the
old standby and recreate it.




Stefan



Re: [HACKERS] ALTER TABLE .. SET SCHEMA lock strength

2011-01-01 Thread Tom Lane
Simon Riggs si...@2ndquadrant.com writes:
 On Sat, 2011-01-01 at 11:06 -0500, Robert Haas wrote:
 While reviewing the SQL/MED patch, I happened to notice that
 ExecAlterObjectSchemaStmt calls AlterTableNamespace with a lock mode
 argument of AccessExclusiveLock. Does anyone see a reason why
 ShareUpdateExclusiveLock would be insufficient?

 It seemed unsafe to me to do that while an object was being accessed,
 since it effectively changes the search_path, which is dangerous.

ALTER RENAME and ALTER SET SCHEMA are both in the nature of changing the
object's identity.  Consider the fairly typical use-case where you are
renaming an old instance out of the way and renaming another one into
the same schema/name.  Do you really want that to be a low-lock
operation?  I find it really hard to envision a use case where it'd be
smart to allow some concurrent operations to continue using the old
instance while others start using the new one.

regards, tom lane



Re: [HACKERS] Sync Rep Design

2011-01-01 Thread Simon Riggs
On Sat, 2011-01-01 at 18:49 +0100, Stefan Kaltenbrunner wrote:
 hmm, maybe my "surviving standbys" (the case I'm wondering about is
 whole-datacenter failures, which might take out more than just the
 master) was not clear - consider three boxes, one master and two
 standbys, and semisync replication (i.e. any one of the standbys is
 enough to reply).
 
 1. master fsyncs WAL
 2. standby #1 fsyncs and replies
 3. master confirms commit
 4. disaster strikes and destroys master and standby #1 while standby #2
 never had time to apply the change (IO/CPU load, latency, whatever)
 5. now you have a sync standby that is missing something that was
 committed on the master and confirmed to the client and no way to
 verify that this thing happened (same problem with more than two
 standbys - as long as you lose ONE standby and the master at the same
 time you will never be sure)

This is obvious misconfiguration that anybody with HA experience would
spot immediately. If you have local standbys then you should mark them
as not available for sync rep, as described in the docs I've written.

 what is it that I'm missing here? 

The fact that we've discussed this already and agreed to do 9.1 with
quorum_commit = 1. I proposed making this a parameter; other solutions
were also proposed, but it was considered too complex for this release.

This is a trade-off between availability and data guarantees.

MySQL and Oracle suffer from exactly this problem. DB2 supports only
one master and SQLServer doesn't have sync rep at all.

-- 
 Simon Riggs   http://www.2ndQuadrant.com/books/
 PostgreSQL Development, 24x7 Support, Training and Services
 




Re: [HACKERS] Problems with autovacuum and vacuum

2011-01-01 Thread Filip Rembiałkowski
2010/12/30 JotaComm jota.c...@gmail.com

 Hello,

 Last week I had a serious problem with my PostgreSQL database. My
 autovacuum is OFF, but in September it started to prevent the transaction
 wraparound; however, last week the following message appeared continuously in
 my log:

 WARNING: database "production" must be vacuumed within 4827083 transactions

 HINT: To avoid a database shutdown, execute a full-database VACUUM in
 "production".

 This message appeared for five to six hours; after that, the message
 disappeared from the log. Any idea about what could have happened?



probably another wraparound-forced autovacuum worker did the job, so the
warnings disappeared



 Every day the vacuum is executed on some tables; and on Sundays it's
 executed on all tables. But as the autovacuum has been running since September,
 and it runs for a long time, the vacuum was blocked because autovacuum had
 been running on the same table. How should I proceed in this case?



hmm. a single vacuum process runs for more than 3 months on a table with
10^9 rows?
this is ... less than 128 rows/second, not good.
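
A quick check of that rate - a sketch assuming roughly 90 days of runtime
and a table of about 10^9 rows, both inferred from the figures above:

    rows = 10 ** 9                # inferred table size
    seconds = 90 * 24 * 3600      # ~3 months of runtime
    print(rows / seconds)         # ~128.6 rows/second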

I would rather terminate this old process, and start a VACUUM VERBOSE when
the database is less loaded.


How many INS/UPD/DEL do you have on this table?



PS. When you fix this, enable autovacuum, to avoid more problems...


Re: [HACKERS] Sync Rep Design

2011-01-01 Thread Heikki Linnakangas

On 01.01.2011 19:03, Simon Riggs wrote:

On Sat, 2011-01-01 at 17:37 +0100, Stefan Kaltenbrunner wrote:

On 01/01/2011 05:28 PM, Dimitri Fontaine wrote:

Stefan Kaltenbrunner ste...@kaltenbrunner.cc writes:

well you keep saying that but to be honest I cannot really even see a
usecase for me - where only a random one of a set of servers is sync at
any time and I don't really know which one.


It looks easy enough to get to know which one it is.  Surely the primary
knows and could update something visible through a system view for
users?  This has been asked for before and I was thinking there was a
consensus on this.


well, as Jeff Janes already said - anything that requires the master to
still exist is not useful for a disaster.


Nobody has suggested that the master needs to still exist after a
disaster.


Dimitri just did, see above. I agree it's not very useful.

I don't think there's any other solution to knowing which standby is
ahead than to connect to both standbys and ask how far along each is. I
don't see a problem with that; whatever middleware handles the failover
and STONITH etc. should be able to do that too.
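
A minimal sketch of such a middleware check, using psycopg2 and the
pg_last_xlog_receive_location() function mentioned earlier in the thread
(the connection strings are placeholders, not a recommended setup):

    # Ask each standby how far it has received WAL and pick the one
    # that is furthest ahead. DSNs are illustrative placeholders.
    import psycopg2

    def xlog_location(dsn):
        """Return the standby's receive location as a comparable integer."""
        conn = psycopg2.connect(dsn)
        try:
            cur = conn.cursor()
            cur.execute("SELECT pg_last_xlog_receive_location()")
            loc = cur.fetchone()[0]        # e.g. '0/3000178'
            hi, lo = loc.split("/")
            return (int(hi, 16) << 32) | int(lo, 16)
        finally:
            conn.close()

    standbys = ["host=standby1 dbname=postgres",
                "host=standby2 dbname=postgres"]
    # Promote whichever standby has received the most WAL.
    print("standby to promote:", max(standbys, key=xlog_location))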


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com



Re: [HACKERS] Sync Rep Design

2011-01-01 Thread Heikki Linnakangas

On 31.12.2010 23:18, Hannu Krosing wrote:

On 31.12.2010 13:40, Heikki Linnakangas wrote:

That thread makes no mention of how to specify which standbys are
synchronous and which are not.

The simplest way would be to have separate database users for sync and
async standbys?

That would allow any standby with the right credentials to act as a sync
user, and those who are not eligible are not accepted even if they try to
act as a synchronity (?) provider.


Hmm, access control... We haven't yet discussed what privileges a 
standby needs to become synchronous. Perhaps it needs to be a separate 
privilege that can be granted, in addition to the replication privilege?


Robert's suggestion of using roles instead of server names would
also solve that. With that you would list the roles in
synchronous_standbys, and no-one else could become a synchronous
standby. The downside is that if you want to have two standbys in the
mode where it's enough that either one acknowledges a commit, they would
have to use the same user account.


If we don't adopt Robert's suggestion, do we want to restrict what 
standby name a user can claim, to stop one standby from spoofing another?


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com



Re: [HACKERS] TODO item for pg_ctl and server detection

2011-01-01 Thread Peter Eisentraut
On fre, 2010-12-31 at 17:26 -0500, Bruce Momjian wrote:
 Patch applied, and TODO item removed because patch mostly detects if a
 stale postmaster created the postmaster.pid file.  The TODO was:

Please fix this new compiler warning:

pg_ctl.c:1787: warning: implicit declaration of function ‘time’





Re: [HACKERS] TODO item for pg_ctl and server detection

2011-01-01 Thread Bruce Momjian
Peter Eisentraut wrote:
 On fre, 2010-12-31 at 17:26 -0500, Bruce Momjian wrote:
  Patch applied, and TODO item removed because patch mostly detects if a
  stale postmaster created the postmaster.pid file.  The TODO was:
 
 Please fix this new compiler warning:
 
 pg_ctl.c:1787: warning: implicit declaration of function ‘time’

Thanks, done.

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +



Re: [HACKERS] Anyone for SSDs?

2011-01-01 Thread Bruce Momjian
Robert Treat wrote:
   What _is_ interesting is that Postgres often has sequential and
   random/disk ways of doing things, and by reducing random_page_cost when
   using SSDs, you automatically use more random operations, so in a way
   the Postgres code was already prepared for SSD usage.  Surprisingly, we
   had to change very little.
 
  To add to this very late reply, we basically had random methods to do
  things (in RAM), and sequential/random methods for disk.  By changing
  random_page_cost, we favor doing random things on disk.
 
  The big question is whether there are random things we have never
  implemented on disk that now make sense --- off hand, I can't think of
  any.
 
 
 The idea of us avoiding quicksort when we know we need to spill to disk is

You mean using quicksort from an (SSD) disk vs. tape sorts --- good
point.

 the type of thing that I wonder if it should be investigated, if you figure
 that spill to disk means SSDs, so it's not so much of a performance
 hit. This reminds me of some performance testing we did maybe a year, year
 and a half ago, trying to see how best to get performance by adding some
 SSD's into one of our servers. Basically speed increased as we changed
 things like so:
 put entire $pgdata on sata
 put entire $pgdata on ssd
 put xlogs on ssd, pgdata on sata
 put pgdata and xlogs on sata, put arc on ssd, crank up postgres's memory
 settings
 
 ARC being ZFS's adaptive replacement cache, so basically giving the server a
 second, very large level of memory to work with, and then configuring
 postgres to make use of it. It wasn't terribly obvious to me why this ended
 up outperforming the initial idea of putting everything on ssd, but my
 impression was that the more you could force postgres into making decisions
 as if it was dealing with fast storage rather than slow storage, the better
 off you'd be (and that random_page_cost is not so wholly inclusive enough to
 do this for you).

Yes, I wonder if this requires further investigation.

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +



[HACKERS] SSI SLRU low-level functions first cut

2011-01-01 Thread Kevin Grittner
I've got low-level routines coded for interfacing predicate.c to SLRU
to handle old committed transactions, so that SSI can deal with
situations where a large number of transactions are run during the
lifetime of a single serializable transaction.  I'm not actually
*using* these new functions yet, but that's what I do next.  I would
love it if someone could review this commit and let me know whether
it looks generally sane.
 
http://git.postgresql.org/gitweb?p=users/kgrittn/postgres.git;a=commitdiff;h=00a0bc6c47c8173e82e5927d9b75fe570280860f
 
-Kevin



Re: [HACKERS] ALTER TABLE .. SET SCHEMA lock strength

2011-01-01 Thread Peter Eisentraut
On lör, 2011-01-01 at 13:17 -0500, Tom Lane wrote:
 ALTER RENAME and ALTER SET SCHEMA are both in the nature of changing the
 object's identity.  Consider the fairly typical use-case where you are
 renaming an old instance out of the way and renaming another one into
 the same schema/name.  Do you really want that to be a low-lock
 operation?  I find it really hard to envision a use case where it'd be
 smart to allow some concurrent operations to continue using the old
 instance while others start using the new one.

At least in Unix land, that's a handy property.  And we're frequently
cursing those other operating systems where it doesn't work that way.




Re: [HACKERS] and it's not a bunny rabbit, either

2011-01-01 Thread Peter Eisentraut
On lör, 2011-01-01 at 00:05 -0500, Robert Haas wrote:
 Yeah, and I still believe that.  I'm having difficulty coming up with
 a workable approach, though.

I don't see anything wrong with having 20 or 30 messages of variants of

foo cannot be used on bar

without placeholders.




Re: [HACKERS] and it's not a bunny rabbit, either

2011-01-01 Thread Peter Eisentraut
On lör, 2011-01-01 at 10:00 -0500, Robert Haas wrote:
 Is it any better if we write one string per feature, like this:
 
 constraints cannot be used on %s
 triggers cannot be used on %s
 
 ...where %s is a plural object type (views, foreign tables, etc.).

No, this won't work.




Re: [HACKERS] ALTER TABLE .. SET SCHEMA lock strength

2011-01-01 Thread Robert Haas
On Sat, Jan 1, 2011 at 4:24 PM, Peter Eisentraut pete...@gmx.net wrote:
 On lör, 2011-01-01 at 13:17 -0500, Tom Lane wrote:
 ALTER RENAME and ALTER SET SCHEMA are both in the nature of changing the
 object's identity.  Consider the fairly typical use-case where you are
 renaming an old instance out of the way and renaming another one into
 the same schema/name.  Do you really want that to be a low-lock
 operation?  I find it really hard to envision a use case where it'd be
 smart to allow some concurrent operations to continue using the old
 instance while others start using the new one.

 At least in Unix land, that's a handy property.  And we're frequently
 cursing those other operating systems where it doesn't work that way.

Yeah, exactly.  If someone is renaming an old instance out of the way
and sticking a new one in its place, the LAST thing you want to do is
lock out queries unnecessarily.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] and it's not a bunny rabbit, either

2011-01-01 Thread Robert Haas
On Sat, Jan 1, 2011 at 4:28 PM, Peter Eisentraut pete...@gmx.net wrote:
 On Sat, 2011-01-01 at 00:05 -0500, Robert Haas wrote:
 Yeah, and I still believe that.  I'm having difficulty coming up with
 a workable approach, though.

 I don't see anything wrong with having 20 or 30 messages of variants of

 foo cannot be used on bar

 without placeholders.

Well, that's OK with me.  It seems a little grotty, but manageably so.
Questions:

1. Should we try to include the name of the object?  If so, how?

2. Can we have a variant with an SQL-command-fragment parameter?

%s cannot be used on views
where %s might be CLUSTER, DROP COLUMN, etc.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] Sync Rep Design

2011-01-01 Thread Josh Berkus
On 1/1/11 5:59 AM, Stefan Kaltenbrunner wrote:
 well you keep saying that but to be honest I cannot really even see a
 use case for me - where only a random one of a set of servers is sync
 at any time and I don't really know which one.
 My use cases would all involve 2 sync standbys and 1 or more async ones,
 but the second sync one would be in a different datacenter and I NEED to
 protect against a datacenter failure, which your proposal says I cannot
 do :(

As far as I know, *nobody* has written the bookkeeping code to actually
track which standbys have ack'd.  We need to get single-ack synch
standby merged, tested and working before we add anything as complicated
as "each standby on this list must ack".  That means that it's extremely
unlikely for 9.1 at this point.

Frankly, if Simon hadn't already submitted code, I'd be pushing for
single-standby-only for 9.1, instead of "any one".

Standby in general deals with the A,D,R triangle (Availability,
Durability, Response time).  The "any one" configuration is the A,R
configuration, and the only reason to go out with it for 9.1 is because
it's simpler to implement than the D,R configuration (all standbys must
ack).

 Hmm, access control... We haven't yet discussed what privileges a
 standby needs to become synchronous. Perhaps it needs to be a separate
 privilege that can be granted, in addition to the replication privilege?

No, I don't think so.  An additional priv would just complicate life for
DBAs without providing any real benefit.  You'd be guarding against the
very narrow hypothetical case where there's a server admin with limited
privs on the master, and authorization to create async standbys, but
not the authorization to create a synch standby.  How likely is that to
*ever* happen?

 Robert's suggestion of using the roles instead of server names would
 also solve that. With that you would list the roles in
 synchronous_standbys, and no-one else could become a synchronous
 standby. The downside is that if you want to have two standbys in the
 mode that it's enough that either one acknowledges a commit, they would
 have to use the same user account.

I really don't think that Robert was suggesting that we have
predetermined Roles with magic names like synchronous_standbys (were
you, Robert?).  That would defeat eventually having the feature which
people like Stefan want: the ability to define pools of servers with
custom names to represent various data centers.  It also overloads and
perverts the concept of Roles.

While I quite like the idea of having Roles for replication, synch/async
should be a property (CREATE ROLE seattle_dac NOLOGIN REPLICATION
SYNCHRONOUS*), not a special role.  Also, I'll repeat: I see this as 9.2
work, not 9.1 work.  There's going to be far too much bookkeeping code
to write to make it happen without severely delaying 9.1.

BTW, I don't see *any* of this requiring us to have a
synchronous_standbys = list GUC if we embrace the Roles concept.

--Josh

( here's how I can see Roles working:

1) Create a group role for the synch standbys (NOLOGIN REPLICATION)
2) Create one or more replication roles which are members of that group
role.
3) File-synch those standbys and get them replicating (asynchronously)
4) ALTER ROLE synch_standbys SYNCHRONOUS
5) The postmaster scans the list of Roles for synchronous roles.  For
each synchronous role, there must be one replication ack in order to
commit.

Since the above means that it would be possible to have a replication
connection which was a member of more than one synch group, you can see
that the bookkeeping involved will be substantial.  So, 9.2.
)
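
To make that sequence concrete, here's a rough SQL sketch.  Everything
in it is ordinary role machinery except the SYNCHRONOUS property, which
is hypothetical syntax that doesn't exist today (and the role names are
made up, of course):

  -- 1) group role for the synch standbys
  CREATE ROLE synch_standbys NOLOGIN REPLICATION;
  -- 2) a replication role that is a member of that group
  CREATE ROLE seattle_dac LOGIN REPLICATION PASSWORD 'secret'
      IN ROLE synch_standbys;
  -- 3) file-synch the standby, let it stream asynchronously, then:
  -- 4) flip the whole group to synchronous (hypothetical syntax)
  ALTER ROLE synch_standbys SYNCHRONOUS;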







Re: [HACKERS] Sync Rep Design

2011-01-01 Thread Simon Riggs
On Sat, 2011-01-01 at 21:41 +0200, Heikki Linnakangas wrote:
 On 31.12.2010 23:18, Hannu Krosing wrote:
  On 31.12.2010 13:40, Heikki Linnakangas wrote:
  That thread makes no mention of how to specify which standbys are
  synchronous and which are not.
  The simplest way would be to have separate database users for sync and
  async standbys ?
 
  That would allow any standby with the right credentials to act as a sync
  user, and those who are not eligible are not accepted even if they try
  to act as a synchronity (?) provider.
 
 Hmm, access control... We haven't yet discussed what privileges a 
 standby needs to become synchronous. Perhaps it needs to be a separate 
 privilege that can be granted, in addition to the replication privilege?

Or perhaps we don't need it at all. Why is that essential in this release?

-- 
 Simon Riggs   http://www.2ndQuadrant.com/books/
 PostgreSQL Development, 24x7 Support, Training and Services
 




Re: [HACKERS] Sync Rep Design

2011-01-01 Thread Simon Riggs
On Sat, 2011-01-01 at 14:40 -0800, Josh Berkus wrote:

 Standby in general deals with the A,D,R triangle (Availability,
 Durability, Response time).  Any one configuration is the A,R
 configuration, and the only reason to go out with it for 9.1 is
 because it's simpler to implement than the D,R configuration (all
 standbys must ack).

Nicely put. Not the only reason though...

As I showed earlier, the AR gives you 99.999% availability and the DR
gives you 94% availability, considering a 3 server config. If you add
more servers, the availability of the DR option gets much worse, very
quickly.
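
To put rough numbers on that (a sketch only, assuming independent
servers at 98% availability each): with "all standbys must ack" and a
lock-up on any failure, the cluster is only up when every node is up,
i.e. roughly p^(N+1) for N standbys plus the master:

  SELECT n AS standbys,
         round(0.98 ^ (n + 1), 4) AS dr_availability
  FROM generate_series(1, 4) AS n;
  -- 1 standby: 0.9604; 2: 0.9412; 3: 0.9224; 4: 0.9039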

The performance of AR is much better also, and stays same or better as
cluster size increases. DR choice makes performance degrade as cluster
size increases, since it works at the speed of the slowest node.

The fact that I can get 5 Nines with simpler code makes it even
sweeter.

Quorum commit > 1 can improve the durability guarantee of data, but it
also seems that many people wouldn't understand its implications and
would mis-configure it, to our collective embarrassment.
 
-- 
 Simon Riggs   http://www.2ndQuadrant.com/books/
 PostgreSQL Development, 24x7 Support, Training and Services
 




Re: [HACKERS] Sync Rep Design

2011-01-01 Thread Aidan Van Dyk
On Sat, Jan 1, 2011 at 6:08 PM, Simon Riggs si...@2ndquadrant.com wrote:
 On Sat, 2011-01-01 at 14:40 -0800, Josh Berkus wrote:

 Standby in general deals with the A,D,R triangle (Availability,
 Durability, Response time).  Any one configuration is the A,R
 configuration, and the only reason to go out with it for 9.1 is
 because it's simpler to implement than the D,R configuration (all
 standbys must ack).

 Nicely put. Not the only reason though...

 As I showed earlier, the AR gives you 99.999% availability and the DR
 gives you 94% availability, considering a 3 server config. If you add
 more servers, the availability of the DR option gets much worse, very
 quickly.

 The performance of AR is much better also, and stays same or better as
 cluster size increases. DR choice makes performance degrade as cluster
 size increases, since it works at the speed of the slowest node.

I'm all for getting first-past-post in for 9.1.  Otherwise I fear
we'll get nothing.

Stephen and I will only be able to use 1 sync slave, the DR-site
one.  That's fine.  I can live with it, and make my local slave be
async.  Or replicate the FS/block under WAL.  I can monitor the heck
out of it, and unless it goes down, it should easily be able to keep
up with the remote sync one behind a slower WAN link.

And I think both Stephen and I understand your availability math.
We're not disputing that 1st past post gives both better query
availability and cluster-scale performance.

But when the primary datacenter servers are dust in the crater (or
boats in the flood, or ash in the fire), I either keep my job, or I
don't.  And that depends on whether there is a chance I (my database
system) confirmed a transaction that I can't recover.

So sync rep with 1st past post already makes my job easier.  I'll take
it over nothing ;-)

a.


-- 
Aidan Van Dyk                                             Create like a god,
ai...@highrise.ca                                       command like a king,
http://www.highrise.ca/                                   work like a slave.



Re: [HACKERS] Sync Rep Design

2011-01-01 Thread Robert Haas
On Sat, Jan 1, 2011 at 6:54 AM, Simon Riggs si...@2ndquadrant.com wrote:
 Yes, working out the math is a good idea. Things are much clearer if we
 do that.

 Let's assume we have 98% availability on any single server.

 1. Having one primary and 2 standbys, either of which can acknowledge,
 and we never lock up if both standbys fail, then we will have 99.9992%
 server availability. (So PostgreSQL hits 5 Nines, with data
 guarantees). (Maximised availability)

I don't agree with this math.  If the master and one standby fail
simultaneously, the other standby is useless, because it may or may
not be caught up with the master.  You know that the last transaction
acknowledged as committed by the master is on at least one of the two
standbys, but you don't know which one, and so you can't safely
promote the surviving standby.

(If you are working in an environment where promoting the surviving
standby when it's possibly not caught up is OK, then you don't need
sync rep in the first place: you can just run async rep and get much
better performance.)

So the availability is 98% (you are up when the master is up) + 98%^2
* 2% (you are up when both slaves are up and the master is down) =
99.92%.  If you had only a single standby, then you could be certain
that any commit acknowledged by the master was on that standby.  Thus
your availability would be 98% (up when master is up) + 98% * 2% (you
are up when the master is down and the slave is up) = 99.96%.
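
That arithmetic is easy to sanity-check in SQL, under the same
assumptions (independent failures, 98% availability per server):

  SELECT 0.98 + 0.98 ^ 2 * 0.02 AS any_of_two_standbys,  -- 0.999208
         0.98 + 0.98 * 0.02     AS single_standby;       -- 0.9996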

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] SQL/MED - core functionality

2011-01-01 Thread Robert Haas
On Mon, Dec 27, 2010 at 10:16 PM, Robert Haas robertmh...@gmail.com wrote:
 On Sat, Dec 25, 2010 at 11:52 PM, Robert Haas robertmh...@gmail.com wrote:
 I'm working on getting a first chunk of this committed.

 OK, here's the patch.

I've now committed a version of this with a bunch of further
revisions, corrections, and cleanup.  It looks to me as though this
patch was written based on the 9.0 code and not thoroughly updated for
some of the 9.1 changes, but I think I cleaned most of that up.  With
a patch of this size, I am sure there are a few things I overlooked,
so please point 'em out and I'll try to fix them promptly.

Hanada-san, can you rebase the fdw_scan patch over what I committed
and post an updated version ASAP?  It'd be better for Heikki or Tom to
work on that part of this than me, since they have a better
understanding of the executor than I do, but I'm sure that they will
not want to work from the previously posted patches as the changes I
made are fairly extensive.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



[HACKERS] management of large patches

2011-01-01 Thread Robert Haas
We're coming the end of the 9.1 development cycle, and I think that
there is a serious danger of insufficient bandwidth to handle the
large patches we have outstanding.  For my part, I am hoping to find
the bandwidth to two, MAYBE three major commits between now and the
end of 9.1CF4, but I am not positive that I will be able to find even
that much time, and the number of major patches vying for attention is
considerably greater than that.  Quick estimate:

- SQL/MED - probably needs ~3 large commits: foreign table scan, file
FDW, postgresql FDW, plus whatever else gets submitted in the next two
weeks
- MERGE
- checkpoint improvements
- SE-Linux integration
- extensions - may need 2 or more commits
- true serializability - not entirely sure of the status of this
- writeable CTEs (Tom has indicated he will look at this)
- PL/python patches (Peter has indicated he will look at this)
- snapshot taking inconsistencies (Tom has indicated he will look at this)
- per-column collation (Peter)
- synchronous replication (Simon, and, given the level of interest in
and complexity of this feature, probably others as well)

I guess my basic question is - is it realistic to think that we're
going to get all of the above done in the next 45 days?  Is there
anything we can do make the process more efficient?  If a few more
large patches drop into the queue in the next two weeks, will we have
bandwidth for those as well?  If we don't think we can get everything
done in the time available, what's the best way to handle that?  I
would hate to discourage people from continuing to hack away, but I
think it would be even worse to give people the impression that
there's a chance of getting work reviewed and committed if there
really isn't.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] pg_dump --split patch

2011-01-01 Thread Peter Eisentraut
On Tue, 2010-12-28 at 12:33 -0500, Tom Lane wrote:
 (2) randomly different ordering of rows within a table.  Your patch
 didn't address that, unless I misunderstood quite a bit.

The issue here is just comparing schemas, so that part is a separate
problem for someone else.

 I think the correct fix for (1) is to improve pg_dump's method for
 sorting objects.  It's not that bad now, but it does have issues with
 random ordering of similarly-named objects.  IIRC Peter Eisentraut
 proposed something for this last winter but it seemed a mite too ugly,
 and he got beaten down to just this:
 
 commit 1acc06a1f4ae752793d2199d8d462a6708c8acc2
 Author: Peter Eisentraut pete...@gmx.net
 Date:   Mon Feb 15 19:59:47 2010 +
 
 When sorting functions in pg_dump, break ties (same name) by
 number of arguments

Yes, that was addressing the same underlying problem.  Frankly, I have
been thinking about split files a lot, both before and since then.  If
the files were appropriately named, it would remove a lot of problems
compared to diffing one big dump file, even a perfectly sorted one.





Re: [HACKERS] pg_dump --split patch

2011-01-01 Thread Peter Eisentraut
On Tue, 2010-12-28 at 20:51 -0500, Andrew Dunstan wrote:
 try:
 
   diff -F '^CREATE' ...

This works about 67% of the time and still doesn't actually tell you at
a glance what changed.  It will only tell you which object the change
you are currently looking at probably belongs to.

