Re: [HACKERS] beta3 the open items list

2010-06-20 Thread Andres Freund
On Saturday 19 June 2010 18:05:34 Joshua D. Drake wrote:
 On Sat, 2010-06-19 at 09:43 -0400, Robert Haas wrote:
  4. Streaming Replication needs to detect death of master.  We need
  some sort of keep-alive, here.  Whether it's at the TCP level (as
  advocated by Tom Lane and others) or at the protocol level (as
  advocated by Greg Stark) is something that we have yet to decide; once
  it's decided, someone will need to do it...
 
 TCP involves unknowns, such as firewalls, VPN routers and SSH tunnels. I
 humbly suggest we *not* be pedantic and implement something practical
 and less prone to variables outside the control of Pg.
 
 Sincerely,
 Joshua D. Drake

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] beta3 the open items list

2010-06-20 Thread Florian Pflug
On Jun 20, 2010, at 7:18 , Tom Lane wrote:
 Florian Pflug f...@phlo.org writes:
 On Jun 19, 2010, at 21:13 , Tom Lane wrote:
 This is nonsense --- the slave's kernel *will* eventually notice that
 the TCP connection is dead, and tell walreceiver so.  I don't doubt
 that the standard TCP timeout is longer than people want to wait for
 that, but claiming that it will never happen is simply wrong.
 
 No, Robert is correct AFAIK. If you're *waiting* for data, TCP
 generates no traffic (except with keepalive enabled).
 
 Mph.  I was thinking that keepalive was on by default with a very long
 interval, but I see this isn't so.  However, if we enable keepalive,
 then it's irrelevant to the point anyway.  Nobody's produced any
 evidence that keepalive is an unsuitable solution.

Yeah, I agree. Just enabling keepalive should suffice for 9.0. 

BTW, the postmaster already enables keepalive on incoming connections in 
StreamConnection() - presumably to prevent crashed clients from occupying a 
backend process forever. So there's even a clear precedent for doing so, and 
proof that it doesn't cause any harm.
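
For reference, the effect of that StreamConnection() setsockopt call can be sketched in a few lines (an illustrative sketch in Python, not the actual C code in src/backend/libpq/pqcomm.c):

```python
import socket

# Enable TCP keepalive on a connection, analogous to what PostgreSQL's
# StreamConnection() does for each incoming client socket.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)

# The option is now set; the kernel will probe an otherwise idle
# connection and report it dead if the probes go unanswered.
print(sock.getsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE) != 0)
sock.close()
```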

best regards,
Florian Pflug




[HACKERS] Small FSM is too large

2010-06-20 Thread Simon Riggs

I notice that if I vacuum a 1 row table I get a FSM that is 24576 bytes
in size, or 3 database blocks.

Why is it not 1 block, or better still 0 blocks for such a small table?

-- 
 Simon Riggs   www.2ndQuadrant.com




Re: [HACKERS] to enable O_DIRECT within postgresql

2010-06-20 Thread Daniel Ng
Greg: Thank you very much for your insightful comments on the performance of

direct io applied to postgres! That inspired me a lot.

Tom: thank you for the reference to man page!

On Fri, Jun 18, 2010 at 2:02 AM, Greg Smith g...@2ndquadrant.com wrote:

 Daniel Ng wrote:

 I am trying to enable the direct IO for the disk-resident
 hash partitions of hashjoin in postgresql.


 As Tom already mentioned this isn't working because of alignment issues.
  I'm not sure what you expect to achieve though.  You should be warned that
 other than the WAL, every experiment I've ever seen that tries to add more
 direct I/O to the database has failed to improve anything; the result is
 either barely noticeable or a major performance drop.  This is
 particularly futile if you're doing your research on Linux/ext3, where even
 if your code works and delivers a speedup, no one will trust it enough to ever
 merge and deploy it, due to the generally poor quality of that area of the
 kernel so far.

 This particular area is magnetic for drawing developer attention as it
 seems like there's a big win just under the surface if things were improved
 a bit.  There isn't.  On operating systems like Solaris, where it's possible
 to prototype here by using mount options to silently convert parts of the
 database to direct I/O, experiments in that area have all been
 disappointing.  One of the presentations from Jignesh Shah at Sun covered
 his experiments in this area; I can't seem to find it at the moment, but I
 remember the results were not positive in any way.

 --
 Greg Smith  2ndQuadrant US  Baltimore, MD
 PostgreSQL Training, Services and Support
 g...@2ndquadrant.com   www.2ndQuadrant.us




Re: [HACKERS] beta3 the open items list

2010-06-20 Thread Kevin Grittner
Florian Pflug  wrote:
 On Jun 20, 2010, at 7:18 , Tom Lane wrote:
 
 I was thinking that keepalive was on by default with a very
 long interval, but I see this isn't so. However, if we enable
 keepalive, then it's irrelevant to the point anyway. Nobody's
 produced any evidence that keepalive is an unsuitable solution.

 Yeah, I agree. Just enabling keepalive should suffice for 9.0.
 
+1, with configurable timeout; otherwise people will often feel they
need to kill the receiver process to get it to attempt reconnect or
archive search, anyway.  Two hours is a long time to block
replication based on a broken connection before attempting to move
on.
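
The configurable timeout maps directly onto the per-socket keepalive knobs. As a sketch (Python, Linux-specific option names; the 60/10/5 values are made-up examples, not proposed defaults):

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
# Seconds of idleness before the first keepalive probe is sent.
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 60)
# Seconds between subsequent probes.
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 10)
# Number of unanswered probes before the connection is declared dead.
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 5)

# Worst-case detection time with these settings: 60 + 10 * 5 = 110 seconds,
# instead of the roughly two-hour system default.
sock.close()
```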
 
-Kevin



Re: [HACKERS] extensible enum types

2010-06-20 Thread Peter Geoghegan
 Ahem. That is what a natural key is for :)

Well, they have their own drawbacks that don't make them particularly
appealing to use with lookup tables to ape enums. How many lookup
tables have you seen in the wild with a natural key?

People sometimes represent things like US states as enums. This is
probably a mistake, because you cannot control or predict if there'll
be a new US state, unlikely though that may be. You *can* control, for
example, what types of payment your application can deal with, and
you'll probably have to hardcode differences in dealing with each
inside your application, which makes enums a good choice. In my
earlier example, in addition to 'cash', there is a value for
payment_type of 'credit_card'. There is a separate column in the
payments table that references a credit_cards table, because credit
cards are considered transitory. A check constraint enforces that
credit_cards_id is null or not null as appropriate.

I don't like the idea of having values in a table that aren't so much
data as an integral part of your application/database. I think it's
wrong-headed. That's why I am not in favour of your enums as a lookup
table wrapper suggestion.

-- 
Regards,
Peter Geoghegan



Re: [HACKERS] extensible enum types

2010-06-20 Thread Kevin Grittner
Peter Geoghegan  wrote:
 
 How many lookup tables have you seen in the wild with a natural
 key?
 
Me?  Personally?  A few hundred.

 People sometimes represent things like US states as enums. This is
 probably a mistake, because you cannot control or predict if
 there'll be a new US state, unlikely though that may be.
 
More importantly, you're likely to need to associate properties with
the state.  Sales tax info, maybe a sales manager, etc.  A state
table can be a handy place to store things like that.
 
 I don't like the idea of having values in a table that aren't so
 much data as an integral part of your application/database.
 
Yep, exactly why natural keys should be used when possible.
 
-Kevin




Re: [HACKERS] beta3 the open items list

2010-06-20 Thread Tom Lane
Kevin Grittner kevin.gritt...@wicourts.gov writes:
 Florian Pflug  wrote:
 Yeah, I agree. Just enabling keepalive should suffice for 9.0.
 
 +1, with configurable timeout;

Right, of course.  That's already in the pending patch isn't it?

regards, tom lane



Re: [HACKERS] beta3 the open items list

2010-06-20 Thread Joshua D. Drake
On Sun, 2010-06-20 at 11:36 -0400, Tom Lane wrote:
 Kevin Grittner kevin.gritt...@wicourts.gov writes:
  Florian Pflug  wrote:
  Yeah, I agree. Just enabling keepalive should suffice for 9.0.
  
  +1, with configurable timeout;
 
 Right, of course.  That's already in the pending patch isn't it?

Can someone tell me what we are going to do about firewalls that impose
their own rules outside of the control of the DBA?

I know that keepalive *should* work; however, I also know that regardless
of keepalive I often have to restart sessions, etc. There are
environments that are outside the control of the user.

Perhaps this has already been solved and I don't know about it. Does the
master-slave relationship have a built-in ping mechanism that is
outside of the TCP protocol?

Sincerely,

Joshua D. Drake


-- 
PostgreSQL.org Major Contributor
Command Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579
Consulting, Training, Support, Custom Development, Engineering




[HACKERS] stats collector connection refused on recv of test message

2010-06-20 Thread Steve Singer


On one of my machines I get

LOG:  could not receive test message on socket for statistics collector: 
Connection refused

on startup.  I noticed this testing 9.0 but when I went back to check I'm 
now getting it on 8.3 as well, disabling all of my iptables rules doesn't 
help.


I've done some debugging and the recv() call for reading the test message in 
pgstat.c is returning -1 with errno set to 111 (connection refused) as the 
log message indicates.   The previous calls on pgStatSock all seemed to 
work fine (including the send and select).


If I modify pgstat.c so that it uses pgStatSock for the bind() and receive 
but a second socket structure for the connect() and send(), everything seems 
to work fine (I modified the send calls for both the 'test' message and 
the real stats messages).


Someone else recently reported this error on -admin here 
http://archives.postgresql.org/pgsql-admin/2010-04/msg00109.php but the 
thread sort of stopped.


Is using a single UDP socket structure instance for sending a message to 
yourself 'proper'? (it looks like we've been doing this in pgstat.c for 
many years without issues reported).
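
The single-socket pattern in question boils down to just a few lines (an illustrative Python sketch, not the pgstat.c code itself):

```python
import select
import socket

# Bind a UDP socket to an ephemeral loopback port, connect it to its own
# address, and send a test message to ourselves -- the same single-socket
# pattern pgstat.c uses for the stats collector's test message.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("127.0.0.1", 0))
sock.connect(sock.getsockname())
sock.send(b"TEST")

# Wait briefly for the message to loop back, then read it.
readable, _, _ = select.select([sock], [], [], 2.0)
if readable:
    print(sock.recv(16))  # expected: b'TEST'
else:
    print("test message never arrived -- the symptom described above")
sock.close()
```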


This machine is a 32bit powerpc running Debian linux with kernel 2.6.22 and 
glibc 2.7-18 installed.  I'm wondering if something was changed in the linux 
kernel to break this.


Steve






Re: [HACKERS] extensible enum types

2010-06-20 Thread Peter Geoghegan
 People sometimes represent things like US states as enums. This is
 probably a mistake, because you cannot control or predict if
 there'll be a new US state, unlikely though that may be.

 More importantly, you're likely to need to associate properties with
 the state.  Sales tax info, maybe a sales manager, etc.  A state
 table can be a handy place to store things like that.

That's probably true, but if there was any question of needing to
associate such values with US states, it ought to be perfectly obvious
to everyone that enums are totally inappropriate. If that wasn't the
case, then their use is only highly questionable, at least IMHO. What
you're describing isn't really a lookup table as I understand the
term. It's just a table. Lookup tables typically have things in them
like the various possible states of another table's tuples. In my
experience, lookup tables generally have two columns, an integer PK
and a description/state.

 I don't like the idea of having values in a table that aren't so
 much data as an integral part of your application/database.

 Yep, exactly why natural keys should be used when possible.

The "not having to remember lookup-value PKs" point I made was very
much ancillary to my main point. Ideally, if you restore a schema-only
dump of your database, you shouldn't be missing anything that is
schema. Things like the possible states of a table's tuples are often
schema, not data, and should be treated as such.

-- 
Regards,
Peter Geoghegan



Re: [HACKERS] stats collector connection refused on recv of test message

2010-06-20 Thread Tom Lane
Steve Singer ssinger...@sympatico.ca writes:
 Is using a single UDP socket structure instance for sending a message to 
 yourself 'proper'? (it looks like we've been doing this in pgstat.c for 
 many years without issues reported).

Why wouldn't it be?

 This machine is a 32bit powerpc running Debian linux with kernel 2.6.22 and 
 glibc 2.7-18 installed.  I'm wondering if something was changed in the linux 
 kernel to break this.

Sounds like it.  File a debian bug.

regards, tom lane



Re: [HACKERS] Small FSM is too large

2010-06-20 Thread Heikki Linnakangas

On 20/06/10 13:56, Simon Riggs wrote:

I notice that if I vacuum a 1 row table I get a FSM that is 24576 bytes
in size, or 3 database blocks.

Why is it not 1 block, or better still 0 blocks for such a small table?


It was just less code to write and test that way. The FSM tree is always 
constant height, three levels, to keep things simple.
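
For illustration, the observed size follows directly from that constant height (assuming the default 8 kB block size):

```python
BLCKSZ = 8192        # default PostgreSQL block size in bytes
FSM_LEVELS = 3       # the FSM tree is always three levels tall

min_fsm_size = FSM_LEVELS * BLCKSZ
print(min_fsm_size)  # 24576 -- the 3 blocks Simon observed
```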


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com



Re: [HACKERS] beta3 the open items list

2010-06-20 Thread Kevin Grittner
Joshua D. Drake  wrote:
 
 Can someone tell me what we are going to do about firewalls that
 impose their own rules outside of the control of the DBA?
 
Has anyone actually seen a firewall configured for something so
stupid as to allow *almost* all the various packets involved in using
a TCP connection, but which suppressed just keepalive packets?  That
seems to be what you're suggesting is the risk; it's an outlandish
enough suggestion that I think the burden of proof is on you to show
that it happens often enough to make this a worthless change.
 
-Kevin



Re: [HACKERS] beta3 the open items list

2010-06-20 Thread Kenneth Marshall
On Sun, Jun 20, 2010 at 03:01:04PM -0500, Kevin Grittner wrote:
 Joshua D. Drake  wrote:
  
  Can someone tell me what we are going to do about firewalls that
  impose their own rules outside of the control of the DBA?
  
 Has anyone actually seen a firewall configured for something so
 stupid as to allow *almost* all the various packets involved in using
 a TCP connection, but which suppressed just keepalive packets?  That
 seems to be what you're suggesting is the risk; it's an outlandish
 enough suggestion that I think the burden of proof is on you to show
 that it happens often enough to make this a worthless change.
  
 -Kevin
 

I have seen this sort of behavior, but in every case it has been
the result of a myopic view of firewall/iptables solutions to
perceived attacks. While I do agree that having a heartbeat
within the replication process is worthwhile, it should definitely
be 9.1 material at best. For 9.0, such ill-behaved environments
will need much more interaction by the DBA, with monitoring and
triage of problems as they arise.

Regards,
Ken

P.S. My favorite example of odd behavior was preemptively dropping
TCP packets in one direction only at a single port. Many, many
odd things happen when the kernel does not know that the packet
would never make it to its destination. Services would sometimes
run for weeks without a problem, depending on when the port ended
up being used, invariably at night or on the weekend.



Re: [HACKERS] beta3 the open items list

2010-06-20 Thread Robert Haas
On Sun, Jun 20, 2010 at 11:36 AM, Tom Lane t...@sss.pgh.pa.us wrote:
 Kevin Grittner kevin.gritt...@wicourts.gov writes:
 Florian Pflug  wrote:
 Yeah, I agree. Just enabling keepalive should suffice for 9.0.

 +1, with configurable timeout;

 Right, of course.  That's already in the pending patch isn't it?

Is this sarcasm, or is there a pending patch I'm not aware of?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company



Re: [HACKERS] About tapes

2010-06-20 Thread Robert Haas
On Sat, Jun 19, 2010 at 4:57 AM, mac_man2...@hotmail.it
mac_man2...@hotmail.it wrote:
 Tom, Robert,
 thank you.

 Now it is clearer how space on tapes is recycled.

 I tried to follow Robert's example but storing one tape per separate file.
 Read in the first block of each run (hosted by separate tapes and so by
 separate files) and output them into extra storage, wherever this extra
 storage is.
 Again, those first input blocks are now garbage: is it correct?

Yes.

 In this case, what happens when trying to recycle those garbage blocks by
 hosting the result of merging the second block of each run?

You just overwrite them with the new data.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company



Re: [HACKERS] beta3 the open items list

2010-06-20 Thread Tom Lane
Robert Haas robertmh...@gmail.com writes:
 On Sun, Jun 20, 2010 at 11:36 AM, Tom Lane t...@sss.pgh.pa.us wrote:
 Right, of course.  That's already in the pending patch isn't it?

 Is this sarcasm, or is there a pending patch I'm not aware of?

https://commitfest.postgresql.org/action/patch_view?id=281

regards, tom lane



Re: [HACKERS] beta3 the open items list

2010-06-20 Thread Florian Pflug
On Jun 20, 2010, at 22:01 , Kevin Grittner wrote:
 Joshua D. Drake  wrote:
 
 Can someone tell me what we are going to do about firewalls that
 impose their own rules outside of the control of the DBA?
 
 Has anyone actually seen a firewall configured for something so
 stupid as to allow *almost* all the various packets involved in using
 a TCP connection, but which suppressed just keepalive packets?  That
 seems to be what you're suggesting is the risk; it's an outlandish
 enough suggestion that I think the burden of proof is on you to show
 that it happens often enough to make this a worthless change.

Yeah, especially since there is no such thing as a special keepalive packet 
in TCP. Keepalive simply sends packets with zero bytes of payload every once in 
a while if the connection is otherwise inactive. If those aren't acknowledged 
(like every other packet would be) by the peer, the connection is assumed to be 
broken. On a reasonably active connection, keepalive neither causes additional 
transmissions, nor altered transmissions.

Keepalive is therefore extremely unlikely to break things - in the very worst 
case, a (really, really stupid) firewall might decide to drop packets with zero 
bytes of payload, causing inactive connections to abort after a while. AFAIK 
walreceiver will simply reconnect in this case. 

Plus, the postmaster enables keepalive on all incoming connections *already*, 
so any problems ought to have caused bug reports about dropped client 
connections.

best regards,
Florian Pflug




Re: [HACKERS] beta3 the open items list

2010-06-20 Thread Robert Haas
On Sun, Jun 20, 2010 at 5:32 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Robert Haas robertmh...@gmail.com writes:
 On Sun, Jun 20, 2010 at 11:36 AM, Tom Lane t...@sss.pgh.pa.us wrote:
 Right, of course.  That's already in the pending patch isn't it?

 Is this sarcasm, or is there a pending patch I'm not aware of?

 https://commitfest.postgresql.org/action/patch_view?id=281

+1 for applying something along these lines, but we'll also need to
update walreceiver to actually use one or more of these new
parameters.

On a quick read, I think I see a problem with this: if a parameter is
specified with a non-zero value and there is no OS support available
for that parameter, it's an error.  Presumably, for our purposes here,
we'd prefer to simply ignore any parameters for which OS support is
not available.  Given the nature of these parameters, one might argue
that's a more useful behavior in general.

Also, what about Windows?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company



Re: [HACKERS] beta3 the open items list

2010-06-20 Thread Tom Lane
Robert Haas robertmh...@gmail.com writes:
 On Sun, Jun 20, 2010 at 5:32 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 https://commitfest.postgresql.org/action/patch_view?id=281

 +1 for applying something along these lines, but we'll also need to
 update walreceiver to actually use one or more of these new
 parameters.

Right, but the libpq-level support has to come first.

 On a quick read, I think I see a problem with this: if a parameter is
 specified with a non-zero value and there is no OS support available
 for that parameter, it's an error.  Presumably, for our purposes here,
 we'd prefer to simply ignore any parameters for which OS support is
 not available.  Given the nature of these parameters, one might argue
 that's a more useful behavior in general.

 Also, what about Windows?

Well, of course that patch hasn't been reviewed yet ... but shouldn't we
just be copying the existing server-side behavior, as to both points?

regards, tom lane



Re: [HACKERS] beta3 the open items list

2010-06-20 Thread Greg Stark
On Sun, Jun 20, 2010 at 10:41 PM, Florian Pflug f...@phlo.org wrote:
 Yeah, especially since there is no such thing as a special keepalive packet 
 in TCP. Keepalive simply sends packets with zero bytes of payload every once 
 in a while if the connection is otherwise inactive. If those aren't 
 acknowledged (like every other packet would be) by the peer, the connection 
 is assumed to be broken. On a reasonably active connection, keepalive neither 
 causes additional transmissions, nor altered transmissions.

Actually, keep-alive packets contain one byte of data, which is a
duplicate of the last previously ACKed byte.


 Keepalive is therefore extremely unlikely to break things - in the very worst 
 case, a (really, really stupid) firewall might decide to drop packets with 
 zero bytes of payload, causing inactive connections to abort after a while. 
 AFAIK walreceiver will simply reconnect in this case.

Stateful firewalls' whole raison d'être is to block packets which
aren't consistent with the current TCP state -- such as packets with a
sequence number earlier than the last ACKed sequence number.
Keepalives do in fact violate the basic TCP spec, so it wouldn't be
entirely crazy to block them. Of course, a firewall that blocked them
would be pretty criminally stupid, given how ubiquitous they are.

  Plus, the postmaster enables keepalive on all incoming connections
*already*, so any problems ought to have caused bugreports about
dropped client connections.


Really? Since when? I thought there was some discussion about this
about a year ago and I made it very clear this had to be an optional
feature which defaulted to off.

Keepalives introduce spurious disconnections in working TCP
connections that have transient outages which is basic TCP
functionality that's supposed to work. There are cases where that's
what you want but it isn't the kind of thing that should be on by
default, let alone on unconditionally.


-- 
greg



Re: [HACKERS] beta3 the open items list

2010-06-20 Thread Kevin Grittner
Greg Stark  wrote:
 
 Keepalives introduce spurious disconnections in working TCP
 connections that have transient outages
 
It's been a while since I read up on this, so perhaps my memory has
distorted the facts over time, but I thought that under TCP, if one
side sends a packet which isn't ack'd after a (configurable) number
of tries with certain (configurable) timings, the connection would be
considered broken and an error returned regardless of keepalive
settings.  I thought keepalive only generated a trickle of small
packets during idle time so that broken connections could be detected
on the side of a connection which was waiting to receive data before
doing something.  That doesn't sound consistent with your
characterization, though, since if my recollection is right, one
could just as easily say that any write to a TCP socket by the
application can also cause spurious disconnections in working TCP
connections that have transient outages.
 
I know that with a two minute keepalive timeout, I can unplug a
machine from one switch port and plug it in somewhere else and the
networking hardware sorts things out fast enough that the transient
network outage doesn't break the TCP connection, whether the
application is sending data or it is quiescent and the OS is sending
keepalive packets.
 
From what I've read about the present walreceiver retry logic, if the
connection breaks, WR will use some intelligence to try the archive
and retry connecting through TCP, in turn, until it finds data.  If
the connection goes silent without breaking, WR sits there forever
without looking at the archive or trying to obtain a new TCP
connection to the master.  I know which behavior I'd prefer.
Apparently the testers who encountered the behavior felt the same.
 
-Kevin



Re: [HACKERS] beta3 the open items list

2010-06-20 Thread Florian Pflug
On Jun 21, 2010, at 0:13 , Greg Stark wrote:
 Keepalive is therefore extremely unlikely to break things - in the very 
 worst case, a (really, really stupid) firewall might decide to drop packets 
 with zero bytes of payload, causing inactive connections to abort after a 
 while. AFAIK walreceiver will simply reconnect in this case.
 
 Stateful firewalls whole raison-d'etre is to block packets which
 aren't consistent with the current TCP state -- such as packets with a
 sequence number earlier than the last acked sequence number.
 Keepalives do in fact violate the basic TCP spec so they wouldn't be
 entirely crazy to block them. 

Keepalives play games with the spec, but they don't outright violate it I'd 
say. The sender bluffs by retransmitting data it *knows* has been ACK'ed. But 
since nobody else can prove with certainty that the sender actually saw that 
ACK (think NIC-internal buffer overflow), nobody is able to call that bluff. 

 Of course a firewall that blocked them
 would be pretty criminally stupid given how ubiquitous they are.


Very true, and another reason to stop worrying about possibly brain-dead 
firewalls.

 Plus, the postmaster enables keepalive on all incoming connections
 *already*, so any problems ought to have caused bugreports about
 dropped client connections.
 
 Really? Since when? I thought there was some discussion about this
 about a year ago and I made it very clear this had to be an optional
 feature which defaulted to off.

Since 'bout 10 years. The setsockopt call is in StreamConnection() in 
src/backend/libpq/pqcomm.c.

Here's the corresponding commit:

commit 5aa160abba32a1f2d7818b9f49213f38c99b3fd8
Author: Tatsuo Ishii is...@postgresql.org
Date:   Sat May 20 13:10:54 2000 +

Add KEEPALIVE option to the socket of backend. This will automatically
terminate the backend that has no frontend anymore.

 Keepalives introduce spurious disconnections in working TCP
 connections that have transient outages which is basic TCP
 functionality that's supposed to work. There are cases where that's
 what you want but it isn't the kind of thing that should be on by
 default, let alone on unconditionally.

I'd buy that if all timeouts and retry counts defaulted to +infinity. But 
they don't, and hence sufficiently long network outages *will* cause connection 
aborts anyway. That a particular connection might survive due to inactivity 
proves nothing, since whether the connection is active or inactive during an 
outage is usually outside of anyone's control.

I really fail to see why anyone would prefer connections (and therefore 
transactions!) getting stuck forever over a few spurious disconnects. The 
former always require manual intervention and cause all sorts of performance 
and disk-space issues, while the latter won't even be an issue for well-written 
clients who just reconnect and retry.

best regards,
Florian Pflug




Re: [HACKERS] About tapes

2010-06-20 Thread mac_man2...@hotmail.it

Robert, so in my example:
- tapes are stored in different files (one tape per file)
- you confirm those first blocks are garbage
- you confirm they could be rewritten with new data

This means that we can recycle space using one tape per file. Correct?

So, in this case, why do we need to use logical tapesets?
In other words, why did Tom affirm it was impossible to recycle space 
implementing one tape per file?




Il 20/06/2010 23:20, Robert Haas ha scritto:

On Sat, Jun 19, 2010 at 4:57 AM, mac_man2...@hotmail.it
mac_man2...@hotmail.it  wrote:
   

Tom, Robert,
thank you.

Now it is clearer how space on tapes is recycled.

I tried to follow Robert's example but storing one tape per separate file.
Read in the first block of each run (hosted by separate tapes and so by
separate files) and output them into extra storage, wherever this extra
storage is.
Again, those first input blocks are now garbage: is it correct?
 

Yes.

   

In this case, what happens when trying to recycle those garbage blocks by
hosting the result of merging the second block of each run?
 

You just overwrite them with the new data.

   





Re: [HACKERS] beta3 the open items list

2010-06-20 Thread Greg Stark
On Mon, Jun 21, 2010 at 12:42 AM, Florian Pflug f...@phlo.org wrote:
 I'd buy that if all timeouts and retry counts would default to +infinity. But 
 they don't, and hence sufficiently long network outages *will* cause 
 connection aborts anyway. That a particular connection might survive due to 
 inactivity proves nothing, since whether the connection is active or inactive 
 during an outage is usually outside of anyone's control.

 I really fail to see why anyone would prefer connections (and therefore 
 transactions!) getting stuck forever over a few spurious disconnects. The 
 former always require manual intervention and cause all sorts of performance 
 and disk-space issues, while the latter won't even be an issue for 
 well-written clients who just reconnect and retry.


So just as a data point I'm routinely annoyed by reopening my screen
session and finding various ssh sessions have died since the day
before. Usually this is caused by broken firewalls but there are also
a bunch of SSH options which some servers have enabled which cause my
sessions to never survive very long if there are any network outages.
Servers where those options are disabled work fine.

I admit this is a very different use case though and since we have
control over the behaviour when the connection breaks perhaps the
analogy falls apart completely. I'm not sure we can guarantee that
reconnecting is always so simple, though. What if the user has set up an
SSH gateway or needs some extra authentication to make the connection?
Are users expecting the slave to randomly disconnect and reconnect
willy-nilly, or are they expecting that once it connects it'll keep
using that connection forever?

-- 
greg



Re: [HACKERS] About tapes

2010-06-20 Thread Tom Lane
mac_man2...@hotmail.it mac_man2...@hotmail.it writes:
 Robert, so in my example:
 - tapes are stored in different files (one tape per file)
 - you confirm those first blocks are garbage
 - you confirm they could be rewritten with new data

 This means that we can recycle space using one tape per file. Correct?

No.  You could do that if the rate at which you need to write data to
the file is <= the rate at which you extract it.  But for what we
are doing, namely merging runs from several tapes into one output run,
it's pretty much guaranteed that you need new space faster than you are
consuming data from any one input tape.  It balances out as long as you
keep *all* the tapes in one operating-system file; otherwise not.

regards, tom lane



Re: [HACKERS] Patch: psql \whoami option

2010-06-20 Thread Steve Singer



This is a review for the \whoami patch (changed to \conninfo).

This review was done on the Feb 2 2010 version of the patch (rebased to 
head) that reflects some of the feedback from -hackers on the initial 
submission.  The commitfest entry should be updated to reflect the most 
recent version of this patch that David emailed to me.



Content & Purpose

The patch adds a \conninfo command to psql to print connection information 
for the current connection.  The patch includes documentation updates but no 
regression test changes.  I don't see  regression tests for other psql '\' 
commands so I don't think they are required in this case either.


Usability Review
==

The initial discussion on -hackers recommended renaming the command to 
\conninfo, which was done.


One comment I have on the output format is that values (i.e. the database 
name) are enclosed in double quotes, but the values being quoted can contain 
double quotes that are not being escaped.  For example


Connected to database "testinger"", user "ssinger", port "5432" via local 
domain socket


(where my database name is testinger").  Programs will have a hard time 
parsing this.  I'm not sure if this is a valid concern but I'm mentioning 
it.



Initial Run
==

Connecting both through tcp/ip and unix domain sockets produces valid 
\conninfo output.  The regression tests pass when the patch is applied.



Performance
=

I see no performance implications of this patch.


Code & Nitpicking


In command.c you have the opening brace on the same line as the if. See
if (host) {
and the associated else {

The block else if (strcmp(cmd, "conninfo") == 0) is in between the 
commands \c and \cd; it looks like the commands are ordered 
alphabetically.  Wouldn't conninfo fit in after \cd but before \copy?



In help.c you don't update the row count at the top of slashUsage(); per 
the comment there, you should increment it.



Other than those issues the patch looks fine.

Steve




Re: [HACKERS] beta3 the open items list

2010-06-20 Thread Robert Haas
On Sun, Jun 20, 2010 at 9:31 PM, Greg Stark gsst...@mit.edu wrote:
 On Mon, Jun 21, 2010 at 12:42 AM, Florian Pflug f...@phlo.org wrote:
 I'd buy that if all timeouts and retry counts would default to +infinity. 
 But they don't, and hence sufficiently long network outages *will* cause 
 connection aborts anyway. That a particular connection might survive due to 
 inactivity proves nothing, since whether the connection is active or 
 inactive during an outage is usually outside of anyone's control.

 I really fail to see why anyone would prefer connections (and therefore 
 transactions!) getting stuck forever over a few spurious disconnects. The 
 former always require manual intervention and cause all sorts of performance 
 and disk-space issues, while the latter won't even be an issue for 
 well-written clients who just reconnect and retry.


 So just as a data point I'm routinely annoyed by reopening my screen
 session and finding various ssh sessions have died since the day
 before. Usually this is caused by broken firewalls but there are also
 a bunch of SSH options which some servers have enabled which cause my
 sessions to never survive very long if there are any network outages.
 Servers where those options are disabled work fine.

 I admit this is a very different use case though and since we have
 control over the behaviour when the connection breaks perhaps the
 analogy falls apart completely. I'm not sure we can guarantee that
 reconnecting is always so simple though. What if the user set up an
 SSH gateway or needs some extra authentication to make the connection.
 Are users expecting the slave to randomly disconnect and reconnect
 willy nilly or are they expecting that once it connects it'll keep
 using that connection forever?

I feel like we're getting off in the weeds, here.  Obviously, the user
would ideally like the connection to the master to last forever, but
equally obviously, if the master unexpectedly reboots, they'd like the
slave to notice - ideally within some reasonable time period - that it
needs to reconnect.  There's no perfect way to distinguish "the master
croaked" from "the network administrator unplugged the Ethernet cable
and is planning to plug it back in any hour now", so we'll just need
to pick some reasonable timeout and go with it.  To my way of
thinking, if the master hasn't responded in a minute or two, that's a
sign that it's time to declare the connection dead.  Retrying the
connection *should* be cheap.  If the user has set things up so that a
TCP connection from slave to master is not straightforward, the user
has configured it incorrectly, and no matter what we do it's not going
to be reliable.

I still think there's a decent argument that we might want to have a
protocol-level heartbeat rather than a TCP-level heartbeat.  But doing
the latter is, I think, good enough for 9.0.  We're pretty much
speculating about what the problems with that approach might be, so
getting too worked up about fixing them at this point seems premature.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company



Re: [HACKERS] Patch: psql \whoami option

2010-06-20 Thread Robert Haas
On Sun, Jun 20, 2010 at 10:51 PM, Steve Singer ssinger...@sympatico.ca wrote:
 One comment I have on the output format is that values (i.e. the database
 name) are enclosed in double quotes, but the values being quoted can contain
 double quotes that are not being escaped.  For example

 Connected to database "testinger"", user "ssinger", port "5432" via local
 domain socket

 (where my database name is testinger").  Programs will have a hard time
 parsing this.  I'm not sure if this is a valid concern but I'm mentioning
 it.

It seems like for user and database it might be sensible to apply
PQescapeIdentifier to the value before printing it.  This will
double-quote it and escape any internal double-quotes appropriately.
The port is, I guess, being stored as a string, but doesn't it have to
be an integer?  In which case, why quote it at all?

Is there really a point to the non-DSN format or should we just use
the DSN format always?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company



Re: [HACKERS] Keepalive for max_standby_delay

2010-06-20 Thread Ron Mayer
Robert Haas wrote:
 On Wed, Jun 16, 2010 at 9:56 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Sorry, I've been a bit distracted by other responsibilities (libtiff
 security issues for Red Hat, if you must know).  I'll get on it shortly.
 
 What?  You have other things to do besides hack on PostgreSQL?  Shocking!  :-)

I suspect you're kidding, but in case some on the list didn't realize,
Tom's probably as famous (if not moreso) in the image compression
community as he is in the database community:

http://www.jpeg.org/jpeg/index.html
Probably the largest and most important contribution however was the work
 of the Independent JPEG Group (IJG), and Tom Lane in particular.

http://www.w3.org/TR/PNG-Credits.html , http://www.w3.org/TR/PNG/
PNG (Portable Network Graphics) Specification
 Version 1.0
 ...
 Contributing Editor
 Tom Lane, t...@sss.pgh.pa.us

http://www.fileformat.info/format/tiff/egff.htm
... by Dr. Tom Lane of the Independent JPEG Group, a member of the
 TIFF Advisory Committee



Re: [HACKERS] server authentication over Unix-domain sockets

2010-06-20 Thread KaiGai Kohei
(2010/06/11 21:11), Stephen Frost wrote:
 * Magnus Hagander (mag...@hagander.net) wrote:
 On Fri, Jun 11, 2010 at 14:07, Stephen Frostsfr...@snowman.net  wrote:
 I definitely like the idea but I dislike requiring the user to do
 something to implement it.  Thinking about how packagers might want to
 use it, could we make it possible to build it defaulted to a specific
 value (eg: 'postgres' on Debian) and allow users a way to override
 and/or unset it?

 Well, even if we don't put that in, the packager could export a global
 PGREQUIREPEER environment variable.
 
 Yea, no, that's a crappy solution, sorry. :)  I've been down that
 road with people trying to monkey with /etc/bashrc; oh wait, not
 everyone uses bash, and having every package screw with that stuff is
 equally horrible.  Admittedly, in this specific case, Debian could
 implement what you're talking about in it's wrapper system, maybe, but I
 still don't like it and if people don't use the wrapper (I can imagine
 cases why that might happen, tho I havn't ever had to myself), they
 wouldn't get the benefit..
 
Are you suggesting that the packager enforce a certain unix user at
installation time, even though 'postgres' will be used in most cases?

Let's get back to the purpose of the feature.
In my understanding, it provides the client process a way to verify the
user identifier of the server process before sending a password.
Indeed, if we provide a default value for requirepeer via an
environment variable, the client process can override it with its own
setting. But is there any problem with that?

This option allows the client process to specify the expected user
identifier of the server process; libpq then closes the connection
if it does not match.
Even if the default comes from the system default, the client can
provide an explicit alternative in the connection string.
Is there any fundamental difference from the environment variable?

Thanks,
-- 
KaiGai Kohei kai...@ak.jp.nec.com
