Re: [HACKERS] Documentation/help for materialized and recursive views

2013-07-01 Thread Magnus Hagander
On Mon, Jul 1, 2013 at 9:26 PM, Robert Haas robertmh...@gmail.com wrote:
 On Mon, Jul 1, 2013 at 10:20 AM, David Fetter da...@fetter.org wrote:
 With deepest respect, failing to provide documentation to users on our
 widest-deployed platform seems pretty hostile to me.

 Yes, that would be pretty hostile.  However, we don't do anything that
 remotely resembles that statement, nor has anyone proposed any such
 thing.

 Personally, I think this whole thread is much ado about nothing.
 Magnus is basically arguing that people might expect that CREATE VIEW
 ought to tell you about CREATE MATERIALIZED VIEW also, but I don't
 find that argument to have a whole lot of merit.  Views and
 materialized views are pretty different things; it is a bit like

How different are they really? Yes, they are very different from an
implementation standpoint, but from an end-user perspective they really
are not. If they were, they'd probably be called something else.

 asking why Googling for "dog" does not give you information on "hot
 dogs".  The output of psql's \h command is intended to be a succinct

I'd personally say it's more like googling for "dog" gives me hits
specifically around dog breeding and not just dogs themselves.


 synopsis summarizing the salient syntax (try saying that five times
 fast), not a comprehensive reference.  If you want the latter, read
 the fine manual.  I admit that this particular case is slightly more
 prone to confusion than some, but I'm just not that exercised about
 it.  Every bit of detail we add to the \h output is better for the
 people who otherwise would have been unhappy, but it's worse for all
 the people who didn't need it because it's more to read through.

True.


 Regardless of whether you agree with or disagree with the above
 statement, building a high-quality documentation reader into psql so
 that users who are running Windows but not mingw, cygwin, or pgAdmin
 can access the documentation more easily doesn't seem like the correct
 solution to this problem.  I don't really object if somebody wants to
 do it (although someone else may object) but it's certainly taking the
 long way around as far as this particular confusion is concerned.

I still think a better option to that would be to get psql to provide
a link to the full documentation there.

pgAdmin could also do that, but doesn't - it gets you a link to the
main documentation, but not a context-sensitive one, IIRC.

--
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Documentation/help for materialized and recursive views

2013-06-28 Thread Magnus Hagander
On Fri, Jun 28, 2013 at 5:54 AM, Kevin Grittner kgri...@ymail.com wrote:
 Robert Haas robertmh...@gmail.com wrote:
 Magnus Hagander mag...@hagander.net wrote:

  The functionality of materialized views will (over time) totally swamp
  that of normal views, so mixing all the corresponding documentation
  with the documentation for normal views probably doesn’t make things
  easier for people that are only interested in normal views.

  That's a better point I think. That said, it would be very useful if
  it actually showed up in \h CREATE VIEW in psql - I wonder if we
  should just add the syntax to that page, and then link said future
  information on a separate page somehow?

 IMHO, it's better to keep them separate; they are very different beasts.


 +1

 Although I wonder whether we shouldn't cross-link those pages

They are already crosslinked under see also. But that doesn't really
help the guy doing \h CREATE VIEW in psql, which was the case where
it was brought to my attention.

--
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/




Re: [HACKERS] Documentation/help for materialized and recursive views

2013-06-28 Thread Magnus Hagander
On Fri, Jun 28, 2013 at 4:49 PM, Alvaro Herrera
alvhe...@2ndquadrant.com wrote:
 Magnus Hagander escribió:

 They are already crosslinked under see also. But that doesn't really
 help the guy doing \h CREATE VIEW in psql, which was the case where
 it was brought to my attention.

 Maybe \h should somehow display the see also section?

I've been toying with the idea of getting \h to show an actual http://
link to the reference page on the website, since most terminals let
you deal with URLs easily these days. I haven't actually looked into
how feasible that would be, but it would be interesting to check out.
(With a toggle to turn it on/off, of course.)
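
For illustration, the end of the \h output might then look something
like this (a mock-up of a hypothetical feature, not current psql
behavior; the URL format is an assumption):

```
=# \h CREATE VIEW
Command:     CREATE VIEW
Description: define a new view
Syntax:
CREATE [ OR REPLACE ] [ TEMP | TEMPORARY ] [ RECURSIVE ] VIEW name ...

URL: http://www.postgresql.org/docs/current/static/sql-createview.html
```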

--
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/




[HACKERS] Documentation/help for materialized and recursive views

2013-06-27 Thread Magnus Hagander
Is there a particular reason why CREATE RECURSIVE VIEW is part of the
help for CREATE VIEW, but CREATE MATERIALIZED VIEW doesn't show up
there?

I realize the technical reason (they're different man pages, and that
also controls what's in \h in psql which is where I ran into it), but
was there any particular reason to split those up in the first place?

--
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/




Re: [HACKERS] Documentation/help for materialized and recursive views

2013-06-27 Thread Magnus Hagander
On Thu, Jun 27, 2013 at 10:56 AM, Nicolas Barbier
nicolas.barb...@gmail.com wrote:
 2013/6/27 Magnus Hagander mag...@hagander.net:

 Is there a particular reason why CREATE RECURSIVE VIEW is part of the
 help for CREATE VIEW, but CREATE MATERIALIZED VIEW doesn't show up
 there?

 I realize the technical reason (they're different man pages, and that
 also controls what's in \h in psql which is where I ran into it), but
 was there any particular reason to split those up in the first place?

 Normal views are an abstraction layer, while materialized views
 (despite containing the word “view”) are mainly a performance tool (in
 a way similar to indexes).

Oh yes, I'm well aware of this of course.


 The functionality of materialized views will (over time) totally swamp
 that of normal views, so mixing all the corresponding documentation
 with the documentation for normal views probably doesn’t make things
 easier for people that are only interested in normal views.

That's a better point I think. That said, it would be very useful if
it actually showed up in \h CREATE VIEW in psql - I wonder if we
should just add the syntax to that page, and then link said future
information on a separate page somehow?

--
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/




Re: [HACKERS] PostgreSQL 9.3 latest dev snapshot

2013-06-27 Thread Magnus Hagander
On Tue, Jun 25, 2013 at 3:31 PM, Michael Paquier
michael.paqu...@gmail.com wrote:

 On 2013/06/25, at 22:23, Fujii Masao masao.fu...@gmail.com wrote:

 On Tue, Jun 25, 2013 at 6:33 PM, Michael Paquier
 michael.paqu...@gmail.com wrote:
 On Tue, Jun 25, 2013 at 5:33 PM, Misa Simic misa.si...@gmail.com wrote:
 Hi,

 Where we can find latest snapshot for 9.3 version?

 We have taken latest snapshot from
 http://ftp.postgresql.org/pub/snapshot/dev/

 But it seems it is for 9.4 version...
 9.3 has moved to branch REL9_3_STABLE a couple of days ago.

 Yes. We can find the snapshot from REL9_3_STABLE git branch.
 http://git.postgresql.org/gitweb/?p=postgresql.git;a=shortlog;h=refs/heads/REL9_3_STABLE
 Indeed, I completely forgot that you can download snapshots from 
 postgresql.org's git. Simply use that instead of the FTP server now as long 
 as 9.3 snapshots are not generated there.

In case somebody is still looking, snapshots are properly building for 9.3 now.

Those snapshots aren't identical to a download from git, as they've
gone through a make dist-prep or whatever it's called. But they're
pretty close.

However, if you're looking for a snapshot, please use the one on the
FTP site. Generating those snapshots on the git server is slow and
expensive...


--
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/




[HACKERS] Min value for port

2013-06-27 Thread Magnus Hagander
Is there a reason why we have set the min allowed value for port to 1,
not 1024? Given that you can't actually start postgres with a port
below 1024, shouldn't the entry in pg_settings reflect that as well?
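
For context, this is the sort of thing pg_settings reports (a psql
transcript from memory, so treat it as illustrative):

```
=# SELECT name, min_val, max_val FROM pg_settings WHERE name = 'port';
 name | min_val | max_val
------+---------+---------
 port | 1       | 65535
(1 row)
```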

--
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/




Re: [HACKERS] Min value for port

2013-06-27 Thread Magnus Hagander
On Thu, Jun 27, 2013 at 2:16 PM, Peter Eisentraut pete...@gmx.net wrote:
 On 6/27/13 6:34 AM, Magnus Hagander wrote:
 Is there a reason why we have set the min allowed value for port to 1,
 not 1024? Given that you can't actually start postgres with a port
 below 1024, shouldn't the entry in pg_settings reflect that as well?

 Are you thinking of the restriction that you need to be root to use
 ports below 1024?  That restriction is not necessarily universal.  We can let
 the kernel tell us at run time if it doesn't like our port.

Yes, that's the restriction I was talking about. It's just a bit
annoying that if you look at pg_settings.min_value it doesn't actually
tell you the truth. But yeah, I believe Windows actually lets you use
a lower port number, so it'd at least have to be #ifdef'ed for that if
we wanted to change it.


--
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/




Re: [HACKERS] Implementing incremental backup

2013-06-20 Thread Magnus Hagander
On Thu, Jun 20, 2013 at 12:18 AM, Alvaro Herrera
alvhe...@2ndquadrant.com wrote:
 Claudio Freire escribió:
 On Wed, Jun 19, 2013 at 6:20 PM, Stephen Frost sfr...@snowman.net wrote:
  * Claudio Freire (klaussfre...@gmail.com) wrote:
  I don't see how this is better than snapshotting at the filesystem
  level. I have no experience with TB scale databases (I've been limited
  to only hundreds of GB), but from my limited mid-size db experience,
  filesystem snapshotting is pretty much the same thing you propose
  there (xfs_freeze), and it works pretty well. There's even automated
  tools to do that, like bacula, and they can handle incremental
  snapshots.
 
  Large databases tend to have multiple filesystems and getting a single,
  consistent, snapshot across all of them while under load is..
  'challenging'.  It's fine if you use pg_start/stop_backup() and you're
  saving the XLOGs off, but if you can't do that..

 Good point there.

 I still don't like the idea of having to mark each modified page. The
 WAL compressor idea sounds a lot more workable. As in scalable.

 There was a project that removed useless WAL records from the stream,
 to make it smaller and useful for long-term archiving.  It only removed
 FPIs as far as I recall.  It's dead now, and didn't compile on recent
 (9.1?) Postgres because of changes in the WAL structs, IIRC.

 This doesn't help if you have a large lot of UPDATEs that touch the same
 set of rows over and over, though.  Tatsuo-san's proposal would allow
 this use-case to work nicely because you only keep one copy of such
 data, not one for each modification.

 If you have the two technologies, you could teach them to work in
 conjunction: you set up WAL replication, and tell the WAL compressor to
 prune updates for high-update tables (avoid useless traffic), then use
 incremental backup to back these up.  This seems like it would have a
 lot of moving parts and be rather bug-prone, though.

Just as a datapoint, I think this is basically what at least one
other database engine (SQL Server) calls "incremental" vs
"differential" backup.

Differential backup keeps track of which blocks have changed (one
way or another - maybe as simple as the LSN, but it doesn't matter
how, really) and backs up just those blocks (diffed against the base
backup).

Incremental backup does the transaction log, which is basically what
we do with log archiving, except it's not done in real time - it's all
saved on the master until the backup command runs.

Of course, it's been quite a few years since I set up one of those
in anger, so disclaimer for that info being out of date :)

Didn't pg_rman try to do something based on the page LSN to achieve
something similar to this?

--
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/




Re: [HACKERS] Config reload/restart preview

2013-06-20 Thread Magnus Hagander
On Thu, Jun 20, 2013 at 1:04 PM, Thom Brown t...@linux.com wrote:
 Hi,

 I've noticed that there's no easy way of checking which settings will change
 if the config is reloaded, and I think not being able to do this can cause
 some unfortunate problems.

 For example, a customer went to change their configuration, just setting
 log_autovacuum_min_duration to about 20 seconds, and reloaded the server.
 However, the log file swelled to over 5GB in size before they realised
 something was wrong, and then reverted the change.  It transpired that the
 reload also pulled in a log_statements change from 'ddl' to 'all' that
 someone must have changed at some point without applying it.

 Should we have a way of previewing changes that would be applied if we
 reloaded/restarted the server?

 For example:

 pg_ctl previewconfig

 SIGHUP: log_statements will change from 'ddl' to 'all'
 SIGHUP: log_vacuum_min_duration will change from -1 to 2
 POSTMASTER: fsync will change from 'on' to 'off'

 I'm not proposing this specifically, but something that would provide such
 information.

Yes, we should.

This would go well with something I started working on some time ago
(but haven't actually gotten very far on), which is the ability for
pg_ctl to give feedback at all. Meaning a pg_ctl reload
should also be able to tell you which parameters were changed, without
having to go to the log. Obviously that's almost exactly the same
feature.

The problem today is that pg_ctl just sends off a SIGHUP when it does
a reload. We'd have to give it an actual interface that could return
data back as well, such as a socket of some kind. So it does take some
work to come up with. But I definitely think we should have something
like this.
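
For reference, today's reload path is essentially fire-and-forget; a
rough sketch of what pg_ctl reload amounts to (assuming the standard
postmaster.pid layout, where the first line holds the postmaster PID):

```
# send SIGHUP to the postmaster; it re-reads the config files
kill -HUP "$(head -n 1 "$PGDATA/postmaster.pid")"
# ...and that's it - any feedback ends up in the server log, not here
```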

--
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/




Re: [HACKERS] Config reload/restart preview

2013-06-20 Thread Magnus Hagander
On Thu, Jun 20, 2013 at 2:54 PM, Dimitri Fontaine
dimi...@2ndquadrant.fr wrote:
 Magnus Hagander mag...@hagander.net writes:
 Should we have a way of previewing changes that would be applied if we
 reloaded/restarted the server?

 Yes, we should.

 +1

 This would go well with something I started working on some time ago
 (but haven't actually gotten far on at all), which is the ability for
 pg_ctl to be able to give feedback at all. Meaning a pg_ctl reload
 should also be able to tell you which parameters were changed, without
 having to go to the log. Obviously that's almost exactly the same
 feature.

 It could probably connect to the server and issue the SQL command to
 reload, and that one could probably get enhanced to return modified
 variable as NOTICE, or be run with the right client_min_message:

 SELECT pg_reload_conf();

 The pg_ctl client would then have to know to display the messages sent
 back by the server.

The problem with that is that now you must *always* have your system
set up in pg_hba.conf to allow the postgres user to log in, or it
fails.

But yes, one option would be to use SQL instead of opening a socket.
Maybe that's a better idea - have pg_ctl try to use that if available,
and if not send a regular signal and not try to collect the output.

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/




Re: [HACKERS] How do we track backpatches?

2013-06-19 Thread Magnus Hagander
On Wed, Jun 19, 2013 at 4:44 AM, Peter Eisentraut pete...@gmx.net wrote:
 On Tue, 2013-06-18 at 12:32 +0200, Magnus Hagander wrote:
 The CF app was and is specifically for dealing with CFs. Having it
 deal with backpatches makes it, well, a bugtracker. It's not meant to
 be that. If we want a bugtracker, then it has very different
 requirements.

 It's not in evidence that the requirements are different.  The CF app is
 basically a list of lists of patches with date information and
 associated person's names.  Tracking backpatch candidates doesn't sound
 that much different.  (That said, I'm not convinced backpatches need any
 tracking at all, but if they did, I think the CF app would be just
 fine.)

 Having an always-open CF would defeat the workflow.

 I'd imagine having a CF entry per release, so after a set of minor
 releases, the CF is closed.

Oh, I think I misunderstood what you meant.

That way does make a lot more sense than what I thought you were
saying :) I shall withdraw my objection.

--
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/




Re: ALTER SYSTEM SET command to change postgresql.conf parameters (RE: [HACKERS] Proposal for Allow postgresql.conf values to be changed via SQL [review])

2013-06-19 Thread Magnus Hagander
On Jun 19, 2013 7:55 PM, Peter Eisentraut pete...@gmx.net wrote:

 On 6/19/13 1:49 PM, Josh Berkus wrote:
  On 06/19/2013 10:48 AM, Peter Eisentraut wrote:
  On 6/13/13 5:47 PM, Josh Berkus wrote:
  2. File name to store settings set by ALTER SYSTEM command is still
  persistent.auto.conf
  Why?  Shouldn't it just be auto.conf?  Or system.auto.conf?
 
  I prefer auto.conf, personally.
 
  Well, not much about it is automatic, really.  It's just set elsewhere.
 
 
  True, but can you think of a better word to mean "don't edit this by
hand"?

 generated_by_server.conf

System.conf?

/Magnus


Re: [HACKERS] How do we track backpatches?

2013-06-19 Thread Magnus Hagander
On Wed, Jun 19, 2013 at 8:54 PM, Alvaro Herrera
alvhe...@2ndquadrant.com wrote:
 Josh Berkus wrote:

  I'd imagine having a CF entry per release, so after a set of minor
  releases, the CF is closed.

 How would we name these?

 Also, what about patches for beta?  Should we have a beta CF?

 Don't we have the Open Items wiki page for those?  Seems to work well
 enough.

Yes. The CF app only tracks things that already have patches. For the
beta, we really need to track things that may not have been fixed yet -
or that may have been done, but only partially so far.

--
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/




Re: [HACKERS] How do we track backpatches?

2013-06-18 Thread Magnus Hagander
On Tue, Jun 18, 2013 at 4:48 AM, Peter Eisentraut pete...@gmx.net wrote:
 On Mon, 2013-06-17 at 17:11 -0700, Josh Berkus wrote:
 Contributors,

 While going through this mailing list looking for missing 9.4 patches, I
 realized that we don't track backpatches (that is, fixes to prior
 versions) at all anywhere.  Where backpatches are submitted by
 committers this isn't an issue, but we had a couple major ones (like the
 autovacuum fix) which were submitted by general contributors.  The same
 goes for beta fixes.

 Should we add a prior version category to the CF to make sure these
 don't get dropped?  Or do something else?

 A separate commit fest for tracking proposed backpatches might be
 useful.

The CF app was and is specifically for dealing with CFs. Having it
deal with backpatches makes it, well, a bugtracker. It's not meant to
be that. If we want a bugtracker, then it has very different
requirements.

Having an always-open CF would defeat the workflow. But since those
patches are typically going into HEAD as well, why not just a
commitfest *topic* for it, on whatever commitfest happens to be the
open one? Then it'll get processed within the existing workflow.

--
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/




Re: [HACKERS] Clean switchover

2013-06-12 Thread Magnus Hagander
On Wed, Jun 12, 2013 at 6:41 AM, Amit Kapila amit.kap...@huawei.com wrote:
 On Wednesday, June 12, 2013 4:23 AM Fujii Masao wrote:
 Hi,

 In streaming replication, when we shutdown the master, walsender tries
 to send all the outstanding WAL records including the shutdown
 checkpoint record to the standby, and then to exit. This basically
 means that all the WAL records are fully synced between two servers
 after the clean shutdown of the master. So, after promoting the standby
 to new master, we can restart the stopped master as new standby without
 the need for a fresh backup from new master.

 But there is one problem: though walsender tries to send all the
 outstanding WAL records, it doesn't wait for them to be replicated to
 the standby. IOW, walsender closes the replication connection as soon
 as it sends WAL records.
 Then, before receiving all the WAL records, walreceiver can detect the
 closure of connection and exit. We cannot guarantee that there is no
 missing WAL in the standby after clean shutdown of the master. In this
 case, backup from new master is required when restarting the stopped
 master as new standby. I have experienced this case several times,
 especially when enabling WAL archiving.

 The attached patch fixes this problem. It just changes walsender so
 that it waits for all the outstanding WAL records to be replicated to
 the standby before closing the replication connection.

 You may be concerned the case where the standby gets stuck and the
 walsender keeps waiting for the reply from that standby. In this case,
 wal_sender_timeout detects such inactive standby and then walsender
 ends. So even in that case, the shutdown can end.

 Do you think it can impact time to complete shutdown?
 After completing shutdown, user will promote standby to master, so if there
 is delay in shutdown, it can cause delay in switchover.

I'd expect a controlled switchover to happen without data loss. Yes,
this could make it take a bit longer, but it guarantees you don't
lose data. ISTM that if you don't care about the potential data loss,
you can just use a faster shutdown method (e.g. immediate).

--
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/




Re: [HACKERS] Clean switchover

2013-06-12 Thread Magnus Hagander
On Wed, Jun 12, 2013 at 1:48 PM, Andres Freund and...@2ndquadrant.com wrote:
 Hi,

 On 2013-06-12 07:53:29 +0900, Fujii Masao wrote:
 The attached patch fixes this problem. It just changes walsender so that it
 waits for all the outstanding WAL records to be replicated to the standby
 before closing the replication connection.

 Imo this is a fix that needs to get backpatched... The code tried to do
 this but failed, I don't think it really gives grounds for valid *new*
 concerns.

+1 (without having looked at the code itself, it's definitely a
behaviour that needs to be fixed)

--
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/




Re: [HACKERS] Hard limit on WAL space used (because PANIC sucks)

2013-06-12 Thread Magnus Hagander
On Jun 12, 2013 4:56 PM, Robert Haas robertmh...@gmail.com wrote:

 On Sat, Jun 8, 2013 at 10:36 AM, MauMau maumau...@gmail.com wrote:
  Yes, I feel designing reliable archiving, even for the simplest case -
copy
  WAL to disk, is very difficult.  I know there are following three
problems
  if you just follow the PostgreSQL manual.  Average users won't notice
them.
  I guess even professional DBAs migrating from other DBMSs won't, either.
 
  1. If the machine or postgres crashes while archive_command is copying
a WAL
  file, later archive recovery fails.
  This is because cp leaves a file of less than 16MB in archive area, and
  postgres refuses to start when it finds such a small archive WAL file.
  The solution, which IIRC Tomas-san told me here, is to do like cp %p
  /archive/dir/%f.tmp && mv /archive/dir/%f.tmp /archive/dir/%f.
 
  2. archive_command dumps core when you run pg_ctl stop -mi.
  This is because postmaster sends SIGQUIT to all its descendants.  The
core
  files accumulate in the data directory, which will be backed up with the
  database.  Of course those core files are garbage.
  archive_command script needs to catch SIGQUIT and exit.
 
  3. You cannot know the reason of archive_command failure (e.g. archive
area
  full) if you don't use PostgreSQL's server logging.
  This is because archive_command failure is not logged in
syslog/eventlog.
 
 
  I hope PostgreSQL will provide a reliable archiving facility that is
ready
  to use.

 +1.  I think we should have a way to set an archive DIRECTORY, rather
 than an archive command.  And if you set it, then PostgreSQL should
 just do all of that stuff correctly, without any help from the user.

Wouldn't that encourage people to do local archiving, which is almost
always a bad idea?

I'd rather improve the experience with pg_receivexlog or another way that
does remote archiving...

 Of course, some users will want to archive to a remote machine via ssh
 or rsync or what-have-you, and those users will need to provide their
 own tools.  But it's got to be pretty common to archive to a local
 path that happens to be a remote mount, or to a local directory whose
 contents are subsequently copied off by a batch job.  Making that work
 nicely with near-zero configuration would be a significant advance.

I guess archiving to an NFS mount or so isn't too bad, but archiving and
using a cron job to get the files off is typically a great way to lose
data, and we really shouldn't encourage that by default, IMO.
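
That said, for anyone stuck with local archiving, the first two failure
modes MauMau lists are avoidable inside the archive_command itself. A
minimal sketch (directory names and the helper function are examples,
not project-blessed tooling):

```shell
#!/bin/sh
# Sketch of a safer archive_command helper.
# Point 1: copy to a temporary name, then rename - recovery never sees
#          a partially-copied WAL segment.
# Point 2: exit cleanly on SIGQUIT (pg_ctl stop -mi) instead of dumping core.
trap 'exit 1' QUIT

archive_wal() {
    src=$1; dest_dir=$2
    fname=$(basename "$src")
    # copy under a .tmp name, then atomically rename into place
    cp "$src" "$dest_dir/$fname.tmp" && mv "$dest_dir/$fname.tmp" "$dest_dir/$fname"
}

# Demonstration against throwaway directories:
wal=$(mktemp) && printf 'fake WAL contents' > "$wal"
archive=$(mktemp -d)
archive_wal "$wal" "$archive" && echo "archived $(basename "$wal")"
```

In postgresql.conf this would be wired up as something like
archive_command = '/path/to/script %p /archive/dir' (the script derives
%f itself via basename).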

/Magnus


Re: [HACKERS] Make targets of doc links used by phpPgAdmin static

2013-06-06 Thread Magnus Hagander
On Jun 6, 2013 4:14 AM, Peter Eisentraut pete...@gmx.net wrote:

 On Tue, 2013-06-04 at 22:27 -0500, Karl O. Pinc wrote:
  On 06/04/2013 10:16:20 PM, Peter Eisentraut wrote:
   On Tue, 2013-05-07 at 23:18 -0400, Alvaro Herrera wrote:
Peter Eisentraut wrote:
 On Tue, 2013-05-07 at 00:32 -0500, Karl O. Pinc wrote:
  Attached is a documentation patch against head which makes
  static the targets of the on-line PG html documentation that
  are referenced by the phpPgAdmin help system.e

 done
   
I wonder about backpatching this to 9.2 ?
  
   done
 
  Will this be in the next point release?  Or just when
  will it go live?

 I don't know when it goes to the web site, but it will be in the next
 point release.

Website doc updates for back branches are loaded at the same time as the
release is pushed. The developer docs are loaded every few hours from
git, but release docs are only loaded on release day.

/Magnus


Re: [HACKERS] units in postgresql.conf comments

2013-05-30 Thread Magnus Hagander
On Thu, May 30, 2013 at 3:52 AM, Joshua D. Drake j...@commandprompt.com wrote:

 On 05/30/2013 12:01 AM, Heikki Linnakangas wrote:

 We could make it mandatory to specify the unit in the value. Ie. throw
 an error on wal_sender_timeout = 50:

 ERROR: unit required for option wal_sender_timeout
 HINT:  Valid units for this parameter are ms, s, min, h, and d.

 Then you wouldn't need a comment to explain what the unit of a naked
 value is. The only problem I see with that is backwards-compatibility.
 Old postgresql.conf files containing naked values would no longer work.
 But all you'd need to do is to add in the units, which would work on
 older versions too, and would be good for readability anyway.

In general, I like this. Requiring full specification is never
wrong. Except possibly for the backwards-compatibility thing.


 I like this idea with one addition. We should have a default unit for each.
 For wal_sender_timeout seconds makes sense, but for checkpoint_timeout
 minutes makes sense (for example).

This sounds like a good way to make things even more confusing. Right
now the confusion is only in the comments - this would make it
confusing in the actual values.

Requiring a unit seems like a much better idea. That way, there is no
room for confusion.

--
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/




Re: [HACKERS] Running pgindent

2013-05-28 Thread Magnus Hagander
On Tue, May 28, 2013 at 9:48 AM, Robert Haas robertmh...@gmail.com wrote:
 On Tue, May 28, 2013 at 9:40 AM, Bruce Momjian br...@momjian.us wrote:
 On Wed, May 22, 2013 at 01:52:28PM -0400, Bruce Momjian wrote:
 Do we want to run pgindent soon?

 OK, should I run it this week?  Wednesday, 1800 GMT?

 wfm.

+1.

--
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/




[HACKERS] New committers

2013-05-27 Thread Magnus Hagander
It's been brought to my attention that I forgot to email hackers about us
adding new committers to the project, as I planned to do when I wrote my
blog post about it.

These are the same people as were announced during the PGCon closing
session - Jeff Davis, Stephen Frost, Fujii Masao, and Noah Misch.

My apologies for forgetting to announce it here, and another welcome to the
team to our new guys!

/Magnus


Re: [HACKERS] proposal: option --application_name for psql

2013-05-14 Thread Magnus Hagander
On Tue, May 14, 2013 at 9:25 PM, Pavel Stehule pavel.steh...@gmail.com wrote:
 2013/5/14 Erik Rijkers e...@xs4all.nl:
 On Tue, May 14, 2013 21:04, Pavel Stehule wrote:
 Hello

 we like to use psql in combination with bash - just like postgresql driver.

 Actually there is no simple possibility (from command line) to set
 application_name. We would like to use a different name than psql - for
 example splunk, ...

 I tested PGAPPNAME, but it doesn't work.


 It does work:

 $ PGAPPNAME=splunk psql -c "select application_name from pg_stat_activity;"
  application_name
 ------------------
  splunk
 (1 row)

 But perhaps I misunderstood your intention?


 I was wrong - PGAPPNAME works, thank you

 but a command line --application_name option should be a useful option,
 I am still thinking


That goes for a lot of the parameters that go in a connection string;
I'm not sure application_name is common enough to have its own
argument if other parameters don't. And you can still set it as long
as you use psql in connection string mode, as you can with all the
libpq parameters.

--
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/




Re: [HACKERS] proposal: option --application_name for psql

2013-05-14 Thread Magnus Hagander
On Tue, May 14, 2013 at 9:36 PM, Pavel Stehule pavel.steh...@gmail.com wrote:
 2013/5/14 Magnus Hagander mag...@hagander.net:
 psql in connection string mode

 Hi Magnus, can you send me some link?

http://www.postgresql.org/docs/9.2/static/app-psql.html

If this parameter contains an = sign or starts with a valid URI
prefix (postgresql:// or postgres://), it is treated as a conninfo
string. See Section 31.1 for more information.
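The detection rule quoted above is simple enough to sketch (an illustrative re-implementation, not psql's actual code):

```python
def is_conninfo(dbname: str) -> bool:
    """Mimic psql's documented rule: treat the dbname argument as a
    conninfo string if it contains an '=' sign or starts with a valid
    URI prefix (postgresql:// or postgres://)."""
    return ("=" in dbname
            or dbname.startswith("postgresql://")
            or dbname.startswith("postgres://"))
```

So `psql "application_name=splunk dbname=test"` goes down the conninfo path, while plain `psql test` does not.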


--
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/




Re: [HACKERS] Commit subject line

2013-05-06 Thread Magnus Hagander
On Fri, May 3, 2013 at 9:07 PM, Andres Freund and...@2ndquadrant.com wrote:
 On 2013-05-03 14:54:23 -0400, Andrew Dunstan wrote:

 On 05/03/2013 02:43 PM, Tom Lane wrote:
 Heikki Linnakangas hlinnakan...@vmware.com writes:
 On 03.05.2013 20:56, Bruce Momjian wrote:
 On Fri, May  3, 2013 at 01:42:33PM -0400, Andrew Dunstan wrote:
 Yeah. The recommended style is to have the first line be 50 chars or
 less, which is a bit unfortunate - it can be a challenge to keep to
 that limit for a meaningful or comprehensive subject.
 Oh, that's tight. I didn't know about the 50 char recommendation. I've
 tried to keep mine < 76 chars, so that when you do git log, it fits on
 an 80 char display with the 4 char indentation that git log does.
 Yeah, that's news to me too.  I've been using a 75-char line length for
 all my commit messages since we switched to git.  It's frequently tough
 enough to get a useful headline into 75 chars --- I can't see trying to
 do 50.

 man git-commit says:

Though not required, it’s a good idea to begin the commit message
with a single short (less than 50 character) line summarizing the
change, followed by a blank line and then a more thorough
description. Tools that turn commits into email, for example, use
the first line on the Subject: line and the rest of the commit in
the body.

 I'd be happy to use 75 or whatever if we could convince the email tools not
 to truncate the subject lines at 50.

 It's worth noting that neither git nor the kernel adheres to that
 limit...

FWIW, the tool we use to generate the commit emails truncates it at 80
(minus the 'pgsql: ' header). We can increase that, but it only fixes
the email one, and not the one that people look at on the web...

--
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/




Re: [HACKERS] Commit subject line

2013-05-06 Thread Magnus Hagander
On Mon, May 6, 2013 at 4:47 PM, Andrew Dunstan and...@dunslane.net wrote:

 On 05/06/2013 10:19 AM, Magnus Hagander wrote:

 On Fri, May 3, 2013 at 9:07 PM, Andres Freund and...@2ndquadrant.com
 wrote:

 On 2013-05-03 14:54:23 -0400, Andrew Dunstan wrote:

 On 05/03/2013 02:43 PM, Tom Lane wrote:

 Heikki Linnakangas hlinnakan...@vmware.com writes:

 On 03.05.2013 20:56, Bruce Momjian wrote:

 On Fri, May  3, 2013 at 01:42:33PM -0400, Andrew Dunstan wrote:

 Yeah. The recommended style is to have the first line be 50 chars or
 less, which is a bit unfortunate - it can be a challenge to keep to
 that limit for a meaningful or comprehensive subject.

 Oh, that's tight. I didn't know about the 50 char recommendation. I've
 tried to keep mine < 76 chars, so that when you do git log, it fits
 on
 an 80 char display with the 4 char indentation that git log does.

 Yeah, that's news to me too.  I've been using a 75-char line length for
 all my commit messages since we switched to git.  It's frequently tough
 enough to get a useful headline into 75 chars --- I can't see trying to
 do 50.

 man git-commit says:

 Though not required, it’s a good idea to begin the commit message
 with a single short (less than 50 character) line summarizing the
 change, followed by a blank line and then a more thorough
 description. Tools that turn commits into email, for example, use
 the first line on the Subject: line and the rest of the commit in
 the body.

 I'd be happy to use 75 or whatever if we could convince the email tools
 not
 to truncate the subject lines at 50.

 It's worth noting that neither git nor the kernel adheres to that
 limit...

 FWIW, the tool we use to generate the commit emails truncates it at 80
 (minus the 'pgsql: ' header). We can increase that, but it only fixes
 the email one, and not the one that people look at on the web...


 In practice, something else must be further truncating it, at about 64 chars
 by the look of it - see for example
 http://www.postgresql.org/message-id/e1uvtfj-00079k...@gemulon.postgresql.org

Ha. Good point. There's actually a bit of a bug in the code there :)
What it does is limit the length to 80 - length('pgsql: $shortmsg'),
which is 64. It is supposed to limit it to 80 - length('pgsql: ')..
(Since it substitutes the actual commit message where $shortmsg is
found).

That's fixable though :)
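A minimal sketch of that off-by-template bug (assumed logic; the actual commit-email script is not shown here):

```python
# The email subject is built as "pgsql: " plus a truncated commit
# summary, aiming for at most 80 characters in total.
TEMPLATE = "pgsql: $shortmsg"
PREFIX = "pgsql: "

def buggy_limit() -> int:
    # Bug: measures the whole template, placeholder included,
    # so the limit becomes 80 - 16 = 64.
    return 80 - len(TEMPLATE)

def fixed_limit() -> int:
    # Intended: only the literal prefix counts, so 80 - 7 = 73.
    return 80 - len(PREFIX)

def subject(shortmsg: str, limit: int) -> str:
    # Substitute the (truncated) commit summary for $shortmsg.
    return PREFIX + shortmsg[:limit]
```

With the fix, a long summary yields a subject of exactly 80 characters instead of being cut at 71 (7 + 64).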


 Re your other point, github at least seems to elide at about 70 chars  - see
 https://github.com/postgres/postgres/commit/b42ea7981ce1e7484951a22662937541066d8647
 - where Joe used a very long first sentence rather than a short summary line.
 I don't know if gitweb could be induced to elide after a greater length - I
 bet it could fairly easily. There does seem to be lots of spare screen real
 estate on the commit summary and history pages, which I think is where this
 occurs.

Possibly. I can never find my way around that one though, and making
any modifications also has us ending up maintaining what's basically a
fork - unless there's already a config argument for it somewhere.


--
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/




Re: [HACKERS] Recovery target 'immediate'

2013-05-02 Thread Magnus Hagander
On Thu, May 2, 2013 at 8:55 AM, Simon Riggs si...@2ndquadrant.com wrote:
 On 26 April 2013 18:13, Heikki Linnakangas hlinnakan...@vmware.com wrote:
 On 26.04.2013 19:50, Magnus Hagander wrote:

  On Fri, Apr 26, 2013 at 6:43 PM, Simon Riggs si...@2ndquadrant.com
 wrote:

  On 26 April 2013 17:25, Heikki Linnakangas hlinnakan...@vmware.com
 wrote:

  Actually, from a usability point of view I think it would be nice to have
  just

 one setting, recovery_target. It's already somewhat confusing to have
 recovery_target_xid, recovery_target_time, and recovery_target_name,
 which
 are mutually exclusive, and recovery_target_inclusive which is just a
 modifier for the others. Maybe something like:

 recovery_target = 'xid 1234'
 recovery_target = 'xid 1234 exclusive'
 recovery_target = '2013-04-22 12:33'
 recovery_target = '2013-04-22 12:33 exclusive'
 recovery_target = 'consistent'
 recovery_target = 'name: daily backup'


 So now you want to change the whole existing API so it fits with your
 one new requirement?


 No, I think the above would be a usability improvement whether or not we add
 the new feature.


 I don't see the usability improvement. This is only being suggested to
 make one new addition look cleaner; there isn't a common gripe that
 the use of parameters is hard to use, other than their location and
 the ability to treat them as GUCs.

Actually, there is - I hear it quite often from people not so
experienced in PostgreSQL. Though in fairness, I'm not entirely sure
the new syntax would help - some of those need a tool to do it for
them, really (and such tools exist, I believe).

That said, there is one property that's very unclear now and that's
that you can only set one of recovery_target_time, recovery_target_xid
and recovery_target_name. But they can be freely combined with
recovery_target_timeline and recovery_target_inclusive. That's quite
confusing.



 This changes the existing API which will confuse people that know it
 and invalidate everything written in software and on wikis as to how
 to do it. That means all the in case of fire break glass
 instructions are all wrong and need to be rewritten and retested.

Yes, *that* is the main reason *not* to make the change. It has a
pretty bad cost in backwards compatibility loss. There is a gain, but
I don't think it outweighs the cost.



 It also introduces a single common datatype for such entries, where
 before we had that xids were numbers, names were text, so this new
 mechanism operates completely differently from all other GUC
 parameters.

 Plus its inconsistent, in that with xids you have 'xid 1234' whereas
 timestamps just say '2013-04-22' rather than 'timestamp 2013-04-22',
 or with names should they end in a colon or not. There's no clear
 differentiation between text for names and other keywords. Presumably
 we'll need a complex parser to sort that out.

I'm assuming that was just typos in Heikki's example. I'm sure he
meant them to be consistent.

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/




Re: [HACKERS] Recovery target 'immediate'

2013-04-26 Thread Magnus Hagander
On Fri, Apr 26, 2013 at 1:47 PM, Simon Riggs si...@2ndquadrant.com wrote:
 On 26 April 2013 11:29, Heikki Linnakangas hlinnakan...@vmware.com wrote:

 But there is also an operation to simply restore a backup.

 Just because a tool supports an imprecise definition of a restore,
 doesn't mean Postgres should encourage and support that.

 Restore a backup is more suited to filesystems where most files
 don't change much. And it's also a common user complaint: "I restored
 my backup but now I've lost my changes. Can you help?" That's not
 something that's been heard around here because we don't encourage
 foot-guns.

I think it makes perfect sense to have this, since we do guarantee the
backup is still consistent even if things *are* changing around. The lack
of an easy way to do this is probably the most common reason I've seen
for people using pg_dump instead of physical backups in the past.
pg_basebackup fixed it for the backup side of things, with the -x
option. This appears to be a suggestion to do that kind of restore
even if you have log-archive-style backups.

That said, maybe the easier choice for a *system* (such as v-thingy)
would be to simply do the full backup using pg_basebackup -x (or
similar), thereby not needing the log archive at all when restoring.
Yes, it makes the base backup slightly larger, but also much
simpler... As a bonus, your base backup would still work if you hosed
your log archive.

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/




Re: [HACKERS] Recovery target 'immediate'

2013-04-26 Thread Magnus Hagander
On Apr 26, 2013 4:38 PM, Robert Haas robertmh...@gmail.com wrote:

 On Fri, Apr 26, 2013 at 10:05 AM, Simon Riggs si...@2ndquadrant.com
wrote:
  Restore points are definitely the way to go here, this is what they
  were created for. Stopping at a labelled location has a defined
  meaning for the user, which is much better than just stop anywhere
  convenient, which I found so frightening.
 
  It should be straightforward to create a restore point with the same
  name as used in pg_start_backup('text');
 
  pg_basebackup backups would need to use a unique key, which is harder
  to achieve. If we write a WAL record at backup start that would make
  the starting LSN unique, so we could then use that for the restore
  point name for that backup.
 
  If people want anything else they can request an additional restore
  point at the end of the backup.

 I personally find this to be considerably more error-prone than
 Heikki's suggestion.  On the occasions when I have had the dubious
 pleasure of trying to do PITR recovery, it's quite easy to supply a
 recovery target that never actually gets matched - and then you
 accidentally recover all the way to the end of WAL.  This is not fun.
 Having a bulletproof way to say recover until you reach consistency
 and then stop is a much nicer API.  I don't think stop as soon as
 possible is at all the same thing as stop anywhere convenient.


Thinking some more about it, this could also be useful together with
pausing at the recovery target to get a quick look at the state of things
before recovering further. I assume that would work as well, since it would
be a recovery target like the others..

/Magnus


Re: [HACKERS] Recovery target 'immediate'

2013-04-26 Thread Magnus Hagander
On Fri, Apr 26, 2013 at 6:43 PM, Simon Riggs si...@2ndquadrant.com wrote:
 On 26 April 2013 17:25, Heikki Linnakangas hlinnakan...@vmware.com wrote:
 On 26.04.2013 19:05, Simon Riggs wrote:

  On 26 April 2013 16:38, Robert Haas robertmh...@gmail.com  wrote:

  On Fri, Apr 26, 2013 at 11:35 AM, Simon Riggs si...@2ndquadrant.com
 wrote:

 Given that I was describing how we might implement Heikki's
 suggestion, I find this comment confusing.

 Please explain.


 Heikki's suggestion is simply to have a mode that stops as soon as
 consistency is reached.  The server already knows (from the backup
 label) what the consistency point is, so there's no need to add a
 restore point or anything else to the WAL stream to implement what
 he's talking about.


 Using restore points just puts into use the facility that is already
 best practice to use, put there for just this kind of situation.
 I guess you could do recovery_target_name = '$consistent'

 Doing it the other way means you need to add a new kind of recovery
 target to the API just for this.
 recovery_target_immediate = on


 Sounds good to me.

 Actually, from a usability point of view I think it would be nice to have just
 one setting, recovery_target. It's already somewhat confusing to have
 recovery_target_xid, recovery_target_time, and recovery_target_name, which
 are mutually exclusive, and recovery_target_inclusive which is just a
 modifier for the others. Maybe something like:

 recovery_target = 'xid 1234'
 recovery_target = 'xid 1234 exclusive'
 recovery_target = '2013-04-22 12:33'
 recovery_target = '2013-04-22 12:33 exclusive'
 recovery_target = 'consistent'
 recovery_target = 'name: daily backup'

 So now you want to change the whole existing API so it fits with your
 one new requirement?

I like that newer API suggestion better than what we have now - though
it can perhaps be improved even more. But I definitely don't think
it's worth breaking backwards compatibility for it. There are lots of
tools and scripts and whatnot out there that use the current API. I
think we need a bigger improvement than just a cleaner syntax to break
those.
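For reference, the existing API being defended here lives in recovery.conf and looks roughly like this (illustrative values; the recovery_target_* targets are mutually exclusive, so only one is set):

```
restore_command = 'cp /mnt/archive/%f %p'
recovery_target_time = '2013-04-22 12:33:00'
recovery_target_inclusive = false
recovery_target_timeline = 'latest'
```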


--
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/




Re: [HACKERS] TODO links broken?

2013-04-17 Thread Magnus Hagander
On Wed, Apr 17, 2013 at 2:13 PM, Robert Haas robertmh...@gmail.com wrote:
 On Wed, Apr 17, 2013 at 12:21 AM, Stephen Scheck
 singularsyn...@gmail.com wrote:
 Many of the links in the TODO wiki page result in a page not found error.
 Is this page up-to-date?
 Can anything be inferred about the status of these items from the broken
 link?

 I think what we can infer is that the new archives code is broken.  I
 hope someone is planning to fix that.  If there's been some decision

Yes. We can infer that. It makes it a whole lot easier to fix
something with better bug reports than that, of course, as I'm sure you
(Robert in this case, not Stephen) are generally aware.

I've reverted a patch that was applied a few days ago that dealt with
how URLs are parsed, and I think that's the one that's responsible.
But it would be good to have an actual example of what didn't work,
because the links I tried all worked...

 made that we don't have to support the historical URLs for our
 archives pages, I think that's a really bad plan; those links are in a
 lot more places than just the Todo.

No, the plan has always been to support those. There are no plans to
remove that.

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/




Re: [HACKERS] TODO links broken?

2013-04-17 Thread Magnus Hagander
On Wed, Apr 17, 2013 at 3:14 PM, Robert Haas robertmh...@gmail.com wrote:
 On Wed, Apr 17, 2013 at 8:48 AM, Magnus Hagander mag...@hagander.net wrote:
 Yes. We can infer that. It makes it a whole lot easier to fix
 something with better bug repors than that, of course, as I'm sure you
 (Robert in this case, not Stephen) are generally aware of.

 I've reverted a patch that was applied a few days ago that dealt with
 how URLs are parsed, and I think that's the one that's responsible.
 But it would be good to have an actual example of what didn't work,
 because the links i tried all worked...

 Hmm.  Sorry for the lack of detail.  I assumed the problem was obvious
 and widespread because I clicked on the first link I saw in the Todo
 and it didn't work.  But after clicking a bunch more links from the
 Todo, I only found three that fail.

 http://archives.postgresql.org/pgsql-hackers/2008-12/msg01340.php
 http://archives.postgresql.org/pgsql-hackers/2011-03/msg01831.php


Works now, so that seems to have been fixed by reverting the
patch. It might be a while before they all recover due to caching
issues, but both of these work for me now, which seems to indicate the
fix is the right one.


 http://www.postgresql.org/message-id/4B577E9F.8000505%40dunslane.net/

It works with %40 for me now, so it might have been related - can you
check if it is still an issue for you? It might be different in
different browsers.

--
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/




Re: [HACKERS] Extra security measures for next week's releases

2013-04-03 Thread Magnus Hagander
On Wed, Apr 3, 2013 at 12:09 PM,  roadrunn...@gmx.at wrote:
 Am 28.03.2013 18:03, schrieb Tom Lane:

 The core committee has decided that one of the security issues due to be
 fixed next week is sufficiently bad that we need to take extra measures
 to prevent it from becoming public before packages containing the fix
 are available.  (This is a scenario we've discussed before, but never
 had to actually implement.)


 8.3 has reached EOL in February 2013, I guess there will be no fix for 8.3,
 right?

That is correct.

Some distributions may backpatch fixes manually, but there will be no
official patch for 8.3.


-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/




[HACKERS] Orphaned files after initdb

2013-04-01 Thread Magnus Hagander
Maybe my thinking is just missing something obvious here. But looking
at a fresh initdb, I'm seeing a number of files in the data directory
for this database that don't have corresponding entries in pg_class. I
looked for entries based on relfilenode in pg_class - am I missing
some other way we put files in there?

I'm seeing the same on both head and 9.1 as an example, but different oids.

I feel I'm either missing something obvious, or we have a problem
here? And somehow the first one seems more likely..

The query I ran (yeah, I'm sure it can be written more prettily, but this
was quick-n-dirty):

with t as (
   select  * from pg_ls_dir('datadir/base/1') l(f)
   where f not in (
select relfilenode::text from pg_class
union all select relfilenode::text || '_vm' from pg_class
union all select relfilenode::text || '_fsm' from pg_class)
)
select f,
   size,
   modification
from t, pg_stat_file('datadir/base/1/' || f)
order by 1;

Which gives me:
        f        |  size  |      modification
-----------------+--------+------------------------
 11809           |  65536 | 2013-04-01 20:34:45+02
 11809_fsm       |  24576 | 2013-04-01 20:34:45+02
 11809_vm        |   8192 | 2013-04-01 20:34:45+02
 11811           |  16384 | 2013-04-01 20:34:45+02
 11812           |  32768 | 2013-04-01 20:34:45+02
 11821           | 344064 | 2013-04-01 20:34:45+02
 11821_fsm       |  24576 | 2013-04-01 20:34:45+02
 11821_vm        |   8192 | 2013-04-01 20:34:45+02
 11823           | 131072 | 2013-04-01 20:34:45+02
 11824           |  90112 | 2013-04-01 20:34:45+02
 11825           | 507904 | 2013-04-01 20:34:45+02
 11825_fsm       |  24576 | 2013-04-01 20:34:45+02
 11825_vm        |   8192 | 2013-04-01 20:34:45+02
 11827           |      0 | 2013-04-01 20:34:45+02
 11829           |   8192 | 2013-04-01 20:34:45+02
 11830           |  73728 | 2013-04-01 20:34:45+02
 11831           | 196608 | 2013-04-01 20:34:45+02
 11832           |  65536 | 2013-04-01 20:34:45+02
 11832_fsm       |  24576 | 2013-04-01 20:34:45+02
 11832_vm        |   8192 | 2013-04-01 20:34:45+02
 11834           |  16384 | 2013-04-01 20:34:45+02
 11835           |  32768 | 2013-04-01 20:34:45+02
 pg_filenode.map |    512 | 2013-04-01 20:34:45+02
 PG_VERSION      |      4 | 2013-04-01 20:34:44+02
(24 rows)



The filenode map and PG_VERSION obviously make sense, but where are
the others from?

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/




Re: [HACKERS] Orphaned files after initdb

2013-04-01 Thread Magnus Hagander
On Mon, Apr 1, 2013 at 9:09 PM, Andres Freund and...@2ndquadrant.com wrote:
 Hi,

 On 2013-04-01 20:43:36 +0200, Magnus Hagander wrote:
 Maybe my thinking is just missing something obvious here. But looking
 at a fresh initdb, I'm seeing a number of files in the data directory
 for this database that don't have corresponding entries in pg_class. I
 looked for entries based on relfilenode in pg_class - am I missing
 some other way we put files in there?

 I'm seeing the same on both head and 9.1 as an example, but different oids.

 I feel I'm either missing something obvious, or we have a problem
 here? And somehow the first one seems more likely..

 The query I ran (yeah, i'm sure it can be written prettier, but this
 was quick-n-dirty):

 with t as (
select  * from pg_ls_dir('datadir/base/1') l(f)
where f not in (
 select relfilenode::text from pg_class
 union all select relfilenode::text || '_vm' from pg_class
 union all select relfilenode::text || '_fsm' from pg_class)
 )
 select f,
size,
modification
 from t, pg_stat_file('datadir/base/1/' || f)
 order by 1;

 You're missing nailed tables which don't have a valid relfilenode (but
 InvalidOid/0) in pg_class.relfilenode. You need to query the relfilenode like
 SELECT pg_relation_filenode(pg_class.oid) FROM pg_class.

Ha. Of course. I knew it was me getting it wrong.

You'd be extra happy if only you knew how many pretty senior pg folks
I pestered about that one on IRC before posting here :D

Thanks!

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/




Re: [HACKERS] Orphaned files after initdb

2013-04-01 Thread Magnus Hagander
On Mon, Apr 1, 2013 at 10:53 PM, Andres Freund and...@2ndquadrant.com wrote:
 On 2013-04-01 21:24:06 +0200, Magnus Hagander wrote:
 On Mon, Apr 1, 2013 at 9:09 PM, Andres Freund and...@2ndquadrant.com wrote:
  Hi,
 
  On 2013-04-01 20:43:36 +0200, Magnus Hagander wrote:
  Maybe my thinking is just missing something obvious here. But looking
  at a fresh initdb, I'm seeing a number of files in the data directory
  for this database that don't have corresponding entries in pg_class. I
  looked for entries based on relfilenode in pg_class - am I missing
  some other way we put files in there?
 
  I'm seeing the same on both head and 9.1 as an example, but different 
  oids.
 
  I feel I'm either missing something obvious, or we have a problem
  here? And somehow the first one seems more likely..
 
  The query I ran (yeah, i'm sure it can be written prettier, but this
  was quick-n-dirty):
 
  with t as (
 select  * from pg_ls_dir('datadir/base/1') l(f)
 where f not in (
  select relfilenode::text from pg_class
  union all select relfilenode::text || '_vm' from pg_class
  union all select relfilenode::text || '_fsm' from pg_class)
  )
  select f,
 size,
 modification
  from t, pg_stat_file('datadir/base/1/' || f)
  order by 1;
 
  You're missing nailed tables which don't have a valid relfilenode (but
  InvalidOid/0) in pg_class.relfilenode. You need to query the relfilenode
  like
  SELECT pg_relation_filenode(pg_class.oid) FROM pg_class.

 Ha. Of course. I knew it was me getting it wrong.

 You'd get extra happy if only you knew how many pretty senior pg folks
 i pestered about that one on irc before posting here :D

 Most of those probably didn't spend too much time on developing wal-logical
 changes transformations ;)

 For the real reason to send this email: even if you add
 pg_relation_filenode() to the above query, it's still not correct. You
 also need to disambiguate by tablespace. Just in case you're using it
 for something more interesting than a freshly initdbed cluster.

Right. I was, but with a single tablespace :)

You also need to add an IS NOT NULL to the definitions, or it's always
going to return zero rows (just in case somebody is picking up the
query).
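Putting both corrections together, the query might look like this (a sketch; still assumes a single tablespace, as in the original):

```sql
-- pg_relation_filenode() covers nailed catalogs whose
-- pg_class.relfilenode is 0, and returns NULL for relations without
-- storage; the IS NOT NULL filter matters because a single NULL in a
-- NOT IN list makes the whole query return zero rows.
with rf as (
    select pg_relation_filenode(oid)::text as fn
    from pg_class
    where pg_relation_filenode(oid) is not null
),
t as (
    select * from pg_ls_dir('datadir/base/1') l(f)
    where f not in (
        select fn from rf
        union all select fn || '_vm' from rf
        union all select fn || '_fsm' from rf)
)
select f, size, modification
from t, pg_stat_file('datadir/base/1/' || f)
order by 1;
```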

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/




Re: [HACKERS] Getting to 9.3 beta

2013-03-29 Thread Magnus Hagander
On Fri, Mar 29, 2013 at 4:05 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Bruce Momjian br...@momjian.us writes:
 Our final 9.3 commit-fest has exceeded the two-month mark, so it is
 time to start targeting a date to close it and get to 9.3 beta.  I see
 25 items still needing attention before we can close it:

   https://commitfest.postgresql.org/action/commitfest_view?id=17

 What is a reasonable timeframe to target for completion of these items?

 TBH, once Andrew commits the JSON patch, I wouldn't have a problem with
 moving all the rest to Returned With Feedback or the next CF.  None of
 the others seem to me to be close-to-committable (with the possible
 exception of the Max LSN patch, which I've not looked at), and April is
 not the time to be doing development.

I haven't looked through too many of them. As for the one I have on
there myself, the pg_retainxlog one, I think the consensus was to turn
it into a documentation patch - and we can keep adding doc patches
into beta.

But, to the point. It might be a decent starting point to at least
move all patches that are waiting on author and don't get almost
daily status updates (or have a very clear statement from the author on
when an update can be expected). That'll cut the list down quite a bit
I think, and should make it easier to push harder on the others
perhaps.


 Next week is going to be tied up with the back-branch releases, but
 maybe we could target beta for the week after?  The main gating factor
 at this point really would be how quickly we could write some draft
 release notes, so people know what to test.

I think we need to give it at least one more week before we can target
a beta. Packagers will if anything have *more* work to deal with these
backbranch releases than usual, and asking them to push another one
just one (or even two) weeks after that would be rushing it
unnecessarily...

Of course, that doesn't prevent from starting work on the release
notes meanwhile :)

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/




Re: [HACKERS] Interesting post-mortem on a near disaster with git

2013-03-25 Thread Magnus Hagander
On Mon, Mar 25, 2013 at 7:07 PM, Stefan Kaltenbrunner
ste...@kaltenbrunner.cc wrote:
 On 03/24/2013 11:22 PM, Andrew Dunstan wrote:

 On 03/24/2013 06:06 PM, Michael Paquier wrote:
  On Mon, Mar 25, 2013 at 12:52 AM, Tom Lane t...@sss.pgh.pa.us wrote:

 Over the weekend, KDE came within a gnat's eyelash of losing *all*
 their authoritative git repos, despite having seemingly-extensive
 redundancy.  Read about it here:
 http://jefferai.org/2013/03/24/too-perfect-a-mirror/

 It is really great that KDE people are actually sharing this
 experience. This is really profitable for other projects as well as
 individuals.
 And thanks for sharing it here.


 We should think about protecting our own repo a bit better,
 especially
 after the recent unpleasantness with a bogus forced update.  The idea
 of having clones that are deliberately a day or two behind seems
 attractive ...

 Just an idea here: why not add a new subdomain in postgresql.org
 for mirrors of the official GIT repository
 similar to the buildfarm?
 People registered in this service could publish themselves mirrors and
 decide by themselves the delay their
 clone keeps with the parent repo. The scripts used by each mirror
 maintainer (for backup, sync repo with
 a given delay) could be centralized in a way similar to buildfarm code
 so as everybody in the community could
 use it and publish it if they want.

 Also, the mirrors published should be maintained by people that are
 well-known inside the community,
 and who would not add extra commits which would make the mirror
 out-of-sync with the parent repo.

 Such an idea is perhaps too much if the point is to maintain 2-3
 mirrors of the parent repo, but gives
 enough transparency to actually know where the mirrors are and what is
 the sync delay maintained.



 This strikes me as being overkill. The sysadmins seem to have it covered.

 Back when we used CVS, for quite a few years I kept 7-day rolling
 snapshots of the CVS repo, against just such a difficulty as this. But
 we seem to be much better organized with infrastructure these days, so I
 haven't done that for a long time.

 Well, there is always room for improvement (and for learning from
 others) - but I agree that this proposal seems way overkill. If people
 think we should keep online delayed mirrors, we certainly have the
 resources to do that on our own if we want though...

Yeah, definitely.

It's also interesting to note that one of the things they do is to
stop using mirrored clones. The fact that we *don't* use mirrored
clones for our anon repository is exactly how we caught the issue
caused by the invalid push that Kevin did a short while ago...

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Materialized views vs event triggers missing docs?

2013-03-20 Thread Magnus Hagander
The table at 
http://www.postgresql.org/docs/devel/static/event-trigger-matrix.html
does not include things like CREATE MATERIALIZED VIEW or REFRESH
MATERIALIZED VIEW, but they certainly seem to work?

Just a missing doc patch, or is there something in the code that's not
behaving as intended?

If it's just a doc thing - perhaps this is a table we should somehow
try to autogenerate?

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/




Re: [HACKERS] [pgsql-advocacy] Call for Google Summer of Code mentors, admins

2013-03-20 Thread Magnus Hagander
On Mar 20, 2013 11:14 PM, Dimitri Fontaine dimi...@2ndquadrant.fr wrote:

 Atri Sharma atri.j...@gmail.com writes:
  We can use a scheduling algorithm, and can define a pool of tasks as
  well as a time constraint for the amount of time which can be used for
  running the tasks. Then, a scheduling algorithm can pick tasks from the
  pool based on priorities and the time duration of a task. I can see a
  dynamic programming solution to this problem.

 I think mcron already implements it all and is made to be embedded into
 a larger program.


As long as your larger program is GPL. Not even LGPL on that one. I'd think
that's a killer for that idea...

/Magnus


Re: [HACKERS] backward incompatible pg_basebackup and pg_receivexlog

2013-03-19 Thread Magnus Hagander
On Tue, Mar 19, 2013 at 11:39 AM, Heikki Linnakangas
hlinnakan...@vmware.com wrote:
 On 19.03.2013 04:42, Peter Eisentraut wrote:

 pg_basebackup and pg_receivexlog from 9.3 won't work with earlier
 servers anymore.  I wonder if this has been fully thought through.  We
 have put in a lot of effort to make client programs compatible with many
 server versions as well as keeping the client/server protocol compatible
 across many versions.  Both of these assumptions are now being broken,
 which will result in all kinds of annoyances.

 It seems to me that these tools could probably be enhanced to understand
 both old and new formats.


 Yes, this was discussed, and the consensus was to break
 backwards-compatibility in 9.3, so that we can clean up the protocol to be
 architecture-independent. That makes it easier to write portable clients,
 from 9.3 onwards. See the thread ending at
 http://www.postgresql.org/message-id/4fe2279c.2070...@enterprisedb.com.


 Also, using the old tools against new server versions either behaves
 funny or silently appears to work, both of which might be a problem.


 Hmm, it would be good to fix that. I wonder how, though. The most
 straightforward way would be to add an explicit version check in the
 clients, in backbranches. That would give a nice error message, but that
 would only help with new minor versions.

Still better to do it in a backbranch than not at all. At least we
are then nicer to the ones that do keep up with upgrades, which we
recommend they do...


 I think if we are documenting the replication protocol as part of the
 frontend/backend protocol and are exposing client tools that use it,
 changes need to be done with the same rigor as other protocol changes.


 Agreed. The plan is that we're going to be more careful with it from now on.


 As far as I can tell, there is no separate version number for the
 replication part of the protocol, so either there needs to be one or the
 protocol as a whole needs to be updated.


 Good point.

 I propose that we add a version number, and call the 9.3 version version 2.
 Let's add a new field to the result set of the IDENTIFY_SYSTEM command for
 the replication protocol version number. The version number should be bumped
 if the replication protocol is changed in a non-backwards-compatible way.

+1.

 That includes changes to the messages sent in the COPY-both mode, after the
 START_REPLICATION command. If we just add new commands, there's no need to
 bump the version; a client can still check the server version number to
 determine if a command exists or not.

Sounds good.


 We could also try to support old client + new server combination to some
 extent by future-proofing the protocol a bit. We could specify that the
 client should ignore any message types that it does not understand, and also
 add a header length field to the WalData message ('w'), so that we can add
 new header fields to it that old clients can just ignore. That way we can
 keep the protocol version unchanged if we just add some optional stuff to
 it. I'm not sure how useful that is in practice though; it's not
 unreasonable that you must upgrade to the latest client, as long as the new
 client works with old server versions.

I think that's quite reasonable, as long as we detect it, and can give
a nice error message telling the user how to deal with it.


-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/




Re: [HACKERS] leaking lots of unreferenced inodes (pg_xlog files?), maybe after moving tables and indexes to tablespace on different volume

2013-03-13 Thread Magnus Hagander
On Mar 13, 2013 3:04 AM, Tom Lane t...@sss.pgh.pa.us wrote:

 Palle Girgensohn gir...@freebsd.org writes:
  ... I got lots of space freed
  up, but it seems that after that the disk usage grows linearly (it seems
  to leave many inodes unreferenced).

 Hm.  We've seen issues in the past with PG processes failing to close
 no-longer-useful files promptly, but ...

  Strange thing is I cannot find any open files.

 ... that suggests there's something else going on.

  The unreferenced inodes are almost exclusively around 16 MB in size, so
  i.e. they would most probably all be pg_xlog files.

 Have you got any sort of WAL archiving active, and if so maybe that's
 holding onto WAL files?  Not that it's clear how come lsof wouldn't
 tattle on an archiving process either.

  Stopping postgresql briefly did not help, I tried that.

 That seems to point the finger at some non-postgres cause.  I confess
 I can't guess what.


Yeah, unreferenced inodes with no open files, and only discoverable with
fsck, sounds like a filesystem bug to me. Particularly since it showed up
just after an operating system upgrade, and doesn't go away with a postgres
restart...

/Magnus


Re: [HACKERS] Reporting hba lines

2013-03-10 Thread Magnus Hagander
On Sun, Jan 20, 2013 at 5:56 PM, Dean Rasheed dean.a.rash...@gmail.com wrote:
 On 5 January 2013 16:58, Magnus Hagander mag...@hagander.net wrote:
 Attached is an updated version of the patch, per the comments from Tom
 and rebased on top of the current master. Since it's been a long time
 ago, and some code churn in the area, another round of review is
 probably a good thing...


 I took a look at this patch, and it seems to be in pretty good shape.
 It applies cleanly to head, and seems to work as advertised/discussed.
 I have a couple of comments on the code...


 In next_token(), in the case of an overlong token, this change looks wrong:

 /* Discard remainder of line */
 !   while ((c = getc(fp)) != EOF && c != '\n')
 !   ;
 break;
 }

 --- 188,195 
 errmsg("authentication file token too long, "
 "skipping: \"%s\"",
   start_buf)));
 /* Discard remainder of line */
 !   while ((c = (*(*lineptr)++)) != '\0' && c != '\n')
 !   (*lineptr)++;
 break;

 It appears to be incrementing lineptr twice per loop iteration, so it
 risks missing the EOL/EOF and running off the end of the buffer.


 Nitpicking, at the end of the loop you have:

 !   c = (**lineptr);
 !   (*lineptr)++;

 perhaps for consistency with the preceding code, that should be c =
 (*(*lineptr)++). Personally, I'd also get rid of the outer sets of
 brackets in each of these expressions and just write c =
 *(*lineptr)++, since I don't think they add anything.


 Finally, the comment block for tokenize_file() needs updating, now
 that it returns three lists.

Thanks for the review - I've updated the patch per your comments
(minus the changing of the outer set of brackets - kept that the way
it was for consistency, but that can always be changed later if people
prefer that way), and will push it as it now stands.

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/




Re: [HACKERS] Matview patch added rewriteDefine.c.orig to the repository

2013-03-04 Thread Magnus Hagander
On Mar 4, 2013 10:55 AM, Boszormenyi Zoltan z...@cybertec.at wrote:

 2013-03-04 08:02 keltezéssel, Simon Riggs írta:

 On 4 March 2013 06:39, Boszormenyi Zoltan z...@cybertec.at wrote:

 commit 3bf3ab8c563699138be02f9dc305b7b77a724307
 (Add a materialized view relations.) added this:

   src/backend/rewrite/rewriteDefine.c.orig| 945 +...
 ...
   create mode 100644 src/backend/rewrite/rewriteDefine.c.orig

 Committers should be more careful if they want to do work late Sunday.
 I know, Monday begins on Saturday[1], but still... ;-)

 AFAICS Kevin fixed this last night...


 Did he? I just did this:

 [zozo@localhost postgresql]$ git branch -a
 * master
   remotes/origin/HEAD - origin/master
   remotes/origin/REL2_0B
   remotes/origin/REL6_4
   remotes/origin/REL6_5_PATCHES
   remotes/origin/REL7_0_PATCHES
   remotes/origin/REL7_1_STABLE
   remotes/origin/REL7_2_STABLE
   remotes/origin/REL7_3_STABLE
   remotes/origin/REL7_4_STABLE
   remotes/origin/REL8_0_STABLE
   remotes/origin/REL8_1_STABLE
   remotes/origin/REL8_2_STABLE
   remotes/origin/REL8_3_STABLE
   remotes/origin/REL8_4_STABLE
   remotes/origin/REL8_5_ALPHA1_BRANCH
   remotes/origin/REL8_5_ALPHA2_BRANCH
   remotes/origin/REL8_5_ALPHA3_BRANCH
   remotes/origin/REL9_0_ALPHA4_BRANCH
   remotes/origin/REL9_0_ALPHA5_BRANCH
   remotes/origin/REL9_0_STABLE
   remotes/origin/REL9_1_STABLE
   remotes/origin/REL9_2_STABLE
   remotes/origin/Release_1_0_3
   remotes/origin/WIN32_DEV
   remotes/origin/ecpg_big_bison
   remotes/origin/master
 [zozo@localhost postgresql]$ cat .git/config
 [core]
 repositoryformatversion = 0
 filemode = true
 bare = false
 logallrefupdates = true
 [remote origin]
 fetch = +refs/heads/*:refs/remotes/origin/*
 url = git://git.postgresql.org/git/postgresql.git
 [branch master]
 remote = origin
 merge = refs/heads/master
 [zozo@localhost postgresql]$ git pull origin
 Already up-to-date.
 [zozo@localhost postgresql]$ find . -name *.orig
 ./src/backend/rewrite/rewriteDefine.c.orig

 The main repo at
http://git.postgresql.org/gitweb/?p=postgresql.git;a=summary
 only shows one followup commit (just as in my local copy):
 bc61878682051678ade5f59da7bfd90ab72ce13b
 Fix map_sql_value_to_xml_value() to treat domains like their base types.

 In which branch was this fixed?

The repository is currently broken. There's a thread on www about it, and
also see the email to hackers a few hours ago telling committers to stop
pushing until it's fixed.

/Magnus


[HACKERS] Bug in tm2timestamp

2013-03-04 Thread Magnus Hagander
AFAICT, there's a bug in tm2timestamp(). You can't do this:

postgres=# select '1999-12-31T24:00:00'::timestamptz;
ERROR:  timestamp out of range: 1999-12-31T24:00:00

But that's a perfectly legal date. It works fine for any other year -
and AFAICT this is because of the POSTGRES_EPOCH_JDATE being
2000-01-01.

The check at line 1693 and following is reached with *result=0 and date=-1
in this case, which AFAICT is fine.

I'm not entirely sure what that check is guarding against (which may
be because I just came off a flight from canada and don't really have
the brain in full gear ATM). Perhaps it just needs an extra exclusion
for this special case?

--
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/




Re: [HACKERS] Hold all commits!

2013-03-04 Thread Magnus Hagander
On Mon, Mar 4, 2013 at 8:34 AM, Magnus Hagander mag...@hagander.net wrote:
 Committers,

 please hold *all* pushes to the PostgreSQL repository until we have
 resolved the issue mentioned in
 http://www.postgresql.org/message-id/cabuevezm2f8qaxvds9me-41l5bqeego4duoscqhc0c6v05h...@mail.gmail.com.

 Pushing anything at this point is likely to give us more work in
 cleaning it up, and you may need to re-push it again anyway. So
 please just hold off until we've sorted out that issue.

Ok, should be good to go again. Committers may run into issues with
non-fast-forward pulls since the history in that repository was
slightly changed, but a regular rebase should probably work. But
please double-check everything before pushing anything, of course!

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/




Re: [HACKERS] Bug in tm2timestamp

2013-03-04 Thread Magnus Hagander
On Mon, Mar 4, 2013 at 8:54 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Magnus Hagander mag...@hagander.net writes:
 AFAICT, there's a bug in tm2timestamp(). You can't do this:
 postgres=# select '1999-12-31T24:00:00'::timestamptz;
 ERROR:  timestamp out of range: 1999-12-31T24:00:00

 But that's a perfectly legal date. It works fine for any other year -
 and AFAICT this is because of the POSTGRES_EPOCH_JDATE being
 2000-01-01.

  The check at line 1693 and following is reached with *result=0 and date=-1
  in this case, which AFAICT is fine.

 Meh.  Looks like I fixed this back in 2003, but didn't fix it right.
 Will try again.

 I'm not entirely sure what that check is guarding against (which may
 be because I just came off a flight from canada and don't really have
 the brain in full gear ATM). Perhaps it just needs an extra exclusion
 for this special case?

 I think we should just tweak the tests so that if date is 0 or -1,
 we don't consider it an overflow even if the time adjustment pushes
 the result to the other sign.

That's pretty much what I thought - thanks for confirming, and for
putting the fix in!


 BTW, it strikes me that it's a bit silly to go to all this effort
 here, and then ignore the possibility of overflow in the dt2local
 adjustment just below.  But we'd have to change the API of that
 function, which I don't especially feel like doing right now.

Yeah, and it wouldn't be a good backpatchable thing anyway...

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/




[HACKERS] Hold all commits!

2013-03-03 Thread Magnus Hagander
Committers,

please hold *all* pushes to the PostgreSQL repository until we have
resolved the issue mentioned in
http://www.postgresql.org/message-id/cabuevezm2f8qaxvds9me-41l5bqeego4duoscqhc0c6v05h...@mail.gmail.com.

Pushing anything at this point is likely to give us more work in
cleaning it up, and you may need to re-push it again anyway. So
please just hold off until we've sorted out that issue.

--
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/




Re: [HACKERS] [DOCS] Contrib module xml2 status

2013-02-21 Thread Magnus Hagander
On Thu, Feb 21, 2013 at 6:39 PM, Robert Haas robertmh...@gmail.com wrote:
 On Wed, Feb 20, 2013 at 11:58 AM, Ian Lawrence Barwick
 barw...@gmail.com wrote:
 I'm not sure if this is a documentation or hackers issue, but the
 documentation page for contrib module xml2 refers to PostgreSQL 8.4 in the
 future tense:

It is planned that this module will be removed in PostgreSQL 8.4 in
 favor of the newer standard API

 http://www.postgresql.org/docs/devel/static/xml2.html

 Are there any plans to remove this module by a forseeable date?

 Nope.  I have repeatedly been voted down on removing it, and I've also
 been repeatedly voted down on removing the deprecation text.  Could we
 at least agree on changing the deprecation text to say "This module is
 deprecated and may be removed in a future release"?

Not reopening the actual discussion about removing it, but assuming
we're not, strong +1 on changing the deprecation message. And don't
forget to backpatch the change so it shows up in the old versions of
the docs as well.

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/




Re: [HACKERS] streaming header too small

2013-02-20 Thread Magnus Hagander
On Feb 20, 2013 11:29 AM, Heikki Linnakangas hlinnakan...@vmware.com
wrote:

 On 20.02.2013 02:11, Selena Deckelmann wrote:

 So, I just ran into a similar issue backing up a 9.2.1 server using
 pg_basebackup version 9.2.3:

 pg_basebackup: starting background WAL receiver
 pg_basebackup: streaming header too small: 25


 I've had it happen two times in a row. I'm going to try again...

 But -- what would be helpful here? I can recompile pg_basebackup with
more
 debugging...


 Hmm, 25 bytes would be the size of the WAL data packet, if it contains
just the header and no actual WAL data. I think pg_basebackup should accept
that - it's not unreasonable that the server might send such a packet
sometimes.

 Looking at the walsender code, it's not supposed to ever send such a
packet. But I suspect there's one corner-case where it might: if the
current send location is at an xlogid boundary, so that we previously sent
the last byte from the last WAL segment in the previous logical xlog file,
and the WAL flush position points to byte 0 in the beginning of the new WAL
file. Both of those positions are in fact the same thing, but we have two
different ways to represent the same position. For example, if we've
already sent up to WAL position (sentPtr in walsender.c):

 xlogid = 4
 xrecoff = XLogFileSize

 and GetFlushRecPtr() returns:

 xlogid = 5
 xrecoff = 0

 Those both point to the same position. But the check in XLogSend that
decides if there is any work to do uses XLByteLE() to check if they are
equal, and XLByteLE() treats the latter to be greater than the former. So,
in that situation, XLogSend() would decide that it has work to do, but
there actually isn't, so it would send 0 bytes of WAL data.

 I'm not sure how GetFlushRecPtr() could return such a position, though.
But I'm also not convinced that it can't happen.

 It would be fairly easy to fix walsender to not send anything in that
situation. It would also be easy to fix pg_basebackup to not treat it as an
error. We probably should do both.

 In 9.3, the XLogRecPtr representation changed so that there is only one
value for a boundary position like that, so this is a 9.2-only issue.

That does sound like a reasonable explanation and fix. Heck, probably
enough to just put the fix in pg_basebackup since it's gone in 9.3 anyway.

But I'd really like to confirm this is the actual situation before
considering it fixed, since it's clearly very intermittent.

Selena, was this reasonably reproducible for you? Would it be possible to
get a network trace of it to show if that's the kind of packet coming
across, or to hack up pg_basebackup to print the exact position it was
at when the problem occurred?

/Magnus


Re: [HACKERS] streaming header too small

2013-02-20 Thread Magnus Hagander
On Wed, Feb 20, 2013 at 4:53 PM, Selena Deckelmann sel...@chesnok.com wrote:



 On Wed, Feb 20, 2013 at 6:23 AM, Magnus Hagander mag...@hagander.net
 wrote:


 On Feb 20, 2013 11:29 AM, Heikki Linnakangas hlinnakan...@vmware.com
 wrote:
 
  On 20.02.2013 02:11, Selena Deckelmann wrote:
 
  So, I just ran into a similar issue backing up a 9.2.1 server using
  pg_basebackup version 9.2.3:
 
  pg_basebackup: starting background WAL receiver
  pg_basebackup: streaming header too small: 25
 
 
  I've had it happen two times in a row. I'm going to try again...
 
  But -- what would be helpful here? I can recompile pg_basebackup with
  more
  debugging...
 
 
  Hmm, 25 bytes would be the size of the WAL data packet, if it contains
  just the header and no actual WAL data. I think pg_basebackup should accept
  that - it's not unreasonable that the server might send such a packet
  sometimes.
 
  Looking at the walsender code, it's not supposed to ever send such a
  packet. But I suspect there's one corner-case where it might: if the 
  current
  send location is at an xlogid boundary, so that we previously sent the last
  byte from the last WAL segment in the previous logical xlog file, and the
  WAL flush position points to byte 0 in the beginning of the new WAL file.
  Both of those positions are in fact the same thing, but we have two
  different ways to represent the same position. For example, if we've 
  already
  sent up to WAL position (sentPtr in walsender.c):
 
  xlogid = 4
  xrecoff = XLogFileSize
 
  and GetFlushRecPtr() returns:
 
  xlogid = 5
  xrecoff = 0
 
  Those both point to the same position. But the check in XLogSend that
  decides if there is any work to do uses XLByteLE() to check if they are
  equal, and XLByteLE() treats the latter to be greater than the former. So,
  in that situation, XLogSend() would decide that it has work to do, but 
  there
  actually isn't, so it would send 0 bytes of WAL data.
 
  I'm not sure how GetFlushRecPtr() could return such a position, though.
  But I'm also not convinced that it can't happen.
 
  It would be fairly easy to fix walsender to not send anything in that
  situation. It would also be easy to fix pg_basebackup to not treat it as an
  error. We probably should do both.
 
  In 9.3, the XLogRecPtr representation changed so that there is only one
  value for a boundary position like that, so this is a 9.2-only issue.


 Thanks for the debugging, Heikki!


 That does sound like a reasonable explanation and fix. Heck, probably
 enough to just put the fix in pg_basebackup since it's gone in 9.3 anyway.

 But I'd really like to confirm this is the actual situation before
 considering it fixed, since it's clearly very intermittent.

  Selena, was this reasonably reproducible for you? Would it be possible to
  get a network trace of it to show if that's the kind of packet coming
  across, or to hack up pg_basebackup to print the exact position it was at
  when the problem occurred?


 This is happening with a very busy 700 GB system, so I'm going to rule a
 network trace out for the moment. The error is occurring sometime in the
 middle of the backup. Last time it was at least 30-40 minutes into a 2 hr
 backup.

Yikes. Yeah, that doesn't seem like the best choice of method :)


 I'll see about hacking up pg_basebackup. I'm doing nightly backups so
 hopefully (?) it will happen again in the next few days.

Thanks!


-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/




Re: [HACKERS] [pgsql-advocacy] Call for Google Summer of Code mentors, admins

2013-02-18 Thread Magnus Hagander
On Thu, Feb 14, 2013 at 7:02 PM, Josh Berkus j...@agliodbs.com wrote:
 Folks,

 Once again, Google is holding Summer of Code.  We need to assess whether
 we want to participate this year.

 Questions:

 - Who wants to mentor for GSOC?

Sign me up on that list. Depending on projects, of course.

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/




Re: [HACKERS] Documentation: references to old versions

2013-02-12 Thread Magnus Hagander
On Tue, Feb 12, 2013 at 10:19 AM, Dave Page dp...@pgadmin.org wrote:

 Hi

 On Tue, Feb 12, 2013 at 9:14 AM, Pavan Deolasee pavan.deola...@gmail.com
 wrote:

 While reading some documentation, I noticed that we have references to
 past releases. For example,
 http://www.postgresql.org/docs/9.2/static/xfunc-c.html mentions A
 magic block is required as of PostgreSQL 8.2. Sure, this is true even
 as of 9.2 or even 9.3. There are a few more such references that I
 could spot with a simple grep in the doc/src/sgml directory. I wonder
 if this is worth fixing. I don't know sgml enough, but if it supports
 some sort of a meta-tag, then it might help to replace specific
 version references with something like CURRENT_MAJOR_RELEASE so that
 the current major version gets substituted when docs for that release
 are built.


 The phrasing "as of 8.2" basically means "from 8.2 onwards". Changing the
 version there would make the text incorrect.

In theory we could remove that reference completely, since any versions
where it does not apply have been out of support for years at this
point. But I don't see a point in doing that just for the sake of
removing it - though if a section that contains such references is
reworded in general, they could be dropped.

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/




Re: [HACKERS] Documentation: references to old versions

2013-02-12 Thread Magnus Hagander
On Tue, Feb 12, 2013 at 4:22 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Magnus Hagander mag...@hagander.net writes:
 On Tue, Feb 12, 2013 at 10:19 AM, Dave Page dp...@pgadmin.org wrote:
 The phrasing as of 8.2 basically means from 8.2 onwards. Changing the
 version there would make the text incorrect.

 In theory we could remove that reference completely since any versions
 where it does not apply are out of support since years ago at this
 point. But I don't see a point in doing that just for the sake of
 removing - but if a section is reworded in general that contains such
 references, it could be dropped.

 The fact that pre-8.2 is out of support doesn't make this useless
 information: many such installations are still out there in the wild,
 and people may want to know what techniques will work with them.
 I'd be inclined to leave the text as-is until it's been awhile since
 we've heard of anyone using pre-8.2.

If you are using an earlier version, you should really be reading the
documentation that corresponds to the version that you are using.

That said, this general reason is why I say we shouldn't go remove it
intentionally. I'm just saying it's acceptable collateral damage if
it gets removed along with another patch, and that we shouldn't put
special effort into maintaining it.

I'd rather see that special effort go into encouraging people to read
the version of the documentation that matches their version of
PostgreSQL...

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/




Re: [HACKERS] Fwd: Successful post to pgsql-hackers

2013-02-11 Thread Magnus Hagander
On Mon, Feb 11, 2013 at 5:01 AM, Alvaro Herrera
alvhe...@2ndquadrant.com wrote:
 Magnus Hagander escribió:
 On Sun, Feb 10, 2013 at 3:25 AM, Alvaro Herrera
 alvhe...@2ndquadrant.com wrote:

  I changed the defaults for new subscribers, and also the flags values
  for all existing subscribers, note.

 For *all* existing subscribers, or those that had not changed their
 defaults? And did you change just those flags, or for all flags?

 For all existing subscribers, I changed the ackpost (to off), ackreject
 (to on), ackstall (to on) flags.  Other flags were untouched.  There's
 no way to distinguish unchanged from set to the same as the default
 in mj2, I'm afraid.

Yeah, it seems you at least broke some cronjobs that relied on the old
default...

That said, I think that was pretty darn unfriendly to those who had
changed their own configuration. E.g. anyone who intentionally set
ackpost to on years ago has now lost that setting... But it's clearly
too late to do anything about that now.

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/




Re: [HACKERS] Fwd: Successful post to pgsql-hackers

2013-02-11 Thread Magnus Hagander
On Mon, Feb 11, 2013 at 2:44 PM, Alvaro Herrera
alvhe...@2ndquadrant.com wrote:
 Magnus Hagander escribió:
 On Mon, Feb 11, 2013 at 5:01 AM, Alvaro Herrera
 alvhe...@2ndquadrant.com wrote:
  Magnus Hagander escribió:
  On Sun, Feb 10, 2013 at 3:25 AM, Alvaro Herrera
  alvhe...@2ndquadrant.com wrote:
 
   I changed the defaults for new subscribers, and also the flags values
   for all existing subscribers, note.
 
  For *all* existing subscribers, or those that had not changed their
  defaults? And did you change just those flags, or for all flags?
 
  For all existing subscribers, I changed the ackpost (to off), ackreject
  (to on), ackstall (to on) flags.  Other flags were untouched.  There's
  no way to distinguish "unchanged" from "set to the same as the default"
  in mj2, I'm afraid.

 Yeah, it seems you at least broke some cronjobs that relied on the old
 default...

 Huh?  We had cronjobs that relied on ackpost being set?  Please elaborate.

Nah, cronjobs that synchronized the flags for certain subscribers.
Didn't you receive a copy of the error message? :) It didn't actually
break anything, but it's spewing warnings.

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/




Re: [HACKERS] Fwd: Successful post to pgsql-hackers

2013-02-10 Thread Magnus Hagander
On Sun, Feb 10, 2013 at 3:25 AM, Alvaro Herrera
alvhe...@2ndquadrant.com wrote:
 Magnus Hagander escribió:
 On Sat, Feb 9, 2013 at 4:31 PM, Peter Geoghegan
 peter.geoghega...@gmail.com wrote:
  On 9 February 2013 15:24, Magnus Hagander mag...@hagander.net wrote:
  It's in your personal majordomo settings. I don't think it's related
  to that at all, but it's a flag you set on your subscription, so
  perhaps you set it by mistake.
 
  Then I must have set it by mistake too, when I recently changed e-mail
  addresses.

 Hmm. I wonder if Alvaro may have accidentally switched the default, if
 it happened to more than one person. Alvaro, can you check?

 I hadn't touched this, but Andres Freund had already complained about
 this before.  I just checked and yes, it seems that the flag to send a
 confirmation for each post is set.  I have reset it.

 I also took the opportunity to set the flags to send confirmation emails
 when a posting is rejected or stalled for moderation.  We discussed this
 previously (Pavel Stehule complained about it).

 I changed the defaults for new subscribers, and also the flags values
 for all existing subscribers, note.

For *all* existing subscribers, or those that had not changed their
defaults? And did you change just those flags, or for all flags?


-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/




Re: [HACKERS] Fwd: Successful post to pgsql-hackers

2013-02-09 Thread Magnus Hagander
It's in your personal majordomo settings. I don't think it's related
to that at all, but it's a flag you set on your subscription, so
perhaps you set it by mistake.

You should be able to set it from https://mail.postgresql.org. The
setting you're looking for is ackpost, and you'll want to turn it
off.

//Magnus


On Sat, Feb 9, 2013 at 4:17 PM, Gurjeet Singh gurj...@singh.im wrote:
 Recently I have started getting these confirmations for every email I send
 to the mailing lists. I think it's related to the fact that I recently
 switched to using a new email address.

 How can I turn these notifications off?

 -- Forwarded message --
 From: pgsql-hackers-ow...@postgresql.org
 Date: Sat, Feb 9, 2013 at 10:13 AM
 Subject: Successful post to pgsql-hackers
 To: Gurjeet Singh gurj...@singh.im


 Your message to the pgsql-hackers list, posted on
   Sat, 9 Feb 2013 10:11:05 -0500

 with subject
   Re: pg_prewarm

 is currently being delivered.



 --
 Gurjeet Singh

 http://gurjeet.singh.im/



--
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/




Re: [HACKERS] Fwd: Successful post to pgsql-hackers

2013-02-09 Thread Magnus Hagander
On Sat, Feb 9, 2013 at 4:31 PM, Peter Geoghegan
peter.geoghega...@gmail.com wrote:
 On 9 February 2013 15:24, Magnus Hagander mag...@hagander.net wrote:
 It's in your personal majordomo settings. I don't think it's related
 to that at all, but it's a flag you set on your subscription, so
 perhaps you set it by mistake.

 Then I must have set it by mistake too, when I recently changed e-mail
 addresses.

Hmm. I wonder if Alvaro may have accidentally switched the default, if
it happened to more than one person. Alvaro, can you check?


-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/




Re: [HACKERS] Considering Gerrit for CFs

2013-02-08 Thread Magnus Hagander
On Fri, Feb 8, 2013 at 1:32 AM, Josh Berkus j...@agliodbs.com wrote:
 Folks,

 First, thanks for the serious discussion of this.

 There are obvious tooling gaps (aren't there always?), but I don't
 really see the model as broken, and I don't think I've been around
 pgsql-hackers exclusively or extensively enough to have developed
 Stockholm syndrome.

 I don't see the model as broken either.  Just the tooling, which is why
 I'm looking at tooling.  As in, I'm looking for better tooling in order

Yet you are suggesting tooling that requires a change in the model?


 to solve two problems:

 1. maximize the efficiency of existing reviewer time

 2. make tooling not be an obstacle to getting new reviewers

I think you are missing a fundamental part in this - which is 0:
don't negatively affect the efficiency of existing committer time.
I'm not saying it necessarily does (though I think it does, but that's
not a proven point), but that has to be a pretty high priority.


 Of these two, (2) is actually the more critical. We have been losing,
 not gaining, active committers and reviewers for the last couple years.
  Clearly "do more of what we've been doing" is a losing strategy.  We
 need to be successfully moving people up the contributor chain if we're
 ever going to get out of this "not enough reviewers" hole.

Agreed. But do you have any actual proof that the problem is in "we
lose reviewers because we're relying on email"?


 I agree that tooling is a minority of this, but tooling is also the
 easiest thing to change (compared with project organization), so that's
 what I'm tackling first.  Expect a discussion on the people aspects at
 the developer meeting.

It would probably be a good thing to discuss the tooling there, too.


 Personally, I find the most annoying thing with the current process
 being when reviewers post their reviews as a completely separate
 thread, instead of replying in the original thread. This causes
 context switches when parsing things, because now you have to jump
 back and forth between the CF app and your mail reader. But it's still
 only on the annoyance side, I think the process in general is not
 broken. (That said, I *have* been on the inside a long time, *and* I
 live in Stockholm, so I might well have that syndrome)

 So, look at this from the perspective of a casual reviewer, say at a PUG
 reviewfest.  Instructions to new reviewer:

 1. find the feature you want to review on the CF app.

 2. Click the link to the mailing list archives.

 3. Click all the way through the list thread to make sure there isn't a
 later version of the patch.

Supposedly the latest version should always be listed in the CF app.
The fact that this is a manual step is a problem, that could probably
be fixed quite easily.


 4. Download the patch.   Hopefully it's not mangled by the archives
 (this has gotten much better than it was last year)

Yes, several things have been done to make this work better. There
shouldn't be any issues at all with it now - and if there are, we are
in a much better position to fix them.


 5. Apply the patch to HEAD clone.

 6. Do actual reviewing/testing.

 7. Write an email review.

 8. Send it to pgsql-hackers
 8.a. this requires you to be subscribed to pgsql-hackers.

No, it does not. It will get caught in the moderation queue and get
slightly delayed if you're not, but it works perfectly fine.

And if we were to use another system, you'd still have to sign up for
that one, so it's not really that big a problem.


 9. wait for the email to show up in the archives.

You do realize this currently takes seconds or something like
that, rather than the 15+ minutes it used to be? And the fact is, you
don't actually have to wait for it.


 10. create a review comment in the CF app.
 10.a. this requires you to be signed up for a community account
 10.b. sign up for one now
 10.c. wait for it to be approved

Huh? There is no approval process for community accounts. There is a
verification step, of course, but any system would have that.


 11. link the review comment to the messageID

 12. change status of the patch

 This is a few too many steps, and certainly appears completely broken to
 any newcomer.

I agree it's way too many steps. Several of those can certainly be made
more efficient now that we have a more sane archives, well within the
scope of the current system.


--
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/




Re: [HACKERS] Comment typo

2013-02-08 Thread Magnus Hagander
On Fri, Feb 8, 2013 at 6:49 AM, Etsuro Fujita
fujita.ets...@lab.ntt.co.jp wrote:
 I found a comment typo.  Please find attached a patch.

Applied, thanks.


--
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/




Re: [HACKERS] Considering Gerrit for CFs

2013-02-08 Thread Magnus Hagander
On Fri, Feb 8, 2013 at 4:20 PM, Peter Eisentraut pete...@gmx.net wrote:
 On 2/8/13 5:23 AM, Magnus Hagander wrote:
 But do you have any actual proof that the problem is in "we
 lose reviewers because we're relying on email"?

 Here is one: Me.

 Just yesterday I downloaded a piece of software that was previously
 unknown to me from GitHub and found a bug.  Within 15 minutes or so I
 had fixed the bug, made a fork, sent a pull request.  Today I read, the
 fix was merged last night, and I'm happy.

 How would this go with PostgreSQL?  You can use the bug form on the web
 site, but you can't attach any code, so the bug will just linger and
 ultimately put more burden on a core contributor to deal with the
 minutiae of developing, testing, and committing a trivial fix and
 sending feedback to the submitter.  Or the user could take the high road
 and develop a patch and submit it.  Just make sure it's in context
 diff format!  Search the wiki if you don't know how to do that!  Send it
 to -hackers, your email will be held for moderation.  We won't actually
 do anything with your patch, but we will tell you to add it to that
 commitfest app over there.  You need to sign up for an account to use
 that.  We will deal with your patch in one or two months.  But only if
 you review another patch.  And you should sign up for that other mailing
 list, to make sure you're doing it right.  Chances are, the first review
 you're going to get is that your patch doesn't apply anymore, by which
 time you will have lost interest in the patch anyway.

 So, I don't have any further evidence that we are losing reviewers, but
 in light of the above and the options out there were interested
 developers can contribute much more easily, I'm amazed that we are
 getting any new contributors or reviewers at all.

 Of course, Gerrit doesn't actually address most of the issues above, but
 it could be part of a step forward.

You're outlining an issue for submitters. Berkus was complaining about
issues for reviewers. These are clearly different issues. And I don't
think gerrit helps at all with the submitters process that you've
outlined above - it's a tool to help the reviewing. That doesn't, of
course, mean that we shouldn't try to solve both things - but they are
completely different.

Basically, what you're saying above, is we should start accepting pull
requests from github. There's nothing preventing us from doing that
(other than the wish to do so), no need to change tooling for review
for that. It just means that committers need to use git and add
people's repositories as remotes instead of applying patches. Probably
not a huge burden today since most pg developers are used to git by
now.

However, it's not going to change the requirement to help review other
things, that's a pure policy issue. Which I'm pretty sure we don't
enforce for 10-minute-trivial-fixup-patches. And it's not going to
change the fact that it takes time before someone gets around to your
patch, that's a resource issue. And it's not going to change the fact
that a patch might not apply after 2 months, that's a consequence of
the second problem. It doesn't change the fact that you have to sign
up - it just makes it more likely that you're already signed up, since
so many people are on github already, but you *do* have to sign up for
a service, wherever it's hosted.

But it does change the fact that you don't have to deal with email,
and can use web instead.

Personally, I find it much easier to just git clone, make changes,
git diff, attach to email than to fork on github: git clone, make
changes, push to github, create a pull request on github, then
repeatedly check the status of said pull request since the email
notifications aren't usable. But that's me, personally, and I realize
many people today
prefer web interfaces for as much as possible. There's nothing
stopping us from supporting both, of course.

--
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/




Re: [HACKERS] [COMMITTERS] pgsql: Stamp 9.1.8.

2013-02-08 Thread Magnus Hagander
On Fri, Feb 8, 2013 at 6:53 PM, Robert Haas robertmh...@gmail.com wrote:
 On Fri, Feb 8, 2013 at 11:09 AM, Kevin Grittner kgri...@ymail.com wrote:
 Magnus Hagander mag...@hagander.net wrote:

 if there is any other committer who [wants to receive emails from
 the packagers list], let me know and I will add you there as well.

 Hi Magnus,

 Please add me.

 Thanks,

 -Kevin

 Ditto.

FYI, I've done this for all those who asked on this thread, and you
should've received a confirmation email from mj2. I won't ack the
requests individually :)

--
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/




Re: [HACKERS] Considering Gerrit for CFs

2013-02-07 Thread Magnus Hagander
On Thu, Feb 7, 2013 at 8:20 AM, Daniel Farina dan...@heroku.com wrote:
 On Wed, Feb 6, 2013 at 3:00 PM, Joshua D. Drake j...@commandprompt.com 
 wrote:

 On 02/06/2013 01:53 PM, Tom Lane wrote:

 ... if it's going to try to coerce us out of our email-centric habits,
 then I for one am very much against it.  To me, the problems with the
 existing CF app are precisely that it's not well enough integrated with
 the email discussions.  The way to fix that is not to abandon email (or
 at least, it's not a way I wish to pursue).


 The email centric habits are by far the biggest limiting factor we have.
 Email was never designed for integral collaboration. That said, I see no way
 around it. I have brought up this idea before but, Redmine has by far served
 the purposes (with a little effort) of CMD and it also integrates with GIT.
 It also does not break the email work flow. It does not currently allow
 commands via email but that could be easily (when I say easily, I mean it)
 added.

 Just another thought.

 I don't think controlling things by email is the issue.  I have used
 the usual litany of bug trackers and find the
 correspondence-driven model that -hackers and -bugs uses pretty
 pleasant.  If nothing else, the uniform, well-developed, addressable,
 and indexed nature of the archives definitely provides me a lot of
 value that I haven't really seen duplicated in other projects that use
 structured bug or patch tracking.  The individual communications tend
 to be of higher quality -- particularly to the purpose of later
 reference -- maybe a byproduct of the fact that prose is on the
 pedestal.

 There are obvious tooling gaps (aren't there always?), but I don't
 really see the model as broken, and I don't think I've been around
 pgsql-hackers exclusively or extensively enough to have developed
 Stockholm syndrome.

 I also happen to feel that the commitfest application works rather
 well for me in general.  Sure, I wish that it might slurp up all
 submitted patches automatically or something like that with default
 titles or something or identify new versions when they appear, but
 I've always felt that skimming the commitfest detail page for a patch
 was useful.  It was perhaps harder to know if the commitfest page I
 was looking at was complete or up to date or not, although it often
 is.

 Here's what I find most gut-wrenching in the whole submit-review-commit 
 process:

 * When a reviewer shows up a couple of weeks later and says "this
 patch doesn't apply" or "make check crashes" or "it fails to compile".
 It's especially sad because the reviewer has presumably cut out a
 chunk of time -- now probably aborted -- to review the patch.
 Machines can know these things automatically.

If the report is *just* "this patch doesn't apply", I agree. If the
review is more "this patch doesn't apply anymore - here's an adjusted
version that does", it has a value in itself - because somebody else
just got more familiar with that part of the code.

 * When on occasion patches are submitted with non-C/sh/perl suites
 that may not really be something that anyone wants to be a
 build/source tree, but seem like they might do a better job testing
 the patch.  The inevitable conclusion is that the automated test
 harness is tossed, or never written because it is known it will have
 no place to live after a possible commit.  Somehow I wish those could
 live somewhere and run against all submitted patches.

 I've toyed a very, very tiny bit with running build farm clients on
 Heroku with both of these in mind, but haven't revisited it in a
 while.

It's certainly been loosely discussed before as a possible enhancement
- but the main thing being it basically *has* to be run in a
virtualized environment that's thrown away, or you're going to open
all sorts of issues with running arbitrary code on people's machines.
Of course, virtualization is kind of what you guys do :)


Personally, I find the most annoying thing with the current process
being when reviewers post their reviews as a completely separate
thread, instead of replying in the original thread. This causes
context switches when parsing things, because now you have to jump
back and forth between the CF app and your mail reader. But it's still
only on the annoyance side, I think the process in general is not
broken. (That said, I *have* been on the inside a long time, *and* I
live in Stockholm, so I might well have that syndrome)

--
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/




Re: [HACKERS] Considering Gerrit for CFs

2013-02-07 Thread Magnus Hagander
On Thu, Feb 7, 2013 at 8:29 AM, Brendan Jurd dire...@gmail.com wrote:
 On 7 February 2013 08:07, Josh Berkus j...@agliodbs.com wrote:
 The existing Gerrit community would be keen to have the PostgreSQL
 project as a major user, though, and would theoretically help with
 modification needs.  Current major users are OpenStack, Mediawiki,
 LibreOffice and QT.

 Do we actually have any requirements that are known to be uncatered
 for in gerrit's standard feature set?

Email being the primary interaction method has long been a stated
requirement, and we've just been told that's uncatered for in gerrit.

Turning the same question around, do we have any requirements on top
of the current CF app that actually *are* catered for in gerrit? That
is, what problem are we actually trying to solve? (On a technical
level - "bring in more reviewers" doesn't count, you have to explain
*how* that's going to happen)

--
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/




Re: [HACKERS] Considering Gerrit for CFs

2013-02-06 Thread Magnus Hagander
On Wed, Feb 6, 2013 at 10:07 PM, Josh Berkus j...@agliodbs.com wrote:
 Hackers,

 As an occasional CommitFest manager, I'm keenly aware of the makeshift
 nature of the CommitFest app.  If we want to go on using it -- and if we
 want to attract additional reviewers -- we need to improve it
 substantially.  What Robert built for us was supposed to be a second
 draft, not a final version.

This is probably not something we should discuss right now - it's
better discussed when we're not right in the middle of a commitfest,
no?


 The problem with doing it in-house is that the folks who can work on it
 and maintain it will be taking time away from developing PostgreSQL.  So
 I've been keeping an eye on third-party OSS apps for contribution
 management, waiting for one of them to mature enough that we can
 seriously consider using it.

 I think one of them has, now: Gerrit.  http://code.google.com/p/gerrit/

 I spent some time with OpenStack's main Gerrit admin while at LCA, and
 was fairly encouraged that Gerrit would be a big step up compared to our
 current ad-hoc PHP.  However, gerrit is designed to be git-centric

We have no ad-hoc PHP, but I assume you're referring to the cf
management app that's in perl?


 rather than email-centric, so it would modify our current email-centric
 workflow (e.g. reviews are posted via a special git commit). Unlike

Previously, we've said we do not want to do this. And I think in
general, it's a really bad idea to have a tool dictate the workflow.
It should be the other way around.

Now, if we *want* to change our workflow, that's a different story, of
course. But a tool shouldn't dictate that.


 other git tools, though, it expects patches and not branches, so that
 would integrate well with what we do now.  It would also require
 supporting Java in our infrastructure.

We already have a certain amount of support for java in the
infrastructure. It does mandate that it doesn't have any special
requirements on the java environment, of course - but as long as it
works with the one that ships on Debian, we can do it.


 The advantages in features would be substantial: a better interface,
 ways to perform automated tasks (like remind submitters that a patch is
 waiting on author), online diffs, automated testing integration, and a
 configurable review workflow process.

Could you point to an example somewhere that we could check such features out?


 The existing Gerrit community would be keen to have the PostgreSQL
 project as a major user, though, and would theoretically help with
 modification needs.  Current major users are OpenStack, Mediawiki,
 LibreOffice and QT.

theoretically?


 Thoughts?

I just took a quick look at their system, and when they start talking
about requirements in the 100's of Gb of RAM, 24 core machines and
SSD, I get scared :) But that's to scale it - it doesn't say at what
point you need to do anything like that. I'm assuming we'd be tiny.

FWIW, what we have now could easily run on a box with 128Mb RAM...


--
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/




Re: [HACKERS] Visual Studio 2012 RC

2013-02-05 Thread Magnus Hagander
On Feb 5, 2013 5:34 PM, Andrew Dunstan and...@dunslane.net wrote:


 On 02/01/2013 08:55 AM, Andrew Dunstan wrote:


 On 02/01/2013 06:12 AM, Craig Ringer wrote:

 On 01/26/2013 10:56 PM, Noah Misch wrote:

 On Sat, Jan 26, 2013 at 12:20:56PM +0800, Craig Ringer wrote:

 Just to confirm, I think that this is ready for commit as posted in
 20130101025421.ga17...@tornado.leadboat.com.

 I'll amend my docs changes and submit them separately.

 Thanks.  Here's a rebased version.

 Is there any chance someone could pick this up for 9.3 before it
diverges too much? It's ready to go, Windows only, and tested.

 https://commitfest.postgresql.org/action/patch_view?id=1023





 I expect to get to it in due course if nobody else does. I have been set
back some by the death and replacement of my Windows workstation, (plus
coming to terms with Windows 8).




 OK, I have looked at this and it seems perfectly sane. In fact, after a
very frustrating time VS2012 is the *only* way I have found to get a 64 bit
build using MS tools on Windows 8. Given that, I propose to backport these
changes to 9.2 so we can get some buildfarm coverage of that tool chain
that includes the latest stable release.

As long as it's fairly standalone and doesn't change the actual code around
(mobile atm so I couldn't look at the actual patch), that seems like the
right idea to me.

/Magnus


Re: [HACKERS] [PATCH] Add PQconninfoParseParams and PQconninfodefaultsMerge to libpq

2013-02-02 Thread Magnus Hagander
On Feb 3, 2013 4:16 AM, Phil Sorber p...@omniti.com wrote:

 This patch came up from discussion about pg_isready.

 PQconninfoParseParams is similar to PQconninfoParse but takes two
 arrays like PQconnectdbParams. It essentially exposes
 conninfo_array_parse().

 PQconninfodefaultsMerge essentially exposes conninfo_add_defaults().
 It allows you to pass a PQconninfoOption struct and it adds defaults
 for all NULL values.

 There are no docs yet. I assumed I would let bikeshedding ensue, and
 also debate on whether we even want these first.

I think you forgot to attach the patch.

/Magnus


Re: [HACKERS] pg_basebackup with -R option and start standby have problems with escaped password

2013-01-29 Thread Magnus Hagander
On Thu, Jan 24, 2013 at 7:04 AM, Hari Babu haribabu.ko...@huawei.com wrote:
 On Wed, Jan 23, 2013 11:48 PM, Magnus Hagander wrote:
On Wed, Jan 23, 2013 at 10:18 AM, Hari Babu haribabu.ko...@huawei.com
 wrote:
 Test scenario to reproduce:
 1. Start the server
 2. create the user as follows
 ./psql postgres -c "create user user1 superuser login
 password 'use''1'"

 3. Take the backup with -R option as follows.
 ./pg_basebackup -D ../../data1 -R -U user1 -W

 The following errors are occurring when the new standby on the backup
 database starts.

 FATAL:  could not connect to the primary server: missing "=" after
 "1'" in connection info string

What does the resulting recovery.conf file look like?

 The recovery.conf which is generated is as follows

 standby_mode = 'on'
 primary_conninfo = 'user=''user1'' password=''use''1'' port=''5432'' '


 I observed that the problem is while reading primary_conninfo from the
 recovery.conf file: the function GUC_scanstr removes the quotes of the
 string and also collapses each doubled single quote ('') into a single
 quote (').

 When the same connection string is then used to connect to the primary
 server, the function conninfo_parse cannot parse the escaped quotes
 properly, which leads to the problem.

 please correct me if any thing wrong in my observation.

Well, it's clearly broken at least :O

Zoltan, do you have time to look at it? I won't have time until at
least after FOSDEM, unfortunately.
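[Editor's note: the failure mode is easy to reproduce outside the server. The sketch below is a simplified model of GUC_scanstr (the layer that un-quotes values read from recovery.conf), not the actual C implementation; it shows why the doubled quotes in the generated primary_conninfo leave behind a value the conninfo parser can no longer read.]

```python
def guc_unquote(s):
    # Simplified model of GUC_scanstr: strip the outer single quotes
    # and collapse each doubled single quote ('') into one quote (').
    assert s[0] == "'" and s[-1] == "'"
    return s[1:-1].replace("''", "'")

# The primary_conninfo value as written by pg_basebackup -R in the report:
raw = "'user=''user1'' password=''use''1'' port=''5432'' '"
conninfo = guc_unquote(raw)
print(conninfo)
# The conninfo layer now sees password='use' followed by the stray "1'",
# which matches the reported error:
#   missing "=" after "1'" in connection info string
```

The two layers (GUC quoting in recovery.conf, then libpq conninfo quoting) each need their own escaping, and the generator applied only one of them.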

--
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/




Re: Review of pg_basebackup and pg_receivexlog to use non-blocking socket communication, was: Re: [HACKERS] Re: [BUGS] BUG #7534: walreceiver takes long time to detect n/w breakdown

2013-01-28 Thread Magnus Hagander
On Tue, Jan 22, 2013 at 7:31 AM, Amit Kapila amit.kap...@huawei.com wrote:
 On Monday, January 21, 2013 6:22 PM Magnus Hagander
 On Fri, Jan 18, 2013 at 7:50 AM, Amit Kapila amit.kap...@huawei.com
 wrote:
  On Wednesday, January 16, 2013 4:02 PM Heikki Linnakangas wrote:
  On 07.01.2013 16:23, Boszormenyi Zoltan wrote:
   Since my other patch against pg_basebackup is now committed,
   this patch doesn't apply cleanly, patch rejects 2 hunks.
   The fixed up patch is attached.
 
  Now that I look at this a high-level perspective, why are we only
  worried about timeouts in the Copy-mode and when connecting? The
  initial
  checkpoint could take a long time too, and if the server turns into
 a
  black hole while the checkpoint is running, pg_basebackup will still
  hang. Then again, a short timeout on that phase would be a bad idea,
  because the checkpoint can indeed take a long time.
 
  True, but IMO, if somebody want to take basebackup, he should do that
 when
  the server is not loaded.

  A lot of installations don't have such an option, because there is no
  time when the server is not loaded.

 Good to know about it.
 I have always heard that customer will run background maintenance activities
 (Reindex, Vacuum Full, etc) when the server is less loaded.
 For example
 a. Billing applications in telecom, at night times they can be relatively
 less loaded.

That assumes there is a nighttime.. If you're operating in enough
timezones, that won't happen.


 b. Any databases used for Sensex transactions, they will be relatively free
 once the market is closed.
 c. Banking solutions, because transactions are done mostly in day times.

True. But those are definitely very very narrow usecases ;)

Don't get me wrong. There are a *lot* of people who have nighttimes to
do maintenance in. They are the lucky ones :) But we can't assume this
scenario.


 There will be many cases where Database server will be loaded all the times,
 if you can give some example, it will be a good learning for me.

Most internet based businesses that do business in multiple countries.
Or really, any business that has customers in multiple timezones
across the world. And even more to the point, any business who's
*customers* have customers in multiple timezones across the world,
provided they are services-based.


  In streaming replication, the keep-alive messages carry additional
  information, the timestamps and WAL locations, so a keepalive makes
  sense at that level. But otherwise, aren't we just trying to
  reimplement
  TCP keepalives? TCP keepalives are not perfect, but if we want to
 have
  an application level timeout, it should be implemented in the FE/BE
  protocol.
 
  I don't think we need to do anything specific to pg_basebackup. The
  user
  can simply specify TCP keepalive settings in the connection string,
  like
  with any libpq program.
 
  I think currently user has no way to specify TCP keepalive settings
 from
  pg_basebackup, please let me know if there is any such existing way?

 You can set it through environment variables. As was discussed
 elsewhere, it would be good to have the ability to do it natively to
 pg_basebackup as well.

 Sure, already modifying the existing patch to support connection string in
 pg_basebackup and pg_receivexlog.

Good.


  I think specifying TCP settings is very cumbersome for most users,
 that's
  the reason most standard interfaces (ODBC/JDBC) have such application
 level
  timeout mechanism.
 
  By implementing in FE/BE protocol (do you mean to say that make such
  non-blocking behavior inside Libpq or something else), it might be
 generic
  and can be used for others as well but it might need few interface
 changes.

 If it's specifying them that is cumbersome, then that's the part we
 should fix, rather than modifying the protocol, no?

 That can be done as part of point 2 of initial proposal
 (2. Support recv_timeout separately to provide a way to users who are not
 comfortable tcp keepalives).

Looking at the bigger picture, we should in that case support those on
*all* our frontend applications, and not just pg_basebackup. To me, it
makes more sense to just say use the connection string method to
connect when you need to set these parameters. There are always going
to be some parameters that require that.
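[Editor's note: the "connection string method" referred to here works because libpq (9.0 and later) accepts TCP keepalive settings as ordinary connection parameters, so a tool that takes a conninfo string needs no extra options. A sketch of assembling such a string; the parameter names are standard libpq ones, but the host, user, and chosen values are hypothetical.]

```python
# keepalives, keepalives_idle, keepalives_interval and keepalives_count
# are standard libpq connection parameters; host/user/values here are
# illustrative only.
params = {
    "host": "primary.example.com",
    "user": "replication",
    "keepalives": "1",            # enable TCP keepalives
    "keepalives_idle": "30",      # idle seconds before the first probe
    "keepalives_interval": "10",  # seconds between probes
    "keepalives_count": "3",      # lost probes before the link is dead
}
# Sort for a deterministic string; libpq does not care about order.
conninfo = " ".join("%s=%s" % (k, v) for k, v in sorted(params.items()))
print(conninfo)
```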


 To achieve this there can be 2 ways.
 1. Change in FE/BE protocol - I am not sure exactly how this can be done,
 but as per Heikki this is better way of implementing it.
 2. Make the socket as non-blocking in pg_basebackup.

 Advantage of Approach-1 is that if we do in such a fashion that in lower
 layers (libpq) it is addressed then all other apps (pg_basebackup, etc) can
 use it, no need to handle separately in each application.

 So now as changes in Approach-1 seems to be invasive, we decided to do it
 later.

Ok - I haven't really been following the thread, but that doesn't seem
unreasonable. The thing I was objecting to is putting in special
parameters

[HACKERS] Re: [BUGS] BUG #7809: Running pg_dump on slave w/ streaming replication fails if there are unlogged tables

2013-01-25 Thread Magnus Hagander
On Sun, Jan 20, 2013 at 4:59 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Magnus Hagander mag...@hagander.net writes:
 + PGresult *res = ExecuteSqlQueryForSingleRow(fout, "SELECT pg_is_in_recovery()");

 That function call needs to be schema-qualified for security.

Applied and backpatched, with that fix and a sentence in the
documentation added.
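For readers following along: the fix Tom asked for is the usual search_path hardening. An unqualified function name can be captured by a similarly named object in a schema that sorts earlier in the caller's search_path, so catalog functions in queries sent by client tools should be schema-qualified. A minimal sketch of the difference (the function name is from the thread; the pattern is standard practice, not the exact committed diff):

```sql
-- vulnerable: resolves via the caller's search_path
SELECT pg_is_in_recovery();

-- hardened: always resolves to the system catalog schema
SELECT pg_catalog.pg_is_in_recovery();
```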

--
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Re: [COMMITTERS] pgsql: Make pg_dump exclude unlogged table data on hot standby slaves

2013-01-25 Thread Magnus Hagander
On Fri, Jan 25, 2013 at 1:59 PM, Andres Freund and...@2ndquadrant.com wrote:
 On 2013-01-25 13:56:11 +0100, Magnus Hagander wrote:
 On Fri, Jan 25, 2013 at 1:31 PM, Andres Freund and...@2ndquadrant.com 
 wrote:
  On 2013-01-25 08:49:10 +, Magnus Hagander wrote:
  Make pg_dump exclude unlogged table data on hot standby slaves
 
  This missed the fact that there is no ExecuteSqlQueryForSingleRow and
  surroundign infrastructure.

 Ugh, that's what I get for pushing something just before getting on a
 plane. Thanks for spotting.

 Fix looks good, except I'd just put the query text inline in PQexec()
 and not bother with a separate variable..

 The query needs to get passed to check_sql_result and write_msg as well,
 thats why I added the extra variable.

Ahh, gotcha.


 I don't think my -committers post got through (seems to be restricted),
 so I am CCing -hackers so somebody else can see the patch.

Ok! I just talked to Robert, and he's taking a look at it for me.

--
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/




Re: [HACKERS] [sepgsql 1/3] add name qualified creation label

2013-01-24 Thread Magnus Hagander
On Thu, Jan 24, 2013 at 10:11 AM, Kohei KaiGai kai...@kaigai.gr.jp wrote:
 2013/1/24 Tom Lane t...@sss.pgh.pa.us:
 John R Pierce pie...@hogranch.com writes:
 On 1/23/2013 8:32 PM, Tom Lane wrote:
 FWIW, in Fedora-land I see: ...

 I'd be far more interested in what is in RHEL and CentOS.  Fedora,
 with its 6-month obsolescence cycle, is of zero interest to me for
 deploying database servers.

 But of course Fedora is also the upstream that will become RHEL7
 and beyond.

Do we know which version of Fedora will become RHEL7, and thus, which
version of libselinux will go in RHEL7? (And do we know which version
of postgres will go in RHEL7, assuming release schedules hold)

 It might be that the update timing makes a bigger difference in some
 other distros, though.  To return to Heikki's original point about
 Debian, what are they shipping today?

 Even though I'm not familiar with the release cycle of Debian, I tried to check
 the shipped versions of postgresql and libselinux for the stable, testing,
 unstable and experimental releases.
 I'm not certain why they haven't pushed postgresql-9.2 into the experimental
 release yet. However, it seems optimistic to expect libselinux-2.1.10 to be
 bundled on the timeline of postgresql-9.3.

 If someone is familiar with Debian's release cycle, I'd like to hear their
 suggestion.

 * Debian (stable) ... postgresql-8.4 + libselinux-2.0.96
 http://packages.debian.org/en/squeeze/postgresql
 http://packages.debian.org/en/source/squeeze/libselinux

 * Debian (testing) ... postgresql-9.1 + libselinux-2.1.9
 http://packages.debian.org/en/wheezy/postgresql
 http://packages.debian.org/en/source/wheezy/libselinux

Just as a note, wheezy is the version that will be the next Debian
stable, and it has been in freeze for quite a while. So we can safely
expect it will be 2.1.9 that's included in the next Debian stable.


--
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/




Re: [HACKERS] pg_retainxlog for inclusion in 9.3?

2013-01-24 Thread Magnus Hagander
On Thu, Jan 24, 2013 at 6:04 PM, Peter Eisentraut pete...@gmx.net wrote:
 After reviewing this, it appears to me that this is really just a very
 verbose version of

 archive_command = 'sleep $initialsleep; while test $(psql -AtX -c "select pg_xlogfile_name(something) > $$%f$$ collate \"C\";") = t; do sleep $sleep; done'

 I think it might be better to just document this as an example.  I don't
 quite see the overhead of maintaining another tool justified.

Well, obviously I don't entirely agree ;)

Yes, it's a convenience command. Like pg_standby was. And like many
other commands that we maintain as part of *core*, such as createuser,
vacuumdb, etc. Those can all be done with an even *simpler* command
than the one you suggest above. So I don't see that as an argument why
it wouldn't be useful.

Also, the command you suggest above does not work on Windows. You can
probably write a .BAT file to do it for you, but I'm pretty sure it's
impossible to do it as an archive_command there.

--
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/




Re: [HACKERS] pg_retainxlog for inclusion in 9.3?

2013-01-24 Thread Magnus Hagander
On Thu, Jan 24, 2013 at 6:25 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Magnus Hagander mag...@hagander.net writes:
 On Thu, Jan 24, 2013 at 6:04 PM, Peter Eisentraut pete...@gmx.net wrote:
 I think it might be better to just document this as an example.  I don't
 quite see the overhead of maintaining another tool justified.

 Well, obviously I don't entirely agree ;)

 Yes, it's a convenience command. Like pg_standby was. And like many
 other commands that we maintain as part of *core*, such as createuser,
 vacuumdb, etc. Those can all be done with an even *simpler* command
 than the one you suggest above. So I don't see that as an argument why
 it wouldn't be useful.

 We've discussed removing a lot of those tools, too.  Not breaking
 backwards compatibility is probably the only reason they're still there.

 In the case at hand, I seem to recall from upthread that we expect
 this'd be obsolete in a release or two.  If that's true then I think
 a para or two of documentation is a better idea than a tool we'll be
 essentially condemned to keep maintaining forever.

Not really sure there is such an expectation - any more than there was
such an expectation when we initially put pg_standby in there. It
would be *possible* to do it, certainly. But it's not like we have an
actual plan. And AFAIK the stuff that was discussed upthread was a
simplified version of it - not the full flexibility.

That said, it's certainly a point that we'd have to maintain it. But I
don't see why we'd have to maintain it beyond the point where we
included the same functionality in core, if we did.


 Also, the command you suggest above does not work on Windows. You can
 probably write a .BAT file to do it for you, but I'm pretty sure it's
 impossible to do it as an archive_command there.

 Perhaps we could whip up such a .BAT file and put it in the docs?

That would probably work, yes.


--
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/




Re: [HACKERS] pg_basebackup with -R option and start standby have problems with escaped password

2013-01-23 Thread Magnus Hagander
On Wed, Jan 23, 2013 at 10:18 AM, Hari Babu haribabu.ko...@huawei.com wrote:
 Test scenario to reproduce:
 1. Start the server
 2. create the user as follows
 ./psql postgres -c "create user user1 superuser login password 'use''1'"

 3. Take the backup with -R option as follows.
 ./pg_basebackup -D ../../data1 -R -U user1 -W

 The following errors are occurring when the new standby on the backup
 database starts.

 FATAL:  could not connect to the primary server: missing "=" after "1'" in
 connection info string

What does the resulting recovery.conf file look like?
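The failure mode here is the single quote in the password being copied into primary_conninfo in recovery.conf without escaping. In libpq conninfo syntax, a value containing spaces, quotes, or backslashes must be wrapped in single quotes, with embedded backslashes and single quotes backslash-escaped. A rough sketch of that quoting rule (the helper name is hypothetical, not the pg_basebackup code):

```python
def quote_conninfo_value(val: str) -> str:
    """Quote a value for a libpq conninfo string: wrap it in single
    quotes and backslash-escape embedded backslashes and quotes."""
    return "'" + val.replace("\\", "\\\\").replace("'", "\\'") + "'"

# The password from the bug report, written as 'use''1' in SQL,
# is the literal string use'1:
print(quote_conninfo_value("use'1"))  # prints 'use\'1'
```

With quoting like this, the generated `primary_conninfo` line parses cleanly instead of failing with the "missing =" error.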

--
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/




Re: [HACKERS] Re: Proposal for Allow postgresql.conf values to be changed via SQL [review]

2013-01-21 Thread Magnus Hagander
On Mon, Jan 21, 2013 at 9:01 AM, Andrew Dunstan and...@dunslane.net wrote:

 On 01/21/2013 02:17 AM, Magnus Hagander wrote:



 On Jan 21, 2013 3:06 AM, Craig Ringer cr...@2ndquadrant.com
 mailto:cr...@2ndquadrant.com wrote:
 
  On 01/21/2013 10:03 AM, Craig Ringer wrote:
   On 01/19/2013 04:08 AM, Boszormenyi Zoltan wrote:
   However, I am not sure whether Cygwin provides the mkstemp() call or
   not.
   Searching... Found bugzilla reports against mkstemp on Cygwin.
   Is Cygwin a platform that should be targeted for the server backend
   these days?
  
   I can understand making sure that libpq works on Cygwin, but is there
   any reason at all to run a Pg server backend on Cygwin rather than as
   native Windows binaries?
 
  I'm not suggesting immediately dropping working support, since this is
  so trivially worked around. I'm just wondering why anybody cares about
  the platform.

 I have suggested similar before, and been voted down :) iirc Andrew uses
 it, no? Either way, the consensus earlier had been that as long as it
 doesn't require major surgery or blocks something else, we should try to
 keep it working. And as you say this sounds like something that can be
 handled trivially, I think now is not the time.


 No, I only use the client. But then I support plenty of things I don't use.

Oh, I somehow thought you were. And yes, we all support things we
don't use - but it certainly helps if there is *someone* out there who
uses it. Having a buildfarm animal (which we do) only goes so far...

--
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/




Re: Review of pg_basebackup and pg_receivexlog to use non-blocking socket communication, was: Re: [HACKERS] Re: [BUGS] BUG #7534: walreceiver takes long time to detect n/w breakdown

2013-01-21 Thread Magnus Hagander
On Fri, Jan 18, 2013 at 7:50 AM, Amit Kapila amit.kap...@huawei.com wrote:
 On Wednesday, January 16, 2013 4:02 PM Heikki Linnakangas wrote:
 On 07.01.2013 16:23, Boszormenyi Zoltan wrote:
  Since my other patch against pg_basebackup is now committed,
  this patch doesn't apply cleanly, patch rejects 2 hunks.
  The fixed up patch is attached.

 Now that I look at this a high-level perspective, why are we only
 worried about timeouts in the Copy-mode and when connecting? The
 initial
 checkpoint could take a long time too, and if the server turns into a
 black hole while the checkpoint is running, pg_basebackup will still
 hang. Then again, a short timeout on that phase would be a bad idea,
 because the checkpoint can indeed take a long time.

 True, but IMO, if somebody wants to take a base backup, they should do that when
 the server is not loaded.

A lot of installations don't have such an option, because there is no
time when the server is not loaded.


 In streaming replication, the keep-alive messages carry additional
 information, the timestamps and WAL locations, so a keepalive makes
 sense at that level. But otherwise, aren't we just trying to
 reimplement
 TCP keepalives? TCP keepalives are not perfect, but if we want to have
 an application level timeout, it should be implemented in the FE/BE
 protocol.

 I don't think we need to do anything specific to pg_basebackup. The
 user
 can simply specify TCP keepalive settings in the connection string,
 like
 with any libpq program.

 I think currently the user has no way to specify TCP keepalive settings from
 pg_basebackup; please let me know if there is any such existing way?

You can set it through environment variables. As was discussed
elsewhere, it would be good to have the ability to do it natively to
pg_basebackup as well.
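For concreteness, the environment-variable route goes through a libpq service file: put the keepalive parameters in a service definition and point PGSERVICE at it. A sketch, with the service name and values made up for illustration:

```ini
# ~/.pg_service.conf
[mybackup]
host=primary.example.com
keepalives_idle=30
keepalives_interval=10
keepalives_count=3
```

Then something like `PGSERVICE=mybackup pg_basebackup -D /path/to/backup` picks those settings up without any new pg_basebackup options.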


 I think specifying TCP settings is very cumbersome for most users; that's
 the reason most standard interfaces (ODBC/JDBC) have such an application-level
 timeout mechanism.

 By implementing it in the FE/BE protocol (do you mean making such
 non-blocking behavior work inside libpq, or something else?), it might be generic
 and usable by others as well, but it might need a few interface changes.

If it's specifying them that is cumbersome, then that's the part we
should fix, rather than modifying the protocol, no?


--
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/




Re: [HACKERS] Making testing on Windows easier

2013-01-21 Thread Magnus Hagander
On Mon, Jan 21, 2013 at 3:03 PM, Dave Page dp...@pgadmin.org wrote:
 On Mon, Jan 21, 2013 at 1:59 PM, Andrew Dunstan and...@dunslane.net wrote:

 On 01/21/2013 08:11 AM, Dave Page wrote:


 I was never able to determine why the Windows instances were so much
 slower than the corresponding Linux instance of the same type.

 Full vs. para-virtualisation perhaps?


 No, Windows builds just are slower. For some time the buildfarm has been
 reporting run times for various members, so there's plenty of data on this.
 For example, nightjar and currawong are respectively FreeBSD/gcc and
 WindowsXP/VC2008 members running in VMs on the same host. currawong takes
 three times as long for a buildfarm run even though it's doing less work.

 Hmm, OK. I don't build PostgreSQL interactively enough to notice I
 guess. For C++ it's definitely the other way round - I can build
 pgAdmin in a fraction of the time in a Windows VM than I can on the
 host Mac it runs on.

Yes, for C++ the difference is at least one order of magnitude.

For C it definitely isn't. MSVC tends to be faster than gcc (both on
windows), which I think is mostly because it builds multiple files in
one run. However, actually starting each build step takes
significantly longer. We've also added some things like the DEF file
magic that can definitely take quite some time, particularly when
building the backend.


--
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/




Re: [HACKERS] CF3+4 (was Re: Parallel query execution)

2013-01-21 Thread Magnus Hagander
On Jan 22, 2013 1:31 AM, Robert Haas robertmh...@gmail.com wrote:

 On Mon, Jan 21, 2013 at 6:23 PM, Phil Sorber p...@omniti.com wrote:
  On Wed, Jan 16, 2013 at 8:18 AM, Abhijit Menon-Sen a...@2ndquadrant.com
wrote:
  Here's a breakdown based purely on the names from the CF page (i.e. I
  didn't check archives to see who actually posted reviews, and didn't
  take into account reviews posted without updating the CF page).
 
  FWIW, I reviewed at least one at the point you did this survey, and I
  did update the CF page, but I didn't put my name into that box marked
  reviewers because it is an extra step (Edit Patch) that isn't
  immediately obvious. Isn't there a way to automatically populate that
  based on people linking in their reviews?

 Sadly, the CF application doesn't have access to the user name -> real
 name mappings.  And while it's only

Sure it does. It only has it for users that have logged in at least once.
But that wouldn't be a problem for this scenario as they would have already
logged in to post said link.

/Magnus


[HACKERS] Re: [BUGS] BUG #7809: Running pg_dump on slave w/ streaming replication fails if there are unlogged tables

2013-01-20 Thread Magnus Hagander
On Tue, Jan 15, 2013 at 10:35 PM, Magnus Hagander mag...@hagander.net wrote:
 On Tue, Jan 15, 2013 at 12:13 AM,  j...@tanga.com wrote:
 The following bug has been logged on the website:

 Bug reference:  7809
 Logged by:  Joe Van Dyk
 Email address:  j...@tanga.com
 PostgreSQL version: 9.2.2
 Operating system:   Ubuntu
 Description:

 Running pg_dump on a streaming replication slave with a database that has
 unlogged_tables will fail unless you provide the --no-unlogged-table-data
 option with the following (scary) error:

 pg_dump: Dumping the contents of table tracking_import_data failed:
 PQgetResult() failed.
 pg_dump: Error message from server: ERROR:  could not open file
 base/16388/190326: No such file or directory
 pg_dump: The command was: COPY public.tracking_import_data (uuid,
 tracking_number) TO stdout;

 (this guy  encountered the error as well:
 http://www.postgresql.org/message-id/de2de764-307d-4a23-a9a9-6608ac097...@ticketevolution.com
 )

 Could running pg_dump against a slave always use the
 --no-unlogged-table-data option?

 That sounds like a pretty reasonable idea, I think. Should be easy
 enough to figure out at an early stage, too.

 That said, it wouldn't hurt if we could make that error a little less
 scary. Instead of saying "could not open file", could we find a way to
 say "this is an unlogged table on a slave, it's not going to work"?

 We can fix pg_dump the easy way, but what about custom tools...

Here's a patch to fix this in pg_dump. I intentionally made the check
for pg_is_in_recovery() apply to everything since 9.0, since we might well
end up with other things we want to do differently in hot standby (in
theory, but not likely). And since we're not going to end up with any
unlogged tables on 9.0 anyway, it doesn't hurt to turn them off.

I'm thinking we can consider this a bug and it should be backpatched
(to 9.1 where we added unlogged tables). Comments?
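Until a fixed pg_dump is available, the manual workaround from the report is simply to pass the flag yourself when dumping from a standby (host and database names here are illustrative):

```shell
pg_dump --no-unlogged-table-data -h standby.example.com mydb > mydb.sql
```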


--
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/


pg_dump_unlogged.patch
Description: Binary data



Re: [HACKERS] dividing privileges for replication role.

2013-01-20 Thread Magnus Hagander
On Sat, Jan 19, 2013 at 4:47 AM, Tomonari Katsumata
t.katsumata1...@gmail.com wrote:
 Hi,

 I made a patch to divide privileges for replication role.

 Currently(9.2), the privilege for replication role is
 true / false which means that standby server is able to
 connect to another server or not with the replication role.

Why is it better to do this with a privilege, rather than just using
pg_hba.conf? It doesn't represent an actual permission level after
all - it's just an administrative flag to say you can't connect.
Which AFAICS can just as easily be handled in pg_hba.conf? I guess I'm
missing something?
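For reference, the pg_hba.conf route being suggested: replication connections match the special `replication` keyword in the database column, so access can already be narrowed per role and per network there. A hypothetical example entry:

```
# pg_hba.conf: allow only the standby subnet to connect as the replication role
host    replication    replicator    192.168.10.0/24    md5
```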

--
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/




[HACKERS] Re: [BUGS] BUG #7809: Running pg_dump on slave w/ streaming replication fails if there are unlogged tables

2013-01-20 Thread Magnus Hagander
On Sun, Jan 20, 2013 at 4:59 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Magnus Hagander mag...@hagander.net writes:
 + PGresult *res = ExecuteSqlQueryForSingleRow(fout, "SELECT pg_is_in_recovery()");

 That function call needs to be schema-qualified for security.

Ha! I wonder if I can set up an autoresponder to *myself* with that
review whenever I commit to pgdump :) Thanks!

--
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/




Re: [HACKERS] Re: Proposal for Allow postgresql.conf values to be changed via SQL [review]

2013-01-20 Thread Magnus Hagander
On Jan 21, 2013 3:06 AM, Craig Ringer cr...@2ndquadrant.com wrote:

 On 01/21/2013 10:03 AM, Craig Ringer wrote:
  On 01/19/2013 04:08 AM, Boszormenyi Zoltan wrote:
  However, I am not sure whether Cygwin provides the mkstemp() call or
not.
  Searching... Found bugzilla reports against mkstemp on Cygwin.
  Is Cygwin a platform that should be targeted for the server backend
  these days?
 
  I can understand making sure that libpq works on Cygwin, but is there
  any reason at all to run a Pg server backend on Cygwin rather than as
  native Windows binaries?

 I'm not suggesting immediately dropping working support, since this is
 so trivially worked around. I'm just wondering why anybody cares about
 the platform.

I have suggested similar before, and been voted down :) iirc Andrew uses
it, no? Either way, the consensus earlier had been that as long as it
doesn't require major surgery or blocks something else, we should try to
keep it working. And as you say this sounds like something that can be
handled trivially, I think now is not the time.

/Magnus


Re: [HACKERS] Passing connection string to pg_basebackup

2013-01-19 Thread Magnus Hagander
On Fri, Jan 18, 2013 at 1:05 PM, Heikki Linnakangas
hlinnakan...@vmware.com wrote:
 On 18.01.2013 13:41, Amit Kapila wrote:

 On Friday, January 18, 2013 3:46 PM Heikki Linnakangas wrote:

 On 18.01.2013 08:50, Amit Kapila wrote:

 I think currently user has no way to specify TCP keepalive settings

 from

 pg_basebackup, please let me know if there is any such existing way?


 I was going to say you can just use keepalives_idle=30 in the
 connection string. But there's no way to pass a connection string to
 pg_basebackup on the command line! The usual way to pass a connection
 string is to pass it as the database name, and PQconnect will expand
 it,
 but that doesn't work with pg_basebackup because it hardcodes the
 database name as replication. D'oh.

 You could still use environment variables and a service file to do it,
 but it's certainly more cumbersome. It clearly should be possible to
 pass a full connection string to pg_basebackup, that's an obvious
 oversight.


 So to solve this problem below can be done:
 1. Support connection string in pg_basebackup and mention keepalives or
 connection_timeout
 2. Support recv_timeout separately to provide a way to users who are not
 comfortable tcp keepalives

 a. 1 can be done alone
 b. 2 can be done alone
 c. both 1 and 2.


 Right. Let's do just 1 for now. An general application level, non-TCP,
 keepalive message at the libpq level might be a good idea, but that's a much
 larger patch, definitely not 9.3 material.

+1 for doing 1 now. But actually, I think we can just keep it that way
in the future as well. If you need to specify these fairly advanced
options, using a connection string really isn't a problem.

I think it would be more worthwhile to go through the rest of the
tools in bin/ and make sure they *all* support connection strings.
And, an important point, do it the same way.
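To make the proposal concrete, passing a connection string would look something like the following. The `-d` option name reflects what eventually shipped rather than anything settled in this thread, and the host and user are made up:

```shell
pg_basebackup -D /path/to/backup \
  -d "host=primary.example.com user=replicator keepalives_idle=30 connect_timeout=10"
```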

--
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/




Re: [HACKERS] Passing connection string to pg_basebackup

2013-01-19 Thread Magnus Hagander
On Fri, Jan 18, 2013 at 2:43 PM, Dimitri Fontaine
dimi...@2ndquadrant.fr wrote:
 Heikki Linnakangas hlinnakan...@vmware.com writes:
 You could still use environment variables and a service file to do it, but
 it's certainly more cumbersome. It clearly should be possible to pass a full
 connection string to pg_basebackup, that's an obvious oversight.

 FWIW, +1. I would consider it a bugfix (backpatch, etc).

While it's a feature I'd very much like to see, I really don't think
you can consider it a bugfix. It's functionality that was left out -
it's not like we tried to implement it and it didn't work. We pushed
the whole implementation to the next version (and then forgot about
actually putting it in the next version, until now).


--
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/




Re: [HACKERS] CF3+4

2013-01-17 Thread Magnus Hagander
On Jan 17, 2013 8:15 AM, Abhijit Menon-Sen a...@2ndquadrant.com wrote:

 At 2013-01-17 16:05:05 +0900, michael.paqu...@gmail.com wrote:
 
  Is it really necessary to create a new commit fest just to move the
  items? Marking the patches that are considered as being too late for
  9.3 should be just returned with feedback.

 Opening 2013-03 is not so much to move existing patches, but to give
 people a place to submit *new* post-9.3 patches without interrupting
 the 2013-01 CF.

Yeah, and +1 for doing that. The sooner the better. Under any of the
time frames that have been discussed for the CF, it should certainly not
be open for accepting new patches for 9.3 anymore.

/Magnus


Re: [HACKERS] CF3+4 (was Re: Parallel query execution)

2013-01-17 Thread Magnus Hagander
On Thu, Jan 17, 2013 at 8:19 AM, Pavan Deolasee
pavan.deola...@gmail.com wrote:

 On Wed, Jan 16, 2013 at 1:51 PM, Abhijit Menon-Sen a...@2ndquadrant.com
 wrote:

 At 2013-01-16 02:07:29 -0500, t...@sss.pgh.pa.us wrote:
 
  In case you hadn't noticed, we've totally lost control of
  the CF process.

 What can we do to get it back on track?

 I know various people (myself included) have been trying to keep CF3
 moving, e.g. sending followup mail, adjusting patch status, etc.

 I want to help, but I don't know what's wrong. What are the committers
 working on, and what is the status of the Ready for commiter patches?
 Is the problem that the patches marked Ready aren't, in fact, ready? Or
 is it lack of feedback from authors? Or something else?


 ISTM that even committers are often overwhelmed with the work, their own as
 well as that of reviewing other's patches and committing them. Especially
 when a patch is large or touches core areas, I can feel the significant work
 that the committer has to do even after some help and initial review from CF
 reviewers. On the other hand, if the patches are not committed in time, we
 lose context, interest dies down, and when the patch is actually picked up
 by a committer who is often not involved in the original discussion, many
 points need to be revisited and reworked.

 Would it help to step up a few developers and create a second line of
 committers? The commits by the second-line committers will still be
 reviewed by the first line committers before they make into the product, but
 may be at later stage or when we are in beta. I thought of even suggesting
 that the master branch will contain only commits by the first line
 committers. We then maintain a secondary branch which also have commits from
 the second line committers in addition to all commits from the master
 branch. The first line committers can then cherry pick from the secondary
 branch at some later stage. But not sure if this will add more overhead and
 defeat the problem at hand.

While we can certainly do that, it would probably help just to have a
second line of reviewers. Basically a set of more senior reviewers -
so a patch would go submission -> reviewer -> senior reviewer ->
committer. With the second line of reviewers focusing more on the
whole "how to do things", etc.

Like you, I'm not sure if it creates more overhead than it solves, but
it might be worth a try. In a way it already exists, because I'm sure
committers pay slightly more attention to reviews by people who have
been doing it a lot and are known to process those things, than to new
entries. All that would be needed is for those people to realize it
would be helpful for them to do a second-stage review even if somebody
else has done the first one.

--
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/




Re: [HACKERS] review: pgbench - aggregation of info written into log

2013-01-17 Thread Magnus Hagander
On Thu, Jan 17, 2013 at 2:35 AM, Tatsuo Ishii is...@postgresql.org wrote:
 This might be way more than we want to do, but there is an article
 that describes some techniques for doing what seems to be missing
 (AIUI):

 http://msdn.microsoft.com/en-us/magazine/cc163996.aspx
 Even this would be doable, I'm afraid it may not fit in 9.3 if we
 think about the current status of CF. So our choice would be:

 1) Postpone the patch to 9.4

 2) Commit the patch in 9.3 without Windows support

 I personally am ok with #2. We traditionally avoid platform-specific
 features in PostgreSQL.  However, I think the policy could be
 loosened for contrib stuff. Also, pgbench is just a client program. We
 could always use pgbench on UNIX/Linux if we truly need the feature.

 What do you think?

 Fair enough, I was just trying to point out alternatives. We have
 committed platform-specific features before now. I hope it doesn't
 just get left like this, though.

We have committed platform-specific features before, but generally
only when it's not *possible* to do them for all platforms. For
example the posix_fadvise stuff isn't available on Windows at all, so
there isn't much we can do there.


 Yeah, I hope someone pick this up and propose as a TODO item. In the
 mean time, I'm going to commit the patch without Windows support
 unless there's objection.

Perhaps we should actually hold off until someone commits to actually
getting it fixed in the next version? If we do have that, then we can
commit it as a partial feature, but if we just hope someone picks it
up, that's leaving it very loose.

--
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/




Re: [HACKERS] review: pgbench - aggregation of info written into log

2013-01-17 Thread Magnus Hagander
On Thu, Jan 17, 2013 at 11:09 AM, Dave Page dp...@pgadmin.org wrote:
 On Thu, Jan 17, 2013 at 9:36 AM, Magnus Hagander mag...@hagander.net wrote:
 On Thu, Jan 17, 2013 at 2:35 AM, Tatsuo Ishii is...@postgresql.org wrote:
 This might be way more than we want to do, but there is an article
 that describes some techniques for doing what seems to be missing
 (AIUI):

 http://msdn.microsoft.com/en-us/magazine/cc163996.aspx
 Even this would be doable, I'm afraid it may not fit in 9.3 if we
 think about the current status of CF. So our choice would be:

 1) Postpone the patch to 9.4

 2) Commit the patch in 9.3 without Windows support

 I personally am ok with #2. We traditionally avoid platform-specific
 features in PostgreSQL.  However, I think the policy could be
 loosened for contrib stuff. Also, pgbench is just a client program. We
 could always use pgbench on UNIX/Linux if we truly need the feature.

 What do you think?

 Fair enough, I was just trying to point out alternatives. We have
 committed platform-specific features before now. I hope it doesn't
 just get left like this, though.

 We have committed platform-specific features before, but generally
 only when it's not *possible* to do them for all platforms. For
 example the posix_fadvise stuff isn't available on Windows at all, so
 there isn't much we can do there.

 Right - having platform specific features for other reasons like lack
 of time is a slippery slope in my opinion. We should not get into such
 a habit or Windows will quickly become a second class platform as far
 as PostgreSQL features are concerned.

Especially since there is no lack of time - the functionality is
there, it just looks (significantly) different.



Re: [HACKERS] Multiple --table options for other commands

2013-01-17 Thread Magnus Hagander
On Fri, Dec 14, 2012 at 4:14 PM, Karl O. Pinc k...@meme.com wrote:
 On 12/13/2012 11:02:56 PM, Josh Kupershmidt wrote:
 On Thu, Dec 13, 2012 at 9:03 PM, Karl O. Pinc k...@meme.com wrote:

  Sorry to be so persnickety, and unhelpful until now.
  It seemed like it should be doable, but something
  was going wrong between keyboard and chair.  I guess
  I should be doing this when better rested.

 No problem, here is v5 with changed synopses.

 The clusterdb synopsis had tabs in it, which I understand are
 frowned upon in the docs. I've fixed this.
 It looks good to me, passes check and so forth.

 Attached is a v6 patch, with no tabs in docs and based
 off the latest head.

 I'm marking it ready for committer.

Thanks. Applied, with only some small whitespace changes.



Re: [HACKERS] review: pgbench - aggregation of info written into log

2013-01-17 Thread Magnus Hagander
On Thu, Jan 17, 2013 at 2:29 PM, Andrew Dunstan and...@dunslane.net wrote:

 On 01/17/2013 06:04 AM, Tomas Vondra wrote:

 The problem is I have access to absolutely no Windows machines,
 not mentioning the development tools (and that I have no clue about it).

 I vaguely remember there were people on this list doing Windows
 development
 on a virtual machine or something. Any interest in working on this /
 giving
 me some help?




 One of the items on my very long TODO list (see stuff elsewhere about
 committers not doing enough reviewing and committing) is to create a package
 that can easily be run to set up Windows Postgres development environments for
 MSVC, MinGW and Cygwin on Amazon, GoGrid etc.

 Once I get that done I'll be a lot less sympathetic to "I don't have access
 to Windows" pleas.

 Windows does run in a VM very well, but if you're going to set it up on your
 own VM environment (e.g. VirtualBox or KVM/qemu), you need your own legal
 copy to install there. If you don't already have one, that will set you back
 about $140.00 (for w7 Pro) in the USA. Note that that's significantly better
 than the situation with OSX, which you can't run at all except on Apple
 hardware.

Yeah. I used to have an AMI with the VS environment preinstalled on
Amazon, but I managed to fat finger things and delete it at some point
and haven't really had time to rebuild it.

Having a script that would download and install all the prerequisites
on such a box would be *great*. Then you could get up and going pretty
quickly; getting a Windows box up and running for a few hours there is
almost free, and you don't have to deal with licensing hassles.

(Of course, the AMI method doesn't work all the way since you'd be
distributing Visual Studio, but if we can have a script that
auto-downloads-and-installs it as necessary we can get around that)




Re: [HACKERS] small pg_basebackup display bug

2013-01-17 Thread Magnus Hagander
On Sun, Dec 16, 2012 at 7:20 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Magnus Hagander mag...@hagander.net writes:
 On Sat, Dec 15, 2012 at 2:24 PM, Erik Rijkers e...@xs4all.nl wrote:
 That would make such a truncation less frequent, and after all a
 truncated display is not particularly useful.

 Agreed - it's useful during testing, but not in a typical production
 use. It might actually be more useful if it's truncated at the
 other end (keeping the last 30 instead of the first 30 chars).

 +1 for truncating from the left.  I think pg_upgrade already does that
 in its progress messages.

Fixed. I also fixed the output of the size parameter to be a fixed
length, so the whole row doesn't shift left and right depending on how
far along the process is.
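
The left-truncation being discussed can be sketched in a few lines. This is an illustrative helper only; the `ltrunc` name and the 30-character width are assumptions for the example, not pg_basebackup's actual code (which does this in C):

```shell
# Illustrative sketch of left-truncation: keep the last characters of a
# long name and prefix "..." so the most specific part stays visible.
# The ltrunc name and 30-char width are assumptions for this example.
ltrunc() {
    s=$1
    max=30
    if [ ${#s} -gt "$max" ]; then
        # keep the last (max - 3) characters, spend 3 on the ellipsis
        printf '...%s\n' "$(printf '%s' "$s" | tail -c $((max - 3)))"
    else
        printf '%s\n' "$s"
    fi
}

ltrunc base/16388/190326                                # short: printed unchanged
ltrunc pg_tblspc/16400/PG_9.2_201204301/16388/190326    # long: "..." + last 27 chars
```

The point of truncating from the left is that the trailing segment of a path (the file name) is usually the interesting part during a long-running copy.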



Re: [HACKERS] default SSL compression (was: libpq compression)

2013-01-17 Thread Magnus Hagander
On Wed, Jan 2, 2013 at 3:17 PM, Magnus Hagander mag...@hagander.net wrote:
 On Wed, Jan 2, 2013 at 3:15 PM, Noah Misch n...@leadboat.com wrote:
 On Wed, Jan 02, 2013 at 02:03:20PM +0100, Magnus Hagander wrote:
 On Wed, Jan 2, 2013 at 1:15 AM, Tom Lane t...@sss.pgh.pa.us wrote:
  So +1 for changing it to DEFAULT from me, too.  There's no reason to
  think we know more about this than the OpenSSL authors.

 The DEFAULT value in OpenSSL 1.0 means ALL:!aNULL:!eNULL.

 Researching some more, this might cause a problem actually, which
 would explain some of the things that are in our default. For example,
 an ADH algorithm doesn't use certificates - but it uses DH parameters,
 so it likely won't work anyway. EDH uses certs, but also requires DH
 parameters.

 Maybe what we need is DEFAULT:!ADH:@STRENGTH as the default?

 I understand aNULL to include ADH.

 Hmm. Seems you're right; when I ran a test on it, I saw I was reading it wrong.


 The other difference is that our current string denies 40- and 56-bit
 encryption (low and export strengths). Do we still want to do that?

 On the one hand, those seem bad to permit by default in 2013.  On the other
 hand, if so, why hasn't OpenSSL removed them from DEFAULT?  Perhaps it has
 backward-compatibility concerns that wouldn't apply to us by virtue of having
 disabled them for some time.  Sounds reasonable to continue disabling them.

 So then the default would be DEFAULT:!LOW:!EXP:@STRENGTH

 (the @STRENGTH part is the sorting key for preference, which the
 default one seems not to have)

 The biggest difference being that we start from DEFAULT rather than ALL.

I've applied a change that does this, including still rejecting MD5.
Meaning that the difference is we start from DEFAULT instead of ALL
(and the ADH rule is removed, since !aNULL is already in the default).
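
For anyone wanting to see what such a cipher string actually expands to, the `openssl ciphers` command prints the resulting list (the exact suites vary by OpenSSL version; the string below combines the pieces from this thread, i.e. DEFAULT plus the LOW/EXP/MD5 exclusions and @STRENGTH sorting):

```shell
# Expand the cipher string discussed above and show each suite with its
# protocol and key-exchange details, sorted strongest-first by @STRENGTH.
# The exact output depends on the installed OpenSSL version.
openssl ciphers -v 'DEFAULT:!LOW:!EXP:!MD5:@STRENGTH'

# The corresponding postgresql.conf setting would look like:
#   ssl_ciphers = 'DEFAULT:!LOW:!EXP:!MD5:@STRENGTH'
```

Comparing this against `openssl ciphers -v 'ALL:...'` makes the DEFAULT-vs-ALL difference from the thread easy to see directly.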



Re: [HACKERS] Parallel query execution

2013-01-16 Thread Magnus Hagander
On Wed, Jan 16, 2013 at 12:03 AM, Bruce Momjian br...@momjian.us wrote:
 On Tue, Jan 15, 2013 at 11:01:04PM +, Simon Riggs wrote:
 On 15 January 2013 22:55, Bruce Momjian br...@momjian.us wrote:

  Why is this being discussed now?
 
  It is for 9.4 and will take months.  I didn't think there was a better
  time.  We don't usually discuss features during beta testing.

 Bruce, there are many, many patches on the queue. How will we ever get
 to beta testing if we begin open ended discussions on next release?

 If we can't finish what we've started for 9.3, why talk about 9.4?

 Yes, its a great topic for discussion, but there are better times.

 Like when?  I don't remember a policy of not discussing things now.
 Does anyone else remember this?  Are you saying feature discussion is
 only between commit-fests?  Is this written down anywhere?  I only
 remember beta-time as a time not to discuss features.

We kind of do - when in a CF we should be reviewing existing patches;
when outside a CF we should discuss and work on new features. It's on
http://wiki.postgresql.org/wiki/CommitFest. It doesn't specifically
say "do this and don't do that", but it does say to focus on review,
and discussing things that will happen that far ahead is definitely
not focusing on review.




Re: [HACKERS] CF3+4 (was Re: Parallel query execution)

2013-01-16 Thread Magnus Hagander
On Wed, Jan 16, 2013 at 9:21 AM, Abhijit Menon-Sen a...@2ndquadrant.com wrote:
 At 2013-01-16 02:07:29 -0500, t...@sss.pgh.pa.us wrote:

 In case you hadn't noticed, we've totally lost control of
 the CF process.

 What can we do to get it back on track?

Not sure. One starting point might be to again have commitfest
managers. I haven't actually checked the numbers, but my feeling is that
last round most commitfests (maybe not all) had a commitfest manager
who worked (usually hard) at keeping things moving, whereas this
round most (or all?) commitfests have been unmanaged. I'm not sure if
that's the reason, but it's a feeling I have..


 I know various people (myself included) have been trying to keep CF3
 moving, e.g. sending followup mail, adjusting patch status, etc.

 I want to help, but I don't know what's wrong. What are the committers
 working on, and what is the status of the Ready for committer patches?
 Is the problem that the patches marked Ready aren't, in fact, ready? Or
 is it lack of feedback from authors? Or something else?

I'm not one of the committers that pick up the most patches, but from
reading messages on the lists I think fairly often patches that are
marked as ready, aren't. Sometimes they require a small change, which
is fine, but more often than not once it hits a committer it ends up
with a lot of feedback requiring rather extensive changes. As in, it
technically works, but it's better to do it in a different way. I'm not
sure how to catch those better.


 Would it help at all to move all pending items (i.e. anything less than
 ready) from CF3 to CF4, just so that the committers have only one list
 to look at, while reviewers can work on the other? Only psychological,
 but maybe that's better than the current situation?

Yes, it would; they should've been bounced to the next commitfest when
the last one closed. The problem was we never closed it...


Tom also wrote:
 In case you hadn't noticed, we've totally lost control of the CF
 process.  Quite aside from the lack of progress on closing CF3, major
 hackers who should know better are submitting significant new feature
 patches now, despite our agreement in Ottawa that nothing big would be
 accepted after CF3.  At this point I'd bet against releasing 9.3 during
 2013.

Well, if we said that, why don't we just single out those patches and
bounce them right now, before people put more work into them.

We also talked about the one-patch-one-review. Did someone ever check
if that worked out - did we get that spread, or did we end up with the
same ratio as last time?



Re: [HACKERS] CF3+4 (was Re: Parallel query execution)

2013-01-16 Thread Magnus Hagander
On Wed, Jan 16, 2013 at 1:12 PM, Simon Riggs si...@2ndquadrant.com wrote:
 On 16 January 2013 08:21, Abhijit Menon-Sen a...@2ndquadrant.com wrote:
 At 2013-01-16 02:07:29 -0500, t...@sss.pgh.pa.us wrote:

 In case you hadn't noticed, we've totally lost control of
 the CF process.

 What can we do to get it back on track?

 "Totally lost control" is an overstatement. The current situation is
 that there are clearly more patches than people working on them; a
 situation that we've been in, to varying degrees, for all of the last
 9 years of my involvement with PostgreSQL.

 AFAIK there was no CF manager assigned to Nov2012 CF, so not
 surprising it is off track. Not sure we ever agreed on a process for
 assigning CF mgrs, so again, not surprising it is off track.

I don't think we have a process for it, and that's perhaps something
we need to discuss for the future.


 I would like to nominate Craig Ringer to be independent CF mgr for Jan2013 CF.

If he's willing to do that, then +1 from me.



[HACKERS] Re: [BUGS] BUG #7809: Running pg_dump on slave w/ streaming replication fails if there are unlogged tables

2013-01-15 Thread Magnus Hagander
On Tue, Jan 15, 2013 at 12:13 AM,  j...@tanga.com wrote:
 The following bug has been logged on the website:

 Bug reference:  7809
 Logged by:  Joe Van Dyk
 Email address:  j...@tanga.com
 PostgreSQL version: 9.2.2
 Operating system:   Ubuntu
 Description:

 Running pg_dump on a streaming replication slave with a database that has
 unlogged_tables will fail unless you provide the --no-unlogged-table-data
 option with the following (scary) error:

 pg_dump: Dumping the contents of table tracking_import_data failed:
 PQgetResult() failed.
 pg_dump: Error message from server: ERROR:  could not open file
 base/16388/190326: No such file or directory
 pg_dump: The command was: COPY public.tracking_import_data (uuid,
 tracking_number) TO stdout;

 (this guy  encountered the error as well:
 http://www.postgresql.org/message-id/de2de764-307d-4a23-a9a9-6608ac097...@ticketevolution.com
 )

 Could running pg_dump against a slave always use the
 --no-unlogged-table-data option?

That sounds like a pretty reasonable idea, I think. Should be easy
enough to figure out at an early stage, too.

That said, it wouldn't hurt if we could make that error a little less
scary. Instead of saying "could not open file", could we find a way to
say "this is an unlogged table on a slave; it's not going to work"?

We can fix pg_dump the easy way, but what about custom tools...
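
Until something is built in, a custom tool can detect the situation itself: `pg_is_in_recovery()` returns true on a standby, and that result can drive the flag. A minimal sketch follows; the `unlogged_flag` helper and the `mydb` database name are illustrative assumptions, not part of any shipped tool:

```shell
# Map pg_is_in_recovery() output ("t" on a standby, "f" on a master)
# to the pg_dump flag that skips unlogged table data. The helper name
# is an assumption for this sketch.
unlogged_flag() {
    if [ "$1" = "t" ]; then
        echo "--no-unlogged-table-data"
    fi
}

# Usage against a live server (mydb is a placeholder):
#   pg_dump $(unlogged_flag "$(psql -At -c 'SELECT pg_is_in_recovery()' mydb)") mydb
```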




Re: [HACKERS] pg_retainxlog for inclusion in 9.3?

2013-01-14 Thread Magnus Hagander
On Sat, Jan 5, 2013 at 3:11 PM, Magnus Hagander mag...@hagander.net wrote:
 On Fri, Jan 4, 2013 at 7:13 PM, Peter Eisentraut pete...@gmx.net wrote:
 On 1/3/13 12:30 PM, Robert Haas wrote:
 On Thu, Jan 3, 2013 at 11:32 AM, Magnus Hagander mag...@hagander.net 
 wrote:
 Any particular reason? It goes pretty tightly together with
 pg_receivexlog, which is why I'd prefer putting it alongside that one.
 But if you have a good argument against it, I can change my mind :)

 Mostly that it seems like a hack, and I suspect we may come up with a
 better way to do this in the future.

 It does seem like a hack.  Couldn't this be implemented with a backend
 switch instead?

 It definitely is a bit of a hack.

 I assume by backend switch you mean a GUC, right? If so, no, not easily.
 It's the archiver process that does the deleting, and this
 process does not have access to a full backend interface, e.g. the
 ability to run a query. We could make it look at the same data that's
 currently shown in pg_stat_replication through shared memory, but that
 would *only* work in the very simplest cases (e.g. a single
 pg_receivexlog and no other replication). The ability to run a custom
 SQL query is going to be necessary for anything a bit more advanced.


 Also, as a small practical matter, since this is a server-side program
 (since it's being used as archive_command), we shouldn't put it into the
 pg_basebackup directory, because that would blur the lines about what to
 install where, in particular for the translations.

 Good argument. That, along with it being a hack and the comment from
 Robert, means that maybe contrib/ is a better place for it, yes.

Here's a version for inclusion in /contrib.



pg_retainxlog.diff
Description: Binary data



Re: [HACKERS] pg_retainxlog for inclusion in 9.3?

2013-01-14 Thread Magnus Hagander
On Mon, Jan 14, 2013 at 5:56 PM, Robert Haas robertmh...@gmail.com wrote:
 On Fri, Jan 4, 2013 at 4:55 PM, Dimitri Fontaine dimi...@2ndquadrant.fr 
 wrote:
 Robert Haas robertmh...@gmail.com writes:
 Mostly that it seems like a hack, and I suspect we may come up with a
 better way to do this in the future.

 Do you have the specs of such better way? Would it be a problem to have
 both pg_retainxlog and the new way?

 Well, I think in the long term we are likely to want the master to
 have some kind of ability to track the positions of its slaves, even
 when they are disconnected.  And, optionally, to retain the WAL that
 they need, again even when they are disconnected.  If such an ability
 materializes, this will be moot (even as I think that pg_standby is
 now largely moot, at least for new installations, now that we have
 standby_mode=on).

I agree. But just as we had pg_standby for quite a while before we got
standby_mode=on, I believe we should have pg_retainxlog (or something
like it) until we have something more integrated.


