Re: [HACKERS] 9.2rc1 produces incorrect results

2012-09-05 Thread Vik Reykja
On Wed, Sep 5, 2012 at 6:09 AM, Tom Lane  wrote:

> I wrote:
> > I think probably the best fix is to rejigger things so that Params
> > assigned by different executions of SS_replace_correlation_vars and
> > createplan.c can't share PARAM_EXEC numbers.  This will result in
> > rather larger ecxt_param_exec_vals arrays at runtime, but the array
> > entries aren't very large, so I don't think it'll matter.
>
> Attached is a draft patch against HEAD for this.  I think it makes the
> planner's handling of outer-level Params far less squishy than it's ever
> been, but it is rather a large change.  Not sure whether to risk pushing
> it into 9.2 right now, or wait till after we cut 9.2.0 ... thoughts?
>

I am not in a position to know what's best for the project, but my
company can't upgrade (from 9.0) until this is fixed.  We'll wait for
9.2.1 if we have to.  After all, we skipped 9.1.


Re: [HACKERS] Cascading replication and recovery_target_timeline='latest'

2012-09-05 Thread Dimitri Fontaine
Heikki Linnakangas  writes:
> On 04.09.2012 03:02, Dimitri Fontaine wrote:
>> Heikki Linnakangas  writes:
>>> Hmm, I was thinking that when walsender gets the position it can send the
>>> WAL up to, in GetStandbyFlushRecPtr(), it could atomically check the current
>>> recovery timeline. If it has changed, refuse to send the new WAL and
>>> terminate. That would be a fairly small change, it would just close the
>>> window between requesting walsenders to terminate and them actually
>>> terminating.
>
> No, only cascading replication is affected. In non-cascading situation, the
> timeline never changes in the master. It's only in cascading mode that you
> have a problem, where the standby can cross timelines while it's replaying
> the WAL, and also sending it over to cascading standby.

It seems to me that this also applies to connecting a standby to a
newly promoted standby, since the timeline changes in that case as well.
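For context, the setup under discussion is a standby that follows another
standby (or a newly promoted one) and asks for the latest timeline; a
minimal 9.2-era recovery.conf sketch (hostnames are placeholders, not from
this thread):

```
# recovery.conf on the downstream standby (9.2-era settings)
standby_mode = 'on'
primary_conninfo = 'host=upstream-standby port=5432 user=replication'
recovery_target_timeline = 'latest'
```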

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] State of the on-disk bitmap index

2012-09-05 Thread Albe Laurenz
Daniel Bausch wrote:
> I am going to implement a simple kind of "encoded bitmap indexes" (EBI).

> I thought, it could be a good idea to base my work on the long proposed
> on-disk bitmap index implementation.  Regarding to the wiki, you, Jonah
> and Simon, were the last devs that touched this thing.  Unfortunately I
> could not find the patch representing your state of that work.  I could
> only capture the development history up to Gianni Ciolli & Gabriele
> Bartolini from the old pgsql-patches archives.  Other people involved
> were Jie Zhang, Gavin Sherry, Heikki Linnakangas, and Leonardo F.  Are
> you and the others still interested in getting this into PG?  A rebase
> of the most current bitmap index implementation onto master HEAD will
> be the first 'byproduct' that I am going to deliver back to you.
>
> 1. Is anyone working on this currently?
> 2. Who has got the most current source code?
> 3. Is there a git of that or will I need to reconstruct the history
> from the patches I collected?

It seems like you did not get any answers from any of the
people you mentioned ...

The latest version of the patch I found is
http://archives.postgresql.org/pgsql-patches/2006-12/msg00015.php
So that's probably the best you can get.

I want to encourage you to work on this.

You'd have to come up with a sound concept and discuss it on this
list, and it would be helpful to have some draft patch for
git master that can be used as a basis for discussion.

Expect to meet some resistance.  Nobody will want the extra
code and complexity unless you can show sufficient benefits.

One concern that came up in previous discussions is that
bitmap indexes are only useful for columns with low cardinality,
and in that case the result will likely be a significant portion
of the table anyway and a sequential scan would be faster.
I think that this is less true when several conditions are combined,
and that is supposedly the case where encoded bitmap indexes
work better anyway.

Another criticism I can imagine is that PostgreSQL already
supports a bitmap index scan of b-tree indexes, so you would
have to show that on-disk bitmap indexes outperform that
in realistic scenarios.  This has probably become more
difficult with the recently introduced index-only scan
for b-tree indexes, which is particularly helpful in
data warehouse scenarios.
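The existing in-memory bitmap combination mentioned above can be seen with
a quick sketch like the following (illustrative only; table and column
names are made up here, and the plan shape depends on table size and
statistics):

```sql
-- Two low-cardinality columns, each with an ordinary b-tree index.
CREATE TABLE facts (region int, status int, payload text);
CREATE INDEX ON facts (region);
CREATE INDEX ON facts (status);

-- On a large, analyzed table, the planner can answer this with a
-- Bitmap Heap Scan over a BitmapAnd of two Bitmap Index Scans,
-- building the bitmaps in memory at query time.
EXPLAIN SELECT count(*) FROM facts WHERE region = 3 AND status = 7;
```

An on-disk bitmap index would have to beat this built-in combination in
realistic scenarios to justify the extra code.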

So you'd have to run some performance tests against a draft
implementation to get people convinced that it is worth the
effort.  Supporting index-only scans would probably give
you an edge.

Yours,
Laurenz Albe




Re: [HACKERS] 9.2rc1 produces incorrect results

2012-09-05 Thread Thom Brown
On 5 September 2012 05:09, Tom Lane  wrote:
> I wrote:
>> I think probably the best fix is to rejigger things so that Params
>> assigned by different executions of SS_replace_correlation_vars and
>> createplan.c can't share PARAM_EXEC numbers.  This will result in
>> rather larger ecxt_param_exec_vals arrays at runtime, but the array
>> entries aren't very large, so I don't think it'll matter.
>
> Attached is a draft patch against HEAD for this.  I think it makes the
> planner's handling of outer-level Params far less squishy than it's ever
> been, but it is rather a large change.  Not sure whether to risk pushing
> it into 9.2 right now, or wait till after we cut 9.2.0 ... thoughts?

Just so someone else has tested the case in question, here's the
result at this end:

 id | array
----+-------
  1 | {1}
  1 | {1}
(2 rows)


QUERY PLAN
---
 Result  (cost=131.45..133.07 rows=8 width=36)
   CTE a
 ->  Nested Loop  (cost=87.18..131.09 rows=7 width=4)
   ->  Merge Right Join  (cost=87.18..123.33 rows=7 width=4)
 Merge Cond: (((pg_c.relname)::text) = ((t2.id)::text))
 Filter: (pg_c.oid IS NULL)
 ->  Sort  (cost=22.82..23.55 rows=291 width=68)
   Sort Key: ((pg_c.relname)::text)
   ->  Seq Scan on pg_class pg_c
(cost=0.00..10.91 rows=291 width=68)
 ->  Sort  (cost=64.36..66.84 rows=993 width=4)
   Sort Key: ((t2.id)::text)
   ->  Seq Scan on t2  (cost=0.00..14.93 rows=993 width=4)
   ->  Index Only Scan using t1_pkey on t1  (cost=0.00..1.10
rows=1 width=4)
 Index Cond: (id = t2.id)
   CTE b
 ->  WindowAgg  (cost=0.24..0.36 rows=7 width=4)
   ->  Sort  (cost=0.24..0.26 rows=7 width=4)
 Sort Key: a.id
 ->  CTE Scan on a  (cost=0.00..0.14 rows=7 width=4)
   ->  Append  (cost=0.00..1.62 rows=8 width=36)
 ->  CTE Scan on a  (cost=0.00..0.77 rows=4 width=4)
   Filter: is_something
   SubPlan 3
 ->  CTE Scan on b  (cost=0.00..0.16 rows=1 width=4)
   Filter: (id = a.id)
 ->  CTE Scan on a  (cost=0.00..0.77 rows=4 width=4)
   Filter: is_something
   SubPlan 4
 ->  CTE Scan on b  (cost=0.00..0.16 rows=1 width=4)
   Filter: (id = a.id)
(30 rows)

As for shipping without the fix, I'm torn on whether to do so or not.
I imagine most production deployments will wait for a .1 or .2 release,
and use .0 for migration testing.  Plus this bug hasn't been hit (or at
least not noticed) during 5 releases of 9.1, and there isn't enough time
left before shipping to expose the changes to enough testing in the
areas affected, so I'd be slightly inclined to push this into 9.1.6
and 9.2.1.

Regards

Thom




Re: [HACKERS] State of the on-disk bitmap index

2012-09-05 Thread Daniel Bausch
Hi Albe and the list,

>> I am going to implement a simple kind of "encoded bitmap indexes" (EBI).
>> 
>> I thought, it could be a good idea to base my work on the long proposed
>> on-disk bitmap index implementation.  Regarding to the wiki, you,
>> Jonah and Simon, were the last devs that touched this thing.  Unfortunately
>> I could not find the patch representing your state of that work.  I
>> could only capture the development history up to Gianni Ciolli & Gabriele
>> Bartolini from the old pgsql-patches archives.  Other people involved
>> were Jie Zhang, Gavin Sherry, Heikki Linnakangas, and Leonardo F.  Are
>> you and the others still interested in getting this into PG?  A rebase
>> of the most current bitmap index implementation onto master HEAD will
>> be the first 'byproduct' that I am going to deliver back to you.
>>
>> 1. Is anyone working on this currently?
>> 2. Who has got the most current source code?
>> 3. Is there a git of that or will I need to reconstruct the history
>> from the patches I collected?
> 
> It seems like you did not get any answers from any of the
> people you mentioned ...
> 
> The latest version of the patch I found is
> http://archives.postgresql.org/pgsql-patches/2006-12/msg00015.php
> So that's probably the best you can get.
> 
> I want to encourage you to work on this.

Yes, I do.  Thank you for your support.

I used the (more recent) patches posted by Gianni Ciolli in 2008 and
am currently in the process of porting them to master HEAD -- as I
promised.  I will post the ported patches once I get them to compile
and the index appears to work (at least somehow).

Nevertheless, I am still interested in what Simon, Jonah, and Leonardo
did after that point in time.  So if someone knows details (code) about
their solutions to, for example, the VACUUM problems, please mail back.

> You'd have to come up with a sound concept and discuss it on this
> list, and it would be helpful to have some draft patch for
> git master that can be used as a basis for discussion.
> 
> Expect to meet some resistance.  Nobody will want the extra
> code and complexity unless you can show suffitient benefits.

If no one wants it, that would be sad.  However, I will at least do all
the work required to run benchmark queries against it.  Nevertheless, I
appreciate any help.

Indeed, the patch is a big one, and the approach seems a bit hacky in
some places.  I also suspect that the compression approach could be
improved or replaced by something more efficient compression-wise.
However, I could never have come up with a complete solution of my own
in the time available for my current project.

> Another criticism I can imagine is that PostgreSQL already
> supports a bitmap index scan of b-tree indexes, so you would
> have to show that on-disk bitmap indexes outperform that
> in realistic scenarios.  This has probably become more
> difficult with the recently introduced index-only scan
> for b-tree indexes, which is particularly helpful in
> data warehouse scenarios.

IIRC, it was already shown that bitmap indexes can speed up TPC-H
queries.  I will compare B+-tree, bitmap, and encoded bitmap indexes.

> So you'd have to run some performance tests against a draft
> implementation to get people convinced that it is worth the
> effort.  Supporting index-only scans Would probably give
> you an edge.

Yes, I will, because I am going to write about it.

Kind regards,
Daniel

-- 
Daniel Bausch
Wissenschaftlicher Mitarbeiter
Technische Universität Darmstadt
Fachbereich Informatik
Fachgebiet Datenbanken und Verteilte Systeme

Hochschulstraße 10
64289 Darmstadt
Germany

Tel.: +49 6151 16 6706
Fax:  +49 6151 16 6229




[HACKERS] fixing variadic parameters for type "ANY"

2012-09-05 Thread Pavel Stehule
Hello

Our customer reported an issue with the "format" function -
http://archives.postgresql.org/pgsql-bugs/2012-09/msg00011.php.

This issue is related to our implementation of variadic functions -
there is a gap in the implemented functionality - a parameter cannot be
marked as VARIADIC in a function call when the variadic parameter is of
type "ANY".

In this case the VARIADIC keyword is quietly ignored now, which is a bug.

I thought about some solution in relation to the function format.  Sometimes

SELECT format(' a = %s, b = %s', VARIADIC ARRAY[10,20])

can be useful.

But the implementation is relatively harder - because for type "ANY" we
don't do any magic with the parameters - in this case we would have to
unpack the array and append the unpacked values to the end of the
parameter list.  But that cannot be done, because then the function
signature cannot be calculated at analysis time - and that causes
problems across function-call processing.  So I propose enhancing
FuncExpr and fmgr_info with a boolean "arrayva" field that is true when
the function should expand the variadic parameter by itself.
Polymorphic functions with "ANY" parameters already have to be careful
when they use parameters, so this is not a significant difference from
the current design.
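A sketch of the behaviour being described (hypothetical session; the exact
error or result depends on the server version and whether the fix is
applied):

```sql
-- format() is declared as format(text, VARIADIC "any").
-- Without the fix, the VARIADIC keyword in the call is quietly
-- ignored, so the whole array arrives as one argument instead of
-- being expanded into two:
SELECT format(' a = %s, b = %s', VARIADIC ARRAY[10, 20]);
-- intended result once fixed: ' a = 10, b = 20'
```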

Regards

Pavel




Re: [HACKERS] Proof of concept: standalone backend with full FE/BE protocol

2012-09-05 Thread Magnus Hagander
On Wed, Sep 5, 2012 at 7:31 AM, Amit Kapila  wrote:
> On Tuesday, September 04, 2012 12:40 AM Tom Lane wrote:
> Magnus Hagander  writes:
>> On Mon, Sep 3, 2012 at 8:51 PM, Tom Lane  wrote:
 I have another question after thinking about that for awhile: is there
 any security concern there?  On Unix-oid systems, we expect the kernel
 to restrict who can do a kill() on a postgres process.  If there's any
 similar restriction on who can send to that named pipe in the Windows
 version, it's not obvious from the code.  Do we have/need any
 restriction there?
>
>>> We use the default for CreateNamedPipe() which is:
>>> " The ACLs in the default security descriptor for a named pipe grant
>>> full control to the LocalSystem account, administrators, and the
>>> creator owner. They also grant read access to members of the Everyone
>>> group and the anonymous account."
>>> (ref:
>>> http://msdn.microsoft.com/en-us/library/windows/desktop/aa365150(v=vs.85).aspx)
>
>> Hm.  The write protections sound fine ... but what's the semantics of
>> reading, is it like Unix pipes?  If so, couldn't a random third party
>> drain the pipe by reading from it, and thereby cause signals to be lost?
>
>   When a client connects to the server-end of a named pipe, the
>   server-end of the pipe is now dedicated to the client.  No more
>   connections will be allowed to that server pipe instance until the
>   client has disconnected.

This is the main argument, yes.  Each client gets its own copy, so it
can't get drained.

>   So I think based on above 2 points it can be deduced that the signal sent
> by pgkill() cannot be read by anyone else.

Agreed.

Well, what someone else could do is create a pipe with our name before
we do (since we use the actual name - it's \\.\pipe\pgsignal_), by
guessing what pid we will have. If that happens, we'll go into a loop
and try to recreate it while logging a warning message to
eventlog/stderr (this happens for every backend). We can't throw an
error on this and kill the backend, because the pipe is created in the
background thread, not the main one.

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/




Re: [HACKERS] State of the on-disk bitmap index

2012-09-05 Thread Gianni Ciolli
Dear Albe and Daniel,

On Wed, Sep 05, 2012 at 11:28:18AM +0200, Daniel Bausch wrote:
> Hi Albe and the list,
> 
> >> I am going to implement a simple kind of "encoded bitmap indexes" (EBI).
> >> 
> >> I thought, it could be a good idea to base my work on the long proposed
> >> on-disk bitmap index implementation.  Regarding to the wiki, you,
> >> Jonah and Simon, were the last devs that touched this thing.  Unfortunately
> >> I could not find the patch representing your state of that work.  I
> >> could only capture the development history up to Gianni Ciolli & Gabriele
> >> Bartolini from the old pgsql-patches archives.  Other people involved
> >> were Jie Zhang, Gavin Sherry, Heikki Linnakangas, and Leonardo F.  Are
> >> you and the others still interested in getting this into PG?  A rebase
> >> of the most current bitmap index implementation onto master HEAD will
> >> be the first 'byproduct' that I am going to deliver back to you.
> >>
> >> 1. Is anyone working on this currently?
> >> 2. Who has got the most current source code?
> >> 3. Is there a git of that or will I need to reconstruct the history
> >> from the patches I collected?
> > 
> > It seems like you did not get any answers from any of the
> > people you mentioned ...

My fault: I missed the questions in August, but today my colleague
Gabriele drew my attention to them. I apologise.

> I used the (more recent) patches posted by Gianni Ciolli in 2008 and
> currently am in the process of porting those to master HEAD -- like I
> promised.

Back in 2008 the PostgreSQL project wasn't using git, and I wasn't
either; hence that patch is the best starting point I can find.

> > Another criticism I can imagine is that PostgreSQL already
> > supports a bitmap index scan of b-tree indexes, so you would
> > have to show that on-disk bitmap indexes outperform that
> > in realistic scenarios.  This has probably become more
> > difficult with the recently introduced index-only scan
> > for b-tree indexes, which is particularly helpful in
> > data warehouse scenarios.
> 
> IIRC, it was already shown that bitmap indexes can speed up TPC-H
> queries.  I will compare B+-tree, bitmap, and encoded bitmap indexes.

I think what Albe meant (also what we attempted back then, if memory
serves me, but without reaching completion) is a set of tests which
show a significant benefit of your implementation over the existing
index type implementations in PostgreSQL, to justify the increased
complexity of the source code.

The kind of test I have in mind is: a big table T with a
low-cardinality column C, such that a btree index on C is
significantly larger than the corresponding bitmap index on the same
column.

Create the bitmap index, and run a query like

  SELECT ... FROM T WHERE C = ... 

more than once; then you should notice that subsequent scans are much
faster than the first run, because the index is small enough to fit in
shared memory and will not need to be reloaded from disk at every
scan.

Then drop the bitmap index, and create a btree index on the same
column; this time the index will be too large and subsequent scans
will be slow, because the index blocks must be reloaded from disk at
every scan.
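The test described above could be sketched like this (hypothetical names
and sizes; the `USING bitmap` access-method syntax assumes the patch under
discussion is installed, and is not part of stock PostgreSQL):

```sql
-- Big table with a low-cardinality column C.
CREATE TABLE t AS
  SELECT (random() * 4)::int AS c, repeat('x', 100) AS payload
  FROM generate_series(1, 10000000);

-- Patched server only: on-disk bitmap index on the low-cardinality column.
CREATE INDEX t_c_bmi ON t USING bitmap (c);
-- Run twice; the second scan should be much faster once the small
-- bitmap index is cached in shared memory.
EXPLAIN ANALYZE SELECT count(*) FROM t WHERE c = 2;
EXPLAIN ANALYZE SELECT count(*) FROM t WHERE c = 2;

-- Now compare with a b-tree index on the same column; being much
-- larger, it may not stay cached, so repeated scans stay slow.
DROP INDEX t_c_bmi;
CREATE INDEX t_c_btree ON t (c);
EXPLAIN ANALYZE SELECT count(*) FROM t WHERE c = 2;
```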

Hope that helps;
best regards,
Dr. Gianni Ciolli - 2ndQuadrant Italia
PostgreSQL Training, Services and Support
gianni.cio...@2ndquadrant.it | www.2ndquadrant.it




Re: [HACKERS] [bugfix] sepgsql didn't follow the latest core API changes

2012-09-05 Thread Kohei KaiGai
2012/9/3 Alvaro Herrera :
> Excerpts from Kohei KaiGai's message of dom sep 02 15:53:22 -0300 2012:
>> This patch fixes a few portions on which sepgsql didn't follow the latest
>> core API changes.
>
> I think you should get a buildfarm animal installed that builds and
> tests sepgsql, to avoid this kind of problem in the future.
>
Thanks for your suggestion.  I'm interested in that.

http://wiki.postgresql.org/wiki/PostgreSQL_Buildfarm_Howto

Does it test only build correctness?  Or is it possible to also run
the regression tests and report their results?
-- 
KaiGai Kohei 




Re: [HACKERS] State of the on-disk bitmap index

2012-09-05 Thread Daniel Bausch
Hi Gianni!

Thank you for your attention and response!

>> I used the (more recent) patches posted by Gianni Ciolli in 2008 and
>> currently am in the process of porting those to master HEAD -- like I
>> promised.
> 
> Back in 2008 the PostgreSQL project wasn't using git, and I wasn't
> either; hence that patch is the best starting point I can find.

Ok, fine.  However, while I cannot find the mail at the moment, I think
someone said he had fixed the VACUUM problems.  Additionally, the wiki
lists Simon and Jonah as the last authors, suggesting they prepared a
patch for 8.5.

>> IIRC, it was already shown that bitmap indexes can speed up TPC-H
>> queries.  I will compare B+-tree, bitmap, and encoded bitmap indexes.
> 
> I think what Albe meant (also what we attempted back then, if memory
> serves me, but without reaching completion) is a set of tests which
> show a significant benefit of your implementation over the existing
> index type implementations in PostgreSQL, to justify the increased
> complexity of the source code.
> 
> The kind of test I have in mind is: a big table T with a
> low-cardinality column C, such that a btree index on C is
> significantly larger than the corresponding bitmap index on the same
> column.
> 
> Create the bitmap index, and run a query like
> 
>   SELECT ... FROM T WHERE C = ... 
> 
> more than once; then you should notice that subsequent scans are much
> faster than the first run, because the index is small enough to fit
> the shared memory and will not need to be reloaded from disk at every
> scan.
> 
> Then drop the bitmap index, and create a btree index on the same
> column; this time the index will be too large and subsequent scans
> will be slow, because the index blocks must be reloaded from disk at
> every scan.
> 
> Hope that helps;

Is that what your bmi-perf-test.tar.gz from 2008 does?  I did not look
into it.  I will at least do something like you just described, plus
some TPC-H tests.  As the encoding helps against the cardinality
problems, I will also draw comparisons across different cardinalities.

Yours sincerely,
Daniel

-- 
Daniel Bausch
Wissenschaftlicher Mitarbeiter
Technische Universität Darmstadt
Fachbereich Informatik
Fachgebiet Datenbanken und Verteilte Systeme

Hochschulstraße 10
64289 Darmstadt
Germany

Tel.: +49 6151 16 6706
Fax:  +49 6151 16 6229




Re: [HACKERS] Proof of concept: standalone backend with full FE/BE protocol

2012-09-05 Thread Amit Kapila
On Wednesday, September 05, 2012 3:58 PM Magnus Hagander wrote:
On Wed, Sep 5, 2012 at 7:31 AM, Amit Kapila  wrote:
> On Tuesday, September 04, 2012 12:40 AM Tom Lane wrote:
> Magnus Hagander  writes:
>> On Mon, Sep 3, 2012 at 8:51 PM, Tom Lane  wrote:
> I have another question after thinking about that for awhile: is there
> any security concern there?  On Unix-oid systems, we expect the kernel
> to restrict who can do a kill() on a postgres process.  If there's any
> similar restriction on who can send to that named pipe in the Windows
> version, it's not obvious from the code.  Do we have/need any
> restriction there?
>
 We use the default for CreateNamedPipe() which is:
 " The ACLs in the default security descriptor for a named pipe grant
 full control to the LocalSystem account, administrators, and the
 creator owner. They also grant read access to members of the Everyone
 group and the anonymous account."
 (ref:
 http://msdn.microsoft.com/en-us/library/windows/desktop/aa365150(v=vs.85).aspx)
>
>>> Hm.  The write protections sound fine ... but what's the semantics of
>>> reading, is it like Unix pipes?  If so, couldn't a random third party
>>> drain the pipe by reading from it, and thereby cause signals to be lost?
>
>>   When a client connects to the server-end of a named pipe, the
>>   server-end of the pipe is now dedicated to the client.  No more
>>   connections will be allowed to that server pipe instance until the
>>   client has disconnected.

> This is the main argument, yes.  Each client gets its own copy, so it
> can't get drained.

>>   So I think based on above 2 points it can be deduced that the signal
>>   sent by pgkill() cannot be read by anyone else.

> Agreed.

> Well, what someone else could do is create a pipe with our name before
> we do (since we use the actual name - it's \\.\pipe\pgsignal_), by
> guessing what pid we will have. If that happens, we'll go into a loop
> and try to recreate it while logging a warning message to
> eventlog/stderr (this happens for every backend). We can't throw an
> error on this and kill the backend, because the pipe is created in the
> background thread, not the main one.

  Once it is detected that a pipe with the same name already exists,
there are the following options:

  a. Try to create the pipe with some other name; but in that case, how
     do we communicate the new name to the client end of the pipe?  Some
     solution could be devised if this approach seems reasonable, though
     currently I don't have one in mind.
  b. Give an error, as creation of the pipe generally happens at the
     beginning of process (backend) creation -- but you already mentioned
     that this is not a good approach.
  c. Any other better solution?

With Regards,
Amit Kapila.





Re: [HACKERS] [bugfix] sepgsql didn't follow the latest core API changes

2012-09-05 Thread Alvaro Herrera
Excerpts from Kohei KaiGai's message of mié sep 05 08:30:37 -0300 2012:
> 2012/9/3 Alvaro Herrera :
> > Excerpts from Kohei KaiGai's message of dom sep 02 15:53:22 -0300 2012:
> >> This patch fixes a few portions on which sepgsql didn't follow the latest
> >> core API changes.
> >
> > I think you should get a buildfarm animal installed that builds and
> > tests sepgsql, to avoid this kind of problem in the future.
> >
> Thanks for your suggestion. I'm interested in.
> 
> http://wiki.postgresql.org/wiki/PostgreSQL_Buildfarm_Howto
> 
> Does it test only build correctness?  Or is it possible to also run
> the regression tests and report their results?

Yes, regression test diffs are also reported and can cause failures.
As far as I know, you can construct your own test steps, if you want to
do something customized that's not present in regular BF animals.

-- 
Álvaro Herrerahttp://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] pg_upgrade diffs on WIndows

2012-09-05 Thread Bruce Momjian
On Tue, Sep  4, 2012 at 03:44:35PM -0400, Andrew Dunstan wrote:
> 
> On 09/04/2012 03:09 PM, Andrew Dunstan wrote:
> >I realized this morning that I might have been a bit cavalier in
> >using dos2unix to smooth away differences in the dumpfiles
> >produced by pg_upgrade. Attached is a dump of the diff if this
> >isn't done,  with Carriage Returns printed as '*' to make them
> >visible. As can be seen, in function bodies dump2 has the Carriage
> >Returns doubled. I have not had time to delve into how this comes
> >about, and I need to attend to some income-producing activity for
> >a bit, but I'd like to get it cleaned up ASAP. We are under the
> >hammer for 9.2, so any help other people can give on this would be
> >appreciated.
> >
> 
> 
> Actually, I have the answer - it's quite simple. We just need to
> open the output files in binary mode when we split the dumpall file.
> The attached patch fixes it. I think we should backpatch the first
> part to 9.0.

> diff --git a/contrib/pg_upgrade/dump.c b/contrib/pg_upgrade/dump.c
> index b905ab0..0a96dde 100644
> --- a/contrib/pg_upgrade/dump.c
> +++ b/contrib/pg_upgrade/dump.c
> @@ -62,10 +62,10 @@ split_old_dump(void)
>   if ((all_dump = fopen(filename, "r")) == NULL)
>   pg_log(PG_FATAL, "Could not open dump file \"%s\": %s\n", 
> filename, getErrorText(errno));
>   snprintf(filename, sizeof(filename), "%s", GLOBALS_DUMP_FILE);
> - if ((globals_dump = fopen_priv(filename, "w")) == NULL)
> + if ((globals_dump = fopen_priv(filename, PG_BINARY_W)) == NULL)
>   pg_log(PG_FATAL, "Could not write to dump file \"%s\": %s\n", 
> filename, getErrorText(errno));
>   snprintf(filename, sizeof(filename), "%s", DB_DUMP_FILE);
> - if ((db_dump = fopen_priv(filename, "w")) == NULL)
> + if ((db_dump = fopen_priv(filename, PG_BINARY_W)) == NULL)
>   pg_log(PG_FATAL, "Could not write to dump file \"%s\": %s\n", 
> filename, getErrorText(errno));
>  
>   current_output = globals_dump;
> diff --git a/contrib/pg_upgrade/test.sh b/contrib/pg_upgrade/test.sh
> index d411ac6..3899600 100644
> --- a/contrib/pg_upgrade/test.sh
> +++ b/contrib/pg_upgrade/test.sh
> @@ -128,10 +128,6 @@ else
>   sh ./delete_old_cluster.sh
>  fi
>  
> -if [ $testhost = Msys ] ; then
> -   dos2unix "$temp_root"/dump1.sql "$temp_root"/dump2.sql
> -fi
> -
>  if diff -q "$temp_root"/dump1.sql "$temp_root"/dump2.sql; then
>   echo PASSED
>   exit 0

I reviewed this idea and support this patch's inclusion in 9.2.  I was
unclear why it was needed, but I see that pg_dumpall, which produces the
file pg_upgrade splits apart, also uses binary mode to write this file:

OPF = fopen(filename, PG_BINARY_W);

I agree with Tom that pg_upgrade needs some quiet time.  ;-)  Andrew,
have a sufficient number of buildfarm members verified our recent
patches so that this can be added?  My patch from last night was mostly
C comments, so it isn't something that needs testing.
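[Editorial note: the doubled carriage returns come from writing lines that
already end in CRLF through a stream that translates '\n' to '\r\n', which
is what a text-mode fopen() does on Windows. A minimal Python illustration
of the mechanism, not PostgreSQL code:]

```python
import io

# A writer that translates '\n' -> '\r\n' on write, like a file
# opened in text mode on Windows.
buf = io.StringIO(newline='\r\n')

# pg_dumpall wrote the dump in binary mode, so its lines already end
# in CRLF; translating the '\n' again yields '\r\r\n' (doubled CR).
buf.write('CREATE FUNCTION f() ...\r\n')
print(repr(buf.getvalue()))  # 'CREATE FUNCTION f() ...\r\r\n'
```

Opening the split files with PG_BINARY_W, as the patch does, skips the
translation and preserves the bytes pg_dumpall wrote.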

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +




Re: [HACKERS] pg_upgrade diffs on WIndows

2012-09-05 Thread Andrew Dunstan


On 09/05/2012 09:11 AM, Bruce Momjian wrote:


> I reviewed this idea and support this patch's inclusion in 9.2.  I was
> unclear why it was needed, but I see that pg_dumpall, which produces the
> file pg_upgrade splits apart, also uses binary mode to write this file:
>
>     OPF = fopen(filename, PG_BINARY_W);
>
> I agree with Tom that pg_upgrade needs some quiet time.  ;-)  Andrew,
> have a sufficient number of buildfarm members verified our recent
> patches so that this can be added?  My patch from last night was mostly
> C comments, so it isn't something that needs testing.



I am quite happy not committing anything for now.

There are two buildfarm members doing pg_upgrade tests: crake (Fedora 
16) and pitta (Windows/Mingw64). The buildfarm code is experimental and 
not in any release yet, and when it is the test will be optional.


The PG_BINARY_W change has only been verified on a non-buildfarm setup
on my laptop (Mingw).


Note that while it does look like there's a bug either in pg_upgrade or 
pg_dumpall, it's probably mostly harmless (adding some spurious CRs to 
function code bodies on Windows). I'd feel happier if it didn't, and 
happier still if I knew for sure the ultimate origin. Your pg_dumpall 
discovery above is interesting. I might have time later on today to 
delve into all this. I'm out of contact for the next few hours.


cheers

andrew







Re: [HACKERS] [bugfix] sepgsql didn't follow the latest core API changes

2012-09-05 Thread Andrew Dunstan


On 09/05/2012 09:11 AM, Alvaro Herrera wrote:

Excerpts from Kohei KaiGai's message of mié sep 05 08:30:37 -0300 2012:

2012/9/3 Alvaro Herrera :

Excerpts from Kohei KaiGai's message of dom sep 02 15:53:22 -0300 2012:

This patch fixes a few places where sepgsql didn't follow the latest
core API changes.

I think you should get a buildfarm animal installed that builds and
tests sepgsql, to avoid this kind of problem in the future.


Thanks for your suggestion.  I'm interested in it.

http://wiki.postgresql.org/wiki/PostgreSQL_Buildfarm_Howto

Does it test only build correctness?  Or is it possible to also report
regression test results, so that failures raise an alarm?

Yes, regression test diffs are also reported and can cause failures.
As far as I know, you can construct your own test steps, if you want to
do something customized that's not present in regular BF animals.



Looking at sepgsql testing is on my long TODO list. I'll have to set up 
a separate VM for it, as I don't habitually run SELinux.


cheers

andrew





Re: [HACKERS] plperl sigfpe reset can crash the server

2012-09-05 Thread Andres Freund
On Sunday, August 26, 2012 06:10:02 PM Andres Freund wrote:
> On Saturday, August 25, 2012 06:38:09 AM Tom Lane wrote:
> > Andres Freund  writes:
> > > Doing a pqsignal(SIGFPE, FloatExceptionHandler) after PERL_SYS_INIT3
> > > seems to work. Is that acceptable?
> > 
> > Surely that's breaking perl's expectations, to more or less the same
> > degree they're breaking ours?
> 
> In the referenced bug they agree that this is the way forward.
As nobody has any better ideas, here is a patch doing that:


-- 
Andres Freund   http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
From c5583861a98c6e5c26961d6346c5b5abc699f90d Mon Sep 17 00:00:00 2001
From: Andres Freund 
Date: Wed, 5 Sep 2012 16:04:41 +0200
Subject: [PATCH] Reset SIGFPE handler after plperl initialization

Unfortunately perl resets the SIGFPE handler to SIG_IGN, which is
bad for two reasons: first, we no longer get a nice error message
if a SIGFPE is generated by arithmetic at the SQL level; second,
setting SIGFPE to SIG_IGN is strongly discouraged by POSIX and
invokes undefined behaviour.  On Linux, this undefined behaviour
manifests as resetting the SIGFPE handler and killing the
triggering process.

In perl bug 114574 the perl developers agree that the correct
approach is to just reset the SIGFPE handler.

On some platforms this fixes a server crash with: SELECT (-(2^31))::int/-1;
---
 src/pl/plperl/plperl.c |   16 
 1 file changed, 16 insertions(+)

diff --git a/src/pl/plperl/plperl.c b/src/pl/plperl/plperl.c
index b31e965..f4b2fa9 100644
--- a/src/pl/plperl/plperl.c
+++ b/src/pl/plperl/plperl.c
@@ -28,6 +28,7 @@
 #include "nodes/makefuncs.h"
 #include "parser/parse_type.h"
 #include "storage/ipc.h"
+#include "tcop/tcopprot.h"
 #include "utils/builtins.h"
 #include "utils/fmgroids.h"
 #include "utils/guc.h"
@@ -743,6 +744,21 @@ plperl_init_interp(void)
 			perl_sys_init_done = 1;
 			/* quiet warning if PERL_SYS_INIT3 doesn't use the third argument */
 			dummy_env[0] = NULL;
+
+			/*
+			 * Unfortunately perl resets the sigfpe handler to SIG_IGN which is
+			 * bad for two reasons: First, we don't get a nice error message
+			 * anymore if a SIGFPE is generated via math on the sql level,
+			 * secondly setting SIGFPE to SIG_IGN is strongly discouraged by
+			 * posix and invokes undefined behaviour according to it.
+			 * At least linux defines this undefined behaviour as resetting the
+			 * SIGFPE handler and killing the triggering process.
+			 *
+			 * In perl bug 114574 the perl developers agree that the correct
+			 * approach is to just reset the SIGFPE handler.
+			 */
+			pqsignal(SIGFPE, FloatExceptionHandler);
+
 		}
 	}
 #endif
-- 
1.7.10.4




Re: [HACKERS] Cascading replication and recovery_target_timeline='latest'

2012-09-05 Thread Heikki Linnakangas

On 05.09.2012 01:03, Dimitri Fontaine wrote:

Heikki Linnakangas  writes:

On 04.09.2012 03:02, Dimitri Fontaine wrote:

Heikki Linnakangas   writes:

Hmm, I was thinking that when walsender gets the position it can send the
WAL up to, in GetStandbyFlushRecPtr(), it could atomically check the current
recovery timeline. If it has changed, refuse to send the new WAL and
terminate. That would be a fairly small change, it would just close the
window between requesting walsenders to terminate and them actually
terminating.


No, only cascading replication is affected. In non-cascading situation, the
timeline never changes in the master. It's only in cascading mode that you
have a problem, where the standby can cross timelines while it's replaying
the WAL, and also sending it over to cascading standby.


It seems to me that it applies to connecting a standby to a newly
promoted standby too, as the timeline did change in this case too.


I was worried about that too at first, but Fujii pointed out that's OK: 
see last paragraph at 
http://archives.postgresql.org/pgsql-hackers/2012-08/msg01203.php.


If you connect to a standby that was already promoted to new master, 
it's no different from connecting to a master in general. It works. If 
you connect just before a standby is promoted, it works because a 
cascading standby pays attention to the recovery target timeline, and 
the pointer to last replayed WAL record. Promoting a standby doesn't 
change recovery target timeline or the last replayed WAL record, it sets 
XLogCtl->ThisTimeLineID. So the walsender in cascading mode will send 
the WAL up to where the promotion happened, but will stop there until 
it's terminated by the signal.


- Heikki




Re: [HACKERS] 9.2rc1 produces incorrect results

2012-09-05 Thread Tom Lane
Thom Brown  writes:
> On 5 September 2012 05:09, Tom Lane  wrote:
>> Attached is a draft patch against HEAD for this.  I think it makes the
>> planner's handling of outer-level Params far less squishy than it's ever
>> been, but it is rather a large change.  Not sure whether to risk pushing
>> it into 9.2 right now, or wait till after we cut 9.2.0 ... thoughts?

> As for shipping without the fix, I'm torn on whether to do so or not.
> I imagine most production deployments will wait for a .1 or .2 release, and use
> .0 for migration testing.  Plus this bug hasn't been hit (or at least
> not noticed) during 5 releases of 9.1, and there isn't enough time
> left before shipping to expose the changes to enough testing in the
> areas affected, so I'd be slightly inclined to push this into 9.1.6
> and 9.2.1.

Yeah, after sleeping on it that's my feeling as well.  The patch needs
some rework for back branches anyway (since a nontrivial part of it
is touching LATERAL support that doesn't exist before HEAD).  I'll
push the fix to HEAD but wait till after 9.2.0 wrap for the back
branches.

regards, tom lane




Re: [HACKERS] Cascading replication and recovery_target_timeline='latest'

2012-09-05 Thread Dimitri Fontaine
Heikki Linnakangas  writes:
> I was worried about that too at first, but Fujii pointed out that's OK: see
> last paragraph at
> http://archives.postgresql.org/pgsql-hackers/2012-08/msg01203.php.

Mmm, ok.

I'm worried about master-standby-standby setup where the master
disappear, we promote a standby and the second standby now feeds from
the newly promoted standby.  Well we have to reconnect manually in this
case, but don't we need some similar stopgaps?

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support




Re: [HACKERS] 9.2rc1 produces incorrect results

2012-09-05 Thread Tom Lane
BTW, after considerable fooling around with Vik's example, I've been
able to produce a regression test case that fails in all PG versions
that have WITH:

with
A as ( select q2 as id, (select q1) as x from int8_tbl ),
B as ( select id, row_number() over (partition by id) as r from A ),
C as ( select A.id, array(select B.id from B where B.id = A.id) from A )
select * from C;

The correct answer to this is

        id         |                array
-------------------+-------------------------------------
               456 | {456}
  4567890123456789 | {4567890123456789,4567890123456789}
               123 | {123}
  4567890123456789 | {4567890123456789,4567890123456789}
 -4567890123456789 | {-4567890123456789}
(5 rows)

as you can soon convince yourself by inspecting the contents of
int8_tbl:

        q1        |         q2
------------------+--------------------
              123 |                456
              123 |   4567890123456789
 4567890123456789 |                123
 4567890123456789 |   4567890123456789
 4567890123456789 |  -4567890123456789
(5 rows)

I got that answer with patched HEAD, but all the back branches
give me

        id         |                array
-------------------+-------------------------------------
               456 | {4567890123456789,4567890123456789}
  4567890123456789 | {4567890123456789,4567890123456789}
               123 | {123}
  4567890123456789 | {4567890123456789,4567890123456789}
 -4567890123456789 | {-4567890123456789}
(5 rows)

So this does indeed need to be back-patched as far as 8.4.

regards, tom lane




Re: [HACKERS] Proof of concept: standalone backend with full FE/BE protocol

2012-09-05 Thread Andres Freund
On Tuesday, September 04, 2012 12:11:28 PM Amit Kapila wrote:
> On Tuesday, September 04, 2012 11:00 AM Andres Freund wrote:
> 
> On Tuesday, September 04, 2012 06:20:59 AM Tom Lane wrote:
> > Andres Freund  writes:
> >> > I can see why that would be nice, but is it really realistic? Don't we
> >> > expect some more diligence in applications using this against letting
> >> > such a child continue to run after ctrl-c/SIGTERMing e.g. pg_dump in
> >> > comparison to closing a normal database connection?
> >> 
> >> Er, what?  If you kill the client, the child postgres will see
> >> connection closure and will shut down.  I already tested that with the
> >> POC patch, it worked fine.
> > 
> > Well, but that will make scripting harder because you cannot start
> > another single-backend pg_dump before the old backend has noticed it,
> > checkpointed, and shut down.
> 
>   But isn't that behavior similar to when the server is shutting down
> due to CTRL-C, when new clients are not allowed to connect? This new
> interface is an approach similar to an embedded database, where the
> first API call (StartServer) or the connect itself starts the database,
> and other connections might not be allowed during the shutdown state.
I don't find that a convincing comparison. Normally you don't need to shut 
down the server between two pg_dump commands, which very well might be scripted.

Especially as, for now, without a background writer/checkpointer writing stuff 
beforehand, the shutdown checkpoint won't be fast. I/O isn't unlikely if you're 
doing a pg_dump, because of hint bits...

Greetings,

Andres
-- 
Andres Freund   http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Proof of concept: standalone backend with full FE/BE protocol

2012-09-05 Thread Tom Lane
Andres Freund  writes:
> I don't find that a convincing comparison. Normally don't need to shutdown 
> the 
> server between two pg_dump commands. Which very well might be scripted.

> Especially as for now, without a background writer/checkpointer writing stuff 
> beforehand, the shutdown checkpoint won't be fast. IO isn't unlikely if youre 
> doing a pg_dump because of hint bits...

I still think this is a straw-man argument.  There is no expectation
that a standalone PG implementation would provide performance for a
series of standalone sessions that is equivalent to what you'd get from
a persistent server.  If that scenario is what's important to you, you'd
use a persistent server.  The case where this sort of thing would be
interesting is where minimizing administration complexity (by not having
a server) is more important than performance.  People currently use, eg,
SQLite for that type of application, and it's not because of
performance.

regards, tom lane




Re: [HACKERS] Proof of concept: standalone backend with full FE/BE protocol

2012-09-05 Thread Tom Lane
"anara...@anarazel.de"  writes:
> I am not saying its bad that it is slower, that's absolutely OK. Just that it 
> will take a variable amount of time till you can run pgdump again and its not 
> easily detectable without looping and trying again.

Well, that's why the proposed libpq code is written to wait for the
child postgres to exit when closing the connection.

Admittedly, if you forcibly kill pg_dump (or some other client) and then
immediately try to start a new one, it's not clear how long you'll have
to wait.  But so what?  Anything we might do in this space is going to
have pluses and minuses.

regards, tom lane




Re: [HACKERS] Proof of concept: standalone backend with full FE/BE protocol

2012-09-05 Thread anara...@anarazel.de


Tom Lane  schrieb:

>Andres Freund  writes:
>> I don't find that a convincing comparison. Normally don't need to
>shutdown the 
>> server between two pg_dump commands. Which very well might be
>scripted.
>
>> Especially as for now, without a background writer/checkpointer
>writing stuff 
>> beforehand, the shutdown checkpoint won't be fast. IO isn't unlikely
>if youre 
>> doing a pg_dump because of hint bits...
>
>I still think this is a straw-man argument.  There is no expectation
>that a standalone PG implementation would provide performance for a
>series of standalone sessions that is equivalent to what you'd get from
>a persistent server.  If that scenario is what's important to you,
>you'd
>use a persistent server.  The case where this sort of thing would be
>interesting is where minimizing administration complexity (by not
>having
>a server) is more important than performance.  People currently use,
>eg,
>SQLite for that type of application, and it's not because of
>performance.
I am not saying it's bad that it is slower; that's absolutely OK. Just that it 
will take a variable amount of time until you can run pg_dump again, and it's 
not easily detectable without looping and trying again.

Andres


--- 
Please excuse the brevity and formatting - I am writing this on my mobile phone.




Re: [HACKERS] Cascading replication and recovery_target_timeline='latest'

2012-09-05 Thread Heikki Linnakangas

On 05.09.2012 07:55, Dimitri Fontaine wrote:

Heikki Linnakangas  writes:

I was worried about that too at first, but Fujii pointed out that's OK: see
last paragraph at
http://archives.postgresql.org/pgsql-hackers/2012-08/msg01203.php.


Mmm, ok.

I'm worried about master-standby-standby setup where the master
disappear, we promote a standby and the second standby now feeds from
the newly promoted standby.  Well we have to reconnect manually in this
case, but don't we need some similar stopgaps?


The second standby will have to reconnect, but it will happen automatically.

- Heikki




Re: [HACKERS] plperl sigfpe reset can crash the server

2012-09-05 Thread Tom Lane
Andres Freund  writes:
> On Sunday, August 26, 2012 06:10:02 PM Andres Freund wrote:
>> On Saturday, August 25, 2012 06:38:09 AM Tom Lane wrote:
>>> Surely that's breaking perl's expectations, to more or less the same
>>> degree they're breaking ours?

>> In the referenced bug they agree that this is the way forward.

> As nobody has any better ideas here is a patch doing that:

OK.  Do we want to commit this now, or wait till after 9.2.0?
My feeling is it's probably okay to include in 9.2.0, but I can see
that somebody might want to argue not to.  Any objections out there?

regards, tom lane




Re: [HACKERS] plperl sigfpe reset can crash the server

2012-09-05 Thread Andres Freund
On Wednesday, September 05, 2012 07:15:52 PM Tom Lane wrote:
> Andres Freund  writes:
> > On Sunday, August 26, 2012 06:10:02 PM Andres Freund wrote:
> >> On Saturday, August 25, 2012 06:38:09 AM Tom Lane wrote:
> >>> Surely that's breaking perl's expectations, to more or less the same
> >>> degree they're breaking ours?
> >> 
> >> In the referenced bug they agree that this is the way forward.
> > 
> > As nobody has any better ideas here is a patch doing that:
> OK.  Do we want to commit this now, or wait till after 9.2.0?
> My feeling is it's probably okay to include in 9.2.0, but I can see
> that somebody might want to argue not to.  Any objections out there?
Perhaps unsurprisingly, I would argue for including it. I am not saying it's a 
perfect solution, but not band-aiding seems to open a bigger hole/DOS. Given 
that any occurrence of SIGFPE inside perl on Linux in the last 10 years or so 
would have led to perl (including postgres with plperl[u]) getting killed with 
a somewhat distinctive message, and given the lack of reports I could find 
about it, the risk doesn't seem to be too big.

Greetings,

Andres
-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] [bugfix] sepgsql didn't follow the latest core API changes

2012-09-05 Thread Robert Haas
On Sun, Sep 2, 2012 at 2:53 PM, Kohei KaiGai  wrote:
> This patch fixes a few places where sepgsql didn't follow the latest
> core API changes.
>
> 1) Even though the prototype of ProcessUtility_hook was recently changed,
> the sepgsql side didn't follow this update, so the build failed.
>
> 2) sepgsql internally uses the GETSTRUCT() and HeapTupleGetOid() macros;
> these were moved to htup_details.h, so it needs an additional #include
> of "access/htup_details.h".
>
> 3) sepgsql internally used a bool variable named "abort".
> I noticed it conflicts with the ereport macro, because ereport internally
> expands to ereport_domain, which contains an invocation of "abort()".  So I
> renamed this variable to abort_on_violation.
>
> #define ereport_domain(elevel, domain, rest)\
> (errstart(elevel, __FILE__, __LINE__, PG_FUNCNAME_MACRO, domain) ? \
>  (errfinish rest) : (void) 0), \
> ((elevel) >= ERROR ? abort() : (void) 0)
>
> This does not affect v9.2, so please apply it on the master branch.

I have committed this untested.  It seems pretty mechanical and I
assume that you tested it.  Anyway, it's certainly broken without the
patch.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] pg_upgrade diffs on WIndows

2012-09-05 Thread Andrew Dunstan


On 09/05/2012 09:46 AM, Andrew Dunstan wrote:


On 09/05/2012 09:11 AM, Bruce Momjian wrote:


I reviewed this idea and support this patch's inclusion in 9.2.  I was
unclear why it was needed, but I see that pg_dumpall, which produces the
file pg_upgrade splits apart, also uses binary mode to write this file:

OPF = fopen(filename, PG_BINARY_W);

I agree with Tom that pg_upgrade needs some quiet time.  ;-) Andrew,
have a sufficient number of buildfarm members verified our recent
patches so that this can be added?  My patch from last night was mostly
C comments, so it isn't something that needs testing.



I am quite happy not committing anything for now.

There are two buildfarm members doing pg_upgrade tests: crake (Fedora 
16) and pitta (Windows/Mingw64). The buildfarm code is experimental 
and not in any release yet, and when it is the test will be optional.


The PG_BINARY_W change has only been verified on a non-buildfarm setup 
on my laptop (Mingw)


Note that while it does look like there's a bug either in pg_upgrade 
or pg_dumpall, it's probably mostly harmless (adding some spurious CRs 
to function code bodies on Windows). I'd feel happier if it didn't, 
and happier still if I knew for sure the ultimate origin. Your 
pg_dumpall discovery above is interesting. I might have time later on 
today to delve into all this. I'm out of contact for the next few hours.



OK, I now have a complete handle on what's going on here, and withdraw 
my earlier statement that I am confused on this issue :-)


First, one lot of CRs is produced because the pg_upgrade test script 
calls pg_dumpall without -f and redirects the output to a file, which 
Windows kindly opens in text mode. The solution to that is to change the 
test script to use pg_dumpall -f instead.


The second lot of CRs (seen in the second dump file in the diff I 
previously sent) is produced by pg_upgrade writing its output in text 
mode, which turns LF into CRLF. The solution to that is the patch to 
dump.c I posted, which, as Bruce observed, does the same thing that 
pg_dumpall does. Arguably, it should also open the input file in binary 
mode, so that if there really is a CRLF in the dump it won't be eaten.


Another question is whether or not pg_dumpall (and pg_dump in text mode 
too for that matter) should be trying to suppress newline translation on 
its output even to stdout. It already does that for non-text formats 
(see call to setmode()) but I don't see why we shouldn't for text as 
well. But those are obviously longstanding bugs that we can leave to 
another day.


cheers

andrew







Re: [HACKERS] pg_upgrade diffs on WIndows

2012-09-05 Thread Tom Lane
Andrew Dunstan  writes:
> OK, I now have a complete handle on what's going on here, and withdraw 
> my earlier statement that I am confused on this issue :-)

> First, one lot of CRs is produced because the pg_upgrade test script 
> calls pg_dumpall without -f and redirects that to a file, which Windows 
> kindly opens on text mode. The solution to that is to change the test 
> script to use pg_dumpall -f instead.

> The second lot of CRs (seen in the second dump file in the diff i 
> previously sent) is produced by pg_upgrade writing its output in text 
> mode, which turns LF into CRLF. The solution to that is the patch to 
> dump.c I posted, which, as Bruce observed, does the same thing that 
> pg_dumpall does. Arguably, it should also open the input file in binary, 
> so that if there really is a CRLF in the dump it won't be eaten.

+1 to all the above.  Do we want to risk squeezing this into 9.2.0,
or is it better to delay?

> Another question is whether or not pg_dumpall (and pg_dump in text mode 
> too for that matter) should be trying to suppress newline translation on 
> its output even to stdout.

I'm inclined to think not - we've not heard any complaints from Windows
users about its current behavior, and it's been like that forever.

regards, tom lane




Re: [HACKERS] pg_upgrade diffs on WIndows

2012-09-05 Thread Bruce Momjian
On Wed, Sep  5, 2012 at 03:17:40PM -0400, Andrew Dunstan wrote:
> >The PG_BINARY_W change has only been verified on a non-buildfarm
> >setup on my laptop (Mingw)
> >
> >Note that while it does look like there's a bug either in
> >pg_upgrade or pg_dumpall, it's probably mostly harmless (adding
> >some spurious CRs to function code bodies on Windows). I'd feel
> >happier if it didn't, and happier still if I knew for sure the
> >ultimate origin. Your pg_dumpall discovery above is interesting. I
> >might have time later on today to delve into all this. I'm out of
> >contact for the next few hours.
> 
> 
> OK, I now have a complete handle on what's going on here, and
> withdraw my earlier statement that I am confused on this issue :-)
> 
> First, one lot of CRs is produced because the pg_upgrade test script
> calls pg_dumpall without -f and redirects that to a file, which
> Windows kindly opens on text mode. The solution to that is to change
> the test script to use pg_dumpall -f instead.
> 
> The second lot of CRs (seen in the second dump file in the diff i
> previously sent) is produced by pg_upgrade writing its output in
> text mode, which turns LF into CRLF. The solution to that is the
> patch to dump.c I posted, which, as Bruce observed, does the same
> thing that pg_dumpall does. Arguably, it should also open the input
> file in binary, so that if there really is a CRLF in the dump it
> won't be eaten.

So, right now we are only adding \r to function bodies, which is mostly
harmless, but what if a function body has strings with embedded
newlines?  What about creating a table with newlines in its identifiers:

CREATE TABLE "a
b" ("c
d" int);

If \r is added in there, it would be a data corruption problem.  Can you
test that?

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +




Re: [HACKERS] pg_upgrade diffs on WIndows

2012-09-05 Thread Andrew Dunstan


On 09/05/2012 03:36 PM, Tom Lane wrote:

Andrew Dunstan  writes:

OK, I now have a complete handle on what's going on here, and withdraw
my earlier statement that I am confused on this issue :-)
First, one lot of CRs is produced because the pg_upgrade test script
calls pg_dumpall without -f and redirects that to a file, which Windows
kindly opens on text mode. The solution to that is to change the test
script to use pg_dumpall -f instead.
The second lot of CRs (seen in the second dump file in the diff i
previously sent) is produced by pg_upgrade writing its output in text
mode, which turns LF into CRLF. The solution to that is the patch to
dump.c I posted, which, as Bruce observed, does the same thing that
pg_dumpall does. Arguably, it should also open the input file in binary,
so that if there really is a CRLF in the dump it won't be eaten.

+1 to all the above.  Do we want to risk squeezing this into 9.2.0,
or is it better to delay?



When we (particularly Bruce and I) didn't fully understand what was 
happening, there was a good argument for delay, but now I'd rather put it 
in so we can remove the error-hiding hack in the test script. I think 
the risk is minimal.


cheers

andrew





Re: [HACKERS] pg_upgrade diffs on WIndows

2012-09-05 Thread Andrew Dunstan


On 09/05/2012 03:40 PM, Bruce Momjian wrote:

On Wed, Sep  5, 2012 at 03:17:40PM -0400, Andrew Dunstan wrote:

The PG_BINARY_W change has only been verified on a non-buildfarm
setup on my laptop (Mingw)

Note that while it does look like there's a bug either in
pg_upgrade or pg_dumpall, it's probably mostly harmless (adding
some spurious CRs to function code bodies on Windows). I'd feel
happier if it didn't, and happier still if I knew for sure the
ultimate origin. Your pg_dumpall discovery above is interesting. I
might have time later on today to delve into all this. I'm out of
contact for the next few hours.


OK, I now have a complete handle on what's going on here, and
withdraw my earlier statement that I am confused on this issue :-)

First, one lot of CRs is produced because the pg_upgrade test script
calls pg_dumpall without -f and redirects that to a file, which
Windows kindly opens on text mode. The solution to that is to change
the test script to use pg_dumpall -f instead.

The second lot of CRs (seen in the second dump file in the diff i
previously sent) is produced by pg_upgrade writing its output in
text mode, which turns LF into CRLF. The solution to that is the
patch to dump.c I posted, which, as Bruce observed, does the same
thing that pg_dumpall does. Arguably, it should also open the input
file in binary, so that if there really is a CRLF in the dump it
won't be eaten.

So, right now we are only add \r for function bodies, which is mostly
harmless, but what if a function body has strings with an embedded
newlines?  What about creating a table with newlines in its identifiers:

CREATE TABLE "a
b" ("c
d" int);

If \r is added in there, it would be a data corruption problem.  Can you
test that?


These are among the reasons why I am suggesting opening the file in 
binary mode. You're right, that would be data corruption.


I can set up a check, but it will take a bit of time.


cheers

andrew






Re: [HACKERS] pg_upgrade diffs on WIndows

2012-09-05 Thread Bruce Momjian
On Wed, Sep  5, 2012 at 03:50:13PM -0400, Andrew Dunstan wrote:
> >>The second lot of CRs (seen in the second dump file in the diff i
> >>previously sent) is produced by pg_upgrade writing its output in
> >>text mode, which turns LF into CRLF. The solution to that is the
> >>patch to dump.c I posted, which, as Bruce observed, does the same
> >>thing that pg_dumpall does. Arguably, it should also open the input
> >>file in binary, so that if there really is a CRLF in the dump it
> >>won't be eaten.
> >So, right now we are only add \r for function bodies, which is mostly
> >harmless, but what if a function body has strings with an embedded
> >newlines?  What about creating a table with newlines in its identifiers:
> >
> >CREATE TABLE "a
> >b" ("c
> >d" int);
> >
> >If \r is added in there, it would be a data corruption problem.  Can you
> >test that?
> 
> These are among the reasons why I am suggesting opening the file in
> binary mode. You're right, that would be data corruption.
> 
> I can set up a check, but it will take a bit of time.

My only point is that this is no longer a buildfarm failure issue, it is
a potential data corruption issue.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +




Re: [HACKERS] pg_upgrade diffs on WIndows

2012-09-05 Thread Andrew Dunstan


On 09/05/2012 03:50 PM, Andrew Dunstan wrote:


On 09/05/2012 03:40 PM, Bruce Momjian wrote:

On Wed, Sep  5, 2012 at 03:17:40PM -0400, Andrew Dunstan wrote:

The PG_BINARY_W change has only been verified on a non-buildfarm
setup on my laptop (Mingw)

Note that while it does look like there's a bug either in
pg_upgrade or pg_dumpall, it's probably mostly harmless (adding
some spurious CRs to function code bodies on Windows). I'd feel
happier if it didn't, and happier still if I knew for sure the
ultimate origin. Your pg_dumpall discovery above is interesting. I
might have time later on today to delve into all this. I'm out of
contact for the next few hours.


OK, I now have a complete handle on what's going on here, and
withdraw my earlier statement that I am confused on this issue :-)

First, one lot of CRs is produced because the pg_upgrade test script
calls pg_dumpall without -f and redirects that to a file, which
Windows kindly opens in text mode. The solution to that is to change
the test script to use pg_dumpall -f instead.

The second lot of CRs (seen in the second dump file in the diff I
previously sent) is produced by pg_upgrade writing its output in
text mode, which turns LF into CRLF. The solution to that is the
patch to dump.c I posted, which, as Bruce observed, does the same
thing that pg_dumpall does. Arguably, it should also open the input
file in binary, so that if there really is a CRLF in the dump it
won't be eaten.

So, right now we only add \r for function bodies, which is mostly
harmless, but what if a function body has strings with embedded
newlines?  What about creating a table with newlines in its identifiers:

CREATE TABLE "a
b" ("c
d" int);

If \r is added in there, it would be a data corruption problem. Can you
test that?


These are among the reasons why I am suggesting opening the file in 
binary mode. You're right, that would be data corruption.


I can set up a check, but it will take a bit of time.



As expected, we get a difference in field names. Here's the extract from 
the dumps diff (* again represents CR):



 ***
   *** 5220,5228 
  --

  CREATE TABLE hasnewline (
   ! "x
  y" integer,
   ! "a
  b" text
  );

   --- 5220,5228 
  --

  CREATE TABLE hasnewline (
   ! "x*
  y" integer,
   ! "a*
  b" text
  );

If we open the input and output files in binary mode in pg_upgrade's 
dump.c this disappears.
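The binary-mode fix under discussion can be sketched roughly like this (a hypothetical simplification, not the actual dump.c patch; PG_BINARY_R/PG_BINARY_W stand in for PostgreSQL's mode macros):

```c
#include <stdio.h>

/* On Windows, text-mode streams translate LF to CRLF on write and
 * CRLF to LF on read, which corrupts dump contents that legitimately
 * contain bare newlines (e.g. inside quoted identifiers or function
 * bodies).  Opening both files in binary mode keeps the bytes
 * untouched. */
#ifndef WIN32
#define PG_BINARY_R "r"
#define PG_BINARY_W "w"
#else
#define PG_BINARY_R "rb"
#define PG_BINARY_W "wb"
#endif

int copy_dump(const char *src, const char *dst)
{
    FILE *in = fopen(src, PG_BINARY_R);
    FILE *out;
    int c;

    if (in == NULL)
        return -1;
    if ((out = fopen(dst, PG_BINARY_W)) == NULL)
    {
        fclose(in);
        return -1;
    }
    while ((c = getc(in)) != EOF)
        putc(c, out);
    fclose(in);
    fclose(out);
    return 0;
}
```

On Unix the binary flag is a no-op, so the same code behaves identically on both platforms.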


Given this, I think we have no choice but to apply the patch, all the 
way back to 9.0 in fact.


cheers

andrew






Re: [HACKERS] [bugfix] sepgsql didn't follow the latest core API changes

2012-09-05 Thread Kohei KaiGai
2012/9/5 Andrew Dunstan :
>
> On 09/05/2012 09:11 AM, Alvaro Herrera wrote:
>>
>> Excerpts from Kohei KaiGai's message of mié sep 05 08:30:37 -0300 2012:
>>>
>>> 2012/9/3 Alvaro Herrera :

 Excerpts from Kohei KaiGai's message of dom sep 02 15:53:22 -0300 2012:
>
> This patch fixes a few portions on which sepgsql didn't follow the
> latest
> core API changes.

 I think you should get a buildfarm animal installed that builds and
 tests sepgsql, to avoid this kind of problem in the future.

>>> Thanks for your suggestion. I'm interested in it.
>>>
>>> http://wiki.postgresql.org/wiki/PostgreSQL_Buildfarm_Howto
>>>
>>> Does it test only build correctness? Or is it possible to include the
>>> regression test results as well, so that failures raise an alarm?
>>
>> Yes, regression test diffs are also reported and can cause failures.
>> As far as I know, you can construct your own test steps, if you want to
>> do something customized that's not present in regular BF animals.
>
> Looking at SEPgsql testing is on my long TODO list. I'll have to set up a
> separate VM for it, as I don't habitually run SELinux.
>
If you are able to provide a VM environment for sepgsql, let me help
set up its build and regression test environment.

As you may know, the sepgsql regression tests require some additional
configuration at the operating system level, such as loading the security
policy. I expect we will have to add some special handling to the buildfarm
system.

Thanks,
-- 
KaiGai Kohei 




Re: [HACKERS] pg_upgrade diffs on Windows

2012-09-05 Thread Bruce Momjian
On Wed, Sep  5, 2012 at 04:22:18PM -0400, Andrew Dunstan wrote:
> >>So, right now we only add \r for function bodies, which is mostly
> >>harmless, but what if a function body has strings with embedded
> >>newlines?  What about creating a table with newlines in its identifiers:
> >>
> >>CREATE TABLE "a
> >>b" ("c
> >>d" int);
> >>
> >>If \r is added in there, it would be a data corruption problem. Can you
> >>test that?
> >
> >These are among the reasons why I am suggesting opening the file
> >in binary mode. You're right, that would be data corruption.
> >
> >I can set up a check, but it will take a bit of time.
> 
> 
> As expected, we get a difference in field names. Here's the extract
> from the dumps diff (* again represents CR):
> 
> 
>  ***
>*** 5220,5228 
>   --
> 
>   CREATE TABLE hasnewline (
>! "x
>   y" integer,
>! "a
>   b" text
>   );
> 
>--- 5220,5228 
>   --
> 
>   CREATE TABLE hasnewline (
>! "x*
>   y" integer,
>! "a*
>   b" text
>   );
> 
> If we open the input and output files in binary mode in pg_upgrade's
> dump.c this disappears.
> 
> Given this, I think we have no choice but to apply the patch, all
> the way back to 9.0 in fact.

I think you are right.  

I think I could use some "quiet time" right now, as Tom suggested.  ;-)

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +




Re: [HACKERS] Proof of concept: standalone backend with full FE/BE protocol

2012-09-05 Thread Josh Berkus
Tom,

> However, there are some additional things
> we'd need to think about before advertising it as a fit solution for that.
> Notably, while the lack of any background processes is just what you want
> for pg_upgrade and disaster recovery, an ordinary application is probably
> going to want to rely on autovacuum; and we need bgwriter and other
> background processes for best performance.  So I'm speculating about
> having a postmaster process that isn't listening on any ports, but is
> managing background processes in addition to a single child backend.
> That's for another day though.

Well, if you think about standalone mode as "developer" mode, it's not
quite so clear that we'd need those things.  Generally when people are
testing code in development they don't care about vacuum or bgwriter
because the database is small and ephemeral.  So even without background
processes, standalone mode would be useful for many users for
development and automated testing.

For that matter, applications which embed postgresql and have very small
databases could also live without autovacuum and bgwriter.  Heck,
Postgres existed without them for many years.

You just doc that, if you're running postgres standalone, you need to
run a full VACUUM ANALYZE on the database cluster once per day.  And you
live with the herky-jerky write performance.  If the database is 5GB,
who's going to notice anyway?

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com




Re: [HACKERS] Proof of concept: standalone backend with full FE/BE protocol

2012-09-05 Thread Tom Lane
Josh Berkus  writes:
>> However, there are some additional things
>> we'd need to think about before advertising it as a fit solution for that.
>> Notably, while the lack of any background processes is just what you want
>> for pg_upgrade and disaster recovery, an ordinary application is probably
>> going to want to rely on autovacuum; and we need bgwriter and other
>> background processes for best performance.  So I'm speculating about
>> having a postmaster process that isn't listening on any ports, but is
>> managing background processes in addition to a single child backend.
>> That's for another day though.

> Well, if you think about standalone mode as "developer" mode, it's not
> quite so clear that we'd need those things.  Generally when people are
> testing code in development they don't care about vacuum or bgwriter
> because the database is small and ephemeral.  So even without background
> processes, standalone mode would be useful for many users for
> development and automated testing.

Only if startup and shutdown were near instantaneous, which as Andres
was pointing out would be far from the truth.  I am envisioning the
use-case for this thing as stuff like desktop managers and mail
programs, which tend to be rather lumbering on startup anyway.  (And
yes, a lot of those have got embedded databases in them these days.
More often than not it's mysql.)  I don't see people wanting to use this
feature for unit tests.

> For that matter, applications which embed postgresql and have very small
> databases could also live without autovacuum and bgwriter.  Heck,
> Postgres existed without them for many years.

Um ... true with respect to autovacuum, perhaps, but what about
checkpoints?  A standalone backend will never perform a checkpoint
unless explicitly told to.  (Before we invented the bgwriter, the
postmaster was in charge of launching checkpoints every so often.)
Again, this is probably just what you want for disaster recovery, but
it wouldn't be terribly friendly for an embedded-database application.

In general I think the selling point for such a feature would be "no
administrative hassles", and I believe that has to go not only for the
end-user experience but also for the application-developer experience.
If you have to manage checkpointing and vacuuming in the application,
you're probably soon going to look for another database.

regards, tom lane




Re: [HACKERS] Proof of concept: standalone backend with full FE/BE protocol

2012-09-05 Thread Bruce Momjian
On Wed, Sep  5, 2012 at 01:50:06PM -0700, Josh Berkus wrote:
> Tom,
> 
> > However, there are some additional things
> > we'd need to think about before advertising it as a fit solution for that.
> > Notably, while the lack of any background processes is just what you want
> > for pg_upgrade and disaster recovery, an ordinary application is probably
> > going to want to rely on autovacuum; and we need bgwriter and other
> > background processes for best performance.  So I'm speculating about
> > having a postmaster process that isn't listening on any ports, but is
> > managing background processes in addition to a single child backend.
> > That's for another day though.
> 
> Well, if you think about standalone mode as "developer" mode, it's not
> quite so clear that we'd need those things.  Generally when people are
> testing code in development they don't care about vacuum or bgwriter
> because the database is small and ephemeral.  So even without background
> processes, standalone mode would be useful for many users for
> development and automated testing.
> 
> For that matter, applications which embed postgresql and have very small
> databases could also live without autovacuum and bgwriter.  Heck,
> Postgres existed without them for many years.
> 
> You just doc that, if you're running postgres standalone, you need to
> run a full VACUUM ANALYZE on the database cluster once per day.  And you
> live with the herky-jerky write performance.  If the database is 5GB,
> who's going to notice anyway?

If this mode slows down pg_upgrade, that is going to be a problem.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +




Re: [HACKERS] Cascading replication on Windows bug

2012-09-05 Thread Tom Lane
Heikki Linnakangas  writes:
> That doesn't work on Windows. As long as a walsender is keeping the old 
> file open, the unlink() on it fails. You get an error like this in the 
> startup process:
> FATAL:  could not rename file "pg_xlog/RECOVERYXLOG" to 
> "pg_xlog/0001000D": Permission denied

I thought we had some workaround for that problem.  Otherwise, you'd be
seeing this type of failure every time a checkpoint tries to drop or
rename files.

regards, tom lane




Re: [HACKERS] State of the on-disk bitmap index

2012-09-05 Thread Gianni Ciolli
Hi Daniel,

On Wed, Sep 05, 2012 at 01:37:59PM +0200, Daniel Bausch wrote:
> Is that what your bmi-perf-test.tar.gz from 2008 does?  I did not
> look into that.

IIRC yes (but it's been a long time and I don't have a copy at hand
now).

Best regards,
Dr. Gianni Ciolli - 2ndQuadrant Italia
PostgreSQL Training, Services and Support
gianni.cio...@2ndquadrant.it | www.2ndquadrant.it




Re: [HACKERS] Proof of concept: standalone backend with full FE/BE protocol

2012-09-05 Thread Peter Eisentraut
On 9/5/12 5:03 PM, Tom Lane wrote:
> I don't see people wanting to use this feature for unit tests.

If this is going to become an official feature (as opposed to an
internal interface only for use by pg_upgrade), then I think that's
exactly what people will want to use it for.  In fact, it might even
make it more likely that people will write unit test suites to begin with.




Re: [HACKERS] Proof of concept: standalone backend with full FE/BE protocol

2012-09-05 Thread Daniel Farina
On Wed, Sep 5, 2012 at 2:50 PM, Peter Eisentraut  wrote:
> On 9/5/12 5:03 PM, Tom Lane wrote:
>> I don't see people wanting to use this feature for unit tests.
>
> If this is going to become an official feature (as opposed to an
> internal interface only for use by pg_upgrade), then I think that's
> exactly what people will want to use it for.  In fact, it might even
> make it more likely that people will write unit test suites to begin with.

I agree with this, even though in theory (but not in practice)
creative use of unix sockets (sorry windows, perhaps some
port-allocating and URL mangling can be done instead) and conventions
for those would allow even better almost-like-embedded results,
methinks.  That may still be able to happen.

The biggest improvement to that situation is the recent drastic
reduction in use of shared memory, and that only became a thing recently.

-- 
fdr




Re: [HACKERS] Draft release notes complete

2012-09-05 Thread Peter Eisentraut
On 8/29/12 11:52 PM, Andrew Dunstan wrote:
>> Why does this need to be tied into the build farm?  Someone can surely
>> set up a script that just runs the docs build at every check-in, like it
>> used to work.  What's being proposed now just sounds like a lot of
>> complication for little or no actual gain -- net loss in fact.
> 
> It doesn't just build the docs. It makes the dist snapshots too.

Thus making the turnaround time on a docs build even slower ... ?

> And the old script often broke badly, IIRC.

The script broke on occasion, but the main problem was that it wasn't
monitored.  Which is something that could have been fixed.

> The current setup doesn't install
> anything if the build fails, which is a distinct improvement.

You mean it doesn't build the docs if the code build fails?  Would that
really be an improvement?





[HACKERS] Cascading replication on Windows bug

2012-09-05 Thread Heikki Linnakangas
Starting with 9.2, when a WAL segment is restored from the archive, it 
is copied over any existing file in pg_xlog with the same name. This is 
done in two steps: first the file is restored from archive to a 
temporary file called RECOVERYXLOG, then the old file is deleted and the 
temporary file is renamed in place. After that, a flag is set in shared 
memory for each WAL sender, to tell them to close the old file if they 
still have it open.


That doesn't work on Windows. As long as a walsender is keeping the old 
file open, the unlink() on it fails. You get an error like this in the 
startup process:


FATAL:  could not rename file "pg_xlog/RECOVERYXLOG" to 
"pg_xlog/0001000D": Permission denied


Not sure how to fix that. Perhaps we could copy the data over the old 
file, rather than unlink and rename it. Or signal the walsenders and 
retry if the unlink() fails with EACCES.


Now, another question is, do we need to delay the release because of 
this? The impact of this is basically that cascading replication 
sometimes causes the standby to die, if a WAL archive is used together 
with streaming replication.


- Heikki




Re: [HACKERS] Proof of concept: standalone backend with full FE/BE protocol

2012-09-05 Thread Peter Eisentraut
On 9/5/12 5:59 PM, Daniel Farina wrote:
> I agree with this, even though in theory (but not in practice)
> creative use of unix sockets (sorry windows, perhaps some
> port-allocating and URL mangling can be done instead) and conventions
> for those would allow even better almost-like-embedded results,
> methinks.  That may still be able to happen.

Sure, everyone who cares can already do this, but some people probably
don't care enough.  Also, making this portable and robust for everyone
to use, not just your local environment, is pretty tricky.  See the
pg_upgrade test script for a prominent example.




Re: [HACKERS] Proof of concept: standalone backend with full FE/BE protocol

2012-09-05 Thread Josh Berkus
On 9/5/12 2:50 PM, Peter Eisentraut wrote:
> On 9/5/12 5:03 PM, Tom Lane wrote:
>> I don't see people wanting to use this feature for unit tests.
> 
> If this is going to become an official feature (as opposed to an
> internal interface only for use by pg_upgrade), then I think that's
> exactly what people will want to use it for.  In fact, it might even
> make it more likely that people will write unit test suites to begin with.

Heck, *I'll* use it for unit tests.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com




Re: [HACKERS] Proof of concept: standalone backend with full FE/BE protocol

2012-09-05 Thread Josh Berkus

> Um ... true with respect to autovacuum, perhaps, but what about
> checkpoints?  A standalone backend will never perform a checkpoint
> unless explicitly told to. 

Hmmm, that's definitely an issue.

> (Before we invented the bgwriter, the
> postmaster was in charge of launching checkpoints every so often.)
> Again, this is probably just what you want for disaster recovery, but
> it wouldn't be terribly friendly for an embedded-database application.

Yeah, we'd have to put in a clock-based thing which did checkpoints
every 5 minutes and VACUUM ANALYZE every hour or something.  That seems
like a chunk of extra code.
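Such a clock-based scheduler could be sketched as follows (purely hypothetical; the callback names and the way they would hook into a standalone backend's main loop are illustrative assumptions, not an actual proposal):

```c
#include <time.h>

/* Fire a checkpoint every 5 minutes and a VACUUM ANALYZE every hour,
 * assuming the single backend's event loop calls this periodically.
 * run_checkpoint/run_vacuum_analyze stand in for whatever hooks a
 * real standalone mode would expose. */
typedef void (*maint_cb) (void);

void maintenance_tick(time_t now, time_t *last_ckpt, time_t *last_vac,
                      maint_cb run_checkpoint, maint_cb run_vacuum_analyze)
{
    if (now - *last_ckpt >= 5 * 60)     /* checkpoint every 5 minutes */
    {
        run_checkpoint();
        *last_ckpt = now;
    }
    if (now - *last_vac >= 60 * 60)     /* VACUUM ANALYZE every hour */
    {
        run_vacuum_analyze();
        *last_vac = now;
    }
}
```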

> In general I think the selling point for such a feature would be "no
> administrative hassles", and I believe that has to go not only for the
> end-user experience but also for the application-developer experience.
> If you have to manage checkpointing and vacuuming in the application,
> you're probably soon going to look for another database.

Well, don't discount the development/testing case.  If you do agile or
TDD (a lot of people do), you often have a workload which looks like:

1) Start framework
2) Start database
3) Load database with test data
4) Run tests
5) Print results
6) Shut down database

In a case like that, you can live without checkpointing, even; the
database is ephemeral.

In other words, let's make this a feature and document it for use in
testing, and that it's not really usable for production embedded apps yet.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com




Re: [HACKERS] plperl sigfpe reset can crash the server

2012-09-05 Thread Tom Lane
Andres Freund  writes:
> On Wednesday, September 05, 2012 07:15:52 PM Tom Lane wrote:
>> OK.  Do we want to commit this now, or wait till after 9.2.0?
>> My feeling is it's probably okay to include in 9.2.0, but I can see
>> that somebody might want to argue not to.  Any objections out there?

> Perhaps unsurprisingly I would argue for including it. I am not saying it's a 
> perfect solution, but not band-aiding seems to open a bigger hole/DOS. Given 
> that any occurrence of SIGFPE inside perl on linux in the last 10 years or so 
> would have led to perl (including postgres w. plperl[u]) getting killed with
> a somewhat distinctive message, and the lack of reports I could find about it, 
> the risk doesn't seem to be too big.

Hearing no objections, committed and back-patched.

regards, tom lane




Re: [HACKERS] [bugfix] sepgsql didn't follow the latest core API changes

2012-09-05 Thread Kohei KaiGai
2012/9/5 Robert Haas :
> On Sun, Sep 2, 2012 at 2:53 PM, Kohei KaiGai  wrote:
>> This patch fixes a few portions on which sepgsql didn't follow the latest
>> core API changes.
>>
>> 1) Even though the prototype of ProcessUtility_hook was recently changed,
>> the sepgsql side didn't follow this update, so the build failed.
>>
>> 2) sepgsql internally uses the GETSTRUCT() and HeapTupleGetOid() macros;
>> these were moved to htup_details.h, so it needs an additional #include
>> of "access/htup_details.h".
>>
>> 3) sepgsql internally used a bool-typed variable named "abort".
>> I noticed it conflicts with the ereport macro, because that internally expands to
>> ereport_domain, which contains an invocation of "abort()". So I renamed this
>> variable to abort_on_violation.
>>
>> #define ereport_domain(elevel, domain, rest)\
>> (errstart(elevel, __FILE__, __LINE__, PG_FUNCNAME_MACRO, domain) ? \
>>  (errfinish rest) : (void) 0), \
>> ((elevel) >= ERROR ? abort() : (void) 0)
>>
>> This does not affect v9.2, so please apply it to the master branch.
>
> I have committed this untested.  It seems pretty mechanical and I
> assume that you tested it.  Anyway, it's certainly broken without the
> patch.
>
Thanks, I'd like to pay more attention to core API changes.

I still have one other bug fix for v9.2 and the master branch.
Isn't it obvious enough to apply?

http://archives.postgresql.org/message-id/cadyhksvwkjcky3cdeqg6qp7oczqsbjtt9cihk3hb7tkvced...@mail.gmail.com
-- 
KaiGai Kohei 
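For readers following item 3 of the patch description above, the abort/ereport name clash can be illustrated with a cut-down macro (the macro and function here are simplified stand-ins, not the real elog.h definitions):

```c
#include <stdlib.h>
#include <stdbool.h>

/* ereport-style macros expand to a bare call to abort().  A local
 * variable named "abort" in the same scope would capture that call
 * and break compilation; renaming the flag to abort_on_violation
 * avoids the clash. */
#define EREPORT_DEMO_ERROR 20

#define ereport_demo(elevel) \
    ((elevel) >= EREPORT_DEMO_ERROR ? abort() : (void) 0)

int check_permission(bool allowed, bool abort_on_violation)
{
    if (!allowed && abort_on_violation)
        ereport_demo(EREPORT_DEMO_ERROR);   /* terminates the process */
    return allowed ? 0 : -1;
}
```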




[HACKERS] Report proper GUC parameter names in error messages

2012-09-05 Thread Gurjeet Singh
Error messages emitted when terminating xlog redo lead the user to believe that
there are parameters named max_prepared_xacts and max_locks_per_xact, which
is not true. This patch corrects the parameter names emitted in the logs.

Best regards,
-- 
Gurjeet Singh


proper_GUC_names.patch
Description: Binary data



Re: [HACKERS] Draft release notes complete

2012-09-05 Thread Andrew Dunstan


On 09/05/2012 06:13 PM, Peter Eisentraut wrote:

On 8/29/12 11:52 PM, Andrew Dunstan wrote:

Why does this need to be tied into the build farm?  Someone can surely

set up a script that just runs the docs build at every check-in, like it
used to work.  What's being proposed now just sounds like a lot of
complication for little or no actual gain -- net loss in fact.

It doesn't just build the docs. It makes the dist snapshots too.

Thus making the turnaround time on a docs build even slower ... ?



A complete run of this process takes less than 15 minutes. And as I have 
pointed out elsewhere, that could be reduced substantially by skipping 
certain steps. It's as simple as changing the command line in the 
crontab entry.


The only reason there is a significant delay is that the administrators 
have chosen not to run the process more than once every 4 hours. That's 
a choice not dictated by the process they are using, but by other 
considerations concerning the machine it's being run on. Since I am not 
one of the admins and don't really want to take responsibility for it I 
am not going to second guess them. On the very rare occasions when I 
absolutely have to have the totally up to date docs I build them myself 
- it takes about 60 seconds on my modest hardware.



cheers

andrew







Re: [HACKERS] Draft release notes complete

2012-09-05 Thread Tom Lane
Andrew Dunstan  writes:
> The only reason there is a significant delay is that the administrators 
> have chosen not to run the process more than once every 4 hours. That's 
> a choice not dictated by the process they are using, but by other 
> considerations concerning the machine it's being run on. Since I am not 
> one of the admins and don't really want to take responsibility for it I 
> am not going to second guess them. On the very rare occasions when I 
> absolutely have to have the totally up to date docs I build them myself 
> - it takes about 60 seconds on my modest hardware.

I think the argument for having a quick docs build service is not about
the time needed, but the need to have all the appropriate tools
installed.  While I can understand that argument for J Random Hacker,
I'm mystified why Bruce doesn't seem to have bothered to get a working
SGML toolset installed.  It's not like editing the docs is a one-shot
task for him.

regards, tom lane




Re: [HACKERS] Cascading replication on Windows bug

2012-09-05 Thread Heikki Linnakangas

On 05.09.2012 14:28, Tom Lane wrote:

Heikki Linnakangas  writes:

That doesn't work on Windows. As long as a walsender is keeping the old
file open, the unlink() on it fails. You get an error like this in the
startup process:
FATAL:  could not rename file "pg_xlog/RECOVERYXLOG" to
"pg_xlog/0001000D": Permission denied


I thought we had some workaround for that problem.  Otherwise, you'd be
seeing this type of failure every time a checkpoint tries to drop or
rename files.


Hmm, now that I look at the error message more carefully, what happens 
is that the unlink() succeeds, but when the startup process tries to 
rename the new file in place, the rename() fails. The comments in 
RemoveOldXLogFiles() explain that, and also show how to work around it:



/*
 * On Windows, if another process (e.g another backend)
 * holds the file open in FILE_SHARE_DELETE mode, unlink
 * will succeed, but the file will still show up in
 * directory listing until the last handle is closed. To
 * avoid confusing the lingering deleted file for a live
 * WAL file that needs to be archived, rename it before
 * deleting it.
 *
 * If another process holds the file open without
 * FILE_SHARE_DELETE flag, rename will fail. We'll try
 * again at the next checkpoint.
 */


I think we need the same trick here, and rename the old file first, then 
unlink() it, and then rename the new file in place. I'll try that out.
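That rename-first dance might look roughly like this (an illustrative sketch, not the actual recovery code; the function name and error handling are assumptions):

```c
#include <stdio.h>

/* Rename the doomed file out of the way first, so that on Windows a
 * lingering FILE_SHARE_DELETE handle cannot make the old name linger
 * in directory listings; then remove it, and finally rename the
 * restored temporary file into place. */
int install_restored_segment(const char *tmppath, const char *path)
{
    char deadpath[1024];

    snprintf(deadpath, sizeof(deadpath), "%s.deleted", path);
    if (rename(path, deadpath) == 0)
        remove(deadpath);   /* best effort; a held-open file is retried later */
    return rename(tmppath, path);       /* 0 on success */
}
```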


- Heikki




Re: [HACKERS] Proof of concept: standalone backend with full FE/BE protocol

2012-09-05 Thread Daniel Farina
On Wed, Sep 5, 2012 at 3:17 PM, Peter Eisentraut  wrote:
> On 9/5/12 5:59 PM, Daniel Farina wrote:
>> I agree with this, even though in theory (but not in practice)
>> creative use of unix sockets (sorry windows, perhaps some
>> port-allocating and URL mangling can be done instead) and conventions
>> for those would allow even better almost-like-embedded results,
>> methinks.  That may still be able to happen.
>
> Sure, everyone who cares can already do this, but some people probably
> don't care enough.  Also, making this portable and robust for everyone
> to use, not just your local environment, is pretty tricky.  See the
> pg_upgrade test script for a prominent example.

To my knowledge, no one has even really seriously tried to package it
yet and then told the tale of woe, and it was an especially
un-gratifying exercise for quite a while on account of multiple
postgreses not getting along on the same machine because of SysV
shmem.

The bar for testing is a lot different than pg_upgrade (where a
negative consequence is confusing and stressful downtime), and many
programs use fork/threads and multiple connections even in testing,
making its requirements different.

So consider me still skeptical given the current reasoning that unix
sockets can't be a good-or-better substitute, and especially
accounting for programs that need multiple backends.

-- 
fdr




Re: [HACKERS] Draft release notes complete

2012-09-05 Thread Alvaro Herrera
Excerpts from Tom Lane's message of mié sep 05 20:24:08 -0300 2012:
> Andrew Dunstan  writes:
> > The only reason there is a significant delay is that the administrators 
> > have chosen not to run the process more than once every 4 hours. That's 
> > a choice not dictated by the process they are using, but by other 
> > considerations concerning the machine it's being run on. Since I am not 
> > one of the admins and don't really want to take responsibility for it I 
> > am not going to second guess them. On the very rare occasions when I 
> > absolutely have to have the totally up to date docs I build them myself 
> > - it takes about 60 seconds on my modest hardware.
> 
> I think the argument for having a quick docs build service is not about
> the time needed, but the need to have all the appropriate tools
> installed.  While I can understand that argument for J Random Hacker,
> I'm mystified why Bruce doesn't seem to have bothered to get a working
> SGML toolset installed.  It's not like editing the docs is a one-shot
> task for him.

As far as I understand, Bruce's concern is not about seeing the docs
built himself, but having an HTML copy published somewhere that he can
point people to, after applying some patch.  To me, that's a perfectly
legitimate reason to want to have them quickly.

-- 
Álvaro Herrerahttp://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] 9.2 pg_upgrade regression tests on WIndows

2012-09-05 Thread Andrew Dunstan


On 09/05/2012 12:02 AM, Bruce Momjian wrote:

On Mon, Sep  3, 2012 at 12:44:09PM -0400, Andrew Dunstan wrote:

The attached very small patch allows pg_upgrade's "make check" to
succeed on REL9_2_STABLE on my Mingw system.

However, I consider the issue I mentioned earlier regarding use of
forward slashes in the argument to rmdir to be a significant
blocker, so I'm going to go and fix that and then pull this all
together.

cheers

andrew
diff --git a/contrib/pg_upgrade/exec.c b/contrib/pg_upgrade/exec.c
index 6f993df..57ca1df 100644
--- a/contrib/pg_upgrade/exec.c
+++ b/contrib/pg_upgrade/exec.c
@@ -91,10 +91,12 @@ exec_prog(bool throw_error, bool is_priv, const char 
*log_file,
else
retval = 0;
  
+#ifndef WIN32

if ((log = fopen_priv(log_file, "a+")) == NULL)
pg_log(PG_FATAL, "cannot write to log file %s\n", log_file);
fprintf(log, "\n\n");
fclose(log);
+#endif
  
  	return retval;

  }

OK, I worked with Andrew on this issue, and have applied the attached
patch which explains what is happening in this case.  Andrew's #ifndef
WIN32 was the correct fix.  I consider this issue closed.




It looks like we still have problems in this area :-( see 



Now it looks like somehow the fopen on the log file that isn't commented 
out is failing. But the identical code worked on the same machine on 
HEAD. So this does rather look like a timing issue.


Investigating ...


cheers

andrew




Re: [HACKERS] Draft release notes complete

2012-09-05 Thread Bruce Momjian
On Wed, Sep  5, 2012 at 09:56:32PM -0300, Alvaro Herrera wrote:
> Excerpts from Tom Lane's message of mié sep 05 20:24:08 -0300 2012:
> > Andrew Dunstan  writes:
> > > The only reason there is a significant delay is that the administrators 
> > > have chosen not to run the process more than once every 4 hours. That's 
> > > a choice not dictated by the process they are using, but by other 
> > > considerations concerning the machine it's being run on. Since I am not 
> > > one of the admins and don't really want to take responsibility for it I 
> > > am not going to second guess them. On the very rare occasions when I 
> > > absolutely have to have the totally up to date docs I build them myself 
> > > - it takes about 60 seconds on my modest hardware.
> > 
> > I think the argument for having a quick docs build service is not about
> > the time needed, but the need to have all the appropriate tools
> > installed.  While I can understand that argument for J Random Hacker,
> > I'm mystified why Bruce doesn't seem to have bothered to get a working
> > SGML toolset installed.  It's not like editing the docs is a one-shot
> > task for him.
> 
> As far as I understand, Bruce's concern is not about seeing the docs
> built himself, but having an HTML copy published somewhere that he can
> point people to, after applying some patch.  To me, that's a perfectly
> legitimate reason to want to have them quickly.

Correct.  I have always had a working SGML toolset.  If we are not going
to have the developer site run more often, I will just go back to
setting up my own public doc build, like I used to do.  I removed mine
when the official one was more current/reliable --- if that has changed,
I will return to my old setup, and publish my own URL for users to
verify doc changes.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +




Re: [HACKERS] Draft release notes complete

2012-09-05 Thread Josh Berkus

> Correct.  I have always had a working SGML toolset.  If we are not going
> to have the developer site run more often, I will just go back to
> setting up my own public doc build, like I used to do.  I removed mine
> when the official one was more current/reliable --- if that has changed,
> I will return to my old setup, and publish my own URL for users to
> verify doc changes.

I guess I don't see why building every 4 hours is an issue?  That's 6
times/day.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com




Re: [HACKERS] Draft release notes complete

2012-09-05 Thread Andrew Dunstan


On 09/05/2012 09:25 PM, Bruce Momjian wrote:

On Wed, Sep  5, 2012 at 09:56:32PM -0300, Alvaro Herrera wrote:

Excerpts from Tom Lane's message of mié sep 05 20:24:08 -0300 2012:

Andrew Dunstan  writes:

The only reason there is a significant delay is that the administrators
have chosen not to run the process more than once every 4 hours. That's
a choice not dictated by the process they are using, but by other
considerations concerning the machine it's being run on. Since I am not
one of the admins and don't really want to take responsibility for it I
am not going to second guess them. On the very rare occasions when I
absolutely have to have the totally up to date docs I build them myself
- it takes about 60 seconds on my modest hardware.

I think the argument for having a quick docs build service is not about
the time needed, but the need to have all the appropriate tools
installed.  While I can understand that argument for J Random Hacker,
I'm mystified why Bruce doesn't seem to have bothered to get a working
SGML toolset installed.  It's not like editing the docs is a one-shot
task for him.

As far as I understand, Bruce's concern is not about seeing the docs
built himself, but having an HTML copy published somewhere that he can
point people to, after applying some patch.  To me, that's a perfectly
legitimate reason to want to have them quickly.

Correct.  I have always had a working SGML toolset.  If we are not going
to have the developer site run more often, I will just go back to
setting up my own public doc build, like I used to do.  I removed mine
when the official one was more current/reliable --- if that has changed,
I will return to my old setup, and publish my own URL for users to
verify doc changes.


How often do you want? After all, 
 is presumably 
going to keep pointing to where it now points.


cheers

andrew




Re: [HACKERS] Draft release notes complete

2012-09-05 Thread Bruce Momjian
On Wed, Sep  5, 2012 at 06:32:48PM -0700, Josh Berkus wrote:
> 
> > Correct.  I have always had a working SGML toolset.  If we are not going
> > to have the developer site run more often, I will just go back to
> > setting up my own public doc build, like I used to do.  I removed mine
> > when the official one was more current/reliable --- if that has changed,
> > I will return to my old setup, and publish my own URL for users to
> > verify doc changes.
> 
> I guess I don't see why building every 4 hours is an issue?  That's 6
> times/day.

I can't commit and send someone a URL showing the change because they
might actually read their email in less than 4 hours.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +




Re: [HACKERS] 9.2 pg_upgrade regression tests on WIndows

2012-09-05 Thread Bruce Momjian
On Wed, Sep  5, 2012 at 09:07:05PM -0400, Andrew Dunstan wrote:
> >OK, I worked with Andrew on this issue, and have applied the attached
> >patch which explains what is happening in this case.  Andrew's #ifndef
> >WIN32 was the correct fix.  I consider this issue closed.
> >
> 
> 
> It looks like we still have problems in this area :-( see 
> 
> 
> Now it looks like somehow the fopen on the log file that isn't
> commented out is failing. But the identical code worked on the same
> machine on HEAD. So this does rather look like a timing issue.
> 
> Investigating ...

Yes, that is very odd.  It is also right after the code we just changed
to use binary mode to split the pg_dumpall file, split_old_dump().

The code is doing pg_ctl -w stop, then starting a new postmaster with
pg_ctl -w start.  Looking at the pg_ctl.c code (that you wrote), what
pg_ctl -w stop does is to wait for the postmaster.pid file to disappear,
then it returns complete.  I suppose it is possible that the pid file is
getting removed, pg_ctl is returning done, but the pg_ctl binary is
still running, holding open those log files.

I guess the buildfarm is showing us the problems in pg_upgrade, as it
should.  I think you might be right that we need to add a sleep(1) at
the end of stop_postmaster on Windows, and document it is to give the
postmaster time to release its log files.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +




Re: [HACKERS] Draft release notes complete

2012-09-05 Thread Bruce Momjian
On Wed, Sep  5, 2012 at 09:33:35PM -0400, Andrew Dunstan wrote:
> 
> On 09/05/2012 09:25 PM, Bruce Momjian wrote:
> >On Wed, Sep  5, 2012 at 09:56:32PM -0300, Alvaro Herrera wrote:
> >>Excerpts from Tom Lane's message of mié sep 05 20:24:08 -0300 2012:
> >>>Andrew Dunstan  writes:
> The only reason there is a significant delay is that the administrators
> have chosen not to run the process more than once every 4 hours. That's
> a choice not dictated by the process they are using, but by other
> considerations concerning the machine it's being run on. Since I am not
> one of the admins and don't really want to take responsibility for it I
> am not going to second guess them. On the very rare occasions when I
> absolutely have to have the totally up to date docs I build them myself
> - it takes about 60 seconds on my modest hardware.
> >>>I think the argument for having a quick docs build service is not about
> >>>the time needed, but the need to have all the appropriate tools
> >>>installed.  While I can understand that argument for J Random Hacker,
> >>>I'm mystified why Bruce doesn't seem to have bothered to get a working
> >>>SGML toolset installed.  It's not like editing the docs is a one-shot
> >>>task for him.
> >>As far as I understand, Bruce's concern is not about seeing the docs
> >>built himself, but having an HTML copy published somewhere that he can
> >>point people to, after applying some patch.  To me, that's a perfectly
> >>legitimate reason to want to have them quickly.
> >Correct.  I have always had a working SGML toolset.  If we are not going
> >to have the developer site run more often, I will just go back to
> >setting up my own public doc build, like I used to do.  I removed mine
> >when the official one was more current/reliable --- if that has changed,
> >I will return to my old setup, and publish my own URL for users to
> >verify doc changes.
> 
> How often do you want? After all,
>  is
> presumably going to keep pointing to where it now points.

Well, the old code checked every five minutes, and it rebuilt in 4
minutes, so there was a max of 10 minutes delay.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +




Re: [HACKERS] Report proper GUC parameter names in error messages

2012-09-05 Thread Tom Lane
Gurjeet Singh  writes:
> Error messages when terminating xlog redo leads the user to believe that
> there are parameters named max_prepared_xacts and max_locks_per_xact, which
> is not true. This patch corrects the parameter names emitted in the logs.

Good catch --- applied.

regards, tom lane




Re: [HACKERS] Cascading replication on Windows bug

2012-09-05 Thread Heikki Linnakangas

On 05.09.2012 16:45, Heikki Linnakangas wrote:

On 05.09.2012 14:28, Tom Lane wrote:

Heikki Linnakangas writes:

That doesn't work on Windows. As long as a walsender is keeping the old
file open, the unlink() on it fails. You get an error like this in the
startup process:
FATAL: could not rename file "pg_xlog/RECOVERYXLOG" to
"pg_xlog/0001000D": Permission denied


I thought we had some workaround for that problem. Otherwise, you'd be
seeing this type of failure every time a checkpoint tries to drop or
rename files.


Hmm, now that I look at the error message more carefully, what happens
is that the unlink() succeeds, but when the startup process tries to
rename the new file in place, the rename() fails. The comments in
RemoveOldXLogFiles() explains that, and also shows how to work around it:


/*
* On Windows, if another process (e.g another backend)
* holds the file open in FILE_SHARE_DELETE mode, unlink
* will succeed, but the file will still show up in
* directory listing until the last handle is closed. To
* avoid confusing the lingering deleted file for a live
* WAL file that needs to be archived, rename it before
* deleting it.
*
* If another process holds the file open without
* FILE_SHARE_DELETE flag, rename will fail. We'll try
* again at the next checkpoint.
*/


I think we need the same trick here, and rename the old file first, then
unlink() it, and then rename the new file in place. I'll try that out..


Ok, committed a patch to do that.

- Heikki




Re: [HACKERS] Draft release notes complete

2012-09-05 Thread Stephen Frost
* Bruce Momjian (br...@momjian.us) wrote:
> > How often do you want? After all,
> >  is
> > presumably going to keep pointing to where it now points.
> 
> Well, the old code checked every five minutes, and it rebuilt in 4
> minutes, so there was a max of 10 minutes delay.

I'm a bit mystified why we build them far *more* often than necessary..
Do we really commit documentation updates more than 6 times per day?
Wouldn't it be reasonably straight-forward to set up a commit-hook that
either kicks off a build itself, drops a file marker some place to
signal a cron job to do it, or something similar?

Have to agree with Bruce on this one, for my part.  I wonder if the
change to delay the crons was due to lack of proper locking or
tracking, or perhaps a lack of a filter for just changes which would
impact the documentation..

Thanks,

Stephen




Re: [HACKERS] Proof of concept: standalone backend with full FE/BE protocol

2012-09-05 Thread Aidan Van Dyk
So, in the spirit of not painting ourselves into a tiny corner here on
the whole "single backend" and "embedded database" problem with pg
options, can we generalize this a bit?

Any way we could make psql connect to a "given fd", as an option?  In
theory, that could be something opened by some outside-of-postgresql
tunnel with 3rd-party auth in the same app that uses libpq directly,
or it could be an fd prepared by something that specifically launched
a single-backend postgres, like in the case of pg_upgrade: pg_upgrade
itself, and passed to psql, etc., which would be passed in as options.

In theory, that might even allow the possibility of starting the
single-backend only once and passing it to multiple clients in
succession, instead of having to stop/start the backend between each
client.  And it would allow the possibility of "something" (pg_upgrade,
or some other application) to control the start/stop of the backend
outside the libpq connection.

Now, I'm familiar with the abilities related to passing fd's around in
Linux, but have no idea if we'd have comparable methods to use on
Windows.

a.

On Wed, Sep 5, 2012 at 8:11 PM, Daniel Farina  wrote:
> On Wed, Sep 5, 2012 at 3:17 PM, Peter Eisentraut  wrote:
>> On 9/5/12 5:59 PM, Daniel Farina wrote:
>>> I agree with this, even though in theory (but not in practice)
>>> creative use of unix sockets (sorry windows, perhaps some
>>> port-allocating and URL mangling can be done instead) and conventions
>>> for those would allow even better almost-like-embedded results,
>>> methinks.  That may still be able to happen.
>>
>> Sure, everyone who cares can already do this, but some people probably
>> don't care enough.  Also, making this portable and robust for everyone
>> to use, not just your local environment, is pretty tricky.  See
>> pg_upgrade test script, for a prominent example.
>
> To my knowledge, no one has even really seriously tried to package it
> yet and then told the tale of woe, and it was an especially
> un-gratifying exercise for quite a while on account of multiple
> postgreses not getting along on the same machine because of SysV
> shmem.
>
> The bar for testing is a lot different than pg_upgrade (where a
> negative consequence is confusing and stressful downtime), and many
> programs use fork/threads and multiple connections even in testing,
> making its requirements different.
>
> So consider me still skeptical given the current reasoning that unix
> sockets can't be a good-or-better substitute, and especially
> accounting for programs that need multiple backends.
>
> --
> fdr
>
>



-- 
Aidan Van Dyk Create like a god,
ai...@highrise.ca   command like a king,
http://www.highrise.ca/   work like a slave.




Re: [HACKERS] 9.2 pg_upgrade regression tests on WIndows

2012-09-05 Thread Andrew Dunstan


On 09/05/2012 09:42 PM, Bruce Momjian wrote:

On Wed, Sep  5, 2012 at 09:07:05PM -0400, Andrew Dunstan wrote:

OK, I worked with Andrew on this issue, and have applied the attached
patch which explains what is happening in this case.  Andrew's #ifndef
WIN32 was the correct fix.  I consider this issue closed.



It looks like we still have problems in this area :-( see 


Now it looks like somehow the fopen on the log file that isn't
commented out is failing. But the identical code worked on the same
machine on HEAD. So this does rather look like a timing issue.

Investigating ...

Yes, that is very odd.  It is also right after the code we just changed
to use binary mode to split the pg_dumpall file, split_old_dump().

The code is doing pg_ctl -w stop, then starting a new postmaster with
pg_ctl -w start.  Looking at the pg_ctl.c code (that you wrote), what
pg_ctl -w stop does is to wait for the postmaster.pid file to disappear,
then it returns complete.  I suppose it is possible that the pid file is
getting removed, pg_ctl is returning done, but the pg_ctl binary is
still running, holding open those log files.

I guess the buildfarm is showing us the problems in pg_upgrade, as it
should.  I think you might be right that we need to add a sleep(1) at
the end of stop_postmaster on Windows, and document it is to give the
postmaster time to release its log files.




Icky. I wish there were some nice portable flock() mechanism we could use.

I just re-ran the test on the same machine, same code, same everything 
as the reported failure, and it passed, so it definitely looks like 
it's a timing issue.


I'd be inclined to put a loop around that fopen() to try it once every 
second for, say, 5 seconds.


cheers

andrew




Re: [HACKERS] 9.2 pg_upgrade regression tests on WIndows

2012-09-05 Thread Bruce Momjian
On Wed, Sep  5, 2012 at 10:04:07PM -0400, Andrew Dunstan wrote:
> 
> On 09/05/2012 09:42 PM, Bruce Momjian wrote:
> >On Wed, Sep  5, 2012 at 09:07:05PM -0400, Andrew Dunstan wrote:
> >>>OK, I worked with Andrew on this issue, and have applied the attached
> >>>patch which explains what is happening in this case.  Andrew's #ifndef
> >>>WIN32 was the correct fix.  I consider this issue closed.
> >>>
> >>
> >>It looks like we still have problems in this area :-( see 
> >>
> >>
> >>Now it looks like somehow the fopen on the log file that isn't
> >>commented out is failing. But the identical code worked on the same
> >>machine on HEAD. So this does rather look like a timing issue.
> >>
> >>Investigating ...
> >Yes, that is very odd.  It is also right after the code we just changed
> >to use binary mode to split the pg_dumpall file, split_old_dump().
> >
> >The code is doing pg_ctl -w stop, then starting a new postmaster with
> >pg_ctl -w start.  Looking at the pg_ctl.c code (that you wrote), what
> >pg_ctl -w stop does is to wait for the postmaster.pid file to disappear,
> >then it returns complete.  I suppose it is possible that the pid file is
> >getting removed, pg_ctl is returning done, but the pg_ctl binary is
> >still running, holding open those log files.
> >
> >I guess the buildfarm is showing us the problems in pg_upgrade, as it
> >should.  I think you might be right that we need to add a sleep(1) at
> >the end of stop_postmaster on Windows, and document it is to give the
> >postmaster time to release its log files.
> 
> 
> 
> Icky. I wish there were some nice portable flock() mechanism we could use.
> 
> I just re-ran the test on the same machine, same code, same
> everything as the reported failure, and it passed, so it definitely
> looks like it's a timing issue.
> 
> I'd be inclined to put a loop around that fopen() to try it once
> every second for, say, 5 seconds.

Yes, good idea.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +




Re: [HACKERS] Draft release notes complete

2012-09-05 Thread Bruce Momjian
On Wed, Sep  5, 2012 at 09:59:50PM -0400, Stephen Frost wrote:
> * Bruce Momjian (br...@momjian.us) wrote:
> > > How often do you want? After all,
> > >  is
> > > presumably going to keep pointing to where it now points.
> > 
> > Well, the old code checked every five minutes, and it rebuilt in 4
> > minutes, so there was a max of 10 minutes delay.
> 
> I'm a bit mystified why we build them far *more* often than necessary..
> Do we really commit documentation updates more than 6 times per day?
> Wouldn't it be reasonably straight-forward to set up a commit-hook that
> either kicks off a build itself, drops a file marker some place to
> signal a cron job to do it, or something similar?
> 
> Have to agree with Bruce on this one, for my part.  I wonder if the
> change to delay the crons was due to lack of proper locking or
> tracking, or perhaps a lack of a filter for just changes which would
> impact the documentation..

What the script I donated did was to do a cvs update in the sgml
directory and look for changes --- if it found them, it rebuilt.


-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +




Re: [HACKERS] Draft release notes complete

2012-09-05 Thread Andrew Dunstan


On 09/05/2012 09:59 PM, Stephen Frost wrote:

* Bruce Momjian (br...@momjian.us) wrote:

How often do you want? After all,
 is
presumably going to keep pointing to where it now points.

Well, the old code checked every five minutes, and it rebuilt in 4
minutes, so there was a max of 10 minutes delay.

I'm a bit mystified why we build them far *more* often than necessary..
Do we really commit documentation updates more than 6 times per day?
Wouldn't it be reasonably straight-forward to set up a commit-hook that
either kicks off a build itself, drops a file marker some place to
signal a cron job to do it, or something similar?

Have to agree with Bruce on this one, for my part.  I wonder if the
change to delay the crons was due to lack of proper locking or
tracking, or perhaps a lack of a filter for just changes which would
impact the documentation..





The buildfarm code does not run if there are no changes. The job runs, 
sees that there are no changes, and exits.


And it has no problem with collisions either. The code is guaranteed 
self-exclusionary. You can run it every minute from cron if you like and 
you will not get a collision. If it finds a running instance of itself 
it exits. Some people run the buildfarm script from cron every 15 
minutes or so relying on the locking mechanism.


And building the docs doesn't have a very high impact. And it takes 
about 2 minutes.


So, many of the assumptions / speculations in your email are wrong ;-)

cheers

andrew






Re: [HACKERS] Proof of concept: standalone backend with full FE/BE protocol

2012-09-05 Thread Tom Lane
Aidan Van Dyk  writes:
> So, in the spirit of not painting ourselves into a tiny corner here on
> the whole "single backend" and "embedded database" problem with pg
> options, can we generalize this a bit?

> Any way we could make psql connect to a "given fd", as an option?  In
> theory, that could be something opened by some out-side-of-postgresql
> tunnel with 3rd party auth in the same app that uses libpq directly,
> or it could be a fd prepared  by something that specifically launched
> a single-backend postgres, like in the case of pg_upgrade, pg_upgrade
> itself, and passed to psql, etc, which would be passed in as options.

This seems to me to be going in exactly the wrong direction.  What
I visualize this feature as responding to is demand for a *simple*,
minimal configuration, minimal administration, quasi-embedded database.
What you propose above is not that, but is if anything even more
complicated for an application to deal with than a regular persistent
server.  More complication is *the wrong thing* for this use case.

The people who would be interested in this are currently using something
like SQLite within a single application program.  It hasn't got any of
the features you're suggesting either, and I don't think anybody wishes
it did.

regards, tom lane




Re: [HACKERS] Proof of concept: standalone backend with full FE/BE protocol

2012-09-05 Thread Andrew Dunstan


On 09/05/2012 10:14 PM, Tom Lane wrote:

Aidan Van Dyk  writes:

So, in the spirit of not painting ourselves into a tiny corner here on
the whole "single backend" and "embedded database" problem with pg
options, can we generalize this a bit?
Any way we could make psql connect to a "given fd", as an option?  In
theory, that could be something opened by some out-side-of-postgresql
tunnel with 3rd party auth in the same app that uses libpq directly,
or it could be a fd prepared  by something that specifically launched
a single-backend postgres, like in the case of pg_upgrade, pg_upgrade
itself, and passed to psql, etc, which would be passed in as options.

This seems to me to be going in exactly the wrong direction.  What
I visualize this feature as responding to is demand for a *simple*,
minimal configuration, minimal administration, quasi-embedded database.
What you propose above is not that, but is if anything even more
complicated for an application to deal with than a regular persistent
server.  More complication is *the wrong thing* for this use case.

The people who would be interested in this are currently using something
like SQLite within a single application program.  It hasn't got any of
the features you're suggesting either, and I don't think anybody wishes
it did.





Exactly. I think it's worth stating that this has a HUGE potential 
audience, and if we can get to this the deployment of Postgres could 
mushroom enormously. I'm really quite excited about it.


cheers

andrew




Re: [HACKERS] 9.2 pg_upgrade regression tests on WIndows

2012-09-05 Thread Andrew Dunstan


On 09/05/2012 10:07 PM, Bruce Momjian wrote:

On Wed, Sep  5, 2012 at 10:04:07PM -0400, Andrew Dunstan wrote:

On 09/05/2012 09:42 PM, Bruce Momjian wrote:

On Wed, Sep  5, 2012 at 09:07:05PM -0400, Andrew Dunstan wrote:

OK, I worked with Andrew on this issue, and have applied the attached
patch which explains what is happening in this case.  Andrew's #ifndef
WIN32 was the correct fix.  I consider this issue closed.


It looks like we still have problems in this area :-( see 


Now it looks like somehow the fopen on the log file that isn't
commented out is failing. But the identical code worked on the same
machine on HEAD. So this does rather look like a timing issue.

Investigating ...

Yes, that is very odd.  It is also right after the code we just changed
to use binary mode to split the pg_dumpall file, split_old_dump().

The code is doing pg_ctl -w stop, then starting a new postmaster with
pg_ctl -w start.  Looking at the pg_ctl.c code (that you wrote), what
pg_ctl -w stop does is to wait for the postmaster.pid file to disappear,
then it returns complete.  I suppose it is possible that the pid file is
getting removed, pg_ctl is returning done, but the pg_ctl binary is
still running, holding open those log files.

I guess the buildfarm is showing us the problems in pg_upgrade, as it
should.  I think you might be right that we need to add a sleep(1) at
the end of stop_postmaster on Windows, and document it is to give the
postmaster time to release its log files.



Icky. I wish there were some nice portable flock() mechanism we could use.

I just re-ran the test on the same machine, same code, same
everything as the reported failure, and it passed, so it definitely
looks like it's a timing issue.

I'd be inclined to put a loop around that fopen() to try it once
every second for, say, 5 seconds.

Yes, good idea.



Suggested patch attached.

cheers

andrew

diff --git a/contrib/pg_upgrade/exec.c b/contrib/pg_upgrade/exec.c
index 99f5006..f84d857 100644
--- a/contrib/pg_upgrade/exec.c
+++ b/contrib/pg_upgrade/exec.c
@@ -63,7 +63,25 @@ exec_prog(const char *log_file, const char *opt_log_file,
 	if (written >= MAXCMDLEN)
 		pg_log(PG_FATAL, "command too long\n");
 
-	if ((log = fopen_priv(log_file, "a")) == NULL)
+#ifdef WIN32
+	{
+		/* 
+		 * Try to open the log file a few times in case the
+		 * server takes a bit longer than we'd like to release it.
+		 */
+		int iter;
+		for (iter = 0; iter < 5; iter++)
+		{
+			log = fopen_priv(log_file, "a");
+			if (log != NULL || iter == 4)
+				break;
+			sleep(1);
+		}
+	}
+#else
+	log = fopen_priv(log_file, "a");
+#endif
+	if (log == NULL)
 		pg_log(PG_FATAL, "cannot write to log file %s\n", log_file);
 #ifdef WIN32
 	fprintf(log, "\n\n");

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] 9.2 pg_upgrade regression tests on WIndows

2012-09-05 Thread Bruce Momjian
On Wed, Sep  5, 2012 at 10:35:26PM -0400, Andrew Dunstan wrote:
> >>Icky. I wish there were some nice portable flock() mechanism we could use.
> >>
> >>I just re-ran the test on the same machine, same code, same
> >>everything as the reported failure, and it passed, so it definitely
> >>looks like it's a timing issue.
> >>
> >>I'd be inclined to put a loop around that fopen() to try it once
> >>every second for, say, 5 seconds.
> >Yes, good idea.
> >
> 
> Suggested patch attached.
> 
> cheers
> 
> andrew
> 

> diff --git a/contrib/pg_upgrade/exec.c b/contrib/pg_upgrade/exec.c
> index 99f5006..f84d857 100644
> --- a/contrib/pg_upgrade/exec.c
> +++ b/contrib/pg_upgrade/exec.c
> @@ -63,7 +63,25 @@ exec_prog(const char *log_file, const char *opt_log_file,
>   if (written >= MAXCMDLEN)
>   pg_log(PG_FATAL, "command too long\n");
>  
> - if ((log = fopen_priv(log_file, "a")) == NULL)
> +#ifdef WIN32
> + {
> + /* 
> +  * Try to open the log file a few times in case the
> +  * server takes a bit longer than we'd like to release it.
> +  */
> + int iter;
> + for (iter = 0; iter < 5; iter++)
> + {
> + log = fopen_priv(log_file, "a");
> + if (log != NULL || iter == 4)
> + break;
> + sleep(1);
> + }
> + }
> +#else
> + log = fopen_priv(log_file, "a");
> +#endif
> + if (log == NULL)
>   pg_log(PG_FATAL, "cannot write to log file %s\n", log_file);
>  #ifdef WIN32
>   fprintf(log, "\n\n");

I would like to see a more verbose comment, so we don't forget why we
did this.  I think my inability to quickly discover the cause of the
previous log write problem is that I didn't document which file
descriptors are kept open on Windows.  I suggest for a comment:

/* 
 * "pg_ctl -w stop" might have reported that the server has
 * stopped because the postmaster.pid file has been removed,
 * but "pg_ctl -w start" might still be in the process of
 * closing and might still be holding its stdout and -l log
 * file descriptors open.  Therefore, try to open the log 
 * file a few times.
 */

Anyway, we can easily adjust the comment post-9.2.0.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +




Re: [HACKERS] 9.2 pg_upgrade regression tests on WIndows

2012-09-05 Thread Bruce Momjian
On Wed, Sep  5, 2012 at 10:35:26PM -0400, Andrew Dunstan wrote:
> diff --git a/contrib/pg_upgrade/exec.c b/contrib/pg_upgrade/exec.c
> index 99f5006..f84d857 100644
> --- a/contrib/pg_upgrade/exec.c
> +++ b/contrib/pg_upgrade/exec.c
> @@ -63,7 +63,25 @@ exec_prog(const char *log_file, const char *opt_log_file,
>   if (written >= MAXCMDLEN)
>   pg_log(PG_FATAL, "command too long\n");
>  
> - if ((log = fopen_priv(log_file, "a")) == NULL)
> +#ifdef WIN32
> + {
> + /* 
> +  * Try to open the log file a few times in case the
> +  * server takes a bit longer than we'd like to release it.
> +  */
> + int iter;
> + for (iter = 0; iter < 5; iter++)
> + {
> + log = fopen_priv(log_file, "a");
> + if (log != NULL || iter == 4)
> + break;
> + sleep(1);
> + }
> + }
> +#else
> + log = fopen_priv(log_file, "a");
> +#endif
> + if (log == NULL)
>   pg_log(PG_FATAL, "cannot write to log file %s\n", log_file);
>  #ifdef WIN32
>   fprintf(log, "\n\n");

Oh, also, we normally put the ifndef WIN32 code first because that is
our most common platform.  Also, is "|| iter == 4" necessary?

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +




Re: [HACKERS] 9.2 pg_upgrade regression tests on WIndows

2012-09-05 Thread Andrew Dunstan


On 09/05/2012 10:41 PM, Bruce Momjian wrote:


I would like to see a more verbose comment, so we don't forget why we
did this.  I think my inability to quickly discover the cause of the
previous log write problem is that I didn't document which file
descriptors are kept open on Windows.  I suggest for a comment:

/*
 * "pg_ctl -w stop" might have reported that the server has
 * stopped because the postmaster.pid file has been removed,
 * but "pg_ctl -w start" might still be in the process of
 * closing and might still be holding its stdout and -l log
 * file descriptors open.  Therefore, try to open the log
 * file a few times.
 */

Anyway, we can easily adjust the comment post-9.2.0.



Shall I apply the patch now? If so I'll include your comment.

cheers

andrew




Re: [HACKERS] Draft release notes complete

2012-09-05 Thread Stephen Frost
* Andrew Dunstan (and...@dunslane.net) wrote:
> The buildfarm code does not run if there are no changes. The job
> runs, sees that there are no changes, and exits.

Right, hence it makes great sense to use it for this (as opposed to
Bruce's previous script or some other new one).  While it might appear
to be overkill, it actually does lots of useful and good things and
integrates better with the existing setup anyway.

Now that you've provided the magic sauce wrt --skip-steps, can we get an
admin to implement a doc-only build that runs more frequently to update
the dev docs..?

Andrew, if we're going to rely on that, even just internally, perhaps we
should go ahead and add documentation for it?

Thanks,

Stephen




Re: [HACKERS] 9.2 pg_upgrade regression tests on WIndows

2012-09-05 Thread Bruce Momjian
On Wed, Sep  5, 2012 at 10:46:17PM -0400, Andrew Dunstan wrote:
> 
> On 09/05/2012 10:41 PM, Bruce Momjian wrote:
> >
> >I would like to see a more verbose comment, so we don't forget why we
> >did this.  I think my inability to quickly discover the cause of the
> >previous log write problem is that I didn't document which file
> >descriptors are kept open on Windows.  I suggest for a comment:
> >
> > /*
> >  * "pg_ctl -w stop" might have reported that the server has
> >  * stopped because the postmaster.pid file has been removed,
> >  * but "pg_ctl -w start" might still be in the process of
> >  * closing and might still be holding its stdout and -l log
> >  * file descriptors open.  Therefore, try to open the log
> >  * file a few times.
> >  */
> >
> >Anyway, we can easily adjust the comment post-9.2.0.
> 
> 
> Shall I apply the patch now? If so I'll include your comment.

Well, seems it is a crash bug.  Apply so we can get some buildfarm
testing overnight.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +




Re: [HACKERS] too much pgbench init output

2012-09-05 Thread Peter Eisentraut
On Tue, 2012-09-04 at 23:44 -0400, Tom Lane wrote:
> > b) There is no indication of where the end is.
> 
> Well, surely *that* can be fixed in a noncontroversial way: just
> print "M/N tuples done", where N is the target.

I have made this change.  I won't pursue using \r if others find it
useful as is.





Re: [HACKERS] Draft release notes complete

2012-09-05 Thread Andrew Dunstan


On 09/05/2012 11:01 PM, Stephen Frost wrote:

* Andrew Dunstan (and...@dunslane.net) wrote:

The buildfarm code does not run if there are no changes. The job
runs, sees that there are no changes, and exits.

Right, hence it makes great sense to use it for this (as opposed to
Bruce's previous script or some other new one).  While it might appear
to be overkill, it actually does lots of useful and good things and
integrates better with the existing setup anyway.

Now that you've provided the magic sauce wrt --skip-steps, can we get an
admin to implement a doc-only build that runs more frequently to update
the dev docs..?

Andrew, if we're going to rely on that, even just internally, perhaps we
should go ahead and add documentation for it?





You mean in my copious spare time?

AIUI the only thing stopping the admins from doing what is wanted is a 
shortage of tuits. I suspect if we're all a tiny bit patient it will happen.


But I guess they can speak for themselves.

cheers

andrew




Re: [HACKERS] Re: [COMMITTERS] pgsql: Revert "commit_delay" change; just add comment that we don't hav

2012-09-05 Thread Greg Smith

On 08/15/2012 11:41 AM, Peter Geoghegan wrote:

I know that someone is going to point out that in some particular benchmark,
they can get another relatively modest increase in throughput (perhaps
2%-3%) by splitting the difference between two adjoining millisecond
integer values. In that scenario, I'd be tempted to point out that
that increase is quite unlikely to carry over to real-world benefits,
because the setting is then right on the cusp of where increasing
commit_delay stops helping throughput and starts hurting it.


You guessed right on that.  I just responded to your survey over on 
pgsql-performance with two cases where older versions found optimal 
performance with commit_delay in the <=10 usec range.  Those are all in 
the BBWC case that I don't think you've been testing much of yet.


I recall Jignesh Shah reported his seeing that was from slightly better 
chunking of writes to disk, with a small but measurable drop in disk I/O 
operations (such as IOPS) relative to TPS.  The average throughput was 
no different; the number of *operations* was smaller though.  Fewer 8K 
I/O requests, more 16K+ ones.  Like a lot of these situations, adding 
some latency to every transaction can make them batch better.  And that 
can unexpectedly boost throughput enough that net latency is actually 
faster.  It's similar to how adding input queue latency with a pooler, 
limiting active connections, can actually make latency better by 
increasing efficiency.


On higher-end storage you can reach a point where IOPS gets high enough 
that the per-operation overhead becomes a problem, on top of the usual 
"is there enough write throughput?" question.  I suspect this situation 
might even be more common now, given IOPS issues like this are commonly 
highlighted when people do SSD reviews.


I still don't know that it's a widely popular situation.  But this 
particular use case has been one of the more persistent ones arguing to 
keep the parameter around until now.  Making sub-microsecond resolution 
on the parameter go away would effectively trash it just when it might 
get even more useful than before.


--
Greg Smith   2ndQuadrant USg...@2ndquadrant.com   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.com




Re: [HACKERS] Draft release notes complete

2012-09-05 Thread Stephen Frost
* Andrew Dunstan (and...@dunslane.net) wrote:
> You mean in my copious spare time?

If you're alright with the concept, then anyone can do it.  I was
looking more for your concurrence on the idea of documenting this
explicitly (which also implies that it'll be supported, etc).

I'd be happy to develop the actual patch to add the documentation.

> AIUI the only thing stopping the admins from doing what is wanted is
> a shortage of tuits. I suspect if we're all a tiny bit patient it
> will happen.

I agree that they'll now get to it, based off your explanation of how to
use --skip-steps, and it'll be done and good.

Thanks,

Stephen




Re: [HACKERS] Draft release notes complete

2012-09-05 Thread Andrew Dunstan


On 09/05/2012 11:44 PM, Stephen Frost wrote:

* Andrew Dunstan (and...@dunslane.net) wrote:

You mean in my copious spare time?

If you're alright with the concept, then anyone can do it.  I was
looking more for your concurrence on the idea of documenting this
explicitly (which also implies that it'll be supported, etc).

I'd be happy to develop the actual patch to add the documentation.





Sure, go for it. The buildfarm code is entirely public, and the 
documentation lives on the wiki, where it can be edited by anyone with 
edit privs there.


cheers

andrew




Re: [HACKERS] Proof of concept: standalone backend with full FE/BE protocol

2012-09-05 Thread Daniel Farina
On Wed, Sep 5, 2012 at 7:14 PM, Tom Lane  wrote:
> This seems to me to be going in exactly the wrong direction.  What
> I visualize this feature as responding to is demand for a *simple*,
> minimal configuration, minimal administration, quasi-embedded database.
> What you propose above is not that, but is if anything even more
> complicated for an application to deal with than a regular persistent
> server.  More complication is *the wrong thing* for this use case.
>
> The people who would be interested in this are currently using something
> like SQLite within a single application program.  It hasn't got any of
> the features you're suggesting either, and I don't think anybody wishes
> it did.

I am failing to understand how one could easily replicate the SQLite
feature of (even fairly poorly) using multiple processes addressing
one database, and supporting multiple executors per-database (which
correspond to every open 'connection' in SQLite, as far as I can
understand).  My best thoughts are in the direction of EXEC_BACKEND
and hooking up to a cluster post-facto, but I wasn't really looking
for solutions so much as to raise this (to me) important use-case.

I'm just thinking about all the enormously popular prefork based web
servers out there like unicorn (Ruby), gunicorn (Python), and even
without forking language-specific database abstractions like that seen
in Go ("database/sql") that have decided to make pooling the default
interaction.

It is easiest to use these prefork embedded servers in both in
development and production, so people (rather sensibly) do that --
better parity, and no fuss.

I really would rather not see that regress by appropriating special
mechanics for test/development scenarios with regards to managing
database connections (e.g. exactly one of them), so how do we not make
that a restriction, unless I misunderstood and was a non-restriction
already?

-- 
fdr




[HACKERS] Behavior difference for walsender and walreceiver for n/w breakdown case

2012-09-05 Thread Amit Kapila
I have observed that currently, in case of a network break between
master and standby, the walsender process gets terminated immediately,
but the walreceiver detects the breakage only after a long time.
The main reason I can see is the replication_timeout configuration
parameter: the walsender checks replication_timeout, and if there is no
communication from the other side within that time, it treats that as a
condition to terminate the walsender.
However, there is no such mechanism in the walreceiver.  It fails only
during a socket send call from XLogWalRcvSendReply(), after calling it
many times, because internally send() keeps accumulating data until the
socket's internal buffer is full, even if the other side's recv() has
not received the data.

Shouldn't there be a mechanism in the walreceiver so that it can detect
network failure sooner?


Basic steps to observe the above behavior:
1. Master and standby machines are connected normally.
2. Use the command "ifconfig ip down" to bring the network card of the
master and the standby down.
Observation:
The master detects the abnormal connection, but the standby can't; it
shows a connected channel for a long time.

With Regards, 
Amit Kapila