Re: [HACKERS] tuplesort memory usage: grow_memtuples

2012-10-13 Thread Jeff Janes
On Thu, Aug 16, 2012 at 4:26 PM, Peter Geoghegan  wrote:
> On 27 July 2012 16:39, Jeff Janes  wrote:
>>> Can you suggest a benchmark that will usefully exercise this patch?
>>
>> I think the given sizes below work on most 64 bit machines.
>
> My apologies for not getting around to taking a look at this sooner.
>
...
>
> I have attached a revision for your consideration, with a few
> editorialisations, mostly style-related.

Sorry, this fell through the cracks.  Your proposed patch looks good.


...

> I think this patch (or at least your observation about I/O waits
> within vmstat) may point to a more fundamental issue with our sort
> code: Why are we not using asynchronous I/O in our implementation?

I only see a lot of I/O waits when using a small value of work_mem.
And I don't know how useful async I/O would be there: where would we
stash the read-ahead data with work_mem being so small?  At larger
sizes, the kernel's or RAID controller's automatic read-ahead seems to
be enough to saturate a CPU.

Maybe just giving more aggressive advice about the default value of
work_mem being quite low for modern hardware, or making it scale with
shared_buffers by default similar to the way wal_buffers now does,
would be sufficient.
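To make the wal_buffers analogy concrete, here is a minimal C sketch of
deriving such a default; the 1/32 ratio mirrors how wal_buffers is sized
from shared_buffers, while the specific cap is a made-up placeholder,
not something proposed in this thread:

#include <stdio.h>

/* Hypothetical sketch: derive a work_mem default from shared_buffers in
 * the spirit of wal_buffers (1/32 of shared_buffers, clamped to a cap).
 * The cap below is an illustrative placeholder, not a proposal. */
int
main(void)
{
    int shared_buffers_kb = 4 * 1024 * 1024;    /* e.g. 4GB, in kB */
    int cap_kb = 64 * 1024;                     /* made-up 64MB ceiling */
    int work_mem_kb = shared_buffers_kb / 32;

    if (work_mem_kb > cap_kb)
        work_mem_kb = cap_kb;
    printf("derived work_mem = %d kB\n", work_mem_kb);   /* 65536 kB */
    return 0;
}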

Cheers,

Jeff




Re: [HACKERS] pg_stat_lwlocks view - lwlocks statistics, round 2

2012-10-13 Thread Fujii Masao
On Sun, Oct 14, 2012 at 10:46 AM, Satoshi Nagayasu  wrote:
> HEAD
> ====
> number of transactions actually processed: 3439971
> tps = 57331.891602 (including connections establishing)
> tps = 57340.932324 (excluding connections establishing)

> pg_stat_lwlocks patch (reporting disabled)
> ==========================================
> number of transactions actually processed: 3429370
> tps = 57155.286475 (including connections establishing)
> tps = 57163.996943 (excluding connections establishing)
>
> So, I think some additional hack to reduce reporting is needed.
> Would that be acceptable in terms of performance?

Tracing lwlock usage seems to cause a small performance overhead even
when reporting is disabled. I believe some users would prefer to avoid
that overhead even at the cost of pg_stat_lwlocks being unavailable.
It should be up to the user to decide whether to trace lwlock usage,
e.g., via a trace_lwlocks parameter, I think.
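A minimal sketch of what such gating could look like, with invented
names (this is not the patch's actual API); the point is that the fast
path pays only a single predictable branch when tracing is off:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical sketch: a trace_lwlocks GUC gating lwlock accounting.
 * All names here are invented for illustration. */
static bool trace_lwlocks = false;      /* would be a GUC in the backend */
static uint64_t lwlock_calls[64];       /* per-lock acquisition counters */

static void
count_lwlock_acquire(int lockid)
{
    if (!trace_lwlocks)
        return;                         /* tracing off: one branch, no work */
    lwlock_calls[lockid]++;
}

int
main(void)
{
    count_lwlock_acquire(3);            /* no-op while tracing is off */
    trace_lwlocks = true;
    count_lwlock_acquire(3);
    printf("lock 3: %llu calls\n", (unsigned long long) lwlock_calls[3]);
    return 0;
}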

>> Another comment: are local_calls/waits/time_ms really required?
>> I'm not sure how that info would help performance debugging.
>
>
> I think there is a need to observe how your test query is affected
> by the workload from other sessions. So, splitting local and shared
> statistics would be nice to have. Just my thought, though.

What I don't like is that a session can see only its own local stats,
which makes local stats hard to monitor. Imagine the case where you'd
like to monitor the local stats of each pgbench session: you would need
to modify pgbench so that each of its sessions monitors its own local
stats. Even if you run monitoring software, it cannot collect those
stats, because they don't belong to the session that it uses.

Regards,

-- 
Fujii Masao




Re: [HACKERS] Successor of MD5 authentication, let's use SCRAM

2012-10-13 Thread Daniel Farina
On Sat, Oct 13, 2012 at 7:00 AM, Andrew Dunstan  wrote:
> Does Debian create a self-signed certificate? If so, count me as
> unimpressed. I'd argue that's worse than doing nothing. Here's what the docs
> say (rightly) about such certificates:

Debian will give you a self-signed certificate by default.  Protecting
against passive eavesdroppers is not an inconsiderable benefit to get
for "free", and eavesdropping is definitely not a marginal attack
technique: it's probably the most common.

Given what they can possibly know about the end user, Debian has it right here.

-- 
fdr




Re: [HACKERS] Successor of MD5 authentication, let's use SCRAM

2012-10-13 Thread Greg Stark
On Wed, Oct 10, 2012 at 11:41 AM, Heikki Linnakangas  wrote:
> 1. Salt length. Greg Stark calculated the odds of salt collisions here:
> http://archives.postgresql.org/pgsql-hackers/2004-08/msg01540.php. It's not
> too bad as it is, and as Greg pointed out, if you can eavesdrop it's likely
> you can also hijack an already established connection. Nevertheless I think
> we should make the salt longer, say, 16 bytes.

FWIW, that calculation was based on the rule of thumb that a collision
is likely once you have sqrt(hash space) elements. Wikipedia has a
better formula, which for the current 4-byte salt comes up with 77,163.

For 16 bytes that formula gives about 2.17 * 10^19 salts before you
expect a collision.
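For reference, the formula in question is the usual birthday bound:
roughly sqrt(2 * N * ln 2) draws give a 50% collision chance among N
equally likely salts. A quick sketch that reproduces both numbers:

#include <math.h>
#include <stdio.h>

/* Birthday bound: ~sqrt(2 * N * ln 2) draws give a 50% collision chance
 * among N equally likely values. */
static double
birthday_bound(double space_bits)
{
    return sqrt(2.0 * pow(2.0, space_bits) * log(2.0));
}

int
main(void)
{
    printf("4-byte salt:  %.0f salts\n", birthday_bound(32));   /* ~77,163 */
    printf("16-byte salt: %.3e salts\n", birthday_bound(128));  /* ~2.17e19 */
    return 0;
}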


-- 
greg




Re: [HACKERS] pg_stat_lwlocks view - lwlocks statistics, round 2

2012-10-13 Thread Satoshi Nagayasu

Thanks for the review.

2012/10/14 8:55, Michael Paquier wrote:



On Sun, Oct 14, 2012 at 6:00 AM, Fujii Masao <masao.fu...@gmail.com> wrote:

On Sun, Oct 14, 2012 at 3:34 AM, Fujii Masao <masao.fu...@gmail.com> wrote:
 > On Sat, Oct 13, 2012 at 11:34 PM, Satoshi Nagayasu <sn...@uptime.jp> wrote:
 >> Hi,
 >>
 >> 2012/10/13 23:05, Satoshi Nagayasu wrote:
 >>> Hi all,
 >>>
 >>> I have fixed my previous patch for pg_stat_lwlocks view, and
 >>> as Josh commented, it now supports local and global (shared)
 >>> statistics in the same system view.
 >>
 >> Sorry, I found my mistakes. New fixed one is attached to this mail.
 >
 > Thanks for revising the patch. Here are the comments:
 >
 > The document needs to be updated.
 >
 > The patch caused the following compile warnings in my machine.
 >
 > pgstat.c:1357: warning: no previous prototype for 'pgstat_report_lwlockstat'
 > postgres.c:3922: warning: implicit declaration of function 'pgstat_report_lwlockstat'
 > pgstatfuncs.c:1854: warning: no previous prototype for 'pg_stat_reset_lwlocks'


Oops. I just fixed them. Thanks.


 > In my test, this patch caused a measurable performance overhead.
 > I created the test database by pgbench -s10 and ran pgbench -c8 -j8 -T60 -S.
 > Results are:
 >
 > [HEAD]
 > number of transactions actually processed: 1401369
 > tps = 23351.375811 (including connections establishing)
 > tps = 23355.900043 (excluding connections establishing)
 >
 > [PATCH]
 > number of transactions actually processed: 1401369
 > tps = 23351.375811 (including connections establishing)
 > tps = 23355.900043 (excluding connections establishing)

Oops! Obviously I copied and pasted the wrong test result...
Here is the right result.

[HEAD]
number of transactions actually processed: 1401369
tps = 23351.375811 (including connections establishing)
tps = 23355.900043 (excluding connections establishing)

[PATCH]
number of transactions actually processed: 1092400
tps = 18179.498013 (including connections establishing)
tps = 18182.450824 (excluding connections establishing)

Is the performance difference due only to the mutex lock being taken?


I think it comes from the high-frequency reporting to the
pgstat collector process, i.e., calling
pgstat_report_lwlockstat() in PostgresMain().

diff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c
index 585db1a..5ca2c6f 100644
--- a/src/backend/tcop/postgres.c
+++ b/src/backend/tcop/postgres.c
@@ -3919,6 +3919,8 @@ PostgresMain(int argc, char *argv[], const char *username)

pgstat_report_activity(STATE_IDLE, NULL);
}

+   pgstat_report_lwlockstat();
+
ReadyForQuery(whereToSendOutput);
send_ready_for_query = false;
}

When I reduced reporting (or just disabled it), the results show
that the performance is not affected by this patch.

Here are some additional results from the same performance test
that Fujii-san ran.

HEAD
====
number of transactions actually processed: 3439971
tps = 57331.891602 (including connections establishing)
tps = 57340.932324 (excluding connections establishing)

pg_stat_lwlocks patch (reporting enabled)
=========================================
number of transactions actually processed: 2665550
tps = 44425.038125 (including connections establishing)
tps = 44430.565651 (excluding connections establishing)

pg_stat_lwlocks patch (reporting disabled)
==========================================
number of transactions actually processed: 3429370
tps = 57155.286475 (including connections establishing)
tps = 57163.996943 (excluding connections establishing)

pg_stat_lwlocks patch (reporting reduced 1/100)
===============================================
number of transactions actually processed: 3421879
tps = 57030.660814 (including connections establishing)
tps = 57038.950498 (excluding connections establishing)

So, I think some additional hack to reduce reporting is needed.
Would that be acceptable in terms of performance?
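A minimal sketch of the kind of 1/100 reduction used above, with an
invented counter name and a stub standing in for the patch's reporting
call (so this is an illustration, not the patch's code):

#include <stdio.h>

/* Stub standing in for the patch's pgstat_report_lwlockstat(). */
static void
pgstat_report_lwlockstat(void)
{
    printf("stats sent to collector\n");
}

/* Hypothetical 1/100 reporting: skip 99 of every 100 opportunities. */
static int lwlockstat_countdown = 100;

static void
maybe_report_lwlockstat(void)
{
    if (--lwlockstat_countdown > 0)
        return;
    lwlockstat_countdown = 100;
    pgstat_report_lwlockstat();
}

int
main(void)
{
    for (int i = 0; i < 200; i++)       /* 200 opportunities -> 2 reports */
        maybe_report_lwlockstat();
    return 0;
}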


Another comment: are local_calls/waits/time_ms really required?
I'm not sure how that info would help performance debugging.


I think there is a need to observe how your test query is affected
by the workload from other sessions. So, splitting local and shared
statistics would be nice to have. Just my thought, though.

Regards,







--
Michael Paquier
http://michael.otacoo.com



--
Satoshi Nagayasu

Re: [HACKERS] pg_stat_lwlocks view - lwlocks statistics, round 2

2012-10-13 Thread Michael Paquier
On Sun, Oct 14, 2012 at 6:00 AM, Fujii Masao  wrote:

> On Sun, Oct 14, 2012 at 3:34 AM, Fujii Masao  wrote:
> > On Sat, Oct 13, 2012 at 11:34 PM, Satoshi Nagayasu  wrote:
> >> Hi,
> >>
> >> 2012/10/13 23:05, Satoshi Nagayasu wrote:
> >>> Hi all,
> >>>
> >>> I have fixed my previous patch for pg_stat_lwlocks view, and
> >>> as Josh commented, it now supports local and global (shared)
> >>> statistics in the same system view.
> >>
> >> Sorry, I found my mistakes. New fixed one is attached to this mail.
> >
> > Thanks for revising the patch. Here are the comments:
> >
> > The document needs to be updated.
> >
> > The patch caused the following compile warnings in my machine.
> >
> > pgstat.c:1357: warning: no previous prototype for 'pgstat_report_lwlockstat'
> > postgres.c:3922: warning: implicit declaration of function 'pgstat_report_lwlockstat'
> > pgstatfuncs.c:1854: warning: no previous prototype for 'pg_stat_reset_lwlocks'
> >
> > In my test, this patch caused a measurable performance overhead.
> > I created the test database by pgbench -s10 and ran pgbench -c8 -j8 -T60 -S.
> > Results are:
> >
> > [HEAD]
> > number of transactions actually processed: 1401369
> > tps = 23351.375811 (including connections establishing)
> > tps = 23355.900043 (excluding connections establishing)
> >
> > [PATCH]
> > number of transactions actually processed: 1401369
> > tps = 23351.375811 (including connections establishing)
> > tps = 23355.900043 (excluding connections establishing)
>
> Oops! Obviously I copied and pasted the wrong test result...
> Here is the right result.
>
> [HEAD]
> number of transactions actually processed: 1401369
> tps = 23351.375811 (including connections establishing)
> tps = 23355.900043 (excluding connections establishing)
>
> [PATCH]
> number of transactions actually processed: 1092400
> tps = 18179.498013 (including connections establishing)
> tps = 18182.450824 (excluding connections establishing)
>
Is the performance difference due only to the mutex lock being taken?



>
> Another comment: are local_calls/waits/time_ms really required?
> I'm not sure how that info would help performance debugging.
>
> Regards,
>
> --
> Fujii Masao
>
>



-- 
Michael Paquier
http://michael.otacoo.com


Re: [HACKERS] proposal - assign result of query to psql variable

2012-10-13 Thread Erik Rijkers
On Sat, October 13, 2012 19:26, Pavel Stehule wrote:
> 2012/10/13 Shigeru HANADA :
>> After you determine whether it's ok or unnecessary, I'll mark this patch as
>> "Ready for committer".
>>
>

I found behaviour which I think must count as a bug:
\gset doesn't allow further \\-separated commands after it.

Only the last of the commands below is problematic, giving the syntax error:

$ psql
psql (9.3devel-psql_var-20121012_2345-8b728e5c6e0ce6b6d6f54b92b390f14aa1aca6db)
Type "help" for help.

testdb=# select 1,2 \gset x,y
testdb=# \echo :x
1
testdb=# \echo :y
2
testdb=# \echo :x \\ \echo :y
1
2
testdb=# select 1,2 \gset x,y \\ \echo :x
\gset: syntax error
testdb=#

It'd be nice if it could be made to work.

Thanks

Erik Rijkers





[HACKERS] [WIP] pg_ping utility

2012-10-13 Thread Phil Sorber
Based on a previous thread
(http://archives.postgresql.org/pgsql-hackers/2012-10/msg00131.php) I
have put together a first attempt at a pg_ping utility. I am attaching
two patches: one for the executable and one for the docs.

I would also like to add regression tests and translations, but
wanted to get feedback on what I have so far.

Thanks.
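As a rough idea of the shape such a utility can take, here is a minimal
sketch built on libpq's PQping(); the messages and exit codes are
invented for illustration and are not taken from the attached patch:

#include <stdio.h>
#include <libpq-fe.h>

/* Minimal pg_ping-style check: ask the server whether it would accept a
 * connection, without actually authenticating. */
int
main(int argc, char *argv[])
{
    const char *conninfo = (argc > 1) ? argv[1] : "";

    switch (PQping(conninfo))
    {
        case PQPING_OK:
            printf("server is accepting connections\n");
            return 0;
        case PQPING_REJECT:
            printf("server is alive but rejecting connections\n");
            return 1;
        case PQPING_NO_RESPONSE:
            printf("no response\n");
            return 2;
        default:                        /* PQPING_NO_ATTEMPT */
            printf("no attempt made (bad connection string?)\n");
            return 3;
    }
}

Build with -lpq, e.g. cc pg_ping.c -o pg_ping -lpq, and run something
like ./pg_ping "host=localhost port=5432".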


pg_ping_bin.diff
Description: Binary data


pg_ping_docs.diff
Description: Binary data



Re: [HACKERS] pg_stat_lwlocks view - lwlocks statistics, round 2

2012-10-13 Thread Fujii Masao
On Sun, Oct 14, 2012 at 3:34 AM, Fujii Masao  wrote:
> On Sat, Oct 13, 2012 at 11:34 PM, Satoshi Nagayasu  wrote:
>> Hi,
>>
>> 2012/10/13 23:05, Satoshi Nagayasu wrote:
>>> Hi all,
>>>
>>> I have fixed my previous patch for pg_stat_lwlocks view, and
>>> as Josh commented, it now supports local and global (shared)
>>> statistics in the same system view.
>>
>> Sorry, I found my mistakes. New fixed one is attached to this mail.
>
> Thanks for revising the patch. Here are the comments:
>
> The document needs to be updated.
>
> The patch caused the following compile warnings in my machine.
>
> pgstat.c:1357: warning: no previous prototype for 'pgstat_report_lwlockstat'
> postgres.c:3922: warning: implicit declaration of function
> 'pgstat_report_lwlockstat'
> pgstatfuncs.c:1854: warning: no previous prototype for 'pg_stat_reset_lwlocks'
>
> In my test, this patch caused a measurable performance overhead.
> I created the test database by pgbench -s10 and ran pgbench -c8 -j8 -T60 -S.
> Results are:
>
> [HEAD]
> number of transactions actually processed: 1401369
> tps = 23351.375811 (including connections establishing)
> tps = 23355.900043 (excluding connections establishing)
>
> [PATCH]
> number of transactions actually processed: 1401369
> tps = 23351.375811 (including connections establishing)
> tps = 23355.900043 (excluding connections establishing)

Oops! Obviously I copied and pasted the wrong test result...
Here is the right result.

[HEAD]
number of transactions actually processed: 1401369
tps = 23351.375811 (including connections establishing)
tps = 23355.900043 (excluding connections establishing)

[PATCH]
number of transactions actually processed: 1092400
tps = 18179.498013 (including connections establishing)
tps = 18182.450824 (excluding connections establishing)

Another comment: are local_calls/waits/time_ms really required?
I'm not sure how that info would help performance debugging.

Regards,

-- 
Fujii Masao




Re: [HACKERS] Optimizer regression

2012-10-13 Thread Jim Nasby

On 10/13/12 3:15 PM, Jim Nasby wrote:

FWIW, it's definitely an issue of not being able to push down past the GROUP BY:


I take that back... GROUP BY doesn't matter. It's an issue of having the EXISTS 
in the inner query. I realize the examples have gotten a bit silly, but this 
seems to break it down to the simplest case of what's happening.

FAST:

explain analyze
SELECT p.customer_id, p.status_cd,
       EXISTS( SELECT * FROM loan_statuses ls
               WHERE ls.loan_id = p.id AND ls.status_cd = 'in_default' )
FROM loans p
LEFT JOIN ( SELECT * FROM loans p
            WHERE loan_type_cd IN ( 'payday', 'cso' ) ) d USING( id )
WHERE p.customer_id = 10287151
  AND p.status_cd = 'paid_off'
  AND p.loan_type_cd IN ( 'payday', 'cso' );

                                                   QUERY PLAN
----------------------------------------------------------------------------------------------------------------
 Nested Loop Left Join  (cost=0.00..234.87 rows=13 width=17) (actual time=0.085..0.861 rows=31 loops=1)
   ->  Index Scan using loans_m13 on loans p  (cost=0.00..36.01 rows=13 width=17) (actual time=0.045..0.137 rows=31 loops=1)
         Index Cond: ((customer_id = 10287151) AND ((status_cd)::text = 'paid_off'::text))
         Filter: ((loan_type_cd)::text = ANY ('{payday,cso}'::text[]))
   ->  Index Scan using loans_pkey on loans p  (cost=0.00..5.12 rows=1 width=4) (actual time=0.011..0.011 rows=1 loops=31)
         Index Cond: (p.id = p.id)
         Filter: ((p.loan_type_cd)::text = ANY ('{payday,cso}'::text[]))
   SubPlan 1
     ->  Index Scan using loan_statuses__loan_id__status on loan_statuses ls  (cost=0.00..10.17 rows=1 width=0) (actual time=0.011..0.011 rows=0 loops=31)
           Index Cond: ((loan_id = $0) AND ((status_cd)::text = 'in_default'::text))
 Total runtime: 0.950 ms
(11 rows)


SLOW:


cnuapp_p...@postgres10.obr=# explain SELECT p.customer_id, p.status_cd FROM loans p

Re: [HACKERS] Optimizer regression

2012-10-13 Thread Tom Lane
Jim Nasby  writes:
> FWIW, it's definitely an issue of not being able to push down past the GROUP 
> BY:

I think it's not that so much as the EXISTS inside a LEFT JOIN.

regards, tom lane




Re: [HACKERS] Potential autovacuum optimization: new tables

2012-10-13 Thread Joshua Berkus

> Ah.  Okay, maybe we can agree that that wasn't a good idea.

Oh, I'd say there's no question it was a mistake.  We just didn't have the data 
at the time to realize it.

> I don't really see that we need to bend over backwards to exactly
> match some data points that you made up out of thin air.  How about
> ceil(sqrt(N)) to start with?

We can start with anything, including Jeff Janes's equation (for my part, I
think sqrt(N) will result in analyzing very large tables a bit too often).
The tough part will be coming up with some way to test it.

--Josh




Re: [HACKERS] Deprecating RULES

2012-10-13 Thread Joshua Berkus
Simon,

> I think it's sad we can't even attempt a technical conversation
> without you making snide ad hominem attacks that aren't even close to
> being true on a personal level, nor accurate in a technical sense.

I would prefer it if you actually addressed my substantive arguments, which, so 
far, you haven't.

--Josh Berkus




Re: [HACKERS] Optimizer regression

2012-10-13 Thread Jim Nasby

On 10/13/12 2:45 PM, Tom Lane wrote:

Jim Nasby  writes:

Just upgraded to 8.4 (I know, I know…) and ran across this. Unfortunately I 
have no way to test this on 9.x, so I don't know if it's been fixed or not. I'm 
hoping that someone *cough*Tom*cough* would quickly recognize whether this push 
into subquery issue has been fixed or not, so I haven't included full details 
or a test case. I have a work-around so I don't care about this in 8.4, but if 
this regression still exists it would be nice if it were fixed.


It's hard to be sure with such an incomplete example, but I think 8.4 is
flattening the EXISTS to a semijoin and then getting trapped by join
order constraints into doing something less than optimal for this
particular use-case.  It was this type of example that motivated the
"parameterized path" stuff I've been working on for the past couple
of years.

In short, 9.2 should produce at least as good a plan as 8.3 for this
example, but 8.4 through 9.1 might not.


FWIW, it's definitely an issue of not being able to push down past the GROUP BY:

cnuapp_p...@postgres10.obr=# explain WITH default_stats AS (
    select customer_id, status_cd, count(*), max(id)
    from loans.payday_defaulted
    group by customer_id, status_cd
) SELECT * FROM default_stats where customer_id=10287151;
                                          QUERY PLAN
----------------------------------------------------------------------------------------------
 CTE Scan on default_stats  (cost=2980046.56..3004313.73 rows=5393 width=162)
   Filter: (customer_id = 10287151)
   CTE default_stats
     ->  HashAggregate  (cost=2963868.44..2980046.56 rows=1078541 width=17)
           ->  Hash Join  (cost=2028045.22..2902409.22 rows=6145922 width=17)
                 Hash Cond: (loans.id = ls.loan_id)
                 ->  Seq Scan on loans  (cost=0.00..688437.25 rows=10785404 width=17)
                       Filter: ((loan_type_cd)::text = ANY ('{payday,cso}'::text[]))
                 ->  Hash  (cost=2015864.33..2015864.33 rows=974471 width=4)
                       ->  HashAggregate  (cost=2006119.62..2015864.33 rows=974471 width=4)
                             ->  Seq Scan on loan_statuses ls  (cost=0.00..1984723.02 rows=8558638 width=4)
                                   Filter: ((status_cd)::text = 'in_default'::text)
(12 rows)

cnuapp_p...@postgres10.obr=# explain analyze
    select customer_id, status_cd, count(*), max(id)
    from loans.payday_defaulted
    where customer_id=10287151
    group by customer_id, status_cd;
                                          QUERY PLAN
----------------------------------------------------------------------------------------------
 HashAggregate  (cost=202.16..202.19 rows=2 width=17) (actual time=0.422..0.422 rows=0 loops=1)
   ->  Nested Loop Semi Join  (cost=0.00..202.07 rows=9 width=17) (actual time=0.422..0.422 rows=0 loops=1)
         ->  Index Scan using loans_m12 on loans  (cost=0.00..41.48 rows=16 width=17) (actual time=0.028..0.121 rows=31 loops=1)
               Index Cond: (customer_id = 10287151)
               Filter: ((loan_type_cd)::text = ANY ('{payday,cso}'::text[]))
         ->  Index Scan using loan_statuses__loan_id__status on loan_statuses ls  (cost=0.00..10.17 rows=1 width=4) (actual time=0.009..0.009 rows=0 loops=31)
               Index Cond: ((ls.loan_id = loans.id) AND ((ls.status_cd)::text = 'in_default'::text))
 Total runtime: 0.510 ms
(8 rows)

cnuapp_p...@postgres10.obr=#

I hope that we'll have 9.2 stood up before the year is out, so we'll check this 
then and see if it's fixed.
--
Jim C. Nasby, Database Architect   j...@nasby.net
512.569.9461 (cell) http://jim.nasby.net




Re: [HACKERS] Potential autovacuum optimization: new tables

2012-10-13 Thread Tom Lane
Joshua Berkus  writes:
> I've been going over the notes and email archives from the period
> where Matt O'Connor and I arrived at the current settings.  All of our
> testing was devoted to autovacuum, not autoanalyze.
> Our mistake was assuming that the same formula which worked well for
> vacuum would work well for analyze.

Ah.  Okay, maybe we can agree that that wasn't a good idea.

> So, problem #1 is coming up with a mathematical formula.  My initial target 
> values are in terms of # of rows in the table vs. # of writes before analyze 
> is triggered:

> 1 : 3
> 10 : 5
> 100 : 10
> 1000 : 100
> 100000 : 2000
> 1000000 : 5000
> 10000000 : 25000
> 100000000 : 100000

I don't really see that we need to bend over backwards to exactly match
some data points that you made up out of thin air.  How about
ceil(sqrt(N)) to start with?

regards, tom lane




Re: [HACKERS] Potential autovacuum optimization: new tables

2012-10-13 Thread Jeff Janes
On Sat, Oct 13, 2012 at 12:49 PM, Joshua Berkus  wrote:
>
> So, problem #1 is coming up with a mathematical formula.  My initial target 
> values are in terms of # of rows in the table vs. # of writes before analyze 
> is triggered:
>
> 1 : 3
> 10 : 5
> 100 : 10
> 1000 : 100
> 100000 : 2000
> 1000000 : 5000
> 10000000 : 25000
> 100000000 : 100000
>
>  etc.  So problem #1 is a mathematical formula which gives this kind of 
> curve.  I've tried some solution-seeking software, but I don't know how to 
> use it well enough to get something useful.

That is close to a power law, where the best fit is about "threshold =
1.5 * (rows ** 0.6)":

rows        yours       powerfit
1.00E+00    3.00E+00    1.50E+00
1.00E+01    5.00E+00    5.97E+00
1.00E+02    1.00E+01    2.38E+01
1.00E+03    1.00E+02    9.46E+01
1.00E+05    2.00E+03    1.50E+03
1.00E+06    5.00E+03    5.97E+03
1.00E+07    2.50E+04    2.38E+04
1.00E+08    1.00E+05    9.46E+04

If you want something more natural, reduce the exponent from 0.6 to
0.5 so it becomes the square root.
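As a quick sanity check of the fit, here is a small C sketch evaluating
threshold = 1.5 * rows^0.6 at the data points above (the constants are
just the fitted values, nothing more):

#include <math.h>
#include <stdio.h>

/* Evaluate the fitted power-law rule threshold = 1.5 * rows^0.6. */
int
main(void)
{
    double rows[] = {1, 10, 100, 1000, 1e5, 1e6, 1e7, 1e8};

    for (int i = 0; i < 8; i++)
        printf("%.0e rows -> analyze after ~%.0f writes\n",
               rows[i], 1.5 * pow(rows[i], 0.6));
    return 0;
}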

I have no opinion on the suitability of this, I'm just crunching the
numbers for you.

Cheers,

Jeff




Re: [HACKERS] Optimizer regression

2012-10-13 Thread Jim Nasby

On 10/13/12 2:45 PM, Tom Lane wrote:

BTW, your workaround looks wrong --- you need to constrain the outside
of the left join not the inside, no?


Ugh, yes, you're correct. :(
--
Jim C. Nasby, Database Architect   j...@nasby.net
512.569.9461 (cell) http://jim.nasby.net




Re: [HACKERS] Potential autovacuum optimization: new tables

2012-10-13 Thread Joshua Berkus

> For my part, while that's certainly an interesting idea, it's far more
> complicated than even providing GUCs and the idea is to make PG just
> "do it right", not to offer the user more ways to get it wrong...

Yes, please let's not replace the existing too-simplistic knobs with giant 
complicated gadgets nobody, including us, understands.

For my part, over the last 3 years of consulting and dealing with 
postgresql.conf settings for more than 140 clients:

* only 10% of them ever touched the autoanalyze settings at all
* of the ~~ 14 who did:
   * 1 improved the tuning of their database
   * 3 of them messed up autoanalyze, causing stats and vacuum issues
   * ~~ 10 had no measurable effect

... so you'll understand when I say that I don't think ease of knob-twiddling 
is a priority for autoanalyze design.  In fact, I'd say that removing the knobs 
entirely is a design goal.

I've been going over the notes and email archives from the period where Matt 
O'Connor and I arrived at the current settings.  All of our testing was devoted 
to autovacuum, not autoanalyze.  The threshold+scale_factor design works pretty 
well for autovacuum; it prevents us from constantly vacuuming small tables, or 
large tables with less than 20% dead rows.  And I did extensive testing using 
DBT2 on OSDL to set the current defaults.

Our mistake was assuming that the same formula which worked well for vacuum 
would work well for analyze.  And since the DBT2 database has entirely 
medium-sized tables full of random data, no shortcomings in this thinking 
showed up in the tests.  Since the only counterproposal at the time was to have 
a flat percentage without a threshold, we got the current defaults.
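For context, the current autoanalyze trigger is linear in table size:
it fires once the number of changed tuples exceeds
autovacuum_analyze_threshold + autovacuum_analyze_scale_factor *
reltuples (defaults 50 and 0.1). A quick sketch of what that yields:

#include <stdio.h>

/* Current autoanalyze rule: fire after threshold + scale_factor * reltuples
 * changed tuples (defaults: 50 and 0.1). */
int
main(void)
{
    double reltuples[] = {1, 10, 100, 1000, 1e5, 1e6, 1e7, 1e8};

    for (int i = 0; i < 8; i++)
        printf("%.0e rows -> analyze after %.0f changes\n",
               reltuples[i], 50 + 0.1 * reltuples[i]);
    return 0;
}

Note the contrast with the target values below: the linear rule waits
far too long on tiny tables (50 changes on a 1-row table), while the
targets call for analyzing after just a few writes.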

So, problem #1 is coming up with a mathematical formula.  My initial target 
values are in terms of # of rows in the table vs. # of writes before analyze is 
triggered:

1 : 3
10 : 5
100 : 10
1000 : 100
100000 : 2000
1000000 : 5000
10000000 : 25000
100000000 : 100000

 etc.  So problem #1 is a mathematical formula which gives this kind of 
curve.  I've tried some solution-seeking software, but I don't know how to use 
it well enough to get something useful.

Second problem is actually testing the result.  At this point, we don't have 
any performance tests which create anything other than fairly randomly 
distributed data, which doesn't tend to show up any issues in analyze.  We 
really need a performance test where new data is skewed and unbalanced, 
including tables of radically different sizes, and where we're set up to 
measure the level of inaccuracy in query statistics.  

Hmmm.  Actually, for measuring the inaccuracy, I have some tools thanks to
David Wheeler.  But not to generate the test in the first place.

--Josh Berkus




Re: [HACKERS] Optimizer regression

2012-10-13 Thread Tom Lane
Jim Nasby  writes:
> Just upgraded to 8.4 (I know, I know…) and ran across this. Unfortunately I 
> have no way to test this on 9.x, so I don't know if it's been fixed or not. 
> I'm hoping that someone *cough*Tom*cough* would quickly recognize whether 
> this push into subquery issue has been fixed or not, so I haven't included 
> full details or a test case. I have a work-around so I don't care about this 
> in 8.4, but if this regression still exists it would be nice if it were fixed.

It's hard to be sure with such an incomplete example, but I think 8.4 is
flattening the EXISTS to a semijoin and then getting trapped by join
order constraints into doing something less than optimal for this
particular use-case.  It was this type of example that motivated the
"parameterized path" stuff I've been working on for the past couple
of years.

In short, 9.2 should produce at least as good a plan as 8.3 for this
example, but 8.4 through 9.1 might not.

BTW, your workaround looks wrong --- you need to constrain the outside
of the left join not the inside, no?

regards, tom lane




Re: [HACKERS] getopt() and strdup()

2012-10-13 Thread Phil Sorber
On Sat, Oct 13, 2012 at 3:14 PM, Tom Lane  wrote:
> Phil Sorber  writes:
>> On Wed, Oct 10, 2012 at 7:54 PM, Bruce Momjian  wrote:
>>> Also, do we want to centralize the definition of pg_strdup() in /port,
>>> or leave each module to define it on its own?
>
>> +1 for a centralized definition.
>
> The difficulty with a centralized definition is that it's not clear that
> the error path is or should be *exactly* the same for all these usages.
> I see at least six variants right now.  While some are gratuitous,
> some of them are tied into local conventions of each program.
>
> regards, tom lane

Is it possible to at least standardize on one for the front-end and
one for the back-end? Then we can use those two going forward and sort
out all these other usages and get them to conform to one of the
aforementioned definitions over time.
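For reference, a typical frontend-style pg_strdup is roughly the
following sketch (the exact error reporting differs from program to
program, which is the difficulty Tom describes):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Rough shape of a frontend pg_strdup: duplicate a string or exit.
 * Real programs vary in how they format and report the error. */
static char *
pg_strdup(const char *s)
{
    char *result = strdup(s);

    if (result == NULL)
    {
        fprintf(stderr, "out of memory\n");
        exit(EXIT_FAILURE);
    }
    return result;
}

int
main(int argc, char *argv[])
{
    char *copy = pg_strdup(argc > 1 ? argv[1] : "default");

    printf("%s\n", copy);
    free(copy);
    return 0;
}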




Re: [HACKERS] Adding comments for system table/column names

2012-10-13 Thread Magnus Hagander
On Sat, Oct 13, 2012 at 9:12 PM, Bruce Momjian  wrote:
> On Sat, Oct 13, 2012 at 09:10:05PM +0200, Magnus Hagander wrote:
>> >> > I think the idea of having the short descriptions in SQL and longer ones
>> >> > in SGML is not maintainable.  One idea would be to clip the SQL
>> >> > description to be no longer than a specified number of characters, with
>> >> > proper word break detection.
>> >>
>> >> I prefer overlong entries to machine-truncated ones.  Seeing "Does the
>> >> access method support ordered" for both pg_am.amcanorder and
>> >> pg_am.amcanorderbyop thanks to the choice of truncation point does not
>> >> seem like a win.
>> >>
>> >> We could store a short version in the SGML markup, solely for this
>> >> process to extract.  In its absence, use the documentation-exposed text.
>> >> The extractor could emit a warning when it uses a string longer than N
>> >> characters, serving as a hint to add short-version markup for some
>> >> column.  If that's too hard, though, I'd still prefer overlong entries
>> >> to nothing or to truncated entries.
>> >
>> > I think the simplest solution would be to place SGML comment markers
>> > around text we want to extract from overly-long SGML descriptions.
>> > Descriptions without SGML comments would be extracted unchanged.
>>
>> Not sure how convenient that is, but it would certainly work. And it
>> would be a lot better than cutting off at word or character limits or
>> anything like that.
>
> Well, I figure we have to do something, because people would like those
> descriptions, and recording them in two places is too much overhead.

Agreed, this is definitely better than the other options there. And
the best suggestion so far.


-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/




Re: [HACKERS] getopt() and strdup()

2012-10-13 Thread Tom Lane
Phil Sorber  writes:
> On Wed, Oct 10, 2012 at 7:54 PM, Bruce Momjian  wrote:
>> Also, do we want to centralize the definition of pg_strdup() in /port,
>> or leave each module to define it on its own?

> +1 for a centralized definition.

The difficulty with a centralized definition is that it's not clear that
the error path is or should be *exactly* the same for all these usages.
I see at least six variants right now.  While some are gratuitous,
some of them are tied into local conventions of each program.

regards, tom lane




Re: [HACKERS] Adding comments for system table/column names

2012-10-13 Thread Bruce Momjian
On Sat, Oct 13, 2012 at 09:10:05PM +0200, Magnus Hagander wrote:
> >> > I think the idea of having the short descriptions in SQL and longer ones
> >> > in SGML is not maintainable.  One idea would be to clip the SQL
> >> > description to be no longer than a specified number of characters, with
> >> > proper word break detection.
> >>
> >> I prefer overlong entries to machine-truncated ones.  Seeing "Does the
> >> access method support ordered" for both pg_am.amcanorder and
> >> pg_am.amcanorderbyop thanks to the choice of truncation point does not
> >> seem like a win.
> >>
> >> We could store a short version in the SGML markup, solely for this
> >> process to extract.  In its absence, use the documentation-exposed text.
> >> The extractor could emit a warning when it uses a string longer than N
> >> characters, serving as a hint to add short-version markup for some
> >> column.  If that's too hard, though, I'd still prefer overlong entries
> >> to nothing or to truncated entries.
> >
> > I think the simplest solution would be to place SGML comment markers
> > around text we want to extract from overly-long SGML descriptions.
> > Descriptions without SGML comments would be extracted unchanged.
> 
> Not sure how convenient that is, but it would certainly work. And it
> would be a lot better than cutting off at word or character limits or
> anything like that.

Well, I figure we have to do something, because people would like those
descriptions, and recording them in two places is too much overhead.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +




Re: [HACKERS] Adding comments for system table/column names

2012-10-13 Thread Magnus Hagander
On Sat, Oct 13, 2012 at 5:17 PM, Bruce Momjian  wrote:
> On Sat, Oct 13, 2012 at 07:03:49AM -0400, Noah Misch wrote:
>> On Fri, Oct 12, 2012 at 01:29:21PM -0400, Bruce Momjian wrote:
>> > There was a thread in January of 2012 where we discussed the idea of
>> > pulling system table/column name descriptions from the SGML docs and
>> > creating SQL comments for them:
>> >
>> > http://archives.postgresql.org/pgsql-hackers/2012-01/msg00837.php
>> >
>> > Magnus didn't seem to like the idea:
>> >
>> > http://archives.postgresql.org/pgsql-hackers/2012-01/msg00848.php
>> >
>> > Well, I'd expect some of those columns to get (at least over time)
>> > significantly more detailed information than they have now. Certainly
>> > more than you'd put in comments in the catalogs. And having some sort
>> > of combination there seems to overcomplicate things...
>> >
>> > I think the idea of having the short descriptions in SQL and longer ones
>> > in SGML is not maintainable.  One idea would be to clip the SQL
>> > description to be no longer than a specified number of characters, with
>> > proper word break detection.
>>
>> I prefer overlong entries to machine-truncated ones.  Seeing "Does the access
>> method support ordered" for both pg_am.amcanorder and pg_am.amcanorderbyop
>> thanks to the choice of truncation point does not seem like a win.
>>
>> We could store a short version in the SGML markup, solely for this process to
>> extract.  In its absence, use the documentation-exposed text. The extractor
>> could emit a warning when it uses a string longer than N characters, serving
>> as a hint to add short-version markup for some column.  If that's too hard,
>> though, I'd still prefer overlong entries to nothing or to truncated entries.
>
> I think the simplest solution would be to place SGML comment markers
> around text we want to extract from overly-long SGML descriptions.
> Descriptions without SGML comments would be extracted unchanged.

Not sure how convenient that is, but it would certainly work. And it
would be a lot better than cutting off at word or character limits or
anything like that.

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/




Re: [HACKERS] getopt() and strdup()

2012-10-13 Thread Phil Sorber
On Wed, Oct 10, 2012 at 7:54 PM, Bruce Momjian  wrote:
> On Mon, Oct  8, 2012 at 09:03:37PM -0400, Bruce Momjian wrote:
>> On Mon, Oct  8, 2012 at 04:33:29PM -0400, Tom Lane wrote:
>> > Bruce Momjian  writes:
>> > > A while ago I noticed that in some places we strdup/pg_strdup() optarg
>> > > strings from getopt(), and in some places we don't.
>> >
>> > > If we needed the strdup(), the missing cases should generate errors.  If
>> > > we don't need them, the strdup() is unnecessary, and research confirms
>> > > they are unnecessary.  Should we remove the extra strdup/pg_strdup()
>> > > calls, for consistency.
>> >
>> > What research?  Given the number of different ways argv[] is handled
>> > on different platforms (cf ps_status.c), I am very unwilling to trust
>> > that it's safe to hang onto an argv string for long without strdup'ing
>> > it.
>> >
>> > > I think we might have had old platforms that required it, but none are
>> > > still supported today.
>> >
>> > And what's your grounds for stating that?  All the alternatives in
>> > ps_status.c are still live code AFAICS.
>> >
>> > My feeling is it's more likely to be a good idea to be adding strdup's
>> > than removing them.
>>
>> Well, what we have now is either wrong or over-kill --- I don't know for
>> sure which.
>
> OK, I have developed the attached patch to add strdup/pg_strdup() calls
> to all saving of getopt optarg arguments.
>
> Also, do we want to centralize the definition of pg_strdup() in /port,
> or leave each module to define it on its own?   I see pg_strdup() defined
> in these modules:
>
> /pgtop/contrib/oid2name
> /pgtop/contrib/pgbench
> /pgtop/contrib/pg_upgrade
> /pgtop/src/bin/initdb
> /pgtop/src/bin/pg_basebackup
> /pgtop/src/bin/pg_ctl
> /pgtop/src/bin/pg_dump
> /pgtop/src/bin/psql
> /pgtop/src/bin/scripts
>

+1 for a centralized definition.

> --
>   Bruce Momjian  http://momjian.us
>   EnterpriseDB http://enterprisedb.com
>
>   + It's impossible for everything to be true. +
>
>




Re: [HACKERS] enhanced error fields

2012-10-13 Thread Peter Geoghegan
On 12 October 2012 20:27, Pavel Stehule  wrote:
> I understand your request, but I don't think this request is 100%
> valid. Check violation is a good example. Constraint names are
> "optional" in PostgreSQL, so we cannot require constraint_name. One of
> the first prototypes used a generated name for NULL constraints, and it
> was rejected because it can be confusing: a user doesn't find these
> names in the catalogue. I agree with that now - it is better to show
> nothing than to show some phantom. Moreover, the design of this feature
> in SQL/PSM and ANSI SQL is not too strict. There is no exception when
> you ask for an unfilled value - you get an empty string instead.

That's beside the point. NOT NULL constraints are not catalogued (for
now), so sure, the only reasonable thing to do is to have an empty
string in that case. Since no one expects to be able to get the name
of a NOT NULL constraint anyway, that isn't a problem.

Once again, the problem, in particular, is that there is no
well-defined set of rules that client code can follow to be sure that
a field name they're interested in will be available at all. In all
revisions thus far, you have seemingly arbitrarily decided to not add
some some fields in some places. I mentioned already that some
ERRCODE_CHECK_VIOLATION sites didn't name a constraint - other places
don't name a table when one is available, as with some
ERRCODE_NOT_NULL_VIOLATION sites. These fields need to be added, and
what's more, the rules for where they need to be added need to be
coherently described. So, that's about 3 sentences of extra
documentation, saying to both users and hackers (at the very least):

* NOT NULL constraints won't have a CONSTRAINT_NAME available, since
they aren't catalogued.

* Domains won't have a TABLE_NAME available, even though there may
actually be a table name associated with the error.

Have I missed one?

That all seems pretty simple to me, and I don't see what the problem is.

> And although we don't checking consistence of exception fields, I
> think so this patch is very usable. I have a three text fields now:
> message, detail, hint - and I can do same error, that you are
> described. This patch doesn't change it. But it creates a few new
> basic variables (for all possible exceptions), that can be used for
> simplification of error processing. It is not silver bullet. And it is
> not C++.

The simplification of error processing is that they can now reliably
get these fields - they don't have to use some kludge like parsing a
(possibly localised) error message to look for a check constraint
name. I'm not asking you to add run-time verification - I'm asking you
to institute a coding standard, that is limited to backend code, and
to document what assumptions applications can make.

To my mind, if the user cannot rely on the fields accurately
indicating error conditions according to some coherent set of rules
(in actuality, one or two simple and obvious exceptions, only one of
which is slightly surprising), then this patch is not only not
helpful, it's harmful. If they're only available according to some
completely arbitrary and obscure criteria, (like the fact that you've
included one ERRCODE_CHECK_VIOLATION site but not another), that's a
footgun.

> Creating some new tool for checking consistency of exceptions
> is not a good way - and you can never ensure consistency of custom
> exceptions.

Pointing out that some extension author, or pl/pgsql function, could
in principle ignore the documentation I'm asking you to write and not
supply a constraint, while raising their own, say, magical
ERRCODE_CHECK_VIOLATION is a bit of a cop-out. I should also mention
that things like ERRCODE_INTERNAL_ERROR are naturally going to be a
bit fuzzy, and that's fine. Nobody is ever going to expect those
anyway.

> yes, CONSTRAINT_NAME should be used in this case. TABLE_NAME may or
> may not be empty, but this information is not available, because
> some facts can be changed in the rewriter stage.

Right, or because you could do this and get an exception:

select 'foo'::bar_domain;

> I can agree that some documentation is necessary (maybe some table) -
> currently we have not described the context of all errors. The rest
> needs some consensus - or a solution - our syntax allows some
> variations that are unsupported in ANSI SQL, and then we either have
> to use a generated name or not show this information. That is the
> basic and most important question.

Can you give an example of when a generated name might be used, beyond
the example you've already given (that is, NULL constraints)? I'm not
completely opposed to the idea of a generated name. I think that it's
very much a secondary issue, though.

> So first we have to find reply
> to following question: this patch should to follow our current
> implementation of exceptions or we modify exceptions to be more close
> to ANSI/SQL (and we have to modify model)?

What does the option of following the SQL 

Re: [HACKERS] pg_stat_lwlocks view - lwlocks statistics, round 2

2012-10-13 Thread Fujii Masao
On Sat, Oct 13, 2012 at 11:34 PM, Satoshi Nagayasu  wrote:
> Hi,
>
> 2012/10/13 23:05, Satoshi Nagayasu wrote:
>> Hi all,
>>
>> I have fixed my previous patch for pg_stat_lwlocks view, and
>> as Josh commented, it now supports local and global (shared)
>> statistics in the same system view.
>
> Sorry, I found my mistakes. New fixed one is attached to this mail.

Thanks for revising the patch. Here are the comments:

The document needs to be updated.

The patch caused the following compile warnings in my machine.

pgstat.c:1357: warning: no previous prototype for 'pgstat_report_lwlockstat'
postgres.c:3922: warning: implicit declaration of function
'pgstat_report_lwlockstat'
pgstatfuncs.c:1854: warning: no previous prototype for 'pg_stat_reset_lwlocks'

In my test, this patch caused a measurable performance overhead.
I created the test database by pgbench -s10 and ran pgbench -c8 -j8 -T60 -S.
Results are:

[HEAD]
number of transactions actually processed: 1401369
tps = 23351.375811 (including connections establishing)
tps = 23355.900043 (excluding connections establishing)

[PATCH]
number of transactions actually processed: 1401369
tps = 23351.375811 (including connections establishing)
tps = 23355.900043 (excluding connections establishing)

So I think that lwlock usage should be tracked only when trace_lwlocks
is enabled, so that a user who is not interested in lwlock usage can
avoid the performance overhead.

As far as I read the patch, only lwlock usage by backends is collected.
Why isn't the lwlock usage of autovacuum workers and auxiliary
processes collected as well?

Regards,

-- 
Fujii Masao




Re: [HACKERS] Extend argument of OAT_POST_CREATE

2012-10-13 Thread Kohei KaiGai
Thanks for your reviews.

2012/10/10 Alvaro Herrera :
> Kohei KaiGai wrote:
>> The attached patch adds an argument to the OAT_POST_CREATE hook, to
>> inform extensions of the context of the object creation. It allows
>> extensions to know whether or not the new object was created
>> indirectly, apart from the user's operations.
>
> Can we add Assert(!is_internal) to the ProcedureRelationId case in
> sepgsql_object_access() too?  I don't see any caller that would set it
> true anywhere, but maybe you have a good reason for omitting it.
>
No, I just forgot to add the Assert() here.

> I'm not clear on what's sepgsql_relation_setattr for; it doesn't seem to
> be called anywhere (other than sepgsql_relation_setattr_extra, but
> that's static, so why isn't sepgsql_relation_setattr also static?).  But
> I notice that it calls sepgsql_index_modify without first checking for
> the toast namespace like the other callers do.  Is this okay or an
> oversight?
>
I expect sepgsql_relation_setattr to also be called for the ALTER TABLE
command, to check the privilege to modify properties of the target table.
The entrypoint of the object_access_hook is in sepgsql/hooks.c, so this
function was declared non-static for (near) future usage.
As for toast relations/indexes: as the default permission mechanism
does, sepgsql treats toast as purely internal, so no security label is
assigned and no permission checks are applied.
(Also, please see check_relation_privileges in sepgsql/dml.c.
 It has a hardwired rule that forbids referencing a toast relation from
 regular SQL, because of its internal nature.)

> I admit the new RELKIND_INDEX cases in various places make for strange
> flow.  Not sure how it can be improved though.
>
The idea is not so complicated. This code considers an index to be a
property of a certain table, like an individual field of pg_class.
A relation is the most complex object in PostgreSQL: its properties
live not only in pg_class but also in extra catalogs such as
pg_trigger, pg_rewrite and so on.
The default permission check tests ownership of the table, not of the
triggers, rules or indexes, when a user tries to alter them.
There is no good reason for sepgsql to invent its own definition of a
relation, so I just followed this convention.

> I didn't find anything wrong with the changes to src/backend.  One thing
> that I noticed is that when bootstrapping, all relation creation is
> considered internal.  I am sure this is okay fo the normal case, but I
> wonder if a malicious superuser could get a hold of things that he
> shouldn't by starting a bootstrapping backend and run relation creation
> there.
>
Yes, you are right. Even though it is harmless right now, because there
is no way to load an extension prior to initdb, it would cause confusion
if some built-in feature used the object access hook. The "internal"
flag means the given SQL statement does not itself intend the creation
of the target object, but a bootstrap command definitely intends to
create the initial objects.

On the other hand, I am not worried about the scenario of a malicious
superuser running initdb, because sepgsql assumes the initial database
is set up in an environment without anything malicious.
Preventing malicious users from creating an initial database should be
the responsibility of the operating system and its policy.
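As a rough sketch of the intended semantics (the struct and field names
here follow the description above and are illustrative, not verified
against the final code):

#include <stdbool.h>
#include <stdio.h>

/* Illustrative context passed to an OAT_POST_CREATE-style hook: it tells
 * the extension whether the object was created as an internal side
 * effect (e.g. a toast table) rather than directly by the user. */
typedef struct ObjectAccessPostCreate
{
    bool is_internal;
} ObjectAccessPostCreate;

static void
sepgsql_like_post_create(const char *objtype,
                         const ObjectAccessPostCreate *pc)
{
    if (pc->is_internal)
    {
        printf("%s: internal creation, skip labeling/permission checks\n",
               objtype);
        return;
    }
    printf("%s: user-driven creation, assign label and check permissions\n",
           objtype);
}

int
main(void)
{
    ObjectAccessPostCreate toast = { .is_internal = true };
    ObjectAccessPostCreate table = { .is_internal = false };

    sepgsql_like_post_create("toast table", &toast);
    sepgsql_like_post_create("user table", &table);
    return 0;
}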

Thanks,
-- 
KaiGai Kohei 


sepgsql-v9.3-extend-post-create-hook.v2.patch
Description: Binary data



Re: [HACKERS] proposal - assign result of query to psql variable

2012-10-13 Thread Pavel Stehule
2012/10/13 Shigeru HANADA :
> Hi Pavel,
>
>
> On Sat, Oct 13, 2012 at 12:58 AM, Pavel Stehule  wrote:
>>
>> * merge Shigeru's doc patch
>> * rename psql regression test from "psql" to "psql_cmd"
>
>
> Those changes seem good.
>
> Besides, I found an error message which doesn't end with '\n' in common.c.
> In general, messages passed to psql_error end with '\n', unless additional
> information follows.  Please see the attached patch for the additional change.
>
> After you determine whether it's ok or unnecessary, I'll mark this patch as
> "Ready for committer".
>

It is OK, thank you.

Pavel


> Regards,
> --
> Shigeru HANADA




Re: [HACKERS] [BUGS] BUG #7534: walreceiver takes long time to detect n/w breakdown

2012-10-13 Thread Fujii Masao
On Thu, Oct 11, 2012 at 11:52 PM, Heikki Linnakangas  wrote:
> On 11.10.2012 13:17, Amit Kapila wrote:
>>>
>>> How does this look now?
>>
>>
>> The Patch is fine and test results are also fine.
>
>
> Ok, thanks. Committed.

I found one typo. The attached patch fixes that typo.

ISTM you need to update protocol.sgml because you added
the field 'replyRequested' to WalSndrMessage and StandbyReplyMessage.

Is it worth adding the same mechanism (sending back the reply immediately
if walsender requests one) to pg_basebackup and pg_receivexlog?

Regards,

-- 
Fujii Masao


typo.patch
Description: Binary data



[HACKERS] Optimizer regression

2012-10-13 Thread Jim Nasby
Just upgraded to 8.4 (I know, I know…) and ran across this. Unfortunately I 
have no way to test this on 9.x, so I don't know if it's been fixed or not. I'm 
hoping that someone *cough*Tom*cough* would quickly recognize whether this push 
into subquery issue has been fixed or not, so I haven't included full details 
or a test case. I have a work-around so I don't care about this in 8.4, but if 
this regression still exists it would be nice if it were fixed.

CREATE VIEW loans.payday AS
    SELECT * FROM loans WHERE loan_type_cd IN ( 'payday', 'other' );
CREATE VIEW loans.payday_defaulted AS
    SELECT * FROM loans.payday p
    WHERE EXISTS( SELECT * FROM loan_statuses ls
                  WHERE ls.loan_id = p.id AND ls.status_cd = 'in_default' );

This query is fast:

SELECT defaulted_then_paid_loans
   , ( SELECT count(*)
 FROM loans.payday
 WHERE ROW( customer_id, status_cd ) = ROW( d.customer_id, d.status_cd )
   AND id > coalesce( max_defaulted_loan_id, 0 )
  ) AS number_of_loans_since_last_default
FROM ( SELECT p.customer_id, p.status_cd, count( d.id ) AS defaulted_then_paid_loans
  , max( d.id ) AS max_defaulted_loan_id
  FROM loans.payday p
LEFT JOIN loans.payday_defaulted d USING( id )
  WHERE d.customer_id = ?
  GROUP BY p.customer_id, p.status_cd
  ) d
WHERE status_cd = 'paid_off';

This query is not (but was fine on 8.3):
SELECT defaulted_then_paid_loans
   , ( SELECT count(*)
 FROM loans.payday
 WHERE ROW( customer_id, status_cd ) = ROW( d.customer_id, d.status_cd )
   AND id > coalesce( max_defaulted_loan_id, 0 )
  ) AS number_of_loans_since_last_default
FROM ( SELECT p.customer_id, p.status_cd, count( d.id ) AS defaulted_then_paid_loans
  , max( d.id ) AS max_defaulted_loan_id
  FROM loans.payday p
LEFT JOIN loans.payday_defaulted d USING( id )
  GROUP BY p.customer_id, p.status_cd
  ) d
WHERE status_cd = 'paid_off'
  AND customer_id = ?
;

Plan from the "bad" query on 8.3:

QUERY PLAN  
 
--
 Subquery Scan d  (cost=0.00..438.00 rows=2 width=162) (actual 
time=4883.286..4883.286 rows=1 loops=1)
   ->  GroupAggregate  (cost=0.00..421.91 rows=2 width=17) (actual 
time=4883.181..4883.181 rows=1 loops=1)
 ->  Nested Loop Left Join  (cost=0.00..421.75 rows=13 width=17) 
(actual time=314.426..4883.082 rows=31 loops=1)
   ->  Index Scan using loans_m13 on loans  (cost=0.00..36.72 
rows=13 width=17) (actual time=52.209..561.240 rows=31 loops=1)
 Index Cond: ((customer_id = 10287151) AND 
((status_cd)::text = 'paid_off'::text))
 Filter: ((loan_type_cd)::text = ANY 
('{payday,cso}'::text[]))
   ->  Index Scan using loans_pkey on loans  (cost=0.00..29.61 
rows=1 width=4) (actual time=139.410..139.410 rows=0 loops=31)
 Index Cond: (cnu.loans.id = cnu.loans.id)
 Filter: (((cnu.loans.loan_type_cd)::text = ANY 
('{payday,cso}'::text[])) AND (subplan))
 SubPlan
   ->  Index Scan using loan_status_u1 on loan_statuses ls  
(cost=0.00..23.43 rows=1 width=88) (actual time=109.521..109.521 rows=0 
loops=31)
 Index Cond: (loan_id = $3)
 Filter: ((status_cd)::text = 'in_default'::text)
   SubPlan
 ->  Aggregate  (cost=8.03..8.04 rows=1 width=0) (actual time=0.100..0.100 
rows=1 loops=1)
   ->  Index Scan using loans_m13 on loans  (cost=0.00..8.02 rows=1 
width=0) (actual time=0.041..0.084 rows=31 loops=1)
 Index Cond: ((customer_id = $0) AND ((status_cd)::text = 
($1)::text))
 Filter: (((loan_type_cd)::text = ANY ('{payday,cso}'::text[])) 
AND (id > COALESCE($2, 0)))
 Total runtime: 4883.439 ms
(19 rows)

And from 8.4…
                                              QUERY PLAN
------------------------------------------------------------------------------------------------------
 Subquery Scan d  (cost=3003014.53..3027074.69 rows=2 width=162)
   ->  GroupAggregate  (cost=3003014.53..3027059.89 rows=2 width=17)
         ->  Hash Left Join  (cost=3003014.53..3027059.73 rows=13 width=17)
               Hash Cond: (cnu.loans.id = cnu.loans.id)
               ->  Index Scan using loans_m13 on loans  (cost=0.00..36.01 rows=13 width=17)
                     Index Cond: ((customer_id = 10287151) AND ((status_cd)::text = 'paid_off'::text))
                     Filter: ((loan_type_cd)::text = ANY ('{payday,cso}'::text[]))
               ->  Hash  (cost=2902187.44..2902187.44 rows=6

Re: [HACKERS] Successor of MD5 authentication, let's use SCRAM

2012-10-13 Thread Stephen Frost
* Andrew Dunstan (and...@dunslane.net) wrote:
> Does Debian create a self-signed certificate? If so, count me
> as unimpressed. I'd argue that's worse than doing nothing. Here's
> what the docs say (rightly) about such certificates:

Self-signed certificates do provide for in-transit encryption.  I agree
that they don't provide a guarantee of the remote side being who you
think it is, but setting up a MITM (man-in-the-middle) attack is more
difficult than eavesdropping on a connection and more likely to be
noticed.

You can, of course, set up your own CA and sign certs off of it under
Debian as well.  Unfortunately, most end users aren't going to do that.
Many of those same users do benefit from at least having an encrypted
connection when it's all done for them.
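For what it's worth, it's easy to confirm the in-transit encryption from
SQL with the contrib sslinfo module.  A minimal sketch, assuming sslinfo
is installed on the server:

    -- CREATE EXTENSION works on 9.1 and later; older releases load
    -- the contrib SQL script instead.
    CREATE EXTENSION sslinfo;

    -- ssl_is_used() returns true when the current connection is
    -- encrypted, even if the server certificate is self-signed;
    -- ssl_version() and ssl_cipher() show what was negotiated.
    SELECT ssl_is_used(), ssl_version(), ssl_cipher();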

Thanks,

Stephen




Re: [HACKERS] Adding comments for system table/column names

2012-10-13 Thread Bruce Momjian
On Sat, Oct 13, 2012 at 07:03:49AM -0400, Noah Misch wrote:
> On Fri, Oct 12, 2012 at 01:29:21PM -0400, Bruce Momjian wrote:
> > There was a thread in January of 2012 where we discussed the idea of
> > pulling system table/column name descriptions from the SGML docs and
> > creating SQL comments for them:
> > 
> > http://archives.postgresql.org/pgsql-hackers/2012-01/msg00837.php
> > 
> > Magnus didn't seem to like the idea:
> > 
> > http://archives.postgresql.org/pgsql-hackers/2012-01/msg00848.php
> > 
> > Well, I'd expect some of those columns to get (at least over time)
> > significantly more detailed information than they have now. Certainly
> > more than you'd put in comments in the catalogs. And having some sort
> > of combination there seems to overcomplicate things...
> > 
> > I think the idea of having the short descriptions in SQL and longer ones
> > in SGML is not maintainable.  One idea would be to clip the SQL
> > description to be no longer than a specified number of characters, with
> > proper word break detection.
> 
> I prefer overlong entries to machine-truncated ones.  Seeing "Does the access
> method support ordered" for both pg_am.amcanorder and pg_am.amcanorderbyop
> thanks to the choice of truncation point does not seem like a win.
> 
> We could store a short version in the SGML markup, solely for this process to
> extract.  In its absence, use the documentation-exposed text. The extractor
> could emit a warning when it uses a string longer than N characters, serving
> as a hint to add short-version markup for some column.  If that's too hard,
> though, I'd still prefer overlong entries to nothing or to truncated entries.

I think the simplest solution would be to place SGML comment markers
around text we want to extract from overly-long SGML descriptions. 
Descriptions without SGML comments would be extracted unchanged.
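
For example, the extractor might emit something along these lines for
pg_am (a hypothetical sketch; the exact wording would be whatever the
SGML source contains):

    -- Generated from doc/src/sgml/catalogs.sgml; not hand-maintained.
    COMMENT ON COLUMN pg_catalog.pg_am.amcanorder IS
        'Does the access method support ordered scans sorted by the indexed column''s value?';
    COMMENT ON COLUMN pg_catalog.pg_am.amcanorderbyop IS
        'Does the access method support ordered scans sorted by the result of an operator on the indexed column?';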

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +




[HACKERS] Bug in -c CLI option of pg_dump/pg_restore

2012-10-13 Thread Guillaume Lelarge
Hi,

One of my colleagues, Jehan-Guillaume de Rorthais, found a weird
behaviour of the "-c" command line option in the pg_restore tool while
giving a training session. Here are the steps he followed:

createdb foo
pg_dump -Fc foo > foo.dump
createdb bar
pg_restore -c -d bar foo.dump

bar contains the same objects as foo (nothing unusual here), but... foo
is no longer present. Actually, if you use the "-c" command line option,
you get a "DROP DATABASE" statement. To me, it feels like a quite
terrible bug.

It's quite easy to reproduce. Just create a database, and use pg_dump
with the "-c" option:

createdb foo
pg_dump -s -c foo | grep DATABASE

and you end up with this:

DROP DATABASE foo;

I tried everything from 8.3 through 9.2, and only 9.2 has this behaviour.

You'll find attached a patch that fixes this issue. Another colleague,
Gilles Darold, tried it in every possible way, and it works. I'm not
sure the test I added makes it a very good patch, but it fixes the bug.

Regards.


-- 
Guillaume
http://blog.guillaume.lelarge.info
http://www.dalibo.com
diff --git a/src/bin/pg_dump/pg_backup_archiver.c b/src/bin/pg_dump/pg_backup_archiver.c
index c7ef9a6..d1bd454 100644
--- a/src/bin/pg_dump/pg_backup_archiver.c
+++ b/src/bin/pg_dump/pg_backup_archiver.c
@@ -456,8 +456,8 @@ RestoreArchive(Archive *AHX)
 		{
 			AH->currentTE = te;
 
-			/* We want anything that's selected and has a dropStmt */
-			if (((te->reqs & (REQ_SCHEMA | REQ_DATA)) != 0) && te->dropStmt)
+			/* We want anything but database that's selected and has a dropStmt */
+			if (((te->reqs & (REQ_SCHEMA | REQ_DATA)) != 0) && strcmp(te->desc, "DATABASE") != 0 && te->dropStmt)
 			{
 ahlog(AH, 1, "dropping %s %s\n", te->desc, te->tag);
 /* Select owner and schema as necessary */



Re: [HACKERS] pg_stat_lwlocks view - lwlocks statistics, round 2

2012-10-13 Thread Satoshi Nagayasu
Hi,

2012/10/13 23:05, Satoshi Nagayasu wrote:
> Hi all,
> 
> I have fixed my previous patch for pg_stat_lwlocks view, and
> as Josh commented, it now supports local and global (shared)
> statistics in the same system view.

Sorry, I found some mistakes in it. A fixed version is attached to this mail.

Regards,

> 
> Local statistics mean the counters are only effective in the same
> session; shared ones are shared across the entire cluster.
> 
> Also, the global statistics are collected via the pgstat collector
> process, as other statistics are.
> 
> Now, the global statistics struct has been split into two parts for
> different uses: bgwriter stats and lwlock stats.
> 
> Therefore, calling pg_stat_reset_shared('bgwriter') or
> pg_stat_reset_shared('lwlocks') resets only the dedicated struct,
> not the entire PgStat_GlobalStats.
> 
> Comments and review are always welcome.
> 
> Regards,
> 
> --
> postgres=# SELECT * FROM pg_stat_lwlocks;
> [... 36 rows of output snipped; identical to the table in the original
> message below ...]

[HACKERS] pg_stat_lwlocks view - lwlocks statistics, round 2

2012-10-13 Thread Satoshi Nagayasu
Hi all,

I have fixed my previous patch for pg_stat_lwlocks view, and
as Josh commented, it now supports local and global (shared)
statistics in the same system view.

Local statistics mean the counters are only effective in the same
session; shared ones are shared across the entire cluster.

Also, the global statistics are collected via the pgstat collector
process, as other statistics are.

Now, the global statistics struct has been split into two parts for
different uses: bgwriter stats and lwlock stats.

Therefore, calling pg_stat_reset_shared('bgwriter') or
pg_stat_reset_shared('lwlocks') resets only the dedicated struct,
not the entire PgStat_GlobalStats.

Comments and review are always welcome.

Regards,

--
postgres=# SELECT * FROM pg_stat_lwlocks;
 lwlockid | local_calls | local_waits | local_time_ms | shared_calls | shared_waits | shared_time_ms
----------+-------------+-------------+---------------+--------------+--------------+---------------
        0 |           0 |           0 |             0 |         4268 |            0 |              0
        1 |          43 |           0 |             0 |          387 |            0 |              0
        2 |           0 |           0 |             0 |           19 |            0 |              0
        3 |           0 |           0 |             0 |           28 |            0 |              0
        4 |           3 |           0 |             0 |          315 |            0 |              0
        5 |           0 |           0 |             0 |           24 |            0 |              0
        6 |           1 |           0 |             0 |           76 |            0 |              0
        7 |           0 |           0 |             0 |        16919 |            0 |              0
        8 |           0 |           0 |             0 |            0 |            0 |              0
        9 |           0 |           0 |             0 |            0 |            0 |              0
       10 |           0 |           0 |             0 |            0 |            0 |              0
       11 |           0 |           0 |             0 |           75 |            0 |              0
       12 |           0 |           0 |             0 |            0 |            0 |              0
       13 |           0 |           0 |             0 |            0 |            0 |              0
       14 |           0 |           0 |             0 |            0 |            0 |              0
       15 |           0 |           0 |             0 |            0 |            0 |              0
       16 |           0 |           0 |             0 |            0 |            0 |              0
       17 |           0 |           0 |             0 |        61451 |            6 |              0
       18 |           0 |           0 |             0 |            0 |            0 |              0
       19 |           0 |           0 |             0 |            0 |            0 |              0
       20 |           0 |           0 |             0 |            0 |            0 |              0
       21 |           1 |           0 |             0 |            9 |            0 |              0
       22 |           0 |           0 |             0 |            0 |            0 |              0
       23 |           0 |           0 |             0 |            0 |            0 |              0
       24 |           0 |           0 |             0 |            1 |            0 |              0
       25 |           0 |           0 |             0 |            0 |            0 |              0
       26 |           2 |           0 |             0 |           18 |            0 |              0
       27 |           0 |           0 |             0 |            0 |            0 |              0
       28 |           0 |           0 |             0 |            0 |            0 |              0
       29 |           0 |           0 |             0 |            0 |            0 |              0
       30 |           0 |           0 |             0 |            0 |            0 |              0
       31 |           0 |           0 |             0 |            0 |            0 |              0
       32 |           0 |           0 |             0 |            0 |            0 |              0
       33 |           4 |           0 |             0 |       207953 |            0 |              0
       50 |           8 |           0 |             0 |        33388 |            0 |              0
       67 |           0 |           0 |             0 |            0 |            0 |              0
(36 rows)

postgres=#
--
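
For example, with this view one can pick out just the contended locks,
and clear the shared counters between benchmark runs.  A small sketch
against the view and reset function described above:

    -- Locks that actually saw contention anywhere in the cluster.
    SELECT lwlockid, shared_calls, shared_waits, shared_time_ms
      FROM pg_stat_lwlocks
     WHERE shared_waits > 0
     ORDER BY shared_waits DESC;

    -- Reset only the shared lwlock counters, leaving the bgwriter
    -- statistics untouched.
    SELECT pg_stat_reset_shared('lwlocks');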


2012/06/26 21:11, Satoshi Nagayasu wrote:
> Hi all,
> 
> I've modified the pg_stat_lwlocks patch to be able to work with
> the latest PostgreSQL Git code.
> 
> This patch provides:
>pg_stat_lwlocks   New system view 

Re: [HACKERS] Successor of MD5 authentication, let's use SCRAM

2012-10-13 Thread Andrew Dunstan


On 10/13/2012 01:55 AM, Darren Duncan wrote:
> John R Pierce wrote:
>> On 10/12/12 9:00 PM, Darren Duncan wrote:
>>> And now we're migrating to Red Hat for the production launch, using
>>> the http://www.postgresql.org/download/linux/redhat/ packages for
>>> Postgres 9.1, and these do *not* include the SSL.
>>
>> hmm?  I'm using the 9.1 for CentOS 6 (RHEL 6) and libpq.so certainly
>> has libssl3.so, etc. as references.  Ditto the postmaster/postgres
>> main program has libssl3.so too.  Maybe your certificate chains
>> don't come pre-built, I dunno; I haven't dealt with that end of things.
>
> Okay, I'll have to look into that.  All I know is that out-of-the-box
> SSL just worked on Debian and it didn't on Red Hat; trying to enable
> SSL on out-of-the-box Postgres on Red Hat gave a fatal error on server
> start, at the very least needing the installation of SSL keys/certs,
> which I didn't have to do on Debian. -- Darren Duncan

Of course RedHat RPMs are built with SSL.

Does Debian create a self-signed certificate? If so, count me as
unimpressed. I'd argue that's worse than doing nothing. Here's what the
docs say (rightly) about such certificates:


   A self-signed certificate can be used for testing, but a certificate
   signed by a certificate authority (CA) (either one of the global CAs
   or a local one) should be used in production so that clients can
   verify the server's identity. If all the clients are local to the
   organization, using a local CA is recommended.

Creation of properly signed certificates is entirely outside the scope 
of Postgres, and I would not expect packagers to do it. I have created a 
local CA for RedHat and friends any number of times, and created signed 
certs for Postgres, both server and client, using them. It's not 
terribly hard.


cheers

andrew






Re: [HACKERS] Adding comments for system table/column names

2012-10-13 Thread Noah Misch
On Fri, Oct 12, 2012 at 01:29:21PM -0400, Bruce Momjian wrote:
> There was a thread in January of 2012 where we discussed the idea of
> pulling system table/column name descriptions from the SGML docs and
> creating SQL comments for them:
> 
>   http://archives.postgresql.org/pgsql-hackers/2012-01/msg00837.php
> 
> Magnus didn't seem to like the idea:
> 
>   http://archives.postgresql.org/pgsql-hackers/2012-01/msg00848.php
> 
>   Well, I'd expect some of those columns to get (at least over time)
>   significantly more detailed information than they have now. Certainly
>   more than you'd put in comments in the catalogs. And having some sort
>   of combination there seems to overcomplicate things...
> 
> I think the idea of having the short descriptions in SQL and longer ones
> in SGML is not maintainable.  One idea would be to clip the SQL
> description to be no longer than a specified number of characters, with
> proper word break detection.

I prefer overlong entries to machine-truncated ones.  Seeing "Does the access
method support ordered" for both pg_am.amcanorder and pg_am.amcanorderbyop
thanks to the choice of truncation point does not seem like a win.

We could store a short version in the SGML markup, solely for this process to
extract.  In its absence, use the documentation-exposed text. The extractor
could emit a warning when it uses a string longer than N characters, serving
as a hint to add short-version markup for some column.  If that's too hard,
though, I'd still prefer overlong entries to nothing or to truncated entries.
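
Whichever way the strings are produced, the payoff is that they become
visible right in psql.  A sketch of how the loaded comments would
surface, using the existing col_description() function (the same one
\d+ consults):

    -- List each pg_am column with whatever comment has been loaded.
    SELECT a.attname,
           col_description(a.attrelid, a.attnum) AS description
      FROM pg_attribute a
     WHERE a.attrelid = 'pg_catalog.pg_am'::regclass
       AND a.attnum > 0
     ORDER BY a.attnum;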

> Should I continue working on this patch?

Please do; I've missed having this information handy.

Thanks,
nm

