Re: [HACKERS] bug in fast-path locking

2012-04-09 Thread Boszormenyi Zoltan

On 2012-04-09 19:32, Robert Haas wrote:

On Sun, Apr 8, 2012 at 9:37 PM, Robert Haas  wrote:

Robert, the Assert triggering with the above procedure
is in your "fast path" locking code with current GIT.

Yes, that sure looks like a bug.  It seems that if the top-level
transaction is aborting, then LockReleaseAll() is called and
everything gets cleaned up properly; or if a subtransaction is
aborting after the lock is fully granted, then the locks held by the
subtransaction are released one at a time using LockRelease(), but if
the subtransaction is aborted *during the lock wait* then we only do
LockWaitCancel(), which doesn't clean up the LOCALLOCK.  Before the
fast-lock patch, that didn't really matter, but now it does, because
that LOCALLOCK is tracking the fact that we're holding onto a shared
resource - the strong lock count.  So I think that LockWaitCancel()
needs some kind of adjustment, but I haven't figured out exactly what
it is yet.

I looked at this more.  The above analysis is basically correct, but
the problem goes a bit beyond an error in LockWaitCancel().  We could
also crap out with an error before getting as far as LockWaitCancel()
and have the same problem.  I think that a correct statement of the
problem is this: from the time we bump the strong lock count, up until
the time we're done acquiring the lock (or give up on acquiring it),
we need to have an error-cleanup hook in place that will unbump the
strong lock count if we error out.   Once we're done updating the
shared and local lock tables, the special handling ceases to be
needed, because any subsequent lock release will go through
LockRelease() or LockReleaseAll(), which will do the appropriate
cleanup.

The attached patch is an attempt at implementing that; any reviews appreciated.


This patch indeed fixes the scenario discovered by Cousin Marc.

Reading this patch also made me realize that my lock_timeout
patch needs adjusting, i.e. needs an AbortStrongLockAcquire()
call if waiting for a lock timed out.
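The cleanup discipline described above can be sketched in a minimal standalone C program (illustrative only, not PostgreSQL source: names such as begin_strong_lock_acquire, abort_strong_lock_acquire and strong_lock_count are stand-ins for the patch's actual symbols): bump the shared count first, keep an abort hook armed until the lock tables are fully updated, then disarm it so that ordinary release paths take over.

```c
#include <assert.h>
#include <stdbool.h>

static int strong_lock_count = 0;          /* simplified shared counter */
static bool acquire_in_progress = false;   /* armed between bump and grant */

/* Step 1: bump the count and arm the error-cleanup hook. */
static void begin_strong_lock_acquire(void)
{
    strong_lock_count++;
    acquire_in_progress = true;
}

/* Error path: undo the bump, but only if the acquisition never finished. */
static void abort_strong_lock_acquire(void)
{
    if (acquire_in_progress)
    {
        strong_lock_count--;
        acquire_in_progress = false;
    }
}

/* Success path: once shared and local lock tables are updated, the
 * special handling ends; a later lock release decrements the count. */
static void finish_strong_lock_acquire(void)
{
    acquire_in_progress = false;
}

/* Simulated acquisition that may error out mid-way (e.g. a cancelled
 * lock wait); returns 0 on success, -1 on error. */
static int try_strong_lock(bool fail_midway)
{
    begin_strong_lock_acquire();
    if (fail_midway)
    {
        abort_strong_lock_acquire();   /* the error-cleanup hook */
        return -1;
    }
    finish_strong_lock_acquire();
    return 0;
}
```

The key invariant is that the abort hook is a safe no-op after a completed acquisition, so error cleanup (including a timed-out lock wait, as mentioned above) can call it unconditionally.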

Best regards,
Zoltán Böszörményi

--
--
Zoltán Böszörményi
Cybertec Schönig & Schönig GmbH
Gröhrmühlgasse 26
A-2700 Wiener Neustadt, Austria
Web: http://www.postgresql-support.de
 http://www.postgresql.at/


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] To Do wiki

2012-04-09 Thread Heikki Linnakangas

On 10.04.2012 03:32, Jeff Janes wrote:

The To Do wiki says not to add things to the page without discussing them here first.

So here are some things to discuss.  Assuming the discussion is a
brief yup or nope, it seems to make sense to lump them into one email:

Vacuuming a table with a large GIN index is painfully slow, because
the index is vacuumed in logical order not physical order.  Is making
a vacuum in physical order a to-do?  Does this belong to vacuuming, or
to GIN indexing?  Looking at the complexity of how this was done for
btree index, I would say this is far from easy.  I wonder if there is
an easier way that is still good enough, for example every time you
split a page, check to see if a vacuum is in the index, and if so only
move tuples physically rightward.  If the table is so active that
there is essentially always a vacuum in the index, this could lead to
bloat.  But if the table is that large and active, under the current
non-physical order the vacuum would likely take approximately forever
to finish and so the bloat would be just as bad under that existing
system.


Yup, seems like a todo. It doesn't sound like a good idea to force
tuples to be moved right when a vacuum is in progress, since that could
lead to bloat, but it should be feasible to implement in GIN the same
cycleid mechanism that we use in b-tree.
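As a rough illustration of the cycleid idea (hypothetical names; this is not GIN's or b-tree's actual data structure): each vacuum scans under a fresh cycle id, a concurrent page split stamps the new right-hand page with the active id, and the physical-order scan uses the stamp to detect pages whose tuples were moved past it.

```c
#include <assert.h>

#define NO_VACUUM 0

static int active_cycleid = NO_VACUUM;  /* nonzero while a vacuum scans */

typedef struct Page
{
    int split_cycleid;   /* cycle id of the vacuum active at split time */
} Page;

/* A vacuum starts a physical-order scan under a fresh cycle id. */
static int vacuum_begin(int cycleid)
{
    active_cycleid = cycleid;
    return cycleid;
}

static void vacuum_end(void)
{
    active_cycleid = NO_VACUUM;
}

/* Splitting a page while a vacuum is active stamps the new right page,
 * since its tuples may have moved to a block the scan already passed. */
static Page split_page(void)
{
    Page right = { active_cycleid };
    return right;
}

/* The scan must revisit pages that were split during its own cycle. */
static int needs_revisit(const Page *p, int cycleid)
{
    return p->split_cycleid == cycleid;
}
```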



"Speed up COUNT(*)"  is marked as done.  While index-only-scans should
speed this up in certain cases, it is nothing compared to the speed up
that could be obtained by "use a fixed row count and a +/- count to
follow MVCC visibility rules", and that speed-up is the one people
used to MyISAM are expecting.  We might not want to actually implement
the fixed row count +/- MVCC count idea, but we probably shouldn't
mark the whole thing as done because just one approach to it was
implemented.


I think the way we'd speed up COUNT(*) further would be to implement 
materialized views. Then you could define a materialized view on 
COUNT(*), and essentially get a row counter similar to MyISAM. I think 
it's fair to mark this as done.
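For comparison, the "+/- count" scheme being discussed can be illustrated with a toy sketch (not an implemented PostgreSQL feature; all names are hypothetical): a stored base count plus per-transaction deltas, where a reader adds in only the deltas its snapshot considers committed.

```c
#include <assert.h>
#include <stdbool.h>

typedef struct RowDelta
{
    int  xid;        /* transaction that inserted or deleted rows */
    int  change;     /* +n for inserts, -n for deletes */
    bool committed;  /* visibility, grossly simplified from real MVCC */
} RowDelta;

/* COUNT(*) becomes the base count plus all visible deltas, instead of
 * a full scan of the table or an index. */
static int visible_count(int base, const RowDelta *deltas, int n)
{
    int total = base;
    for (int i = 0; i < n; i++)
        if (deltas[i].committed)
            total += deltas[i].change;
    return total;
}
```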



sort_support was implemented for plain tuple sorting only, To Do is
extend to index-creation sorts (item 2 from message
<1698.1323222...@sss.pgh.pa.us>)


Index-creation sorts are already handled, Tom is referring to using the 
new comparator API for index searches in that email. The change would go 
to _bt_compare().


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com



Re: [HACKERS] [BUGS] BUG #6522: PostgreSQL does not start

2012-04-09 Thread Amit Kapila
I cannot see your task manager screenshot; maybe you can send it as a .bmp
attached to this mail.

 

As there is only one postgres process, it seems your postgres server itself
has not started.

 

>> For the second I have little experience with computers, you could help me
write the correct command.

a. Go to your postgres installation directory in a command prompt

b. Run the command: Postgres.exe -D 

c. For the data directory path, use the path where you created
the initial database with initdb

 

 

 

 

From: Tatiana Ortiz [mailto:tatyp...@gmail.com] 
Sent: Tuesday, April 10, 2012 9:37 AM
To: Amit Kapila
Subject: Re: [BUGS] BUG #6522: PostgreSQL does not start

 

Thanks, for your help and sorry for the delay, in Puerto Rico, we had some
days off. I did the first recommendation you asked. 

 

In task manager I see one process:

 

 For the second I have little experience with computers, you could help me
write the correct command. 

 

Tatiana

 

On Thu, Apr 5, 2012 at 9:24 PM, Amit Kapila  wrote:

From what I can see of this report, it is not confirmed
whether the Postgres server has started properly.

You can try confirming the following 2 points:
1. How many postgres processes are you able to see in your task manager?
This can give a hint whether the appropriate postgres services are started.
2. Try to start postgres from the command prompt with the command
  Postgres.exe -D 
  Please tell what it prints on the command prompt.



-Original Message-
From: pgsql-bugs-ow...@postgresql.org
[mailto:pgsql-bugs-ow...@postgresql.org] On Behalf Of Kevin Grittner
Sent: Monday, April 02, 2012 11:43 PM
To: Tatiana Ortiz
Cc: pgsql-b...@postgresql.org
Subject: Re: [BUGS] BUG #6522: PostgreSQL does not start

[Please keep the list copied.  I won't respond to any more emails
directly to me without a copy to the list.]

Tatiana Ortiz wrote:
> Kevin Grittner wrote:
>>> Test if you have network connectivity from your client to the
>>> server host using ping or equivalent tools.
>>
>> Do you get a response when you ping 127.0.0.1?
>
> I have not tried that.

Well, nobody here can, so how will we know if that is working?

>>> Is your network / VPN/SSH tunnel / firewall configured
>>> correctly?
>>
>> What did you do to check that?
>
> It's configured correctly; I have verified it in the control
> panel.

What, exactly, did you verify was true about the configuration?

>>> If you double-checked your configuration but still get this
>>> error message, it's still unlikely that you encounter a fatal
>>> PostgreSQL misbehavior. You probably have some low level network
>>> connectivity problems (e.g. firewall configuration). Please
>>> check this thoroughly before reporting a bug to the PostgreSQL
>>> community.
>>
>> What did you do to check this?
>
> The Firewall configuration is correct.

Something isn't.  If you have a firewall running, I sure wouldn't
rule it out without pretty good evidence pointing to something else.
Do you have an anti-virus product installed?  (Note, I didn't ask
whether it was enabled -- even when supposedly disabled, many AV
products can cause problems like this.)

>> Your previous email mentioned deleting the postmaster.pid file.
>> Do you have any more detail on what you did?
>
> When I deleted the postmaster.pid, and then went to the Services
> to give a restart to the Postgres service, the file reappeared.

That's an interesting data point, although not enough to pin it down
without other facts not in evidence.

>>> If you know of something I could do to gain access to the
>>> database let me know.
>>
>> Start Task Manager and look in the Processes tab.  Are there any
>> Postgres processes active?
>
> [suggestion apparently ignored]

If you won't provide information, nobody can help you.

>> From a command line, run:
>>
>> netstat -p TCP -a
>>
>> and see if anything is listening on port 5432.
>
> I tried this, and it gave me this result:
>
> [image: nothing listening on port 5432]

So, either the PostgreSQL service isn't running, or it is not
offering IP services on the default port.

Is anything appearing in any of the Windows event logs around the
time you attempt to start the service?  Can you find a PostgreSQL
log file anywhere?  Without knowing what installer was used, it
would be hard to suggest where to look, but sometimes the log files
are in a subdirectory named pg_log, and sometimes there is a file
named "logfile" in the PostgreSQL data directory.

Assistance on this list is provided by volunteers.  If you don't
care enough about what you've got wrong in your environment to
perform the requested diagnostic steps, those contributing their
time are likely to lose interest and stop responding.  I have 200
databases running just fine on 100 servers scattered across the
state.  What are you doing that isn't working?  It's not my
responsibility to sort that out, but I'm willing to help if you're
willing to take responsibility for your end.

-Kevin


Re: [HACKERS] bug in fast-path locking

2012-04-09 Thread Jeff Davis
On Mon, 2012-04-09 at 22:47 -0700, Jeff Davis wrote:
> but other similar paths do:
> 
>   if (!proclock)
>   {
> AbortStrongLockAcquire();
> 
> I don't think it's necessary outside of LockErrorCleanup(), right?

I take that back, it's necessary for the dontwait case, too.

Regards,
Jeff Davis




Re: [HACKERS] bug in fast-path locking

2012-04-09 Thread Jeff Davis
On Mon, 2012-04-09 at 13:32 -0400, Robert Haas wrote:
> I looked at this more.  The above analysis is basically correct, but
> the problem goes a bit beyond an error in LockWaitCancel().  We could
> also crap out with an error before getting as far as LockWaitCancel()
> and have the same problem.  I think that a correct statement of the
> problem is this: from the time we bump the strong lock count, up until
> the time we're done acquiring the lock (or give up on acquiring it),
> we need to have an error-cleanup hook in place that will unbump the
> strong lock count if we error out.   Once we're done updating the
> shared and local lock tables, the special handling ceases to be
> needed, because any subsequent lock release will go through
> LockRelease() or LockReleaseAll(), which will do the appropriate
> cleanup.
> 
> The attached patch is an attempt at implementing that; any reviews 
> appreciated.
> 

This path doesn't have an AbortStrongLockAcquire:

  if (!(proclock->holdMask & LOCKBIT_ON(lockmode)))
  {
...
elog(ERROR,...)

but other similar paths do:

  if (!proclock)
  {
AbortStrongLockAcquire();

I don't think it's necessary outside of LockErrorCleanup(), right?

I think there could be some more asserts. There are three sites that
decrement FastPathStrongRelationLocks, but in two of them
StrongLockInProgress should always be NULL.

Other than that, it looks good to me.

Regards,
Jeff Davis








Re: [HACKERS] plpython triggers are broken for composite-type columns

2012-04-09 Thread Jan Urbański

On 10/04/12 04:20, Tom Lane wrote:

Don't know if anybody noticed bug #6559
http://archives.postgresql.org/pgsql-bugs/2012-03/msg00180.php

I've confirmed that the given test case works in 9.0 but fails in
9.1 and HEAD.

I find this code pretty unreadable, though, and know nothing to
speak of about the Python side of things anyhow.  So somebody else
had better pick this up.


I'll look into that.

Cheers,
Jan



Re: [HACKERS] Last gasp

2012-04-09 Thread Christopher Browne
On Mon, Apr 9, 2012 at 7:38 PM, Robert Haas  wrote:
> On Mon, Apr 9, 2012 at 6:23 PM, Noah Misch  wrote:
>> But objective rules do not require a just judge, and they have a
>> different advantage: predictability.  If I know that a clock starts ticking
>> the moment I get my first review, I'll shape my personal plan accordingly.
>> That works even if I don't favor that timer to govern CFs.
>
> In theory this is true, but previous attempts at enforcing a
> time-based rule were, as I say, not a complete success.  Maybe we just
> need greater consensus around the rule, whatever it is.
>
> At any rate, I think your comments are driving at a good point, which
> is that CommitFests are a time for patches that are done or very
> nearly done to get committed, and a time for other patches to get
> reviewed if they haven't been already.  If we make it clear that the
> purpose of the CommitFest is to assess whether the patch is
> committable, rather than to provide an open-ended window for it to
> become committable, we might do better.

Yeah, I think there's pretty good room for a "+1" on that.

We have seen a number of patches proposed where things have clearly
stepped backwards into the Design Phase, and when that happens, it
should be pretty self-evident that the would-be change can NOT
possibly be nearly-ready-to-commit.

It seems as though we need to have a "bad guy" that will say, "that
sure isn't ready to COMMIT, so we'd better step back from imagining
that it ought to be completed as part of this COMMITfest."

But there is also a flip side to that, namely that if we do so, there
ought to be some aspect to the process to help guide those items that
*aren't* particularly close to being committable.  That seems
nontrivial, as it shouldn't involve quite the same behaviors, and I'm
not quite certain what the differences ought to be.  Further, the
"HackFest" activities will be somewhat immiscible with CommitFest
activities, as they're of somewhat different kinds.

Or perhaps I'm wrong there.  Perhaps it's just that we need to be
*much* more willing to have the final 'fest bounce things.

I wonder if we're starting to have enough data to establish meaningful
statistics on feedback.  The "Scrum" development methodology tries to
attach estimated costs to tasks and then compares them against actual
completion rates to refine future estimates.  We have a fair body of
data available from the
CommitFest data; perhaps it is time to try to infer some rules as to
what patterns on patches may indicate troubled features that are
particularly likely to get deferred.
-- 
When confronted by a difficult problem, solve it by reducing it to the
question, "How would the Lone Ranger handle this?"



[HACKERS] plpython triggers are broken for composite-type columns

2012-04-09 Thread Tom Lane
Don't know if anybody noticed bug #6559
http://archives.postgresql.org/pgsql-bugs/2012-03/msg00180.php

I've confirmed that the given test case works in 9.0 but fails in
9.1 and HEAD.  It's not terribly sensitive to the details of the
SQL: any non-null value for the composite column fails, for instance
you can try
INSERT INTO tbl VALUES (row(3), 4);
and it spits up just the same.  The long and the short of it is that
PLy_modify_tuple fails to make sense of what PLyDict_FromTuple
produced for the table row.

I tried to trace through things to see exactly where it was going wrong,
and noted that

(1) When converting the table row to a Python dict, the composite
column value is fed through the generic PLyString_FromDatum() function,
which seems likely to be the wrong choice.

(2) When converting back, the composite column value is routed to
PLyObject_ToTuple, which decides it is a Python sequence, which seems
a bit improbable considering it was merely a string a moment ago.

I find this code pretty unreadable, though, and know nothing to
speak of about the Python side of things anyhow.  So somebody else
had better pick this up.

regards, tom lane



[HACKERS] To Do wiki

2012-04-09 Thread Jeff Janes
The To Do wiki says not to add things to the page without discussing them here first.

So here are some things to discuss.  Assuming the discussion is a
brief yup or nope, it seems to make sense to lump them into one email:

Vacuuming a table with a large GIN index is painfully slow, because
the index is vacuumed in logical order not physical order.  Is making
a vacuum in physical order a to-do?  Does this belong to vacuuming, or
to GIN indexing?  Looking at the complexity of how this was done for
btree index, I would say this is far from easy.  I wonder if there is
an easier way that is still good enough, for example every time you
split a page, check to see if a vacuum is in the index, and if so only
move tuples physically rightward.  If the table is so active that
there is essentially always a vacuum in the index, this could lead to
bloat.  But if the table is that large and active, under the current
non-physical order the vacuum would likely take approximately forever
to finish and so the bloat would be just as bad under that existing
system.

"Speed up COUNT(*)"  is marked as done.  While index-only-scans should
speed this up in certain cases, it is nothing compared to the speed up
that could be obtained by "use a fixed row count and a +/- count to
follow MVCC visibility rules", and that speed-up is the one people
used to MyISAM are expecting.  We might not want to actually implement
the fixed row count +/- MVCC count idea, but we probably shouldn't
mark the whole thing as done because just one approach to it was
implemented.

sort_support was implemented for plain tuple sorting only, To Do is
extend to index-creation sorts (item 2 from message
<1698.1323222...@sss.pgh.pa.us>)

Cheers,

Jeff



Re: [HACKERS] Last gasp

2012-04-09 Thread Tom Lane
Robert Haas  writes:
> At any rate, I think your comments are driving at a good point, which
> is that CommitFests are a time for patches that are done or very
> nearly done to get committed, and a time for other patches to get
> reviewed if they haven't been already.  If we make it clear that the
> purpose of the CommitFest is to assess whether the patch is
> committable, rather than to provide an open-ended window for it to
> become committable, we might do better.

Yeah, earlier today I tried to draft a reply saying more or less that,
though I couldn't arrive at such a succinct formulation.  It's clear
that in this last fest, there was a lot of stuff submitted that was not
ready for commit or close to it.  What we should have done with that was
review it, but *not* hold open the fest while it got rewritten.

We've previously discussed ideas like more and shorter commitfests
--- I seem to recall proposals like a week-long fest once a month,
for instance.  That got shot down on the argument that it presumed
too much about authors and reviewers being able to sync their schedules
to a narrow review window.  But I think that fests lasting more than a
month are definitely not good.

regards, tom lane



Re: [HACKERS] Last gasp

2012-04-09 Thread Robert Haas
On Mon, Apr 9, 2012 at 6:23 PM, Noah Misch  wrote:
> But objective rules do not require a just judge, and they have a
> different advantage: predictability.  If I know that a clock starts ticking
> the moment I get my first review, I'll shape my personal plan accordingly.
> That works even if I don't favor that timer to govern CFs.

In theory this is true, but previous attempts at enforcing a
time-based rule were, as I say, not a complete success.  Maybe we just
need greater consensus around the rule, whatever it is.

At any rate, I think your comments are driving at a good point, which
is that CommitFests are a time for patches that are done or very
nearly done to get committed, and a time for other patches to get
reviewed if they haven't been already.  If we make it clear that the
purpose of the CommitFest is to assess whether the patch is
committable, rather than to provide an open-ended window for it to
become committable, we might do better.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] bug in fast-path locking

2012-04-09 Thread Jeff Davis
On Mon, 2012-04-09 at 17:42 -0500, Jim Nasby wrote:
> Dumb question... should operations in the various StrongLock functions
> take place in a critical section? Or is that already ensure outside of
> these functions?

Do you mean CRITICAL_SECTION() in the postgres sense (that is, avoid
error paths by making all ERRORs into PANICs and preventing interrupts);
or the sense described here:
http://en.wikipedia.org/wiki/Critical_section ?

If you mean in the postgres sense, you'd have to hold the critical
section open from the time you incremented the strong lock count all the
way until you decremented it (which is normally at the time the lock is
released); which is a cure worse than the disease.

Regards,
Jeff Davis





Re: [HACKERS] bug in fast-path locking

2012-04-09 Thread Jim Nasby

On 4/9/12 12:32 PM, Robert Haas wrote:

I looked at this more.  The above analysis is basically correct, but
the problem goes a bit beyond an error in LockWaitCancel().  We could
also crap out with an error before getting as far as LockWaitCancel()
and have the same problem.  I think that a correct statement of the
problem is this: from the time we bump the strong lock count, up until
the time we're done acquiring the lock (or give up on acquiring it),
we need to have an error-cleanup hook in place that will unbump the
strong lock count if we error out.   Once we're done updating the
shared and local lock tables, the special handling ceases to be
needed, because any subsequent lock release will go through
LockRelease() or LockReleaseAll(), which will do the appropriate
cleanup.

The attached patch is an attempt at implementing that; any reviews appreciated.


Dumb question... should operations in the various StrongLock functions take 
place in a critical section? Or is that already ensured outside of these
functions?
--
Jim C. Nasby, Database Architect   j...@nasby.net
512.569.9461 (cell) http://jim.nasby.net



Re: [HACKERS] Last gasp

2012-04-09 Thread Noah Misch
On Mon, Apr 09, 2012 at 09:25:36AM -0400, Robert Haas wrote:
> On Mon, Apr 9, 2012 at 1:38 AM, Noah Misch  wrote:
> > http://wiki.postgresql.org/wiki/Running_a_CommitFest suggests marking a
> > patch Returned with Feedback after five consecutive days of Waiting on
> > Author.  That was a great tool for keeping things moving, and I think we
> > should return to it or a similar timer.  It's also an objective test
> > approximating the subjective "large patch needs too much rework" test.
> > One cure for insufficient review help is to then ratchet down the
> > permitted Waiting on Author days.
> 
> Fully agreed.  However, attempts by me to vigorously enforce that
> policy in previous releases met with resistance.  However, not
> enforcing it has led to the exact same amount of unhappiness by, well,
> more or less the exact same set of people.

Incidentally, a limit of ~8 total days in Waiting on Author might work better
than a limit of 5 consecutive days in Waiting on Author.  It would bring fewer
perverse incentives at the cost of taking more than a glance to calculate.

> > I liked Simon's idea[1] for increasing the review supply: make a community
> > policy that patch submitters shall furnish commensurate review effort.  If
> > review is available-freely-but-we-hope-you'll-help, then the supply relative
> > to patch submissions is unpredictable.  Feature sponsors should see patch
> > review as efficient collaborative development.  When patch authorship teams
> > spend part of their time reviewing other submissions with the expectation of
> > receiving comparable reviews of their own work, we get a superior final
> > product compared to allocating all that time to initial patch writing.  (The
> > details might need work.  For example, do we give breaks for new
> > contributors or self-sponsored authors?)
> 
> I guess my problem is that I have always viewed it as the
> responsibility of patch submitters to furnish commensurate review
> effort.  The original intent of the CommitFest process was that
> everyone would stop working on their own patches and review other
> people's patches.  That's clearly not happening any more.

Maybe we'd just reemphasize/formalize that past understanding, then.

> Of course, part of the problem here is that it's very hard to enforce
> sanctions.  First, people don't like to be sanctioned and tend to
> argue about it, which is not only un-fun for the person attempting to
> impose the sanction but also chews up even more of the limited review
> time in argument.  Second, the standard is inevitably going to be
> fuzzy.  If person A submits a large patch and two small patches and
> reviews two medium-size patches and misses a serious design flaw in
> one of them that Tom spends four days fixing, what's the appropriate
> sanction for that?  Especially if their own patches are already
> committed?  Does it matter whether they missed the design flaw due to
> shoddy reviewing or just because most of us aren't as smart as Tom?  I
> mean, we can't go put time clocks on everyone's desk and measure the
> amount of time they spend on patch development and patch review and
> start imposing sanctions when that falls below some agreed-upon ratio.
>  In the absence of some ability to objectively measure people's
> contributions in this area, we rely on everyone's good faith.
> 
> So the we-should-require-people-to-review thing seems like a bit of a
> straw man to me.  It's news to me that any such policy has ever been
> lacking.  The thing is that, aside from the squishiness of the
> criteria, we have no enforcement mechanism.  As a result, some people
> choose to take advantage of the system, and the longer we fail to
> enforce, the more people go that route, somewhat understandably.

I don't envision need for sanctions based on missing things in reviews.  It
doesn't take much of a review to be better than nothing, so let's keep the
process friendly to new and tentative reviewers.  When an experienced hacker
misses something sufficiently-obvious, the self-recognition will regularly
motivate greater care going forward.  Nonetheless, yes, I don't see any of
this insulating us from bad faith or gaming of the system.  I tentatively
assume that we have people acting in good faith on perverse incentives, not
people acting in bad faith.

> David Fetter has floated the idea, a few times, of appointing a
> release manager who, AIUI, would be given dictatorial power to evict
> patches from the last CommitFest according to that person's technical
> judgement and ultimately at their personal discretion to make sure
> that the release happens in a timely fashion.  I remarked at the last
> developer meeting that I would be happy to have such a role, as long
> as I got to occupy it.  This was actually intended as a joking remark,
> but I think several people took it more seriously than I meant it.
> The point I was going for is: nobody really likes having a dictator,
> unless either (1

Re: [HACKERS] bug in fast-path locking

2012-04-09 Thread Jeff Davis
On Mon, 2012-04-09 at 16:11 -0400, Robert Haas wrote:
> > I wonder though whether
> > you actually need a *count*.  What if it were just a flag saying "do not
> > take any fast path locks on this object", and once set it didn't get
> > unset until there were no locks left at all on that object?
> 
> I think if you read the above-referenced section of the README you'll
> be deconfused.

My understanding:

The basic reason for the count is that we need to preserve the
information that a strong lock is being acquired between the time it's
put in FastPathStrongRelationLocks and the time it actually acquires the
lock in the lock manager.

By definition, the lock manager doesn't know about it yet (so we can't
use a simple rule like "there are no locks on the object so we can unset
the flag"). Therefore, the backend must indicate whether it's in this
code path or not; meaning that it needs to do something on the error
path (in this case, decrement the count). That was the source of this
bug.

There may be a way around this problem, but nothing occurs to me right
now.
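A toy two-backend scenario shows why a count rather than a boolean flag is needed here (illustrative names only, not the actual data structures): if two backends are both between the bump and the grant, a single flag cleared by whichever backend errors out first would wrongly re-enable the fast path while the other acquisition is still in flight, whereas a count preserves each backend's contribution.

```c
#include <assert.h>

static int strong_count = 0;   /* per-partition strong lock count, simplified */

static void bump(void)   { strong_count++; }   /* begin strong acquisition */
static void unbump(void) { strong_count--; }   /* error cleanup or release */

/* Fast-path locks are permitted only when no strong acquisition is
 * pending or held in this partition. */
static int fast_path_allowed(void)
{
    return strong_count == 0;
}
```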

Regards,
Jeff Davis

PS: Oops, I missed this bug in the review, too.

PPS: In the README, FastPathStrongRelationLocks is referred to as
FastPathStrongLocks. Worth a quick update for easier symbol searching.




Re: [HACKERS] Last gasp

2012-04-09 Thread Peter Eisentraut
On Sat, 2012-04-07 at 16:51 -0400, Robert Haas wrote:
> Even before this CommitFest, it's felt to me like this hasn't been a
> great cycle for reviewing.  I think we have generally had fewer people
> doing reviews than we did during the 9.0 and 9.1 cycles.  I think we
> had a lot of momentum with the CommitFest process when it was new, but
> three years on I think there's been some ebbing of the relative
> enthusiastic volunteerism that got off the ground.  I don't have a
> very good idea what to do about that, but I think it bears some
> thought.

But the patches left in the current commit fest all have gotten a decent
amount of reviewing.  The patches are still there because the reviews
have identified problems and there was not enough development time to
fix them.  I don't think more reviewing resources would have changed
this in a significant way.




Re: [HACKERS] bug in fast-path locking

2012-04-09 Thread Robert Haas
On Mon, Apr 9, 2012 at 2:42 PM, Tom Lane  wrote:
> Robert Haas  writes:
>> On Mon, Apr 9, 2012 at 1:49 PM, Tom Lane  wrote:
>>> Haven't looked at the code, but maybe it'd be better to not bump the
>>> strong lock count in the first place until the final step of updating
>>> the lock tables?
>
>> Well, unfortunately, that would break the entire mechanism.  The idea
>> is that we bump the strong lock count first.  That prevents anyone
>> from taking any more fast-path locks on the target relation.  Then, we
>> go through and find any existing fast-path locks that have already
>> been taken, and turn them into regular locks.  Finally, we resolve the
>> actual lock request and either grant the lock or block, depending on
>> whether conflicts exist.
>
> OK.  (Is that explained somewhere in the comments?  I confess I've not
> paid any attention to this patch up to now.)

There's a new section in src/backend/storage/lmgr/README on Fast Path
Locking, plus comments at various places in the code.  It's certainly
possible I've missed something that should be updated, but I did my
best.

> I wonder though whether
> you actually need a *count*.  What if it were just a flag saying "do not
> take any fast path locks on this object", and once set it didn't get
> unset until there were no locks left at all on that object?

I think if you read the above-referenced section of the README you'll
be deconfused.  The short version is that we divide up the space of
lockable objects into 1024 partitions and the strong lock counts are
actually a count of all locks in the partition.  It is therefore
theoretically possible for locking to get slower on table A because
somebody's got an AccessExclusiveLock on table B, if the low-order 10
bits of the locktag hashcodes happen to collide.  In such a case, all
locks on both relations would be forced out of the fast path until the
AccessExclusiveLock was released. If it so happens that table A is
getting pounded with something that looks a lot like pgbench -S -c 32
-j 32 on a system with more than a couple of cores, the user will be
sad.  I judge that real-world occurrences of this problem will be
quite rare, since most people have adequate reasons for long-lived
strong table locks anyway, and 1024 partitions seemed like enough to
keep most people from suffering too badly.  I don't see any way to
eliminate the theoretical possibility of this while still having the
basic mechanism work, either, though we could certainly crank up the
partition count.
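Put another way, the partition is just the locktag hashcode masked to its low-order 10 bits; an illustrative computation (these hashcode values are made up, not real locktag hashes):

```sql
-- 1024 partitions, so partition = hashcode & 1023.
-- Hashcodes 5 and 1029 differ, but collide in the low 10 bits:
SELECT 5 & 1023 AS part_a, 1029 & 1023 AS part_b;  -- both are 5
```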

> In
> particular, it's not clear from what you're saying here why it's okay
> to let the value revert once you've changed some of the FP locks to
> regular locks.

It's always safe to convert a fast-path lock to a regular lock; it
just costs you some performance.  The idea is that everything that
exists as a fast-path lock is something that's certain not to have any
lock conflicts.  As soon as we discover that a particular lock might
be involved in a lock conflict, we have to turn it into a "real" lock.
 So if backends 1, 2, and 3 take fast-path locks on A (to SELECT from
it, for example) and then backend 4 wants an AccessExclusiveLock, it
will pull the locks from those backends out of the fast-path mechanism
and make regular lock entries for them before checking for lock
conflicts.  Then, it will discover that there are in fact conflicts
and go to sleep.  When those backends go to release their locks, they
will notice that their locks have been moved to the main lock table
and will release them there, eventually waking up backend 4 to go do
his thing.
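In 9.2 this promotion can be observed from SQL through the fastpath column of pg_locks; a rough sketch (using the table a from the example above):

```sql
-- Backend 1: an ordinary SELECT takes its AccessShareLock via the fast path.
BEGIN;
SELECT count(*) FROM a;
SELECT mode, fastpath FROM pg_locks WHERE relation = 'a'::regclass;
-- fastpath is true here.  Once another backend requests an
-- AccessExclusiveLock on a and forces promotion, re-running the
-- pg_locks query in this session shows fastpath = false.
COMMIT;
```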

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] Revisiting extract(epoch from timestamp)

2012-04-09 Thread Tom Lane
Alvaro Herrera  writes:
> Excerpts from Tom Lane's message of lun abr 09 15:38:21 -0300 2012:
>> What exactly would you do with it there that you couldn't do more easily
>> and clearly with plain timestamp comparisons?  I'm willing to be
>> convinced, but I want to see a case where it really is the best way.

> You mean, having the constraint declaration rotate the timestamptz
> column to timestamp and then extract the epoch from that?  If you go
> that route, then the queries that wish to take advantage of constraint
> exclusion would have to do likewise, which becomes ugly rather quickly.

No, I'm wondering why the partition constraints wouldn't just be

tstzcol >= '2012-04-01 00:00' and tstzcol < '2012-05-01 00:00'

or similar.  What sort of constraint have you got in mind that is more
naturally expressed involving extract(epoch)?  (And will the planner
think so too?)
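For instance, with a hypothetical parent table measurements(ts timestamptz), both the partition constraint and the excluding query stay in plain timestamp comparisons:

```sql
-- Constraint on a monthly partition, no extract(epoch) involved:
CREATE TABLE measurements_2012_04 (
    CHECK (ts >= '2012-04-01 00:00' AND ts < '2012-05-01 00:00')
) INHERITS (measurements);

-- A query in the same terms qualifies for constraint exclusion:
SELECT * FROM measurements
WHERE ts >= '2012-04-10' AND ts < '2012-04-11';
```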

regards, tom lane



[HACKERS] should encoding names be quoted in error messages?

2012-04-09 Thread Peter Eisentraut
Encoding names are currently sometimes quoted (encoding \"%s\"),
sometimes not (encoding %s).  Which one should we settle on?

In favor of quoting is that we do this for everything else.  But since
the possible encoding names are known in advance, we know we don't have
to do the quoting to avoid ambiguities.

Opinions?




Re: [HACKERS] Another review of URI for libpq, v7 submission

2012-04-09 Thread Alvaro Herrera
Excerpts from Peter Eisentraut's message of vie abr 06 03:09:10 -0300 2012:
> On fre, 2012-04-06 at 00:25 -0300, Alvaro Herrera wrote:
> > Some moments of radical thinking later, I became unhappy with the fact
> > that the conninfo stuff and parameter keywords are all crammed in the
> > PQconnectdbParams description.  This feels wrong to me, even more so
> > after we expand it even more to add URIs to the mix.  I think it's
> > better to create a separate sect1 (which I've entitled "Connection
> > Strings") which explains the conninfo and URI formats as well as
> > accepted keywords.  The new section is referenced from the multiple
> > places that need it, without having to point to PQconnectdbParams.
> 
> Yes, it should be split out.  But the libpq chapter already has too many
> tiny sect1s.  I think it should be a sect2 under "Database Connection
> Control".

Thanks, that seems a good idea.  I have tweaked things slightly and it
looks pretty decent to me.  Wording improvements are welcome.  The file
in its entirety can be seen here:
https://github.com/alvherre/postgres/blob/uri/doc/src/sgml/libpq.sgml
The new bits start at line 1224.  I also attach the HTML output for easy
reading.  (I wonder if it's going to be visible in the archives).

There are three minor things that need to be changed for this to be
committable:

1. it depends on strtok_r which is likely to be a problem in MSVC++ and
perhaps older Unix platforms as well.

2. The ssl=true trick being converted into sslmode=require doesn't work
if the URI specifies them uri-encoded, which seems bogus.

3. if an unknown keyword is uri-encoded, the error message displays it
still uri-encoded.  Seems to me it'd be better to uri-decode it before
throwing error.

Alexander says he's going to work on these and then I'll finally commit it.

-- 
Álvaro Herrera 
The PostgreSQL Company - Command Prompt, Inc.
PostgreSQL Replication, Consulting, Custom Development, 24x7 support
[Attachment: HTML rendering of the reworked "Database Connection Control
Functions" section of the libpq documentation (PostgreSQL 9.2devel);
truncated in the archive.]

Re: [HACKERS] HOT updates & REDIRECT line pointers

2012-04-09 Thread Bruce Momjian
On Wed, Mar 21, 2012 at 09:28:22PM -0400, Robert Haas wrote:
> On Wed, Mar 21, 2012 at 9:22 PM, Tom Lane  wrote:
> >> It strikes me that it likely wouldn't be any
> >> worse than, oh, say, flipping the default value of
> >> standard_conforming_strings,
> >
> > Really?  It's taking away functionality and not supplying any substitute
> > (or at least you did not propose any).  In fact, you didn't even suggest
> > exactly how you propose to not break joined UPDATE/DELETE.
> 
> Oh, hmm, interesting.  I had been thinking that you were talking about
> a case where *user code* was relying on the semantics of the TID,
> which has always struck me as an implementation detail that users
> probably shouldn't get too attached to.  But now I see that you're
> talking about something much more basic - the fundamental
> implementation of UPDATE and DELETE relies on the TID not changing
> under them.  That pretty much kills this idea dead in the water.

Should this information be added to src/backend/access/heap/README.HOT?

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +



Re: [HACKERS] Revisiting extract(epoch from timestamp)

2012-04-09 Thread Alvaro Herrera

Excerpts from Tom Lane's message of lun abr 09 15:38:21 -0300 2012:
> 
> Alvaro Herrera  writes:
> >> Robert Haas  writes:
> >>> If somebody needs it I'd probably be in favor of doing it.  I'm not
> >>> sure I'd do it on spec.
> 
> > It would be useful to have a simple function to use with timestamp in
> > constraint exclusion without having to use contorted expressions ...
> > An immutable extract_epoch(timestamptz) would fit the bill.
> 
> What exactly would you do with it there that you couldn't do more easily
> and clearly with plain timestamp comparisons?  I'm willing to be
> convinced, but I want to see a case where it really is the best way.

You mean, having the constraint declaration rotate the timestamptz
column to timestamp and then extract the epoch from that?  If you go
that route, then the queries that wish to take advantage of constraint
exclusion would have to do likewise, which becomes ugly rather quickly.

-- 
Álvaro Herrera 
The PostgreSQL Company - Command Prompt, Inc.
PostgreSQL Replication, Consulting, Custom Development, 24x7 support



Re: [HACKERS] Regarding column reordering project for GSoc 2012

2012-04-09 Thread Bruce Momjian
On Mon, Apr 09, 2012 at 01:29:46PM -0500, Merlin Moncure wrote:
> but generally speaking jdbc is displacing odbc as the 'go to' library
> for connection between different kinds of database systems, especially
> on non-windows environments.  jdbc is to java as fdw is to postgres
> basically.  so a fdw exposed jdbc driver should be able to connect and
> gather data from just about anything -- even something like sql server
> so that you could bypass the freetds dependency which is quite nice.

Yes, I can see jdbc-fdw being very powerful.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +



Re: [HACKERS] bug in fast-path locking

2012-04-09 Thread Tom Lane
Robert Haas  writes:
> On Mon, Apr 9, 2012 at 1:49 PM, Tom Lane  wrote:
>> Haven't looked at the code, but maybe it'd be better to not bump the
>> strong lock count in the first place until the final step of updating
>> the lock tables?

> Well, unfortunately, that would break the entire mechanism.  The idea
> is that we bump the strong lock count first.  That prevents anyone
> from taking any more fast-path locks on the target relation.  Then, we
> go through and find any existing fast-path locks that have already
> been taken, and turn them into regular locks.  Finally, we resolve the
> actual lock request and either grant the lock or block, depending on
> whether conflicts exist.

OK.  (Is that explained somewhere in the comments?  I confess I've not
paid any attention to this patch up to now.)  I wonder though whether
you actually need a *count*.  What if it were just a flag saying "do not
take any fast path locks on this object", and once set it didn't get
unset until there were no locks left at all on that object?  In
particular, it's not clear from what you're saying here why it's okay
to let the value revert once you've changed some of the FP locks to
regular locks.

regards, tom lane



Re: [HACKERS] Revisiting extract(epoch from timestamp)

2012-04-09 Thread Tom Lane
Alvaro Herrera  writes:
>> Robert Haas  writes:
>>> If somebody needs it I'd probably be in favor of doing it.  I'm not
>>> sure I'd do it on spec.

> It would be useful to have a simple function to use with timestamp in
> constraint exclusion without having to use contorted expressions ...
> An immutable extract_epoch(timestamptz) would fit the bill.

What exactly would you do with it there that you couldn't do more easily
and clearly with plain timestamp comparisons?  I'm willing to be
convinced, but I want to see a case where it really is the best way.

regards, tom lane



Re: [HACKERS] Regarding column reordering project for GSoc 2012

2012-04-09 Thread Andrew Dunstan



On 04/09/2012 02:14 PM, Bruce Momjian wrote:

On Tue, Mar 20, 2012 at 01:25:15PM +0100, Claes Jakobsson wrote:

On 20 mar 2012, at 13.08, Heikki Linnakangas wrote:

On 20.03.2012 11:10, Claes Jakobsson wrote:

Personally I'd love a type 2 JDBC driver for PostgreSQL.

Why?

listen/notify over SSL for example unless that's been fixed in the
JDBC driver recently. And I'm sure there are other things in libpq that
would be nice to have.

As mainly a Perl dude who uses libpq via DBD::Pg I find it odd that the
Java people don't do the same instead of reimplementing everything.

Well, I assume they reimplemented libpq so that java would not rely on a
platform-specific library like libpq.



Type 4 drivers are the norm in the Java world. You would find it much 
more difficult to get traction among Java users, in my experience, with 
a driver that's not pure Java.


And in any case, I think it's a good thing to have two significant 
independent implementations of the wire protocol out there.


Note too that the maintainer of the Perl DBD driver has opined in my 
hearing that he would like to be able to move from relying on libpq to 
having a pure Perl driver (although personally speaking I'm glad he hasn't.)



cheers

andrew



Re: [HACKERS] Regarding column reordering project for GSoc 2012

2012-04-09 Thread Merlin Moncure
On Mon, Apr 9, 2012 at 1:14 PM, Bruce Momjian  wrote:
> Well, I assume they reimplemented libpq so that java would not rely on a
> platform-specific library like libpq.

yes, that is correct.  jdbc for postgres is a complete implementation
of the client side protocol.  this has some good and bad points -- on
the good side you have some features libpq is only about to get, like
row level result processing, but on the minus side you are missing
some features libpq has, like gssapi authentication (but you can still
get that with jdbc->odbc bridge).

but generally speaking jdbc is displacing odbc as the 'go to' library
for connection between different kinds of database systems, especially
on non-windows environments.  jdbc is to java as fdw is to postgres
basically.  so a fdw exposed jdbc driver should be able to connect and
gather data from just about anything -- even something like sql server
so that you could bypass the freetds dependency which is quite nice.

there's an odbc-fdw project that does something pretty similar and
might be a more natural choice for windows coders.

merlin



Re: [HACKERS] why was the VAR 'optind' never changed in initdb?

2012-04-09 Thread Andrew Dunstan



On 04/09/2012 01:38 PM, Tom Lane wrote:

Andrew Dunstan  writes:

i.e. we'd forbid:
  initdb -D foo bar
which the OP's error more or less devolves to.

Makes sense.  Don't we have a similar issue with psql, pg_dump, etc?



From a quick survey:

psql won't override a dbname or username set explicitly with an option 
argument.


pg_dump doesn't have an option argument to set the dbname.

pg_restore doesn't have an option argument to set the input file name.

vacuumdb, clusterdb, reindexdb, createlang and droplang all need 
remediation. createuser and dropuser look ok.


pg_ctl seems a mess :-( I'll need to look at it closer.


cheers

andrew





regards, tom lane





Re: [HACKERS] bug in fast-path locking

2012-04-09 Thread Robert Haas
On Mon, Apr 9, 2012 at 1:49 PM, Tom Lane  wrote:
> Robert Haas  writes:
>> I looked at this more.  The above analysis is basically correct, but
>> the problem goes a bit beyond an error in LockWaitCancel().  We could
>> also crap out with an error before getting as far as LockWaitCancel()
>> and have the same problem.  I think that a correct statement of the
>> problem is this: from the time we bump the strong lock count, up until
>> the time we're done acquiring the lock (or give up on acquiring it),
>> we need to have an error-cleanup hook in place that will unbump the
>> strong lock count if we error out.   Once we're done updating the
>> shared and local lock tables, the special handling ceases to be
>> needed, because any subsequent lock release will go through
>> LockRelease() or LockReleaseAll(), which will do the appropriate
>> cleanup.
>
> Haven't looked at the code, but maybe it'd be better to not bump the
> strong lock count in the first place until the final step of updating
> the lock tables?

Well, unfortunately, that would break the entire mechanism.  The idea
is that we bump the strong lock count first.  That prevents anyone
from taking any more fast-path locks on the target relation.  Then, we
go through and find any existing fast-path locks that have already
been taken, and turn them into regular locks.  Finally, we resolve the
actual lock request and either grant the lock or block, depending on
whether conflicts exist.  So there's some necessary separation between
the action of bumping the strong lock count and updating the lock
tables; the entire mechanism relies on being able to do non-trivial
processing in between.  I thought that I had nailed down the error
exit cases in the original patch, but this test case, and some code
reading with fresh eyes, shows that I didn't do half so good a job as
I had thought.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] Revisiting extract(epoch from timestamp)

2012-04-09 Thread Alvaro Herrera

Excerpts from Tom Lane's message of lun abr 09 15:04:10 -0300 2012:
> Robert Haas  writes:
> > On Mon, Apr 9, 2012 at 1:30 PM, Tom Lane  wrote:
> >> http://archives.postgresql.org/pgsql-general/2012-01/msg00649.php
> >> The above-linked discussion also brings up a different point, which is
> >> that extracting the epoch from a timestamptz is an immutable operation,
> >> but because it's provided in the context of timestamptz_part we can only
> >> mark it stable.  (That is correct because the other cases depend on the
> >> timezone setting ... but epoch doesn't.)  It seems like it might be
> >> worth providing a single-purpose function equivalent to extract(epoch),
> >> so that we could mark it immutable.  On the other hand, it's not
> >> entirely apparent why people would need to create indexes on the epoch
> >> value rather than just indexing the timestamp itself, so I'm a tad less
> >> excited about this angle of it.
> 
> > If somebody needs it I'd probably be in favor of doing it.  I'm not
> > sure I'd do it on spec.
> 
> Hmm, I thought depesz was asking for such a function here:
> http://archives.postgresql.org/pgsql-hackers/2012-01/msg01690.php
> but now that I look more closely, he may have just meant that as an
> alternative to touching the existing behavior of timestamp_part.
> But providing a new function wouldn't be enough to solve the problem
> that timestamp_part's immutability marking is wrong.

It would be useful to have a simple function to use with timestamp in
constraint exclusion without having to use contorted expressions ...
An immutable extract_epoch(timestamptz) would fit the bill.
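A minimal sketch of such a wrapper as one can already define it by hand (the index target is a made-up example), relying on the fact that the epoch of a timestamptz does not depend on the TimeZone setting:

```sql
-- User-declared IMMUTABLE wrapper around extract(epoch FROM timestamptz):
CREATE FUNCTION extract_epoch(timestamptz) RETURNS double precision
    AS 'SELECT extract(epoch FROM $1)'
    LANGUAGE sql IMMUTABLE STRICT;

-- Now usable where an immutable expression is required, e.g. an index:
CREATE INDEX ON events (extract_epoch(occurred_at));
```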

-- 
Álvaro Herrera 
The PostgreSQL Company - Command Prompt, Inc.
PostgreSQL Replication, Consulting, Custom Development, 24x7 support



Re: [HACKERS] Regarding column reordering project for GSoc 2012

2012-04-09 Thread Bruce Momjian
On Tue, Mar 20, 2012 at 01:25:15PM +0100, Claes Jakobsson wrote:
> 
> On 20 mar 2012, at 13.08, Heikki Linnakangas wrote:
> > On 20.03.2012 11:10, Claes Jakobsson wrote:
> >> 
> >> Personally I'd love a type 2 JDBC driver for PostgreSQL.
> > 
> > Why?
> 
> listen/notify over SSL for example unless that's been fixed in the
> JDBC driver recently. And I'm sure there are other things in libpq that
> would be nice to have.
>
> As mainly a Perl dude who uses libpq via DBD::Pg I find it odd that the
> Java people don't do the same instead of reimplementing everything.

Well, I assume they reimplemented libpq so that java would not rely on a
platform-specific library like libpq.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +



Re: [HACKERS] Revisiting extract(epoch from timestamp)

2012-04-09 Thread Tom Lane
"Greg Sabino Mullane"  writes:
>> so that we could mark it immutable.  On the other hand, it's not
>> entirely apparent why people would need to create indexes on the epoch
>> value rather than just indexing the timestamp itself

> Well, it makes for smaller indexes if you don't really care about 
> sub-second resolutions.

Well, maybe in principle, but in practice it's an 8-byte value either
way.  I guess you could down-convert to an int4 if you plan to be
safely dead before 2038 ...

regards, tom lane



Re: [HACKERS] [JDBC] Regarding GSoc Application

2012-04-09 Thread Atri Sharma
On Mon, Apr 9, 2012 at 11:40 PM, Merlin Moncure  wrote:
> On Mon, Apr 9, 2012 at 12:25 PM, Atri Sharma  wrote:
>> On Mon, Apr 9, 2012 at 10:15 PM, Andrew Dunstan  wrote:
>>> On 04/09/2012 12:14 PM, Dave Cramer wrote:
 So I'm confused, once they link a file to an FDW can't you just read
 it with a normal select?

 What additional functionality will this provide ?

>>>
>>>
>>>
>>> I'm confused about what you're confused about. Surely this won't be linking
>>> files to an FDW, but foreign DBMS tables, in anything you can access via
>>> JDBC. All you'll need on the postgres side is the relevant JDBC driver, so
>>> you'd have instant access via standard select queries to anything you can
>>> get a JDBC driver to talk to. That seems to me something worth having.
>>>
>>> I imagine it would look rather like this:
>>>
>>>   CREATE FOREIGN DATA WRAPPER foodb HANDLER pljava_jdbc_handler
>>>   OPTIONS (driver 'jdbc.foodb.org');
>>>   CREATE SERVER myfoodb FOREIGN DATA WRAPPER foodb OPTIONS(host
>>>   '1.2.3.4', user 'foouser', password 'foopw');
>>>   CREATE FOREIGN TABLE footbl (id int, data text) SERVER myfoodb;
>>>   SELECT * from footbl;
>>>
>>>
>>> cheers
>>>
>>> andrew
>>
>> Hi Andrew,
>>
>> Thanks for going through my proposal and commenting on it.
>>
>> I think you have hit the nail on the head. We will be connecting to the
>> foreign DBMS tables. The main aim of the project is to wrap JDBC so we
>> can connect to anything that can be reached through a JDBC URL.
>>
>> I am considering two paths for doing this:
>> The first one takes the help of the SPI(Server Programming Interface)
>> and the second one directly connects through Pl/Java and JNI(Java
>> Native Interface).
>>
>> Please let me know your further comments and also, please advise me on
>> how to proceed further.
>
> I think the best way to go is as planned.  Step one is to get pl/java
> installed and attempt a minimal set of functions that can connect to
> and gather data from an external source...let's start with pl/java
> 'hello world' and go from there.  Once done it's time to start
> thinking about what the java internals will look like -- we can crib
> from dblink for that though.
>
> merlin

I agree, this should be the correct path.

Atri
-- 
Regards,

Atri
l'apprenant



Re: [HACKERS] [JDBC] Regarding GSoc Application

2012-04-09 Thread Merlin Moncure
On Mon, Apr 9, 2012 at 12:25 PM, Atri Sharma  wrote:
> On Mon, Apr 9, 2012 at 10:15 PM, Andrew Dunstan  wrote:
>> On 04/09/2012 12:14 PM, Dave Cramer wrote:
>>> So I'm confused, once they link a file to an FDW can't you just read
>>> it with a normal select?
>>>
>>> What additional functionality will this provide ?
>>>
>>
>>
>>
>> I'm confused about what you're confused about. Surely this won't be linking
>> files to an FDW, but foreign DBMS tables, in anything you can access via
>> JDBC. All you'll need on the postgres side is the relevant JDBC driver, so
>> you'd have instant access via standard select queries to anything you can
>> get a JDBC driver to talk to. That seems to me something worth having.
>>
>> I imagine it would look rather like this:
>>
>>   CREATE FOREIGN DATA WRAPPER foodb HANDLER pljava_jdbc_handler
>>   OPTIONS (driver 'jdbc.foodb.org');
>>   CREATE SERVER myfoodb FOREIGN DATA WRAPPER foodb OPTIONS(host
>>   '1.2.3.4', user 'foouser', password 'foopw');
>>   CREATE FOREIGN TABLE footbl (id int, data text) SERVER myfoodb;
>>   SELECT * from footbl;
>>
>>
>> cheers
>>
>> andrew
>
> Hi Andrew,
>
> Thanks for going through my proposal and commenting on it.
>
> I think you have hit the nail on the head. We will be connecting to the
> foreign DBMS tables. The main aim of the project is to wrap JDBC so we
> can connect to anything that can be reached through a JDBC URL.
>
> I am considering two paths for doing this:
> The first one takes the help of the SPI(Server Programming Interface)
> and the second one directly connects through Pl/Java and JNI(Java
> Native Interface).
>
> Please let me know your further comments and also, please advise me on
> how to proceed further.

I think the best way to go is as planned.  Step one is to get pl/java
installed and attempt a minimal set of functions that can connect to
and gather data from an external source...let's start with pl/java
'hello world' and go from there.  Once done it's time to start
thinking about what the java internals will look like -- we can crib
from dblink for that though.

merlin



Re: [HACKERS] Revisiting extract(epoch from timestamp)

2012-04-09 Thread Tom Lane
Robert Haas  writes:
> On Mon, Apr 9, 2012 at 1:30 PM, Tom Lane  wrote:
>> http://archives.postgresql.org/pgsql-general/2012-01/msg00649.php
>> The above-linked discussion also brings up a different point, which is
>> that extracting the epoch from a timestamptz is an immutable operation,
>> but because it's provided in the context of timestamptz_part we can only
>> mark it stable.  (That is correct because the other cases depend on the
>> timezone setting ... but epoch doesn't.)  It seems like it might be
>> worth providing a single-purpose function equivalent to extract(epoch),
>> so that we could mark it immutable.  On the other hand, it's not
>> entirely apparent why people would need to create indexes on the epoch
>> value rather than just indexing the timestamp itself, so I'm a tad less
>> excited about this angle of it.

> If somebody needs it I'd probably be in favor of doing it.  I'm not
> sure I'd do it on spec.

Hmm, I thought depesz was asking for such a function here:
http://archives.postgresql.org/pgsql-hackers/2012-01/msg01690.php
but now that I look more closely, he may have just meant that as an
alternative to touching the existing behavior of timestamp_part.
But providing a new function wouldn't be enough to solve the problem
that timestamp_part's immutability marking is wrong.

regards, tom lane



Re: [HACKERS] bug in fast-path locking

2012-04-09 Thread Tom Lane
Robert Haas  writes:
> I looked at this more.  The above analysis is basically correct, but
> the problem goes a bit beyond an error in LockWaitCancel().  We could
> also crap out with an error before getting as far as LockWaitCancel()
> and have the same problem.  I think that a correct statement of the
> problem is this: from the time we bump the strong lock count, up until
> the time we're done acquiring the lock (or give up on acquiring it),
> we need to have an error-cleanup hook in place that will unbump the
> strong lock count if we error out.   Once we're done updating the
> shared and local lock tables, the special handling ceases to be
> needed, because any subsequent lock release will go through
> LockRelease() or LockReleaseAll(), which will do the appropriate
> cleanup.

Haven't looked at the code, but maybe it'd be better to not bump the
strong lock count in the first place until the final step of updating
the lock tables?

regards, tom lane



Re: [HACKERS] Revisiting extract(epoch from timestamp)

2012-04-09 Thread Greg Sabino Mullane



> so that we could mark it immutable.  On the other hand, it's not
> entirely apparent why people would need to create indexes on the epoch
> value rather than just indexing the timestamp itself

Well, it makes for smaller indexes if you don't really care about 
sub-second resolutions.

- -- 
Greg Sabino Mullane g...@turnstep.com
End Point Corporation http://www.endpoint.com/
PGP Key: 0x14964AC8 201204091345
http://biglumber.com/x/web?pk=2529DF6AB8F79407E94445B4BC9B906714964AC8








Re: [HACKERS] Revisiting extract(epoch from timestamp)

2012-04-09 Thread Robert Haas
On Mon, Apr 9, 2012 at 1:30 PM, Tom Lane  wrote:
> A long time ago, we had this bug report:
> http://archives.postgresql.org/pgsql-bugs/2003-02/msg00069.php
> in consequence of which, I changed timestamp_part() so that it would
> rotate a timestamp-without-timezone from the local timezone to GMT
> before extracting the epoch offset (commit
> 191ef2b407f065544ceed5700e42400857d9270f).
>
> Recent discussion makes it seem like this was a bad idea:
> http://archives.postgresql.org/pgsql-general/2012-01/msg00649.php
> The big problem is that timestamp_part() is marked as immutable, which
> is a correct statement for every other field type that it can extract,
> but wrong for epoch if that depends on the setting of the timezone GUC.
> So if we leave this behavior alone, we're going to have to downgrade
> timestamp_part() to stable, which is quite likely to break applications
> using it in index expressions.  Furthermore, while you could still get
> the current behavior by explicitly casting the timestamp to timestamptz
> before extracting the epoch, there is currently no convenient way to get
> a non-timezone-aware epoch value from a timestamp.  Which seems rather
> silly given that one point of the timestamp type is to not be timezone
> sensitive.
>
> So I'm kind of inclined to revert that old change.  Back in the day
> we thought it was a relatively insignificant bug fix and applied it in a
> minor release, but I think now our standards are higher and we'd want to
> treat this as a release-notable incompatibility.

+1 to all the above.

> The above-linked discussion also brings up a different point, which is
> that extracting the epoch from a timestamptz is an immutable operation,
> but because it's provided in the context of timestamptz_part we can only
> mark it stable.  (That is correct because the other cases depend on the
> timezone setting ... but epoch doesn't.)  It seems like it might be
> worth providing a single-purpose function equivalent to extract(epoch),
> so that we could mark it immutable.  On the other hand, it's not
> entirely apparent why people would need to create indexes on the epoch
> value rather than just indexing the timestamp itself, so I'm a tad less
> excited about this angle of it.

If somebody needs it I'd probably be in favor of doing it.  I'm not
sure I'd do it on spec.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] why was the VAR 'optind' never changed in initdb?

2012-04-09 Thread Tom Lane
Andrew Dunstan  writes:
> i.e. we'd forbid:
>  initdb -D foo bar
> which the OP's error more or less devolves to.

Makes sense.  Don't we have a similar issue with psql, pg_dump, etc?

regards, tom lane



Re: [HACKERS] [JDBC] Regarding GSoc Application

2012-04-09 Thread Atri Sharma
On Mon, Apr 9, 2012 at 10:15 PM, Andrew Dunstan  wrote:
>
>
> On 04/09/2012 12:14 PM, Dave Cramer wrote:
>>
>>
>> So I'm confused, once they link a file to an FDW can't you just read
>> it with a normal select ?
>>
>> What additional functionality will this provide ?
>>
>
>
>
> I'm confused about what you're confused about. Surely this won't be linking
> files to an FDW, but foreign DBMS tables, in anything you can access via
> JDBC. All you'll need on the postgres side is the relevant JDBC driver, so
> you'd have instant access via standard select queries to anything you can
> get a JDBC driver to talk to. That seems to me something worth having.
>
> I imagine it would look rather like this:
>
>   CREATE FOREIGN DATA WRAPPER foodb HANDLER pljava_jdbc_handler
>   OPTIONS (driver 'jdbc.foodb.org');
>   CREATE SERVER myfoodb FOREIGN DATA WRAPPER foodb OPTIONS(host
>   '1.2.3.4', user 'foouser', password 'foopw');
>   CREATE FOREIGN TABLE footbl (id int, data text) SERVER myfoodb;
>   SELECT * from footbl;
>
>
> cheers
>
> andrew

Hi Andrew,

Thanks for going through my proposal and commenting on it.

I think you have hit the nail on the head. We will be connecting to
foreign DBMS tables. The main aim of the project is to wrap JDBC so we
can connect to anything that can be reached through a JDBC URL.

I am considering two paths for doing this:
The first one takes the help of the SPI(Server Programming Interface)
and the second one directly connects through Pl/Java and JNI(Java
Native Interface).

Please let me know your further comments and also, please advise me on
how to proceed further.

Atri
-- 
Regards,

Atri
l'apprenant



Re: [HACKERS] bug in fast-path locking

2012-04-09 Thread Robert Haas
On Sun, Apr 8, 2012 at 9:37 PM, Robert Haas  wrote:
>> Robert, the Assert triggering with the above procedure
>> is in your "fast path" locking code with current GIT.
>
> Yes, that sure looks like a bug.  It seems that if the top-level
> transaction is aborting, then LockReleaseAll() is called and
> everything gets cleaned up properly; or if a subtransaction is
> aborting after the lock is fully granted, then the locks held by the
> subtransaction are released one at a time using LockRelease(), but if
> the subtransaction is aborted *during the lock wait* then we only do
> LockWaitCancel(), which doesn't clean up the LOCALLOCK.  Before the
> fast-lock patch, that didn't really matter, but now it does, because
> that LOCALLOCK is tracking the fact that we're holding onto a shared
> resource - the strong lock count.  So I think that LockWaitCancel()
> needs some kind of adjustment, but I haven't figured out exactly what
> it is yet.

I looked at this more.  The above analysis is basically correct, but
the problem goes a bit beyond an error in LockWaitCancel().  We could
also crap out with an error before getting as far as LockWaitCancel()
and have the same problem.  I think that a correct statement of the
problem is this: from the time we bump the strong lock count, up until
the time we're done acquiring the lock (or give up on acquiring it),
we need to have an error-cleanup hook in place that will unbump the
strong lock count if we error out.   Once we're done updating the
shared and local lock tables, the special handling ceases to be
needed, because any subsequent lock release will go through
LockRelease() or LockReleaseAll(), which will do the appropriate
cleanup.

The attached patch is an attempt at implementing that; any reviews appreciated.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


fix-strong-lock-cleanup.patch
Description: Binary data



[HACKERS] Revisiting extract(epoch from timestamp)

2012-04-09 Thread Tom Lane
A long time ago, we had this bug report:
http://archives.postgresql.org/pgsql-bugs/2003-02/msg00069.php
in consequence of which, I changed timestamp_part() so that it would
rotate a timestamp-without-timezone from the local timezone to GMT
before extracting the epoch offset (commit
191ef2b407f065544ceed5700e42400857d9270f).

Recent discussion makes it seem like this was a bad idea:
http://archives.postgresql.org/pgsql-general/2012-01/msg00649.php
The big problem is that timestamp_part() is marked as immutable, which
is a correct statement for every other field type that it can extract,
but wrong for epoch if that depends on the setting of the timezone GUC.
So if we leave this behavior alone, we're going to have to downgrade
timestamp_part() to stable, which is quite likely to break applications
using it in index expressions.  Furthermore, while you could still get
the current behavior by explicitly casting the timestamp to timestamptz
before extracting the epoch, there is currently no convenient way to get
a non-timezone-aware epoch value from a timestamp.  Which seems rather
silly given that one point of the timestamp type is to not be timezone
sensitive.

So I'm kind of inclined to revert that old change.  Back in the day
we thought it was a relatively insignificant bug fix and applied it in a
minor release, but I think now our standards are higher and we'd want to
treat this as a release-notable incompatibility.

The above-linked discussion also brings up a different point, which is
that extracting the epoch from a timestamptz is an immutable operation,
but because it's provided in the context of timestamptz_part we can only
mark it stable.  (That is correct because the other cases depend on the
timezone setting ... but epoch doesn't.)  It seems like it might be
worth providing a single-purpose function equivalent to extract(epoch),
so that we could mark it immutable.  On the other hand, it's not
entirely apparent why people would need to create indexes on the epoch
value rather than just indexing the timestamp itself, so I'm a tad less
excited about this angle of it.
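
For reference, anyone who wants the immutable behavior today can
already assert it with a trivial wrapper -- a sketch only, with
invented names (epoch_of, events, created_at), not a proposal for the
catalog function:

```sql
-- Extracting the epoch from a timestamptz does not depend on the
-- TimeZone GUC, so it is safe to declare the wrapper IMMUTABLE and
-- use it in an index expression.
CREATE FUNCTION epoch_of(timestamptz) RETURNS double precision
    AS $$ SELECT extract(epoch FROM $1) $$
    LANGUAGE sql IMMUTABLE STRICT;

CREATE INDEX ON events (epoch_of(created_at));
```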

Thoughts?

regards, tom lane



Re: [HACKERS] why was the VAR 'optind' never changed in initdb?

2012-04-09 Thread Andrew Dunstan



On 04/09/2012 12:36 PM, Clover White wrote:

2012/4/9 Andrew Dunstan <and...@dunslane.net>



On 04/09/2012 07:38 AM, Clover White wrote:

Hi,
 I'm debugging initdb using gdb.
 I found that I could not step into the function getopt_long at
line 2572 of initdb.c.
 I also found that the value of the variable optind is never changed.
optind is always equal to 1, so how could optind be larger
than argc (argc is 6) at lines 2648
and 2654?

I was so confused. Could someone give me some help? Thank you~



Why do you expect it to be? Perhaps if you tell us what problem
you're actually trying to solve we can help you better.

cheers

andrew


Hi, this is my story, it may be a little long :)
  I misunderstood the -W parameter of initdb at first and used it
like this:

initdb -U pgsql -W 12345 -D /home/pgsql/pg_data
  And I found the database was not created in the right directory, but
I could not find a log file telling me why.
  So I debugged initdb and found out I had misused the -W parameter; I
should use it like this:

initdb -U pgsql -W -D /home/pgsql/pg_data



This is arguably a bug. Maybe we should change this:

 if (optind < argc)
 {
 pg_data = xstrdup(argv[optind]);
 optind++;
 }

to

 if (optind < argc && strlen(pg_data) == 0)
 {
 pg_data = xstrdup(argv[optind]);
 optind++;
 }

i.e. we'd forbid:

initdb -D foo bar


which the OP's error more or less devolves to.


cheers

andrew



Re: [HACKERS] [JDBC] Regarding GSoc Application

2012-04-09 Thread Kevin Grittner
Dave Cramer  wrote: 
> Andrew Dunstan  wrote:
 
>> All you'll need on the postgres side is the relevant JDBC driver,
>> so you'd have instant access via standard select queries to
>> anything you can get a JDBC driver to talk to. That seems to me
>> something worth having.
>>
>> I imagine it would look rather like this:
>>
>>   CREATE FOREIGN DATA WRAPPER foodb HANDLER pljava_jdbc_handler
>>   OPTIONS (driver 'jdbc.foodb.org');
>>   CREATE SERVER myfoodb FOREIGN DATA WRAPPER foodb OPTIONS(host
>>   '1.2.3.4', user 'foouser', password 'foopw');
>>   CREATE FOREIGN TABLE footbl (id int, data text) SERVER myfoodb;
>>   SELECT * from footbl;
>>
> Well this is certainly more explanation than we have so far.
> 
> Is this the intended use case ?
 
That is how I've understood it from the discussion I've seen -- an
FDW to connect to JDBC so that you can wrap anything accessible from
JDBC. The use of pl/Java seems to be the easiest way to get there.
 
-Kevin



Re: [HACKERS] [JDBC] Regarding GSoc Application

2012-04-09 Thread Dave Cramer
On Mon, Apr 9, 2012 at 12:45 PM, Andrew Dunstan  wrote:
>
>
> On 04/09/2012 12:14 PM, Dave Cramer wrote:
>>
>>
>> So I'm confused, once they link a file to an FDW can't you just read
>> it with a normal select ?
>>
>> What additional functionality will this provide ?
>>
>
>
>
> I'm confused about what you're confused about. Surely this won't be linking
> files to an FDW, but foreign DBMS tables, in anything you can access via
> JDBC. All you'll need on the postgres side is the relevant JDBC driver, so
> you'd have instant access via standard select queries to anything you can
> get a JDBC driver to talk to. That seems to me something worth having.
>
> I imagine it would look rather like this:
>
>   CREATE FOREIGN DATA WRAPPER foodb HANDLER pljava_jdbc_handler
>   OPTIONS (driver 'jdbc.foodb.org');
>   CREATE SERVER myfoodb FOREIGN DATA WRAPPER foodb OPTIONS(host
>   '1.2.3.4', user 'foouser', password 'foopw');
>   CREATE FOREIGN TABLE footbl (id int, data text) SERVER myfoodb;
>   SELECT * from footbl;
>
Well this is certainly more explanation than we have so far.

Is this the intended use case ?

Dave



Re: [HACKERS] [JDBC] Regarding GSoc Application

2012-04-09 Thread Andrew Dunstan



On 04/09/2012 12:14 PM, Dave Cramer wrote:


So I'm confused, once they link a file to an FDW can't you just read
it with a normal select ?

What additional functionality will this provide ?





I'm confused about what you're confused about. Surely this won't be 
linking files to an FDW, but foreign DBMS tables, in anything you can 
access via JDBC. All you'll need on the postgres side is the relevant 
JDBC driver, so you'd have instant access via standard select queries to 
anything you can get a JDBC driver to talk to. That seems to me 
something worth having.


I imagine it would look rather like this:

   CREATE FOREIGN DATA WRAPPER foodb HANDLER pljava_jdbc_handler
   OPTIONS (driver 'jdbc.foodb.org');
   CREATE SERVER myfoodb FOREIGN DATA WRAPPER foodb OPTIONS(host
   '1.2.3.4', user 'foouser', password 'foopw');
   CREATE FOREIGN TABLE footbl (id int, data text) SERVER myfoodb;
   SELECT * from footbl;


cheers

andrew



Re: [HACKERS] why was the VAR 'optind' never changed in initdb?

2012-04-09 Thread Clover White
2012/4/9 Robert Haas 

> On Mon, Apr 9, 2012 at 7:38 AM, Clover White 
> wrote:
> > Hi,
> >   I'm debugging initdb using gdb.
> >   I found that I could not step into the function getopt_long at line
> > 2572 of initdb.c.
> >   I also found that the value of the variable optind is never changed.
> > optind is always equal to 1, so how could optind be larger than
> > argc (argc is 6) at lines 2648 and 2654.
>
> Read the man page for getopt_long.  It changes the global variable optind.
>
> It's a silly interface, but also a long and hallowed UNIX tradition,
> so we're stuck with it.
>
> --
> Robert Haas
> EnterpriseDB: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company
>

Thanks Robert. I have read the man page for getopt_long and optind; they
are in the same man page.

But I still cannot understand why optind is always equal to 1 when I run
initdb under gdb and print optind.
Wasn't optind supposed to increase after getopt_long parsed every
parameter?

-- 
Clover White


Re: [HACKERS] [JDBC] Regarding GSoc Application

2012-04-09 Thread Merlin Moncure
On Mon, Apr 9, 2012 at 11:14 AM, Dave Cramer  wrote:
> So I'm confused, once they link a file to an FDW can't you just read
> it with a normal select ?
>
> What additional functionality will this provide ?
>
> Dave

The basic objective is to expose JDBC to postgres for grabbing
external data.  FDW is a C API and JDBC is a set of Java routines, so
the main challenge is to figure out how to jump from an FDW call into
java.  pl/java is one way to solve that problem.

Once done, you should be able to FDW to any JDBC-supporting data
source, which is basically everything.

merlin



Re: [HACKERS] why was the VAR 'optind' never changed in initdb?

2012-04-09 Thread Clover White
2012/4/9 Andrew Dunstan 

>
>
> On 04/09/2012 07:38 AM, Clover White wrote:
>
>> Hi,
>>  I'm debugging initdb using gdb.
>>  I found that I could not step into the function getopt_long at line 2572
>> of initdb.c.
>>  I also found that the value of the variable optind is never changed. optind
>> is always equal to 1, so how could optind be larger than
>> argc (argc is 6) at lines 2648 and 2654?
>>
>> I was so confused. Could someone give me some help? Thank you~
>>
>>
>>
> Why do you expect it to be? Perhaps if you tell us what problem you're
> actually trying to solve we can help you better.
>
> cheers
>
> andrew
>

Hi, this is my story, it may be a little long :)
  I misunderstood the -W parameter of initdb at first and used it like
this:
initdb -U pgsql -W 12345 -D /home/pgsql/pg_data
  And I found the database was not created in the right directory, but I
could not find a log file telling me why.
  So I debugged initdb and found out I had misused the -W parameter; I
should use it like this:
initdb -U pgsql -W -D /home/pgsql/pg_data

  However, while debugging initdb.c, optind was supposed to increase
after getopt_long parsed every parameter,
  but it was always equal to 1.

  And there is a segment of initdb.c.
if (optind < argc)
  {
  do something statement
  }

  I print the value of optind and argc:

(gdb) p optind
$11 = 1
(gdb) p argc
$12 = 6

  optind is obviously less than argc, but the statement above does not
execute at all.

  QUESTIONS:
1. Why does the statement above not execute?
2. Why is optind always equal to 1?

-- 
Clover White


Re: [HACKERS] Deprecating non-select rules (was Re: Last gasp)

2012-04-09 Thread Robert Haas
On Mon, Apr 9, 2012 at 11:32 AM, Noah Misch  wrote:
> On Mon, Apr 09, 2012 at 03:35:06PM +0200, Andres Freund wrote:
>> On Monday, April 09, 2012 03:25:36 PM Robert Haas wrote:
>> > contrib/xml2 isn't doing us much harm beyond being an ugly wart, but non-
>> > SELECT rules are a land mine for the unwary at best.
>> Which we could start deprecating now btw. since INSTEAD triggers landed in
>> 9.1. There were quite some use-cases for non-select rules that couldn't be
>> fulfilled before, but I think saying that we won't support those rules for
>> more than 3 releases or so might be a good idea. I have seen too many bugs
>> being caused by experienced people not realizing the pitfalls of rules.
>
> A new documentation section "Pitfalls of the Rule System" discussing the known
> hazards would help users immediately and be far easier to adopt.  In contrast
> to the breathless vitriol against rules that periodically appears on these
> lists, current documentation barely hints at the trouble.

We already have a section on rules-vs-triggers, but it presents them
as being about equal in terms of advantages and disadvantages; in
fact, there are some implications that rules are generally superior.
This is a minority point of view on this list, and a rewrite of that
section seems overdue.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] [JDBC] Regarding GSoc Application

2012-04-09 Thread Dave Cramer
On Mon, Apr 9, 2012 at 11:55 AM, Merlin Moncure  wrote:
> On Mon, Apr 9, 2012 at 10:47 AM, Dave Cramer  wrote:
>> How will the user access this? Will it be a normal query through the
>> existing API ? Will it be a private postgresql API ?
>>
>> How will they set it up ? It appears complicated as you have to setup
>> PL/Java as well
>
> Yeah -- it will run through pl/java (at least, that's the idea). What
> pl/java brings to the table is well thought out integration of the JVM
> to postgres so that you can invoke java as functions from postgres.
> PL/java of course is a heavy dependency and non-trivial to set up and
> install.  But to access JDBC from postgres I think it's the
> easiest way forward.  Straight JNI to the JVM from the FDW might be a
> better/cleaner route but we haven't done the research to see exactly
> what's involved there.  I suspect that invoking java from postgres is
> non trivial any way you slice it and that's not a wheel worth
> re-inventing.
>
> In other words, the basic idea is to do two things: a dblink-ish
> wrapper for JDBC via pl/java and an FDW wrapper through that via SPI.
> Better ideas and criticism are welcome of course.
>
> merlin

So I'm confused, once they link a file to an FDW can't you just read
it with a normal select ?

What additional functionality will this provide ?

Dave



Re: [HACKERS] [JDBC] Regarding GSoc Application

2012-04-09 Thread Merlin Moncure
On Mon, Apr 9, 2012 at 10:47 AM, Dave Cramer  wrote:
> How will the user access this? Will it be a normal query through the
> existing API ? Will it be a private postgresql API ?
>
> How will they set it up ? It appears complicated as you have to setup
> PL/Java as well

Yeah -- it will run through pl/java (at least, that's the idea). What
pl/java brings to the table is well thought out integration of the JVM
to postgres so that you can invoke java as functions from postgres.
PL/java of course is a heavy dependency and non-trivial to set up and
install.  But to access JDBC from postgres I think it's the
easiest way forward.  Straight JNI to the JVM from the FDW might be a
better/cleaner route but we haven't done the research to see exactly
what's involved there.  I suspect that invoking java from postgres is
non trivial any way you slice it and that's not a wheel worth
re-inventing.

In other words, the basic idea is to do two things: a dblink-ish
wrapper for JDBC via pl/java and an FDW wrapper through that via SPI.
Better ideas and criticism are welcome of course.

merlin



Re: [HACKERS] [JDBC] Regarding GSoc Application

2012-04-09 Thread Dave Cramer
How will the user access this? Will it be a normal query through the
existing API ? Will it be a private postgresql API ?

How will they set it up ? It appears complicated as you have to setup
PL/Java as well

Dave Cramer

dave.cramer(at)credativ(dot)ca
http://www.credativ.ca



On Mon, Apr 9, 2012 at 11:45 AM, Merlin Moncure  wrote:
> On Sun, Apr 8, 2012 at 8:56 AM, Dave Cramer  wrote:
>> Hi Atri,
>>
>> Is there some JDBC API that supports this in newer versions of the API ?
>
> Didn't parse that question.  My understanding is that the only JDBC
> features needed are what's already there, to make connections to
> databases and execute queries.
>
> The GSoC proposal is here:
>
> https://google-melange.appspot.com/gsoc/proposal/review/google/gsoc2012/atrisharma/1001
>
> merlin



Re: [HACKERS] [JDBC] Regarding GSoc Application

2012-04-09 Thread Merlin Moncure
On Sun, Apr 8, 2012 at 8:56 AM, Dave Cramer  wrote:
> Hi Atri,
>
> Is there some JDBC API that supports this in newer versions of the API ?

Didn't parse that question.  My understanding is that the only JDBC
features needed are what's already there, to make connections to
databases and execute queries.

The GSoC proposal is here:

https://google-melange.appspot.com/gsoc/proposal/review/google/gsoc2012/atrisharma/1001

merlin



[HACKERS] Regarding GSoc proposal

2012-04-09 Thread Atri Sharma
Hi all,

I submitted a proposal for GSoc 2012.Please review it and let me know
your comments.

The link is:

https://google-melange.appspot.com/gsoc/proposal/review/google/gsoc2012/atrisharma/1001

Atri

-- 
Regards,

Atri
l'apprenant



Re: [HACKERS] Deprecating non-select rules (was Re: Last gasp)

2012-04-09 Thread Noah Misch
On Mon, Apr 09, 2012 at 03:35:06PM +0200, Andres Freund wrote:
> On Monday, April 09, 2012 03:25:36 PM Robert Haas wrote:
> > contrib/xml2 isn't doing us much harm beyond being an ugly wart, but non-
> > SELECT rules are a land mine for the unwary at best.
> Which we could start deprecating now btw. since INSTEAD triggers landed in 
> 9.1. There were quite some use-cases for non-select rules that couldn't be 
> fulfilled before, but I think saying that we won't support those rules for 
> more than 3 releases or so might be a good idea. I have seen too many bugs 
> being caused by experienced people not realizing the pitfalls of rules.

A new documentation section "Pitfalls of the Rule System" discussing the known
hazards would help users immediately and be far easier to adopt.  In contrast
to the breathless vitriol against rules that periodically appears on these
lists, current documentation barely hints at the trouble.
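
Such a section could lead with concrete examples.  One classic hazard
(table and rule names invented here): because the rewriter textually
expands NEW into the rule action, a volatile default such as nextval()
is evaluated once per resulting query, so the two tables silently
disagree:

```sql
CREATE TABLE items (id serial PRIMARY KEY, val text);
CREATE TABLE items_log (id int, val text);

-- The rule action re-expands NEW.id to the default expression
-- nextval('items_id_seq'), which then runs a second time.
CREATE RULE log_insert AS ON INSERT TO items
    DO ALSO INSERT INTO items_log VALUES (NEW.id, NEW.val);

INSERT INTO items (val) VALUES ('x');
-- items.id and items_log.id now differ (e.g. 1 vs 2).
```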



Re: [HACKERS] pg_prewarm

2012-04-09 Thread Robert Haas
On Sun, Mar 18, 2012 at 7:25 AM, Cédric Villemain
 wrote:
>> Would be nice to sort out the features of the two Postgres extentions
>> pgfincore (https://github.com/klando/pgfincore ) and pg_prewarm: what
>> do they have in common, what is complementary?
>
> pg_prewarm use postgresql functions (buffer manager) to warm data (different
> kind of 'warm', see pg_prewarm code). Relations are warmed block by block,
> for a range of block.

pg_prewarm actually supports three modes of prewarming: (1) pulling
things into the OS cache using PostgreSQL's asynchronous prefetching
code, which internally uses posix_fadvise on platforms where it's
available, (2) reading the data into a fixed-size buffer a block at a
time to force the OS to read it in synchronously, and (3) actually
pulling the data all the way into shared buffers.  So in terms of
prewarming, it can do the stuff that pgfincore does, plus some extra
stuff.  Of course, pgfincore has a bunch of extra capabilities in
related areas, like being able to check what's in core and being able
to evict things from core, but those things aren't prewarming and I
didn't feel any urge to include them in pg_prewarm, not because they
are bad ideas but just because they weren't what I was trying to do.

> pgfincore does not use the postgresql buffer manager, it uses the posix
> calls. It can proceed per block or full relation.
>
> Both need a POSIX_FADVISE-compatible system to be efficient.
>
> The main difference between pgfincore and pg_prewarm about full relation
> warm is that pgfincore will make very few system calls when pg_prewarm will
> do much more.

That's a fair complaint, but I'm not sure it matters in practice,
because I think that in real life the time spent prewarming is going
to be dominated by I/O, not system call time.  Now, that's not an
excuse for being less efficient, but I actually did have a reason for
doing it this way, which is that it makes it work on systems that
don't support POSIX_FADVISE, like Windows and MacOS X.  Unless I'm
mistaken or it's changed recently, pgfincore makes no effort to be
cross-platform, whereas pg_prewarm should be usable anywhere that
PostgreSQL is, and you'll be able to do prewarming in any of those
places, though of course it may be a bit less efficient without
POSIX_FADVISE, since you'll have to use the "read" or "buffer" mode
rather than "prefetch".  Still, being able to do it less efficiently
is better than not being able to do it at all.

Again, I'm not saying this to knock pgfincore: I see the advantages of
its approach in exposing a whole suite of tools to people running on,
well, the operating systems on which the largest number of people run
PostgreSQL.  But I do think that being cross-platform is an advantage,
and I think it's essential for anything we'd consider shipping as a
contrib module.  I think you could rightly view all of this as
pointing to a deficiency in the APIs exposed by core: there's no way
for anything above the smgr layer to do anything with a range of
blocks, which is exactly what we want to do here.  But I wasn't as
interested in fixing that as I was in getting something which did what
I needed, which happened to be getting the entirety of a relation into
shared_buffers without much ado.

> The current implementation of pgfincore allows making a snapshot and
> restoring it via pgfincore or via pg_prewarm (the latter just needs some
> SQL-fu).

Indeed.

Just to make completely clear my position on pgfincore vs. pg_prewarm,
I think they are complementary utilities with a small overlap.  I
think that the prewarming is important enough to a broad enough group
of people that we should find some way of exposing that functionality
in core or contrib, and I wrote pg_prewarm as a minimalist
implementation of that concept.  I am not necessarily opposed to
someone taking the bull by the horns and coming up with a grander
vision for what kind of tool we pull into the core distribution -
either by extending pg_prewarm, recasting pgfincore as a contrib
module with appropriate cross-platform sauce, or coming up with some
third approach that is truly the one ring to rule them all and in the
darkness bind them.  At the same time, I want to get something done
for 9.3 and I don't want to make it harder than it needs to be.  I
honestly believe that just having an easy way to pull stuff into
memory/shared_buffers will give us eighty to ninety percent of what
people need in this area; we can do more, either in core or elsewhere,
as the motivation may strike us.

Attached is an updated patch, with a fix for the documentation typo noted
by Jeff Janes and some additional documentation examples also inspired
by comments from Jeff.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


pg_prewarm_v2.patch
Description: Binary data

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Re: pg_stat_statements normalisation without invasive changes to the parser (was: Next steps on pg_stat_statements normalisation)

2012-04-09 Thread Tom Lane
Peter Geoghegan  writes:
> Having taken another look at the code, I wonder if we wouldn't have
> been better off just fastpathing out of pgss_store in the first call
> (in a pair of calls made by a backend as part of an execution of some
> non-prepared query) iff there is already an entry in the hashtable -
> after all, we're now going to the trouble of acquiring the spinlock
> just to increment the usage for the entry by 0 (likewise, every other
> field), which is obviously superfluous. I apologise for not having
> spotted this before submitting my last patch.

On reflection, we can actually make the code a good bit simpler if
we push the responsibility for initializing the usage count correctly
into entry_alloc(), instead of having to fix it up later.  Then we
can just skip the entire adjust-the-stats step in pgss_store when
building a sticky entry.  See my commit just now.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Potential for bugs while using COPY_POINTER_FIELD to copy NULL pointer

2012-04-09 Thread Tom Lane
Ashutosh Bapat  writes:
> After such a copy, tests like if (pointer) will start failing. There are a
> few callers of COPY_POINTER_FIELD which avoid calling the macro when the
> size can be 0, but some do not. This looks fishy, in case we have
> if (pointer) kinds of tests.

I don't think we do.  That macro is only used to copy fixed-length
support arrays like sort column numbers.  There would be no reason to
test such a field for null-ness; its size is always determined by other
properties of the node.

It does look like all the actual uses of the macro are protected by
if-tests if the number of columns could be zero (except for MergeJoin
which didn't use to support zero columns but now does; should go fix
that).  But AFAICS that is purely to save a couple of cycles in the copy
operation, not because it would matter later.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] About the behavior of array_length() for empty array

2012-04-09 Thread Robert Haas
On Thu, Apr 5, 2012 at 8:35 PM, iihero  wrote:
> From this point of view, it seems that empty arrays of all N dimensions
> are equivalent.

Yes.  It's effectively viewed as a 0-dimensional array.

> Is there standard definition of this behavior?

No.  Multi-dimensional arrays are a PostgreSQL extension.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Deprecating non-select rules (was Re: Last gasp)

2012-04-09 Thread Andres Freund
On Monday, April 09, 2012 03:25:36 PM Robert Haas wrote:
> contrib/xml2 isn't doing us much harm beyond being an ugly wart, but non-
> SELECT rules are a land mine for the unwary at best.
Which we could start deprecating now, btw., since INSTEAD triggers landed in 
9.1. There were quite a few use-cases for non-select rules that couldn't be 
fulfilled before, but I think saying that we won't support those rules for 
more than 3 releases or so might be a good idea. I have seen too many bugs 
caused by experienced people not realizing the pitfalls of rules.

Andres

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] why was the VAR 'optind' never changed in initdb?

2012-04-09 Thread Robert Haas
On Mon, Apr 9, 2012 at 7:38 AM, Clover White  wrote:
> Hi,
>   I'm debugging initdb using gdb.
>   I found that I could not step into the function getopt_long at line 2572
> in initdb.c.
>   I also found that the value of the variable optind never changes. optind
> is always equal to 1, so how could optind be larger than the value of argc
> (argc is 6) at lines 2648 and 2654?

Read the man page for getopt_long.  It changes the global variable optind.

It's a silly interface, but also a long and hallowed UNIX tradition,
so we're stuck with it.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Last gasp

2012-04-09 Thread Robert Haas
On Mon, Apr 9, 2012 at 1:38 AM, Noah Misch  wrote:
> http://wiki.postgresql.org/wiki/Running_a_CommitFest suggests marking a patch
> Returned with Feedback after five consecutive days of Waiting on Author.  That
> was a great tool for keeping things moving, and I think we should return to it
> or a similar timer.  It's also an objective test approximating the subjective
> "large patch needs too much rework" test.  One cure for insufficient review
> help is to then ratchet down the permitted Waiting on Author days.

Fully agreed.  However, attempts by me to vigorously enforce that
policy in previous releases met with resistance.  Yet not enforcing it
has led to the exact same amount of unhappiness from, well, more or
less the exact same set of people.

> I liked Simon's idea[1] for increasing the review supply: make a community
> policy that patch submitters shall furnish commensurate review effort.  If
> review is available-freely-but-we-hope-you'll-help, then the supply relative
> to patch submissions is unpredictable.  Feature sponsors should see patch
> review as efficient collaborative development.  When patch authorship teams
> spend part of their time reviewing other submissions with the expectation of
> receiving comparable reviews of their own work, we get a superior final
> product compared to allocating all that time to initial patch writing.  (The
> details might need work.  For example, do we give breaks for new contributors
> or self-sponsored authors?)

I guess my problem is that I have always viewed it as the
responsibility of patch submitters to furnish commensurate review
effort.  The original intent of the CommitFest process was that
everyone would stop working on their own patches and review other
people's patches.  That's clearly not happening any more.  Instead,
the CommitFest becomes another month (or three) in which to continue
working on your own patches while expecting other people to review and
commit them.  The reviewing is getting done by people who happen to be
interested in what the patch does, often people who are not code
contributors themselves, or by a small handful of dedicated reviewers
who actually conform to what I see as the original spirit of this
process by reviewing whatever's there because it's there, rather than
because they care about it personally.

Of course, part of the problem here is that it's very hard to enforce
sanctions.  First, people don't like to be sanctioned and tend to
argue about it, which is not only un-fun for the person attempting to
impose the sanction but also chews up even more of the limited review
time in argument.  Second, the standard is inevitably going to be
fuzzy.  If person A submits a large patch and two small patches and
reviews two medium-size patches and misses a serious design flaw in
one of them that Tom spends four days fixing, what's the appropriate
sanction for that?  Especially if their own patches are already
committed?  Does it matter whether they missed the design flaw due to
shoddy reviewing or just because most of us aren't as smart as Tom?  I
mean, we can't go put time clocks on everyone's desk and measure the
amount of time they spend on patch development and patch review and
start imposing sanctions when that falls below some agreed-upon ratio.
 In the absence of some ability to objectively measure people's
contributions in this area, we rely on everyone's good faith.

So the we-should-require-people-to-review thing seems like a bit of a
straw man to me.  It's news to me that any such policy has ever been
lacking.  The thing is that, aside from the squishiness of the
criteria, we have no enforcement mechanism.  As a result, some people
choose to take advantage of the system, and the longer we fail to
enforce, the more people go that route, somewhat understandably.
David Fetter has floated the idea, a few times, of appointing a
release manager who, AIUI, would be given dictatorial power to evict
patches from the last CommitFest according to that person's technical
judgement and ultimately at their personal discretion to make sure
that the release happens in a timely fashion.  I remarked at the last
developer meeting that I would be happy to have such a role, as long
as I got to occupy it.  This was actually intended as a joking remark,
but I think several people took it more seriously than I meant it.
The point I was going for is: nobody really likes having a dictator,
unless either (1) they themselves are the dictator or (2) the dictator
is widely perceived as benevolent and impartial.  In reality, there
are probably half a dozen people I would trust in such a role, or
maybe it could be some small group of individuals.  I would in every
way prefer not to have to go this route, because I think that
self-policing is in every way better: less adversarial and, if people
are honest with themselves and each other, more fair.  But the current
system has become dysfunctional, so we're going to have to do
something.


Re: [HACKERS] [patch] for "psql : Allow processing of multiple -f (file) options "

2012-04-09 Thread Euler Taveira
On 09-04-2012 02:43, Vikash3 S wrote:
> Please find the attached patch with trivial changes against the To Do item
> "psql : Allow processing of multiple -f (file) options".
> Looking forward to valuable feedback.
> 
Haven't you forgotten to cover the single-transaction (-1) mode? How would
you handle the ON_ERROR_* options? Look at the archives for references.
Also, your disclaimer doesn't seem attractive; make it clear you're
contributing code under the PostgreSQL license.


-- 
   Euler Taveira de Oliveira - Timbira   http://www.timbira.com.br/
   PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] why was the VAR 'optind' never changed in initdb?

2012-04-09 Thread Andrew Dunstan



On 04/09/2012 07:38 AM, Clover White wrote:

Hi,
  I'm debugging initdb using gdb.
  I found that I could not step into the function getopt_long at line 
2572 in initdb.c.
  I also found that the value of the variable optind never changes. 
optind is always equal to 1, so how could optind be larger than the 
value of argc (argc is 6) at lines 2648 and 2654?


I was so confused. Could someone give me some help? Thank you~




Why do you expect it to be? Perhaps if you tell us what problem you're 
actually trying to solve we can help you better.


cheers

andrew

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] why was the VAR 'optind' never changed in initdb?

2012-04-09 Thread Clover White
Hi,
  I'm debugging initdb using gdb.
  I found that I could not step into the function getopt_long at line 2572
in initdb.c.
  I also found that the value of the variable optind never changes. optind
is always equal to 1, so how could optind be larger than the value of argc
(argc is 6) at lines 2648 and 2654?

I was so confused. Could someone give me some help? Thank you~

here is my configure:
./configure CFLAGS=-O0 --enable-debug --enable-depend --enable-cassert
--prefix=/home/pgsql/pgsql

follows is my debug log by gdb:

[pgsql@vmlinux postgresql-9.1.2]$ gdb initdb
GNU gdb Red Hat Linux (6.3.0.0-1.63rh)
Copyright 2004 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you are
welcome to change it and/or distribute copies of it under certain
conditions.
Type "show copying" to see the conditions.
There is absolutely no warranty for GDB.  Type "show warranty" for details.
This GDB was configured as "i386-redhat-linux-gnu"...Using host
libthread_db library "/lib/tls/libthread_db.so.1".

(gdb) set args -U pgsql -W -D /home/pgsql/pg_data
(gdb) b main
Breakpoint 1 at 0x804d133: file initdb.c, line 2553.
(gdb) b 2572
Breakpoint 2 at 0x804d20c: file initdb.c, line 2572.
(gdb) run
Starting program: /home/pgsql/pgsql/bin/initdb -U pgsql -W -D
/home/pgsql/pg_data

Breakpoint 1, main (argc=6, argv=0xbfec0ad4) at initdb.c:2553
2553        progname = get_progname(argv[0]);
(gdb) c
Continuing.

Breakpoint 2, main (argc=6, argv=0xbfec0ad4) at initdb.c:2572
2572        while ((c = getopt_long(argc, argv, "dD:E:L:nU:WA:sT:X:",
                long_options, &option_index)) != -1)
(gdb) p optind
$1 = 1
(gdb) s
2574            switch (c)
(gdb) n
2589                username = xstrdup(optarg);
(gdb)
2590                break;
(gdb) p optind
$2 = 1
(gdb) n

Breakpoint 2, main (argc=6, argv=0xbfec0ad4) at initdb.c:2572
2572        while ((c = getopt_long(argc, argv, "dD:E:L:nU:WA:sT:X:",
                long_options, &option_index)) != -1)
(gdb) p optind
$3 = 1
(gdb) n
2574            switch (c)
(gdb) p optind
$4 = 1
(gdb) n
2586                pwprompt = true;
(gdb)
2587                break;
(gdb)

Breakpoint 2, main (argc=6, argv=0xbfec0ad4) at initdb.c:2572
2572        while ((c = getopt_long(argc, argv, "dD:E:L:nU:WA:sT:X:",
                long_options, &option_index)) != -1)
(gdb) p optind
$5 = 1
(gdb) n
2574            switch (c)
(gdb)
2580                pg_data = xstrdup(optarg);
(gdb) p optarg
$6 = 0x0
(gdb) n
2581                break;
(gdb) p optarg
$7 = 0x0
(gdb) n

Breakpoint 2, main (argc=6, argv=0xbfec0ad4) at initdb.c:2572
2572        while ((c = getopt_long(argc, argv, "dD:E:L:nU:WA:sT:X:",
                long_options, &option_index)) != -1)
(gdb) p pg_data
$8 = 0x9d328e8 "/home/pgsql/pg_data"
(gdb) n
2648        if (optind < argc)
(gdb) p optind
$9 = 1
(gdb) p argc
$10 = 6
(gdb) n
2654        if (optind < argc)
(gdb) p optind
$11 = 1
(gdb) p argc
$12 = 6
(gdb) n
2663        if (pwprompt && pwfilename)
(gdb)

-- 
Clover White


[HACKERS] Potential for bugs while using COPY_POINTER_FIELD to copy NULL pointer

2012-04-09 Thread Ashutosh Bapat
Hi,
COPY_POINTER_FIELD is defined as -

#define COPY_POINTER_FIELD(fldname, sz) \
    do { \
        Size    _size = (sz); \
        newnode->fldname = palloc(_size); \
        memcpy(newnode->fldname, from->fldname, _size); \
    } while (0)

Since we allocate _size memory irrespective of whether from->fldname is
NULL, every NULL pointer can get copied as a non-NULL pointer because of the
way the *alloc routines handle 0 sizes.
-- from man malloc
If size  is  0,  then  malloc()  returns either NULL, or a unique pointer
value that can later be successfully passed to free()
--

After such a copy, tests like if (pointer) will start failing. There are a
few callers of COPY_POINTER_FIELD which avoid calling the macro when the
size can be 0, but some do not. This looks fishy, in case we have
if (pointer) kinds of tests.

Shouldn't COPY_POINTER_FIELD return NULL, if the pointer to be copied is
NULL?
-- 
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Enterprise Postgres Company


Re: [HACKERS] pgsql_fdw, FDW for PostgreSQL server

2012-04-09 Thread Thom Brown
2012/4/9 Shigeru HANADA :
>    1) connect to the server at the beginning of the local query
>    2) execute EXPLAIN for foreign table foo
>    3) execute EXPLAIN for foreign table bar
>    4) execute actual query for foreign table foo
>    5) execute actual query for foreign table bar
>    6) disconnect from the server at the end of the local query
>
> If the connection has broken between 4) and 5), and an immediate reconnect
> succeeded, the retrieved results for foo and bar might be inconsistent from
> the viewpoint of transaction isolation.
>
> In the current implementation, the next local query which contains a
> foreign table of the failed server tries to reconnect to the server.

How would this apply to the scenario where you haven't even begun a
transaction yet?  There's no risk of inconsistency if the connection
is lost before the first command can execute, so why fail in such a
case?  Isn't there a line in the sand we can draw where we say that if
we have passed it, we just die, otherwise we try to reconnect as
there's no risk of undesirable results?

>> Also I'm not particularly keen on the message provided to the user in
>> this event:
>>
>> ERROR:  could not execute EXPLAIN for cost estimation
>> DETAIL:  FATAL:  terminating connection due to administrator command
>> FATAL:  terminating connection due to administrator command
>>
>> There's no explanation what the "administrator" command was, and I
>> suspect this is really just a "I don't know what's happened here"
>> condition.  I don't think we should reach that point.
>
> That FATAL message is returned by remote backend's ProcessInterrupts()
> during some administrator commands, such as immediate shutdown or
> pg_terminate_backend().  If remote backend died of fast shutdown or
> SIGKILL, no error message is available (see the sample below).
>
> postgres=# select * From pgsql_branches ;
> ERROR:  could not execute EXPLAIN for cost estimation
> DETAIL:
> HINT:  SELECT bid, bbalance, filler FROM public.pgbench_branches
>
> I agree that the message is confusing.  How about showing a message like
> "pgsql_fdw connection failure on " or something, with the remote
> error message, for such cases?  It can be achieved by adding an extra check
> of the connection status right after PQexec()/PQexecParams().  Although
> some word polishing would be required :)
>
> postgres=# select * from pgsql_branches ;
> ERROR:  pgsql_fdw connection failure on subaru_pgbench
> DETAIL:  FATAL:  terminating connection due to administrator command
> FATAL:  terminating connection due to administrator command

Yes, that would be an improvement.

-- 
Thom

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Incorrect behaviour when using a GiST index on points

2012-04-09 Thread Alexander Korotkov
On Mon, Mar 12, 2012 at 3:50 PM, Alexander Korotkov wrote:

> I believe that the attached version of the patch can be backpatched. It
> fixes this problem without altering the index-building procedure. It just
> makes the checks in internal pages loose enough to compensate for the
> effect of the gist_box_same implementation.
>

Any comments about this?

--
With best regards,
Alexander Korotkov.