Re: [HACKERS] Memory allocation in spi_printtup()

2015-08-17 Thread Neil Conway
On Mon, Aug 17, 2015 at 7:56 AM, Tom Lane t...@sss.pgh.pa.us wrote:
 Hi Neil!  Long time no see.

Likewise :)

 Attached is a one-liner to double the size of the table when space is
 exhausted.

 I think this could use a comment, but otherwise seems OK.

Attached is a revised patch with a comment.

 Should we back-patch this change?  Seems like it's arguably a
 performance bug.

Sounds good to me.

Neil
diff --git a/src/backend/executor/spi.c b/src/backend/executor/spi.c
index d544ad9..05a2b21 100644
--- a/src/backend/executor/spi.c
+++ b/src/backend/executor/spi.c
@@ -1797,7 +1797,8 @@ spi_printtup(TupleTableSlot *slot, DestReceiver *self)
 
 	if (tuptable->free == 0)
 	{
-		tuptable->free = 256;
+		/* Double the size of the table */
+		tuptable->free = tuptable->alloced;
 		tuptable->alloced += tuptable->free;
 		tuptable->vals = (HeapTuple *) repalloc(tuptable->vals,
 	  tuptable->alloced * sizeof(HeapTuple));

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Memory allocation in spi_printtup()

2015-08-17 Thread Neil Conway
spi_printtup() has the following code (spi.c:1798):

if (tuptable->free == 0)
{
    tuptable->free = 256;
    tuptable->alloced += tuptable->free;
    tuptable->vals = (HeapTuple *) repalloc(tuptable->vals,
                                            tuptable->alloced * sizeof(HeapTuple));
}

i.e., it grows the size of the tuptable by a fixed amount when it runs
out of space. That seems odd; doubling the size of the table would be
more standard. Does anyone know if there is a rationale for this
behavior?

Attached is a one-liner to double the size of the table when space is
exhausted. Constructing a simple test case in which a large result is
materialized via SPI_execute() (e.g., by passing two large queries to
crosstab() from contrib/tablefunc), this change reduces the time
required to exceed the palloc size limit from ~300 seconds to ~93
seconds on my laptop.

Of course, using SPI_execute() rather than cursors for queries with
large result sets is not a great idea, but there is demonstrably code
that does this (e.g., contrib/tablefunc -- I'll send a patch for that
shortly).

Neil
diff --git a/src/backend/executor/spi.c b/src/backend/executor/spi.c
index d544ad9..2573b3f 100644
--- a/src/backend/executor/spi.c
+++ b/src/backend/executor/spi.c
@@ -1797,7 +1797,7 @@ spi_printtup(TupleTableSlot *slot, DestReceiver *self)
 
 	if (tuptable->free == 0)
 	{
-		tuptable->free = 256;
+		tuptable->free = tuptable->alloced;
 		tuptable->alloced += tuptable->free;
 		tuptable->vals = (HeapTuple *) repalloc(tuptable->vals,
 	  tuptable->alloced * sizeof(HeapTuple));



Re: [HACKERS] Gsoc2012 Idea --- Social Network database schema

2012-03-20 Thread Neil Conway
2012/3/19 Qi Huang huangq...@hotmail.com:
 I actually tried to find out, personally...not sure if I was searching
 wrongly, but searching for TABLESAMPLE did not yield a cornucopia of
 useful conversations at the right time in history (~2007), even when
 the search is given a broad date-horizon (all), so I, too, am
 uninformed as to the specific objections.

 http://www.postgresql.org/search/?m=1&q=TABLESAMPLE&l=&d=-1&s=d

 I sent a mail to Neil Conway asking him about this. Hope he could give a
 good answer.

I never tried to get TABLESAMPLE support into the main PostgreSQL tree
-- I just developed the original code as an exercise for the purposes
of the talk. Implementing TABLESAMPLE would probably be a reasonable
GSoC project.

My memory of the details is fuzzy, but one thing to check is whether
the approach taken by my patch (randomly choose heap pages and then
return all the live tuples in a chosen page) actually meets the
standard's requirements -- obviously it is not true that each heap
page has the same number of live tuples, so you aren't getting a truly
random sample.

Neil



[HACKERS] Compiling CVS HEAD with clang under OSX

2010-08-01 Thread Neil Conway
I tried $subject recently, and noticed some minor issues:

(1) Two warnings that suggest bugs; in src/backend/utils/adt,

datetime.c:3101:27: warning: use of logical || with constant operand;
switch to bitwise | or remove constant

And similarly for src/interfaces/ecpg/pgtypeslib/interval.c. Attached
is a patch that replaces logical OR with bitwise OR, which seems to be
the intended coding.

(2) clang doesn't support (or require) -no-cpp-precomp, which
src/template/darwin adds to $CC unconditionally. Adding the flag
unconditionally seems wrong regardless: e.g., -no-cpp-precomp isn't
supported by FSF GCC on OSX either. clang is happy to ignore the flag,
but it just emits countless "warning: argument unused during
compilation: '-no-cpp-precomp'" -- not sure the best way to fix this.
Perhaps have configure grep for "apple" in "gcc --version"?

(As an aside, is "no-cpp-precomp" still necessary for
reasonably-modern versions of Apple GCC?)

(3) There are countless warnings emitted during the compilation of
regcomp.c and related files, due to unused values returned by ERR(),
VERR(), FAILW(), and similar macros. Perhaps it is possible to rewrite
the macros to avoid the warning, although I didn't see an easy way to
do that. We could also specify -Wno-unused-value, but probably not
worth bothering just for clang.

Neil


clang_pgsql_tweaks.patch
Description: Binary data



Re: [HACKERS] Compiling CVS HEAD with clang under OSX

2010-08-01 Thread Neil Conway
On Sun, Aug 1, 2010 at 7:40 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 I tried to duplicate your results using what I believe to be the latest
 version of clang,

I'm using SVN tip of llvm+clang from ~one week ago.

 (2) clang doesn't support (or require) -no-cpp-precomp, which
 src/template/darwin adds to $CC unconditionally. Adding the flag
 unconditionally seems wrong regardless: e.g., -no-cpp-precomp isn't
 supported by FSF GCC on OSX either. clang is happy to ignore the flag,
 but it just emits countless "warning: argument unused during
 compilation: '-no-cpp-precomp'"

 I do see that, but I also see it complaining about -fwrapv:

[...]

 We're certainly not going to just drop -fwrapv, as that would break the
 code on many modern versions of gcc.  (I'm a bit surprised and concerned
 that clang hasn't got this flag, btw.)

Support for -fwrapv was apparently implemented recently:

https://llvm.org/viewvc/llvm-project?view=rev&sortby=date&revision=106956

FWIW, I think we should aim to eventually remove the dependency on
-fwrapv, and instead make the code correct under the semantics
guaranteed by the C spec. That will be hard to do without a tool that
checks for code that makes incorrect assumptions about integer
overflow behavior, but there seems to be progress in that area
recently:

http://blog.regehr.org/archives/226

(As it happens, their checker uses llvm, which is what motivated me to
start poking around in the first place...)

 (3) There are countless warnings emitted during the compilation of
 regcomp.c and related files, due to unused values returned by ERR(),
 VERR(), FAILW(), and similar macros.

 I fixed this in HEAD, or at least my copy of clang doesn't complain
 anymore.

Yep, looks good here. Thanks!

Neil



Re: [HACKERS] Compiling CVS HEAD with clang under OSX

2010-08-01 Thread Neil Conway
On Sun, Aug 1, 2010 at 9:40 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 I'm still wondering about the bleats I saw for -fwrapv though.
 configure already is set up to install that switch only conditionally:

  # Disable optimizations that assume no overflow; needed for gcc 4.3+
  PGAC_PROG_CC_CFLAGS_OPT([-fwrapv])

 but apparently the test used for this does not notice warning messages.
 Can we improve that?

I think this is a non-issue at least with respect to clang, since they
added support for -fwrapv recently. However, I wonder if the logic
should be reversed: unless we have evidence that the compiler provides
the integer overflow behavior we require (e.g., it supports -fwrapv,
or is a sufficiently old GCC), we should emit a warning that the
resulting binary might be buggy.

Neil



Re: [HACKERS] Serializable Isolation without blocking

2009-05-05 Thread Neil Conway
On Tue, May 5, 2009 at 8:50 AM, Kevin Grittner
kevin.gritt...@wicourts.gov wrote:
 While discussing potential changes to PostgreSQL documentation of
 transaction isolation levels, Emmanuel Cecchet pointed out an
 intriguing new paper[1] on a new algorithm to provide true
 serializable behavior in a MVCC based database

I agree, this is very interesting work. I blogged about it a while ago[1].

 Making these changes to Berkeley DB involved only modest changes to
 the source code. In total, only 692 lines of code (LOC) were modified
 out of a total of over 200,000 lines of code in Berkeley DB.

Tracking the read sets of each transaction would be very expensive.
Worse still, that information needs to be kept around after
end-of-transaction, which raises questions about where it should be
stored and how it should be cleaned up. Note that the benchmarks in
the paper involve transactions that perform a small number of simple
read and update operations, which reduces the bookkeeping overhead.

Neil

[1] http://everythingisdata.wordpress.com/2009/02/25/february-25-2009/



Re: [HACKERS] Memory leak on hashed agg rescan

2008-10-16 Thread Neil Conway
On Thu, Oct 16, 2008 at 5:26 AM, Tom Lane [EMAIL PROTECTED] wrote:
 It would probably be cleaner to take that logic out of build_hash_table
 altogether, and put it in a separate function to be called by
 ExecInitAgg.

Yeah, I considered that -- makes sense. Attached is the patch I
applied to HEAD, REL8_3_STABLE and REL8_2_STABLE.

Neil


hashed_agg_mem_leak-2.patch
Description: Binary data



[HACKERS] Memory leak on hashed agg rescan

2008-10-15 Thread Neil Conway
I noticed a minor leak in the per-query context when ExecReScanAgg()
is called for a hashed aggregate. During rescan, build_hash_table() is
called to create a new empty hash table in the aggcontext. However,
build_hash_table() also constructs the hash_needed column list in
the per-query context, so repeated calls of build_hash_table() result
in leaking this memory for the duration of the query.

Attached is a patch that fixes this by only constructing hash_needed
if it doesn't already exist. I also bms_free'd the temporary BMS that
is created, although that's pretty harmless.

Neil


hashed_agg_mem_leak-1.patch
Description: Binary data



Re: [HACKERS] Window functions patch v04 for the September commit fest

2008-09-02 Thread Neil Conway
On Tue, Sep 2, 2008 at 9:35 AM, David Fetter [EMAIL PROTECTED] wrote:
 Any chance we can buy a few copies of the official one for use on the
 project?

AFAIK there is no significant difference between the official
standard and the draft version available online, so I don't see the
point.

Neil



Re: [HACKERS] Mysterious Bus Error with get_fn_expr_argtype()

2008-09-01 Thread Neil Conway
On Mon, Sep 1, 2008 at 9:35 PM, Brendan Jurd [EMAIL PROTECTED] wrote:
 +1.  I've been using a variation on this theme (it returns the type
 OID, not a text value) for a couple of years.

Returning regtype seems like the natural choice.

-Neil



Re: [HACKERS] No answers on CommitFest procedures?

2008-07-09 Thread Neil Conway
On Wed, 2008-07-09 at 10:58 -0700, Josh Berkus wrote:
 Have received exactly zero feedback on the question of whether I should 
 be assigning reviewers to WIP patches or not.

I suppose it depends on what exactly WIP means, but I should think
that if work is still in progress, the patch is not ready to be
reviewed yet. Patches that are truly WIP probably shouldn't be
candidates for a commit fest until they are finished, anyway.

BTW, the pg_dump lock timeout patch was previously listed as WIP, but
I think it is better classified as waiting for response. Waiting for a
reviewer's comments to be incorporated into a new version of the patch
should be a distinct state from waiting for the initial version of the
patch to be completed.

-Neil





Re: [HACKERS] CommitFest: how does handoff work for non-committer reviewers?

2008-07-09 Thread Neil Conway
On Wed, 2008-07-09 at 10:59 -0700, Josh Berkus wrote:
 This commitfest we have a number of non-committer reviewers doing 
 reviewing.  When they're done with their review, how do they handoff 
 to a committer for final check and commit?

One approach would be to assign a committer to each patch, in addition
to a reviewer (the committer and the reviewer might be the same, of
course). Once the reviewer has signed off on the patch, the committer
can do the final check over and commit.

-Neil





Re: [HACKERS] [PATCHES] GIN improvements

2008-07-09 Thread Neil Conway
On Tue, 2008-07-08 at 14:51 -0400, Tom Lane wrote:
 I'd still like to take a look.

I was tasked with reviewing this for the current commit fest, although
so far I've just been working on grokking the rest of the GIN code. But
if you'd like to review it instead, that's fine with me.

-Neil





Re: [HACKERS] No answers on CommitFest procedures?

2008-07-09 Thread Neil Conway
On Wed, 2008-07-09 at 14:38 -0400, Alvaro Herrera wrote:
 But patches in progress still need comments from reviewers.

Certainly, but should this be done as part of the commit fest process?
Commenting on the future direction that an in-development patch ought to
take sounds more like a task for -hackers at large, rather than for an
individual reviewer.

-Neil





Re: [HACKERS] CommitFest: how does handoff work for non-committer reviewers?

2008-07-09 Thread Neil Conway
On Wed, 2008-07-09 at 11:50 -0700, Josh Berkus wrote:
 So one thing I was thinking of is:
 
 1) change status to ready for committer
 2) post message to -hackers detailing the review and calling for a 
 committer to check the patch
 3) a committer picks it up

Sure -- or else we could have a round-robin list of committers to
whom patches in "ready for committer" status can be assigned, rather
than emailing -hackers at large.

-Neil





Re: [HACKERS] .psqlrc output for \pset commands

2008-06-12 Thread Neil Conway
On Wed, 2008-06-11 at 19:24 -0400, Bruce Momjian wrote:
 Is this desirable?  \set QUIET at the top of .psqlrc fixes it, but I am
 wondering if we should be automatically doing quiet while .psqlrc is
 processed.

There is some precedent for not emitting the messages: most Unix tools
don't echo the results of applying their .rc files at startup.
Personally, I run psql frequently but very rarely modify my .psqlrc, so
seeing "timing is on" and similar messages echoed to the screen is
almost always noise.

-Neil





Re: [PATCHES] [HACKERS] TRUNCATE TABLE with IDENTITY

2008-05-16 Thread Neil Conway
On Fri, 2008-05-16 at 19:41 -0400, Tom Lane wrote:
 Applied with corrections.  Most notably, since ALTER SEQUENCE RESTART
 is nontransactional like most other ALTER SEQUENCE operations, I
 rearranged things to try to ensure that foreseeable failures like
 deadlock and lack of permissions would be detected before TRUNCATE
 starts to issue any RESTART commands.

Ugh. The fact that the RESTART IDENTITY part of TRUNCATE is
non-transactional is a pretty unsightly wart. I would also quarrel with
your addition to the docs that suggests this is only an issue in
practice if TRUNCATE RESTART IDENTITY is used in a transaction block:
unpredictable failures (such as OOM or query cancellation) can certainly
occur in practice, and would be very disruptive (e.g. if the sequence
values are stored into a column with a UNIQUE constraint, it would break
all inserting transactions until the DBA intervenes).

I wonder if it would be possible to make the sequence operations
performed by TRUNCATE transactional: while the TRUNCATE remains
uncommitted, it should be okay to block concurrent access to the
sequence.

-Neil





Re: [HACKERS] VacAttrStatsP vs VacAttrStats * (typedef'ing pointer types)

2008-05-02 Thread Neil Conway
On Fri, 2008-05-02 at 23:22 +0200, Jan Urbański wrote:
 While looking around vacuum.h (for my GSoC project) I found:
 typedef struct VacAttrStats *VacAttrStatsP;
 and then throughout the code sometimes VacAttrStats *foo is used and 
 sometimes VacAttrStatsP bar is used.

Personally I think we should be consistent about this and just use one
form for a given type.

(I don't really care which variant is preferred. Personally, I think
that hiding pointer types inside a typedef is evil and wrong, but it is
relatively common practice in the source...)

-Neil





Re: [HACKERS] Allow COPY from STDIN to absorb all input before throwing an error

2008-04-08 Thread Neil Conway
On Tue, 2008-04-08 at 15:26 -0500, Decibel! wrote:
 My idea to avoid this situation is to add an option to COPY that  
 tells it not to throw an error until it runs out of input data.

An alternative would be to have the client continue reading (and
discarding) COPY input until the end-of-COPY-input sequence is reached,
and then switch back into normal input processing mode.

-Neil





Re: [HACKERS] Surfacing qualifiers

2008-03-26 Thread Neil Conway
On Wed, 2008-03-26 at 14:38 -0400, Andrew Dunstan wrote:
 I'm still waiting to see an example of where you say this patch is even 
 marginally useful.

It's not hard to think of one:

SELECT * FROM remote_table() WHERE x = 5;

Applying the predicate on the remote database (pushing the predicate
below the function scan) is an elementary optimization, and in many
cases would be enormously more efficient than materializing the entire
remote table at the local site and then applying the qual there.

Certainly I agree with Tom that proper SQL/MED support requires
significant support from both the executor and the optimizer. This is
just a quick hack to take advantage of the existing predicate pushdown
logic -- I just thought it was a cute trick, not something I'd suggest
we include in mainline sources.

-Neil





Re: [HACKERS] Idea for minor tstore optimization

2008-03-25 Thread Neil Conway
On Sat, 2008-03-22 at 21:24 -0400, Tom Lane wrote:
 Oh, wait, that's just a -patches entry; it doesn't look like Neil
 ever committed it.  Neil, how come?

Sorry, slipped through the cracks -- I've now committed the patch.

-Neil





Re: [HACKERS] advancing snapshot's xmin

2008-03-25 Thread Neil Conway
On Tue, 2008-03-25 at 17:26 -0300, Alvaro Herrera wrote:
 There is one hole here: contention on ProcArrayLock.  Basically, for
 simple transactions we will need to update MyProc after every command.

If we're just updating MyProc->xmin, we only need to acquire
ProcArrayLock in shared mode, right?

 Another idea is to throttle the updating of Xmin so it only happens once
 in a while, but it's difficult to find a useful criterion and avoid
 falling into the trap that we just neglected to update it before a large
 command.

Using LWLockConditionalAcquire() might help also.

-Neil





Re: [HACKERS] Integer datetimes

2008-03-20 Thread Neil Conway
On Thu, 2008-03-20 at 20:05 -0300, Alvaro Herrera wrote:
 Neil, you're on the loop for changing the default in configure.  Want to
 do the honors?

Sure -- I sent in a patch earlier, but I'll post an updated version
shortly.

-Neil





Re: [HACKERS]COPY issue(gsoc project)

2008-03-11 Thread Neil Conway
On Tue, 2008-03-11 at 20:56 +0800, longlong wrote:
 This would be a nice feature.  Right now there are often applications
 where there is a data loading or staging table that ends up being
 merged with a larger table after some cleanup.  Moving that data from
 the preperation area into the final table right now is most easily
 done with INSERT INTO X (SELECT A,B FROM C) type actions.  This is
 slow because INSERT takes much longer than COPY.

Why would INSERT INTO ... SELECT be any slower than COPY ... FROM
SELECT?

 2. This comes from the TODO list: COPY always behaves like a unit of
 work that consists of some insert commands; if there is any error, it
 rolls back. But sometimes we only care that the data gets inserted. In
 that situation, I used to insert row by row with try/catch to skip the
 errors, because examining every row would take too much time. So:
 "Allow COPY to report error lines and continue."
 This is a good idea.

Search the archives for prior discussions of this idea; the
implementation will require some careful thought. This is a relevant
thread:

http://markmail.org/message/y3atxu56s2afgidg

Note also that pg_bulkload currently does something analogous to this
outside of the DBMS proper:

http://pgbulkload.projects.postgresql.org/

 which one should i choose to proposal or both?

FWIW, error handling for COPY sounds like a more useful project to me.

-Neil





Re: [HACKERS]COPY issue(gsoc project)

2008-03-11 Thread Neil Conway
On Tue, 2008-03-11 at 15:18 -0700, Neil Conway wrote:
 Note also that pg_bulkload currently does something analogous to this
 outside of the DBMS proper:
 
 http://pgbulkload.projects.postgresql.org/

Sorry, wrong project. I mean pgloader:

http://pgfoundry.org/projects/pgloader/

-Neil





Re: [HACKERS] proposal: plpgsql return execute ...

2008-02-27 Thread Neil Conway
On Wed, 2008-02-27 at 15:24 +0100, Pavel Stehule wrote:
 I think RETURN QUERY is a successful idea. It should be completed with
 support for dynamic SQL.

Yeah, I can see that being useful.

 RETURN EXECUTE sqlstring [USING];

What is the USING clause for?

-Neil



---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


[HACKERS] Idea for minor tstore optimization

2008-02-27 Thread Neil Conway
I notice that several of the call sites of tuplestore_puttuple() start
with arrays of datums and nulls, call heap_form_tuple(), and then switch
into the tstore's context and call tuplestore_puttuple(), which
deep-copies the HeapTuple into the tstore. ISTM it would be faster and
simpler to provide a tuplestore_putvalues(), which just takes the datum
+ nulls arrays and avoids the additional copy.

-Neil





Re: [HACKERS] code cleanup of timestamp code

2008-02-26 Thread Neil Conway
On Tue, 2008-02-26 at 00:22 -0800, Warren Turkal wrote:
 As a result, I have a few questions about the timestamp code. In what
 instances is the floating point timestamp recommended?

One circumstance is when there isn't a native int64 type available. The
floating point datetime code is the traditional implementation -- until
recently the integer datetime code was less tested and more buggy,
although I don't think that is still the case. For 8.4 I'm planning to
submit a patch to make integer datetimes the default, per earlier
discussion.

 Is the backend smart enough to not load and use a database with
 timestamp fields created with the representation not compiled into the
 compiler?

Postgres will refuse to start if the compiled-in datetime representation
doesn't match the datetime representation used by the specified data
directory.

 And finally, would this work be welcome in PostgreSQL?

Yes, sounds like a useful improvement to me. There are quite a few
cleanups and refactorings that could be done to the datetime code.

-Neil





Re: [HACKERS] code cleanup of timestamp code

2008-02-26 Thread Neil Conway
On Tue, 2008-02-26 at 14:55 -0500, Tom Lane wrote:
 Anyway I think they both have their place.

I think it is very fragile to have the semantics of a builtin datatype
change depending on a configure option. It makes it difficult to write
applications that depend on the unique properties of *either* of the
implementations, and introduces the possibility of subtle portability
issues (e.g. your application works fine when developed against a build
of Postgres with integer datetimes, but subtly loses precision when used
with a floating-point build). It also complicates life for binary-mode
clients.

If we're convinced that both implementations are worth keeping, IMHO the
better route would be to separate them into two distinct sets of
datatypes. If not that, then I'd like to see integer datetimes adopted
as the default, and FP datetimes deprecated.

-Neil





Re: [HACKERS] Two Coverity Scan volunteers needed

2008-02-26 Thread Neil Conway
On Tue, 2008-02-26 at 11:33 -0800, Josh Berkus wrote:
 We need two (or more) PostgreSQL hackers to volunteer to regularly check the 
 Coverity reports and either fix/forward the bugs found, or (more often) mark 
 them as non-bugs in the Coverity system.

I take a look at this periodically. Apparently the last run of the tool
for Postgres happened on October 30th -- do you know if there's a way to
schedule more frequent runs?

-Neil





Re: [HACKERS] Two Coverity Scan volunteers needed

2008-02-26 Thread Neil Conway
On Tue, 2008-02-26 at 14:57 -0800, Joshua D. Drake wrote:
 Would there be a way to script the responses to flag us for things
 that are important? 

I think you need human verification / analysis, which isn't an easy
thing to script.

-Neil





[HACKERS] Memory leaks on SRF rescan

2008-02-21 Thread Neil Conway
Consider this test case:

create table baz (a int, b int);
insert into baz (select 1, generate_series(1, 300));

select baz.*, (select 1 from generate_series(1, 1) g(a)
   where g.a < baz.b)
   from baz;

The final query consumes ~800 MB of memory on my system before it
finishes executing. The guilty memory context is pretty obvious:

ExecutorState: 612360192 total in 82 blocks; 1664004 free (117
chunks); 610696188 used

Two different classes of allocations in the EState's per-query context
are leaked:

(1) In FunctionNext(), we call ExecMakeTableFunctionResult() to compute
the result set of the SRF, which allocates the TupleDesc out-parameter
in the per-query memory context. Since rescan of a function scan (with
chgParam != NULL) results in taking this code path again, we can leak 1
TupleDesc for each rescan of a function scan. I think this is plainly a
bug -- the first attached patch fixes it.

(2) In various SRFs, allocations are made in the multi-call context
but are not released before calling SRF_RETURN_DONE. This results in
leaking those allocations for the duration of the query, since the
multi-call context is the EState's per-query context. There appears to
have been some thought that the SRF author would be enlightened enough
to free allocations made in the multi-call context before calling
SRF_RETURN_DONE, but this is rarely done. From a cursory look at the
builtin SRFs, generate_series_step_int4, generate_series_step_int8,
pg_logdir_ls, ts_token_type_byid, ts_token_type_byname, ts_parse_byid,
ts_parse_byname and pg_timezone_abbrevs all get this wrong, at which
point I stopped looking further. I wouldn't think very many user-written
SRFs have gotten this right either.

We could fix this by patching all the guilty SRFs (the second attached
patch does this for the int4 and int8 variants of generate_series()),
but I think it would be more robust to arrange for the FuncCallContext's
multi-call context to have a shorter lifetime: instead of using the
EState's per-query context, we could instead use a context that is
deleted after SRF_RETURN_DONE() is called (or at a minimum, reset each
time the function scan is rescanned).

BTW, it seems to me that shutdown_MultiFuncCall() is wrong in any case:
it frees the SRF's AttInMetadata, but not any of its fields.

Thanks to my colleague Alan Li at Truviso for reporting this issue.

-Neil

Index: src/backend/executor/nodeFunctionscan.c
===
RCS file: /home/neilc/postgres/cvs_root/pgsql/src/backend/executor/nodeFunctionscan.c,v
retrieving revision 1.45
diff -p -c -r1.45 nodeFunctionscan.c
*** src/backend/executor/nodeFunctionscan.c	1 Jan 2008 19:45:49 -	1.45
--- src/backend/executor/nodeFunctionscan.c	22 Feb 2008 02:00:56 -
*** FunctionNext(FunctionScanState *node)
*** 77,83 
--- 77,86 
  		 * do it always.
  		 */
  		if (funcTupdesc)
+ 		{
  			tupledesc_match(node->tupdesc, funcTupdesc);
+ 			FreeTupleDesc(funcTupdesc);
+ 		}
  	}
  
  	/*
Index: src/backend/utils/adt/int.c
===
RCS file: /home/neilc/postgres/cvs_root/pgsql/src/backend/utils/adt/int.c,v
retrieving revision 1.81
diff -p -c -r1.81 int.c
*** src/backend/utils/adt/int.c	1 Jan 2008 19:45:52 -	1.81
--- src/backend/utils/adt/int.c	22 Feb 2008 02:01:17 -
*** generate_series_step_int4(PG_FUNCTION_AR
*** 1377,1382 
--- 1377,1385 
  		SRF_RETURN_NEXT(funcctx, Int32GetDatum(result));
  	}
  	else
+ 	{
  		/* do when there is no more left */
+ 		pfree(fctx);
  		SRF_RETURN_DONE(funcctx);
+ 	}
  }
Index: src/backend/utils/adt/int8.c
===
RCS file: /home/neilc/postgres/cvs_root/pgsql/src/backend/utils/adt/int8.c,v
retrieving revision 1.68
diff -p -c -r1.68 int8.c
*** src/backend/utils/adt/int8.c	1 Jan 2008 19:45:52 -	1.68
--- src/backend/utils/adt/int8.c	22 Feb 2008 02:01:40 -
*** generate_series_step_int8(PG_FUNCTION_AR
*** 1215,1220 
--- 1215,1223 
  		SRF_RETURN_NEXT(funcctx, Int64GetDatum(result));
  	}
  	else
+ 	{
  		/* do when there is no more left */
+ 		pfree(fctx);
  		SRF_RETURN_DONE(funcctx);
+ 	}
  }

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [HACKERS] Memory leaks on SRF rescan

2008-02-21 Thread Neil Conway
On Thu, 2008-02-21 at 21:42 -0500, Tom Lane wrote:
 Given your point (2), is this worth fixing by itself?

Right, probably not.

 Yeah.  I think it's hopeless to expect these functions to all hew to
 the straight and narrow path.  It seems to me that the right way is for
 the sub-select to somehow run in its own per-query context.

Hmm, I was thinking of just fixing this by arranging for the
FuncCallContext's multi-call context to be a special context created by
the function scan, and that is reset/deleted at the appropriate time.
Would this not fix the issue as well?

-Neil





Re: [HACKERS] Including PL/PgSQL by default

2008-02-19 Thread Neil Conway
On Tue, 2008-02-19 at 18:13 -0500, Andrew Dunstan wrote:
 I am having trouble locating the previous thread - can someone please 
 point me at it?

http://markmail.org/message/kyjbj5qovadfoe3w

-Neil





Re: [HACKERS] RFC: array_agg() per SQL:200n

2008-02-02 Thread Neil Conway
On Tue, 2008-01-29 at 13:06 +0100, Peter Eisentraut wrote:
 The ORDER BY clause would also used in XMLAGG, so we should try to parse this 
 in a generalized way.

Yeah, that should be doable. We could go further and expose ORDER BY to
CREATE AGGREGATE, so that users could write aggregates that are
guaranteed to see their input in a certain order. This would be rather
more complicated to implement, though (for one thing, you couldn't do
the "qsort in the final function" trick -- the input to the agg would
need to be presented in the right order, which might differ from the
ordering required by the rest of the query block. We'll need to arrange
for something vaguely similar to do window functions, though.)

-Neil





[HACKERS] Win32: Building with Longhorn SDK

2008-01-29 Thread Neil Conway
When building with MSVC 2005 (Express) and the Longhorn Platform SDK
(version 6.0a), it seems that IPPROTO_IPV6 is only defined when
_WIN32_WINNT >= 0x0501. This results in a compiler error when trying to
build pqcomm.c (line 389). According to [1], building for Windows 2000
(that is, _WIN32_WINNT == 0x0500) is no longer supported with the
Longhorn version of the Platform SDK.

This isn't a huge problem (we can just require the use of prior versions
of the SDK), but I thought I'd mention it for the archives.

-Neil

[1]
http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=1960499&SiteID=1




[HACKERS] RFC: array_agg() per SQL:200n

2008-01-27 Thread Neil Conway
I recently noticed that SQL:200n[1] defines a new aggregate function,
array_agg(). The relevant portions of the spec are:

p. 66: If ARRAY_AGG is specified, then an array value with one element
formed from the value expression evaluated for each row that
qualifies.

p. 556: array aggregate function ::=
  ARRAY_AGG left paren
  value expression [ ORDER BY sort specification list ]
  right paren

p. 564 discusses the required behavior. The result of array_agg() is an
array with one element per input value, sorted according to the optional
ORDER BY clause. NULL input values are included in the array, and the
result for an empty group is NULL, not an empty array. Note that per
page 66, I'd expect array values in the input to array_agg() not to be
flattened.

I'd like to implement array_agg() for 8.4. In the past, we've talked
about moving the array_accum() example aggregate into the backend[2].
Now that there's SQL-standard syntax, that's another reason to do it --
I think this is clearly useful functionality.

The previous discussion got tied up in how to expose the aggregate's
transition value to the type system. The problem is that the aggregate
wants to use a transition value that is not a SQL-level type, to allow
efficient array append operations. Various solutions were mooted about,
typically involving a pass-by-val pseudotype used to hold a pointer to
the C struct holding the transition state.

AFAIK the conclusion reached by the previous thread was that to be type
safe, you'd need one distinct pseudotype per aggregate function, along
with some way to let the planner distinguish this class of pseudotypes
from other types (in order to apply the heuristic that functions like
these are likely to consume more memory). You could identify this class
by an additional column in pg_type, but I think we'd need a lot of
machinery to do this properly (e.g. to allow these types to be created
via SQL). I wonder if this isn't over-engineering: the simple approach
originally followed by Stephen Frost was to declare the transition value
as, say, int8, and just disallow the transition and final functions from
being called outside an aggregate context. AFAIK this would be safe,
although of course it is ugly.

To parse the ORDER BY clause, we'd need to special-case array_agg() in
the grammar, which is a bit unfortunate. Implementation-wise, because
there is no way to lazily evaluate an array expression, I don't think
there's much to be gained by using the tuplesort infrastructure -- we'll
need to materialize the entire array into memory when the final function
is called anyway. Therefore, a simpler approach might be to just
accumulate inputs in the transition function as usual, and then qsort()
them in the final function. We could also have the planner arrange for
the sort to be skipped if it knows that the input to the aggregate will
be delivered in a compatible ordering.

Comments welcome.

-Neil

[1] http://www.wiscorp.com/SQLStandards.html ; apparently SQL:200n is
likely to become SQL:2008 without further changes.

[2] http://archives.postgresql.org/pgsql-hackers/2006-10/msg00362.php
http://archives.postgresql.org/pgsql-patches/2006-10/msg00059.php
http://archives.postgresql.org/pgsql-hackers/2006-10/msg00683.php





Re: [HACKERS] autonomous transactions

2008-01-23 Thread Neil Conway
On Tue, 2008-01-22 at 20:53 +0100, Pavel Stehule wrote:
 And there is most important question about data visibility - is
 autonomous transaction independent on main transaction (isolation)?

From looking at how Oracle does them, autonomous transactions are
completely independent of the transaction that originates them -- they
take a new database snapshot. This means that uncommitted changes in the
originating transaction are not visible to the autonomous transaction.

On Wed, 2008-01-23 at 08:13 +, Simon Riggs wrote:
 Yes, I think autonomous transactions should be on the TODO. They're
 useful for
 - error logging
 - auditing
 - creating new partitions automatically

I think they would also be useful to implement procedures that perform
DDL operations or COMMITs / ROLLBACKs.

-Neil





Re: [HACKERS] autonomous transactions

2008-01-23 Thread Neil Conway
On Wed, 2008-01-23 at 09:30 +, Gregory Stark wrote:
 I think the hard part would be error handling. You have to be able to catch
 any errors and resume the outer transaction.

I agree that you'd need to do this, but I don't follow why it would be
particularly difficult. You essentially have a stack of active
transactions (since one autonomous transaction can start another
autonomous transaction, and so forth). If you encounter an error in the
current transaction, you abort it as normal, pop the stack, and resume
execution of the originating transaction.

I think the hard part is fixing the parts of the backend that assume
that a single process can only have a single top-level transaction in
progress at a given time.

-Neil





Re: [HACKERS] autonomous transactions

2008-01-22 Thread Neil Conway
On Tue, 2008-01-22 at 10:02 -0600, Roberts, Jon wrote:
 Maybe someone could enhance this concept to include it with the core
 database to provide autonomous transactions.

I agree that autonomous transactions would be useful, but doing them via
dblink is a kludge. If we're going to include anything in the core
database, it should be done properly (i.e. as an extension to the
existing transaction system).

-Neil





[HACKERS] DECLARE CURSOR code question

2008-01-11 Thread Neil Conway
In CVS HEAD, why does exec_simple_query() pass an empty cursorOptions to
pg_plan_queries() at postgres.c:903? If we're planning DECLARE CURSOR,
ISTM we ought to be passing down the DECLARE CURSOR's cursorOptions.

-Neil





Re: [HACKERS] VLDB Features

2007-12-15 Thread Neil Conway
On Tue, 2007-12-11 at 19:11 -0500, Greg Smith wrote:
 I'm curious what you feel is missing that pgloader doesn't fill that 
 requirement:  http://pgfoundry.org/projects/pgloader/

For complicated ETL, I agree that using an external tool makes the most
sense. But I think there is still merit in adding support to COPY for
the simple case of trying to load a data file that has some corrupted,
invalid or duplicate records.

-Neil





Re: [HACKERS] VLDB Features

2007-12-14 Thread Neil Conway
On Fri, 2007-12-14 at 14:48 +0200, Hannu Krosing wrote:
 How did you do it ?
 
 Did you enchance COPY command or was it something completely new ?

By modifying COPY: COPY IGNORE ERRORS or some such would instruct COPY
to drop (and log) rows that contain malformed data. That is, rows with
too many or too few columns, rows that result in constraint violations,
and rows containing columns where the data type's input function raises
an error. The last case is the only thing that would be a bit tricky to
implement, I think: you could use PG_TRY() around the InputFunctionCall,
but I guess you'd need a subtransaction to ensure that you reset your
state correctly after catching an error.

-Neil





Re: [HACKERS] VLDB Features

2007-12-14 Thread Neil Conway
On Fri, 2007-12-14 at 18:22 -0500, Tom Lane wrote:
 If we could somehow only do a subtransaction per failure, things would
 be much better, but I don't see how.

One approach would be to essentially implement the pg_bulkloader
approach inside the backend. That is, begin by doing a subtransaction
for every k rows (with k = 1000, say). If you get any errors, then
either repeat the process with k/2 until you locate the individual
row(s) causing the trouble, or perhaps just immediately switch to k = 1.
Fairly ugly though, and would be quite slow for data sets with a high
proportion of erroneous data.

Another approach would be to distinguish between errors that require a
subtransaction to recover to a consistent state, and less serious errors
that don't have this requirement (e.g. invalid input to a data type
input function). If all the errors that we want to tolerate during a
bulk load fall into the latter category, we can do without
subtransactions.

-Neil





Re: [HACKERS] EXPLAIN ANALYZE printing logical and hardware I/O per-node

2007-12-14 Thread Neil Conway
On Fri, 2007-12-14 at 15:47 +, Gregory Stark wrote:
 I've wanted for a long time to have EXPLAIN ANALYZE output per-node I/O usage.
 This would be especially useful if we could distinguish hardware versus
 logical I/O though. And I always thought that would be very hard.
 
 My thought in the past was that would could do it on Solaris by having
 Postgres use DTrace directly via its (undocumented but existing) programmatic
 interface.
 
 For other operating systems it was tempting to suggest just timing the read(2)
 call to see if it took too long to be a logical operation. The problem there
 is that gettimeofday would impose far too much overhead to make that practical
 (or even be precise enough to work properly).
 
 But it occurred to me just now that the hardware instruction counter available
 on just about every platform would be good enough for a heuristic guess at
 whether the read(2) was cached.

I'm skeptical that this would be reliable enough to be very useful,
especially in the face of concurrent, unpredictable system activity on a
busy system. I agree that it would be useful information, though.
Perhaps a useful first step would be to teach EXPLAIN ANALYZE to report
the number of logical and physical I/Os from Postgres' perspective (i.e.
physical I/O just means we need to go to the kernel).

  The problem generally with using the hardware
 instruction counter is that it's not necessarily in sync between processors
 and might therefore run backwards or skip time forwards. This is a problem for
 profiling but if all we care about is a boolean guess at whether the request
 was satisfied quickly from cache then any such skipping forward or backward
 would represent a context switch which we could just toss in the hardware i/o
 bucket. It doesn't matter exactly how long the hardware i/o took, only that
 there was one.
 
 To that end I would love to see something like:
 
  QUERY PLAN   

 -
  Bitmap Heap Scan on h  (cost=8.52..16.45 rows=2 width=512) (actual 
 time=78.926..87.708 rows=2 loops=1 logical-I/O=2 physical-I/O=1)
Recheck Cond: (i = ANY ('{100,1000}'::integer[]))
-  Bitmap Index Scan on hi  (cost=0.00..8.52 rows=2 width=0) (actual 
 time=74.539..74.539 rows=2 loops=1 logical-I/O=2 physical-I/O=2))
  Index Cond: (i = ANY ('{100,1000}'::integer[]))
  Total runtime: 87.820 ms
 
 




Re: [HACKERS] Negative LIMIT and OFFSET?

2007-12-13 Thread Neil Conway
On Thu, 2007-12-13 at 22:06 -0500, Tom Lane wrote:
 I guess that on purely philosophical grounds, it's not an unreasonable
 behavior.  For example, "LIMIT n" means "output at most n tuples",
 not "output exactly n tuples".  So when it outputs no tuples in the face
 of a negative limit, it's meeting its spec.

If "LIMIT n" means "emit at most n tuples", then a query that produces 0
rows with n < 0 is arguably violating its spec, since it has produced
more tuples than the LIMIT specified (0 > n). Interpreted this way, no
result set can be consistent with a negative limit, so I'd vote for
throwing an error.

-Neil





Re: [HACKERS] VLDB Features

2007-12-11 Thread Neil Conway
On Tue, 2007-12-11 at 10:53 -0800, Josh Berkus wrote:
 Just so you don't lose sight of it, one of the biggest VLDB features we're 
 missing is fault-tolerant bulk load.

I actually had to cook up a version of this for Truviso recently. I'll
take a look at submitting a cleaned-up implementation for 8.4.

-Neil





Re: [HACKERS] String encoding during connection handshake

2007-12-02 Thread Neil Conway
On Wed, 2007-11-28 at 09:38 -0800, Trevor Talbot wrote:
 PostgreSQL's problem is that it (and AFAICT POSIX) conflates encoding
 with locale, when the two are entirely separate concepts.

In what way does PostgreSQL conflate encoding with locale?

-Neil





Re: [HACKERS] Status report on 8.3 release

2007-11-29 Thread Neil Conway
On Thu, 2007-11-29 at 11:26 -0500, Bruce Momjian wrote:
 I expect these cleanups to continue for at least another week or two. 
 Once they slow we will schedule RC1.

So are there no plans for an additional beta? Given the recent addition
of changes like

http://archives.postgresql.org/pgsql-committers/2007-11/msg00552.php
http://archives.postgresql.org/pgsql-committers/2007-11/msg00532.php

I wonder if another beta before RC1 would be warranted.

-Neil





Re: [HACKERS] Bug: --with-libxml does not take a location

2007-11-06 Thread Neil Conway
On Wed, 2007-11-07 at 11:38 +0900, Josh Berkus wrote:
 ./configure --with-libxml does not accept a location argument.  This 
 makes it impossible to configure 8.3 with LibXML on the Mac, because I 
 can't upgrade the main libxml without breaking something, and 
 ./configure doesn't let me specify an alternate location.

--with-libraries, --with-includes should work.

-Neil





[HACKERS] bad key in cancel request

2007-11-04 Thread Neil Conway
I noticed that processCancelRequest() emits a log message at DEBUG2 when
it receives a cancel request with a bad key or for a non-existent PID.
For example,

ereport(DEBUG2,
        (errmsg_internal("bad key in cancel request for process %d",
 backendPID)));

I think this ought to be logged at a higher level than DEBUG2: for one
thing, it is a potential security issue the DBA might want to be aware
of. It could also indicate a misconfigured or buggy client application.
Therefore, I think we should raise the level to WARNING, or perhaps
NOTICE. Any objections?

-Neil





Re: [HACKERS] bad key in cancel request

2007-11-04 Thread Neil Conway
On Sun, 2007-11-04 at 11:06 -0500, Tom Lane wrote:
 No, if it's intended for the log it should be LOG.  Your other proposals
 are actually *less* likely to get to where the DBA could see them.

Good point. I suggested WARNING because that suggests that something is
awry, whereas LOG is used for routine log messages.

I've changed the messages from DEBUG2 to LOG in CVS HEAD -- I didn't
backport the change, but I can if anyone feels strongly about it.

-Neil





Re: [HACKERS] PostgreSQL performance issues

2007-10-23 Thread Neil Conway
On Mon, 2007-10-22 at 15:40 +0200, Cédric Villemain wrote:
 Does postgresql use posix_fallocate ?

No.

-Neil





Re: [HACKERS] FOUND with EXECUTE

2007-10-15 Thread Neil Conway
On Tue, 2007-10-16 at 11:24 +1000, Brendan Jurd wrote:
 Is there a technical reason we do not set the value of FOUND when
 executing a dynamic statement in plpgsql?

See prior discussion:

http://archives.postgresql.org/pgsql-bugs/2004-10/msg1.php

It would be easy enough to have EXECUTE modify FOUND, and that might
well be worth doing. Adding an EVAL concept would also be useful,
though, and would avoid changing EXECUTE's behavior in a way that might
break client apps.

-Neil





Re: [HACKERS] Release notes introductory text

2007-10-11 Thread Neil Conway
On Thu, 2007-10-11 at 16:04 -0400, Bruce Momjian wrote:
 I have added the following introductory paragraph to the release notes:
 
   This release represents a major leap forward by adding significant new
   functionality and performance enhancements to
   <productname>PostgreSQL</>. Many complex ideas that normally take years
   to implement were added rapidly to this release by our development team.
   This release starts <productname>PostgreSQL</> on a trajectory moving
   beyond feature-parity with other database systems to a time when
   <productname>PostgreSQL</> will be a technology leader in the database
   field.

Frankly, this sounds like empty hyperbole to me. There is a LOT of work
left to do before we reach feature parity with the major players, let
alone become a technology leader in the database field. I would
personally vote for just saying that this release brings with it a lot
of useful new features and performance improvements, and leave it up to
the reader to decide whether we're on a trajectory moving beyond
feature-parity.

If you want to compare where we are with the major players, I think it
would be more accurate to say that we're doing fairly well on OLTP
oriented features, but there is a lot of work left on OLAP
functionality.

-Neil





Re: [HACKERS] Skytools committed without hackers discussion/review

2007-10-09 Thread Neil Conway
On Tue, 2007-10-09 at 16:50 -0700, Joshua D. Drake wrote:
 I think this almost says it all. My particular gripe about this whole
 thing is that there are other features that are not too intrusive (or
 appear so anyway) that are easily more useful that are not being 
 considered at all. Namely,
 http://archives.postgresql.org/pgsql-patches/2007-10/msg00087.php .

That is NOT a good example. That patch is a first-cut of a non-trivial
optimizer feature that was submitted just before beta1 shipped, by
someone who hasn't modified the optimizer before. Jan's patch was a
contrib module that has been already developed by the Skype folks, and
it goes without saying that Jan has contributed to Postgres extensively.

 It makes the whole process seem tilted and subjective.

There are some fairly obvious, objective reasons why txid differs from
the inline-SQL-SRF patch.

That said, I agree that the process should have been followed in this
case.

-Neil





Re: [HACKERS] PG on NFS may be just a bad idea

2007-10-08 Thread Neil Conway
On Mon, 2007-10-08 at 16:50 -0400, Alvaro Herrera wrote:
 palloc uses malloc underneath.  My thought is to replace that with
 sbrk, mmap or something like that.  Not very portable though, a lot of
 work, and most likely not nearly enough benefits.

Yeah, I agree this isn't likely to be a win in the general case.
However, it would be interesting to explore a specialized allocator for
short-lived memory contexts, where we don't care about having an
effective pfree(). If the context is going to be reset or deleted
shortly anyway, we could probably optimize and simplify palloc() by
skipping free space accounting and then make pfree() a no-op. I recall
Tom mentioning something to this effect a few months back...

-Neil





Re: [HACKERS] Getting to 8.3 beta1

2007-10-04 Thread Neil Conway
On Thu, 2007-10-04 at 09:04 +0200, Guillaume Smet wrote:
 There is a typo in the contrib part:
 # Add GIN support for hstore (Guillaume Smet, Teodor)
 # Add GIN support for pg_trgm (Guillaume Smet, Teodor0
 
 s/Teodor0/Teodor)/
 
 And I didn't participate to the GIN support of hstore, I just added it
 to pg_trgm with the help of Teodor so it should be Teodor alone on
 hstore GIN support.

Fixed, thanks.

-Neil





Re: [HACKERS] Connection Pools and DISCARD ALL

2007-10-04 Thread Neil Conway
On Thu, 2007-10-04 at 15:50 +0100, Simon Riggs wrote:
 On Thu, 2007-10-04 at 10:29 -0400, Tom Lane wrote:
   Somebody who wants the
 above behavior can send "ROLLBACK; DISCARD ALL".
 
 ...which generates an ERROR if no transaction is in progress and fills
 the log needlessly.

Well, it's a WARNING, but your point is taken. Can't a clueful interface
just check what the transaction status of the connection is, rather than
unconditionally issuing a ROLLBACK?

-Neil





Re: [HACKERS] plpgsql TABLE patch

2007-09-25 Thread Neil Conway
On Tue, 2007-25-09 at 22:15 -0400, Tom Lane wrote:
 I believe that (1) is now committed (renamed to RETURN QUERY),
 but what is the status of (2)?
 
 Personally I won't cry if this doesn't make it into 8.3, particularly
 since there was some disagreement about it.  But if you intend to make
 it happen, the days grow short.

Sorry, my day job is currently taking up all my spare cycles :( So I
don't think I'll get a chance to wrap this up for 8.3.

My recollection is that the patch was okay as far as it went, but I'm
hesitant to add yet another alternative to the already complex set of
choices for returning composite types and sets from functions. If we
just make TABLE() syntax sugar for the existing OUT function stuff we
would avoid at least some of that complexity, but Pavel still prefers a
distinct proargmode, last I heard.

-Neil





Re: [HACKERS] stored procedure stats in collector

2007-09-20 Thread Neil Conway
On Thu, 2007-09-20 at 16:08 +0300, Martin Pihlak wrote:
 The GUC variable stats_function_level now takes 3 values: on, off and all.

That seems a confusing set of values. Perhaps "off", "pl", and "all"
would be clearer?

I'm curious if you've measured the performance overhead of enabling this
functionality.

 PS. Would something like this be a canditate for 8.4 inclusion (if polished 
 up)?

It sounds like a useful feature to me.

-Neil





Re: [HACKERS] txn in pg_stat_activity

2007-09-10 Thread Neil Conway
On Mon, 2007-09-10 at 21:04 -0400, Tom Lane wrote:
 I have just noticed that a column txn_start has appeared in
 pg_stat_activity since 8.2.  It's a good idea, but who chose the name?

Me.

 I'm inclined to rename it to xact_start, which is an abbreviation
 that we *do* use in the code, and in some user-visible places, eg,
 pg_stat_database has xact_commit and xact_rollback columns.

I personally find "xact" to be a less intuitive abbreviation of
"transaction" than "txn", but for the sake of consistency, I agree it is
probably better to use xact_start.

-Neil





Re: [HACKERS] Hash index todo list item

2007-09-07 Thread Neil Conway
On Sun, 2007-02-09 at 13:04 -0500, Kenneth Marshall wrote:
 2. Evaluate the performance of different hash index implementations
and/or changes to the current implementation. My current plan is
to keep the implementation as simple as possible and still provide
the desired performance. Several hash index suggestions deal with
changing the layout of the keys on a page to improve lookup
performance, including reducing the bucket size to a fraction of
a page or only storing the hash value on the page, instead of
the index value itself.

You might find this patch useful:

http://archives.postgresql.org/pgsql-patches/2005-05/msg00164.php

It implements the "just store the hash in the index" idea; it also sorts
the entries in a bucket by the hash value, which allows binary search to
be used to locate candidate matches.

I was surprised that this didn't result in a performance improvement for
the benchmarks that I ran, but I never got around to investigating
further (either running more benchmarks or checking whether there was a
bug in the implementation).

Unfortunately, the patch doesn't apply cleanly to HEAD, but I can merge
it up to HEAD if you'd like.

-Neil





Re: [DOCS] [HACKERS] Contrib modules documentation online

2007-08-29 Thread Neil Conway
On Wed, 2007-08-29 at 13:53 -0400, Tom Lane wrote:
 Why wouldn't we just remove the README files altogether?  I can't
 see maintaining duplicate sets of documentation.

I agree that duplication is bad, but I think README files in the
individual contrib directories is useful and worth keeping: if I'm about
to install a contrib module and want to learn how to install and use it,
this change would only make that information *more* difficult to find.

I wonder if it would be possible to keep the master version of the
contrib docs as SGML, and generate plaintext READMEs from it during the
documentation build.

-Neil





Re: [HACKERS] int8 INT64_IS_BUSTED

2007-08-29 Thread Neil Conway
On Wed, 2007-08-29 at 22:41 +0200, Florian G. Pflug wrote:
 Or are platforms with INT64_IS_BUSTED no longer supported,
 and are all those #ifdefs only legacy code?

Personally I think we should head in that direction: if we enable
integer datetimes by default in 8.4 (per earlier discussion), such
machines will be more broken still. We could fallback to using FP
datetimes on INT64_IS_BUSTED machines, but IMHO it is just fundamentally
unwise to have the behavior of a builtin data type dependent on this
sort of thing.

 Please enlighten a poor linux+gcc user who can't remember
 ever using a compiler without a long long datatype after
 leaving TurboC under DOS.

I'm not aware of any platform we might conceivably care about that
doesn't have a 64-bit integral type. To verify this, Peter E. suggested
that we emit a build-time warning if compiling on such a platform for
8.3, which I think would be worth doing.

-Neil





Re: [HACKERS] MSVC build system

2007-08-27 Thread Neil Conway
On Mon, 2007-08-27 at 11:00 -0400, Andrew Dunstan wrote:
 In the longer run I want to make the whole system more data driven, so 
 that it's comparatively easy for someone to add stuff.

I don't mean to hijack your thread, but I wonder if maintaining two
separate build systems is the best approach in the long term. I think
CMake[1] is an interesting alternative: it would allow us to generate
both makefiles and MSVC .proj's from a single set of master build files.

-Neil

[1] http://www.cmake.org





Re: [HACKERS] .NET driver

2007-08-04 Thread Neil Conway
On Sat, 2007-04-08 at 09:26 -0400, Andrew Dunstan wrote:
 So what are *you* doing about it? This is open source, where if you want 
 it and it's not there you make it. Otherwise you're just one more 
 whinger wanting something for nothing.

I don't agree with this attitude at all: we should be listening to our
users, not insulting them for their feedback. We can (and should) ignore
certain suggested features, but the idea that asking for a feature is
whinging if it doesn't include a patch is plainly wrong, IMHO.

-Neil





[HACKERS] plpgsql TABLE patch

2007-07-17 Thread Neil Conway
To review, Pavel Stehule submitted a proposal and patch to add support
for table functions a few months back:

http://archives.postgresql.org/pgsql-hackers/2007-02/msg00318.php
http://archives.postgresql.org/pgsql-patches/2007-05/msg00054.php

Pavel proposed two basically independent features:

(1) RETURN TABLE syntax sugar for PL/PgSQL

This allows you to return the result of evaluating a SELECT query as the
result of a SETOF pl/pgsql function. I don't like the RETURN TABLE
syntax, because TABLE (...) is defined as part of SQL (6.39 in SQL:2003,
as one of the variants of multiset value constructor). If we're going
to implement TABLE (...), the right place to do that is in the Postgres
backend proper (presumably as part of a larger effort to implement
multisets). Therefore I'd like to rename the PL/PgSQL syntax sugar to
RETURN QUERY (I'm open to other suggestions for the name).

Another question is whether it is sensible to allow RETURN QUERY and
RETURN NEXT to be combined in a single function. That is, whether RETURN
QUERY should be more like RETURN (and return from the function
immediately), or more like RETURN NEXT (just append a result set to the
SRF's tuplestore and continue evaluating the function). I think making
it behave more like RETURN NEXT would be more flexible, but perhaps it
would be confusing for users to see a RETURN QUERY statement that does
not in fact return control to the caller of the function... (Is RETURN
NEXT QUERY too ugly a name?)

(2) RETURNS TABLE (...) syntax sugar for CREATE FUNCTION

This lets you write CREATE FUNCTION ... RETURNS TABLE (x int, y int)
as essentially syntax sugar for OUT parameters. The syntax is specified
by SQL:2003, so I think this feature is worth implementing.

When Pavel proposed this, the sticking point was whether RETURNS TABLE
(...) is truly just syntax sugar for OUT parameters, or whether it
should behave differently with regard to variables with the same name in
the function body:[1]

CREATE OR REPLACE FUNCTION foo(arg int) RETURNS TABLE (cust_id int) AS
$$
BEGIN
  RETURN QUERY (SELECT cust_id FROM tab WHERE some = arg);
END; $$ LANGUAGE plpgsql;

would cause a name collision if RETURNS TABLE were treated as syntax
sugar for OUT parameters. Pavel's patch fixes this by introducing a new
proargmode for RETURNS TABLE parameters. Tom objected to this on the
grounds that it could break user code that examines pg_proc.proargmode,
but I'm inclined to think that it is worth the trouble to avoid what
could be a common source of confusion.

Comments welcome; I'll submit revised patches for these features
shortly.

-Neil

[1] example stolen shamelessly from a prior mail from Pavel





Re: [HACKERS] stored procedure stats in collector

2007-07-13 Thread Neil Conway
On Fri, 2007-07-13 at 14:11 +0300, Martin Pihlak wrote:
 I'm working on a patch to extend the stats collector to handle stored 
 procedure
 statistics (call counts, duration etc). The goal is to make this information
 visible via pg_stat functions/views. The collection would be controllable via
 stats_function_level GUC and will have minimal overhead when turned off. At
 our company we depend heavily on stored procedures and such a tool would be of
 great help. Perhaps others could also find it somewhat useful.

Very cool, certainly sounds like a useful feature to me.

 martinp=# select * from pg_stat_user_functions ;
   procid | schemaname | procname | nargs | calls | total_time | total_cpu | self_time | self_cpu
  --------+------------+----------+-------+-------+------------+-----------+-----------+----------
    16388 | public     | f1       |     0 |  4000 |      14978 |      8352 |     14978 |     8352
    16389 | public     | f2       |     0 |  2000 |      40044 |      8364 |     25066 |       12
    16390 | public     | f3       |     0 |  1000 |      40054 |      8364 |         9 |        0
  (3 rows)

(schemaname, procname, nargs) is still ambiguous in the face of function
overloading. Although the presence of procid uniquely identifies each
function anyway, if you're going to include the name and argument
information, it might be worth including the argument types as well (as
an array of regtype, perhaps).

 Only functions with oid >= FirstNormalObjectId are accounted.

Sounds reasonable to me; adding profiling to every DirectFunctionCall
invocation is likely to be too expensive anyway.

From looking quickly at the patch, I don't think the current coding
handles set-returning functions (ExecMakeTableFunctionResult).

-Neil





Re: [HACKERS] non-blocking CREATE INDEX in 8.2??

2007-07-13 Thread Neil Conway
On Fri, 2007-07-13 at 15:38 -0500, Jim C. Nasby wrote:
 According to http://developer.postgresql.org/index.php/Feature_Matrix,
 8.2 has non-blocking CREATE INDEX, which is news to me. Is it correct?

http://www.postgresql.org/docs/8.2/static/sql-createindex.html

See the CONCURRENTLY clause.

-Neil





Re: [HACKERS] libedit-preferred by default

2007-05-30 Thread Neil Conway
On Mon, 2007-28-05 at 15:24 -0400, Tom Lane wrote:
 readline has much more functionality

Fair enough, that's probably a good enough reason to leave things as
they are for now.

-Neil





Re: [HACKERS] Padding on 64-bit

2007-05-29 Thread Neil Conway
On Tue, 2007-29-05 at 16:01 -0400, Tom Lane wrote:
 (I imagine someday we'll get around to allowing int8 to be pass-by-value
 on 64-bit platforms.)

This could really be a significant performance win: I'm planning to take
a look at doing it for 8.4.

-Neil





[HACKERS] libedit-preferred by default

2007-05-28 Thread Neil Conway
If both libedit and libreadline are available, we currently default to
linking against libreadline, unless the --with-libedit-preferred
configure option is used. Why do we default to preferring GNU readline
over libedit?

libedit is clearly preferable for license reasons. You could make the
argument that libedit is perhaps less stable than readline (although I'm
not aware of any outstanding issues with it), but (a) we need to support
libedit anyway, due to Darwin and NetBSD; (b) if there are any issues
with libedit, users can always specify --without-libedit-preferred; and
(c) those users with both libraries installed are presumably at least
somewhat satisfied with libedit.

-Neil





Re: [HACKERS] Async commands (like drop index)

2007-05-18 Thread Neil Conway
On Fri, 2007-18-05 at 11:47 -0500, Jim C. Nasby wrote:
 Assuming the concurrent psql stuff gets in, do you still see a use for
 this?

I think concurrent psql (and/or async libpq) is the right way to handle
this sort of requirement. DROP INDEX NOWAIT is hacky, and would be
difficult (impossible?) to implement in a reasonable manner: the backend
is fundamentally single-threaded. Also, how does the client learn when
the DROP INDEX actually finishes? The client would either need to poll
the database, or we'd need to implement something like select() --
neither is a very appealing alternative.

-1 from me: this functionality belongs on the client-side, where
asynchronous operations are much easier to manage.

-Neil





Re: [HACKERS] Async commands (like drop index)

2007-05-18 Thread Neil Conway
On Fri, 2007-18-05 at 13:29 -0400, Alvaro Herrera wrote:
 I think what Joshua really wants is an equivalent of this

That's not what his original email asked for, at any rate.

 start:
   BEGIN;
   LOCK TABLE foo IN ACCESS EXCLUSIVE MODE NOWAIT;
   -- if fail, rollback and go to start
   DROP INDEX foo_idx;
   COMMIT;
 
 The idea is that the lock is only acquired if immediately available,
 thus not blocking other queries which would otherwise be blocked behind
 the DROP INDEX.

ISTM this can easily be implemented with statement_timeout (which is
more general to boot).

-Neil





Re: [HACKERS] Signing off of patches (was Re: Not ready for 8.3)

2007-05-18 Thread Neil Conway
On Fri, 2007-18-05 at 21:59 -0400, Tom Lane wrote:
 I kinda think this is emphasizing the wrong end of the process.

I don't disagree, but I think a tool like this would still be enormously
helpful (to me, at any rate). While there's more to the process of
feature development than just mailing patch hunks back and forth, Review
Board seems a nice improvement for that part of the process.

-Neil





Re: [HACKERS] Integer datetimes

2007-05-16 Thread Neil Conway
On Wed, 2007-16-05 at 11:25 -0400, Bruce Momjian wrote:
 Are we agreed to wait for 8.4 for this?

Yes.

-Neil





Re: [HACKERS] Performance monitoring

2007-05-12 Thread Neil Conway
On Sat, 2007-12-05 at 14:26 -0700, Joshua D. Drake wrote:
 Either way, we are taking the hit, it is just a matter of where. IMO it 
 would be better to have the information in the database where it makes 
 sense, than pushing out to a log

If performance monitoring information is provided as a database object,
what would the right interface be? IMHO the problem with cleanly
presenting monitoring information within a normal database system is
that this sort of data is fundamentally dynamic and continuous: to
determine how the performance of the system changes over time, you need
to repeatedly rescan the table/view/SRF and recompute your analysis
essentially from scratch. Trying to get even simple information like
queries per second from pg_stat_activity is an example of how this can
be painful.
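
To make that concrete, here is a minimal sketch (plain Python; the
snapshot tuples are illustrative, not any real API) of the bookkeeping a
client must do by hand to turn a cumulative server-side counter into a
rate like queries per second:

```python
def queries_per_second(snap_a, snap_b):
    """Each snapshot is a (cumulative_query_count, unix_timestamp) pair,
    e.g. sampled periodically from a cumulative statistics counter.
    The client has to poll, remember the previous snapshot, and
    difference the two -- the server itself exposes no rates."""
    (count_a, t_a), (count_b, t_b) = snap_a, snap_b
    return (count_b - count_a) / (t_b - t_a)

# Two polls taken 10 seconds apart, 200 queries completed in between:
rate = queries_per_second((1000, 0.0), (1200, 10.0))  # 20.0 queries/sec
```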

<plug>
BTW, if the system included the concept of a continuous data *stream* as
a kind of database object, this problem would be much more tractable :)
In fact, there is some code in a version of TelegraphCQ that exposes
various information about the runtime state of the system as a set of
system-defined data streams -- like any other stream, users could then
use those streams in arbitrary queries.
</plug>

-Neil





[HACKERS] Planning large IN lists

2007-05-10 Thread Neil Conway
When planning queries with a large IN expression in the WHERE clause,
the planner transforms the IN list into a scalar array expression. In
clause_selectivity(), we estimate the selectivity of the ScalarArrayExpr
by calling scalararraysel(), which in turn estimates the selectivity of
*each* array element in order to determine the selectivity of the array
expression as a whole.

This is quite inefficient when the IN list is large. In a test case that
someone sent me privately, a simple query involving two cheap joins and
a ~1800 element IN list in the WHERE clause requires about 100ms to plan
but only ~10 ms to execute -- about 85% of the total runtime is spent in
scalararraysel(). (I'd include the profiling data, but KCacheGrind seems
stubbornly opposed to providing a textual summary of its results...)

Clearly, the current approach is fine when the array is small -- perhaps
for arrays above a certain number of elements, we could switch to
randomly sampling array elements, estimating their selectivities, and
then using that information to infer the estimated selectivity of the
entire array expression. That seems fairly crude, though: does anyone
have any better ideas?
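
As a rough illustration of the sampling idea (Python pseudocode with
hypothetical names; the real per-element estimation is done by
scalararraysel() in C, and this assumes the sampled elements are
representative of the whole list):

```python
import random

def estimate_in_list_selectivity(elements, element_sel, sample_size=100):
    """Estimate the selectivity of `col IN (e1, ..., eN)` without
    computing a per-element estimate for every element. element_sel
    stands in for the per-element estimate scalararraysel() derives.

    Treating the IN list as an OR of independent equality clauses, the
    combined selectivity is 1 - prod(1 - s_i); here every s_i is
    approximated by the mean over a random sample of elements."""
    if len(elements) <= sample_size:
        sample = elements
    else:
        sample = random.sample(elements, sample_size)
    mean_sel = sum(element_sel(e) for e in sample) / len(sample)
    return 1.0 - (1.0 - mean_sel) ** len(elements)
```

For a ~1800-element list this calls element_sel() at most 100 times
instead of 1800, trading planning time for some estimation error.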

-Neil





Re: [HACKERS] problem using semaphores

2007-05-08 Thread Neil Conway
On Wed, 2007-09-05 at 01:07 +0530, Raja Agrawal wrote:
 We are using the following piece of code for updating a list
 synchronously i.e. no two threads would update the list at a time.

Do you mean threads or processes?

 Is the following way of using semaphores not correct in the postgres
 environment?

Why can't you use LWLocks?

-Neil





Re: [HACKERS] Integer datetimes

2007-05-06 Thread Neil Conway
On Sun, 2007-06-05 at 13:09 -0400, Bruce Momjian wrote:
 Also, are we sure we can load a dump that used the float format?  What
 happens for a date out of int8 range?

AFAIK we should always be able to reload timestamp values that are in
the legal range for an int8-based timestamp. For values outside that
range, the restore will fail, just as it would if you tried to move an
application from PG 8.2 with float timestamps to PG 8.2 with integer
timestamps. The user can always reconfigure with
--disable-integer-datetimes.

-Neil





[HACKERS] Integer datetimes

2007-05-05 Thread Neil Conway
What is the reasoning behind having two different implementations of the
datetime types, with slightly different behavior? Do we intend to keep
supporting both FP- and integer-based datetimes indefinitely?

Clearly, there are some costs associated with maintaining two different
implementations:

(1) It means we need to maintain two sets of code, with a corresponding
increase in the maintenance burden, the probability of introducing bugs,
etc., and making datetime behavior more difficult to test.

(2) In general, I think it is a fundamentally *bad* idea to have the
semantics of a builtin data type differ subtly depending on the value of
a configure parameter. It makes writing portable applications more
difficult, and can introduce hard-to-fix bugs.

So, are there any corresponding benefits to providing both FP and
integer datetimes? AFAIK the following differences in user-visible
behavior exist:

* integer timestamps have the same precision over their entire range
  (microsecond precision), whereas FP timestamps do not. This is
  clearly an advantage for integer timestamps.

* integer timestamps have a smaller range than FP timestamps
  (294276 AD vs. 5874897 AD). Are there actually applications
  that use timestamps larger than 300,000 AD?
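
(As a back-of-the-envelope sanity check on that figure -- my own
arithmetic, not from the docs -- int64 microseconds counted from the
2000-01-01 epoch run out after roughly 292,000 years:)

```python
# Integer-based timestamps store microseconds relative to 2000-01-01.
max_us = 2**63 - 1                      # largest representable offset
seconds = max_us / 1_000_000
years = seconds / (365.25 * 24 * 3600)  # ~292,271 Julian years
approx_limit_ad = 2000 + years          # ~294,271 AD, in the same
                                        # ballpark as the 294276 AD limit
```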

Unless there are lots of applications that need timestamps over such a
large range, ISTM integer datetimes are the better long-term approach,
and I don't see how the FP-based datetime code justifies the maintenance
burden. Notably, the FP datetime code doesn't depend on having a
functional int64 type, but in 2007, are there really any platforms we
care about that don't have such a type?

Therefore, I propose that we make integer datetimes the default (perhaps
for 8.4), and then eventually remove the floating-point datetime code.

Comments?

-Neil

P.S. One thing to verify is that the performance of integer datetimes is
no worse than the perf. of FP datetimes. I'd intuitively expect this to
be true, but it would be worth investigating.





Re: [HACKERS] Integer datetimes

2007-05-05 Thread Neil Conway
On Sat, 2007-05-05 at 11:03 -0400, Tom Lane wrote:
 We've so far managed to avoid having any hard dependency on a working
 int64 type, but this would certainly be one.  I don't really think the
 code-size-reduction argument is strong enough to justify that.

What benefit do we get from avoiding this dependency? Can we really
avoid a dependency on a 64-bit integral type in the long run?

 I'm not necessarily opposed to changing the default configure selection,
 but I am opposed to removing the FP code entirely.

I would be satisfied with changing the default to integer and
deprecating the FP code (but keeping it around as a configure option).
Are there any objections to doing this for 8.3?

-Neil





Re: [HACKERS] Integer datetimes

2007-05-05 Thread Neil Conway
On Sat, 2007-05-05 at 20:52 -0400, Bruce Momjian wrote:
 What?  We don't pass float as a binary to clients.

Sure we do, if the client is sending or receiving data in binary format.

-Neil





Re: [HACKERS] [DOCS] row-level stats and last analyze time

2007-05-03 Thread Neil Conway
On Thu, 2007-26-04 at 18:07 -0400, Neil Conway wrote:
 (1) I believe the reasoning for Tom's earlier change was not to reduce
 the I/O between the backend and the pgstat process [...]

Tom, any comments on this? Your change introduced an undocumented
regression into 8.2. I think you're on the hook for a documentation
update at the very least, if not a revert.

-Neil





Re: [HACKERS] RETURN QUERY in PL/PgSQL?

2007-05-03 Thread Neil Conway
Pavel, my apologies for not getting back to you sooner.

On Wed, 2007-25-04 at 07:12 +0200, Pavel Stehule wrote:
 example: I have table with attr. cust_id, and I want to use parametrized 
 view (table function) where I want to have attr cust_id on output.

Hmm, I see your point. I'm personally satisfied with adding a new
proargmode to solve this as you suggest.

 RETURN TABLE is specified in std, and it's last statement.

Where is RETURN TABLE defined in the standard? The only reference to
TABLE I can see is as a multiset value constructor (section 6.39 in
the current SQL 200n draft). That would allow RETURN TABLE(...), but it
would also allow TABLE(...) to be used in other contexts. I think
the right place to implement TABLE(...) per the spec would be in the
backend, as part of an implementation of the standard's multiset
concept.

Therefore, we probably should *not* use RETURN TABLE in PL/PgSQL, since
it would induce confusion if we ever do a proper multiset
implementation.

-Neil





Re: [HACKERS] pgsql crollable cursor doesn't support one form of postgresql's cu

2007-04-28 Thread Neil Conway
On Fri, 2007-04-27 at 07:36 +0200, Pavel Stehule wrote:
 it's true. There is bug. I'll send actualised version tomorrow.

No need: I fixed the bug and applied the patch. Thanks for the patch.

-Neil





Re: [HACKERS] RESET command seems pretty disjointed now

2007-04-26 Thread Neil Conway
On Tue, 2007-04-24 at 18:04 +0300, Marko Kreen wrote:
 Attached patch addresses all 3 comments.  As it will be
 top-level command, I put code into commands/discard.c

Applied with some minor tweaks -- thanks for the patch. I didn't bother
moving the regression tests out of guc.sql, although they don't really
belong there any longer.

-Neil





Re: [HACKERS] [DOCS] row-level stats and last analyze time

2007-04-26 Thread Neil Conway
On Tue, 2007-04-24 at 17:38 -0400, Neil Conway wrote:
 which included other modifications to reduce the pgstat I/O volume in
 8.1. I don't think this particular change was wise

I looked into this a bit further:

(1) I believe the reasoning for Tom's earlier change was not to reduce
the I/O between the backend and the pgstat process: it was to keep the
in-memory stats hash tables small, and to reduce the amount of data that
needs to be written to disk. When the only stats messages we get for a
table are VACUUM or ANALYZE messages, we discard the message in the
pgstat daemon.

(2) If stats_row_level is false, there won't be a stats hash entry for
any tables, so we can skip sending the VACUUM or ANALYZE message in the
first place, by the same logic. (This is more debatable if the user just
disabled stats_row_level for the current session, although since only a
super-user can do that, perhaps that's OK.)

(3) I don't like the fact that the current coding is so willing to throw
away VACUUM and ANALYZE pgstat messages. I think it is quite plausible
that the DBA might be interested in the last-VACUUM and last-ANALYZE
information for a table which hasn't had live operations applied to it
recently. The rest of the pgstat code has a similarly disappointing
willingness to silently discard messages it doesn't think are worth
keeping (e.g. pgstat_recv_autovac() is ignored for databases with no
other activity, and pgstat_count_xact_commit/rollback() is a no-op
unless *either* row-level or block-level stats are enabled.)

If we're so concerned about saving space in the stats hash tables for
tables that don't see non-VACUUM / non-ANALYZE activity, why not arrange
to record the timestamps for database-wide VACUUMs and ANALYZEs
separately from table-local VACUUMs and ANALYZEs? That is, a table's
last_vacuum time could effectively be the max of the last database-wide
vacuum time and the last VACUUM on that particular table. (Recording the
time of the last database-wide VACUUM might be worth doing anyway, e.g.
for avoiding wraparound failure).

Comments?

-Neil





Re: [HACKERS] pgsql crollable cursor doesn't support one form of postgresql's cu

2007-04-26 Thread Neil Conway
I haven't read the rest of the thread yet, but is this hunk not buggy?
yylex() is side-effecting, so the two calls to yylex() do not do what
the comment suggests.

 *** 2083,2091 ****
   check_FROM = false;
   }
 
 ! /* check FROM keyword after direction's specification */
 ! if (check_FROM && yylex() != K_FROM)
 ! yyerror("expected \"FROM\"");
 
   return fetch;
   }
 --- 2089,2097 ----
   check_FROM = false;
   }
 
 ! /* check FROM or IN keyword after direction's specification */
 ! if (check_FROM && (yylex() != K_FROM && yylex() != K_IN))
 ! yyerror("expected \"FROM/IN\"");
 
   return fetch;

-Neil





Re: [HACKERS] temporal variants of generate_series()

2007-04-25 Thread Neil Conway
On Thu, 2007-04-12 at 14:56 -0700, Andrew Hammond wrote:
 I've written the following function definitions to extend
 generate_series to support some temporal types (timestamptz, date and
 time). Please include them if there's sufficient perceived need or
 value.

I could see these being useful, but a PL/PgSQL implementation is not
eligible for inclusion in the core backend (since PL/PgSQL is not
enabled by default).

-Neil





Re: [HACKERS] RETURN QUERY in PL/PgSQL?

2007-04-24 Thread Neil Conway
On Mon, 2007-04-23 at 17:48 -0400, Tom Lane wrote:
 I think we've got something isomorphic to that in the patch queue
 already --- take a look at Pavel's table function patch.  It's in
 need of cleanup but I think it will make it in.

Interesting -- I missed that patch, but it seems like a better approach.
Are you already reviewing Pavel's patch, or is it something I could take
a look at?

-Neil





Re: [HACKERS] RETURN QUERY in PL/PgSQL?

2007-04-24 Thread Neil Conway
On Tue, 2007-04-24 at 07:58 +0200, Pavel Stehule wrote:
 It is RETURN TABLE(SQL) via ANSI SQL 2003

I think there are two basically orthogonal features in the patch: the
RETURNS TABLE addition to CREATE FUNCTION, and the RETURN TABLE
statement in PL/PgSQL. The former is specified by the SQL standard and
is applicable to all PLs, while the latter is syntax sugar for PL/PgSQL.
I think it would make sense to split the patch into two separate
patches, one for each feature.

I'm inclined to agree with Tom that adding PROARGMODE_TABLE isn't worth
the trouble: making RETURNS TABLE(...) equivalent to RETURNS SETOF
RECORD with OUT parameters strikes me as more elegant. I didn't really
understand the name collision argument you made earlier[1]; can you
elaborate?

Another question is how RETURN NEXT and RETURN TABLE should interact (in
PL/PgSQL). I think the two sensible choices are to either disallow a
function from using both statements (which is what the patch currently
does), or allow both statements to be used, and have RETURN TABLE *not*
return from the function -- both RETURN TABLE and RETURN NEXT would
append results to the function's result tuplestore. The latter seems
more flexible.

Do we need the extra set of parentheses in RETURN TABLE? To use one of
your earlier examples:

CREATE FUNCTION fooff(a int)
RETURNS TABLE(a int, b int) AS $$
BEGIN
RETURN TABLE(SELECT * FROM Foo WHERE x  a);
END; $$ LANGUAGE plpgsql;

RETURN TABLE SELECT ... ; should be sufficient to allow correct
parsing, and is more consistent with the lack of parentheses in the
other RETURN variants.

-Neil

[1] http://archives.postgresql.org/pgsql-patches/2007-04/msg00311.php




[HACKERS] row-level stats and last analyze time

2007-04-24 Thread Neil Conway
[ CC'ing -hackers ]

On Sun, 2007-04-22 at 16:10 +0200, Guillaume Lelarge wrote:
 This patch adds a sentence on monitoring.sgml explaining that 
 stats_row_level needs to be enabled if user wants to get last 
 vacuum/analyze execution time.

This behavior was introduced in r1.120 of postmaster/pgstat.c:

Modify pgstats code to reduce performance penalties from
oversized stats data files: avoid creating stats
hashtable entries for tables that aren't being touched
except by vacuum/analyze [...]

which included other modifications to reduce the pgstat I/O volume in
8.1. I don't think this particular change was wise: the reduction in
pgstat volume is pretty marginal, and it is counter-intuitive for
stats_row_level to affect whether the last ANALYZE / VACUUM is recorded.
(Plus, the optimization is not even enabled with the default
postgresql.conf settings.)

-Neil





Re: [HACKERS] RESET command seems pretty disjointed now

2007-04-23 Thread Neil Conway
On Tue, 2007-04-17 at 16:34 +0300, Marko Kreen wrote:
 Attached patch does following conversions:

ISTM it would be cleaner to use an enum to identify the different
variants of the DISCARD command, rather than a character string.

Is guc.c still the logical place for the implementation of DISCARD?
Something under backend/commands might be better, although I don't see a
real obvious place for it.

The psql tab completion code requires updating for the new DISCARD
command.

-Neil





Re: [HACKERS] Improving deadlock error messages

2007-04-23 Thread Neil Conway
On Sat, 2007-04-21 at 19:43 -0400, Neil Conway wrote:
 Attached is a very quick hack of a patch to do this.

Does anyone have any feedback on this approach? If people are satisfied
with this solution, I can get a cleaned up patch ready to apply shortly.

-Neil





[HACKERS] RETURN QUERY in PL/PgSQL?

2007-04-23 Thread Neil Conway
In a PL/PgSQL set-returning function, returning the result set of a
query requires a FOR loop and repeated invocations of the RETURN NEXT
statement:

FOR x in SELECT ... LOOP
RETURN NEXT x;
END LOOP;

This works, but it seems overly verbose. It occurred to me that we could
easily add a new PL/PgSQL statement that evaluates a set-returning
expression and adds *all* the resulting rows to the function's result
set. For example:

RETURN QUERY SELECT ...;

I'm not sure of the right name: RETURN ROWS or RETURN ALL might also
work. Of course, this is syntax sugar (and superficial sugar at that),
but I believe this is a fairly common requirement.

Comments?

-Neil





Re: [HACKERS] Improving deadlock error messages

2007-04-23 Thread Neil Conway
On Mon, 2007-04-23 at 17:38 -0400, Tom Lane wrote:
 I'm really still opposed to the entire concept.  You're proposing to put
 a lot of fragile-looking code into a seldom-exercised error path.

There's certainly not a lot of code: the patch just adds a few
syscache lookups, wrapped in a PG_LOCK_NOWAIT() block.

As for fragility, I think the important point is whether it's safe to
siglongjmp() out of LockAcquire(); the rest is just window dressing.

-Neil





Re: [HACKERS] Improving deadlock error messages

2007-04-21 Thread Neil Conway
On Sat, 2007-04-21 at 02:38 -0400, Tom Lane wrote:
 Maybe so, but you're going to be writing quite a lot of duplicative
 code, because the existing routines you might have been thinking of
 using (lsyscache.c etc) don't behave that way.

Right, I'm envisioning doing a conditional LockAcquire and then
heap_open() / heap_getnext() by hand. That will be relatively slow, but
code that emits a deadlock error message is almost by definition not
performance critical.

BTW, another alternative would be to set a global variable instructing
LockAcquire() to not block waiting for a lock; instead, it would
longjmp(), a la elog(ERROR). You could even construct something similar
to PG_TRY():

PG_COND_LOCK();
{
/* do various things that might acquire lmgr locks */
}
PG_ACQUIRE_FAILED()
{
/* failed to acquire an lmgr lock */
}
PG_END_COND_LOCK();

The risk would be leaving the LockAcquire() call site in an inconsistent
state when we longjmp(), but since DeadLockReport() is going to
ereport(ERROR) anyway, it might be sufficiently safe. This scheme does
seem a bit fragile, though...
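
For what it's worth, here is a minimal standalone sketch of that macro
scheme, modeled on the shape of PG_TRY()/PG_CATCH(). All names and the
mock LockAcquire() are hypothetical, not real PostgreSQL APIs:

```c
#include <setjmp.h>
#include <stddef.h>

/* Where a failed conditional acquire should longjmp() back to. */
static jmp_buf *cond_lock_buf = NULL;

#define PG_COND_LOCK() \
	do { \
		jmp_buf _local_buf; \
		jmp_buf *_save_buf = cond_lock_buf; \
		cond_lock_buf = &_local_buf; \
		if (setjmp(_local_buf) == 0) \
		{
#define PG_ACQUIRE_FAILED() \
		} \
		else \
		{
#define PG_END_COND_LOCK() \
		} \
		cond_lock_buf = _save_buf; \
	} while (0)

/* Mock LockAcquire(): escapes via longjmp() when the lock is taken. */
static void mock_lock_acquire(int available)
{
	if (!available && cond_lock_buf != NULL)
		longjmp(*cond_lock_buf, 1);
}

/* Returns 1 if the acquire path failed, 0 if it succeeded. */
static int try_locks(int available)
{
	volatile int failed = 0;

	PG_COND_LOCK();
	{
		mock_lock_acquire(available);
		/* ...syscache lookups etc. would go here... */
	}
	PG_ACQUIRE_FAILED();
	{
		failed = 1;
	}
	PG_END_COND_LOCK();

	return failed;
}
```

The saved/restored cond_lock_buf makes the blocks nestable, just as
PG_exception_stack does for PG_TRY(); the fragility concern above is
about what state LockAcquire() leaves behind when it is longjmp'd out
of, which no amount of macro dressing addresses.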

-Neil




