Re: [HACKERS] including backend ID in relpath of temp rels - updated patch

2010-07-29 Thread Jaime Casanova
On Sun, Jul 25, 2010 at 4:32 PM, Robert Haas robertmh...@gmail.com wrote:
 On Sun, Jul 25, 2010 at 2:37 AM, Jaime Casanova ja...@2ndquadrant.com wrote:
 but I have a few questions; maybe what you did is right, I only want to
 understand it:
 - you added this in include/storage/smgr.h, so why is it safe to assume
 that if the backend != InvalidBackendId it must be a temp relation?

 +#define SmgrIsTemp(smgr) \
 +   ((smgr)->smgr_rnode.backend != InvalidBackendId)

 That's pretty much the whole point of the patch.  Instead of
 identifying relations as simply temporary or not temporary, we
 identify them as a temporary relation owned by backend X or as not
 temporary.
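
(For illustration, a minimal C sketch of that identification scheme; the
field layout here is an assumption based on the patch discussion, not
necessarily the committed headers.)

typedef int BackendId;

#define InvalidBackendId        (-1)

/* Assumed layout; the real struct also carries spcNode/dbNode/relNode. */
typedef struct RelFileNodeBackend
{
    BackendId   backend;        /* owning backend, or InvalidBackendId */
} RelFileNodeBackend;

/* A relation is temporary iff some specific backend owns it. */
#define RelFileNodeBackendIsTemp(rnode) \
    ((rnode).backend != InvalidBackendId)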


OK, this one seems good enough... I'm marking it as ready for committer

-- 
Jaime Casanova         www.2ndQuadrant.com
PostgreSQL support and training

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] page corruption on 8.3+ that makes it to standby

2010-07-29 Thread Simon Riggs
On Wed, 2010-07-28 at 14:22 -0700, Jeff Davis wrote:
 On Wed, 2010-07-28 at 15:37 -0400, Tom Lane wrote:
  So nevermind that distraction.  I'm back to thinking that fix1 is
  the way to go.
 
 Agreed.
 
 It's uncontroversial to have a simple guard against corrupting an
 uninitialized page, and uncontroversial is good for things that will be
 back-patched.

Still don't understand why we would not initialize such pages. If we're
copying a relation we must know enough about it to init a page.

-- 
 Simon Riggs   www.2ndQuadrant.com
 PostgreSQL Development, 24x7 Support, Training and Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [JDBC] Trouble with COPY IN

2010-07-29 Thread Matthew Wakeling
(Yes, I know I'm not on the hackers list. Most interested parties should 
get this directly anyway.)


Additionally the interface exposed by the JDBC driver lets the user 
write arbitrary CopyData bytes to the server, so without parsing all of 
that we don't know whether they've issued CopyData(EOF) or not.


Okay, so you can't know with absolute certainty without parsing the 
data, but the usual case would be handled by holding onto the last-N 
bytes or so. Enough to fit the EOF and perhaps a little more for 
paranoia's sake.
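
(For context, the binary COPY trailer is a 16-bit word holding -1, i.e.
the bytes 0xFF 0xFF. A minimal C sketch of the last-N-bytes idea follows;
the struct and function names are invented, and a real implementation
would also need to know it is past the header and not inside a field
value.)

#include <stdbool.h>
#include <stddef.h>

typedef struct CopyTail
{
    unsigned char last2[2];     /* most recent two bytes seen */
    size_t        seen;         /* total bytes observed */
} CopyTail;

static void
copy_tail_feed(CopyTail *t, const unsigned char *buf, size_t len)
{
    for (size_t i = 0; i < len; i++)
    {
        t->last2[0] = t->last2[1];
        t->last2[1] = buf[i];
        t->seen++;
    }
}

/* True if the bytes seen so far end with the binary-COPY trailer. */
static bool
copy_tail_saw_eof(const CopyTail *t)
{
    return t->seen >= 2 && t->last2[0] == 0xFF && t->last2[1] == 0xFF;
}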


That's not to say that I'm missing the problem. When (not if, when) 
the user feeds data past a CopyData(EOF), it's going to get interesting.


This is the reason why the patch to the JDBC driver that I sent in is very 
fragile. In the case where a user provides a binary copy with lots of data 
after the EOF, the processCopyData method *will* get called after the 
CommandComplete and ReadyForQuery messages have been received, even if we 
try to delay processing of the ReadyForQuery message.


[Thinking about the logic necessary to handle such a case and avoid 
network buffer deadlock...] I would think the least invasive way to 
handle it would be to set the CommandComplete and ReadyForQuery messages 
aside when they are received if CopyDone hasn't been sent, continue the 
COPY operation as usual until it is shut down, send CopyDone and, 
finally, reinstate CommandComplete and RFQ as if they were just 
received.


Basically, yes. We need to introduce a little more state into the JDBC 
driver. Currently, the driver is in one of two states:


1. In the middle of a copy.
2. Not in a copy.

These states are recorded in the lock system. We need to introduce a new 
state, where the copy is still locked, but we know that the 
CommandComplete and ReadyForQuery messages have been received. We can no 
longer unlock the copy in processCopyData - we need to do that in endCopy 
instead, after calling processCopyData to ensure that we wait for a valid 
CommandComplete and ReadyForQuery message first.
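
(A sketch of that third state, written as a C enum purely for
illustration even though the driver is Java; the real driver tracks this
in its lock system rather than an explicit enum.)

typedef enum CopyState
{
    COPY_NOT_ACTIVE,        /* not in a copy */
    COPY_ACTIVE,            /* in the middle of a copy */
    COPY_RESULTS_PENDING    /* copy still locked, but CommandComplete and
                             * ReadyForQuery have already been received;
                             * the unlock happens in endCopy, not in
                             * processCopyData */
} CopyState;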


Matthew

--
Terrorists evolve but security is intelligently designed?  -- Jake von Slatt

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [GENERAL] Incorrect FTS result with GIN index

2010-07-29 Thread Oleg Bartunov

Tom,

we're not able to work on this right now, so go ahead if you have time.
I also wonder why I got the right result :) Just repeated the query:

test=# select count(*) from search_tab where (to_tsvector('german', keywords ) @@ 
to_tsquery('german', 'ee:* & dd:*'));
 count 
-------
   123
(1 row)

Time: 26.185 ms


Oleg
On Wed, 28 Jul 2010, Tom Lane wrote:


Oleg Bartunov o...@sai.msu.su writes:

you can download dump http://mira.sai.msu.su/~megera/tmp/search_tab.dump


Hmm ... I'm not sure why you're failing to reproduce it, because it's
falling over pretty easily for me.  After poking at it for awhile,
I am of the opinion that scanGetItem's handling of multiple keys is
fundamentally broken and needs to be rewritten completely.  The
particular case I'm seeing here is that one key returns this sequence of
TIDs/lossy flags:

...
1085/4 0
1086/65535 1
1087/4 0
...

while the other one returns this:

...
1083/11 0
1086/6 0
1086/10 0
1087/10 0
...

and what comes out of scanGetItem is just

...
1086/6 1
...

because after returning that, on the next call it advances both input
keystreams.  So 1086/10 should be visited and is not.

I think that depending on the previous entryRes state to determine what
to do is basically unworkable, and what should probably be done instead
is to remember the last-returned TID and advance keystreams with TIDs <=
that.  I haven't quite thought through how that should interact with
lossy-page TIDs but it seems more robust than what we've got.
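
(A hedged C sketch of the advance rule being proposed; it assumes the
backend headers for ItemPointerData/ItemPointerCompare, while KeyStream
and streamStepForward are invented stand-ins, and lossy-page handling is
omitted.)

typedef struct KeyStream        /* invented; stands in for GIN's
                                 * per-key scan state */
{
    ItemPointerData curItem;
    bool            done;
} KeyStream;

static ItemPointerData lastReturned;    /* TID most recently returned */

/*
 * Advance every keystream whose current item is <= the last-returned
 * TID, so no stream can be stepped past an unvisited match the way
 * 1086/10 was skipped above.
 */
static void
advanceKeyStreams(KeyStream *streams, int nstreams)
{
    for (int i = 0; i < nstreams; i++)
    {
        while (!streams[i].done &&
               ItemPointerCompare(&streams[i].curItem, &lastReturned) <= 0)
            streamStepForward(&streams[i]);     /* invented helper */
    }
}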

I'm also noticing that the ANDing behavior for the 'ee:* & dd:*' query
style seems very much stupider than it needs to be --- it's returning
lossy pages that very obviously don't need to be examined because the
other keystream has no match at all on that page.  But I haven't had
time to probe into the reason why.

I'm out of time for today, do you want to work on it?

regards, tom lane



Regards,
Oleg
_
Oleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),
Sternberg Astronomical Institute, Moscow University, Russia
Internet: o...@sai.msu.su, http://www.sai.msu.su/~megera/
phone: +007(495)939-16-83, +007(495)939-23-83

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] patch for check constraints using multiple inheritance

2010-07-29 Thread Henk Enting
Hi,

We ran into a problem on 9.0beta3 with check constraints using table
inheritance in a multi-level hierarchy with multiple inheritance.

A test script is provided below and a proposed patch is attached to this
email.

Regards,

Henk Enting, Yeb Havinga
MGRID B.V.
http://www.mgrid.net





/*

First, create a local inheritance structure:

level_0_parent
level_0_child inherits level_0_parent

This structure is the base level. The table definition and also check
constraints are defined on this level.

Add two levels that inherit this structure:

level_1_parent inherits level_0_parent
level_1_child inherits level_1_parent, level_0_child

level_2_parent inherits level_1_parent
level_2_child inherits level_2_parent, level_1_child

BTW: there is a reason that we want e.g. level_1_child to inherit from
both level_1_parent and level_0_child:
we want the data of level_1_child to be visible in both level_0_child
and level_1_parent

*/
DROP SCHEMA IF EXISTS test_inheritance CASCADE;
CREATE SCHEMA test_inheritance;
SET search_path TO test_inheritance;

CREATE TABLE level_0_parent (i int);
CREATE TABLE level_0_child (a text) INHERITS (level_0_parent);

CREATE TABLE level_1_parent() INHERITS (level_0_parent);
CREATE TABLE level_1_child() INHERITS (level_0_child, level_1_parent);

CREATE TABLE level_2_parent() INHERITS (level_1_parent);
CREATE TABLE level_2_child() INHERITS (level_1_child, level_2_parent);


-- Now add a check constraint on the top level table:
ALTER TABLE level_0_parent ADD CONSTRAINT a_check_constraint CHECK (i IN
(0,1));


/*
Check the coninhcount attribute of pg_constraint

Doxygen says this about the parameter:
coninhcount: Number of times inherited from direct parent relation(s)

On our machine (running 9.0beta3) the query below returns a
coninhcount of 3 for the level_2_child table.

This doesn't seem correct because the table only has two direct
parents.
*/


SELECT t.oid, t.relname, c.coninhcount
FROM pg_class t
JOIN pg_constraint c ON (c.conrelid = t.oid)
JOIN pg_namespace n ON (t.relnamespace = n.oid)
WHERE n.nspname = 'test_inheritance'
ORDER BY t.oid;

-- Next, drop the constraint on the top level table

ALTER TABLE level_0_parent DROP CONSTRAINT a_check_constraint;

/*

The constraint should now be dropped from all the tables in the
hierarchy, but the constraint hasn't been dropped on the level_2_child
table. It is still there and has a coninhcount of 1.

*/

SELECT t.oid, t.relname, c.conname, c.coninhcount
FROM pg_class t
JOIN pg_constraint c ON (c.conrelid = t.oid)
JOIN pg_namespace n ON (t.relnamespace = n.oid)
WHERE n.nspname = 'test_inheritance'
ORDER BY t.oid;

/*
Trying to drop this constraint that shouldn't be there anymore won't work.

The drop constraint statement below returns:
ERROR:  cannot drop inherited constraint "a_check_constraint" of
relation "level_2_child"

NB after fixing this bug, the statement should return a
"constraint does not exist" error
*/

ALTER TABLE level_2_child DROP CONSTRAINT a_check_constraint;

diff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c
index 5f6fe41..d23dcdc 100644
--- a/src/backend/commands/tablecmds.c
+++ b/src/backend/commands/tablecmds.c
@@ -96,6 +96,17 @@ typedef struct OnCommitItem
SubTransactionId deleting_subid;
 } OnCommitItem;
 
+/*
+ * Visit information needed to prevent redundant constraint merges. This
+ * structure is needed to prevent faulty increments of coninhcount in the case
+ * of a multiple inheritance tree that has multiple paths to a parent.
+ */
+typedef struct ParentVisit
+{
+   Oid   parent;
+   Oid   child;
+} ParentVisit;
+
 static List *on_commits = NIL;
 
 
@@ -300,7 +311,7 @@ static void ATExecAddConstraint(List **wqueue,
 static void ATAddCheckConstraint(List **wqueue,
 AlteredTableInfo *tab, Relation rel,
 Constraint *constr,
-bool recurse, bool recursing);
+bool recurse, bool recursing, List *visited);
 static void ATAddForeignKeyConstraint(AlteredTableInfo *tab, Relation rel,
  Constraint *fkconstraint);
 static void ATExecDropConstraint(Relation rel, const char *constrName,
@@ -4584,7 +4595,7 @@ ATExecAddConstraint(List **wqueue, AlteredTableInfo *tab, Relation rel,
{
case CONSTR_CHECK:
 ATAddCheckConstraint(wqueue, tab, rel,
-newConstraint, recurse, false);
+newConstraint, recurse, false, NIL);
break;
 
case CONSTR_FOREIGN:
@@ -4639,7 +4650,7 @@ ATExecAddConstraint(List **wqueue, AlteredTableInfo *tab, Relation rel,
  */
 static void
 ATAddCheckConstraint(List **wqueue, AlteredTableInfo *tab, Relation rel,
-
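
(The digest truncates the patch here. As a hedged C sketch of the
visited-list idea the ParentVisit struct supports, with an invented
helper name; the list calls are PostgreSQL's real List API, and
postgres.h plus nodes/pg_list.h are assumed.)

/*
 * Sketch: record each (parent, child) edge as the recursion walks it.
 * If the edge was already visited via another inheritance path, skip
 * it, so coninhcount is incremented once per direct parent rather than
 * once per path through the multiple-inheritance tree.
 */
static bool
constraint_edge_seen(List **visited, Oid parent, Oid child)
{
    ListCell   *lc;
    ParentVisit *v;

    foreach(lc, *visited)
    {
        v = (ParentVisit *) lfirst(lc);
        if (v->parent == parent && v->child == child)
            return true;        /* already walked via another path */
    }
    v = (ParentVisit *) palloc(sizeof(ParentVisit));
    v->parent = parent;
    v->child = child;
    *visited = lappend(*visited, v);
    return false;
}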

Re: [HACKERS] page corruption on 8.3+ that makes it to standby

2010-07-29 Thread Robert Haas
On Thu, Jul 29, 2010 at 4:58 AM, Simon Riggs si...@2ndquadrant.com wrote:
 On Wed, 2010-07-28 at 14:22 -0700, Jeff Davis wrote:
 On Wed, 2010-07-28 at 15:37 -0400, Tom Lane wrote:
  So nevermind that distraction.  I'm back to thinking that fix1 is
  the way to go.

 Agreed.

 It's uncontroversial to have a simple guard against corrupting an
 uninitialized page, and uncontroversial is good for things that will be
 back-patched.

 Still don't understand why we would not initialize such pages. If we're
 copying a relation we must know enough about it to init a page.

Well, I don't see why we'd want to do that.  As Jeff Davis pointed
out, if someone asks to move a table to a different tablespace,
changing the contents as we go along seems a bit off-topic.  But the
bigger problem is you haven't explained how you think we could
determine what initialization ought to be performed.  There's no
index-AM API that says "initialize this page".  I suppose we could
invent one if there were some benefit, but we couldn't very well
back-patch such a thing to 8.0.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] review: xml_is_well_formed

2010-07-29 Thread Pavel Stehule
Hello

I looked at the patch
https://commitfest.postgresql.org/action/patch_view?id=334 . This patch
moves the function xml_is_well_formed from contrib/xml2 to core.

*  Is the patch in context diff format?
yes

* Does it apply cleanly to the current CVS HEAD?
yes

* Does it include reasonable tests, necessary doc patches, etc?
yes

*  Does the patch actually implement that?
yes

* Do we want that?
yes

* Do we already have it?
yes - simplified version in core

* Does it follow SQL spec, or the community-agreed behavior?
no - I didn't find any resources about conformance with the SQL spec, but
it behaves the same as the original contrib function

* Does it include pg_dump support (if applicable)?
not related

* Are there dangers?
no

* Are there any assertion failures or crashes?

not found

I have a few issues:
* broken regression test (Fedora 13 - xmllint: using libxml version 20707)

postgres=# SELECT xml_is_well_formed('<pg:foo
xmlns:pg="http://postgresql.org/stuff">bar</pg:foo>');
 xml_is_well_formed
--------------------
 f
(1 row)

this XML is broken - but the regression tests treat it as OK

[pa...@pavel-stehule ~]$ xmllint xxx
xxx:1: parser error : error parsing attribute name
<pg:foo xmlns:pg="http://postgresql.org/stuff">bar</pg:foo>

* xml_is_well_formed returns true for simple text

postgres=# SELECT xml_is_well_formed('');
 xml_is_well_formed
--------------------
 t
(1 row)

this is probably a wrong result - is it OK?

* I don't understand this fragment

   PG_TRY();
+   {
+       size_t  count;
+       xmlChar    *version = NULL;
+       int     standalone = -1;
+
+       res_code = parse_xml_decl(string, &count, &version, NULL, &standalone);
+       if (res_code != 0)
+           xml_ereport_by_code(ERROR, ERRCODE_INVALID_XML_CONTENT,
+                               "invalid XML content: invalid XML declaration",
+                               res_code);
+
+       doc = xmlNewDoc(version);
+       doc->encoding = xmlStrdup((const xmlChar *) "UTF-8");
+       doc->standalone = 1;

why? This function can raise an exception when the declaration is wrong.
That seems wrong - I think this function should return false instead of
raising an exception.
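
(A hedged sketch of the suggested behavior, using the real
PG_TRY/PG_CATCH/FlushErrorState machinery but an invented function body;
parse_whole_value is a hypothetical helper standing in for the actual
parse step, and the backend fmgr headers are assumed.)

Datum
xml_is_well_formed(PG_FUNCTION_ARGS)
{
    text   *data = PG_GETARG_TEXT_PP(0);
    bool    result = true;

    PG_TRY();
    {
        /* attempt a full parse of the document/content here */
        parse_whole_value(data);        /* hypothetical helper */
    }
    PG_CATCH();
    {
        FlushErrorState();              /* swallow the parse error... */
        result = false;                 /* ...and report "not well formed" */
    }
    PG_END_TRY();

    PG_RETURN_BOOL(result);
}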


Regards

Pavel Stehule

postgres=# select version();
                                   version
---------------------------------------------------------------------------
 PostgreSQL 9.1devel on x86_64-unknown-linux-gnu, compiled by GCC gcc
 (GCC) 4.4.4 20100630 (Red Hat 4.4.4-10), 64-bit
(1 row)

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [GENERAL] Incorrect FTS result with GIN index

2010-07-29 Thread Tom Lane
Oleg Bartunov o...@sai.msu.su writes:
 I also wonder why did I get right result :) Just repeated the query:

 test=# select count(*) from search_tab where (to_tsvector('german', keywords 
 ) @@ to_tsquery('german', 'ee:* & dd:*'));
  count 
 -------
    123
 (1 row)

Yeah, that case works (though I think it's unnecessarily slow).  The one
that gives the wrong answer is the equivalent form with two AND'ed @@
operators.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] page corruption on 8.3+ that makes it to standby

2010-07-29 Thread Tom Lane
Robert Haas robertmh...@gmail.com writes:
 On Thu, Jul 29, 2010 at 4:58 AM, Simon Riggs si...@2ndquadrant.com wrote:
 Still don't understand why we would not initialize such pages. If we're
 copying a relation we must know enough about it to init a page.

 Well, I don't see why we'd want to do that.  As Jeff Davis pointed
 out, if someone asks to move a table to a different tablespace,
 changing the contents as we go along seems a bit off-topic.  But the
 bigger problem is you haven't explained how you think we could
 determine what initialization ought to be performed.  There's no
 index-AM API that says initialize this page.  I suppose we could
 invent one if there were some benefit, but we couldn't very well
 back-patch such a thing to 8.0.

Yeah.  And you really would have to get the AM involved.  Even if you
were willing to assume that you knew the special-space size for a
particular index type, it would not fly to assume that the special space
doesn't require initialization to some nonzero content.
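
(For illustration, a hedged btree-flavored sketch of why this is
AM-specific; abbreviated from what _bt_pageinit and related code actually
do, and assuming the backend bufpage and nbtree headers.)

static void
btree_page_init_sketch(Page page)
{
    BTPageOpaque opaque;

    /* Only the AM knows the special-space size for its pages... */
    PageInit(page, BLCKSZ, sizeof(BTPageOpaqueData));

    /* ...and that some of the special space must be nonzero to be valid. */
    opaque = (BTPageOpaque) PageGetSpecialPointer(page);
    opaque->btpo_prev = InvalidBlockNumber;
    opaque->btpo_next = InvalidBlockNumber;
}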

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] page corruption on 8.3+ that makes it to standby

2010-07-29 Thread Robert Haas
On Wed, Jul 28, 2010 at 5:22 PM, Jeff Davis pg...@j-davis.com wrote:
 On Wed, 2010-07-28 at 15:37 -0400, Tom Lane wrote:
 So nevermind that distraction.  I'm back to thinking that fix1 is
 the way to go.

 Agreed.

 It's uncontroversial to have a simple guard against corrupting an
 uninitialized page, and uncontroversial is good for things that will be
 back-patched.

Here's a version of Jeff's fix1 patch (with a trivial change to the
comment) that applies to HEAD, REL9_0_STABLE, REL8_4_STABLE, and
REL8_3_STABLE; a slightly modified version that applies to
REL8_2_STABLE; and another slightly modified version that applies to
REL8_1_STABLE and REL8_0_STABLE.  REL7_4_STABLE doesn't have
tablespaces, so the problem can't manifest there, I think.
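
(For reference, a hedged paraphrase of what the fix1 guard does, not the
literal committed diff; it assumes the backend bufpage and xlog headers
of that era.)

static void
stamp_copied_page_sketch(Page page, XLogRecPtr recptr)
{
    /*
     * The guard: never set LSN/TLI on an all-zeroes page.  A zeroed page
     * with a nonzero LSN/TLI fails the page-header sanity checks when it
     * is later read in, which is the corruption seen on the standby.
     */
    if (!PageIsNew(page))
    {
        PageSetLSN(page, recptr);
        PageSetTLI(page, ThisTimeLineID);
    }
}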

I'm currently compiling and testing all of these.  When that's done,
should I go ahead and check this in, or wait until after beta4 wraps?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company


no_lsn_tli_on_zero_page.patch
Description: Binary data


no_lsn_tli_on_zero_page-v82.patch
Description: Binary data


no_lsn_tli_on_zero_page-v81.patch
Description: Binary data

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] page corruption on 8.3+ that makes it to standby

2010-07-29 Thread Tom Lane
Robert Haas robertmh...@gmail.com writes:
 Here's a version of Jeff's fix1 patch (with a trivial change to the
 comment) that applies to HEAD, REL9_0_STABLE, REL8_4_STABLE, and
 REL8_3_STABLE; a slightly modified version that applies to
 REL8_2_STABLE; and another slightly modified version that applies to
 REL8_1_STABLE and REL8_0_STABLE.  REL7_4_STABLE doesn't have
 tablespaces, so the problem can't manifest there, I think.

Looks sane to the eyeball.  I'm not sure if the oldest versions have the
same page-read-time header sanity checks that we have now, so it may be
that there is not a need for this patch all the way back; but it can't
hurt anything.

 I'm currently compiling and testing all of these.  When that's done,
 should I go ahead and check this in, or wait until after beta4 wraps?

Go ahead and commit, please.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] non-overlapping, consecutive partitions

2010-07-29 Thread Hans-Jürgen Schönig
hello ...

yeah, this is fairly complicated.

greg:
can you send me how far you got?
i would be curious to see how you have attacked this issue.

i am still in the process of checking the codes.
we somehow have to find a solution for that. otherwise we are in slight trouble 
here.
it seems we have to solve it no matter what it takes.

many thanks,

hans


On Jul 26, 2010, at 1:14 AM, Robert Haas wrote:

 On Sun, Jul 25, 2010 at 6:40 PM, Greg Stark gsst...@mit.edu wrote:
 2010/7/25 Robert Haas robertmh...@gmail.com:
 2010/7/25 PostgreSQL - Hans-Jürgen Schönig postg...@cybertec.at:
 
 On Jul 25, 2010, at 11:56 AM, Martijn van Oosterhout wrote:
 
 I think the right way to approach this is to teach the planner about
 merge sorts.
 
 For what it's worth I think this is a belt-and-suspenders type of
 situation where we want two solutions which overlap somewhat.
 
 I would really like to have merge-append nodes because there are all
 sorts of plans where append nodes destroying the ordering of their
 inputs eliminates a lot of good plans. Those cases can be UNION ALL
 nodes, or partitions where there's no filter on the partition key at
 all.
 
 But for partitioned tables like the OP's the real solution would be
 to have more structured meta-data about the partitions that allows the
 planner to avoid needing the merge at all. It would also means the
 planner wouldn't need to look at every node; it could do a binary
 search or equivalent for the right partitions.
 
 Agreed on all points.
 
 Greg Stark had a patch to do this a while back called merge append,
 but it never got finished...
 
 I was basically in over my head with the planner. I don't understand
 how equivalence classes are used or should be used and didn't
 understand the code I was pointed at as being analogous. It's probably
 not so complicated as all that, but I never really wrapped my head
 around it and moved onto tasks I could make more progress on.
 
 Yeah, I don't fully understand those either.
 
 -- 
 Robert Haas
 EnterpriseDB: http://www.enterprisedb.com
 The Enterprise Postgres Company
 
 -- 
 Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-hackers
 


--
Cybertec Schönig & Schönig GmbH
Gröhrmühlgasse 26
A-2700 Wiener Neustadt
Web: http://www.postgresql-support.de


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] multibyte charater set in levenshtein function

2010-07-29 Thread Alexander Korotkov
I forgot the attribution in the levenshtein.c file.


With best regards,
Alexander Korotkov.


fuzzystrmatch-0.5.1.tar.gz
Description: GNU Zip compressed data

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: **[SPAM]*(8.2)** Re: [HACKERS] Query optimization problem

2010-07-29 Thread Zotov

On 27.07.2010 21:37, Tom Lane wrote:

Right.  Because of the OR, it is *not* possible to conclude that
d2.basedon is always equal to 234409763, which is the implication of
putting them into an equivalence class.

In the example, we do have d1.id and d2.basedon grouped in an
equivalence class.  So in principle you could substitute d1.id into the
WHERE clause in place of d2.basedon, once you'd checked that it was
being used with an operator that's compatible with the specific
equivalence class (ie it's in one of the eclass's opfamilies, I think).
The problem is to recognize that such a rewrite would be a win --- it
could just as easily be a big loss.

Even if we understood how to direct the rewriting process, I'm really
dubious that it would win often enough to justify the added planning
time.  The particular problem here seems narrow enough that solving it
on the client side is probably a whole lot easier and cheaper than
trying to get the planner to do it.

regards, tom lane
   
Sorry, Tom. As I understand you, you won't do anything about it. I
think this problem shows a whole class of optimization problems.

This query:
*SLOW*

SELECT d1.ID, d2.ID
FROM DocPrimary d1
JOIN DocPrimary d2 ON d2.BasedOn=d1.ID
WHERE (*d1.ID=234409763* and *d2.BasedOn=d1.id*
) OR (d2.ID=234409763);

*FAST*

SELECT d1.ID, d2.ID
FROM DocPrimary d1
JOIN DocPrimary d2 ON d2.BasedOn=d1.ID
WHERE (*d1.ID=234409763* and *d2.BasedOn=234409763*
) OR (d2.ID=234409763);

If I write the constant explicitly, it uses the fast plan. I think the 
query optimizer could do this.

I hope you can do something to make this query faster.
Thank you.

--
Regards,
Roman Vladimirovich Zotov
Head of the Tooling Department
ZAO NPO Konsultant
Ivanovo, Palekhskaya St. 10
tel./fax: (4932) 41-01-21
mailto: zo...@oe-it.ru



Re: [HACKERS] CommitFest 2010-07 week one progress report

2010-07-29 Thread Kevin Grittner
New numbers on where we are with this CommitFest, as we approach the
half-way point:
 
72 patches were submitted
 3 patches were withdrawn (deleted) by their authors
 8 patches were moved to CommitFest 2010-09
--
61 patches in CommitFest 2010-07
--
 3 committed to 9.0
--
58 patches for 9.1
--
 1 rejected
13 returned with feedback
12 committed for 9.1
--
26 disposed
--
32 pending
10 ready for committer
--
22 will still need reviewer attention
 7 waiting on author to respond to review
--
15 need review before further action
 2 Needs Review patches don't have a reviewer assigned
--
13 patches need review and have a reviewer assigned
 
Of the eight patches moved to the next CF, all were moved by or at
the request of their authors.  One was because the author didn't
feel the patch was ready for review and didn't have time to take
care of that in this CF.  Six were WiP patches which need
documentation (perhaps a Wiki page) before others can effectively
review them.  One is ready for committer, but isn't needed until we
are ready to commit the KNN-GiST, which was submitted for the next
CF.
 
13 of the 22 patches which will still need reviewer attention have
had at least one review.  Many of the others have had discussion and
comment entries, but not yet a formal review.
 
The WIP patch for serializable transactions with predicate locking
by Dan Ports and myself has had some off-list questions from Joe
Conway.  The questions are noted as opportunities for further code
comments.  He pointed out one bug which has been fixed.  And the
questions have caused me to notice a couple areas which need work to
reduce the false positive rate.
 
The last two patches which are without an assigned reviewer appear
to be in that state because there aren't many people who feel
competent to review these areas.  The "ECPG FETCH readahead" patch
by Zoltán Böszörményi and the "WiP: Per-column collation" patch by
Peter Eisentraut both need *someone* to step up.  Volunteers or
suggestions welcome.
 
Perhaps the biggest CF news of the last week is that we are no
longer faced with a fork in the efforts to implement synchronous
replication for 9.1 -- Zoltán Böszörményi has heroically offered to
withdraw his patch and work with Fujii Masao on enhancing the
subsequent "Another synchronous replication" patch.  With everyone
working from the same base to push this effort forward, I'm hopeful
that we can overcome the challenges this technology presents.  I
think it will be very good for the project if we can get a fairly
polished and close to final version committed before the last
CommitFest, so that it has a full alpha test cycle to settle in. 
Note that this means that such a patch must be submitted within
*three and a half months*!  Yes, we are that far in to the 9.1
development cycle.
 
Some of the other patches may have funny dates, but I believe from
off-list emails that things are generally moving OK.
 
-Kevin
 
 
Kevin Grittner kevin.gritt...@wicourts.gov wrote:
 
 71 patches were submitted
  3 patches were withdrawn (deleted) by their authors
 --
 68 total patches currently in the application
 --
  3 committed to 9.0
 --
 65 9.1 patches
 --
  1 rejected
  5 returned with feedback
 11 committed for 9.1
 --
 17 9.1 patches disposed
 --
 48 pending
  8 ready for committer
 --
 40 will still need reviewer attention
  9 waiting on author to respond to review
 --
 31 need review before further action
 13 Needs Review patches don't have a reviewer assigned
 --
 18 patches have reviews due within four days or less


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] string_to_array has to be stable?

2010-07-29 Thread Tom Lane
Jeff Davis pg...@j-davis.com writes:
 On Wed, 2010-07-28 at 20:25 -0400, Tom Lane wrote:
 I can't remember offhand whether there are any volatile type output
 functions, but if there were we'd really need to mark array_to_string()
 as volatile.  That would be unpleasant for performance though.   I'd
 rather compromise on stable.  Thoughts?

 Stable seems reasonable to me.

 A volatile type output function sounds like an edge case. Perhaps there
 are even grounds to force a type output function to be stable, similar
 to how we force the function for a functional index to be immutable.

I did a bit of research in the system catalogs, and found that the only
built-in type output function that is marked volatile is record_out().
I think this is probably from an excess of caution --- record_out has
the same issue that it's really as volatile as the underlying per-column
output functions.  I notice in particular that anyarray_out is marked
stable, and of course it's got the same issue too.

I propose changing both array_to_string() and record_out() to be marked
stable, and that that be the default assumption for similar future cases
as well.  This isn't something we can back-patch, but sneaking it into
9.0 at this point (without a catversion bump) seems reasonable to me.

I'm not in favor of trying to force output functions to be declared
non-volatile as Jeff suggests above.  I think doing that would probably
break user type definitions far and wide --- for a comparative sample,
all of the user-defined types added in the standard regression tests
would break, because we never bothered to mark their output functions
as to volatility.  If we did do it, it would retroactively justify
treating record_out and anyarray_out as stable, but I doubt it's worth
causing a flag day for user-defined types.

BTW, the situation on the input side is a bit different: record_in is
volatile because domain_in is, and I think we'd better leave that alone
since it's not too hard to believe that a domain might have volatile
CHECK expressions.  If we had arrays of domains, anyarray_in would have
to be volatile too, but we don't and it isn't.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [GENERAL] Incorrect FTS result with GIN index

2010-07-29 Thread Oleg Bartunov

On Thu, 29 Jul 2010, Tom Lane wrote:


Oleg Bartunov o...@sai.msu.su writes:

I also wonder why did I get right result :) Just repeated the query:



test=# select count(*) from search_tab where (to_tsvector('german', keywords ) @@ 
to_tsquery('german', 'ee:* & dd:*'));
 count
-------
   123
(1 row)


Yeah, that case works (though I think it's unnecessarily slow).  The one
that gives the wrong answer is the equivalent form with two AND'ed @@
operators.


hmm, that query works too :)

test=# select count(*) from search_tab where (to_tsvector('german', keywords ) 
@@ to_tsquery('german', 'ee:*')) and (to_tsvector('german', keywords ) @@ 
to_tsquery('german', 'dd:*'));
 count 
-------
   123
(1 row)

Time: 26.155 ms


test=# explain analyze select count(*) from search_tab where 
(to_tsvector('german', keywords ) @@ to_tsquery('german', 'ee:*')) and 
(to_tsvector('german', keywords ) @@ to_tsquery('german', 'dd:*'));
                                  QUERY PLAN
------------------------------------------------------------------------------
 Aggregate  (cost=103.87..103.88 rows=1 width=0) (actual time=22.819..22.820 rows=1 loops=1)
   ->  Bitmap Heap Scan on search_tab  (cost=5.21..103.80 rows=25 width=0) (actual time=22.677..22.799 rows=123 loops=1)
         Recheck Cond: ((to_tsvector('german'::regconfig, keywords) @@ '''ee'':*'::tsquery) AND (to_tsvector('german'::regconfig, keywords) @@ '''dd'':*'::tsquery))
         ->  Bitmap Index Scan on idx_keywords_ger  (cost=0.00..5.21 rows=25 width=0) (actual time=22.655..22.655 rows=123 loops=1)
               Index Cond: ((to_tsvector('german'::regconfig, keywords) @@ '''ee'':*'::tsquery) AND (to_tsvector('german'::regconfig, keywords) @@ '''dd'':*'::tsquery))
 Total runtime: 22.865 ms



Regards,
Oleg
_
Oleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),
Sternberg Astronomical Institute, Moscow University, Russia
Internet: o...@sai.msu.su, http://www.sai.msu.su/~megera/
phone: +007(495)939-16-83, +007(495)939-23-83

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] page corruption on 8.3+ that makes it to standby

2010-07-29 Thread Robert Haas
On Thu, Jul 29, 2010 at 11:09 AM, Tom Lane t...@sss.pgh.pa.us wrote:
 Robert Haas robertmh...@gmail.com writes:
 Here's a version of Jeff's fix1 patch (with a trivial change to the
 comment) that applies to HEAD, REL9_0_STABLE, REL8_4_STABLE, and
 REL8_3_STABLE; a slightly modified version that applies to
 REL8_2_STABLE; and another slightly modified version that applies to
 REL8_1_STABLE and REL8_0_STABLE.  REL7_4_STABLE doesn't have
 tablespaces, so the problem can't manifest there, I think.

 Looks sane to the eyeball.  I'm not sure if the oldest versions have the
 same page-read-time header sanity checks that we have now, so it may be
 that there is not a need for this patch all the way back; but it can't
 hurt anything.

It looks like they do.  I am able to reproduce the problem even on
8.0, and the patch does fix it.

 I'm currently compiling and testing all of these.  When that's done,
 should I go ahead and check this in, or wait until after beta4 wraps?

 Go ahead and commit, please.

Done.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [GENERAL] Incorrect FTS result with GIN index

2010-07-29 Thread Tom Lane
Oleg Bartunov o...@sai.msu.su writes:
 On Thu, 29 Jul 2010, Tom Lane wrote:
 Yeah, that case works (though I think it's unnecessarily slow).  The one
 that gives the wrong answer is the equivalent form with two AND'ed @@
 operators.

 hmm, that query works too :)

There may be some platform dependency involved --- in particular, you
wouldn't see the issue unless one keystream has two nonlossy TIDs on the
same page where the other one has a lossy TID, so it's going to depend on
the placement of heap rows.  Anyway, I can reproduce it just by loading
the given dump, on both 8.4 and HEAD.  Will work on a fix.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] string_to_array has to be stable?

2010-07-29 Thread Pavel Stehule
2010/7/29 Tom Lane t...@sss.pgh.pa.us:
 Jeff Davis pg...@j-davis.com writes:
 On Wed, 2010-07-28 at 20:25 -0400, Tom Lane wrote:
 I can't remember offhand whether there are any volatile type output
 functions, but if there were we'd really need to mark array_to_string()
 as volatile.  That would be unpleasant for performance though.   I'd
 rather compromise on stable.  Thoughts?

 Stable seems reasonable to me.

 A volatile type output function sounds like an edge case. Perhaps there
 are even grounds to force a type output function to be stable, similar
 to how we force the function for a functional index to be immutable.

 I did a bit of research in the system catalogs, and found that the only
 built-in type output function that is marked volatile is record_out().
 I think this is probably from an excess of caution --- record_out has
 the same issue that it's really as volatile as the underlying per-column
 output functions.  I notice in particular that anyarray_out is marked
 stable, and of course it's got the same issue too.

 I propose changing both array_to_string() and record_out() to be marked
 stable, and that that be the default assumption for similar future cases
 as well.  This isn't something we can back-patch, but sneaking it into
 9.0 at this point (without a catversion bump) seems reasonable to me.

+1


 I'm not in favor of trying to force output functions to be declared
 non-volatile as Jeff suggests above.  I think doing that would probably
 break user type definitions far and wide --- for a comparative sample,
 all of the user-defined types added in the standard regression tests
 would break, because we never bothered to mark their output functions
 as to volatility.  If we did do it, it would retroactively justify
 treating record_out and anyarray_out as stable, but I doubt it's worth
 causing a flag day for user-defined types.

 BTW, the situation on the input side is a bit different: record_in is
 volatile because domain_in is, and I think we'd better leave that alone
 since it's not too hard to believe that a domain might have volatile
 CHECK expressions.  If we had arrays of domains, anyarray_in would have
 to be volatile too, but we don't and it isn't.

                        regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] string_to_array has to be stable?

2010-07-29 Thread Tom Lane
I wrote:
 BTW, the situation on the input side is a bit different: record_in is
 volatile because domain_in is, and I think we'd better leave that alone
 since it's not too hard to believe that a domain might have volatile
 CHECK expressions.  If we had arrays of domains, anyarray_in would have
 to be volatile too, but we don't and it isn't.

Oh, wait: we have arrays of composites now, and a composite could
contain a domain.  So that's wrong too; anyarray_in had better be marked
volatile.  In general it seems that the coding rules need to be:

* if you depend on an arbitrary type output function, assume it's stable.

* if you depend on an arbitrary type input function, assume it's volatile.

* similarly for binary send/receive functions.

Or we could decide that volatile domain CHECK expressions are un-sensible
and just relabel all these input functions as stable, which would make
everything consistent.  Thoughts?

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] reducing NUMERIC size for 9.1

2010-07-29 Thread Robert Haas
On Wed, Jul 28, 2010 at 3:00 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Robert Haas robertmh...@gmail.com writes:
 On Fri, Jul 16, 2010 at 2:39 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 I don't like the way you did that either (specifically, not the kluge
 in NUMERIC_DIGITS()).  It would probably work better if you declared
 two different structs, or a union of same, to represent the two layout
 cases.

 n_sign_dscale is now pretty inappropriately named, probably better to
 change the field name.  This will also help to catch anything that's
 not using the macros.  (Renaming the n_weight field, or at least burying
 it in an extra level of struct, would be helpful for the same reason.)

 I'm not sure what you have in mind here.  If we create a union of two
 structs, we'll still have to pick one of them to use to check the high
 bits of the first word, so I'm not sure we'll be adding all that much
 in terms of clarity.

 No, you can do something like this:

 typedef struct numeric_short
 {
        uint16  word1;
        NumericDigit digits[1];
 } numeric_short;

 typedef struct numeric_long
 {
        uint16  word1;
        int16   weight;
        NumericDigit digits[1];
 } numeric_long;

 typedef union numeric
 {
        uint16  word1;
        numeric_short   short;
        numeric_long    long;
 } numeric;

That doesn't quite work because there's also a varlena header that has
to be accounted for, so the third member of the union can't be a
simple uint16.  I'm wondering if it makes sense to do something along
these lines:

typedef struct NumericData
{
int32   varlen;
int16   n_header;
union {
struct {
char n_data[1];
} short;
struct {
uint16 n_weight;
char n_data[1];
} long;
};
} NumericData;

Why n_data as char[1] instead of NumericDigit, you ask?  It's that way
now, mostly I think so that the rest of the system isn't allowed to
know what underlying type is being used for NumericDigit; it looks
like previously it was signed char, but now it's int16.
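
(A compilable variant of the sketch above, for illustration; note that
short and long are C keywords, so the union members are renamed here,
and the NumericDigit type is an assumption.)

#include <stdint.h>

typedef int16_t NumericDigit;       /* assumption: base-10000 digits */

typedef struct NumericData
{
    int32_t     varlen;             /* varlena header word */
    uint16_t    n_header;           /* flag bits + sign/dscale (plus the
                                     * weight, in the short format) */
    union
    {
        struct                      /* short two-byte-header format */
        {
            NumericDigit n_data[1];
        }           n_short;
        struct                      /* traditional format */
        {
            int16_t      n_weight;
            NumericDigit n_data[1];
        }           n_long;
    }           choice;
} NumericData;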

 It seems like you've handled the NAN case a bit awkwardly.  Since the
 weight is uninteresting for a NAN, it's okay to not store the weight
 field, so I think what you should do is consider that the dscale field
 is still full-width, ie the format of the first word remains old-style
 not new-style.  I don't remember whether dscale is meaningful for a NAN,
 but if it is, your approach is constraining what is possible to store,
 and is also breaking compatibility with old databases.

 There is only one NaN value.  Neither weight nor dscale is meaningful.
 I think if the high two bits of the first word are 11 we never examine
 anything else - do you see somewhere that we're doing otherwise?

 I hadn't actually looked.  I think though that it's a mistake to break
 compatibility on both dscale and weight when you only need to break one.
 Also, weight is *certainly* uninteresting for NaNs since it's not even
 meaningful unless there are digits.  dscale could conceivably be worth
 something.

I don't think I'm breaking compatibility on anything.  Can you clarify
what part of the code you're referring to here?  I'm sort of lost.

 The sign extension code in the NUMERIC_WEIGHT() macro seems a bit
 awkward; I wonder if there's a better way.  One solution might be to
 offset the value (ie, add or subtract NUMERIC_SHORT_WEIGHT_MIN) rather
 than try to sign-extend per se.

 Hmm... so, if the weight is X we store the value
 X-NUMERIC_SHORT_WEIGHT_MIN as an unsigned integer?  That's kind of a
 funny representation - I *think* it works out to sign extension with
 the high bit flipped.  I guess we could do it that way, but it might
 make it harder/more confusing to do bit arithmetic with the weight
 sign bit later on.

 Yeah, it was just an idea.  It seems like there should be an easier way
 to extract the sign-extended value, though.

It seemed a bit awkward to me, too, but I'm not sure there's a better one.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] string_to_array has to be stable?

2010-07-29 Thread Robert Haas
On Thu, Jul 29, 2010 at 1:00 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 I wrote:
 BTW, the situation on the input side is a bit different: record_in is
 volatile because domain_in is, and I think we'd better leave that alone
 since it's not too hard to believe that a domain might have volatile
 CHECK expressions.  If we had arrays of domains, anyarray_in would have
 to be volatile too, but we don't and it isn't.

 Oh, wait: we have arrays of composites now, and a composite could
 contain a domain.  So that's wrong too; anyarray_in had better be marked
 volatile.  In general it seems that the coding rules need to be:

 * if you depend on an arbitrary type output function, assume it's stable.

 * if you depend on an arbitrary type input function, assume it's volatile.

 * similarly for binary send/receive functions.

 Or we could decide that volatile domain CHECK expressions are un-sensible
 and just relabel all these input functions as stable, which would make
 everything consistent.  Thoughts?

Aren't volatile CHECK expressions pretty un-sensible in general?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] On Scalability

2010-07-29 Thread Vincenzo Romano
Hi all.
I'm wondering about PGSQL scalability.
In particular I have two main topics in my mind:

1. What'd be the behavior of the query planner in the case I have
a single huge table with hundreds or thousands of partial indexes
(just differing by the WHERE clause).
This is an idea of mine to make index-partitioning instead of
table-partitioning.

2. What'd be the behavior of the query planner in the case I have
hundreds or thousands of child tables, possibly in a multilevel hierarchy
(let's say, partitioning by year, month and company).

I fear the presence of linear selection algorithms in these two cases that
would kill my design.

Is there any insight about these two points?

-- 
NotOrAnd Information Technologies
Vincenzo Romano
--
NON QVIETIS MARIBVS NAVTA PERITVS

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] string_to_array has to be stable?

2010-07-29 Thread Tom Lane
Robert Haas robertmh...@gmail.com writes:
 On Thu, Jul 29, 2010 at 1:00 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Or we could decide that volatile domain CHECK expressions are un-sensible
 and just relabel all these input functions as stable, which would make
 everything consistent.  Thoughts?

 Aren't volatile CHECK expressions pretty un-sensible in general?

Yeah, probably so.  I can't think of a use-case that seems like it would
justify the possible performance hit from having to assume all functions
performing datatype input calls are volatile.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] On Scalability

2010-07-29 Thread Joshua D. Drake
On Thu, 2010-07-29 at 19:08 +0200, Vincenzo Romano wrote:
 Hi all.
 I'm wondering about PGSQL scalability.
 In particular I have two main topics in my mind:
 
 1. What'd be the behavior of the query planner in the case I have
 a single huge table with hundreds or thousands of partial indexes
 (just differing by the WHERE clause).
 This is an idea of mine to make index-partitioning instead of
 table-partitioning.

Well the planner is not going to care about the partial indexes that
don't match the where clause but what you are suggesting is going to
make writes and maintenance extremely expensive. It will also increase
planning time as the optimizer at a minimum has to discard the use of
those indexes.

 
 2. What'd be the behavior of the query planner in the case I have
 hundreds or thousands of child tables, possibly in a multilevel hierarchy
 (let's say, partitioning by year, month and company).

Again, test it. Generally speaking the number of child tables directly
correlates to planning time. Most people find that 60-100 child tables is
really the highest you can go.

It all depends on actual implementation and business requirements
however.

Sincerely,

Joshua D. Drake


-- 
PostgreSQL.org Major Contributor
Command Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579
Consulting, Training, Support, Custom Development, Engineering
http://twitter.com/cmdpromptinc | http://identi.ca/commandprompt


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] reducing NUMERIC size for 9.1

2010-07-29 Thread Tom Lane
Robert Haas robertmh...@gmail.com writes:
 On Wed, Jul 28, 2010 at 3:00 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 No, you can do something like this:
 
 typedef union numeric
 {
uint16  word1;
numeric_short   short;
numeric_longlong;
 } numeric;

 That doesn't quite work because there's also a varlena header that has
 to be accounted for, so the third member of the union can't be a
 simple uint16.

Yeah, you would need an additional layer of struct to represent the
numeric with a length word in front of it.  I think this is not
necessarily bad because it would perhaps open the door to working
directly with short-varlena-header values, which is never going to
be possible with this:

 typedef struct NumericData
 {
 int32   varlen;
 int16   n_header;
 union { ...

OTOH alignment considerations may make that idea hopeless anyway.


 Why n_data as char[1] instead of NumericDigit, you ask?

Yes, we'd have to export NumericDigit if we wanted to declare these
structs properly in numeric.h.  I wonder if that decision should
be revisited.  I'd lean to making the whole struct local to numeric.c
though.  Is there anyplace else that really ought to see it?


 I hadn't actually looked.  I think though that it's a mistake to break
 compatibility on both dscale and weight when you only need to break one.
 Also, weight is *certainly* uninteresting for NaNs since it's not even
 meaningful unless there are digits.  dscale could conceivably be worth
 something.

 I don't think I'm breaking compatibility on anything.  Can you clarify
 what part of the code you're referring to here?  I'm sort of lost.

On-disk is what I'm thinking about.  Right now, a NaN's first word is
all dscale except the sign bits.  You're proposing to change that
but I think it's unnecessary to do so.  If we do it the way I'm
thinking, dscale would still mean the same in a NaN, and we'd simply
be ignoring the weight field (which might or might not be there
physically).

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] string_to_array has to be stable?

2010-07-29 Thread Robert Haas
On Thu, Jul 29, 2010 at 1:10 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Robert Haas robertmh...@gmail.com writes:
 On Thu, Jul 29, 2010 at 1:00 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Or we could decide that volatile domain CHECK expressions are un-sensible
 and just relabel all these input functions as stable, which would make
 everything consistent.  Thoughts?

 Aren't volatile CHECK expressions pretty un-sensible in general?

 Yeah, probably so.  I can't think of a use-case that seems like it would
 justify the possible performance hit from having to assume all functions
 performing datatype input calls are volatile.

That's my thought, too.  Any non-immutable CHECK constraint is
basically playing with fire, to some degree.  But a stable check
constraint is at least playing with it somewhat responsibly, whereas a
volatile check constraint strikes me as more like doing it while
bathing in turpentine.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] On Scalability

2010-07-29 Thread Vincenzo Romano
2010/7/29 Joshua D. Drake j...@commandprompt.com:
 On Thu, 2010-07-29 at 19:08 +0200, Vincenzo Romano wrote:
 Hi all.
 I'm wondering about PGSQL scalability.
 In particular I have two main topics in my mind:

 1. What'd be the behavior of the query planner in the case I have
 a single huge table with hundreds or thousands of partial indexes
 (just differing by the WHERE clause).
 This is an idea of mine to make index-partitioning instead of
 table-partitioning.

 Well the planner is not going to care about the partial indexes that
 don't match the where clause but what you are suggesting is going to
 make writes and maintenance extremely expensive. It will also increase
 planning time as the optimizer at a minimum has to discard the use of
 those indexes.


 2. What'd be the behavior of the query planner in the case I have
 hundreds or thousands of child tables, possibly in a multilevel hierarchy
 (let's say, partitioning by year, month and company).

 Again, test it. Generally speaking the number of child tables directly
 correlates to planning time. Most experience that 60-100 tables is
 really the highest you can go.

 It all depends on actual implementation and business requirements
 however.

 Sincerely,

 Joshua D. Drake

I expect that a more complex schema will imply higher workloads
on the query planner. What I don't know is how the increase in the
workload will happen: linearly, sublinearly, polynomially or what?

Significant testing would require a prototype implementation with
an almost complete feed of data from the current solution.
But I'm at the feasibility study stage and have not enough resources
for that.

Thanks anyway for the insights, Joshua.
Does the 60-100 tables limit apply to a single level
of inheritance? Or is it more general?

-- 
NotOrAnd Information Technologies
Vincenzo Romano
--
NON QVIETIS MARIBVS NAVTA PERITVS

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] On Scalability

2010-07-29 Thread Joshua D. Drake
On Thu, 2010-07-29 at 19:34 +0200, Vincenzo Romano wrote:

 I expect that a more complex schema will imply higher workloads
 on the query planner. What I don't know is how the increase in the
 workload will happen: linearly, sublinearly, polynomially or what?
 
 Significant testing would require a prototype implementation with
 an almost complete feed of data from the current solution.
 But I'm at the feasibility study stage and have not enough resources
 for that.
 
 Thanks anyway for the insights, Joshua.
 Does the 60-100 tables limit applies to a single level
 of inheritance? Or is it more general?

I do not currently have experience (except that it is possible) with
multi-level inheritance and postgresql.

 

-- 
PostgreSQL.org Major Contributor
Command Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579
Consulting, Training, Support, Custom Development, Engineering
http://twitter.com/cmdpromptinc | http://identi.ca/commandprompt


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] On Scalability

2010-07-29 Thread Vincenzo Romano
2010/7/29 Joshua D. Drake j...@commandprompt.com:
 On Thu, 2010-07-29 at 19:34 +0200, Vincenzo Romano wrote:

 I expect that a more complex schema will imply higher workloads
 on the query planner. What I don't know is how the increase in the
 workload will happen: linearly, sublinearly, polynomially or what?

Do you think I should ask somewhere else?
Any hint?

 Thanks anyway for the insights, Joshua.
 Does the 60-100 tables limit apply to a single level
 of inheritance? Or is it more general?

 I do not currently have experience (except that it is possible) with
 multi-level inheritance and postgresql.

Thanks anyway.

-- 
Vincenzo Romano at NotOrAnd Information Technologies
Software Hardware Networking Training Support Security
--
NON QVIETIS MARIBVS NAVTA PERITVS

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] On Scalability

2010-07-29 Thread Joshua D. Drake
On Thu, 2010-07-29 at 19:52 +0200, Vincenzo Romano wrote:
 2010/7/29 Joshua D. Drake j...@commandprompt.com:
  On Thu, 2010-07-29 at 19:34 +0200, Vincenzo Romano wrote:
 
  I expect that a more complex schema will imply higher workloads
  on the query planner. What I don't know is how the increase in the
  workload will happen: linearly, sublinearly, polynomially or what?
 
 Do you think I should ask somewhere else?
 Any hint?

The two people that would likely know the best are on vacation, TGL and
Heikki. You may have to wait a bit.

Sincerely,

Joshua D. Drake

-- 
PostgreSQL.org Major Contributor
Command Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579
Consulting, Training, Support, Custom Development, Engineering
http://twitter.com/cmdpromptinc | http://identi.ca/commandprompt


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] reducing NUMERIC size for 9.1

2010-07-29 Thread Robert Haas
On Thu, Jul 29, 2010 at 1:20 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Yeah, you would need an additional layer of struct to represent the
 numeric with a length word in front of it.  I think this is not
 necessarily bad because it would perhaps open the door to working
 directly with short-varlena-header values, which is never going to
 be possible with this:

 typedef struct NumericData
 {
     int32           varlen;
     int16           n_header;
     union { ...

 OTOH alignment considerations may make that idea hopeless anyway.

My understanding of our alignment rules for on-disk storage is still a
bit fuzzy, but as I understand it we don't align varlenas.  So
presumably if we get a pointer directly into a disk block, the first
byte might happen to be not aligned, which would make the rest of the
structure aligned; and from playing around with the system, it looks
like if we get a value from anywhere else it's typically using the
4-byte varlena header.  So it seems like it might be possible to write
code that aligns the data only if needed and otherwise skips a
palloc-and-copy cycle.  I'm not totally sure that would be a win, but
it could be.  Actually, I had a thought that it might be even more of
a win if you added a flag to the NumericVar representation indicating
whether the digit array was palloc'd or from the original tuple.  Then
you might be able to avoid TWO palloc-and-copy cycles, although at the
price of a fairly significant code restructuring.

Which is a long-winded way of saying - it's probably not hopeless.
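To sketch what I mean (names are made up, and this assumes a plain
4-byte-header datum whose digit array starts right after the numeric
header):

    /*
     * Use the digit array in place when it happens to be suitably
     * aligned; otherwise fall back to the palloc-and-copy.  Sketch only.
     */
    static NumericDigit *
    digits_in_place_or_copy(Numeric num, bool *copied)
    {
        char   *p = (char *) num + NUMERIC_HDRSZ;  /* start of digits */
        Size    len = VARSIZE(num) - NUMERIC_HDRSZ;
        char   *buf;

        if ((uintptr_t) p % ALIGNOF_SHORT == 0)
        {
            *copied = false;
            return (NumericDigit *) p;      /* zero-copy path */
        }

        buf = palloc(len);
        memcpy(buf, p, len);                /* pay the copy only if needed */
        *copied = true;
        return (NumericDigit *) buf;
    }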

 Why n_data as char[1] instead of NumericDigit, you ask?

 Yes, we'd have to export NumericDigit if we wanted to declare these
 structs properly in numeric.h.  I wonder if that decision should
 be revisited.  I'd lean to making the whole struct local to numeric.c
 though.  Is there anyplace else that really ought to see it?

Probably not.  btree_gist is using it, but that's it, at least as far
as our tree is concerned.  Attached please find a patch to make the
numeric representation private and add a convenience function
numeric_is_nan() for the benefit of btree_gist.  If this looks sane,
I'll go ahead and commit it, which will simplify review of the main
patch once I rebase it over these changes.
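For reference, the convenience function is essentially a one-line
wrapper, so btree_gist no longer needs to see the struct layout --
roughly this, modulo the exact spelling in the attached patch:

    bool
    numeric_is_nan(Numeric num)
    {
        return NUMERIC_IS_NAN(num);
    }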

 I hadn't actually looked.  I think though that it's a mistake to break
 compatibility on both dscale and weight when you only need to break one.
 Also, weight is *certainly* uninteresting for NaNs since it's not even
 meaningful unless there are digits.  dscale could conceivably be worth
 something.

 I don't think I'm breaking compatibility on anything.  Can you clarify
 what part of the code you're referring to here?  I'm sort of lost.

 On-disk is what I'm thinking about.  Right now, a NaN's first word is
 all dscale except the sign bits.  You're proposing to change that
 but I think it's unnecessary to do so.

*Where* am I proposing this?  The new versions of NUMERIC_WEIGHT() and
NUMERIC_DSCALE() determine where to look for the bits in question
using NUMERIC_IS_SHORT(), which just tests NUMERIC_FLAGBITS(n) ==
NUMERIC_SHORT.  There's nothing in there about the NaN case at all.
Even if there were, it's irrelevant because those bits are never
examined and, as far as I can tell, will always be zero barring a
cosmic ray hit.  But even if they WERE examined, I don't see where I'm
changing the interpretation of them; in fact, I think I'm very
explicitly NOT doing that.
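For anyone following along, the accessors in question look roughly like
this (simplified from my working copy; the exact names and mask values
shouldn't be taken as settled):

    /* top two bits of the 16-bit header word classify the value */
    #define NUMERIC_SHORT        0x8000
    #define NUMERIC_NAN          0xC000
    #define NUMERIC_FLAGBITS(n)  ((n)->n_header & 0xC000)
    #define NUMERIC_IS_NAN(n)    (NUMERIC_FLAGBITS(n) == NUMERIC_NAN)
    #define NUMERIC_IS_SHORT(n)  (NUMERIC_FLAGBITS(n) == NUMERIC_SHORT)

So NaN-ness is decided entirely by the flag bits, never by the low bits.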

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company


make_numericdata_private.patch
Description: Binary data

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Performance Enhancement/Fix for Array Utility Functions

2010-07-29 Thread Robert Haas
On Wed, Jul 28, 2010 at 1:20 AM, Mike Lewis mikelikes...@gmail.com wrote:

  1. As-is, it's a significant *pessimization* for small arrays, because
  the heap_tuple_untoast_attr_slice code does a palloc/copy even when one
  is not needed because the data is already not toasted.  I think there
  needs to be a code path that avoids that.

 This seems like it shouldn't be too hard to fix, and I think it should be
 fixed.

 Do you have any suggestions where to start?  I do agree that this should be
 fixed as well.   I don't have too much time to dedicate to this project.  I
 can try to put in some time this weekend though if it isn't looking too bad.

Perhaps you could check VARATT_IS_EXTENDED.  If that's true, then
slice it, but if it's false, then just use the original datum.  You
might want to wrap that up in a function rather than cramming it all
in the macro definition, though.
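Something along these lines, say (the function name is hypothetical):

    /*
     * Detoast just the requested slice if the datum is compressed or
     * stored out of line; otherwise use it as-is, skipping the
     * palloc-and-copy that heap_tuple_untoast_attr_slice would do.
     */
    static struct varlena *
    fetch_att_slice(struct varlena *attr, int32 offset, int32 length)
    {
        if (VARATT_IS_EXTENDED(attr))
            return heap_tuple_untoast_attr_slice(attr, offset, length);

        return attr;    /* caller must not pfree the result in this case */
    }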

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] On Scalability

2010-07-29 Thread Josh Berkus

 Do you think I should ask somewhere else?
 Any hint?

I might suggest asking on the pgsql-performance mailing list instead.
You'll get *lots* more speculation there.  However, the only way you're
really going to know is to test.


-- 
  -- Josh Berkus
 PostgreSQL Experts Inc.
 http://www.pgexperts.com

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] multibyte charater set in levenshtein function

2010-07-29 Thread Robert Haas
On Wed, Jul 21, 2010 at 5:59 PM, Robert Haas robertmh...@gmail.com wrote:
 On Wed, Jul 21, 2010 at 2:47 PM, Alexander Korotkov
 aekorot...@gmail.com wrote:
 On Wed, Jul 21, 2010 at 10:25 PM, Robert Haas robertmh...@gmail.com wrote:

 *scratches head*  Aren't you just moving the same call to a different
 place?

 So, where can you find this different place? :) In this patch
 null-terminated strings are not used at all.

 I can't.  You win.  :-)

 Actually, I wonder if there's enough performance improvement there
 that we might think about extracting that part of the patch and apply
 it separately.  Then we could continue trying to figure out what to do
 with the rest.  Sometimes it's simpler to deal with one change at a
 time.

I tested this today and the answer was a resounding yes.  I ran
sum(levenshtein(t, 'foo')) over a dictionary file with about 2 million
words and got a speedup of around 15% just by eliminating the
text_to_cstring() calls.  So I've committed that part of this patch.
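Schematically, the change is just to work from the varlena data and an
explicit length rather than materializing NUL-terminated copies --
along these lines:

    /* before: a palloc-and-copy per argument */
    /*   const char *s = text_to_cstring(PG_GETARG_TEXT_PP(0));   */

    /* after: read the datum in place */
    text       *src  = PG_GETARG_TEXT_PP(0);
    const char *s    = VARDATA_ANY(src);        /* not NUL-terminated */
    int         slen = VARSIZE_ANY_EXHDR(src);  /* so track the length */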

I'll try to look at the rest of the patch when I get a chance, but I'm
wondering if it might make sense to split it into two patches -
specifically, one patch to handle multi-byte characters correctly, and
then a second patch for the less-than-or-equal-to functions.  I think
that might simplify reviewing a bit.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] reducing NUMERIC size for 9.1

2010-07-29 Thread Tom Lane
Robert Haas robertmh...@gmail.com writes:
 On Thu, Jul 29, 2010 at 1:20 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 On-disk is what I'm thinking about.  Right now, a NaN's first word is
 all dscale except the sign bits.  You're proposing to change that
 but I think it's unnecessary to do so.

 *Where* am I proposing this?

Um, your patch has the comment

!  * If the high bits of n_scale_dscale are NUMERIC_NAN, the two-byte header
!  * format is also used, but the low bits of n_scale_dscale are discarded in
!  * this case.

but now that I look a bit more closely, I don't think that's what the
code is doing.  You've got the NUMERIC_DSCALE and NUMERIC_WEIGHT access
macros testing specifically for NUMERIC_IS_SHORT, not for high-bit-set
which I think is what I was assuming they'd do.  So actually that code
is good as is: a NAN still has the old header format.  It's just the
comment that's wrong.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] patch (for 9.1) string functions

2010-07-29 Thread Erik Rijkers
Hi Pavel,

In xfunc.sgml, I came across a function example (for use of VARIADIC
in polymorphic functions) where the function name is concat() (in the
manual: 35.4.10, Polymorphic SQL Functions). Although that is not
strictly wrong, it seems better to change that name when concat() goes
into core, as seems to be the plan.

If you agree, it seems best to include this change in your patch and change 
that example
function's name when the stringfunc patch gets applied.


Erik Rijkers


xfunc.sgml.diff
Description: Binary data

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] reducing NUMERIC size for 9.1

2010-07-29 Thread Robert Haas
On Thu, Jul 29, 2010 at 4:37 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Robert Haas robertmh...@gmail.com writes:
 On Thu, Jul 29, 2010 at 1:20 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 On-disk is what I'm thinking about.  Right now, a NaN's first word is
 all dscale except the sign bits.  You're proposing to change that
 but I think it's unnecessary to do so.

 *Where* am I proposing this?

 Um, your patch has the comment

 !  * If the high bits of n_scale_dscale are NUMERIC_NAN, the two-byte header
 !  * format is also used, but the low bits of n_scale_dscale are discarded in
 !  * this case.

 but now that I look a bit more closely, I don't think that's what the
 code is doing.  You've got the NUMERIC_DSCALE and NUMERIC_WEIGHT access
 macros testing specifically for NUMERIC_IS_SHORT, not for high-bit-set
 which I think is what I was assuming they'd do.  So actually that code
 is good as is: a NAN still has the old header format.  It's just the
 comment that's wrong.

OK.  I think you're misinterpreting the point of that comment, which
may mean that it needs some clarification.  By "the two-byte header
format is also used", I think I really meant "the header (and in fact
the entire value) is just 2 bytes".  Really, the low order bits have
neither the old interpretation nor the new interpretation: they don't
have any interpretation at all - they're completely meaningless.
That's what the part after the word "but" was intended to clarify.
Every routine in numeric.c checks for NUMERIC_IS_NAN() and gives it
some special handling before doing anything else, so NUMERIC_WEIGHT()
and NUMERIC_DSCALE() are never called in that case.
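That is, every entry point follows this pattern (illustrative only;
make_result() and const_nan are numeric.c's existing internal helpers,
and the function itself is hypothetical):

    Datum
    numeric_example(PG_FUNCTION_ARGS)
    {
        Numeric num = PG_GETARG_NUMERIC(0);

        if (NUMERIC_IS_NAN(num))
            PG_RETURN_NUMERIC(make_result(&const_nan));  /* bail early */

        /* only past this point are WEIGHT/DSCALE ever examined */
        PG_RETURN_INT32(NUMERIC_DSCALE(num));
    }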

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] patch (for 9.1) string functions ( correct patch attached )

2010-07-29 Thread Erik Rijkers
On Thu, July 29, 2010 22:43, Erik Rijkers wrote:
 Hi Pavel,

 In xfunc.sgml, I came across a function example (for use of VARIADIC
 in polymorphic functions) where the function name is concat() (in the
 manual: 35.4.10, Polymorphic SQL Functions). Although that is not
 strictly wrong, it seems better to change that name when concat() goes
 into core, as seems to be the plan.

 If you agree, it seems best to include this change in your patch and change 
 that example
 function's name when the stringfunc patch gets applied.


My apologies, the previous email had the wrong doc-patch attached.

Here is the correct one.


Erik Rijkers


xfunc.sgml.diff
Description: Binary data

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] merge command - GSoC progress

2010-07-29 Thread Robert Haas
On Wed, Jul 28, 2010 at 11:51 AM, Boxuan Zhai bxzhai2...@gmail.com wrote:
 I have fixed the action qual problem. Now the system can run the MERGE
 command with quals.

 I created a clean patch file (no debug clauses). See the attachment.

 Please try this new command if you are interested.

So, I tried this today, but:

- I got some compiler warnings in analyze.c, and
- when I tried to run 'make check' with the patch applied, initdb failed.

So you still need to do some more bug-squashing on this...

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] reducing NUMERIC size for 9.1

2010-07-29 Thread Tom Lane
Robert Haas robertmh...@gmail.com writes:
 OK.  I think you're misinterpreting the point of that comment, which
 may mean that it needs some clarification.  By "the two-byte header
 format is also used", I think I really meant "the header (and in fact
 the entire value) is just 2 bytes".  Really, the low order bits have
 neither the old interpretation nor the new interpretation: they don't
 have any interpretation at all - they're completely meaningless.
 That's what the part after the word "but" was intended to clarify.
 Every routine in numeric.c checks for NUMERIC_IS_NAN() and gives it
 some special handling before doing anything else, so NUMERIC_WEIGHT()
 and NUMERIC_DSCALE() are never called in that case.

I would suggest the comment ought to read something like

NaN values also use a two-byte header (in fact, the
whole value is always only two bytes).  The low order bits of
the header word are available to store dscale, though dscale
is not currently used with NaNs.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] reducing NUMERIC size for 9.1

2010-07-29 Thread Robert Haas
On Thu, Jul 29, 2010 at 5:03 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Robert Haas robertmh...@gmail.com writes:
 OK.  I think you're misinterpreting the point of that comment, which
 may mean that it needs some clarification.  By "the two-byte header
 format is also used", I think I really meant "the header (and in fact
 the entire value) is just 2 bytes".  Really, the low order bits have
 neither the old interpretation nor the new interpretation: they don't
 have any interpretation at all - they're completely meaningless.
 That's what the part after the word "but" was intended to clarify.
 Every routine in numeric.c checks for NUMERIC_IS_NAN() and gives it
 some special handling before doing anything else, so NUMERIC_WEIGHT()
 and NUMERIC_DSCALE() are never called in that case.

 I would suggest the comment ought to read something like

        NaN values also use a two-byte header (in fact, the
        whole value is always only two bytes).  The low order bits of
        the header word are available to store dscale, though dscale
        is not currently used with NaNs.

I can do something along those lines, though I'm reluctant to mention
dscale specifically since we have no agreement that such a thing makes
sense, or is the only/best use for those bits.  Did you look at the
patch to move the numeric stuff out of the .h file that I attached a
few emails back?  If that looks OK, I can commit it and then redo the
rest of this along the lines we've discussed.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] On Scalability

2010-07-29 Thread Vincenzo Romano
2010/7/29 Josh Berkus j...@agliodbs.com:

 Do you think I should ask somewhere else?
 Any hint?

 I might suggest asking on the pgsql-performance mailing list instead.
 You'll get *lots* more speculation there.  However, the only way you're
 really going to know is to test.

Or maybe checking against the source code and its documentation, if any.

-- 
Vincenzo Romano at NotOrAnd Information Technologies
Software Hardware Networking Training Support Security
--
NON QVIETIS MARIBVS NAVTA PERITVS

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] reducing NUMERIC size for 9.1

2010-07-29 Thread Tom Lane
Robert Haas robertmh...@gmail.com writes:
 Did you look at the
 patch to move the numeric stuff out of the .h file that I attached a
 few emails back?  If that looks OK, I can commit it and then redo the
 rest of this along the lines we've discussed.

A couple of thoughts:

* It'd be good to get NUMERIC_HDRSZ out of there too, especially since
it'll have only the shakiest connection to reality after this patch goes in.
It looks like doing that would only require modifying type_maximum_size().
I'd suggest inventing another small function along the lines of
int32 numeric_maximum_size(int32 typemod)
so that the NUMERIC-specific knowledge can be pulled out of
format_type.c (a rough sketch follows below).

* I'd suggest leaving a comment along the lines of
/* The actual contents of Numeric are private to numeric.c */
with the now-opaque typedef for Numeric.

Otherwise +1.
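A rough sketch of such a helper, carrying over the arithmetic
type_maximum_size() uses today (2 decimal digits per byte, plus a
fixed header); whether that formula is truly a worst case is a
separate question:

    int32
    numeric_maximum_size(int32 typemod)
    {
        int     precision;

        if (typemod <= (int32) VARHDRSZ)
            return -1;          /* unconstrained numeric */

        /* precision (max decimal digits) is in the upper bits */
        precision = ((typemod - VARHDRSZ) >> 16) & 0xffff;

        return (precision + 1) / 2 + NUMERIC_HDRSZ;
    }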

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] ERROR: argument to pg_get_expr() must come from system catalogs

2010-07-29 Thread Tom Lane
Dave Page dp...@pgadmin.org writes:
 We had a report of the above error from a pgAdmin user testing
 1.12.0b3 with PG 9.0b3. The (highly simplified) query below works fine
 as a superuser:

 SELECT pg_get_expr(proargdefaults, 'pg_catalog.pg_class'::regclass)
   FROM pg_proc pr
   LEFT OUTER JOIN pg_description des ON des.objoid=pr.oid

 Run as a regular user though, we get the error.

I've applied a (rather hurried) patch for this for 9.0beta4.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] review patch: Distinguish between unique indexes and unique constraints

2010-07-29 Thread Kevin Grittner
The patch is in context diff format and applies cleanly.  No doc
changes were included.  Arguably there should be a mention in the
documentation for psql's \d+ command, but since the number of child
tables and the display of reloptions aren't mentioned, perhaps we're
not trying to list *all* the differences the + makes.  If those
don't merit mention, this doesn't.
 
The patch implements what it's supposed to, there was consensus on
the list that we want it, we don't already have it, the SQL spec
doesn't apply to psql's backslash commands, and it was added just
for the verbose listing as requested (with no objection).  There
are no pg_dump issues.  The only danger is that someone is depending
on the current \d+ format and will be surprised at the new
distinction between unique indexes and unique constraints.  All
bases seem to me to be covered.
 
The feature works as advertised: I saw no problem corner cases, no
assertion failures or crashes.
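For illustration (hypothetical object names; the extra flag only
appears in the verbose \d+ listing):

    CREATE TABLE t (a int);
    ALTER TABLE t ADD CONSTRAINT t_a_key UNIQUE (a);  -- unique constraint
    CREATE UNIQUE INDEX t_a_idx ON t (a);             -- bare unique index

With the patch, \d+ t marks t_a_key as constraint-backed while t_a_idx
shows as plain UNIQUE; unadorned \d output is unchanged.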
 
The patch causes no noticeable performance change, doesn't claim to
affect performance, and will not slow down anything else.
 
The patch does not follow coding guidelines, as it places the
opening brace for an if block on the same line as the if.
There are no portability issues; it should work everywhere that
current backslash commands work.  The purpose of the code is obvious
enough to not require any comment lines.  It does what it says,
correctly.  It doesn't produce compiler warnings and does not crash.
 
It fits together coherently with all else, and introduces no new
interdependencies.
 
I am attaching a new version of the patch which fixes the formatting
issue and rearranges the code slightly in a way which I find more
readable.  (I leave it to the committer to determine which
arrangement better suits.)
 
I am marking this Ready for Committer.
 
-Kevin
*** a/src/bin/psql/describe.c
--- b/src/bin/psql/describe.c
*************** describeOneTableDetails(const char *schemaname,
*** 1592,1598 ****
--- 1592,1602 ----
  			if (strcmp(PQgetvalue(result, i, 1), "t") == 0)
  				appendPQExpBuffer(&buf, " PRIMARY KEY,");
  			else if (strcmp(PQgetvalue(result, i, 2), "t") == 0)
+ 			{
  				appendPQExpBuffer(&buf, " UNIQUE,");
+ 				if (verbose && strcmp(PQgetvalue(result, i, 7), "u") == 0)
+ 					appendPQExpBuffer(&buf, " CONSTRAINT,");
+ 			}
  
  			/* Everything after USING is echoed verbatim */
  			indexdef = PQgetvalue(result, i, 5);

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] reducing NUMERIC size for 9.1

2010-07-29 Thread Robert Haas
On Thu, Jul 29, 2010 at 5:35 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Robert Haas robertmh...@gmail.com writes:
 Did you look at the
 patch to move the numeric stuff out of the .h file that I attached a
 few emails back?  If that looks OK, I can commit it and then redo the
 rest of this along the lines we've discussed.

 A couple of thoughts:

 * It'd be good to get NUMERIC_HDRSZ out of there too, especially since
 it'll have only the shakiest connection to reality after this patch goes in.
 It looks like doing that would only require modifying type_maximum_size().
 I'd suggest inventing another small function along the lines of
        int32 numeric_maximum_size(int32 typemod)
 so that the NUMERIC-specific knowledge can be pulled out of format_type.c.

 * I'd suggest leaving a comment along the lines of
        /* The actual contents of Numeric are private to numeric.c */
 with the now-opaque typedef for Numeric.

 Otherwise +1.

Done, with those changes.

But, looking at it a bit more carefully, isn't the maximum-size logic
for numeric rather bogus?

rhaas=# \d n
      Table "public.n"
 Column |     Type     | Modifiers
--------+--------------+-----------
 a      | numeric(2,1) |

rhaas=# select atttypmod from pg_attribute where attrelid =
'n'::regclass and attname = 'a';
 atttypmod
-----------
    131077
(1 row)

(gdb) p numeric_maximum_size(131077)
$1 = 9

rhaas=# select a, pg_column_size(a),
pg_column_size(a::text::numeric(2,1)) from n;
  a  | pg_column_size | pg_column_size
-----+----------------+----------------
 1.1 |              9 |             12
(1 row)

I.e., according to the max-size logic, the ostensible maximum size of a
numeric(2,1) is 9 bytes, but in fact the real maximum is 12 bytes =
4-byte varlena header + 2 bytes for sign/dscale + 2 bytes for weight +
(2 NumericDigits * 2 bytes/digit).

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] reducing NUMERIC size for 9.1

2010-07-29 Thread Tom Lane
Robert Haas robertmh...@gmail.com writes:
 But, looking at it a bit more carefully, isn't the maximum-size logic
 for numeric rather bogus?

Perhaps, but I think you're confused on at least one point.
numeric(2,1) has to be able to hold 2 decimal digits, not 2
NumericDigits (which'd actually be 8 decimal digits given
the current code).

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers