Re: [HACKERS] Review: Non-inheritable check constraints

2011-10-07 Thread Nikhil Sontakke
Hi Alex,

 Hmmm, your patch checks for a constraint being 'only' via:
 
 !recurse && !recursing
 
  I hope that is good enough to conclude definitively that the constraint is
  'only'. This check was not too readable in the existing code for me anyways
  ;). If we check at the grammar level, we can be sure. But I remember not
  being too comfortable about the right position to ascertain this
  characteristic.

 Well, I traced through it; here was my thinking (maybe this should be a comment?):

 1: AlterTable() calls ATController() with recurse =
 interpretInhOption(stmt->relation->inhOpt)
 2: ATController() calls ATPrepCmd() with recurse and recursing = false
 3: ATPrepCmd() saves the recurse flag with the subtype
 AT_AddConstraintRecurse, otherwise the subtype is AT_AddConstraint
 4: ATExecCmd() calls ATExecAddConstraint() with recurse == true when
 subtype == AT_AddConstraintRecurse, recurse = false otherwise
 5: ATExecAddConstraint() calls ATAddCheckConstraint() with recurse and
 recursing = false
 6: now we are in ATAddCheckConstraint() where recurse ==
 interpretInhOption(rv->inhOpt) and recursing == false. Recursing is
 only true when ATAddCheckConstraint() loops through children and
 recursively calls ATAddCheckConstraint()

 So with it all spelled out now I see the "constraint must be added to
 child tables too" check is dead code.
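 (For reference, the test in question boils down to roughly this -- a
 condensed illustration, not the exact tablecmds.c source:

     /* By the time ATAddCheckConstraint() runs, per the trace above:
      *   recurse   == interpretInhOption(rv->inhOpt), false for ALTER TABLE ONLY
      *   recursing == true only when called recursively for a child table
      */
     bool    is_only_constraint = !recurse && !recursing;
 )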


Thanks, the above step-wise explanation helps.

But AFAICS, the default inhOpt value can be governed by the SQL_inheritance
guc too. So in that case, it's possible that recurse is false and child
tables are present, no?

In fact, as I now remember, the reason my patch was looping through was to
handle this very case. It was based on the assumption that some constraints
might be ONLY-type and some can be inheritable. Although admittedly the
current ALTER TABLE functionality does not allow this.

So maybe we can still keep this check around IMO.

Regards,
Nikhils


Re: [HACKERS] Review: Non-inheritable check constraints

2011-10-07 Thread Alex Hunsaker
On Fri, Oct 7, 2011 at 00:28, Nikhil Sontakke nikkh...@gmail.com wrote:
 Hi Alex,

 So with it all spelled out now I see the "constraint must be added to
 child tables too" check is dead code.


 Thanks the above step-wise explanation helps.

 But AFAICS, the default inhOpt value can be governed by the SQL_inheritance
 guc too. So in that case, it's possible that recurse is false and child
 tables are present, no?

Well... Do we really want to differentiate between those two cases? I
would argue no.

Given that:
  set sql_inheritance to off;
  alter table xxx alter column;
behaves the same as
  set sql_inheritance to on;
  alter table only xxx alter column;

Why should we treat constraints differently? Or, put another way, if set
sql_inheritance off makes alter table behave with an implicit only,
shouldn't add/drop constraint respect that?

 In fact, as I now remember, the reason my patch was looping through was to
 handle this very case. It was based on the assumption that some constraints
 might be ONLY-type and some can be inheritable.
 Although admittedly the current ALTER TABLE functionality does not allow this.

Hrm... All I see is a user who turned off sql_inheritance wondering why
they have to specify ONLY on some alter table commands and not others.
I think if we want to support ONLY constraint types in the way you
are thinking about them, we need to put ONLY some place else (alter
table xxx add only constraint ?). Personally I don't see a reason to
have that kind of constraint, mostly because I don't see how it's
functionally different. Is it somehow?

Anyone else have any thoughts on this?



Re: [HACKERS] libpq, PQdescribePrepared - PQftype, PQfmod, no PQnullable

2011-10-07 Thread Peter Eisentraut
On tor, 2011-10-06 at 20:15 -0400, Alex Goncharov wrote:
 P.S. And on the odd chance that somebody thinks that this
  functionality would be possible and helpful to add to libpq, and
  the problem is in the lack of human resources: I would be more
  than happy to dig into some PostgreSQL (the product) development
  under somebody's coaching, to start with.  This topic or other.
  I just wouldn't know where to start myself. 

I did some research on this particular topic/feature recently.  My
notes currently say it's better not to tackle this before the not-null
cataloging patch (see entry in the current commitfest) is finished.  Because
that patch would presumably already implement much of the logic
necessary to determine whether a given expression implies nullability or
not, and catalog this in a simpler fashion.  Based on that you will then
have to drag this information around and put it on the wire so that the
client APIs can process it.




GiST for range types (was Re: [HACKERS] Range Types - typo + NULL string constructor)

2011-10-07 Thread Alexander Korotkov
On Fri, Oct 7, 2011 at 7:41 AM, Jeff Davis pg...@j-davis.com wrote:

 I'd prefer to include it in the initial patch. If the current GiST code
 is going to be replaced, then there's not much sense reviewing/testing
 it.

 You may need to consider unbounded and empty ranges specially. I made an
 attempt to do so in the current GiST code, and you might want to take a
 look at that first. I'm not particularly attached to my approach, but we
 should do something reasonable with unbounded and empty ranges.


The first thing that caught my eye in the existing GiST code is the idea of
subtype_float. float8 has limited precision and can't represent, for
example, varlena values well enough. Even if we have a large int8 value we can
lose the lower bits, but the data distribution can be such that these bits are
valuable. Wouldn't it be better to have a function like subtype_diff_float, which
returns the difference between two values of the subtype as a float? Using such a
function could make the penalty sensitive even to small differences between
values, and accordingly more relevant.
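
To make that concrete, a hypothetical callback of that shape for an int8-based
range could look like the sketch below (the name and signature are only an
illustration, nothing like this exists in the current code):

    /* Hypothetical subtype_diff-style helper for an int8 range subtype:
     * return the signed distance between two subtype values as a double,
     * so a GiST penalty built on it reacts even to small differences. */
    static float8
    int8_subtype_diff(Datum a, Datum b)
    {
        return (float8) (DatumGetInt64(a) - DatumGetInt64(b));
    }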

--
With best regards,
Alexander Korotkov.


[HACKERS] [OT?] Time-zone database down [was: Re: timezone buglet?]

2011-10-07 Thread Andrea Suisani

On 10/05/2011 07:37 AM, Tom Lane wrote:

davegda...@sonic.net  writes:

Postgresql 9.0.4 has the timezone:
   America/Blanc-Sablon
However other sources seem to spell this with an underscore instead of dash:
   America/Blanc_Sablon


I don't know what other sources you're consulting, but Blanc-Sablon
is the way it appears in the Olson timezone database, and that's what
we follow.


Speaking of the Olson tz database, I've just stumbled across this post
and I thought it would be worth reporting here:

http://blog.joda.org/2011/10/today-time-zone-database-was-closed.html




We're not going to get into the business of editorializing
on their information.  If you want to fool with it locally, look into
the .../share/timezone/ directory.

regards, tom lane





Re: [HACKERS] WIP: Collecting statistics on CSV file data

2011-10-07 Thread Etsuro Fujita
Hi Hanada-san,

I'm very sorry for the late reply.

(2011/09/20 18:49), Shigeru Hanada wrote:
 I took a look at the patch, and found that it couldn't be applied
 cleanly against HEAD.  Please rebase your patch against current HEAD of
 master branch, rather than 9.1beta1.
 
 The wiki pages below would be helpful for you.
http://wiki.postgresql.org/wiki/Submitting_a_Patch
http://wiki.postgresql.org/wiki/Creating_Clean_Patches
http://wiki.postgresql.org/wiki/Reviewing_a_Patch
 
 And it would be easy to use git to follow changes made by other
 developers in master branch.
 http://wiki.postgresql.org/wiki/Working_with_Git

Thank you for the review and the helpful information.
I rebased. Please find attached a patch. I'll add the patch to the next CF.

Changes:

  * cleanups and fixes
  * addition of the following to ALTER FOREIGN TABLE
  ALTER [COLUMN] column SET STATISTICS integer
  ALTER [COLUMN] column SET ( n_distinct = val ) (n_distinct only)
  ALTER [COLUMN] column RESET ( n_distinct )
  * reflection of the force_not_null info in acquiring sample rows
  * documentation
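
For example, with the patch applied one can do (the table and column names here
are just hypothetical):

    ALTER FOREIGN TABLE csv_accounts ALTER COLUMN id SET STATISTICS 500;
    ALTER FOREIGN TABLE csv_accounts ALTER COLUMN id SET (n_distinct = 1000);
    ALTER FOREIGN TABLE csv_accounts ALTER COLUMN id RESET (n_distinct);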

Best regards,
Etsuro Fujita
*** a/contrib/file_fdw/file_fdw.c
--- b/contrib/file_fdw/file_fdw.c
***
*** 15,30 
--- 15,42 
  #include <sys/stat.h>
  #include <unistd.h>
  
+ #include "access/htup.h"
  #include "access/reloptions.h"
+ #include "access/transam.h"
  #include "catalog/pg_foreign_table.h"
  #include "commands/copy.h"
+ #include "commands/dbcommands.h"
  #include "commands/defrem.h"
  #include "commands/explain.h"
+ #include "commands/vacuum.h"
  #include "foreign/fdwapi.h"
  #include "foreign/foreign.h"
  #include "miscadmin.h"
  #include "nodes/makefuncs.h"
  #include "optimizer/cost.h"
+ #include "optimizer/plancat.h"
+ #include "parser/parse_relation.h"
+ #include "pgstat.h"
+ #include "utils/attoptcache.h"
+ #include "utils/elog.h"
+ #include "utils/guc.h"
+ #include "utils/lsyscache.h"
+ #include "utils/memutils.h"
  #include "utils/rel.h"
  #include "utils/syscache.h"
  
***
*** 101,106  static void fileBeginForeignScan(ForeignScanState *node, int 
eflags);
--- 113,119 
  static TupleTableSlot *fileIterateForeignScan(ForeignScanState *node);
  static void fileReScanForeignScan(ForeignScanState *node);
  static void fileEndForeignScan(ForeignScanState *node);
+ static void fileAnalyzeForeignTable(Relation onerel, VacuumStmt *vacstmt, int 
elevel);
  
  /*
   * Helper functions
***
*** 112,118  static List *get_file_fdw_attribute_options(Oid relid);
  static void estimate_costs(PlannerInfo *root, RelOptInfo *baserel,
   const char *filename,
   Cost *startup_cost, Cost *total_cost);
! 
  
  /*
   * Foreign-data wrapper handler function: return a struct with pointers
--- 125,132 
  static void estimate_costs(PlannerInfo *root, RelOptInfo *baserel,
   const char *filename,
   Cost *startup_cost, Cost *total_cost);
! static void file_fdw_do_analyze_rel(Relation onerel, VacuumStmt *vacstmt, int 
elevel);
! static int  file_fdw_acquire_sample_rows(Relation onerel, int elevel, 
HeapTuple *rows, int targrows, BlockNumber *totalpages, double *totalrows);
  
  /*
   * Foreign-data wrapper handler function: return a struct with pointers
***
*** 129,134  file_fdw_handler(PG_FUNCTION_ARGS)
--- 143,149 
	fdwroutine->IterateForeignScan = fileIterateForeignScan;
	fdwroutine->ReScanForeignScan = fileReScanForeignScan;
	fdwroutine->EndForeignScan = fileEndForeignScan;
+	fdwroutine->AnalyzeForeignTable = fileAnalyzeForeignTable;
  
PG_RETURN_POINTER(fdwroutine);
  }
***
*** 575,580  fileReScanForeignScan(ForeignScanState *node)
--- 590,605 
  }
  
  /*
+  * fileAnalyzeForeignTable
+  *Analyze table
+  */
+ static void
+ fileAnalyzeForeignTable(Relation onerel, VacuumStmt *vacstmt, int elevel)
+ {
+   file_fdw_do_analyze_rel(onerel, vacstmt, elevel);
+ }
+ 
+ /*
   * Estimate costs of scanning a foreign table.
   */
  static void
***
*** 584,590  estimate_costs(PlannerInfo *root, RelOptInfo *baserel,
  {
struct stat stat_buf;
BlockNumber pages;
!   int tuple_width;
double  ntuples;
double  nrows;
Costrun_cost = 0;
--- 609,616 
  {
struct stat stat_buf;
BlockNumber pages;
!   BlockNumber relpages;
!   double  reltuples;
double  ntuples;
double  nrows;
Costrun_cost = 0;
***
*** 604,619  estimate_costs(PlannerInfo *root, RelOptInfo *baserel,
	if (pages < 1)
pages = 1;
  
!   /*
!* Estimate the number of tuples in the file.  We back into this 
estimate
!* using the planner's idea of the relation width; which is bogus if not
!* all columns are being read, not to mention that 

[HACKERS] Why does WAL_DEBUG macro need to be defined by default?

2011-10-07 Thread Fujii Masao
Hi,

I found that by default the WAL_DEBUG macro has been defined in
9.2dev and 9.1. I'm very surprised at this. Why does WAL_DEBUG
need to be defined by default? Is the performance overhead
introduced by WAL_DEBUG really vanishingly low?

WAL_DEBUG was defined in the following commit:
53dbc27c62d8e1b6c5253feba04a5094cb8fe046

--
Support unlogged tables.

The contents of an unlogged table are not WAL-logged; thus, they are not
available on standby servers and are truncated whenever the database
system enters recovery.  Indexes on unlogged tables are also unlogged.
Unlogged GiST indexes are not currently supported.
--

Regards,

-- 
Fujii Masao
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center



Re: [HACKERS] [REVIEW] Patch for cursor calling with named parameters

2011-10-07 Thread Yeb Havinga

On 2011-10-06 16:04, Royce Ausburn wrote:

Initial Review for patch:

http://archives.postgresql.org/pgsql-hackers/2011-09/msg00744.php


Hello Royce,

Thank you for your review.



I don't think so.  The new feature accepts opening a cursor with some 
parameter names not specified:


  open cur1(param3 := 4, 1, param1 := 5);

It seems that if a parameter is not named, its position is used to 
bind to a variable.  For example, the following fail:


psql:plsqltest.sql:26: ERROR:  cursor cur1 argument 2 param2 
provided multiple times

LINE 10:   open cur1(param3 := 4, 1, param2 := 5);

and

psql:plsqltest.sql:26: ERROR:  cursor cur1 argument 2 param2 
provided multiple times

LINE 10:   open cur1(param2 := 4, 2, param1 := 5);


I think that postgres ought to enforce some consistency here.  Use one 
way or the other, never both.


This was meant as a feature, but I can remove it.
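
For context, the examples above assume a bound cursor declared along these
lines (a hypothetical sketch; the parameter names are taken from the error
messages):

    DECLARE
        cur1 CURSOR (param1 int, param2 int, param3 int) FOR
            SELECT param1 + param2 + param3;
    BEGIN
        OPEN cur1(param3 := 4, 1, param1 := 5);  -- the bare 1 binds by position to param2
        ...
    END;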



I can also produce some unhelpful errors when I give bad syntax.  For 
example:


psql:plsqltest.sql:28: ERROR:  cursor cur1 argument 1 param1 
provided multiple times

LINE 11:   open cur1( param3 : = 4, 2, param1 := 5);
(notice the space between the : and =)


Yes, the whole of the expression before the first comma, 'param3 : = 4', 
is not recognized as a 'parametername := expression' construct, so it 
is taken as the value of the first parameter. This value is parsed after 
all named arguments are read, and hence no meaningful error is given. If 
there had been no param1 parameter name at the end, the 'multiple times' 
error would not have caused the processing to stop, and a syntax error 
at the correct ':' would have been given.


The same reasoning also explains the other 'multiple times' errors you 
could get, by putting a syntax error in some value.


--

  open cur1( param3 := param3 , param2 = 3, param1 := 1 );

psql:plsqltest.sql:29: ERROR:  column param2 does not exist
LINE 2: ,param2 = 3
 ^
QUERY:  SELECT 1
,param2 = 3
,param3;
CONTEXT:  PL/pgSQL function named_para_test line 7 at OPEN


This is a valid error, since the parser / SQL will try to evaluate the 
boolean expression param2 = 3, while param2 is not a defined variable.


Again, thank you very much for your thorough review. I'll update the 
patch so that mixing positional and named parameters is removed, add 
documentation, and give syntax errors before an error message indicating 
that positional and named parameters were mixed.


--
Yeb Havinga
http://www.mgrid.net/
Mastering Medical Data




Re: [HACKERS] Why does WAL_DEBUG macro need to be defined by default?

2011-10-07 Thread Heikki Linnakangas

On 07.10.2011 12:19, Fujii Masao wrote:

Hi,

I found that by default the WAL_DEBUG macro has been defined in
9.2dev and 9.1. I'm very surprised at this. Why does WAL_DEBUG
need to be defined by default? Is the performance overhead
introduced by WAL_DEBUG really vanishingly low?

WAL_DEBUG was defined in the following commit:
53dbc27c62d8e1b6c5253feba04a5094cb8fe046

--
 Support unlogged tables.

  The contents of an unlogged table are not WAL-logged; thus, they are not
 available on standby servers and are truncated whenever the database
 system enters recovery.  Indexes on unlogged tables are also unlogged.
 Unlogged GiST indexes are not currently supported.
--


I'm pretty sure that change was included in the commit by accident.

That said, the overhead of WAL_DEBUG probably is insignificant, as long 
as you don't actually set wal_debug=on. I wonder if we should leave it 
enabled.


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com



Re: [HACKERS] WIP: Collecting statistics on CSV file data

2011-10-07 Thread Etsuro Fujita

Hi,

I'm very sorry for the late reply.

(2011/09/21 10:00), Alvaro Herrera wrote:

Excerpts from David Fetter's message of mar sep 20 21:22:32 -0300 2011:

On Tue, Sep 20, 2011 at 11:13:05AM -0400, Tom Lane wrote:



Probably a more interesting question is why we wouldn't change
autovacuum so that it calls this automatically for foreign tables.


How about a per-table setting that tells autovacuum whether to do
this?


Seems reasonable.  Have autovacuum assume that foreign tables are not to
be analyzed, unless some reloption is set.


Thank you for the comments. I'd like to leave that feature for future work.

(But this is a bit of an aside: I'm interested in developing CREATE FOREIGN INDEX. 
I've examined whether there are discussions about the design and 
implementation of it in the archives, but could not find any information. If 
you know anything, please tell me.)


Best regards,
Etsuro Fujita



Re: [HACKERS] libpq, PQdescribePrepared - PQftype, PQfmod, no PQnullable

2011-10-07 Thread Merlin Moncure
On Thu, Oct 6, 2011 at 5:02 PM, Alex Goncharov
alex-goncha...@comcast.net wrote:
 ,--- I/Alex (Thu, 06 Oct 2011 14:02:14 -0400) *
 | My understanding is that libpq does not allow one to find if a result
 | set column is nullable.
 ,--- You/Merlin (Thu, 6 Oct 2011 15:16:18 -0500) *
 | why aren't you using PQgetisnull()?
 ,--- I/Alex (Thu, 06 Oct 2011 16:22:28 -0400) *
 | This function is not about the nullability of a column but rather
 | about the value in a result set cell:
 |      int PQgetisnull(const PGresult *res, int row_number, int 
 column_number);
 | Notice the 'row_number'.
 ,--- Merlin Moncure (Thu, 6 Oct 2011 15:38:59 -0500) *
 | right -- get it.  well, your question doesn't make sense then --

 What?..

 * It makes complete logical sense to ask whether a result set
  column may ever have a NULL cell.

 * It can be done for a table using pg_attribute.attnotnull.

 * It can be done, at the C API level, in a wide variety of other
  databases, including the two most often mentioned in this audience:
  Oracle (through an OCI call) and MySQL (at least through ODBC).
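 
  (For the record, that catalog lookup is essentially the following -- the
  table name is only an example:

      SELECT a.attname, a.attnotnull
        FROM pg_attribute a
       WHERE a.attrelid = 'my_table'::regclass
         AND a.attnum > 0
         AND NOT a.attisdropped;
  )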

 | any column can be transformed in an ad hoc query, so it only makes sense
 | to test individual values post-query.

 What query?

 Look at the subject line: it mentioned PQdescribePrepared.

 I execute PQprepare, and then PQdescribePrepared -- I never fetch the
 data.  When the statement is described, plenty information can be
 obtained about the columns -- but not its nullability (what I wanted
 to be confirmed or denied -- for libpq API.)

 | btw, if you don't like querying system catalogs, check out
 | information_schema.columns.

 That was not my question, right?  (What difference is there between
 using the pg_X tables or information_schema?)

 ,--- Florian Pflug (Thu, 6 Oct 2011 23:16:53 +0200) *
 | Sure, but there are still a lot of cases where the database could deduce
 | (quite easily) that a result column cannot be null.

 Right. Of course.  I can do it in 'psql'.

 | Other databases do that - for example, I believe to remember that
 | Microsoft SQL Server preserves NOT NULL constraints if you do
 |
 |   CREATE TABLE bar AS SELECT * from foo;

 I don't know a database where this would not be true.

 | So the question makes perfect sense, and the answer is: No, postgres 
 currently
 | doesn't support that, i.e. doesn't deduce the nullability of result columns,
 | not even in the simplest cases.

 You are wrong: as in my original mail, use pg_attribute.attnotnull to
 see why I say this.

 ,--- Merlin Moncure (Thu, 6 Oct 2011 16:28:56 -0500) *
 | hm, good point.  not sure how it's useful though.  I suppose an
 | application could leverage that for validation purposes, but that's a
 | stretch I think.
 `*

 Thanks for sharing your knowledge of applications.

 (Look, I appreciate anybody's reply and readiness to help, but if you
 have a limited expertise in the subject area, why bother replying?)


Well, admittedly, perhaps my response was hastily written.  But try to
understand the zen of things around here: often if you
propose/gripe/suggest something, you'll get a challenge back which is
really fishing for more detail.  It's not personal.  By the way, you
still haven't explained use cases.  You can always talk
hypotheticals...'other people do it' is not a standard for inclusion
of a feature (although it can be).  I've been coding against libpq for
years and years and have never needed to test for nullability, so
that's where my skepticism comes from.

merlin



Re: [HACKERS] WIP: Collecting statistics on CSV file data

2011-10-07 Thread David Fetter
On Fri, Oct 07, 2011 at 08:09:44PM +0900, Etsuro Fujita wrote:
 Hi,
 
 I'm very sorry for the late reply.
 
 (2011/09/21 10:00), Alvaro Herrera wrote:
 Excerpts from David Fetter's message of mar sep 20 21:22:32 -0300 2011:
 On Tue, Sep 20, 2011 at 11:13:05AM -0400, Tom Lane wrote:
 
 Probably a more interesting question is why we wouldn't change
 autovacuum so that it calls this automatically for foreign tables.
 
 How about a per-table setting that tells autovacuum whether to do
 this?
 
 Seems reasonable.  Have autovacuum assume that foreign tables are not to
 be analyzed, unless some reloption is set.
 
 Thank you for the comments. I'd like to leave that feature for future work.

OK

 (But this is a bit of an aside: I'm interested in developing CREATE FOREIGN INDEX.
 I've examined whether there are discussions about the design and
 implementation of it in the archives, but could not find any information.
 If you know anything, please tell me.)

Look into the virtual index interface from Informix.  We might want
to start a wiki page on this.

Cheers,
David.
-- 
David Fetter da...@fetter.org http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: david.fet...@gmail.com
iCal: webcal://www.tripit.com/feed/ical/people/david74/tripit.ics

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate



Re: [HACKERS] libpq, PQdescribePrepared - PQftype, PQfmod, no PQnullable

2011-10-07 Thread Alex Goncharov
,--- Peter Eisentraut (Fri, 07 Oct 2011 11:14:09 +0300) *
| On tor, 2011-10-06 at 20:15 -0400, Alex Goncharov wrote:
|  P.S. And on the odd chance that somebody thinks that this
|   functionality would be possible and helpful to add to libpq, and
|   the problem is in the lack of human resources: I would be more
|   than happy to dig into some PostgreSQL (the product) development
|   under somebody's coaching, to start with.  This topic or other.
|   I just wouldn't know where to start myself. 
| 
| I did some research on this particular topic/feature recently.  My
| notes currently say it's better not to tackle this before the not-null
| cataloging patch (see entry in the current commitfest) is finished.

Peter,

Thank you -- this is very helpful: I was not aware of a commitfest
list.  Will try to check it out within a few days.

| Because that patch would presumably already implement much of the
| logic necessary to determine whether a give expression implies
| nullability or not and catalog this in a simpler fashion.  Based on
| that you will then have to drag this information around and put it
| on the wire so that the client APIs can process it.
`---*

Good to hear that I am not alone in the expressed wish. Thank you
again for all the technical details!

-- Alex -- alex-goncha...@comcast.net --



Re: [HACKERS] libpq, PQdescribePrepared - PQftype, PQfmod, no PQnullable

2011-10-07 Thread Alex Goncharov
,--- You/Merlin (Fri, 7 Oct 2011 07:39:57 -0500) *
| On Thu, Oct 6, 2011 at 5:02 PM, Alex Goncharov
|  ,--- Merlin Moncure (Thu, 6 Oct 2011 16:28:56 -0500) *
|  | hm, good point.  not sure how it's useful though.  I suppose an
|  | application could leverage that for validation purposes, but that's a
|  | stretch I think.
|  `*
| 
|  Thanks for sharing your knowledge of applications.
| 
|  (Look, I appreciate anybody's reply and readiness to help, but if you
|  have a limited expertise in the subject area, why bother replying?)
| Well, admittedly, perhaps my response was hastily written.  But try
| to understand the zen of things around here: often if you
| propose/gripe/suggest something, you'll get a challenge back which
| is really fishing for more detail.  It's not personal.

Merlin,

I appreciate the spirit of the PostgreSQL technical lists: I am
permanently subscribed to PERFORM, and, occasionally, to HACKERS.  I
regularly unsubscribe from the latter because it quickly overloads me
with the flood of messages I have no time even to read, not to say,
digest.  HACKERS would be one of the most useful technical reads, if
it were not so bloody floody.

  (On GENERAL, take a look at this reply to a question similar to mine:

http://archives.postgresql.org/pgsql-general/2005-08/msg01152.php

  What's the value of this kind of advice?)

| By the way, you still haven't explained use cases.

As I said yesterday, it is for my client to find various meta data.

Also note that I posted the references to common APIs (JDBC and ODBC),
where this interface is available, because nullability is a natural
thing to ask about.  You can also find how this kind of functionality
is supported, e.g. in Oracle OCI.

Plus, now you have seen, from Peter Eisentraut's message that I just
replied to, and from the mail archive link I posted a dozen lines
above here, that I am not the first person interested in this kind of
functionality in the PostgreSQL land.

| You can always talk hypotheticals...'other people do it' is not a
| standard for inclusion of a feature (although it can be).

I didn't ask anybody to include anything in PostgreSQL; my question,
now unambiguously answered (thank you, the list!) was:

,--- I/Alex (Thu, 06 Oct 2011 14:02:14 -0400) *
|
| My understanding is that libpq does not allow one to find if a result
| set column is nullable.
| 
| Is this right?
|
`-*

Compare this with what you have tried to write about.

| I've been coding against libpq for years and years and have never
| needed to test for nullability,

It's not a serious argument, in my opinion.

| so that's where my skepticism comes from.
`-*

But, sincerely, I do appreciate your readiness to help and continuing
the conversation this morning.

Thank you,

-- Alex -- alex-goncha...@comcast.net --




Re: [HACKERS] alter table only ... drop constraint broken in HEAD

2011-10-07 Thread Robert Haas
On Thu, Oct 6, 2011 at 11:38 PM, Alex Hunsaker bada...@gmail.com wrote:
 Oh, I see the problem, and I now agree that it's the DROP CONSTRAINT
 code that is buggy.

 Want me to roll this fix in as part of the alter table only constraint
 patch? Or keep it split out? We might want to backpatch to at least
 8.3 where HOT was introduced (yes looks like the bug existed back
 then). I suppose it's a fairly narrow chance to hit this bug so I could
 see the argument for not back patching...

Yeah, I'm not inclined to back-patch it.  The chance of hitting this
in older versions must be very small, or somebody would have noticed
by now.  If we get a report from the field, we can always back-patch
it then, but right now it doesn't seem worth taking any risks for.

On a related note, your fix seems slightly fragile to me ... because
we're pulling a CCI out of the innermost loop, but a CCI can still
happen inside that same loop if we recurse, because the recursive call
will do one before returning.  Now, maybe that's OK, because the
recursive call in that case will just be deleting the tuple, so there
won't be a new version for us to stumble over.  The only way we could
trip up in that case is if there were two identically named
constraints.  We'd have to visit the first tuple, update it, then
visit the second tuple, recurse (thus incrementing the command
counter), and then visit the updated version of the first tuple.  And
that should be impossible, because we've got code to disallow multiple
constraints on the same relation with the same name (though no unique
index, for some reason).  Still, that's a long chain of reasoning, so
I'm wondering if we can't come up with something that is more
obviously correct.

If we're confident that the inner loop here should never iterate more
than once (i.e. the lack of a unique index is not an ominous sign)
then maybe we should just rewrite this so that the inner loop scans
until it finds a match and then terminates.  Then, outside the loop,
we check whether a tuple was found and if so process it - but without
ever going back to look for another one.  See attached.
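
The restructured inner loop then has roughly this shape (a sketch of the idea
only, not the attached patch; "scan" and "constrName" come from the
surrounding function):

    HeapTuple   found = NULL;
    HeapTuple   tuple;

    /* Scan until the named constraint is found, then stop looking. */
    while (HeapTupleIsValid(tuple = systable_getnext(scan)))
    {
        Form_pg_constraint con = (Form_pg_constraint) GETSTRUCT(tuple);

        if (strcmp(NameStr(con->conname), constrName) == 0)
        {
            found = heap_copytuple(tuple);
            break;
        }
    }

    /* Process the single match, if any, outside the loop -- so a CCI done by
     * a recursive call can never make us revisit an updated version of it. */
    if (HeapTupleIsValid(found))
    {
        /* ... drop the constraint, recursing to children as needed ... */
    }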

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


drop-constraint.patch
Description: Binary data



Re: [HACKERS] Why does WAL_DEBUG macro need to be defined by default?

2011-10-07 Thread Robert Haas
On Fri, Oct 7, 2011 at 5:19 AM, Fujii Masao masao.fu...@gmail.com wrote:
 I found that by default the WAL_DEBUG macro has been defined in
 9.2dev and 9.1. I'm very surprised at this. Why does WAL_DEBUG
 need to be defined by default? Is the performance overhead
 introduced by WAL_DEBUG really vanishingly low?

 WAL_DEBUG was defined in the following commit:
 53dbc27c62d8e1b6c5253feba04a5094cb8fe046

 --
    Support unlogged tables.

    The contents of an unlogged table are not WAL-logged; thus, they are not
    available on standby servers and are truncated whenever the database
    system enters recovery.  Indexes on unlogged tables are also unlogged.
    Unlogged GiST indexes are not currently supported.
 --

Oh, dear.  That was a mistake on my part.  :-(

The funny thing is that I've been thinking all of these months about
how convenient it is that we defined WAL_DEBUG in debug builds, not
realizing that (1) we were defining it all the time, not just in debug
builds and (2) I was the one who accidentally did that.

Sorry, all.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] [REVIEW] Patch for cursor calling with named parameters

2011-10-07 Thread Yeb Havinga

On 2011-10-07 12:21, Yeb Havinga wrote:

On 2011-10-06 16:04, Royce Ausburn wrote:

Initial Review for patch:

http://archives.postgresql.org/pgsql-hackers/2011-09/msg00744.php



Again, thank you very much for your thorough review. I'll update the 
patch so that mixing positional and named parameters is removed, add 
documentation, and give syntax errors before an error message 
indicating that positional and named parameters were mixed.




Attached is v2 of the patch.

Mixed notation now raises an error.

In contrast with what I said above, named parameter related errors are 
thrown before any syntax errors. I tested with raising syntax errors 
first, but the resulting code was a bit more ugly, and the SQL checking 
under an error condition (i.e. a double named parameter error means there 
is one parameter too few) was causing serious errors.


Documentation was also added, regression tests updated.

regards,

--
Yeb Havinga
http://www.mgrid.net/
Mastering Medical Data

diff --git a/doc/src/sgml/plpgsql.sgml b/doc/src/sgml/plpgsql.sgml
new file mode 100644
index c14c34c..45081f8
*** a/doc/src/sgml/plpgsql.sgml
--- b/doc/src/sgml/plpgsql.sgml
*** END;
*** 2699,2718 
   Another way is to use the cursor declaration syntax,
   which in general is:
  synopsis
! replaceablename/replaceable optional optional NO /optional SCROLL /optional CURSOR optional ( replaceablearguments/replaceable ) /optional FOR replaceablequery/replaceable;
  /synopsis
   (literalFOR/ can be replaced by literalIS/ for
!  productnameOracle/productname compatibility.)
!  If literalSCROLL/ is specified, the cursor will be capable of
!  scrolling backward; if literalNO SCROLL/ is specified, backward
!  fetches will be rejected; if neither specification appears, it is
!  query-dependent whether backward fetches will be allowed.
!  replaceablearguments/replaceable, if specified, is a
!  comma-separated list of pairs literalreplaceablename/replaceable
!  replaceabledatatype/replaceable/literal that define names to be
!  replaced by parameter values in the given query.  The actual
!  values to substitute for these names will be specified later,
!  when the cursor is opened.
  /para
  para
   Some examples:
--- 2699,2717 
   Another way is to use the cursor declaration syntax,
   which in general is:
  synopsis
!  replaceablename/replaceable optional optional NO /optional SCROLL /optional CURSOR optional ( optional replaceableargname/replaceable /optional replaceableargtype/replaceable optional, .../optional) /optional FOR replaceablequery/replaceable;
  /synopsis
   (literalFOR/ can be replaced by literalIS/ for
!  productnameOracle/productname compatibility.)  If literalSCROLL/
!  is specified, the cursor will be capable of scrolling backward; if
!  literalNO SCROLL/ is specified, backward fetches will be rejected; if
!  neither specification appears, it is query-dependent whether backward
!  fetches will be allowed.  replaceableargname/replaceable, if
!  specified, defines the name to be replaced by parameter values in the
!  given query.  The actual values to substitute for these names will be
!  specified later, when the cursor is opened.
!  literalreplaceableargtype/replaceable/literal defines the datatype
!  of the parameter.
  /para
  para
   Some examples:
*** OPEN curs1 FOR EXECUTE 'SELECT * FROM '
*** 2827,2833 
   titleOpening a Bound Cursor/title
  
  synopsis
! OPEN replaceablebound_cursorvar/replaceable optional ( replaceableargument_values/replaceable ) /optional;
  /synopsis
  
   para
--- 2826,2832 
   titleOpening a Bound Cursor/title
  
  synopsis
!  OPEN replaceablebound_cursorvar/replaceable optional ( optional replaceableargname/replaceable := /optional replaceableargument_value/replaceable optional, .../optional ) /optional;
  /synopsis
  
   para
*** OPEN replaceablebound_cursorvar/repla
*** 2854,2864 
--- 2853,2875 
commandOPEN/.
   /para
  
+  para
+   Cursors that have named parameters may be opened using either
+   firsttermnamed/firstterm or firsttermpositional/firstterm
+   notation. In contrast with calling functions, described in xref
+   linkend=sql-syntax-calling-funcs, it is not allowed to mix
+   positional and named notation. In positional notation, all arguments
+   are specified in order. In named notation, each argument's name is
+   specified using literal:=/literal to separate it from the
+   argument expression.
+  /para
+ 
  para
   Examples:
  programlisting
  OPEN curs2;
  OPEN curs3(42);
+ OPEN curs3(key := 42);
  /programlisting
 /para
   /sect3
diff --git a/src/pl/plpgsql/src/gram.y b/src/pl/plpgsql/src/gram.y
new file mode 100644
index f8e956b..b9bf888
*** 

[HACKERS] patch : Allow toast tables to be moved to a different tablespace

2011-10-07 Thread Julien Tachoires

Hi,

Here's a patch to allow TOAST tables to be moved to a different 
tablespace. This item has been picked up from the TODO list.

The main idea is that a TOAST table can have its own tablespace.
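
For example, with the patch applied (the table and tablespace names are just
illustrative):

    ALTER TABLE big_documents SET TOAST TABLESPACE fast_ssd;  -- move only the TOAST table
    ALTER TABLE big_documents SET TABLE TABLESPACE archive;   -- move only the heap, not its TOAST table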

Regards,

--
JT
diff --git a/doc/src/sgml/ref/alter_table.sgml b/doc/src/sgml/ref/alter_table.sgml
index 00a477e..a2360f4 100644
--- a/doc/src/sgml/ref/alter_table.sgml
+++ b/doc/src/sgml/ref/alter_table.sgml
@@ -66,6 +66,8 @@ ALTER TABLE replaceable class=PARAMETERname/replaceable
 NOT OF
 OWNER TO replaceable class=PARAMETERnew_owner/replaceable
 SET TABLESPACE replaceable class=PARAMETERnew_tablespace/replaceable
+SET TABLE TABLESPACE replaceable class=PARAMETERnew_tablespace/replaceable
+SET TOAST TABLESPACE replaceable class=PARAMETERnew_tablespace/replaceable
 
 phraseand replaceable class=PARAMETERtable_constraint_using_index/replaceable is:/phrase
 
@@ -549,6 +551,30 @@ ALTER TABLE replaceable class=PARAMETERname/replaceable
  /para
 /listitem
/varlistentry
+   
+   varlistentry
+termliteralSET TABLE TABLESPACE/literal/term
+listitem
+ para
+  This form changes only table's tablespace (not associated TOAST table's tablespace) 
+	  to the specified tablespace and moves the data file(s) associated to the new tablespace.
+  See also
+  xref linkend=SQL-CREATETABLESPACE
+ /para
+/listitem
+   /varlistentry
+
+   varlistentry
+termliteralSET TOAST TABLESPACE/literal/term
+listitem
+ para
+  This form changes the TOAST table's tablespace to the specified tablespace and
+  moves the data file(s) associated with the TOAST table to the new tablespace.
+  See also
+  xref linkend=SQL-CREATETABLESPACE
+ /para
+/listitem
+   /varlistentry
 
varlistentry
 termliteralRENAME/literal/term
diff --git a/doc/src/sgml/storage.sgml b/doc/src/sgml/storage.sgml
index 0a133bb..d7d4235 100644
--- a/doc/src/sgml/storage.sgml
+++ b/doc/src/sgml/storage.sgml
@@ -422,6 +422,11 @@ pages). There was no run time difference compared to an un-acronymTOAST/ed
 comparison table, in which all the HTML pages were cut down to 7 kB to fit.
 /para
 
+para
+TOAST table can be moved to a different tablespace with
+commandALTER TABLE SET TOAST TABLESPACE/
+/para
+
 /sect1
 
 sect1 id=storage-fsm
diff --git a/src/backend/catalog/toasting.c b/src/backend/catalog/toasting.c
index a938c98..7ad965e 100644
--- a/src/backend/catalog/toasting.c
+++ b/src/backend/catalog/toasting.c
@@ -36,7 +36,7 @@ extern Oid	binary_upgrade_next_toast_pg_class_oid;
 Oid			binary_upgrade_next_toast_pg_type_oid = InvalidOid;
 
 static bool create_toast_table(Relation rel, Oid toastOid, Oid toastIndexOid,
-   Datum reloptions);
+   Datum reloptions, Oid toastTableSpace);
 static bool needs_toast_table(Relation rel);
 
 
@@ -53,19 +53,30 @@ static bool needs_toast_table(Relation rel);
  * to end with CommandCounterIncrement if it makes any changes.
  */
 void
-AlterTableCreateToastTable(Oid relOid, Datum reloptions)
+AlterTableCreateToastTable(Oid relOid, Datum reloptions, Oid toastTableSpace)
 {
 	Relation	rel;
-
+	Relation	toast_rel;
 	/*
 	 * Grab a DDL-exclusive lock on the target table, since we'll update the
 	 * pg_class tuple.	This is redundant for all present users.  Tuple
 	 * toasting behaves safely in the face of a concurrent TOAST table add.
 	 */
 	rel = heap_open(relOid, ShareUpdateExclusiveLock);
+	
+	/*
+	 * if NewToastTableSpace is null then try to find old TOAST table's tablespace
+	 */
+	if (!OidIsValid(toastTableSpace) && OidIsValid(rel->rd_rel->reltoastrelid))
+	{
+		toast_rel = relation_open(rel->rd_rel->reltoastrelid, NoLock);
+		if (OidIsValid(toast_rel->rd_rel->reltablespace))
+		toastTableSpace = toast_rel->rd_rel->reltablespace;
+		relation_close(toast_rel, NoLock);
+	}
 
 	/* create_toast_table does all the work */
-	(void) create_toast_table(rel, InvalidOid, InvalidOid, reloptions);
+	(void) create_toast_table(rel, InvalidOid, InvalidOid, reloptions, toastTableSpace);
 
 	heap_close(rel, NoLock);
 }
@@ -91,7 +102,7 @@ BootstrapToastTable(char *relName, Oid toastOid, Oid toastIndexOid)
 		relName)));
 
 	/* create_toast_table does all the work */
-	if (!create_toast_table(rel, toastOid, toastIndexOid, (Datum) 0))
+	if (!create_toast_table(rel, toastOid, toastIndexOid, (Datum) 0,InvalidOid))
		elog(ERROR, "\"%s\" does not require a toast table",
 			 relName);
 
@@ -107,7 +118,7 @@ BootstrapToastTable(char *relName, Oid toastOid, Oid toastIndexOid)
  * bootstrap they can be nonzero to specify hand-assigned OIDs
  */
 static bool
-create_toast_table(Relation rel, Oid toastOid, Oid toastIndexOid, Datum reloptions)
+create_toast_table(Relation rel, Oid toastOid, Oid toastIndexOid, Datum reloptions, Oid toastTableSpace)
 {
 	Oid			relOid = RelationGetRelid(rel);
 	HeapTuple	reltup;
@@ -207,10 +218,15 @@ create_toast_table(Relation rel, Oid toastOid, Oid toastIndexOid, Datum reloptio
 		toast_typid = binary_upgrade_next_toast_pg_type_oid;

Re: [HACKERS] alter table only ... drop constraint broken in HEAD

2011-10-07 Thread Alex Hunsaker
On Fri, Oct 7, 2011 at 07:53, Robert Haas robertmh...@gmail.com wrote:

 The only way we could
 trip up in that case is if there were two identically named
 constraints.  We'd have to visit the first tuple, update it, then
 visit the second tuple, recurse (thus incrementing the command
 counter), and then visit the updated version of the first tuple.  And
 that should be impossible, because we've got code to disallow multiple
 constraints on the same relation with the same name (though no unique
 index, for some reason).

Surely an oversight...

  Still, that's a long chain of reasoning, so
 I'm wondering if we can't come up with something that is more
 obviously correct.

 If we're confident that the inner loop here should never iterate more
 than once (i.e. the lack of a unique index is not an ominous sign)
 then maybe we should just rewrite this so that the inner loop scans
 until it finds a match and then terminates.  Then, outside the loop,
 we check whether a tuple was found and if so process it - but without
 ever going back to look for another one.  See attached.

I eyeballed it and it does indeed seem simpler. My only thought is
perhaps we should add that missing unique index on (conrelid,
conname). If we are not going to support duplicate names in the code,
we might as well enforce it. No?



Re: [HACKERS] alter table only ... drop constraint broken in HEAD

2011-10-07 Thread Robert Haas
On Fri, Oct 7, 2011 at 11:19 AM, Alex Hunsaker bada...@gmail.com wrote:
 On Fri, Oct 7, 2011 at 07:53, Robert Haas robertmh...@gmail.com wrote:

 The only way we could
 trip up in that case is if there were two identically named
 constraints.  We'd have to visit the first tuple, update it, then
 visit the second tuple, recurse (thus incrementing the command
 counter), and then visit the updated version of the first tuple.  And
 that should be impossible, because we've got code to disallow multiple
 constraints on the same relation with the same name (though no unique
 index, for some reason).

 Surely an oversight...

  Still, that's a long chain of reasoning, so
 I'm wondering if we can't come up with something that is more
 obviously correct.

 If we're confident that the inner loop here should never iterate more
 than once (i.e. the lack of a unique index is not an ominous sign)
 then maybe we should just rewrite this so that the inner loop scans
 until it finds a match and then terminates.  Then, outside the loop,
 we check whether a tuple was found and if so process it - but without
 ever going back to look for another one.  See attached.

 I eyeballed it and it does indeed seem simpler. My only thought is
 perhaps we should add that missing unique index on (conrelid,
 conname). If we are not going to support duplicate names in the code,
 we might as well enforce it. No?

Not sure.  There could be performance or other ramifications to that.
For now I'm more interested in fixing this particular bug than I am in
getting into a wider world of re-engineering...

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



[HACKERS] PQsendQuery/ PQgetResult Problem

2011-10-07 Thread Usama Dar
Hi Hackers,

I have a strange problem, or maybe it's not a strange problem but just
something wrong with my understanding. I have a SIP router which works with
PostgreSQL using libpq. Somewhere in the code it inserts a row in the
database, and when the insert is finished it invokes another module
which tries to read it. However, sometimes the read operation doesn't find
the row which was just inserted; this happens when a large number of
insert/read cycles like this are going on. The code which inserts the row
uses PQsendQuery to insert and waits for PQgetResult to return NULL before
the read module is invoked. The module which reads the row reads it over a
new connection. My question is: if PQgetResult returns NULL, is this a good
enough guarantee that a subsequent connection (even in the next millisecond)
should be able to read that row?
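
(A minimal sketch of the pattern described, not the actual router code -- the
query and names are made up:

    if (!PQsendQuery(conn, "INSERT INTO calls(id) VALUES (42)"))
        fprintf(stderr, "send failed: %s", PQerrorMessage(conn));

    PGresult *res;
    while ((res = PQgetResult(conn)) != NULL)
    {
        if (PQresultStatus(res) != PGRES_COMMAND_OK)
            fprintf(stderr, "insert failed: %s", PQresultErrorMessage(res));
        PQclear(res);
    }
    /* only after PQgetResult has returned NULL is the read module invoked,
     * over a separate connection */
)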

Thanks for making me wiser

/Usama


Re: [HACKERS] WIP: Join push-down for foreign tables

2011-10-07 Thread Kohei KaiGai
2011/10/4 12:08 Shigeru Hanada shigeru.han...@gmail.com:
 In my opinion, FdwRoutine should have an additional API to inform the core 
 its
 supported features; such as inner-join, outer-join, order-by,
 group-by, aggregate
 functions, insert, update, delete, etc... in the future version.

 Sure, so in my design PlanForeignJoin is optional.

 The lack of capability is informed from FDW with setting function
 pointer in FdwRoutine to NULL.  If PlanForeignJoin was NULL, core
 (planner) will give up to consider join push-down, and use one of local
 join methods such as NestLoop and MergeJoin for those foreign tables.
 As you say, other push-down-able features would also have optional
 handler function for each.

Sorry, I overlooked that it was already implemented in create_foreignjoin_path().

I additionally tried several cases using pgsql_fdw.
In some cases, it seems to me the planner doesn't push down the join tree
as you probably expected.
Please see the following example:

I defined three foreign tables: ft1(a int, b text), ft2(x int, y
text), ft3(s int, t text),
and lt1, lt2, lt3 are regular local tables.

postgres=# explain SELECT * FROM (ft1 join ft2 on a = x join ft3 on a = s);

  QUERY PLAN
---
 Foreign Scan on multiple foreign tables  (cost=0.00..0.00 rows=25000 width=108)
   Remote SQL: SELECT ft1.a, ft1.b, ft2.x, ft2.y, ft3.s, ft3.t FROM public.ft1 ft1, public.ft2 ft2, public.ft3 ft3 WHERE (ft1.a = ft3.s) AND (ft1.a = ft2.x)
(2 rows)

It works well.
(P.S. I noticed that pgsql_fdw has an incorrect Assert(). Please fix
pgsql_fdw.c:730.)

However, the existence of a local relation makes the planner confused.
It seems to me you would expect ft1 join ft2 on a = x to be pushed down here:

postgres=# explain SELECT * FROM (ft1 join ft2 on a = x join lt3 on a = s);
  QUERY PLAN
--
 Merge Join  (cost=205.08..758.83 rows=30750 width=108)
   Merge Cond: (ft1.a = lt3.s)
   ->  Merge Join  (cost=119.66..199.66 rows=5000 width=72)
         Merge Cond: (ft1.a = ft2.x)
         ->  Sort  (cost=59.83..62.33 rows=1000 width=36)
               Sort Key: ft1.a
               ->  Foreign Scan on ft1  (cost=10.00..10.00 rows=1000 width=36)
                     Remote SQL: DECLARE pgsql_fdw_cursor_2 SCROLL CURSOR FOR SELECT ft1.a, ft1.b FROM public.ft1 ft1
         ->  Sort  (cost=59.83..62.33 rows=1000 width=36)
               Sort Key: ft2.x
               ->  Foreign Scan on ft2  (cost=10.00..10.00 rows=1000 width=36)
                     Remote SQL: DECLARE pgsql_fdw_cursor_3 SCROLL CURSOR FOR SELECT ft2.x, ft2.y FROM public.ft2 ft2
   ->  Sort  (cost=85.43..88.50 rows=1230 width=36)
         Sort Key: lt3.s
         ->  Seq Scan on lt3  (cost=0.00..22.30 rows=1230 width=36)
(15 rows)

What is the reason why the foreign join is not pushed down?
Maybe the injected Sort plan prevents the planner from considering both sides
of the join as foreign scans owned by the same server? I'm still
investigating the reason.

I hope for comments from committers. :-(

Collateral evidence is below.
If we try to sort the result by a key that is not used in the join, both
foreign scans get pushed down.

postgres=# explain SELECT * FROM (ft1 join ft2 on a = x) order by y;
   QUERY PLAN
-
 Sort  (cost=307.19..319.69 rows=5000 width=72)
   Sort Key: ft2.y
   ->  Foreign Scan on multiple foreign tables  (cost=0.00..0.00 rows=5000 width=72)
         Remote SQL: SELECT ft1.a, ft1.b, ft2.x, ft2.y FROM public.ft1 ft1, public.ft2 ft2 WHERE (ft1.a = ft2.x)
(4 rows)

However, when I tried to sort by a key that is used in the join, the
foreign scans were not pushed down.

postgres=# explain SELECT * FROM (ft1 join ft2 on a = x) order by a;
   QUERY PLAN

 Merge Join  (cost=119.66..199.66 rows=5000 width=72)
   Merge Cond: (ft1.a = ft2.x)
   ->  Sort  (cost=59.83..62.33 rows=1000 width=36)
         Sort Key: ft1.a
         ->  Foreign Scan on ft1  (cost=10.00..10.00 rows=1000 width=36)
               Remote SQL: DECLARE pgsql_fdw_cursor_6 SCROLL CURSOR FOR SELECT ft1.a, ft1.b FROM public.ft1 ft1
   ->  Sort  (cost=59.83..62.33 rows=1000 width=36)
         Sort Key: ft2.x
         ->  Foreign Scan on ft2  (cost=10.00..10.00 rows=1000 width=36)
               Remote SQL: DECLARE pgsql_fdw_cursor_7 SCROLL CURSOR FOR SELECT ft2.x, ft2.y FROM public.ft2 ft2
(10 rows)

Thanks,
-- 
KaiGai Kohei kai...@kaigai.gr.jp


Re: [HACKERS] [v9.2] make_greater_string() does not return a string in some cases

2011-10-07 Thread Robert Haas
On Fri, Oct 7, 2011 at 12:22 AM, Kyotaro HORIGUCHI
horiguchi.kyot...@oss.ntt.co.jp wrote:
 Thank you for reviewing.

 The new version of this patch is attached to this message.

OK, I think this is reasonably close to being committable now.  There
are a few remaining style and grammar mistakes but I can fix those up
before committing.  One thing I still think it would be useful to add,
though, is some comments to pg_utf8_increment() and
pg_eucjp_increment() describing the algorithm being used.  Can you
take a crack at that?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] PQsendQuery/ PQgetResult Problem

2011-10-07 Thread Merlin Moncure
On Fri, Oct 7, 2011 at 11:17 AM, Usama Dar munir.us...@gmail.com wrote:
 Hi Hackers,
 I have a strange problem, or maybe it's not a strange problem but just
 something wrong with my understanding. I have a SIP router which works with
 PostgreSQL using libpq. Somewhere in the code it inserts a row in the
 database, and when the insert is finished it invokes another module
 which tries to read it. However, sometimes the read operation doesn't find
 the row which was just inserted; this happens when a large number of
 insert/read cycles like this are going on. The code which inserts the row
 uses PQsendQuery to insert and waits for PQgetResult to return NULL before
 the read module is invoked. The module which reads the row reads it over a
 new connection. My question is: if PQgetResult returns NULL, is this a good
 enough guarantee that a subsequent connection (even in the next millisecond)
 should be able to read that row?
 Thanks for making me wiser
 /Usama

yeah -- the iron clad rule is that if you get the result and are able to
see that your query has been successfully transacted, then so should
everyone else.  Are you sure your query didn't error out?  Are you
also sure you didn't open a transaction and not close it?  Your problem
is most likely on your end.

merlin



Re: [HACKERS] alter table only ... drop constraint broken in HEAD

2011-10-07 Thread Alex Hunsaker
On Fri, Oct 7, 2011 at 09:50, Robert Haas robertmh...@gmail.com wrote:
 On Fri, Oct 7, 2011 at 11:19 AM, Alex Hunsaker bada...@gmail.com wrote:
 My only thought is
 perhaps we should add that missing unique index on (conrelid,
 conname). If we are not going to support duplicate names in the code,
 we might as well enforce it. No?

 Not sure.  There could be performance or other ramifications to that.
 For now I'm more interested in fixing this particular bug than I am in
 getting into a wider world of re-engineering...

Yeah, looking at the code a bit closer we would also want to fix
various places to take advantage of the index. Seems like it could be
a big win when you have thousands of constraints (albeit only in the
add/drop case).

If I find the time maybe I'll submit something along these lines for
the next commit fest.

Thanks!



Re: [HACKERS] Re: [COMMITTERS] pgsql: Force strings passed to and from plperl to be in UTF8 encoding.

2011-10-07 Thread Alex Hunsaker
On Wed, Oct 5, 2011 at 20:36, Robert Haas robertmh...@gmail.com wrote:
 On Wed, Oct 5, 2011 at 5:03 PM, Alex Hunsaker bada...@gmail.com wrote:
 On Wed, Oct 5, 2011 at 08:18, Robert Haas robertmh...@gmail.com wrote:
 On Wed, Oct 5, 2011 at 3:58 AM, Amit Khandekar
 amit.khande...@enterprisedb.com wrote:
 I have no more issues with the patch.
 Thanks!

 I think this patch needs to be added to the open CommitFest, with
 links to the reviews, and marked Ready for Committer.

 The open commitfest? Even if it's an important bug fix that should be
 backpatched?

 Considering that the issue appears to have been ignored from
 mid-February until early October, I don't see why it should now get to
 jump to the head of the queue.  Other people may have different
 opinions, of course.

Added. :-)



Re: [HACKERS] Why does WAL_DEBUG macro need to be defined by default?

2011-10-07 Thread Kevin Grittner
Robert Haas robertmh...@gmail.com wrote:
 
 The funny thing is that I've been thinking all of these months
 about how convenient it is that we defined WAL_DEBUG in debug
 builds
 
IMO, --enable-debug should not do anything but include debugging
symbols.  The ability to get a useful stack trace from a production
crash, without compromising performance, is just too important by
itself to consider conditioning any other behavior on it.
 
-Kevin



Re: [HACKERS] Why does WAL_DEBUG macro need to be defined by default?

2011-10-07 Thread Robert Haas
On Fri, Oct 7, 2011 at 1:03 PM, Kevin Grittner
kevin.gritt...@wicourts.gov wrote:
 Robert Haas robertmh...@gmail.com wrote:
 The funny thing is that I've been thinking all of these months
 about how convenient it is that we defined WAL_DEBUG in debug
 builds

 IMO, --enable-debug should not do anything but include debugging
 symbols.  The ability to get a useful stack trace from a production
 crash, without compromising performance, is just too important by
 itself to consider conditioning any other behavior on it.

So, should I go revert this change in head and 9.1, or does anyone
else want to argue for Heikki's position that we should just leave it
on, on the theory that it's too cheap to matter?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



[HACKERS] [PATCH] Fix little typo in docs in func.sgml

2011-10-07 Thread Dickson S. Guedes
Hello all,

This is a little patch to fix a typo in the docs: in the length function
entry there should be a space between string and bytea.

Best regards,
-- 
Dickson S. Guedes
mail/xmpp: gue...@guedesoft.net - skype: guediz
http://guedesoft.net - http://www.postgresql.org.br
diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
new file mode 100644
index bedd8ba..45b9956
*** a/doc/src/sgml/func.sgml
--- b/doc/src/sgml/func.sgml
***************
*** 1587,1593 ****
      </row>
  
      <row>
!      <entry><literal><function>length(<parameter>string</parameter><type>bytea</type>,
       <parameter>encoding</parameter> <type>name</type> )</function></literal></entry>
       <entry><type>int</type></entry>
       <entry>
--- 1587,1593 ----
      </row>
  
      <row>
!      <entry><literal><function>length(<parameter>string</parameter> <type>bytea</type>,
       <parameter>encoding</parameter> <type>name</type> )</function></literal></entry>
       <entry><type>int</type></entry>
       <entry>



Re: [HACKERS] ToDo: allow to get a number of processed rows by COPY statement

2011-10-07 Thread Kevin Grittner
Pavel Stehule pavel.steh...@gmail.com wrote:
 
 It is not possible to get the number of processed rows when COPY
 is executed via SPI. The client can use the command tag, but SPI
 doesn't expose one.
 
 I propose a small change to ProcessUtility to return the number of
 processed rows.
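
As a minimal sketch of the limitation being described (assuming a
hypothetical table t and an existing CSV file):

  DO $$
  DECLARE
    n bigint;
  BEGIN
    -- COPY runs through SPI here; the row count is not passed back,
    -- so ROW_COUNT reports 0 rather than the number of rows copied.
    EXECUTE 'COPY t FROM ''/tmp/t.csv'' WITH (FORMAT csv)';
    GET DIAGNOSTICS n = ROW_COUNT;
    RAISE NOTICE 'rows reported: %', n;
  END;
  $$;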
 
Please add this to the open CommitFest:
 
https://commitfest.postgresql.org/action/commitfest_view/open
 
-Kevin



Re: [HACKERS] pg_upgrade - add config directory setting

2011-10-07 Thread Bruce Momjian
Bruce Momjian wrote:
 Bruce Momjian wrote:
  I will now work on pg_upgrade to also use the new flag to find the data
  directory from a config-only install.  However, this is only available
  in PG 9.2, and it will only be in PG 9.3 that you can hope to use this
  feature (if the old cluster is PG 9.2 or later).  I am afraid the symlink hack will
  have to be used for several more years, and if you are supporting
  upgrades from pre-9.2, perhaps forever.
 
 The attached patch uses postmaster -C data_directory to allow
 config-only upgrades.  It will allow a normal 9.1 cluster to be upgraded
 to a 9.2 config-only cluster.

Applied.

-- 
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +



Re: [HACKERS] index-only scans

2011-10-07 Thread Tom Lane
Robert Haas robertmh...@gmail.com writes:
 Please find attached a patch implementing a basic version of
 index-only scans.

I'm making some progress with this, but I notice what seems like a
missing feature: there needs to be a way to turn it off.  Otherwise
performance comparisons will be difficult to impossible.

The most obvious solution is a planner control GUC, perhaps
enable_indexonlyscan.  Anyone object, or want to bikeshed the name?
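
A sketch of how the knob would presumably get used for comparisons
(assuming a pgbench-style table with an index on aid; the GUC name is
just the one proposed above):

  SET enable_indexonlyscan = off;
  EXPLAIN SELECT aid FROM pgbench_accounts WHERE aid < 1000;  -- ordinary index scan
  SET enable_indexonlyscan = on;
  EXPLAIN SELECT aid FROM pgbench_accounts WHERE aid < 1000;  -- index-only scan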

regards, tom lane



Re: [HACKERS] index-only scans

2011-10-07 Thread Joshua D. Drake


On 10/07/2011 11:40 AM, Tom Lane wrote:

Robert Haas robertmh...@gmail.com writes:

Please find attached a patch implementing a basic version of
index-only scans.


I'm making some progress with this, but I notice what seems like a
missing feature: there needs to be a way to turn it off.  Otherwise
performance comparisons will be difficult to impossible.

The most obvious solution is a planner control GUC, perhaps
enable_indexonlyscan.  Anyone object, or want to bikeshed the name?


enable_onlyindexscan

I'm kidding.

+1 on Tom's proposed name.



regards, tom lane




--
Command Prompt, Inc. - http://www.commandprompt.com/
PostgreSQL Support, Training, Professional Services and Development
The PostgreSQL Conference - http://www.postgresqlconference.org/
@cmdpromptinc - @postgresconf - 509-416-6579



Re: [HACKERS] index-only scans

2011-10-07 Thread Robert Haas
On Fri, Oct 7, 2011 at 2:40 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Robert Haas robertmh...@gmail.com writes:
 Please find attached a patch implementing a basic version of
 index-only scans.

 I'm making some progress with this, but I notice what seems like a
 missing feature: there needs to be a way to turn it off.  Otherwise
 performance comparisons will be difficult to impossible.

 The most obvious solution is a planner control GUC, perhaps
 enable_indexonlyscan.  Anyone object, or want to bikeshed the name?

I was expecting you to NOT want that, or I would have put it in to
begin with...  so go for it.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] index-only scans

2011-10-07 Thread Tom Lane
Robert Haas robertmh...@gmail.com writes:
 On Fri, Oct 7, 2011 at 2:40 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 I'm making some progress with this, but I notice what seems like a
 missing feature: there needs to be a way to turn it off.  Otherwise
 performance comparisons will be difficult to impossible.
 
 The most obvious solution is a planner control GUC, perhaps
 enable_indexonlyscan.  Anyone object, or want to bikeshed the name?

 I was expecting you to NOT want that, or I would have put it in to
 begin with...  so go for it.

It seems unlikely to have any use except for testing, but I think we
need it for that.

regards, tom lane



Re: [HACKERS] [OT?] Time-zone database down [was: Re: timezone buglet?]

2011-10-07 Thread Bruce Momjian
Andrea Suisani wrote:
 On 10/05/2011 07:37 AM, Tom Lane wrote:
  davegda...@sonic.net  writes:
  Postgresql 9.0.4 has the timezone:
 America/Blanc-Sablon
  However other sources seem to spell this with an underscore instead of 
  dash:
 America/Blanc_Sablon
 
  I don't know what other sources you're consulting, but Blanc-Sablon
  is the way it appears in the Olson timezone database, and that's what
  we follow.
 
 Speaking of Olson tz database, I've just stumbled across this post
 and I thought it would be worthy to report it here:
 
 http://blog.joda.org/2011/10/today-time-zone-database-was-closed.html

I suppose there is nothing stopping them from attacking people who
distribute the database, like Postgres, Red Hat, etc.

-- 
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +



Re: [HACKERS] Why does WAL_DEBUG macro need to be defined by default?

2011-10-07 Thread Bruce Momjian
Robert Haas wrote:
 On Fri, Oct 7, 2011 at 1:03 PM, Kevin Grittner
 kevin.gritt...@wicourts.gov wrote:
  Robert Haas robertmh...@gmail.com wrote:
  The funny thing is that I've been thinking all of these months
  about how convenient it is that we defined WAL_DEBUG in debug
  builds
 
  IMO, --enable-debug should not do anything but include debugging
  symbols.  The ability to get a useful stack trace from a production
  crash, without compromising performance, is just too important by
  itself to consider conditioning any other behavior on it.
 
 So, should I go revert this change in head and 9.1, or does anyone
 else want to argue for Heikki's position that we should just leave it
 on, on the theory that it's too cheap to matter?

I would just fix it in head.

-- 
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +



Re: [HACKERS] [OT?] Time-zone database down [was: Re: timezone buglet?]

2011-10-07 Thread Tom Lane
Bruce Momjian br...@momjian.us writes:
 Andrea Suisani wrote:
 Speaking of Olson tz database, I've just stumbled across this post
 and I thought it would be worthy to report it here:
 http://blog.joda.org/2011/10/today-time-zone-database-was-closed.html

 I suppose there is nothing stopping them from attacking people who
 distribute the database, like Postgres, Red Hat, etc.

It seems pretty baseless to me: you can't copyright a collection of
facts.  I think we should do nothing pending a court decision.

regards, tom lane



Re: [HACKERS] [OT?] Time-zone database down [was: Re: timezone buglet?]

2011-10-07 Thread Bruce Momjian
Tom Lane wrote:
 Bruce Momjian br...@momjian.us writes:
  Andrea Suisani wrote:
  Speaking of Olson tz database, I've just stumbled across this post
  and I thought it would be worthy to report it here:
  http://blog.joda.org/2011/10/today-time-zone-database-was-closed.html
 
  I suppose there is nothing stopping them from attacking people who
  distribute the database, like Postgres, Red Hat, etc.
 
 It seems pretty baseless to me: you can't copyright a collection of
 facts.  I think we should do nothing pending a court decision.

Agreed.  I am just pointing out the possible exposure.

-- 
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +



Re: [HACKERS] [OT?] Time-zone database down [was: Re: timezone buglet?]

2011-10-07 Thread Peter Geoghegan
On 7 October 2011 21:27, Bruce Momjian br...@momjian.us wrote:
 Tom Lane wrote:
 It seems pretty baseless to me: you can't copyright a collection of
 facts.  I think we should do nothing pending a court decision.

 Agreed.  I am just pointing out the possible exposure.

The one interesting case that I can recall where this was tested was
this (lifted from Wikipedia):

In October 1984, Fred L. Worth, author of The Trivia Encyclopedia,
Super Trivia, and Super Trivia II, filed a $300 million lawsuit
against the distributors of Trivial Pursuit. He claimed that more than
a quarter of the questions in the game's Genus Edition had been taken
from his books, even to the point of reproducing typographical errors
and deliberately placed misinformation. One of the questions in
Trivial Pursuit was "What was Columbo's first name?" with the answer
"Philip". That information had been fabricated to catch anyone who
might try to violate his copyright.[5]
The inventors of Trivial Pursuit acknowledged that Worth's books were
among their sources, but argued that this was not improper and that
facts are not protected by copyright. The district court judge agreed,
ruling in favor of the Trivial Pursuit inventors. The decision was
appealed, and in September 1987 the United States Court of Appeals for
the Ninth Circuit upheld the ruling.[6] Worth asked the Supreme Court
of the United States to review the case, but the Court declined,
denying certiorari in March 1988.[7]

IANAL, but this seems pretty conclusive to me...

-- 
Peter Geoghegan       http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training and Services



Re: [HACKERS] [OT?] Time-zone database down [was: Re: timezone buglet?]

2011-10-07 Thread Thom Brown
On 7 October 2011 21:17, Tom Lane t...@sss.pgh.pa.us wrote:
 Bruce Momjian br...@momjian.us writes:
 Andrea Suisani wrote:
 Speaking of Olson tz database, I've just stumbled across this post
 and I thought it would be worthy to report it here:
 http://blog.joda.org/2011/10/today-time-zone-database-was-closed.html

 I suppose there is nothing stopping them from attacking people who
 distribute the database, like Postgres, Red Hat, etc.

 It seems pretty baseless to me: you can't copyright a collection of
 facts.  I think we should do nothing pending a court decision.

It's ironic that they're attacking those using these facts when their
business is selling fiction poorly disguised as fact.

-- 
Thom Brown
Twitter: @darkixion
IRC (freenode): dark_ixion
Registered Linux user: #516935

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] [OT?] Time-zone database down [was: Re: timezone buglet?]

2011-10-07 Thread Merlin Moncure
On Fri, Oct 7, 2011 at 3:33 PM, Peter Geoghegan pe...@2ndquadrant.com wrote:
 On 7 October 2011 21:27, Bruce Momjian br...@momjian.us wrote:
 Tom Lane wrote:
 It seems pretty baseless to me: you can't copyright a collection of
 facts.  I think we should do nothing pending a court decision.

 Agreed.  I am just pointing out the possible exposure.

 The one interesting case that I can recall were this was tested was
 this (lifted from Wikipedia):

 In October 1984, Fred L. Worth, author of The Trivia Encyclopedia,
 Super Trivia, and Super Trivia II, filed a $300 million lawsuit
 against the distributors of Trivial Pursuit. He claimed that more than
 a quarter of the questions in the game's Genus Edition had been taken
 from his books, even to the point of reproducing typographical errors
 and deliberately placed misinformation. One of the questions in
 Trivial Pursuit was What was Columbo's first name? with the answer
 Philip. That information had been fabricated to catch anyone who
 might try to violate his copyright.[5]
 The inventors of Trivial Pursuit acknowledged that Worth's books were
 among their sources, but argued that this was not improper and that
 facts are not protected by copyright. The district court judge agreed,
 ruling in favor of the Trivial Pursuit inventors. The decision was
 appealed, and in September 1987 the United States Court of Appeals for
 the Ninth Circuit upheld the ruling.[6] Worth asked the Supreme Court
 of the United States to review the case, but the Court declined,
 denying certiorari in March 1988.[7]

 IANAL, but this seems pretty conclusive to me...

Facts are not subject to copyright but compilations can be.  However,
the arrangement and presentation of the compilation has to be
sufficient to merit protection.  For example, the SCOTUS denied
copyright protection to phone books, which I think is entirely
relevant to this issue. (But IANAL.)

merlin



Re: [HACKERS] [OT?] Time-zone database down [was: Re: timezone buglet?]

2011-10-07 Thread Mark Mielke
My original read of the problem determined (for me personally) that the 
only way one could be in violation of copyright was if the data was 
incorrect (i.e. not factual). It presented an interesting contradiction. 
The only way they could sue is by agreeing that their data is faulty and 
should not be trusted. :-)


The case Merlin refers to below seemed to rule that even faulty 
information is not a concern.


Personally, I think the best choice is to officially state a position on
the matter and agree to remove from PostgreSQL any copyrighted material
that has been used without the permission of the copyright owner, if or
when this is ever demonstrated in court. Until that time, the damage to
the community from responding to this unproven legal threat would be
unreasonable to bear.


On 10/07/2011 05:10 PM, Merlin Moncure wrote:

The one interesting case that I can recall were this was tested was
this (lifted from Wikipedia):

In October 1984, Fred L. Worth, author of The Trivia Encyclopedia,
Super Trivia, and Super Trivia II, filed a $300 million lawsuit
against the distributors of Trivial Pursuit. He claimed that more than
a quarter of the questions in the game's Genus Edition had been taken
from his books, even to the point of reproducing typographical errors
and deliberately placed misinformation. One of the questions in
Trivial Pursuit was What was Columbo's first name? with the answer
Philip. That information had been fabricated to catch anyone who
might try to violate his copyright.[5]
The inventors of Trivial Pursuit acknowledged that Worth's books were
among their sources, but argued that this was not improper and that
facts are not protected by copyright. The district court judge agreed,
ruling in favor of the Trivial Pursuit inventors. The decision was
appealed, and in September 1987 the United States Court of Appeals for
the Ninth Circuit upheld the ruling.[6] Worth asked the Supreme Court
of the United States to review the case, but the Court declined,
denying certiorari in March 1988.[7]

IANAL, but this seems pretty conclusive to me...
Facts are not subject to copyright but compilations can be.  However,
the arrangement and presentation of the compilation has to be
sufficient to have merit protection.  For example, the SCOTUS denied
copywrite protection to phone books, which I think is entirely
relevant to this issue. (BUT INAL).



--
Mark Mielke m...@mielke.cc




Re: [HACKERS] [OT?] Time-zone database down [was: Re: timezone buglet?]

2011-10-07 Thread Merlin Moncure
On Fri, Oct 7, 2011 at 4:20 PM, Mark Mielke m...@mark.mielke.cc wrote:
 My original read of the problem determined (for me personally) that the only
 way one could be in violation of copyright was if the data was incorrect
 (i.e. not factual). It presented an interesting contradiction. The only way
 they could sue is by agreeing that their data is faulty and should not be
 trusted. :-)

 The case Merlin refers to below seemed to rule that even faulty information
 is not a concern.

specifically,
http://en.wikipedia.org/wiki/Feist_v._Rural

merlin



Re: [HACKERS] [v9.2] Fix Leaky View Problem

2011-10-07 Thread Noah Misch
On Sun, Oct 02, 2011 at 07:16:33PM +0200, Kohei KaiGai wrote:
 My preference is still the WITH (security_barrier=...) syntax.
 
 The arguable point was the behavior when a view is replaced without an
 explicit WITH clause: should we treat that as specifying the default
 value, or as meaning the existing option is preserved?
 If we take the position that an object's security-related attributes
 (such as ownership, ACL, label, ...) should be preserved, then the
 security barrier should be preserved as well.
 On the other hand, we cannot know right now what options will be added
 in the future, so we may need to sort out which options are
 security-related somewhere other than DefineVirtualRelation().
 
 However, do we need to limit the options that are preserved to the
 security-related ones?  This is the first case in which an object with
 arbitrary options can be replaced.  I see no problem if we decide that
 an object's options are preserved unless an explicit new value is
 provided.

Currently, you can predict how CREATE OR REPLACE affects a given object
characteristic with a simple rule: if the CREATE OR REPLACE statement can
specify a characteristic, we don't preserve its existing value.  Otherwise, we
do preserve it.  Let's not depart from that rule.

Applying that rule to the proposed syntax, it shall not preserve the existing
security_barrier value.  I think that is acceptable.  If it's not acceptable, we
need a different syntax -- perhaps CREATE SECURITY VIEW.
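
To spell that out with the proposed syntax (a sketch only, assuming the
WITH (security_barrier=...) reloption under discussion and a
hypothetical table t with a boolean column named visible):

  CREATE VIEW v WITH (security_barrier = true) AS
    SELECT * FROM t WHERE visible;
  -- Under the rule above, replacing the view without repeating the
  -- option resets it to the default instead of preserving it:
  CREATE OR REPLACE VIEW v AS
    SELECT * FROM t WHERE visible;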

 Any other ideas?

Suppose we permitted pushdown of unsafe predicates when the user can read the
involved columns anyway, a generalization of the idea from the first paragraph
of [1].  Would that, along with LEAKPROOF, provide enough strategies for shoring
up performance to justify removing unsafe views entirely?

nm

[1] 
http://archives.postgresql.org/message-id/aanlktil1n2qwdd7izlgbvt2ifl29rwfvkssel9b9r...@mail.gmail.com



Re: [HACKERS] index-only scans

2011-10-07 Thread Tom Lane
Robert Haas robertmh...@gmail.com writes:
 Please find attached a patch implementing a basic version of
 index-only scans.  This patch is the work of my colleague Ibrar Ahmed
 and myself, and also incorporates some code from previous patches
 posted by Heikki Linnakanagas.

I've committed this after some rather substantial editorialization.
There's still a lot left to do of course, but it seems to need
performance testing next, and that'll be easier if the code is in HEAD.
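
For anyone who wants to poke at it, the basic shape is (a sketch,
assuming a scratch database):

  CREATE TABLE tab AS
    SELECT g AS id, g % 10 AS val FROM generate_series(1, 100000) g;
  CREATE INDEX tab_id_idx ON tab (id);
  VACUUM ANALYZE tab;   -- VACUUM also sets the visibility map bits the scan relies on
  EXPLAIN SELECT id FROM tab WHERE id < 100;
  -- expected plan node: Index Only Scan using tab_id_idx on tab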

 1. The way that nodeIndexscan.c builds up the faux heap tuple is
 perhaps susceptible to improvement.  I thought about building a
 virtual tuple, but then what do I do with an OID column, if I have
 one?  Or maybe this should be done some other way altogether.

I switched it to use a virtual tuple for now, and just not attempt to
use index-only scans if a system column is required.  We're likely to
want to rethink this anyway, because as currently constituted the code
can't do anything with an expression index, and avoiding recalculation
of an expensive function could be a nice win.  But the approach of
just building a faux heap tuple fundamentally doesn't work for that.
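
E.g. (sketch, hypothetical table):

  CREATE INDEX people_lower_idx ON people (lower(name));
  -- lower(name) is already stored in the index, but the current code
  -- cannot return it from an index-only scan, so this query still
  -- recomputes the expression and visits the heap:
  SELECT lower(name) FROM people WHERE lower(name) = 'alice';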

 2. Suppose we scan one tuple on a not-all-visible page followed by 99
 tuples on all-visible pages.  The code as written will hold the pin on
 the first heap page for the entire scan.  As soon as we hit the end of
 the scan or another tuple where we have to actually visit the page,
 the old pin will be released, but until then we hold onto it.

I did not do anything about this issue --- ISTM it needs performance
testing.

 3. The code in create_index_path() builds up a bitmapset of heap
 attributes that get used for any purpose anywhere in the query, and
 hangs it on the RelOptInfo so it doesn't need to be rebuilt for every
 index under consideration.  However, if it were somehow possible to
 have the rel involved without using any attributes at all, we'd
 rebuild the cache over and over, since it would never become non-NULL.

I dealt with this by the expedient of getting rid of the caching ;-).
It's not clear to me that it was worth the trouble, and in any case
it's fundamentally wrong to suppose that every index faces the same
set of attributes it must supply.  It should not need to supply columns
that are only needed in indexquals or index predicate conditions.
I'm not sure how to deal with those refinements cheaply enough, but
the cache isn't helping.

 4. There are a couple of cases that use index-only scans even though
 the EXPLAIN output sort of makes it look like they shouldn't.  For
 example, in the above queries, an index-only scan is chosen even
 though the query does SELECT * from the table being scanned.  Even
 though the EXPLAIN (VERBOSE) output makes it look otherwise, it seems
 that the target list of an EXISTS query is in fact discarded, e.g.:

The reason it looks that way is that we're choosing to use a physical
result tuple to avoid an ExecProject step at runtime.  There's nothing
wrong with the logic, it's just that EXPLAIN shows something that might
mislead people.

 5. We haven't made any planner changes at all, not even for cost
 estimation.  It is not clear to me what the right way to do cost
 estimation here is.

Yeah, me either.  For the moment I put in a hard-wired estimate that
only 90% of the heap pages would actually get fetched.  This is
conservative and only meant to ensure that the planner picks an
index-only-capable plan over an indexscan with a non-covering index,
all else being equal.  We'll need to do performance testing before
we can refine that.

regards, tom lane



Re: [HACKERS] Why does WAL_DEBUG macro need to be defined by default?

2011-10-07 Thread Robert Haas
On Fri, Oct 7, 2011 at 4:06 PM, Bruce Momjian br...@momjian.us wrote:
 Robert Haas wrote:
 On Fri, Oct 7, 2011 at 1:03 PM, Kevin Grittner
 kevin.gritt...@wicourts.gov wrote:
  Robert Haas robertmh...@gmail.com wrote:
  The funny thing is that I've been thinking all of these months
  about how convenient it is that we defined WAL_DEBUG in debug
  builds
 
  IMO, --enable-debug should not do anything but include debugging
   symbols.  The ability to get a useful stack trace from a production
  crash, without compromising performance, is just too important by
  itself to consider conditioning any other behavior on it.

 So, should I go revert this change in head and 9.1, or does anyone
 else want to argue for Heikki's position that we should just leave it
 on, on the theory that it's too cheap to matter?

 I would just fix it in head.

That just seems weird.  Either it's cheap enough not to matter (in
which case there's no reason to revert that change at all) or it's
expensive enough to matter (in which case presumably we don't want to
leave it on in 9.1 for the 5 years or so it remains a supported
release).

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] index-only scans

2011-10-07 Thread Robert Haas
On Fri, Oct 7, 2011 at 8:14 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 1. The way that nodeIndexscan.c builds up the faux heap tuple is
 perhaps susceptible to improvement.  I thought about building a
 virtual tuple, but then what do I do with an OID column, if I have
 one?  Or maybe this should be done some other way altogether.

 I switched it to use a virtual tuple for now, and just not attempt to
 use index-only scans if a system column is required.  We're likely to
 want to rethink this anyway, because as currently constituted the code
 can't do anything with an expression index, and avoiding recalculation
 of an expensive function could be a nice win.  But the approach of
 just building a faux heap tuple fundamentally doesn't work for that.

Figuring out how to fix that problem likely requires more knowledge of
the executor than I have got.

 2. Suppose we scan one tuple on a not-all-visible page followed by 99
 tuples on all-visible pages.  The code as written will hold the pin on
 the first heap page for the entire scan.  As soon as we hit the end of
 the scan or another tuple where we have to actually visit the page,
 the old pin will be released, but until then we hold onto it.

 I did not do anything about this issue --- ISTM it needs performance
 testing.

I'm actually less worried about any performance problem than I am
about the possibility of holding up VACUUM.  That can happen the old
way, too, but now the pin could stay on the same page for quite a
while even when the scan is advancing.

I think we maybe ought to think seriously about solving the problem at
the other end, though - either make VACUUM skip pages that it can't
get a cleanup lock on without blocking (except in anti-wraparound
mode) or have it just do the amount of work that can be done with an
exclusive lock (i.e. prune but not defragment, which would work even
in anti-wraparound mode).  That would solve the problems of (1)
undetected VACUUM deadlock vs. a buffer pin, (2) indefinite VACUUM
stall due to a suspended query, and (3) this issue.

 3. The code in create_index_path() builds up a bitmapset of heap
 attributes that get used for any purpose anywhere in the query, and
 hangs it on the RelOptInfo so it doesn't need to be rebuilt for every
 index under consideration.  However, if it were somehow possible to
 have the rel involved without using any attributes at all, we'd
 rebuild the cache over and over, since it would never become non-NULL.

 I dealt with this by the expedient of getting rid of the caching ;-).
 It's not clear to me that it was worth the trouble, and in any case
 it's fundamentally wrong to suppose that every index faces the same
 set of attributes it must supply.  It should not need to supply columns
 that are only needed in indexquals or index predicate conditions.
 I'm not sure how to deal with those refinements cheaply enough, but
 the cache isn't helping.

Oh, hmm.

 4. There are a couple of cases that use index-only scans even though
 the EXPLAIN output sort of makes it look like they shouldn't.  For
 example, in the above queries, an index-only scan is chosen even
 though the query does SELECT * from the table being scanned.  Even
 though the EXPLAIN (VERBOSE) output makes it look otherwise, it seems
 that the target list of an EXISTS query is in fact discarded, e.g.:

 The reason it looks that way is that we're choosing to use a physical
 result tuple to avoid an ExecProject step at runtime.  There's nothing
 wrong with the logic, it's just that EXPLAIN shows something that might
 mislead people.

I wonder if we oughta do something about that.

I was also thinking we should probably make EXPLAIN ANALYZE display
the number of heap fetches, so that you can see how index-only your
index-only scan actually was.
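
Something like the following, say (entirely hypothetical output, and
the label is just a straw man; assume a covering index tab_id_idx on a
table tab):

  EXPLAIN (ANALYZE, COSTS off) SELECT id FROM tab WHERE id < 100;
  --                    QUERY PLAN
  -- -----------------------------------------------------------------
  --  Index Only Scan using tab_id_idx on tab (actual time=... rows=99 loops=1)
  --    Index Cond: (id < 100)
  --    Heap Fetches: 0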

 5. We haven't made any planner changes at all, not even for cost
 estimation.  It is not clear to me what the right way to do cost
 estimation here is.

 Yeah, me either.  For the moment I put in a hard-wired estimate that
 only 90% of the heap pages would actually get fetched.  This is
 conservative and only meant to ensure that the planner picks an
 index-only-capable plan over an indexscan with a non-covering index,
 all else being equal.  We'll need to do performance testing before
 we can refine that.

Yep.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] Why does WAL_DEBUG macro need to be defined by default?

2011-10-07 Thread Bruce Momjian
Robert Haas wrote:
 On Fri, Oct 7, 2011 at 4:06 PM, Bruce Momjian br...@momjian.us wrote:
  Robert Haas wrote:
  On Fri, Oct 7, 2011 at 1:03 PM, Kevin Grittner
  kevin.gritt...@wicourts.gov wrote:
   Robert Haas robertmh...@gmail.com wrote:
   The funny thing is that I've been thinking all of these months
   about how convenient it is that we defined WAL_DEBUG in debug
   builds
  
   IMO, --enable-debug should not do anything but include debugging
   symbols.  The ability to get a useful stack trace from a production
   crash, without compromising performance, is just too important by
   itself to consider conditioning any other behavior on it.
 
  So, should I go revert this change in head and 9.1, or does anyone
  else want to argue for Heikki's position that we should just leave it
  on, on the theory that it's too cheap to matter?
 
  I would just fix it in head.
 
 That just seems weird.  Either it's cheap enough not to matter (in
 which case there's no reason to revert that change at all) or it's
 expensive enough to matter (in which case presumably we don't want to
 leave it on in 9.1 for the 5 years or so it remains a supported
 release).

I am concerned about changing behavior on people in a minor release ---
it is not about risk in this case.

-- 
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +



Re: [HACKERS] Why does WAL_DEBUG macro need to be defined by default?

2011-10-07 Thread Robert Haas
On Fri, Oct 7, 2011 at 9:59 PM, Bruce Momjian br...@momjian.us wrote:
 Robert Haas wrote:
 On Fri, Oct 7, 2011 at 4:06 PM, Bruce Momjian br...@momjian.us wrote:
  Robert Haas wrote:
  On Fri, Oct 7, 2011 at 1:03 PM, Kevin Grittner
  kevin.gritt...@wicourts.gov wrote:
   Robert Haas robertmh...@gmail.com wrote:
   The funny thing is that I've been thinking all of these months
   about how convenient it is that we defined WAL_DEBUG in debug
   builds
  
   IMO, --enable-debug should not do anything but include debugging
   symbols.  The ability to get a useful stack trace from a production
   crash, without compromising performance, is just too important by
   itself to consider conditioning any other behavior on it.
 
  So, should I go revert this change in head and 9.1, or does anyone
  else want to argue for Heikki's position that we should just leave it
  on, on the theory that it's too cheap to matter?
 
  I would just fix it in head.

 That just seems weird.  Either it's cheap enough not to matter (in
 which case there's no reason to revert that change at all) or it's
 expensive enough to matter (in which case presumably we don't want to
 leave it on in 9.1 for the 5 years or so it remains a supported
 release).

 I am concerned about changing behavior on people in a minor release ---
 it is not about risk in this case.

Well, I still maintain that if the performance impact is low enough
that we can get away with that, it's probably not worth fixing in
master either.  But at any rate, we now have three opinions on what to
do about this.  Anyone else want to cast a vote (preferably not for an
entirely new option)?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] [OT?] Time-zone database down [was: Re: timezone buglet?]

2011-10-07 Thread Greg Stark
On Fri, Oct 7, 2011 at 10:10 PM, Merlin Moncure mmonc...@gmail.com wrote:
 On 7 October 2011 21:27, Bruce Momjian br...@momjian.us wrote:
 Tom Lane wrote:
 It seems pretty baseless to me: you can't copyright a collection of
 facts.  I think we should do nothing pending a court decision.

 The one interesting case that I can recall were this was tested was
 this (lifted from Wikipedia):

 In October 1984, Fred L. Worth, author of The Trivia Encyclopedia,
 Super Trivia, and Super Trivia II, filed a $300 million lawsuit
 against the distributors of Trivial Pursuit.

 Facts are not subject to copyright but compilations can be.

I know it's popular for engineers to play lawyer and I've been guilty
of it on many an occasion. But in this case I think you're all *way*
oversimplifying the situation and I don't think it's within our ken to
be able to come to any clear conclusion.

a) Both the Trivial Pursuit case and Feist predate a major change
to US copyright statutes -- the DMCA. The DMCA implemented the WIPO
Copyright Treaty which specifically addressed database compilation
copyrights. I do not know how to interpret the language of the DMCA on
this and frankly I'm not sure anybody knows since I don't know if
there have been any major cases under it yet. If my guess is right the
relevant section is 17 U.S.C. §§ 103.

I'm not clear that a compilation that was made prior to the DMCA can
suddenly acquire copyright if it had none before, though.

b) Both of these cases are US cases. Copyright law varies heavily from
country to country despite the Berne and WIPO treaties.

c) I don't think that resolving whether the Olson database would be
covered even under Feist is as crystal clear as you guys make it out
to be. *After* Feist but before the DMCA, courts ruled in various cases
that phone books and even a baseball score card *did* have enough
originality to qualify for copyright.

All that said, I think this is far murkier than you all seem to think.
Copyright law is one of the most complex areas of the law and this is
one of the least well defined parts of copyright law.

-- 
greg



Re: [HACKERS] Review: Non-inheritable check constraints

2011-10-07 Thread Nikhil Sontakke
Hi Alex,

I guess we both are in agreement with each other :)

After sleeping on it, I think that check is indeed dead code with the new
non-inheritable check constraint functionality in place. So unless you have
some other comments, we can mark this as 'Ready for Committer'.
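
For the archives, a rough sketch of the behavior in question
(hypothetical tables):

  CREATE TABLE parent (id int);
  CREATE TABLE child () INHERITS (parent);
  -- without ONLY the constraint is inherited by child as usual
  ALTER TABLE parent ADD CONSTRAINT id_pos CHECK (id > 0);
  -- with ONLY it is attached to parent alone and marked
  -- non-inheritable, so future children do not pick it up either
  ALTER TABLE ONLY parent ADD CONSTRAINT id_small CHECK (id < 1000000);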

Again, thanks for the thorough review and subsequent changes!

Regards,
Nikhils

On Fri, Oct 7, 2011 at 12:18 PM, Alex Hunsaker bada...@gmail.com wrote:

 On Fri, Oct 7, 2011 at 00:28, Nikhil Sontakke nikkh...@gmail.com wrote:
  Hi Alex,

  So with it all spelled out now I see the constraint must be added to
  child tables too check is dead code.
 
 
  Thanks the above step-wise explanation helps.
 
  But AFAICS, the default inhOpt value can be governed by the
 SQL_inheritance
  guc too. So in that case, it's possible that recurse is false and child
  tables are present, no?

 Well... Do we really want to differentiate between those two case? I
 would argue no.

 Given that:
  set sql_inhertance to off;
  alter table xxx alter column;
 behaves the same as
  set sql_inhertance to on;
  alter table only xxx alter column;

 Why should we treat constraints differently? Or put another way if set
 sql_inhertance off makes alter table behave with an implicit only,
 shouldn't add/drop constraint respect that?

  Infact as I now remember, the reason my patch was looping through was to
  handle this very case. It was based on the assumptions that some
 constraints
  might be ONLY type and some can be inheritable.
  Although admittedly the current ALTER TABLE functionality does not allow
 this.

 Hrm... Ill I see is a user who turned off sql_inhertance wondering why
 they have to specify ONLY on some alter table commands and not others.
 I think if we want to support ONLY constraint types in the way you
 are thinking about them, we need to put ONLY some place else (alter
 table xxx add only constraint ?). Personally I don't see a reason to
 have that kind of constraint. Mostly because I don't see how its
 functionally different. Is it somehow?

 Anyone else have any thoughts on this?



Re: [HACKERS] [PATCH] Fix little typo in docs in func.sgml

2011-10-07 Thread Robert Haas
On Fri, Oct 7, 2011 at 1:36 PM, Dickson S. Guedes lis...@guedesoft.net wrote:
 This is a little patch to fix a typo in docs. In the length function
 should be a space between string and bytea.

Committed.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] [OT?] Time-zone database down [was: Re: timezone buglet?]

2011-10-07 Thread Jaime Casanova
On Fri, Oct 7, 2011 at 10:02 PM, Greg Stark st...@mit.edu wrote:

 All that said I think this is far murkier than you all seem to think.
 Copyright law is one of the most complex areas of the law and this is
 one of the least well defined parts of copyright law.


imposing no natural restrictions has that effect ;)

-- 
Jaime Casanova         www.2ndQuadrant.com
Professional PostgreSQL: Soporte 24x7 y capacitación
