Re: [HACKERS] LinkedIn

2006-05-31 Thread Christopher Kings-Lynne

Do any of you guys use linkedin.com and want to add me to your contacts
network?


Ironically, I don't use LinkedIn, even though they use PostgreSQL (not 
exclusively, though).


Really?  How do you know that?  Are any of their people part of the 
community?


Chris


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [HACKERS] [PATCHES] Magic block for modules

2006-05-31 Thread Martijn van Oosterhout
On Tue, May 30, 2006 at 06:20:33PM -0400, Tom Lane wrote:
 Now that the magic-block patch is in, we need to revisit this bit of the
 discussion.  I'm for making lack of a magic block an ERROR immediately.
 I don't see the point of waiting; in fact, if we wait till freeze we'll
 just make the breakage more concentrated.  At the very least it ought
 to be a WARNING immediately, because a LOG message is just not visible
 enough.

If you like I can send a patch that adds it to all of contrib and some
of the other places required so that make check passes...

Have a nice day,
-- 
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
 From each according to his ability. To each according to his ability to 
 litigate.


signature.asc
Description: Digital signature


Re: [HACKERS] plperl's ppport.h out of date?

2006-05-31 Thread Martijn van Oosterhout
On Tue, May 30, 2006 at 11:35:12AM -0400, Tom Lane wrote:
 FWIW, it looks like a large part of the bloat in the newer file is
 in-line documentation, which we hardly need to include in our
 distribution.  I'll leave it to someone more familiar with Perl to
 determine whether we want to try to use a slimmed-down copy of the
 up-to-date output.

Not sure whether it's worth it, but this sed line strips the POD docs:

sed -e '/^=/,/^=cut/d'  ppport.h
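For illustration, the range delete behaves like this on a stand-in file (the header content below is hypothetical, not the real ppport.h):

```shell
# Create a stand-in header with an embedded POD documentation block
# (made-up content, for demonstration only):
cat > ppport_sample.h <<'EOF'
#define FOO 1
=pod
This documentation would be stripped.
=cut
#define BAR 2
EOF
# The range delete removes everything from a line starting with '='
# through the next '=cut' line, leaving only the two #define lines:
sed -e '/^=/,/^=cut/d' ppport_sample.h
```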

Have a nice day,
-- 
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
 From each according to his ability. To each according to his ability to 
 litigate.


signature.asc
Description: Digital signature


Re: [HACKERS] PQescapeIdentifier

2006-05-31 Thread Dave Page
 

 -Original Message-
 From: [EMAIL PROTECTED] 
 [mailto:[EMAIL PROTECTED] On Behalf Of 
 Christopher Kings-Lynne
 Sent: 31 May 2006 04:16
 To: Tom Lane
 Cc: Hackers
 Subject: Re: [HACKERS] PQescapeIdentifier
 
  Christopher Kings-Lynne [EMAIL PROTECTED] writes:
  Here's a question. I wish to add a function to libpq to escape 
  PostgreSQL identifiers.  Will this function be subject to the same 
  security/encoding issues as PQescapeString?
  
  Is this of any general-purpose use?  How many apps are 
 really prepared
  to let an untrusted user dictate which columns are 
 selected/compared?
 
 phpPgAdmin has use for it, I assume pgAdmin would as well.  

Yes, it would.

Regards, Dave.

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [HACKERS] [PATCHES] Magic block for modules

2006-05-31 Thread Magnus Hagander
  On Sun, May 07, 2006 at 08:21:43PM -0400, Tom Lane wrote:
  I'm pretty sure we had agreed that magic blocks should be 
 required; 
  otherwise this check will accomplish little.
 
  Sure, I just didn't want to break every module in one 
 weekend. I was 
  thinking of adding it with LOG level now, send a message on 
 -announce 
  saying that at the beginning of the 8.2 freeze it will be an ERROR.
  Give people time to react.
 
 Now that the magic-block patch is in, we need to revisit this 
 bit of the discussion.  I'm for making lack of a magic block 
 an ERROR immediately.
 I don't see the point of waiting; in fact, if we wait till 
 freeze we'll just make the breakage more concentrated.  At 
 the very least it ought to be a WARNING immediately, because 
 a LOG message is just not visible enough.
 
 Comments?

If it's eventually going to be an ERROR, it's better to make it ERROR
from the start.

People working off a CVS snapshot will (hopefully) expect temporary
breakage during the development period. In general, you'd expect less
breakage the closer to release you are.

//Magnus

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [HACKERS] Compile libpq with vc8

2006-05-31 Thread Bruce Momjian

It was applied 24 hours ago, and should be in CVS HEAD and the 8.1
stable branch.

---

Jeff McKenna wrote:
 When is this patch going to be applied?  Is there a bug that I can 
 follow along?
 
 thanks.
 
 jeff
 
 
 
 
  * From: Bruce Momjian pgman ( at ) candle ( dot ) pha ( dot ) pa ( 
 dot ) us
  * To: Yannick ybgn ( at ) free ( dot ) fr
  * Subject: Re: Compile libpq with vc8
  * Date: Mon, 29 May 2006 15:39:50 -0400 (EDT)
 
 Yes, I have a patch in the queue to fix it.  Should be applied soon.
 
 ---
 
 Yannick wrote:
   Hi,
  
 I am trying to compile libpq with vc8. I got errors in wchar.c
  
   ..\..\backend\utils\mb\wchar.c(99) : error C2054: expected '(' to follow
   'inline'
   ..\..\backend\utils\mb\wchar.c(100) : error C2085: 'pg_euc_mblen' : not
   in formal parameter list
   ..\..\backend\utils\mb\wchar.c(100) : error C2143: syntax error :
   missing ';' before '{'
  
  
 If I remove the inline, I get an error in:
  
   [...]postgresql-8.1.4\src\include\utils/elog.h(105) : error C2365:
   'errcode' : redefinition; previous definition was 'typedef'
  
 Has anyone already compiled libpq with vc8?
  
   Yannick.
  
 
 
 ---(end of broadcast)---
 TIP 6: explain analyze is your friend
 

-- 
  Bruce Momjian   http://candle.pha.pa.us
  EnterpriseDBhttp://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


[HACKERS] copy with compression progress n

2006-05-31 Thread Andreas Pflug
I've been playing around with COPYing large binary data, and implemented
a COMPRESSION transfer format. The server-side compression saves
significant bandwidth, which may be the major limiting factor when large
amounts of data are involved (i.e. in many cases where COPY TO/FROM
STDIN/STDOUT is used).
In addition, a progress notification can be enabled using a "PROGRESS n"
option, which throws a notice every n lines.


I tested this with a table containing 2000 rows with a highly
compressible bytea column (size 1.4GB, on-disk 138MB). Numbers are as
follows (8.2 HEAD psql):

pg_dump -a -F c -t  652s, 146MB
\copy TO /dev/null  322s
\copy TO /dev/null binary   24s
\copy TO /dev/null compression  108s
\copy TO /tmp/file binary   55s, 1.4GB
\copy TO /tmp/file compression  108s, 133MB
\copy TO STDOUT binary|gzip -1  69s, 117MB
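For comparison, the last line above corresponds to a client-side pipeline like the following (the psql invocation is a sketch with placeholder table/db names and needs a running server; the runnable part only demonstrates the compression step on stand-in data):

```shell
# Server-side binary copy piped through gzip on the client
# (8.x-era syntax; needs a server, so shown commented out):
#   psql -c "COPY mytable TO STDOUT WITH BINARY" mydb | gzip -1 > mytable.bin.gz
# gzip -1 trades compression ratio for speed, which is why it
# outperformed pg_lzcompress in the timings above.
# Size effect demonstrated on highly compressible stand-in data:
head -c 1000000 /dev/zero > sample.bin
gzip -1 -c sample.bin > sample.bin.gz
wc -c < sample.bin.gz   # far smaller than the 1000000-byte input
```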

So using the plain text copy has a large overhead for text data over 
binary formats. OTOH, copying normal rows WITH BINARY may bloat the 
result too. A typical test table gave these numbers:

COPY:   6014 Bytes
BINARY: 15071 Bytes
COMPRESSION:2334 Bytes

The compression (pg_lzcompress) is less efficient than a binary copy 
piped to gzip, as long as the data transfer of 1.4GB from server to 
client isn't limited by network bandwidth. Apparently, pg_lzcompress 
uses 53s to compress to 133MB, while gzip only needs 14s for 117MB. 
It might be worth having a look at optimizing that, since it's used in 
tuptoaster. Still, when network traffic is involved, it may be better to 
spend some time on the server to reduce the data (e.g. for Slony, which 
uses COPY to start a replication, and is likely to be operated over 
lines < 1 GBit/s).


The attached patch implements COPY ... WITH [BINARY] COMPRESSION 
(COMPRESSION implies BINARY). The copy data uses bit 17 of the flag 
field to identify compressed data.
The PROGRESS n option to throw notices every n lines has a caveat: when 
copying TO STDOUT, data transfer will cease after the first notice was 
sent. This may either mean "don't ereport(NOTICE) when COPYing data to 
the client" or a bug somewhere.


Regards,
Andreas
Index: src/backend/commands/copy.c
===
RCS file: /projects/cvsroot/pgsql/src/backend/commands/copy.c,v
retrieving revision 1.266
diff -c -r1.266 copy.c
*** src/backend/commands/copy.c 26 May 2006 22:50:02 -  1.266
--- src/backend/commands/copy.c 31 May 2006 08:52:42 -
***
*** 47,53 ****
  #include <utils/memutils.h>
  #include <utils/relcache.h>
  #include <utils/syscache.h>
! 
  
  #define ISOCTAL(c) (((c) >= '0') && ((c) <= '7'))
  #define OCTVALUE(c) ((c) - '0')
--- 47,53 ----
  #include <utils/memutils.h>
  #include <utils/relcache.h>
  #include <utils/syscache.h>
! #include <utils/pg_lzcompress.h>
  
  #define ISOCTAL(c) (((c) >= '0') && ((c) <= '7'))
  #define OCTVALUE(c) ((c) - '0')
***
*** 103,114 ****
--- 103,121 ----
  	int			client_encoding;	/* remote side's character encoding */
  	bool		need_transcoding;	/* client encoding diff from server? */
  	bool		encoding_embeds_ascii;	/* ASCII can be non-first byte? */
+ 	bool		do_compress;	/* compress data before writing to output */
+ 	bool		do_flush;		/* flush fe_msgbuf to copy target file/pipe */
+ 	bool		use_raw_buf;	/* use raw buffered data for CopyGetData */
  	uint64		processed;		/* # of tuples processed */
+ 	uint64		progress;		/* progress notice each # tuples processed */
+ 
+ 	MemoryContext oldcontext;
  
  	/* parameters from the COPY command */
  	Relation	rel;			/* relation to copy to or from */
  	List	   *attnumlist;		/* integer list of attnums to copy */
  	bool		binary;			/* binary format? */
+ 	bool		compression;	/* binary compressed format? */
  	bool		oids;			/* include OIDs? */
  	bool		csv_mode;		/* Comma Separated Value format? */
  	bool		header_line;	/* CSV header line? */
***
*** 153,162 ****
  	 * converts it.  Note: we guarantee that there is a \0 at
  	 * raw_buf[raw_buf_len].
  	 */
! #define RAW_BUF_SIZE 65536	/* we palloc RAW_BUF_SIZE+1 bytes */
  	char	   *raw_buf;
  	int			raw_buf_index;	/* next byte to process */
  	int			raw_buf_len;	/* total # of bytes stored */
  } CopyStateData;
  
  typedef CopyStateData *CopyState;
--- 160,170 ----
  	 * converts it.  Note: we guarantee that there is a \0 at
  	 * raw_buf[raw_buf_len].
  	 */
! #define RAW_BUF_SIZE 65536	/* initially, we palloc RAW_BUF_SIZE+1 bytes */
  	char	   *raw_buf;
  	int			raw_buf_index;

Re: [HACKERS] [PATCHES] Magic block for modules

2006-05-31 Thread Bruce Momjian
Magnus Hagander wrote:
   On Sun, May 07, 2006 at 08:21:43PM -0400, Tom Lane wrote:
   I'm pretty sure we had agreed that magic blocks should be 
  required; 
   otherwise this check will accomplish little.
  
   Sure, I just didn't want to break every module in one 
  weekend. I was 
   thinking of adding it with LOG level now, send a message on 
  -announce 
   saying that at the beginning of the 8.2 freeze it will be an ERROR.
   Give people time to react.
  
  Now that the magic-block patch is in, we need to revisit this 
  bit of the discussion.  I'm for making lack of a magic block 
  an ERROR immediately.
  I don't see the point of waiting; in fact, if we wait till 
  freeze we'll just make the breakage more concentrated.  At 
  the very least it ought to be a WARNING immediately, because 
  a LOG message is just not visible enough.
  
  Comments?
 
 If it's eventually going to be an ERROR, it's better to make it ERROR
 from the start.
 
 People working off cvs snapshot will (hopefully) expect temporary
 breakage during the development period. In general, you'd expect less
 breakage the closer to release you are.

I say make it an ERROR and we can relax it later.  If you make it a
warning, we might not hear about it.

-- 
  Bruce Momjian   http://candle.pha.pa.us
  EnterpriseDBhttp://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [HACKERS] [COMMITTERS] pgsql: Fix calculation of plan node extParams to account for the

2006-05-31 Thread Jackie Leng
I tried the following two queries in the version before your patch:
(1) the one reported in the bug ("plan should not reference subplan's
variable") reported by Catalin Pitis:

INSERT INTO PROJECT(PROJECT_ID,PROJECT_DESC)(SELECT
MAX(PROJECT_ID),'MYPROJECT' FROM PROJECT WHERE NOT EXISTS  (
SELECT PROJECT_DESC FROM PROJECT WHERE PROJECT_DESC = 'MYPROJECT'  ) );

it reported an error;
(2)
select * from project where (1,1) = (SELECT MAX(PROJECT_ID),1 FROM
PROJECT WHERE NOT EXISTS  (SELECT PROJECT_DESC FROM PROJECT WHERE
PROJECT_DESC = 'MYPROJECT'))

but, there was no error at all!!

Then I noticed that, when adding the IDs of all PARAM_EXEC params
appearing in the given expression tree to the result set:
(1) for subplans corresponding to a SubLink, it was processed like this:
/* in finalize_primnode */

 if (is_subplan(node))
 {
  SubPlan    *subplan = (SubPlan *) node;

  /* Add outer-level params needed by the subplan to paramids */
  context->paramids = bms_join(context->paramids,
  bms_intersect(subplan->plan->extParam,
   context->outer_params));
  /* fall through to recurse into subplan args */
 }

Attention: there's a bms_intersect op here before bms_join.

(2) but for subplans corresponding to a RangeTable, say SubqueryScan, it
was processed like this:

  /* in finalize_plan */
  case T_SubqueryScan:

   /*
* In a SubqueryScan, SS_finalize_plan has already been run on the
* subplan by the inner invocation of subquery_planner, so there's
* no need to do it again. Instead, just pull out the subplan's
* extParams list, which represents the params it needs from my
* level and higher levels.
*/
   context.paramids = bms_add_members(context.paramids,
 ((SubqueryScan *) plan)->subplan->extParam);
   break;

Attention: there's no bms_intersect.

So, my question is: why not just add a bms_intersect in the second case,
just like in the first one? Do we need to change that much?


Tom Lane [EMAIL PROTECTED] wrote in message
news:[EMAIL PROTECTED]
 Log Message:
 ---
 Fix calculation of plan node extParams to account for the possibility
 that one initPlan sets a parameter for another.  This could not (I
 think) happen before 8.1, but it's possible now because the initPlans
 generated by MIN/MAX optimization might themselves use initPlans.  We
 attach those initPlans as siblings of the MIN/MAX ones, not children,
 to avoid duplicate computation when multiple MIN/MAX aggregates are
 present; so this leads to the case of an initPlan needing the result of
 a sibling initPlan, which is not possible with ordinary query nesting.
 Hadn't been noticed because in most contexts having too much stuff
 listed in extParam is fairly harmless.  Fixes "plan should not
 reference subplan's variable" bug reported by Catalin Pitis.

 Tags:
 
 REL8_1_STABLE

 Modified Files:
 --
 pgsql/src/backend/optimizer/plan:
 subselect.c (r1.100.2.2 -> r1.100.2.3)

(http://developer.postgresql.org/cvsweb.cgi/pgsql/src/backend/optimizer/plan/subselect.c.diff?r1=1.100.2.2&r2=1.100.2.3)

 ---(end of broadcast)---
 TIP 5: don't forget to increase your free space map settings




---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings




[HACKERS] Still not happy with psql's multiline history behavior

2006-05-31 Thread Tom Lane
I'm getting grudgingly more used to CVS HEAD's behavior of stuffing a
multiline command into the history buffer as a single item, but there's
still something bothering me about it.  I've finally put my finger on
what.  I think it should not combine SQL text and backslash commands
into a single history entry.  The example that seriously sucks is:

regression=# select foo ...
regression-# \r
Query buffer reset (cleared).
now control-P brings back:
regression=# select foo ...
\r

In the case where I type SQL and backslash on one line, of course it
can't break them apart, but I think there should be a general rule that
a line starting with a backslash is always a separate history entry.

Also, \e is seriously broken: after you edit the text and exit the
editor, the text is not reloaded into the visible display (although it
does seem to still be behind the scenes somewhere).

regards, tom lane

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [HACKERS] [PATCHES] Magic block for modules

2006-05-31 Thread Tom Lane
Martijn van Oosterhout <kleptog@svana.org> writes:
 If you like I can send a patch that adds it to all of contrib and some
 of the other places required so that make check passes...

Think I got them all already:
http://archives.postgresql.org/pgsql-committers/2006-05/msg00384.php
but if you see any I missed...

regards, tom lane

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [HACKERS] Compile libpq with vc8

2006-05-31 Thread Yannick

Bruce Momjian wrote:

It was applied 24-hours ago, and should be in CVS HEAD and the 8.1
stable branch.


I downloaded the source, following the instructions at
http://www.postgresql.org/developer/sourcecode/. It seems I didn't get
your commit :(

Log for win32.mak:
revision 1.36
date: 2005/09/16 18:58:48;  author: momjian;  state: Exp;  lines: +1 -4
This correction is required of nmake of Windows.

Did I check out the correct branch?

Yannick.



---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [HACKERS] copy with compression progress n

2006-05-31 Thread Tom Lane
Andreas Pflug [EMAIL PROTECTED] writes:
 The attached patch implements COPY ... WITH [BINARY] COMPRESSION 
 (compression implies BINARY). The copy data uses bit 17 of the flag 
 field to identify compressed data.

I think this is a pretty horrid idea, because it changes pg_lzcompress
from an unimportant implementation detail into a backup file format
that we have to support till the end of time.  What happens if, say,
we need to abandon pg_lzcompress because we find out it has patent
problems?

It *might* be tolerable if we used gzip instead, but I really don't see
the argument for doing this inside the server at all: piping to gzip
seems like a perfectly acceptable solution, quite possibly with higher
performance than doing it all in a single process (which isn't going
to be able to use more than one CPU).

I don't see the argument for restricting it to binary only, either.

regards, tom lane

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [HACKERS] [ADMIN] Problem building initdb on sparc10

2006-05-31 Thread Averbukh Stella
Hello Tom,

Thank you for the advice.  I removed pqsignalinquire() method and it
worked just fine.  


Stella Averbukh
_



-Original Message-
From: Tom Lane [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, May 30, 2006 11:29 AM
To: Averbukh Stella
Cc: pgsql-admin@postgresql.org; pgsql-hackers@postgresql.org; Bruce
Momjian
Subject: Re: [ADMIN] Problem building initdb on sparc10 

Averbukh Stella [EMAIL PROTECTED] writes:
 I'm building PostgreSQL on a Sparc10 and the build goes fine until
initdb.

 /usr/local/bin/gcc -O2 -Wall -Wmissing-prototypes -Wpointer-arith 
 -Winline -Wendif-labels -fno-strict-aliasing initdb.o 
 -L../../../src/port -lpgport -L../../../src/interfaces/libpq -lpq 
 -L../../../src/port -Wl,-R/postgres/lib -L/lib -L/usr/lib 
 -L/usr/local/lib -L/usr/local/ssl/lib -L/platform/SUNW,Ultra-60/lib
 -L/usr/sfw/lib/sparcv9 -L/usr/sfw/lib/ -L/usr/local/lib/sparcv9 
 -lpgport -lz -lrt -lresolv -lgen -lsocket -lnsl -ldl -lm -o initdb 
  Undefined                      first referenced
   symbol                            in file
  sigmask                           ../../../src/interfaces/libpq/libpq.so
  sigblock                          ../../../src/interfaces/libpq/libpq.so

When did you last successfully build PG on this system?  I'm guessing
you must have been using 7.4 or older, because AFAICS this was broken by
this 8.0 change:

2004-01-08 21:02  momjian

* doc/src/sgml/libpq.sgml, src/backend/nodes/read.c,
src/interfaces/libpq/fe-connect.c,
src/interfaces/libpq/fe-print.c,
src/interfaces/libpq/fe-secure.c,
src/interfaces/libpq/libpq-fe.h,
src/interfaces/libpq/libpq-int.h,
src/interfaces/libpq/pqsignal.c,
src/interfaces/libpq/pqsignal.h: Allow libpq to do thread-safe
SIGPIPE handling.  This allows it to ignore SIGPIPE from send()
in
libpq, but terminate on any other SIGPIPE, unless the user
installs
their own signal handler.

This is a minor fix because the only time you get SIGPIPE from
libpq's send() is when the backend dies.

The code appears to unconditionally assume that sigmask() and sigblock()
exist.  Not a good assumption.

AFAICS pqsignalinquire() isn't even used anywhere (at least not in
HEAD), so the simplest answer may be to remove it rather than try to fix
it.  It's in src/interfaces/libpq/pqsignal.c.

regards, tom lane


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[HACKERS] Possible TODO item: copy to/from pipe

2006-05-31 Thread Tom Lane
After re-reading what I just wrote to Andreas about how compression of
COPY data would be better done outside the backend than inside, it
struck me that we are missing a feature that's fairly common in Unix
programs.  Perhaps COPY ought to have the ability to pipe its output
to a shell command, or read input from a shell command.  Maybe something
like

COPY mytable TO '| gzip >/home/tgl/mytable.dump.gz';

(I'm not wedded to the above syntax, it's just an off-the-cuff thought.)

Of course psql would need the same capability, since the server-side
copy would still be restricted to superusers.

You can accomplish COPY piping now through psql, but it's a bit awkward:

psql -c "COPY mytable TO stdout" mydb | gzip ...

Thoughts?  Is this worth doing, or is the psql -c approach good enough?
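For reference, the full round trip with the existing workaround looks like this (table/database names are placeholders; the runnable part below only demonstrates that the pipeline is a transparent compression of the COPY text stream, since it needs no server):

```shell
# Existing client-side workaround (needs a running server, so commented out):
#   psql -c "COPY mytable TO stdout" mydb | gzip > mytable.gz
#   gunzip -c mytable.gz | psql -c "COPY mytable FROM stdin" mydb
# The tab-separated COPY text stream survives the pipeline unchanged:
printf '1\tfoo\n2\tbar\n' > copy_stream.txt
gzip -c copy_stream.txt > copy_stream.gz
gunzip -c copy_stream.gz | cmp - copy_stream.txt && echo "round trip ok"
```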

regards, tom lane

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [HACKERS] Compile libpq with vc8

2006-05-31 Thread Andrew Dunstan

Yannick wrote:

Bruce Momjian wrote:

It was applied 24-hours ago, and should be in CVS HEAD and the 8.1
stable branch.


I downloaded the source, following the instructions at
http://www.postgresql.org/developer/sourcecode/. It seems I didn't get
your commit :(

Log for win32.mak:
revision 1.36
date: 2005/09/16 18:58:48;  author: momjian;  state: Exp;  lines: +1 -4
This correction is required of nmake of Windows.

Did I check out the correct branch?




The fix didn't touch the makefile. It supplied a definition for inline 
in the config header file.


see http://archives.postgresql.org/pgsql-committers/2006-05/msg00351.php

cheers

andrew

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [HACKERS] Possible TODO item: copy to/from pipe

2006-05-31 Thread Andrew Dunstan

Tom Lane wrote:


You can accomplish COPY piping now through psql, but it's a bit awkward:

psql -c "COPY mytable TO stdout" mydb | gzip ...

Thoughts?  Is this worth doing, or is the psql -c approach good enough?




I think it's good enough. And there is also

  pg_dump -F c -t bigtable -f bigtable.dump

cheers

andrew

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [HACKERS] Possible TODO item: copy to/from pipe

2006-05-31 Thread David Fetter
On Wed, May 31, 2006 at 11:03:14AM -0400, Tom Lane wrote:
 After re-reading what I just wrote to Andreas about how compression
 of COPY data would be better done outside the backend than inside,
 it struck me that we are missing a feature that's fairly common in
 Unix programs.  Perhaps COPY ought to have the ability to pipe its
 output to a shell command, or read input from a shell command.
 Maybe something like
 
   COPY mytable TO '| gzip >/home/tgl/mytable.dump.gz';

That's a great syntax :)

Similarly,

COPY mytable FROM 'create_sample_data --table mytable --rows 1000 |';

would be cool.

 (I'm not wedded to the above syntax, it's just an off-the-cuff
 thought.)

It will be familiar to Perl users, for better or worse.  Come to that,
should the prefixes < and > also mean their corresponding shell
things?

 Of course psql would need the same capability, since the server-side
 copy would still be restricted to superusers.

Roight.

 You can accomplish COPY piping now through psql, but it's a bit awkward:
 
   psql -c "COPY mytable TO stdout" mydb | gzip ...
 
 Thoughts?  Is this worth doing, or is the psql -c approach good enough?

I think it's worth doing :)

Cheers,
D
-- 
David Fetter [EMAIL PROTECTED] http://fetter.org/
phone: +1 415 235 3778AIM: dfetter666
  Skype: davidfetter

Remember to vote!

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [HACKERS] copy with compression progress n

2006-05-31 Thread Andreas Pflug

Tom Lane wrote:

Andreas Pflug [EMAIL PROTECTED] writes:

The attached patch implements COPY ... WITH [BINARY] COMPRESSION 
(compression implies BINARY). The copy data uses bit 17 of the flag 
field to identify compressed data.



I think this is a pretty horrid idea, because it changes pg_lzcompress
from an unimportant implementation detail into a backup file format
that we have to support till the end of time.  What happens if, say,
we need to abandon pg_lzcompress because we find out it has patent
problems?

It *might* be tolerable if we used gzip instead,


I used pg_lzcompress because it's present in the backend. I'm fine with 
any other good compression algorithm.



 but I really don't see
the argument for doing this inside the server at all: piping to gzip
seems like a perfectly acceptable solution,


As I said, this holds only if it is possible to pipe the result into 
gzip in a performant way. The issue already arises if psql or any other 
COPY client (Slony, pg_dump) is not on the same machine: network 
bandwidth will limit throughput.



quite possibly with higher
performance than doing it all in a single process (which isn't going
to be able to use more than one CPU).


Which is pretty normal for pgsql.


I don't see the argument for restricting it to binary only, either.


That's not a restriction, but a result: compressed data is binary. 
Marking it as binary will make it work with older frontends as well, 
as long as they don't try to interpret the data. Actually, all 8.x psql 
versions should work (with COPY STDxx, not \copy).


Do you have a comment about the progress notification and its impact on 
copy to stdout?


Regards,
Andreas

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [HACKERS] Possible TODO item: copy to/from pipe

2006-05-31 Thread Andreas Pflug

Tom Lane wrote:

After re-reading what I just wrote to Andreas about how compression of
COPY data would be better done outside the backend than inside, it
struck me that we are missing a feature that's fairly common in Unix
programs.  Perhaps COPY ought to have the ability to pipe its output
to a shell command, or read input from a shell command.  Maybe something
like

COPY mytable TO '| gzip >/home/tgl/mytable.dump.gz';





(I'm not wedded to the above syntax, it's just an off-the-cuff thought.)

Of course psql would need the same capability, since the server-side
copy would still be restricted to superusers.


That won't help too much unless gzip's output is piped back too, so a 
replacement for COPY ... TO STDOUT COMPRESSED would be
COPY ... TO '| /bin/gzip |' STDOUT, to enable clients to receive the 
reduced stuff. But clients should be agnostic of server-side installed 
tools, and probably not be able to address them directly. Sounds like a 
potential security issue.


Regards,
Andreas

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


Re: [HACKERS] Possible TODO item: copy to/from pipe

2006-05-31 Thread Andreas Pflug

Andreas Pflug wrote:



Won't help too much, until gzip's output is piped back too, so a 
replacement for COPY .. TO STDOUT COMPRESSED  would be
COPY ... TO '| /bin/gzip |' STDOUT, to enable clients to receive the 
reduced stuff.


Forgot to mention:
COPY COMPRESSED was also meant to introduce a portable format that's 
efficient for both text and binary data. Relying on some external XYZzip 
version seems not too portable to me.


Regards,
Andreas

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [HACKERS] Possible TODO item: copy to/from pipe

2006-05-31 Thread Dave Page
 

 -Original Message-
 From: [EMAIL PROTECTED] 
 [mailto:[EMAIL PROTECTED] On Behalf Of Andreas Pflug
 Sent: 31 May 2006 16:41
 Cc: Tom Lane; pgsql-hackers@postgresql.org
 Subject: Re: [HACKERS] Possible TODO item: copy to/from pipe
 
 Andreas Pflug wrote:
 
  
  Won't help too much, until gzip's output is piped back too, so a 
  replacement for COPY .. TO STDOUT COMPRESSED  would be
  COPY ... TO '| /bin/gzip |' STDOUT, to enable clients to 
 receive the 
  reduced stuff.
 
 Forgot to mention:
 COPY COMPRESSED was also meant to introduce a portable format that's 
 efficient for both text and binary data. Relying on some 
 external XYZzip 
 version seems not too portable to me.

It does have that advantage. Gzip and others are not particularly
Windows-friendly, for example.

Regards, Dave.


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [HACKERS] copy with compression progress n

2006-05-31 Thread Tom Lane
Andreas Pflug [EMAIL PROTECTED] writes:
 Do you have a comment about the progress notification and its impact on 
 copy to stdout?

I didn't bother to comment on it because I think it's useless, as well
as broken for the stdout case.  Anyone who actually sees a use for it
will have to comment on why they want it.

regards, tom lane

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [HACKERS] Possible TODO item: copy to/from pipe

2006-05-31 Thread Tom Lane
Andreas Pflug [EMAIL PROTECTED] writes:
 Forgot to mention:
 COPY COMPRESSED was also meant to introduce a portable format that's 
 efficient for both text and binary data. Relying on some external XYZzip 
 version seems not too portable to me.

I dislike putting this into the backend precisely because it's trying to
impose a one-size-fits-all compression solution.  Someone might wish to
use bzip2 instead of gzip, for instance, or tweak the compression level
options of gzip.  It's trivial for the user to do that if the
compression program is separate, not trivial at all if it's wired into
COPY.  Also, a pipe feature would have uses unrelated to compression,
such as on-the-fly analysis or generation of data.
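One such non-compression use can be sketched as follows: summing a column of the COPY stream on the fly, without ever materializing it. The psql line is a placeholder (it needs a server and a hypothetical table); the awk step runs on stand-in data:

```shell
# On-the-fly analysis of a COPY text stream (server invocation commented out):
#   psql -c "COPY mytable TO stdout" mydb | awk -F'\t' '{sum += $2} END {print sum}'
# The same awk program on a stand-in tab-separated stream; sums the
# second column (1 + 2 + 3) and prints the total:
printf 'a\t1\nb\t2\nc\t3\n' | awk -F'\t' '{sum += $2} END {print sum}'
# prints 6
```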

regards, tom lane

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [HACKERS] Possible TODO item: copy to/from pipe

2006-05-31 Thread Andreas Pflug

Dave Page wrote:
 




-Original Message-
From: [EMAIL PROTECTED] 
[mailto:[EMAIL PROTECTED] On Behalf Of Andreas Pflug

Sent: 31 May 2006 16:41
Cc: Tom Lane; pgsql-hackers@postgresql.org
Subject: Re: [HACKERS] Possible TODO item: copy to/from pipe

Andreas Pflug wrote:


Won't help too much, until gzip's output is piped back too, so a 
replacement for COPY .. TO STDOUT COMPRESSED  would be
COPY ... TO '| /bin/gzip |' STDOUT, to enable clients to 


receive the 


reduced stuff.


Forgot to mention:
COPY COMPRESSED was also meant to introduce a portable format that's 
efficient for both text and binary data. Relying on some 
external XYZzip 
version seems not too portable to me.



It does have that advantage. Gzip and others are not particularly
Windows friendly for example.


... as most windows programs are pipe agnostic.

Regards,
Andreas

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [HACKERS] copy progress notification

2006-05-31 Thread Andreas Pflug

Tom Lane wrote:

Andreas Pflug [EMAIL PROTECTED] writes:

Do you have a comment about the progress notification and its impact on 
copy to stdout?



I didn't bother to comment on it because I think it's useless,


It's useful to see anything at all, and to be able to estimate how long 
the whole process will take. People might find it interesting whether 
they should go for a cup of coffee or better come back the next day...


as well as broken for the stdout case. 


I know it's broken, but why? Is using ereport when sending copy data 
illegal by design? If not, it's not the feature that's broken but 
something in cvs HEAD.


Regards,
Andreas

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [HACKERS] Possible TODO item: copy to/from pipe

2006-05-31 Thread Joshua D. Drake


I dislike putting this into the backend precisely because it's trying to
impose a one-size-fits-all compression solution.  Someone might wish to
use bzip2 instead of gzip, for instance, or tweak the compression level
options of gzip.  It's trivial for the user to do that if the
compression program is separate, not trivial at all if it's wired into
COPY.  Also, a pipe feature would have uses unrelated to compression,
such as on-the-fly analysis or generation of data.


It seems that it would be better to have the options within pg_dump 
which would give the most flexibility.


Sincerely,

Joshua D. Drake

--

   === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
   Providing the most comprehensive  PostgreSQL solutions since 1997
 http://www.commandprompt.com/



---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [HACKERS] Possible TODO item: copy to/from pipe

2006-05-31 Thread Andreas Pflug

Joshua D. Drake wrote:


I dislike putting this into the backend precisely because it's trying to
impose a one-size-fits-all compression solution.  Someone might wish to
use bzip2 instead of gzip, for instance, or tweak the compression level
options of gzip.  It's trivial for the user to do that if the
compression program is separate, not trivial at all if it's wired into
COPY.  Also, a pipe feature would have uses unrelated to compression,
such as on-the-fly analysis or generation of data.



It seems that it would be better to have the options within pg_dump 
which would give the most flexibility.


What about all other client tools?

My COPY WITH COMPRESSION is not the same as taking a copy file and 
zipping it; it creates a copy file with BinarySignature that has 
compressed bytes in the data part, thus it can be handled by any client 
app that can stream binary copy files from/to the server.
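The framing Andreas describes — a recognizable signature followed by compressed bytes in the data part — can be sketched as follows. The signature value and layout here are hypothetical illustrations, not the actual PGCOPY binary header:

```python
import zlib

# Hypothetical signature marking a compressed copy stream; the real
# PGCOPY binary header differs -- this only illustrates the framing.
SIGNATURE = b"PGCOPYZ\x00"

def pack(payload: bytes) -> bytes:
    """Prefix zlib-compressed data with a fixed signature."""
    return SIGNATURE + zlib.compress(payload, 6)

def unpack(stream: bytes) -> bytes:
    """Verify the signature, then decompress the data part."""
    if not stream.startswith(SIGNATURE):
        raise ValueError("not a compressed copy stream")
    return zlib.decompress(stream[len(SIGNATURE):])
```

Any client that checks the signature before streaming can then treat the payload opaquely, which is the portability point being argued.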


Regards,
Andreas

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [HACKERS] Possible TODO item: copy to/from pipe

2006-05-31 Thread Magnus Hagander
 Won't help too much, until gzip's output is piped back too, so a 
 replacement for COPY .. TO STDOUT COMPRESSED  would be 
 COPY ... TO '| 
 /bin/gzip |' STDOUT, to enable clients to
 
 receive the
 
 reduced stuff.
 
 Forgot to mention:
 COPY COMPRESSED was also meant to introduce a portable 
 format that's 
 efficient for both text and binary data. Relying on some external 
 XYZzip version seems not too portable to me.
  
  
  It does have that advantage. Gzip and others are not particularly
  Windows friendly for example.
 
 ... as most windows programs are pipe agnostic.

For the record, gzip on win32 works perfectly fine both as a separate
program and running in a pipe. No problem at all. The only issue is that
it's not available by default. (And possible issues with programs
launching it that don't know how to deal with windows style directory
naming)

//Magnus

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [HACKERS] Possible TODO item: copy to/from pipe

2006-05-31 Thread Chris Browne
[EMAIL PROTECTED] (Andreas Pflug) writes:
 Dave Page wrote:

-Original Message-
 From: [EMAIL PROTECTED]
 [mailto:[EMAIL PROTECTED] On Behalf Of Andreas
 Pflug
Sent: 31 May 2006 16:41
Cc: Tom Lane; pgsql-hackers@postgresql.org
Subject: Re: [HACKERS] Possible TODO item: copy to/from pipe

Andreas Pflug wrote:


 Won't help too much, until gzip's output is piped back too, so a
 replacement for COPY .. TO STDOUT COMPRESSED  would be
 COPY ... TO '| /bin/gzip |' STDOUT, to enable clients to

 receive the

reduced stuff.

Forgot to mention:
 COPY COMPRESSED was also meant to introduce a portable format
 that's efficient for both text and binary data. Relying on some
 external XYZzip version seems not too portable to me.
 It does have that advantage. Gzip and others are not particularly
 Windows friendly for example.

 ... as most windows programs are pipe agnostic.

Shall we make PostgreSQL less powerful because of that?
-- 
cbbrowne,@,cbbrowne.com
http://cbbrowne.com/info/advocacy.html
Love is like a snowmobile flying over the frozen tundra that suddenly
flips, pinning you underneath.  At night, the ice weasels come.
-- Matt Groening

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [HACKERS] Possible TODO item: copy to/from pipe

2006-05-31 Thread Andreas Pflug

Chris Browne wrote:

[EMAIL PROTECTED] (Andreas Pflug) writes:


Dave Page wrote:


-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Andreas
Pflug
Sent: 31 May 2006 16:41
Cc: Tom Lane; pgsql-hackers@postgresql.org
Subject: Re: [HACKERS] Possible TODO item: copy to/from pipe

Andreas Pflug wrote:




Won't help too much, until gzip's output is piped back too, so a
replacement for COPY .. TO STDOUT COMPRESSED  would be
COPY ... TO '| /bin/gzip |' STDOUT, to enable clients to


receive the



reduced stuff.


Forgot to mention:
COPY COMPRESSED was also meant to introduce a portable format
that's efficient for both text and binary data. Relying on some
external XYZzip version seems not too portable to me.


It does have that advantage. Gzip and others are not particularly
Windows friendly for example.


... as most windows programs are pipe agnostic.



Shall we make PostgreSQL less powerful because of that?


I never said that. We shall seek solutions that run painlessly on most 
popular platforms and are useful to users.
I wonder if we'd be able to ship gzip with the windows installer, to 
ensure proper integration.


Regards,
Andreas

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [HACKERS] Possible TODO item: copy to/from pipe

2006-05-31 Thread David Fetter
On Wed, May 31, 2006 at 02:46:29PM -0400, Andrew Dunstan wrote:

 I wish somebody would explain to me the compelling use case for
 this.

As with in-place upgrades,[1] the compelling use case is being short
on disk space.  For somebody with a multi-TB (or whatever figure
sounds big this week) PostgreSQL database, it may be impossible to get
space for twice or more that.  Giving people the option to stream
COPYs through a pipe would alleviate a lot of pain.
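The disk-space argument is that the dump never needs to exist uncompressed: the COPY stream can be compressed chunk by chunk as it is produced. A minimal sketch of that idea, where the row generator stands in for COPY output:

```python
import zlib

def compress_stream(chunks):
    """Compress an iterable of byte chunks incrementally, yielding
    compressed pieces without ever buffering the whole input."""
    comp = zlib.compressobj()
    for chunk in chunks:
        out = comp.compress(chunk)
        if out:
            yield out
    yield comp.flush()
```

Peak storage is bounded by the compressor's internal buffers, not by the size of the table being dumped.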

Cheers,
D

[1]  A feature people seem to think we don't need, although convincing
cases have been made for it.
-- 
David Fetter [EMAIL PROTECTED] http://fetter.org/
phone: +1 415 235 3778AIM: dfetter666
  Skype: davidfetter

Remember to vote!

---(end of broadcast)---
TIP 6: explain analyze is your friend


[HACKERS] More thoughts about planner's cost estimates

2006-05-31 Thread Tom Lane
I've been thinking about the planner's costing problems again,
particularly in connection with Philippe Lang's complaint here:
http://archives.postgresql.org/pgsql-general/2006-05/msg01526.php
Investigation showed that the planner is much too optimistic about the
usefulness of a multi-index BitmapAnd plan, and since that comparison is
just a cost-estimate comparison, it implies we are underestimating the
cost of an index scan.  A typical example from his results is

->  BitmapAnd  (cost=12.30..12.30 rows=1 width=0) (actual time=0.306..0.306 rows=0 loops=13628)
      ->  Bitmap Index Scan on lw_id_workflow  (cost=0.00..2.02 rows=7 width=0) (actual time=0.009..0.009 rows=7 loops=13628)
            Index Cond: (lw.id_workflow = "outer".id)
      ->  Bitmap Index Scan on lw_ordre  (cost=0.00..10.03 rows=1437 width=0) (actual time=0.293..0.293 rows=1714 loops=13628)
            Index Cond: (ordre = $4)

There are two variables involved here: the cost of touching an index page
and the cost of processing an index tuple.  Given the two comparable data
points above, we can solve for those numbers, and it turns out that the
real cost ratio on Philippe's machine is about 50 to 1.  Which says that
for him, cpu_index_tuple_cost plus cpu_operator_cost should be around
0.02, nearly an order of magnitude higher than their current default
values (0.001 and 0.0025 respectively).
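The "solve for those numbers" step is a 2x2 linear system: model each scan's actual time as a·pages + b·tuples and solve from the two data points. A sketch with illustrative inputs — the page counts (2 and 10) are assumptions read off the cost estimates above, not measured values, so the resulting ratio is only indicative:

```python
def solve_costs(p1, n1, t1, p2, n2, t2):
    """Solve t = a*pages + b*tuples from two (pages, tuples, time) points."""
    det = p1 * n2 - p2 * n1
    a = (t1 * n2 - t2 * n1) / det   # cost per index page touched
    b = (p1 * t2 - p2 * t1) / det   # cost per index tuple processed
    return a, b

# Illustrative: scan 1 ~2 pages, 7 tuples, 0.009 ms;
#               scan 2 ~10 pages, 1714 tuples, 0.293 ms.
a, b = solve_costs(2, 7, 0.009, 10, 1714, 0.293)
```

Whatever page counts one assumes, the per-page cost comes out one to two orders of magnitude above the per-tuple cost, which is the point of the argument.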

In general it seems to me that for CPU-bound databases, the default values
of the cpu_xxx_cost variables are too low.  I am tempted to raise the
default value of cpu_index_tuple_cost to 0.005, which would be half of
cpu_tuple_cost and twice cpu_operator_cost; that seems more in line with
my feel for the relative costs of things.  For a CPU-bound database all
three would need to go up more than that.  But rather than telling people
to manipulate all three of these variables individually, I think it might
also be a good idea to provide a new GUC variable named something like
cpu_speed_scale that would just be a multiplier for the other variables.
It would default to 1.0 and our standard advice for CPU-bound databases
would be decrease random_page_cost to 1.0 and raise cpu_speed_scale to
10.0 or so.  Seems cleaner than telling people to muck with three or so
individual values.

Another thing that's bothering me is that the index access cost computation
(in genericcostestimate) is looking sillier and sillier:

/*
 * Estimate the number of index pages that will be retrieved.
 *
 * For all currently-supported index types, the first page of the index is
 * a metadata page, and we should figure on fetching that plus a pro-rated
 * fraction of the remaining pages.
 */
    if (index->pages > 1 && index->tuples > 0)
    {
        numIndexPages = (numIndexTuples / index->tuples) * (index->pages - 1);
        numIndexPages += 1;        /* count the metapage too */
        numIndexPages = ceil(numIndexPages);
    }
    else
        numIndexPages = 1.0;

/*
 * Compute the index access cost.
 *
 * Disk cost: our generic assumption is that the index pages will be read
 * sequentially, so they have cost 1.0 each, not random_page_cost.
 */
*indexTotalCost = numIndexPages;

There are several things wrong with this, at least in its application to
btrees.  It's not counting descent of the btree (ie, touches of the root
page and intermediate-level pages).  On the other hand it's surely
overcharging for metapage touches.  As of CVS tip we cache the metapage in
the relcache, so it's probably fair to disregard that cost altogether.
And on the third hand, when we need to retrieve multiple leaf pages it's
over-optimistic to assume that those'll be purely sequential fetches.
(That would be approximately true in a freshly-built btree, but not in one
that's suffered any amount of page splitting or recycling.)

I've desisted from touching this mainly because the obvious sanity
adjustments, such as adding something for tree descent and charging
random_page_cost not 1.0 for leaf page touches, would increase the
estimated cost of index usage, which seemed like not the direction we
wanted to go in.  So I figured that the errors were more or less
cancelling each other.  But the addition of bitmap index scans is now
exposing the weaknesses, so we need to face up to the problem.

I am inclined to think that a reasonable model is to continue to estimate
the number of index leaf pages touched as above (pro-rating against the
total number of index entries), to charge random_page_cost per leaf page
touched, and not to count anything for metapage or tree descent.  I
justify the latter on the grounds that upper tree levels will tend to stay
in cache because they're touched so often.  random_page_cost per leaf page
may be an overestimate, since sometimes logically adjacent leaf pages will
be physically adjacent too, but not charging for tree descent should help
to cancel that out.  With this model, the disk cost to fetch a 

Re: [HACKERS] error-free disabling of individual child partition

2006-05-31 Thread Greg Stark


Regarding ALTER TABLE ADD INHERITS, tablecmds.c has the following comment for
when it's creating a table that inherits some columns:

 *The order in which the attributes are inherited is very important.
 *Intuitively, the inherited attributes should come first. If a table
 *inherits from multiple parents, the order of those attributes are
 *according to the order of the parents specified in CREATE TABLE.

If you add a parent table at some later date then the columns might not fit
this constraint. Is it actually very important or merely a good idea for the
server to create them in the order the user will expect? That is, is anything
else in the server going to break if the columns aren't in the right order?

Actually now that I think of it, removing a parent table can also create an
incorrectly ordered column list if the table has multiple parents. If we
remove one of its first parents and one of the columns was merged from
multiple parents then it will appear before its normal place with the other
columns from the later parent.
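The ordering rule the tablecmds.c comment describes — inherited columns appear in parent order, and a column shared by several parents keeps the position of its first appearance — can be sketched with a hypothetical helper (not the actual server logic):

```python
def inherited_column_order(parents):
    """parents: one column-name list per parent, in CREATE TABLE order.
    A column merged from several parents keeps its first position."""
    seen = set()
    order = []
    for cols in parents:
        for col in cols:
            if col not in seen:
                seen.add(col)
                order.append(col)
    return order

# Dropping the first parent later re-derives a different order for a
# merged column -- exactly the inconsistency described above.
```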


Incidentally, did the following message ever make it to the list? I saw my
carbon copy, but never received it from the list and never received any
responses. Usually any suggestion of making tokens reserved words elicits
screams of objections.


Greg Stark [EMAIL PROTECTED] writes:

 Tom Lane:
 
  ALTER TABLE childN DROP INHERITS old_parent;
  ALTER TABLE childN ADD INHERITS new_parent;
 
 I think you would have to make INHERITS a reserved word to make that happen:
 
 stark= create table integer (i integer);
 CREATE TABLE
 stark= alter table test add inherits integer;
 ALTER TABLE
 
 -- 
 greg
 
 

-- 
greg


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [HACKERS] Possible TODO item: copy to/from pipe

2006-05-31 Thread Dave Page



On 31/5/06 18:28, Magnus Hagander [EMAIL PROTECTED] wrote:

 Won't help too much, until gzip's output is piped back too, so a
 replacement for COPY .. TO STDOUT COMPRESSED  would be
 COPY ... TO '| 
 /bin/gzip |' STDOUT, to enable clients to
 
 receive the
 
 reduced stuff.
 
 Forgot to mention:
 COPY COMPRESSED was also meant to introduce a portable
 format that's 
 efficient for both text and binary data. Relying on some external
 XYZzip version seems not too portable to me.
 
 
 It does have that advantage. Gzip and others are not particularly
 Windows friendly for example.
 
 ... as most windows programs are pipe agnostic.
 
 For the record, gzip on win32 works perfectly fine both as a separate
 program and running in a pipe. No problem at all. The only issue is that
 it's not available by default. (And possible issues with programs
 launching it that don't know how to deal with windows style directory
 naming)

Exactly my point; how many production Windows servers do you have with gzip
anywhere near them? Andreas' point about pipes is also valid though - it's
simply not the norm on Windows as I found when we were porting Slony
(more.exe barfs at 8MB being piped in).

Regards, Dave.



---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [HACKERS] Possible TODO item: copy to/from pipe

2006-05-31 Thread Dave Page



On 31/5/06 19:13, Andreas Pflug [EMAIL PROTECTED] wrote:

 I wonder if we'd be able to ship gzip with the windows installer, to
 insure proper integration.

'Fraid not. It's GPL'd.

Regards, Dave.


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [HACKERS] Possible TODO item: copy to/from pipe

2006-05-31 Thread Alvaro Herrera
Andrew Dunstan wrote:
 David Fetter wrote:

 As with in-place upgrades,[1] the compelling use case is being short
 on disk space.  For somebody with a multi-TB (or whatever figure
 sounds big this week) PostgreSQL database, it may be impossible to get
 space for twice or more that.  Giving people the option to stream
 COPYs through a pipe would alleviate a lot of pain.
 
 But why is that hugely better than piping psql output to gzip?

psql output has already travelled over the network.

-- 
Alvaro Herrerahttp://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [HACKERS] error-free disabling of individual child partition

2006-05-31 Thread Andrew Dunstan

Greg Stark wrote:


Incidentally, did the following message ever make it to the list? I saw my
carbon copy, but never received it from the list and never received any
responses. Usually any suggestion of making tokens reserved words elicits
screams of objections.


Greg Stark [EMAIL PROTECTED] writes:

  

Tom Lane:



ALTER TABLE childN DROP INHERITS old_parent;
ALTER TABLE childN ADD INHERITS new_parent;
  

I think you would have to make INHERITS a reserved word to make that happen:

stark= create table integer (i integer);
CREATE TABLE
stark= alter table test add inherits integer;
ALTER TABLE





Yes it did.

Syntax like this would probably not require making it a reserved word:

ALTER TABLE foo ALTER INHERITS ADD|DROP bar

Maybe a bit uglier, but not too much.

cheers

andrew



---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [HACKERS] Possible TODO item: copy to/from pipe

2006-05-31 Thread Steve Atkins


On May 31, 2006, at 12:58 PM, Dave Page wrote:





On 31/5/06 19:13, Andreas Pflug [EMAIL PROTECTED] wrote:


I wonder if we'd be able to ship gzip with the windows installer, to
insure proper integration.


'Fraid not. It's GPL'd.


Well, one implementation of it is. zlib is new-bsd-ish, though, and
includes minigzip, which should be just fine for use in a pipe on windows.

(Not that that's an argument one way or the other as to whether this
is something we should do).

Cheers,
  Steve




---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [HACKERS] Possible TODO item: copy to/from pipe

2006-05-31 Thread Magnus Hagander
  I wonder if we'd be able to ship gzip with the windows 
 installer, to 
  insure proper integration.
 
 'Fraid not. It's GPL'd.
 

Well, if we want to go down that route, we could probably hack up
something simple around zlib. IIRC, there's even sample code in there
for how to write a gzip pipe program...

No, not as convenient, but should be handlable.
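Such a gzip pipe program can indeed be tiny. A sketch (a real filter would simply connect stdin to stdout in binary mode, as the `__main__` block does):

```python
import gzip
import io
import shutil
import sys

def gzip_filter(src, dst):
    """Read raw bytes from src and write a gzip stream to dst,
    copying in chunks so arbitrarily large input fits in a pipe."""
    with gzip.GzipFile(fileobj=dst, mode="wb") as out:
        shutil.copyfileobj(src, out)

if __name__ == "__main__":
    gzip_filter(sys.stdin.buffer, sys.stdout.buffer)
```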

//Magnus

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [HACKERS] Possible TODO item: copy to/from pipe

2006-05-31 Thread Tom Lane
Andreas Pflug [EMAIL PROTECTED] writes:
 My COPY WITH COMPRESSION is not the same as taking a copy file and 
 zipping it; it creates a copy file with BinarySignature that has 
 compressed bytes in the data part, thus it can be handled by any client 
 app that can stream binary copy files from/to the server.

If you mean you're compressing each data field separately, that's surely
a very bad idea.  If you mean you're compressing everything except the
file header, I fail to see the value.  Binary is binary.  I *seriously*
doubt there are clients out there that look for a PGCOPY header before
deciding whether to send the file to the server or not.  And a client
that did know that much about a PGCOPY file would likely spit up on a
critical flag it didn't recognize, anyway.

regards, tom lane

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [HACKERS] Possible TODO item: copy to/from pipe

2006-05-31 Thread Tom Lane
Dave Page dpage@vale-housing.co.uk writes:
 Exactly my point; how many production Windows servers do you have with gzip
 anywhere near them? Andreas' point about pipes is also valid though - it's
 simply not the norm on Windows as I found when we were porting Slony
 (more.exe barfs at 8MB being pipe in).

I don't see that we should allow Windows' deficiencies to be an argument
against providing a facility that would be useful on all our other platforms.

regards, tom lane

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [HACKERS] Possible TODO item: copy to/from pipe

2006-05-31 Thread Andrew Dunstan

Alvaro Herrera wrote:

Andrew Dunstan wrote:
  


But why is that hugely better than piping psql output to gzip?



psql output has already travelled over the network.

  


As I understand Tom's suggestion, it does not involve compression of 
over the wire data. He suggested that on the server you would be able to do:


 COPY mytable TO '| gzip >/home/tgl/mytable.dump.gz';


and that there could be an equivalent extension on psql's \copy command, as an alternative to doing 



 psql -c "COPY mytable TO stdout" mydb | gzip ...



It's the second piece especially that seems to me unnecessary. 



So I am still unconvinced.



cheers

andrew

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [HACKERS] plperl's ppport.h out of date?

2006-05-31 Thread Devrim GUNDUZ
Hi,

On Tue, 2006-05-30 at 08:29 -0400, Tom Lane wrote:
 ppport.h is throwing warnings when compiling on Fedora Core 5
 (with perl-5.8.8-4).  I gather from the comments in the head
 of that file that we ought to look for a newer version.

I've informed Andrew about this; he sent me a patch a few days ago,
but I haven't had time to test it yet.

I'll test it soon.
-- 
The PostgreSQL Company - Command Prompt, Inc. 1.503.667.4564
PostgreSQL Replication, Consulting, Custom Development, 24x7 support
Managed Services, Shared and Dedicated Hosting
Co-Authors: plPHP, plPerlNG - http://www.commandprompt.com/



---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [HACKERS] Possible TODO item: copy to/from pipe

2006-05-31 Thread Dave Page



On 31/5/06 21:10, Tom Lane [EMAIL PROTECTED] wrote:

 Dave Page dpage@vale-housing.co.uk writes:
 Exactly my point; how many production Windows servers do you have with gzip
 anywhere near them? Andreas' point about pipes is also valid though - it's
 simply not the norm on Windows as I found when we were porting Slony
 (more.exe barfs at 8MB being pipe in).
 
 I don't see that we should allow Windows' deficiencies to be an argument
 against providing a facility that would be useful on all our other platforms.

It's not about a primarily GUI based OS not being able to do everything a
traditionally command line based OS can do on the command line, it's about
providing a solution that will work on either and remain portable. Whilst I
agree with your objection to using pg_lzcompress, I for one would rather see
a builtin solution that I know will work whatever platform/box I'm on
without having to go and download additional tools.

Regards, Dave. 


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [HACKERS] Possible TODO item: copy to/from pipe

2006-05-31 Thread Martijn van Oosterhout
On Wed, May 31, 2006 at 01:08:28PM -0700, Steve Atkins wrote:
 On May 31, 2006, at 12:58 PM, Dave Page wrote:
 On 31/5/06 19:13, Andreas Pflug [EMAIL PROTECTED] wrote:
 
 I wonder if we'd be able to ship gzip with the windows installer, to
 insure proper integration.
 
 'Fraid not. It's GPL'd.
 
 Well, one implementation of it is. zlib is new-bsd-ish, though, and
 includes minigzip, which should be just fine for use in a pipe on
 windows.

Even then it's not relevant. The Windows Installer is already GPL'd by
the fact it includes readline. zlib is indeed straight BSD like.

Have a nice day,
-- 
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
 From each according to his ability. To each according to his ability to 
 litigate.


signature.asc
Description: Digital signature


Re: [HACKERS] Possible TODO item: copy to/from pipe

2006-05-31 Thread Andreas Pflug

Dave Page wrote:


 It's not about a primarily GUI based OS not being able to do
 everything a traditionally command line based OS can do on the
 command line, it's about providing a solution that will work on
 either and remain portable. Whilst I agree with your objection to
 using pg_lzcompress,


Well, pg_lzcompress has been in the backend for more than 6 years now; strange 
that the objections arise only now. However, a replacement for it might be a 
good idea, since apparently the fastest gzip algorithm is 3x faster for 10% 
better compression. TOAST write performance would probably profit 
significantly from a better algorithm.


I wonder what other use-cases exist for server side copy filters beyond 
compression.


Regards,
Andreas


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [HACKERS] plperl's ppport.h out of date?

2006-05-31 Thread Andrew Dunstan

Martijn van Oosterhout wrote:

On Tue, May 30, 2006 at 11:35:12AM -0400, Tom Lane wrote:
  

FWIW, it looks like a large part of the bloat in the newer file is
in-line documentation, which we hardly need to include in our
distribution.  I'll leave it to someone more familiar with Perl to
determine whether we want to try to use a slimmed-down copy of the
up-to-date output.



Not sure whether it's worth it, but this sed line strips the POD docs:

sed -e '/^=/,/^=cut/d' <ppport.h

Have a nice day,
  


The changes are a lot more substantive than this, from my quick look.

We'll have to upgrade some day, but for now we are OK. This module is 
regularly updated, but we should only update our copy as needed. I think 
we are good to go with Tom's patch.


cheers

andrew

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[HACKERS] Does anyone see value in src/tutorial/beard.c ?

2006-05-31 Thread Tom Lane
I'm strongly tempted to put src/tutorial/beard.c out of its misery.
It doesn't compile and probably never has --- not only is it missing
needed #includes, but it references a symbol INV_MD that I can find
no trace of anywhere.  And it's certainly not a good example of
backend programming style.  Anyone care to save it?

regards, tom lane

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [HACKERS] Possible TODO item: copy to/from pipe

2006-05-31 Thread Magnus Hagander
  I wonder if we'd be able to ship gzip with the windows 
 installer, to 
  insure proper integration.
  
  'Fraid not. It's GPL'd.
  
  Well, one implementation of it is. zlib is new-bsd-ish, though, and 
  includes minigzip, which should be just fine for use in a pipe on 
  windows.
 
 Even then it's not relevent. The Windows Installer is already 
 GPL'd by the fact it includes readline. zlib is indeed 
 straight BSD like.

Uh. The installer does *not* include readline.

We do include PostGIS, but the PostGIS people themselves don't consider
us GPLed because of that ;-) 

Everything else is != GPL.

//Magnus

---(end of broadcast)---
TIP 6: explain analyze is your friend


[HACKERS] Generalized concept of modules

2006-05-31 Thread Tom Lane
[ moving this thread to -hackers ]

Martijn van Oosterhout kleptog@svana.org writes:
 While you do have a good point about non-binary modules, our module
 handling need some help IMHO. For example, the current hack for CREATE
 LANGUAGE to fix things caused by old pg_dumps. I think that's the
 totally wrong approach long term, I think the pg_dump shouldn't be
 including the CREATE LANGUAGE statement at all, but should be saying
 something like INSTALL plpgsql and pg_restore works out what is
 needed for that module.

There's a lot to be said for this, but I keep having the nagging
feeling that people are equating module with shared library, which
seems far from sufficiently general.  I'd like to see module mean
an arbitrary collection of SQL objects.  So I think the raw definition
sought by your INSTALL would always be a SQL script, and any shared
libs that might come along with that are secondary.  The idea of using
pg_depend to manage UNINSTALL is an excellent one.
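The pg_depend idea amounts to: record a dependency from each object a module creates back to the module, and let UNINSTALL drop everything that depends on it. A toy sketch (the names and structures are illustrative, not catalog layouts):

```python
class Catalog:
    """Toy module catalog: each object records the module it depends on."""

    def __init__(self):
        self.objects = {}   # object name -> owning module
        self.modules = set()

    def install(self, module, objects):
        """Register a module and the SQL objects its script created."""
        self.modules.add(module)
        for obj in objects:
            self.objects[obj] = module

    def uninstall(self, module):
        """Drop the module; the dependencies identify what else to drop."""
        self.modules.discard(module)
        self.objects = {o: m for o, m in self.objects.items() if m != module}
```

The open question in the following paragraph — what pg_dump should do about indirect dependencies — is exactly what such a toy model glosses over.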

 1. When given the name of an external module, you need to be able to
 find the SQL commands needed to make it work.

No problem, the name is the name of a SQL script file stored in a specific
installation directory.

 2. You need to be able to tell if something is installed already or
 not.

pg_module system catalog.  You'd need this anyway since there has to be
some representation of the module object in the catalogs for its
component objects to have pg_depend dependencies on.

 With these in place, upgrades across versions of postgres could become
 a lot easier. People using tsearch2 now would get only INSTALL
 tsearch2 in their dumps and when they upgrade to 8.2 they get the new
 definitions for tsearch using GIN. No old definitions to confuse people
or the database. (Note: I'm not sure if tsearch would be compatible at
the query level, but that's not relevant to the point I'm making).

Let's see, I guess pg_dump would have to be taught to ignore any objects
that it can see are directly dependent on a module object.  What about
indirect dependencies though?  The exact semantics don't seem clear to me.

Also, this seems to be getting into territory that Oracle has already
trod --- someone should study exactly what they do for PL/SQL modules
and whether we want to be compatible or not.  Perhaps there's even
something in SQL2003 about it?

regards, tom lane

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [HACKERS] Possible TODO item: copy to/from pipe

2006-05-31 Thread Joshua D. Drake



Uh. The installer does *not* include readline.

We do include PostGIS, but the PostGIS people themselves don't consider
us GPLed because of that ;-) 



That is a tad different. PostgreSQL does not link to Postgis. Readline 
does :)



Sincerely,

Joshua D. Drake


Everything else is != GPL.

//Magnus

---(end of broadcast)---
TIP 6: explain analyze is your friend




--

   === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
   Providing the most comprehensive  PostgreSQL solutions since 1997
 http://www.commandprompt.com/



---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [HACKERS] session id and global storage

2006-05-31 Thread David Hoksza
Something like this might be possible, but this select can
return more than one row when the user is connected with multiple sessions...

David Hoksza



 Hi, I cant find any function, which tells me something like session
 id. Is there something like that? I need it in my AM, because I need
 to know, if something which I wrote in file was written in this
 current session or previously.

AL How about

AL select procpid||' '||backend_start from pg_stat_activity;

AL Yours,
AL Laurenz Albe
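
David's caveat (one row per connection) and the reason Laurenz's procpid||backend_start trick still works can be sketched in plain Python; the pids and timestamps below are invented purely for illustration, not taken from a real server:

```python
from datetime import datetime

# Hypothetical pg_stat_activity rows (procpid, backend_start) -- invented data.
backends = [
    (4211, datetime(2006, 5, 31, 9, 0, 3)),
    (4212, datetime(2006, 5, 31, 9, 0, 7)),  # same user, second connection
]

# OS pids get recycled over time, but the (pid, backend_start) pair stays
# unique per backend, so it serves as a session id -- and, as David notes,
# a user with several open connections contributes one id per backend.
session_ids = {f"{pid} {start.isoformat()}" for pid, start in backends}
```

Filtering to the current backend on the server side would collapse this to a single row, but the composite key is what makes the id unambiguous across pid reuse.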


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[HACKERS] server crash on recursive function invocation

2006-05-31 Thread Ali Baba
hi,
i was just trying recursive function invocation and got a server crash
when i changed a GUC variable max_stack_depth, to a high number.
fallowing is what i have tried.

select max_val from pg_settings where name='max_stack_depth'; -- returns 2097151
set max_stack_depth=2097151;
CREATE OR REPLACE FUNCTION func() RETURNS INT AS $$
DECLARE
x int;
BEGIN
 null;
 x := func();
 return 0;
END;
$$ LANGUAGE PLPGSQL;
select func();

and the server get crashed.

Any ideas?

Thanks,
--
Usman

Re: [HACKERS] server crash on recursive function invocation

2006-05-31 Thread Andrew Dunstan
Ali Baba said:

 hi,
 i was just trying recursive function invocation and got a server crash
 when i changed a GUC variable max_stack_depth,  to a high number.
 fallowing is what i have tried.

 select max_val from pg_settings where name='max_stack_depth'; -- returns 2097151
 set max_stack_depth=2097151;
 CREATE OR REPLACE FUNCTION func() RETURNS INT AS $$
 DECLARE
 x int;
 BEGIN
  null;
  x := func();
  return 0;
 END;
 $$ LANGUAGE PLPGSQL;
 select func();
 and the server get crashed.


 Any ideas?



what the heck did you expect with an infinitely recursive function?

cheers

andrew
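
Andrew's point holds in any language: func() has no base case, so no finite stack setting can absorb it. A minimal Python analogy (not PostgreSQL code -- Python's interpreter raises a catchable RecursionError where the C-level stack overflow here killed the backend):

```python
def func():
    # no terminating condition, just like the PL/pgSQL example above
    return func()

try:
    func()
    result = "returned normally"
except RecursionError:
    # Python's depth check fires before the real C stack is exhausted.
    # PostgreSQL's max_stack_depth check plays the same role, which is
    # why it has to stay safely below the actual OS stack limit.
    result = "recursion depth exceeded"
```

Raising the depth limit, as the original poster did with max_stack_depth, only postpones the failure; past the real stack, it turns a clean error into a crash.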



---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [HACKERS] server crash on recursive function invocation

2006-05-31 Thread Tom Lane
Ali Baba [EMAIL PROTECTED] writes:
 i was just trying recursive function invocation and got a server crash when i 
 changed a GUC variable max_stack_depth,  to a high number. 

There's a reason why that variable is a superuser-only setting: you're
supposed to have some clue what you're doing when you change it ;-)

If you need more stack space, what you generally have to do is adjust
the ulimit setting that the postmaster is started under.  You can set
max_stack_depth up to a few hundred K less than the postmaster's
ulimit -s setting, but not more.

(We wouldn't even have the variable if there were a sufficiently
portable way to find out the ulimit stack depth automatically, but
there's not one that I know of.)
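
On POSIX systems (no fully portable way, as Tom says) the limit is at least readable via getrlimit(). A sketch of deriving a safe setting under Tom's rule of thumb -- the 512 kB margin is an assumed example, not a PostgreSQL-mandated value:

```python
import resource

# Read the process stack limit -- what `ulimit -s` reports -- in bytes.
soft, _hard = resource.getrlimit(resource.RLIMIT_STACK)

if soft == resource.RLIM_INFINITY:
    suggestion = "stack ulimit is unlimited; choose max_stack_depth manually"
else:
    # stay a few hundred kB below the ulimit, per Tom's advice
    safe_kb = soft // 1024 - 512
    suggestion = f"SET max_stack_depth = '{safe_kb}kB';"
print(suggestion)
```

This only works where getrlimit() exists and reports a meaningful stack limit, which is exactly the portability gap the parenthetical above describes.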

regards, tom lane

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [HACKERS] Possible TODO item: copy to/from pipe

2006-05-31 Thread Martijn van Oosterhout
On Wed, May 31, 2006 at 11:20:21PM +0200, Magnus Hagander wrote:
 Uh. The installer does *not* include readline.

Terribly sorry, I misinterpreted the thread about it at the beginning
of the year.

http://archives.postgresql.org/pgsql-hackers/2006-02/msg00539.php

Have a nice day,
-- 
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
 From each according to his ability. To each according to his ability to 
 litigate.

