[HACKERS]

2006-05-25 Thread ipig



Hi,
 
 I am now studying PostgreSQL.

In your web site:

Developer's Frequently Asked Questions (FAQ) for PostgreSQL
Technical Questions
2.1) How do I efficiently access information in tables from the backend code?

Since the heap_open function needs the OID of the relation, how can I get
that parameter for heap_open?

Suppose that at the beginning I only know the name of the relation;
which functions do I need to call?


Best regards.


Re: [HACKERS] compiling source code!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

2006-05-25 Thread Andrew Dunstan
Gurjeet Singh said:
Also, I would recommend uninstalling cygwin before you install
 mingw, because if the mingw doesn't behave properly (gcc won't compile
 files etc...), you could be sure that there definitely is a conflict
 between the cygwin and mingw.

If you don't wish to uninstall cygwin, at least rename the folder
 to something else (cygwin.before.mingw) so that you don't accidentally
 use cygwin's binaries.


This is complete nonsense. I have Cygwin and Mingw/Msys living quite happily
side by side on the same box. Each is largely ignorant of the other.

Of course, if you add Cygwin to the system path you can cause problems, so
don't do that. But there is no need at all that I know of to uninstall or
rename anything.

If you have problems with your setup please ask for help, instead of handing
out wrong advice.

cheers

andrew



---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [HACKERS] compiling source code!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

2006-05-25 Thread Gurjeet Singh

   I was just telling one of the ways to avoid the conflicts. It
generally happens that a newbie adds the Cygwin paths to the environment
variables, and if he forgets to remove those then, you know better,
which tools he'd be using!!! In such cases, a less experienced person
would just give up.

   I remember that if I hadn't had that small piece of advice from
AumAum, I would have given up on trying the MinGW installation after
failing 3 times!!!

<quote>
It's crucial to crack the tarballs into /mingw (C:\msys\n.n\mingw),
because if you crack any of them into / (C:\msys\n.n), you'll cause
some subtle conflicts between MSYS and the MinGW tools which will
cause your compilations to fail.
</quote>
from http://www.mingw.org/MinGWiki/index.php/RealDumbQuickstart

So my suggestion was just to at least rename the folder
temporarily till Sibel gets everything started.

   Moreover, I don't think one would need Cygwin after having MinGW. I
might be wrong!

   Yes, I agree your approach is less intrusive.

   Sibel, just add 'remove Cygwin's paths from the environment
variables (if they exist)' to the top of the prerequisites list
before you try installing MinGW.

   Sorry for unintentionally misleading someone, if I did!

Regards,
Gurjeet

On 5/25/06, Andrew Dunstan [EMAIL PROTECTED] wrote:

Gurjeet Singh said:
Also, I would recommend uninstalling cygwin before you install
 mingw,



If you don't wish to uninstall cygwin, at least rename the folder
 to something else (cygwin.before.mingw) so that you don't accidentally
 use cygwin's binaries.


This is complete nonsense. I have Cygwin and Mingw/Msys living quite happily
side by side on the same box. Each is largely ignorant of the other.

Of course, if you add Cygwin to the system path you can cause problems, so
don't do that. But there is no need at all that I know of to uninstall or
rename anything.

If you have problems with your setup please ask for help, instead of handing
out wrong advice.

cheers

andrew


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


[HACKERS] XLogArchivingActive

2006-05-25 Thread Andreas Pflug
Currently, WAL files will be archived as soon as archive_command is set.
IMHO, this is not desirable if no permanent backup is wanted but only a
scheduled online backup, because it will flood the wal_archive
destination with files that will never be used.


I propose to introduce a GUC, permanent_archiving or so, to select
whether WAL archiving happens permanently or only when a backup is in
progress (i.e. between pg_start_backup and pg_stop_backup).



Regards,
Andreas

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


Re: [HACKERS] problem with PQsendQuery/PQgetResult and COPY FROM statement

2006-05-25 Thread Massimiliano Poletto

If you call it exactly once, it'll say that exactly once.  If you keep
calling it millions of times, it'll keep saying that.


OK.  I wonder, though, why at some point it does in fact return 0.


Indeed.  You forgot to complete the COPY sequence before returning
to the PQgetResult loop.  As long as the thing is in COPY mode,
PQgetResult will return a result saying PGRES_COPY_IN.  The point


Got it.  Thanks.

max

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [HACKERS]

2006-05-25 Thread Alvaro Herrera
ipig wrote:

 2.1) How do I efficiently access information in tables from the backend code?
 Since heap_open function needs the oid of the relation, how can I get
 those parameters for heap_open?
 
 Suppose that at the beginning, I only know the name of the relation,
 which functions I need to call?

You can create a RangeVar using makeRangeVar and call heap_openrv().
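
For illustration, a minimal backend-style sketch of that approach (untested;
the two-argument makeRangeVar() shown is the 8.x signature, and AccessShareLock
is just an example lock level):

#include "postgres.h"
#include "access/heapam.h"
#include "nodes/makefuncs.h"
#include "storage/lock.h"
#include "utils/rel.h"

/* Look the relation up by name; a NULL schema means the search path is used. */
static void
scan_rel_by_name(const char *relname)
{
    RangeVar   *rv = makeRangeVar(NULL, (char *) relname);
    Relation    rel = heap_openrv(rv, AccessShareLock); /* errors if not found */

    /* ... use heap_beginscan()/heap_getnext() on rel here ... */

    heap_close(rel, AccessShareLock);
}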

-- 
Alvaro Herrera                                http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [HACKERS] file-locking and postmaster.pid

2006-05-25 Thread korry







That's not workable, unless you want to assume that nothing on the
system except Postgres uses SysV semaphores.  Otherwise something else
could randomly gobble up the semid you want to use.  I don't care very
much for requiring a distinct semid to be hand-specified for each
postmaster on a machine, either. 



Yeah, that does suck. Ok, naming problems seem to make semaphores useless.

I'm back to byte-range locking, but if NFS is important and is truly unreliable, then that's out too.

I've never had locking problems on NFS (probably because we tell our users not to use NFS), but now that I think about it, SMB locking is very unreliable so Win32 would be an issue too.
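
For readers following along, a rough sketch of the byte-range-locking idea under
discussion (not actual PostgreSQL code): take an exclusive fcntl() lock on the
first byte of postmaster.pid and hold it for the life of the process.

#include <fcntl.h>
#include <string.h>
#include <unistd.h>

/* Returns the open fd on success, or -1 if the file cannot be opened or is
 * already locked, i.e. another live process is holding it. */
static int
lock_pidfile(const char *path)
{
    struct flock lk;
    int         fd = open(path, O_RDWR | O_CREAT, 0600);

    if (fd < 0)
        return -1;

    memset(&lk, 0, sizeof(lk));
    lk.l_type = F_WRLCK;        /* exclusive write lock */
    lk.l_whence = SEEK_SET;
    lk.l_start = 0;             /* byte 0 ... */
    lk.l_len = 1;               /* ... length 1 */

    if (fcntl(fd, F_SETLK, &lk) < 0)
    {
        close(fd);
        return -1;
    }
    return fd;                  /* keep it open; the lock vanishes with the process */
}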

 -- Korry





Re: [HACKERS] file-locking and postmaster.pid

2006-05-25 Thread Andreas Joseph Krogh
On Thursday 25 May 2006 14:35, korry wrote:
  That's not workable, unless you want to assume that nothing on the
  system except Postgres uses SysV semaphores.  Otherwise something else
  could randomly gobble up the semid you want to use.  I don't care very
  much for requiring a distinct semid to be hand-specified for each
  postmaster on a machine, either.

 Yeah, that does suck.  Ok, naming problems seem to make semaphores
 useless.

 I'm back to byte-range locking, but if NFS is important and is truly
 unreliable, then that's out too.

 I've never had locking problems on NFS (probably because we tell our
 users not to use NFS), but now that I think about it, SMB locking is
 very unreliable so Win32 would be an issue too.

What I don't get is why everybody thinks that because one solution doesn't fit
all needs on all platforms (or NFS), it shouldn't be implemented on those
platforms it *does* work on. Why can't those platforms (like Linux) benefit
from a better solution, if one exists? There are plenty of examples of
software providing better solutions on platforms that support more features.

-- 
Andreas Joseph Krogh [EMAIL PROTECTED]
Senior Software Developer / Manager
gpg public_key: http://dev.officenet.no/~andreak/public_key.asc
+------------------------+---------------------------------------------+
OfficeNet AS            | The most difficult thing in the world is to
Hoffsveien 17           | know how to do a thing and to watch
PO. Box 425 Skøyen      | somebody else doing it wrong, without
0213 Oslo               | comment.
NORWAY                  |
Phone : +47 22 13 01 00 |
Direct: +47 22 13 10 03 |
Mobile: +47 909 56 963  |
+------------------------+---------------------------------------------+

---(end of broadcast)---
TIP 6: explain analyze is your friend


[HACKERS] Rename standard_conforming_strings to standard_strings

2006-05-25 Thread Bruce Momjian
I am having trouble remembering the name of the new GUC parameter,
standard_conforming_strings.  I am thinking standard_strings is
clearer, and shorter.  I would like to rename this.  The parameter will
be new in 8.2.

-- 
  Bruce Momjian   http://candle.pha.pa.us
  EnterpriseDB                   http://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [HACKERS]

2006-05-25 Thread Bruce Momjian
Alvaro Herrera wrote:
 ipig wrote:
 
  2.1) How do I efficiently access information in tables from the backend 
  code?
  Since heap_open function needs the oid of the relation, how can I get
  those parameters for heap_open?
  
  Suppose that at the beginning, I only know the name of the relation,
  which functions I need to call?
 
 You can create a RangeVar using makeRangeVar and call heap_openrv().

Or just look in the code to see how other code is calling heap_open().

-- 
  Bruce Momjian   http://candle.pha.pa.us
  EnterpriseDB                   http://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [HACKERS] XLogArchivingActive

2006-05-25 Thread Tom Lane
Andreas Pflug [EMAIL PROTECTED] writes:
 I propose to introduce a GUC permanent_archiving or so, to select 
 whether wal archiving happens permanently or only when a backup is in 
 progress (i.e. between pg_start_backup and pg_stop_backup).

This is silly.  Why not just turn archiving on and off?

regards, tom lane

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [HACKERS] problem with PQsendQuery/PQgetResult and COPY FROM statement

2006-05-25 Thread Tom Lane
Massimiliano Poletto [EMAIL PROTECTED] writes:
 If you call it exactly once, it'll say that exactly once.  If you keep
 calling it millions of times, it'll keep saying that.

 OK.  I wonder, though, why at some point it does in fact return 0.

Probably because you ran out of memory to create new PGresult structs.
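
A hedged libpq sketch of the sequence being described (not the original poster's
code, and the table name is invented): once PQgetResult() reports PGRES_COPY_IN,
push the data, finish with PQputCopyEnd(), and PQclear() every result so they
don't accumulate.

#include <stdio.h>
#include <string.h>
#include <libpq-fe.h>

/* "line" is assumed to be a complete, newline-terminated COPY text row. */
static int
copy_one_line(PGconn *conn, const char *line)
{
    PGresult   *res;

    if (!PQsendQuery(conn, "COPY mytab FROM STDIN"))
        return -1;

    while ((res = PQgetResult(conn)) != NULL)
    {
        if (PQresultStatus(res) == PGRES_COPY_IN)
        {
            PQputCopyData(conn, line, (int) strlen(line));
            PQputCopyEnd(conn, NULL);   /* leave COPY mode; NULL = no error */
        }
        else if (PQresultStatus(res) != PGRES_COMMAND_OK)
            fprintf(stderr, "COPY failed: %s", PQerrorMessage(conn));
        PQclear(res);                   /* otherwise results pile up in memory */
    }
    return 0;
}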

regards, tom lane

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [HACKERS] file-locking and postmaster.pid

2006-05-25 Thread Tom Lane
Andreas Joseph Krogh [EMAIL PROTECTED] writes:
 What I don't get is why everybody think that because one solution doesn't fit 
 all needs on all platforms(or NFS), it shouldn't be implemented on those 
 platforms it *does* work on.

(1) Because we're not really interested in supporting multiple fundamentally
different approaches to postmaster interlocking.  The system is
complicated enough already.

(2) Because according to discussion so far, we can't rely on this solution
anywhere.  Postgres can't easily tell whether its data directory is
mounted over NFS, for example.

regards, tom lane

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [HACKERS] Rename standard_conforming_strings to standard_strings

2006-05-25 Thread Tom Lane
Bruce Momjian pgman@candle.pha.pa.us writes:
 I am having trouble remembering the name of the new GUC parameter,
 standard_conforming_strings.  I am thinking standard_strings is
 clearer, and shorter.  I would like to rename this.  The parameter will
 be new in 8.2.

The name is already out in places we can't easily change, eg the CVE
descriptions for the recent security issues.  Live with it.

regards, tom lane

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [HACKERS] Gborg and pgfoundry

2006-05-25 Thread David Fetter
On Wed, May 24, 2006 at 10:33:53PM -0400, Bruce Momjian wrote:
 I am asking again:  what is the timetable for merging gborg and
 pgfoundry, and if not, can we set a date to shut down to force the
 move.

Bruce,

You know that there is a tall stack of finicky, time-consuming work in
order to make this possible.  Which parts of it are you volunteering
to do?

Cheers,
D
-- 
David Fetter [EMAIL PROTECTED] http://fetter.org/
phone: +1 415 235 3778        AIM: dfetter666
  Skype: davidfetter

Remember to vote!

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


[HACKERS] Postgres Crashes on Win2K

2006-05-25 Thread chelsea boot
Hi

Can anyone offer advice on this please: Intermittently a postgres process appears to crash and the postmaster service needs restarting. The following is entered in the log:

2006-03-15 09:50:03 LOG:  server process (PID 348) was terminated by signal 125
2006-03-15 09:50:03 LOG:  terminating any other active server processes
2006-03-15 09:50:06 FATAL:  the database system is in recovery mode
2006-03-15 09:50:06 LOG:  all server processes terminated; reinitializing
2006-03-15 09:50:06 FATAL:  the database system is starting up
2006-03-15 09:50:06 LOG:  database system was interrupted at 2006-03-15 00:16:39 GMT Standard Time
2006-03-15 09:50:06 LOG:  checkpoint record is at 0/C1F3E48
2006-03-15 09:50:06 LOG:  redo record is at 0/C1F3E48; undo record is at 0/0; shutdown FALSE
2006-03-15 09:50:06 LOG:  next transaction ID: 651022; next OID: 113846
2006-03-15 09:50:06 LOG:  database system was not properly shut down; automatic recovery in progress
2006-03-15 09:50:06 FATAL:  the database system is starting up
2006-03-15 09:50:06 LOG:  record with zero length at 0/C1F3E88
2006-03-15 09:50:06 LOG:  redo is not required
2006-03-15 09:50:06 FATAL:  the database system is starting up
2006-03-15 09:50:06 LOG:  database system is ready

This is accompanied by 2 postgres errors in the Application Log of the Win2K server:
(1) "the application failed to initialize properly (0xc142). Click on OK to terminate the application"
(2) "the exception unknown software exception (0xc0fd) occurred in the application at 0x7c59bd01. Click on OK to terminate the application".

Alternatively I get the same messages as above but with an additional WARNING:

WARNING: terminating connection because of a crash of another server process
DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, possibly because another server process exited abnormally and possibly corrupted shared memory.

What is signal 125, and how can I troubleshoot what is causing the process to crash? The error occurs at random times, e.g. 8pm when no one is using the network and no utility such as AV or backup is running, or during the day.

I've posted to novice and ports but have been unable to find a solution.

Many Thanks
Ch.

Re: [HACKERS] XLogArchivingActive

2006-05-25 Thread Andreas Pflug

Tom Lane wrote:

Andreas Pflug [EMAIL PROTECTED] writes:

I propose to introduce a GUC permanent_archiving or so, to select 
whether wal archiving happens permanently or only when a backup is in 
progress (i.e. between pg_start_backup and pg_stop_backup).



This is silly.  Why not just turn archiving on and off?


Not quite. I want online backup, but no archiving. Currently, I have to
edit postgresql.conf and SIGHUP to turn on archiving and configure a
(hopefully) writable directory, do the backup, then edit postgresql.conf
and SIGHUP again. Not too convenient...


Regards,
Andreas

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [HACKERS] XLogArchivingActive

2006-05-25 Thread Tom Lane
Andreas Pflug [EMAIL PROTECTED] writes:
 Tom Lane wrote:
 This is silly.  Why not just turn archiving on and off?

 Not quite. I want online backup, but no archiving. Currently, I have to 
 edit postgresql.conf and SIGHUP to turn on archiving configuring a 
 (hopefully) writable directory, do the backup, edit postgresql.conf and 
 SIGHUP again. Not too convenient...

You don't get to count the edit/SIGHUP steps, because those would be the
same for any other GUC.

AFAICS you could get the effect by setting up an archive_command script

    sleep 100
    exit 1

so that the archiver will do nothing.

BTW, I don't actually understand why you want this at all.  If you're
not going to keep a continuing series of WAL files, you don't have any
PITR capability.  What you're proposing seems like a bulky, unportable,
hard-to-use equivalent of pg_dump.  Why not use pg_dump?

regards, tom lane

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [HACKERS] XLogArchivingActive

2006-05-25 Thread Andreas Pflug

Tom Lane wrote:

Andreas Pflug [EMAIL PROTECTED] writes:


Tom Lane wrote:


This is silly.  Why not just turn archiving on and off?



Not quite. I want online backup, but no archiving. Currently, I have to 
edit postgresql.conf and SIGHUP to turn on archiving configuring a 
(hopefully) writable directory, do the backup, edit postgresql.conf and 
SIGHUP again. Not too convenient...



You don't get to count the edit/SIGHUP steps, because those would be the
same for any other GUC.


That's right, but my proposal would implicitly switch on archiving
while a backup is in progress, so explicitly enabling/disabling
archiving wouldn't be necessary.


AFAICS you could get the effect by setting up an archive_command script

    sleep 100
    exit 1

so that the archiver will do nothing.


Doesn't WAL expect the WAL files already archived to be recyclable, so 
they could get overwritten in the pg_xlog dir while backup is running? 
Additionally, the doc recommends omitting pg_xlog from the file level 
backup, so a restart would need the archived wal files, no?





BTW, I don't actually understand why you want this at all.  If you're
not going to keep a continuing series of WAL files, you don't have any
PITR capability.  What you're proposing seems like a bulky, unportable,
hard-to-use equivalent of pg_dump.  Why not use pg_dump?


Because pg_dump will take too long and create bloated dump files. All I 
need is a physical backup for disaster recovery purposes without 
bringing down the server.


In my case, I'd expect a DB that uses 114GB on disk to consume 1.4TB 
when pg_dumped, too much for the available backup capacity (esp. 
compared to net content, about 290GB). See the other post, "inefficient
bytea escaping", for details.


Regards,
Andreas

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


Re: [HACKERS] Gborg and pgfoundry

2006-05-25 Thread Bruce Momjian
David Fetter wrote:
 On Wed, May 24, 2006 at 10:33:53PM -0400, Bruce Momjian wrote:
  I am asking again:  what is the timetable for merging gborg and
  pgfoundry, and if not, can we set a date to shut down to force the
  move.
 
 Bruce,
 
 You know that there is a tall stack of finicky, time-consuming work in
 order to make this possible.  Which parts of it are you volunteering
 to do?

I will move my projects manually, and until we set a date, there is no
motivation for anyone to move their own projects.  Waiting for someone to
do this migration has not worked (for years), so let's just do it dirty.

-- 
  Bruce Momjian   http://candle.pha.pa.us
  EnterpriseDB                   http://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [HACKERS] Postgres Crashes on Win2K

2006-05-25 Thread Robert Treat
On Thursday 25 May 2006 10:47, chelsea boot wrote:
 Hi

   Can anyone offer advice on this please: Intermittently a postgres process
 appears to crash and the postmaster service needs restarting.  The
 following is entered in the log:

   2006-03-15 09:50:03 LOG:  server process (PID 348) was terminated by
 signal 125 2006-03-15 09:50:03 LOG:  terminating any other active server
 processes 2006-03-15 09:50:06 FATAL:  the database system is in recovery
 mode 2006-03-15 09:50:06 LOG:  all server processes terminated;
 reinitializing 2006-03-15 09:50:06 FATAL:  the database system is starting
 up
   2006-03-15 09:50:06 LOG:  database system was interrupted at 2006-03-15
 00:16:39 GMT Standard Time 2006-03-15 09:50:06 LOG:  checkpoint record is
 at 0/C1F3E48
   2006-03-15 09:50:06 LOG:  redo record is at 0/C1F3E48; undo record is at
 0/0; shutdown FALSE 2006-03-15 09:50:06 LOG:  next transaction ID: 651022;
 next OID: 113846 2006-03-15 09:50:06 LOG:  database system was not properly
 shut down; automatic recovery in progress 2006-03-15 09:50:06 FATAL:  the
 database system is starting up
   2006-03-15 09:50:06 LOG:  record with zero length at 0/C1F3E88
   2006-03-15 09:50:06 LOG:  redo is not required
   2006-03-15 09:50:06 FATAL:  the database system is starting up
   2006-03-15 09:50:06 LOG:  database system is ready

   This is accompanied by 2 postgres errors in the Application Log of the
 Win2K server: (1) "the application failed to initialize properly
 (0xc142). Click on OK to terminate the application" (2) "the exception
 unknown software exception (0xc0fd) occurred in the application at
 0x7c59bd01. Click on OK to terminate the application".

   Alternatively I get the same messages as above but with an additional
 WARNING:

   WARNING: terminating connection because of a crash of another server
 process. DETAIL: The postmaster has commanded this server process to roll
 back the current transaction and exit, possibly because another server
 process exited abnormally and possibly corrupted shared memory.

   What is the signal 125 and how can I troubleshoot what is causing the
 process to crash as the error occurs at random times eg. 8pm when no-one is
 using the network and no utility is running such as AV or backup or during
 the day?

   I've posted to novice and ports but have been unable to find a solution.


Try setting log_error_verbosity to verbose and sending that info in along with 
OS and PG Versions.

-- 
Robert Treat
Build A Brighter Lamp :: Linux Apache {middleware} PostgreSQL

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [HACKERS] Gborg and pgfoundry

2006-05-25 Thread Joshua D. Drake

Bruce Momjian wrote:

David Fetter wrote:

On Wed, May 24, 2006 at 10:33:53PM -0400, Bruce Momjian wrote:

I am asking again:  what is the timetable for merging gborg and
pgfoundry, and if not, can we set a date to shut down to force the
move.

Bruce,

You know that there is a tall stack of finicky, time-consuming work in
order to make this possible.  Which parts of it are you volunteering
to do?


I will move my projects manually, and until we set a date, there is no
motivation for anyone to move their own projects.  Someone doing this
migration is not working (for years), so let's just do it dirty.



I have to agree with Bruce here. Sometimes you just have to cut off a limb.

Joshua D. Drake

--

   === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
   Providing the most comprehensive  PostgreSQL solutions since 1997
 http://www.commandprompt.com/



---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [HACKERS] Gborg and pgfoundry

2006-05-25 Thread David Fetter
On Thu, May 25, 2006 at 10:11:13AM -0700, Joshua D. Drake wrote:
 Bruce Momjian wrote:
 David Fetter wrote:
 On Wed, May 24, 2006 at 10:33:53PM -0400, Bruce Momjian wrote:
 I am asking again:  what is the timetable for merging gborg and
 pgfoundry, and if not, can we set a date to shut down to force
 the move.

 Bruce,
 
 You know that there is a tall stack of finicky, time-consuming
 work in order to make this possible.  Which parts of it are you
 volunteering to do?
 
 I will move my projects manually, and until we set a date, there is
 no motivation for anyone to move their own projects.  Someone doing
 this migration is not working (for years), so let's just do it
 dirty.
 
 I have to agree with Bruce here. Sometimes you just have to cut off
 a limb.

It would be a great service to the community if people could
document--maybe in some kind of wiki or something--just how they did
each piece.  We could also put stuff on the front of gborg, mail every
project lead, etc., etc.  In other words, a concerted effort here
would have non-trivial overhead, but also non-trivial pay-off :)

Let's start with the first thing:  Which features on gborg map to
which features on pgfoundry?

Cheers,
D
-- 
David Fetter [EMAIL PROTECTED] http://fetter.org/
phone: +1 415 235 3778        AIM: dfetter666
  Skype: davidfetter

Remember to vote!

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [HACKERS] LIKE, leading percent, bind parameters and indexes

2006-05-25 Thread Rodrigo Hjort
I'm not used to the PG internals, but let me see if I understood that.
The LIKE operator, when applied to a static string not preceded by '%',
causes the planner to look for indexes on the table in order to do an
index scan. Otherwise, i.e. with a leading '%' on static text or with a
bound parameter, the planner always does a sequential scan. Is that the
scenario?

-- 
Rodrigo Hjort
http://icewall.org/~hjort

2006/5/23, Tom Lane [EMAIL PROTECTED]:
Rodrigo Hjort [EMAIL PROTECTED] writes:
 What happens is that only the 004 block uses the index! The 002 code,
 which also has no leading percent, does a sequential scan. The difference
 between them is that 002 uses bind parameters.

Yeah.  The LIKE index optimization depends on seeing a constant LIKE
pattern at plan time --- otherwise the planner doesn't know what
indexscan parameters to generate.  So a bound-parameter query loses.

Ideas for improving this situation are welcome ... it's not an easy
problem ...

regards, tom lane


Re: [HACKERS] Gborg and pgfoundry

2006-05-25 Thread Marc G. Fournier

On Thu, 25 May 2006, Bruce Momjian wrote:


David Fetter wrote:

On Wed, May 24, 2006 at 10:33:53PM -0400, Bruce Momjian wrote:

I am asking again:  what is the timetable for merging gborg and
pgfoundry, and if not, can we set a date to shut down to force the
move.


Bruce,

You know that there is a tall stack of finicky, time-consuming work in
order to make this possible.  Which parts of it are you volunteering
to do?


I will move my projects manually, and until we set a date, there is no
motivation for anyone to move their own projects.  Someone doing this
migration is not working (for years), so let's just do it dirty.


gborg won't be shut down until Josh et al are ready with pgfoundry ... 
and, right now, the 'et al' is fairly busy with stuff like OSDN and the 
Anniversary which are quickly approaching ...



Marc G. Fournier   Hub.Org Networking Services (http://www.hub.org)
Email . [EMAIL PROTECTED]  MSN . [EMAIL PROTECTED]
Yahoo . yscrappy   Skype: hub.org   ICQ . 7615664

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [HACKERS] Gborg and pgfoundry

2006-05-25 Thread Larry Rosenman
David Fetter wrote:
 On Thu, May 25, 2006 at 10:11:13AM -0700, Joshua D. Drake wrote:
 Bruce Momjian wrote:
 David Fetter wrote:
 On Wed, May 24, 2006 at 10:33:53PM -0400, Bruce Momjian wrote:
 I am asking again:  what is the timetable for merging gborg and
 pgfoundry, and if not, can we set a date to shut down to force
 the move.
 
 Bruce,
 
 You know that there is a tall stack of finicky, time-consuming
 work in order to make this possible.  Which parts of it are you
 volunteering to do?
 
 I will move my projects manually, and until we set a date, there is
 no motivation for anyone to move their own projects.  Someone doing
 this migration is not working (for years), so let's just do it
 dirty.
 
 I have to agree with Bruce here. Sometimes you just have to cut off
 a limb.
 
 It would be a great service to the community if people could
 document--maybe in some kind of wiki or something--just how they did
 each piece.  We could also put stuff on the front of gborg, mail every
 project lead, etc., etc.  In other words, a concerted effort here
 would have non-trivial overhead, but also non-trivial pay-off :)
 
 Let's start with the first thing:  Which features on gborg map to
 which features on pgfoundry?
 
I've offered to help with this, but things are so poorly documented that
I have to guess where things are, and I'm not one who has used either
site.

I do have a lot of sysadmin knowledge and script-writing ability, but
knowledge of how each site is set up is resident in a very small
minority of the folks that help run things.

I have Pervasive's blessing to spend paid time working on community
projects, and this could be one of them, but every time I try to get
more information about how things on BOTH gborg and pgfoundry are set
up, it takes days, and lots of the people have no idea how it is set up.

I truly *WANT* to help here, but getting the information is tough.

Larry Rosenman


-- 
Larry Rosenman  
Database Support Engineer

PERVASIVE SOFTWARE. INC.
12365B RIATA TRACE PKWY
3015
AUSTIN TX  78727-6531 

Tel: 512.231.6173
Fax: 512.231.6597
Email: [EMAIL PROTECTED]
Web: www.pervasive.com 

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [HACKERS] Gborg and pgfoundry

2006-05-25 Thread Andrew Dunstan

Larry Rosenman wrote:


I have Pervasive's blessing to spend paid time working on community
projects, 
and this could be one of them, but every time I try to get more

information about
how things on BOTH gborg and pgfoundry are set up, it takes days, and
lots of the 
people have no idea how it is set up. 


I truly *WANT* to help here, but getting the information is tough.

  


Larry,

I don't have any knowledge of gborg, but I do have pretty good knowledge 
of how everything works on pgfoundry. Feel free to ping me on IRC or via 
email if you have questions.


I am as impatient as everyone to get this done.

cheers

andrew


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [HACKERS] Inefficient bytea escaping?

2006-05-25 Thread Tom Lane
Andreas Pflug [EMAIL PROTECTED] writes:
 When dumping the table with psql \copy (non-binary), the resulting file 
 would be 6.6GB of size, taking about 5.5 minutes. Using psql \copy WITH 
 BINARY (modified psql as posted to -patches), the time was cut down to 
 21-22 seconds (filesize 1.4GB as expected), which is near the physical 
 throughput of the target disk. If server based COPY to file is used, The 
 same factor 12 can be observed, CPU is up to 100 % (single P4 3GHz 2MB 
 Cache HT disabled, 1GB main mem).

This is with an 8.0.x server, right?

Testing a similar case with CVS HEAD, I see about a 5x speed difference,
which is right in line with the difference in the physical amount of
data written.  (I was testing a case where all the bytes were emitted as
'\nnn', so it's the worst case.)  oprofile says the time is being spent
in CopyAttributeOutText() and fwrite().  So I don't think there's
anything to be optimized here, as far as bytea goes: its binary
representation is just inherently a lot smaller.

Looking at CopySendData, I wonder whether any traction could be gained
by trying not to call fwrite() once per character.  I'm not sure how
much per-call overhead there is in that function.  We've done a lot of
work trying to optimize the COPY IN path since 8.0, but nothing much
on COPY OUT ...
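
Not the actual copy.c code, just a sketch of the buffering idea being floated:
accumulate escaped output in a local buffer and hand it to fwrite() in chunks
rather than one call per character.

#include <stdio.h>

#define OUTBUF_SIZE 8192

static char   outbuf[OUTBUF_SIZE];
static size_t outlen = 0;

/* Queue one output character; flush when the buffer fills. */
static void
send_char_buffered(FILE *fp, char c)
{
    outbuf[outlen++] = c;
    if (outlen == OUTBUF_SIZE)
    {
        fwrite(outbuf, 1, outlen, fp);  /* one fwrite per 8kB instead of per byte */
        outlen = 0;
    }
}

/* Flush whatever is left at end of row / end of copy. */
static void
flush_outbuf(FILE *fp)
{
    if (outlen > 0)
    {
        fwrite(outbuf, 1, outlen, fp);
        outlen = 0;
    }
}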

regards, tom lane

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [HACKERS] Gborg and pgfoundry

2006-05-25 Thread Marc G. Fournier

On Thu, 25 May 2006, Andrew Dunstan wrote:


Larry Rosenman wrote:


I have Pervasive's blessing to spend paid time working on community
projects, and this could be one of them, but every time I try to get more
information about
how things on BOTH gborg and pgfoundry are set up, it takes days, and
lots of the people have no idea how it is set up. 
I truly *WANT* to help here, but getting the information is tough.





Larry,

I don't have any knowledge of gborg, but I do have pretty good knowledge of 
how everything works on pgfoundry. Feel free to ping me on IRC or via email 
if you have questions.


The thing is, I thought that the scripting work for this was already done?
Since we're doing CVS->CVS and mailman->mailman, the only major scripting
effort was required on the gborg-sql -> pgfoundry-sql side of things,
which I *thought* was already written and just needed a lot of testing?



Marc G. Fournier   Hub.Org Networking Services (http://www.hub.org)
Email . [EMAIL PROTECTED]  MSN . [EMAIL PROTECTED]
Yahoo . yscrappy   Skype: hub.org   ICQ . 7615664

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [HACKERS] LIKE, leading percent, bind parameters and indexes

2006-05-25 Thread Andrew Sullivan
On Thu, May 25, 2006 at 02:18:10PM -0300, Rodrigo Hjort wrote:
 make an index scan. Otherwise, i.e. using a leading '%' on static text or a bound
 parameter, makes the planner always do a sequential scan. Is that the
 scenario?

I think more exactly, the planner can't possibly know how to plan an
indexscan with a leading '%', because it has nowhere to start.

Think of it this way: if you go to the public library, and say, "I
want a book.  I can't remember its name exactly, but it starts with
'daytime'", you can find it by going to the title index and browsing
for things that start that way.  If you go to the public library, and
say, "There's this book I want, but I can't remember the title.  It's
red", you're going to have a lot of books to look through.  Maybe all
of them.

If it were important enough -- say you left a $10,000 cheque inside
-- you might just start looking.  Maybe you'll get lucky, and hit it.  

A

-- 
Andrew Sullivan  | [EMAIL PROTECTED]
I remember when computers were frustrating because they *did* exactly what 
you told them to.  That actually seems sort of quaint now.
--J.D. Baldwin

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [HACKERS] Gborg and pgfoundry

2006-05-25 Thread Joshua D. Drake


The thing is, I thought that the scripting work for this was already
done? Since we're doing CVS->CVS and mailman->mailman, the only major
scripting effort was required on the gborg-sql -> pgfoundry-sql side of
things, which I *thought* was already written and just needed a lot of testing?


I remember reading something about this as well, but the big question is:

Where is the scripting?

Joshua D. Drake





Marc G. Fournier   Hub.Org Networking Services (http://www.hub.org)
Email . [EMAIL PROTECTED]  MSN . [EMAIL PROTECTED]
Yahoo . yscrappy   Skype: hub.org   ICQ . 7615664

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster




--

   === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
   Providing the most comprehensive  PostgreSQL solutions since 1997
 http://www.commandprompt.com/



---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


[HACKERS] 8.1.4 build failure on ICC 9.1

2006-05-25 Thread Joshua D. Drake

Hello,

I encountered this the other day and set up a build farm client for it.

http://www.pgbuildfarm.org/cgi-bin/show_log.pl?nm=meerkat&dt=2006-05-25%2018:16:36

--

   === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
   Providing the most comprehensive  PostgreSQL solutions since 1997
 http://www.commandprompt.com/



---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [HACKERS] Inefficient bytea escaping?

2006-05-25 Thread Andreas Pflug

Tom Lane wrote:

Andreas Pflug [EMAIL PROTECTED] writes:

When dumping the table with psql \copy (non-binary), the resulting file 
would be 6.6GB of size, taking about 5.5 minutes. Using psql \copy WITH 
BINARY (modified psql as posted to -patches), the time was cut down to 
21-22 seconds (filesize 1.4GB as expected), which is near the physical 
throughput of the target disk. If server based COPY to file is used, The 
same factor 12 can be observed, CPU is up to 100 % (single P4 3GHz 2MB 
Cache HT disabled, 1GB main mem).



This is with an 8.0.x server, right?


I've tested both 8.0.5 and 8.1.4, no difference observed.


Testing a similar case with CVS HEAD, I see about a 5x speed difference,
which is right in line with the difference in the physical amount of
data written.


That's what I would have expected; apparently the data is near the worst case.

  (I was testing a case where all the bytes were emitted as

'\nnn', so it's the worst case.)  oprofile says the time is being spent
in CopyAttributeOutText() and fwrite().  So I don't think there's
anything to be optimized here, as far as bytea goes: its binary
representation is just inherently a lot smaller.


Unfortunately, binary isn't a cure-all, since copying normal data
with the binary option might bloat it by a factor of two or so. I wish there
were a third option that's fine for both kinds of data. That's not only a
question of dump file sizes, but also of network throughput (online
compression in the line protocol would be desirable for this).




Looking at CopySendData, I wonder whether any traction could be gained
by trying not to call fwrite() once per character.  I'm not sure how
much per-call overhead there is in that function.  We've done a lot of
work trying to optimize the COPY IN path since 8.0, but nothing much
on COPY OUT ...


Hm, I'll see whether I can manage to check CVS HEAD too and see what's
happening; not a production alternative, though.


Regards,
Andreas

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [HACKERS] Inefficient bytea escaping?

2006-05-25 Thread Tom Lane
Andreas Pflug [EMAIL PROTECTED] writes:
 Tom Lane wrote:
 Looking at CopySendData, I wonder whether any traction could be gained
 by trying not to call fwrite() once per character.  I'm not sure how
 much per-call overhead there is in that function.  We've done a lot of
 work trying to optimize the COPY IN path since 8.0, but nothing much
 on COPY OUT ...

 Hm, I'll see whether I can manage to check CVS head too, and see what's 
 happening, not a production alternative though.

OK, make sure you get the copy.c version I just committed ...

regards, tom lane

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [HACKERS] 8.1.4 build failure on ICC 9.1

2006-05-25 Thread Tom Lane
Joshua D. Drake [EMAIL PROTECTED] writes:
 I encountered this the other day and set up a build farm client for it.

 http://www.pgbuildfarm.org/cgi-bin/show_log.pl?nm=meerkat&dt=2006-05-25%2018:16:36

That NaN problem has been discussed before, and I believe we concluded
it's a compiler bug.  See the archives for the switch to use to avoid
it.

regards, tom lane

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [HACKERS] 8.1.4 build failure on ICC 9.1

2006-05-25 Thread Jeremy Drake
On Thu, 25 May 2006, Tom Lane wrote:

 Joshua D. Drake [EMAIL PROTECTED] writes:
  I encountered this the other day and set up a build farm client for it.

  http://www.pgbuildfarm.org/cgi-bin/show_log.pl?nm=meerkat&dt=2006-05-25%2018:16:36

 That NaN problem has been discussed before, and I believe we concluded
 it's a compiler bug.  See the archives for the switch to use to avoid
 it.

I think it was meant as a feature by intel, but one man's feature is
another man's bug ;)

The flag to use is -mp1

Also, I see that you are getting all of the tons of output too.  Those
drove me nuts.  I sent in a patch for configure to take some of those
-W flags out, which is now applied to HEAD, as well as the addition of the
-mp1 flag for the ICC compiler.

I was more interested in the failures of the HEAD build on that box.  I
have had no problems with pl/(perl|python) on my box, though it is using
9.0 vs 9.1; I don't expect that they would have broken things THAT
badly...





---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


[HACKERS] GIN stuck in loop during PITR

2006-05-25 Thread Andreas Seltenreich
I'm just experimenting a bit with GIN, and it is occasionally getting
stuck looping in findParents() during WAL replay.

The attached patch seems to fix it. I also had to set ptr->off as
advertised in the comment above the function to avoid triggering
assertions.

GIN isn't fully transparent to me yet, so it is quite likely that I am
missing something...

regards,
andreas

Index: ginbtree.c
===================================================================
RCS file: /projects/cvsroot/pgsql/src/backend/access/gin/ginbtree.c,v
retrieving revision 1.1
diff -c -r1.1 ginbtree.c
*** ginbtree.c  2 May 2006 11:28:54 -0000   1.1
--- ginbtree.c  25 May 2006 18:12:13 -0000
***************
*** 202,208 ****
      for(;;) {
          buffer = ReadBuffer(btree->index, blkno);
          LockBuffer(buffer, GIN_EXCLUSIVE);
!         page = BufferGetPage(root->buffer);
          if ( GinPageIsLeaf(page) )
              elog(ERROR, "Lost path");
  
--- 202,208 ----
      for(;;) {
          buffer = ReadBuffer(btree->index, blkno);
          LockBuffer(buffer, GIN_EXCLUSIVE);
!         page = BufferGetPage(buffer);
          if ( GinPageIsLeaf(page) )
              elog(ERROR, "Lost path");
  
***************
*** 224,229 ****
--- 224,230 ----
              ptr->blkno = blkno;
              ptr->buffer = buffer;
              ptr->parent = root; /* it's may be wrong, but in next call we will correct */
+             ptr->off = offset;
              stack->parent = ptr;
              return;
          }

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [HACKERS] Gborg and pgfoundry

2006-05-25 Thread Larry Rosenman
Joshua D. Drake wrote:
 The thing is, I thought that the scripting work for this was already
 done? Since we're doing CVS->CVS and mailman->mailman, the only major
 scripting effort was required on the gborg-sql -> pgfoundry-sql side
 of things, which I *thought* was already written and just needed a lot
 of testing?
 
 I remember reading something abou this as well but the big question
 is: 
 
 Where is the scripting?
 
It's in an admin project on pgfoundry, and VERY lacking in details.

It assumes a LOT of knowledge that is not readily apparent.


-- 
Larry Rosenman  
Database Support Engineer

PERVASIVE SOFTWARE. INC.
12365B RIATA TRACE PKWY
3015
AUSTIN TX  78727-6531 

Tel: 512.231.6173
Fax: 512.231.6597
Email: [EMAIL PROTECTED]
Web: www.pervasive.com 

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [HACKERS] XLogArchivingActive

2006-05-25 Thread Jim Nasby

On May 25, 2006, at 11:24 AM, Andreas Pflug wrote:

BTW, I don't actually understand why you want this at all.  If you're
not going to keep a continuing series of WAL files, you don't have  
any
PITR capability.  What you're proposing seems like a bulky,  
unportable,

hard-to-use equivalent of pg_dump.  Why not use pg_dump?


Because pg_dump will take too long and create bloated dump files.  
All I need is a physical backup for disaster recovery purposes  
without bringing down the server.


In my case, I'd expect a DB that uses 114GB on disk to consume  
1.4TB when pg_dumped, too much for the available backup capacity  
(esp. compared to net content, about 290GB). See other post  
inefficient bytea escaping for details.


Another consideration is that you can use rsync to update a  
filesystem-level backup, but there's no pg_dump equivalent. On a  
large database that can make a sizable difference in the amount of  
time required for a backup.

--
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software      http://pervasive.com      work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461



---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [HACKERS] Gborg and pgfoundry

2006-05-25 Thread Josh Berkus
Marc,

  The thing is, I thought that the scripting work for this was already
  done? Since we're doing CVS->CVS and mailman->mailman, the only major
  scripting effort was required on the gborg-sql -> pgfoundry-sql side of
  things, which I *thought* was already written and just needed a lot of
  testing?

There is; let me send the link to Larry.

-- 
--Josh

Josh Berkus
PostgreSQL @ Sun
San Francisco

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [HACKERS] XLogArchivingActive

2006-05-25 Thread Andreas Pflug

Jim Nasby wrote:

On May 25, 2006, at 11:24 AM, Andreas Pflug wrote:

BTW, I don't actually understand why you want this at all.  If you're
not going to keep a continuing series of WAL files, you don't have any
PITR capability.  What you're proposing seems like a bulky, unportable,
hard-to-use equivalent of pg_dump.  Why not use pg_dump?


Because pg_dump will take too long and create bloated dump files. All 
I need is a physical backup for disaster recovery purposes without 
bringing down the server.


In my case, I'd expect a DB that uses 114GB on disk to consume 1.4TB 
when pg_dumped, too much for the available backup capacity (esp. 
compared to net content, about 290GB). See other post inefficient 
bytea escaping for details.


Another consideration is that you can use rsync to update a 
filesystem-level backup, but there's no pg_dump equivalent. On a large 
database that can make a sizable difference in the amount of time 
required for a backup.

That's fine for cutting the backup execution time, but to guarantee
consistency while the cluster is running, pg_start_backup/pg_stop_backup
and WAL archiving will still be necessary.


Regards,
Andreas


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [HACKERS] XLogArchivingActive

2006-05-25 Thread Tom Lane
Andreas Pflug [EMAIL PROTECTED] writes:
 That's right, but my proposal would implicitly switch on archiving
 while a backup is in progress, so explicitly enabling/disabling
 archiving wouldn't be necessary.

I'm not sure you can expect that to work.  The system is not built to
guarantee instantaneous response to mode changes like that.

 BTW, I don't actually understand why you want this at all.  If you're
 not going to keep a continuing series of WAL files, you don't have any
 PITR capability.  What you're proposing seems like a bulky, unportable,
 hard-to-use equivalent of pg_dump.  Why not use pg_dump?

 Because pg_dump will take too long and create bloated dump files. All I 
 need is a physical backup for disaster recovery purposes without 
 bringing down the server.

 In my case, I'd expect a DB that uses 114GB on disk to consume 1.4TB 
 when pg_dumped, too much for the available backup capacity (esp. 
 compared to net content, about 290GB). See other post inefficient bytea 
 escaping for details.

The conventional wisdom is that pg_dump files are substantially smaller
than the on-disk footprint ... and that's even without compressing them.
I think you are taking a corner case, ie bytea data, and presenting it
as something that ought to be the design center.

Something that might be worth considering is an option to allow pg_dump
to use binary COPY.  I don't think this'd work nicely for text dumps,
but seems like custom- or tar-format dumps could be made to use it.
This would probably be a win for many datatypes not only bytea, and it'd
still be far more portable than a filesystem dump.
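
As a rough illustration of what a binary-COPY-based dump path could look like
from the client side (assuming libpq's COPY OUT API; the table name and output
handling here are invented, not pg_dump's actual code):

#include <stdio.h>
#include <libpq-fe.h>

/* Stream "COPY ... TO STDOUT WITH BINARY" straight into a file. */
static int
dump_table_binary(PGconn *conn, FILE *out)
{
    PGresult   *res = PQexec(conn, "COPY mytab TO STDOUT WITH BINARY");
    char       *buf;
    int         len;

    if (PQresultStatus(res) != PGRES_COPY_OUT)
    {
        fprintf(stderr, "COPY failed: %s", PQerrorMessage(conn));
        PQclear(res);
        return -1;
    }
    PQclear(res);

    while ((len = PQgetCopyData(conn, &buf, 0)) > 0)    /* 0 = blocking mode */
    {
        fwrite(buf, 1, (size_t) len, out);
        PQfreemem(buf);
    }
    /* len == -1 means COPY is done; drain the final command result. */
    while ((res = PQgetResult(conn)) != NULL)
        PQclear(res);
    return 0;
}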

regards, tom lane

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [HACKERS] LIKE, leading percent, bind parameters and indexes

2006-05-25 Thread Rodrigo Hjort
 I think more exactly, the planner can't possibly know how to plan an
 indexscan with a leading '%', because it has nowhere to start.

The fact is that an index scan is performed on a LIKE expression on a
string not preceded by '%', except when a bound parameter is used.

select * from table where field like 'THE NAME%';   -- index scan
select * from table where field like '%THE NAME%';  -- seq scan
select * from table where field like :bind_param;   -- seq scan (always)

Regards,

Rodrigo Hjort
http://icewall.org/~hjort



Re: [HACKERS] Gborg and pgfoundry

2006-05-25 Thread Josh Berkus
Bruce,

 I am asking again:  what is the timetable for merging gborg and
 pgfoundry, and if not, can we set a date to shut down to force the move.

Looking more like mid-June.  I didn't have the time for this at the 
beginning of May that I thought I would have.

-- 
--Josh

Josh Berkus
PostgreSQL @ Sun
San Francisco

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [HACKERS] LIKE, leading percent, bind parameters and indexes

2006-05-25 Thread Greg Stark

Rodrigo Hjort [EMAIL PROTECTED] writes:

  I think more exactly, the planner can't possibly know how to plan an
  indexscan with a leading '%', because it has nowhere to start.
 
 The fact is that index scan is performed on LIKE expression on a string not
 preceded by '%', except when bound parameter is used.
 
 select * from table where field like 'THE NAME%'; -- index scan
 select * from table where field like '%THE NAME%'; -- seq scan
 select * from table where field like :bind_param; -- seq scan (always)

Just for reference I found that both Oracle and MSSQL (back when last I used
it, many years ago) did use an index scan for the following case:

select * from table where field like :bind_param || '%'

At the time this seemed perfectly logical but now that I have more experience
it seems hard to justify. There's no principled reason to think this is any
more likely than a plain :bind_param to be an indexable scan. 

However in practice this worked great. I rarely if ever put % characters into
the bind parameter and the index scan was exactly what I, as a user, expected.
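
For concreteness, a hedged libpq sketch of issuing that form with a bound
parameter (the table and column names are invented, and whether the planner
then picks an index scan is exactly the open question in this thread):

#include <stdio.h>
#include <libpq-fe.h>

static void
prefix_search(PGconn *conn, const char *prefix)
{
    const char *params[1] = { prefix };
    PGresult   *res = PQexecParams(conn,
                                   "SELECT * FROM mytab WHERE field LIKE $1::text || '%'",
                                   1,          /* one parameter */
                                   NULL,       /* let the server infer its type */
                                   params,
                                   NULL, NULL, /* text format, lengths implied */
                                   0);         /* text-format results */

    if (PQresultStatus(res) != PGRES_TUPLES_OK)
        fprintf(stderr, "query failed: %s", PQerrorMessage(conn));
    PQclear(res);
}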

Even if there's resistance to having this form be treated as indexable, there
is certainly a use case for something like this. If not this, then something
like

WHERE escape(:bind_param)||'%'

but that would be pretty hard to recognize, certainly much harder than a
simple :bind_param || '%'.


-- 
greg


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq