Re: [GENERAL] ECPG and COPY

2006-02-24 Thread Michael Meskes
On Friday, 24 February 2006, 05:26, Wes wrote:
 I found this in the archives:

   http://archives.postgresql.org/pgsql-interfaces/2004-04/msg5.php

 With 8.x, is it still true that with ECPG, it is not possible to use COPY
 inline - I'd have to write the data to a file, then 'COPY FROM filename'
 (or use direct libpq API calls instead of ECPG)?

Yes, it's still an open report. Sorry about that, and about all the other open
bug reports and feature requests. At the moment I don't have the time even to
reproduce bugs. Hopefully this will improve in the near future.
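Until the report is resolved, one workaround is to bypass ECPG for the COPY step and drive it through libpq's PQputCopyData (or an equivalent driver API), feeding it data in COPY text format. The helper below is only a sketch of the escaping that format requires; the psycopg2 call mentioned in the comment is an assumption about the client stack, not something ECPG provides:

```python
def to_copy_text(rows):
    """Encode rows of str-or-None values into PostgreSQL COPY text
    format: fields are tab-separated, NULL is \\N, and backslash,
    tab, newline and carriage-return characters are escaped."""
    lines = []
    for row in rows:
        fields = []
        for value in row:
            if value is None:
                fields.append(r"\N")
            else:
                fields.append(value.replace("\\", "\\\\")
                                   .replace("\t", "\\t")
                                   .replace("\n", "\\n")
                                   .replace("\r", "\\r"))
        lines.append("\t".join(fields))
    return "\n".join(lines) + "\n"

# Hypothetical usage with psycopg2 (not ECPG):
#   cur.copy_expert("COPY mytable FROM STDIN",
#                   io.StringIO(to_copy_text(rows)))
```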

Michael
-- 
Michael Meskes
Email: Michael at Fam-Meskes dot De, Michael at Meskes dot (De|Com|Net|Org)
ICQ: 179140304, AIM/Yahoo: michaelmeskes, Jabber: [EMAIL PROTECTED]
Go SF 49ers! Go Rhein Fire! Use Debian GNU/Linux! Use PostgreSQL!

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] unsubscribe

2006-02-24 Thread Zlatko Matic

unsubscribe


Re: [GENERAL] Temporal Databases

2006-02-24 Thread Simon Riggs
On Fri, 2006-02-24 at 00:20 -0300, Rodrigo Sakai wrote:
 It's a good solution, but not what I'm looking for. 
 I'm looking for something implemented inside the database, like the 
 flashback functionality of Oracle 10g.

I think you need to be clear about why you want this: do you want this
as a recovery mechanism or to satisfy general temporal queries?

You also need to read much recent work on the use of temporal results in
BI applications, starting with Kimball et al's books. BI applications
already frequently address these issues via specific design patterns,
rather than requiring a specific implementation within the dbms. 

IMHO this is the primary reason why no mainstream dbms provides an
in-dbms solution to this problem area for general temporal queries and
why flashback functionality is essentially a data recovery technique.

To support this you would need
- a transaction time table - inserted into by each commit (only), so you
can work out which transactions have committed and which haven't at any
point in history
- a mechanism to avoid using the clog and subtrans, since those caches
are updated in real time, so they would not give temporal results as
they currently stand, plus a mechanism to override the individual commit
hint bits that are stored on each row in the database - probably via a
new kind of Snapshot with its own local Xid result cache
- a mechanism to track the xmin up to which a table has been VACUUMed
(which will probably exist for 8.2ish), so you can throw an error saying
it is no longer possible to answer a query for time T
- potentially a mechanism to control which xmin was selected by VACUUM,
so that you could maintain explicit control over how much history was
kept

...but it would not be easily accepted into the main line, I would
guess, without some careful planning to ensure low-zero impact for
non-users.

A much easier way is to start a serialized transaction every 10 minutes
and leave the transaction idle-in-transaction. If you decide you really
need to you can start requesting data through that transaction, since it
can see back in time and you already know what the snapshot time is
(if you record it). As time moves on you abort and start new
transactions... but be careful that this can affect performance in other
ways.
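The bookkeeping for this trick can be sketched as follows; the snapshot "handles" stand in for the idle serializable connections and are purely hypothetical:

```python
import bisect

def pick_snapshot(snapshots, target):
    """Given (taken_at, handle) pairs sorted by time, one per idle
    serializable transaction opened every N minutes, return the handle
    of the newest snapshot taken at or before `target`, or None if
    history does not reach back that far."""
    times = [t for t, _ in snapshots]
    i = bisect.bisect_right(times, target)
    return snapshots[i - 1][1] if i else None
```

Queries for "the database as of time T" would then be issued through the returned connection, whose snapshot predates T.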

Best Regards, Simon Riggs






[GENERAL] How to read odbc and pgadmin lists from news server

2006-02-24 Thread Andrus Moor
Last week the following (and other active) newsgroups were removed
from the news.postgresql.org news server without any notice:

pgsql.interfaces.odbc
pgsql.interfaces.pgadmin.hackers

How can I read those newsgroups from the news server?

Andrus.





[GENERAL] How to load 6 MB file to database

2006-02-24 Thread Andrus Moor
I'm using Postgres 8.1 and the latest ODBC driver on Windows XP with 256 MB RAM.

When trying to insert a 6 MB binary file into a bytea field, an out-of-memory
error is written to the ODBC log file and the insert fails. Smaller files are
inserted OK.

Any idea how to load a 6 MB file into a bytea field, or how to avoid the
out-of-memory error?

Is it possible to add data to a single bytea field incrementally, using
separate insert or update statements that each add, for example, 100 KB of data?

Or is there another solution?

Andrus.
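The chunked-append idea does work, since bytea supports the `||` concatenation operator (large objects are the other common answer for big binaries). A sketch of the client-side chunking; the table and column names in the comment are hypothetical:

```python
CHUNK_SIZE = 100 * 1024  # ~100 KB per statement, as suggested above

def chunks(data, size=CHUNK_SIZE):
    """Split a bytes payload into pieces small enough to send in a
    single parameterized statement."""
    return [data[i:i + size] for i in range(0, len(data), size)]

# Hypothetical usage: insert an empty bytea first, then append pieces:
#   INSERT INTO files (id, body) VALUES (1, '');
#   UPDATE files SET body = body || $1 WHERE id = 1;   -- once per chunk
```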





Re: [GENERAL] Oracle purchases Sleepycat - is this the other shoe

2006-02-24 Thread Bruce Momjian
Leonard Soetedjo wrote:
 On Wednesday 15 February 2006 01:38, Tom Lane wrote:
  merlyn@stonehenge.com (Randal L. Schwartz) writes:
   Oracle purchases Sleepycat.  From what I understand, BerkeleyDB was the
   other way that MySQL could have transactions if Oracle decided to
   restrict InnoDB tables (after purchasing Innobase last year).
  
   Does this mean the other shoe has dropped for MySQL AB?
 
  The deal's not gone through yet, but it sure does look like they want to
  put a hammerlock on MySQL ...
 
 Is it possible that Oracle is trying to buy MySQL to kill off other open 
 source competitor, e.g. PostgreSQL?  MySQL has a strong number of users and 
 therefore it is a good deal for Oracle to buy MySQL.  Then by doing that, 
 Oracle will market MySQL as the low-end alternative to their own database to 
 give a full solution to the customer.  And this would slow down the take up 
 rate for other database competitor.

MySQL already has major funding.  I don't see how it could get worse for
us if Oracle bought them.

-- 
  Bruce Momjian   http://candle.pha.pa.us
  SRA OSS, Inc.   http://www.sraoss.com

  + If your life is a hard drive, Christ can be your backup. +



Re: [GENERAL] Does PG really lack a time zone for India?

2006-02-24 Thread Bruce Momjian
Martijn van Oosterhout wrote:
 Comments inline.
 
 On Wed, Feb 15, 2006 at 09:49:57AM -0500, Tom Lane wrote:
  I think defining the problem as let's get rid of australian_timezones
  would be a serious mistake.  The basic problem here is that we can't
  have a one-size-fits-all list of timezone abbreviations.  We've
  certainly heard plenty of complaints about IST, and I seem to recall
  some from Brazil, and there are other conflicts noted in the comments
  in the existing list.  So even if there is no one who cares anymore
  about australian_timezones (which I doubt, 'cause that code isn't all
  that old), we still have a problem.
 
 Hmm? The original USE_AUSTRALIAN_RULES timezones were added June
 1997[1] for 6.1 and the #define was changed to a GUC in June 2001 [2]
 in time for 7.2. The code has been there for ages.
 
 It's funny how it was added though. Someone mentioned the issue in 1997
 and said it would be nice to handle, even if it was just via a #define
 [3]. Two days later without further discussion the hack was added.

As I remember, the problem was that AST was used both for Atlantic
Standard Time (think eastern Canada) and Australia, and we had users in
both time zones.

Fortunately that was the only overlap we commonly saw for years.  Only
recently have we hit more, specifically IST for Israel and India, I
think.  Anyway, now that we have the tz database in PostgreSQL, we can
use the long names, so the abbreviations are only for convenience.  We do
have a TODO item on this:

o Allow customization of the known set of TZ names (generalize the
  present australian_timezones hack)

-- 
  Bruce Momjian   http://candle.pha.pa.us
  SRA OSS, Inc.   http://www.sraoss.com

  + If your life is a hard drive, Christ can be your backup. +



Re: [GENERAL] incremental backups

2006-02-24 Thread Bruce Momjian

I have applied the following patch, which adds to the paragraph after the
one you quoted below.  I just added a mention that the start/stop times _and_
WAL file names are in the history file.

---

Rick Gigger wrote:
 I've started writing some scripts to set up incremental backup to my  
 taste. I just discovered something and thought I would revisit this  
 thread briefly.
 
 When you go to restore from a given base file system backup you need  
 to know the start WAL file that you need and the end WAL file that  
 you need.  (You will most likely have many files beyond the stop  
 file, but you must have at least up to the stop file for the restore  
 to work.)
 
 Now if you try to restore but you don't have the stop WAL file  
 postgres will die on recovery and tell you that it can't recover  
 forward far enough to make the backup consistent.  But I wanted to  
 know the easiest way to verify if you indeed had the necessary files  
 without having to actually do a restore and have postgres tell you if  
 it succeeded or not.
 
 Perhaps no one understood me because the answer I was looking for was  
 too obvious.   But what I really wanted to know was how do you know  
 what the stop file is.  It informs you of the start file all over  
 the place when doing the base backups but I thought I would have to  
 do something clever to figure out the stop file on my own.  But  
 luckily I don't.  The backup history file has two lines like this:
 
 START WAL LOCATION: 0/88F21D0C (file 00010088)
 STOP WAL LOCATION: 0/88F21D50 (file 00010088)
 
 It was clear to me from the docs how to figure out what the start  
 file is but the end file was a mystery until I actually created a  
 backup history file and looked in it.  The only place I can find in  
 the Online Backup instructions where this is indicated is this  
 paragraph:
 
 To make use of this backup, you will need to keep around all the WAL  
 segment files generated during and after the file system backup. To  
 aid you in doing this, the pg_stop_backup function creates a backup  
 history file that is immediately stored into the WAL archive area.  
 This file is named after the first WAL segment file that you need to  
 have to make use of the backup. For example, if the starting WAL file  
 is 0001123455CD the backup history file will be named  
 something like 0001123455CD.007C9330.backup. (The second  
 number in the file name stands for an exact position within the WAL  
 file, and can ordinarily be ignored.) Once you have safely archived  
 the file system backup and the WAL segment files used during the  
 backup (as specified in the backup history file), all archived WAL  
 segments with names numerically less are no longer needed to recover  
 the file system backup and may be deleted. However, you should  
 consider keeping several backup sets to be absolutely certain that  
 you can recover your data. Keep in mind that only completed WAL  
 segment files are archived, so there will be delay between running  
 pg_stop_backup and the archiving of all WAL segment files needed to  
 make the file system backup consistent.
 
 Reading it now it seems obvious that the file would contain not only  
 the start WAL file but also the stop WAL file, but when going over the  
 directions the first time I did not pick up on it.  And it left me  
 thinking I would have to use some hack to figure it out if I ever  
 wanted to test a base backup.  It would have been less confusing to  
 me if it just said right in the docs: The backup history file  
 contains both the start WAL file name and the Stop WAL file name or  
 something like that just to make it perfectly clear.
 
 Now that I know this I can extract that filename from the backup  
 history file, check to see if it has been archived and copy it  
 somewhere if it hasn't been archived yet.  I'm pretty sure that I can  
 assume that all files before the stop file have already been  
 archived.  So once I backup the stop file I can be positive that the  
 base backup I just made will be valid when I try to restore from it.
 
 This lessens my need for the get current WAL file functionality in  
 this context.  It will still be nice to have in the context of  
 backing it up every five minutes or so in case a WAL file takes a  
 long time to fill up.
 
 Anyway I would have been less confused if the docs had made it more  
 clear that the name of the stop wal file was in the backup history file.
 
 Rick
 
 
 On Jan 30, 2006, at 10:20 PM, Bruce Momjian wrote:
 
 
  Yes, I think copying it while it is being written is safe.
 
  -- 
  -
 
  Rick Gigger wrote:
  Yes!  Thanks you!  That is exactly what I was looking for.
 
  So I take it that this means that it is save to copy the current in
  use WAL file even as it is being 
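The stop-file check described in this thread is easy to script. A minimal sketch that pulls the stop segment name out of a backup history file (once that segment, and everything before it, is confirmed in the archive, the base backup should be restorable):

```python
import re

def stop_wal_file(history_text):
    """Return the WAL segment named on the STOP WAL LOCATION line of a
    backup history file; recovery needs every segment up to and
    including this one."""
    match = re.search(r"STOP WAL LOCATION:.*\(file ([0-9A-Fa-f]+)\)",
                      history_text)
    return match.group(1) if match else None
```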

[GENERAL] Operator for int8 array

2006-02-24 Thread K.Deepa

Dear All,

I need an operator for int8 arrays. I tried changing the code in contrib/
and compiled it, but when I execute the query the backend dumps core.
Kindly tell me if there is an alternative way to overcome the problem.
I am using PostgreSQL 7.1.3.

--
regards,
Deepa K



Re: [GENERAL] Oracle purchases Sleepycat - is this the other shoe

2006-02-24 Thread Marc G. Fournier

On Fri, 24 Feb 2006, Bruce Momjian wrote:


Leonard Soetedjo wrote:

On Wednesday 15 February 2006 01:38, Tom Lane wrote:

merlyn@stonehenge.com (Randal L. Schwartz) writes:

Oracle purchases Sleepycat.  From what I understand, BerkeleyDB was the
other way that MySQL could have transactions if Oracle decided to
restrict InnoDB tables (after purchasing Innobase last year).

Does this mean the other shoe has dropped for MySQL AB?


The deal's not gone through yet, but it sure does look like they want to
put a hammerlock on MySQL ...


Is it possible that Oracle is trying to buy MySQL to kill off other open
source competitor, e.g. PostgreSQL?  MySQL has a strong number of users and
therefore it is a good deal for Oracle to buy MySQL.  Then by doing that,
Oracle will market MySQL as the low-end alternative to their own database to
give a full solution to the customer.  And this would slow down the take up
rate for other database competitor.


MySQL already has major funding.  I don't see how it could get worse for
us if Oracle bought them.


I think that Leonard's point here is that if Oracle were to acquire them 
and market MySQL as 'the low-end alternative', that they have a huge 
marketing budget that they could bring to bear on this ... one that I 
imagine makes MySQL's look like pocket change ...


Greatbridge had major funding, and succeeded in burning it off in, what, 
12 months?



Marc G. Fournier   Hub.Org Networking Services (http://www.hub.org)
Email: [EMAIL PROTECTED]   Yahoo!: yscrappy  ICQ: 7615664



Re: [GENERAL] Oracle purchases Sleepycat - is this the other shoe

2006-02-24 Thread Bruno Wolff III
On Fri, Feb 24, 2006 at 10:52:53 -0400,
  Marc G. Fournier [EMAIL PROTECTED] wrote:
 On Fri, 24 Feb 2006, Bruce Momjian wrote:
 
 Greatbridge had major funding, and succeeded in burning it off in, what, 
 12 months?

It's been a long time, but I thought they still had a significant amount
of money left when Greatbridge was shut down.



[GENERAL] setting LC_NUMERIC

2006-02-24 Thread [EMAIL PROTECTED]
Hi,

Unfortunately I did not set the locales correctly when I created my
database. The result is that the numeric type expects an American
format like 12.34.

Since I'm from Europe, where this number is written as 12,34,
I would like to change it without having to recreate the database,
because it already contains data.

I set the lc_numeric in the postgres.conf but that did not help. I also
used:

=# SET LC_NUMERIC TO [EMAIL PROTECTED] ;
SET

In the psql client, but after that, inserts with the European number
format still failed. I also changed the environment variables and
restarted the database server. No luck either.

How do I change the numeric format? I read the PostgreSQL manual, but
it's vague on this point, and the approaches it describes did not work for me.

Any advice is appreciated!

Fritz




Re: [GENERAL] How do I use the backend APIs

2006-02-24 Thread Chad
Thanks Martijn.




[GENERAL] Could DBmirror be done with a machine using an RPM installation instead of source code [tar.gz]?

2006-02-24 Thread Amrit Angsusing
I am using two machines with FC3 and the PostgreSQL 8.1 RPM installation, and would like to set up DBmirror or some other PostgreSQL database replication from a master to a slave server. Can I do this replication using the RPM packages, or must I use the
tar.gz (source) version? Could anybody give me some advice about replication?
Amrit


[GENERAL] psql is very slow

2006-02-24 Thread Michal Merta
Hi all,

I have psql 7.3.4, apache 2.0.40, perl v5.8.0. The database is pretty big (the dump is about 100 MB).
But all the operations are very, very slow.
Is there any way to make postgresql quicker? (Don't tell me to cut down the database :))

OS version: 2.4.20-gentoo-r7
RAM: 512MB
CPU:i686 AMD Athlon(tm) XP 1800+

Thanks.
Regards, Michal


[GENERAL] How do I prevent binding to TCP/IP port outside of localhost?

2006-02-24 Thread Karl Wright
I have a situation where I need postgres to LISTEN and allow BINDs to 
its TCP/IP port (5432) only to connections that originate from 
localhost.  I need it to not accept *socket* connections if requests 
come in from off-box.  If I try to set up pg_hba.conf such that it 
rejects off-box requests, it seems to do this after it permits the 
socket connection, and that won't do for our security geeks here.


For example, here's the difference:

[EMAIL PROTECTED]:~$ curl http://duck37:5432
curl: (52) Empty reply from server
[EMAIL PROTECTED]:~$ curl http://duck37:5433
curl: (7) couldn't connect to host
[EMAIL PROTECTED]:~$

Note that the outside world seems to be able to connect to 5432 just 
fine, although any *database* connections get (properly) rejected.


I cannot turn off TCP/IP entirely because I have a Java application that 
uses JDBC.


Can somebody tell me whether this is an innate capability of postgres, 
or whether I will need to modify the base code (and if so, WHERE I would 
modify it?)


Thanks,
Karl Wright




[GENERAL] User groups

2006-02-24 Thread Russell Denney
Any user groups in the Tampa, St. Petersburg, Florida area? Anyone 
interested in starting a group? I am new to PostgreSQL and many aspects 
of programming, but I would very much like to meet with those that have 
the same interests, regardless of programming skills.





Re: [GENERAL] upgrade PostgreSQL 8.x on production FreeBSD

2006-02-24 Thread Filip Rembiałkowski
[EMAIL PROTECTED] wrote:
 Could anybody point to an on-line resource about the steps involved with
 upgrading of PostgreSQL 8.x on a production FreeBSD 5.4 with minimum
 downtime (i.e. 1 - stop db so that no changes happen 2 - dump 3 -
 upgrade (ports) 4 - import 5- start db).
 

here you are:
http://www.postgresql.org/docs/8.1/static/migration.html

If both PostgreSQL versions can't coexist on one host, you can use
another one, with possibly identical setup (HW/OS/etc) for the
migration, and transfer (use tar + netcat for minimal downtime) the
cluster afterwards.
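For the minimal-downtime sequence listed in the question, the classic dump/reload cycle looks roughly like this; a sketch only, since paths, ports and database names depend on how the FreeBSD ports install things:

```shell
# 1. Block application access (e.g. tighten pg_hba.conf), then dump
#    the whole cluster:
pg_dumpall > /tmp/cluster.sql

# 2. Stop the old server, install the new version from ports, and
#    initdb a fresh data directory.

# 3. Start the new server and reload:
psql -f /tmp/cluster.sql template1
```

Where both versions can run side by side on different ports, piping the new pg_dumpall straight into the new server's psql avoids the intermediate file.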




[GENERAL] Mailing list

2006-02-24 Thread Leandro Alvarez
I am writing on this occasion to join the postgresql mailing list. I am just starting to use it, and I would be interested in receiving documentation that could be useful to me, as well as helping anyone who needs it, as far as I am able... If you could also recommend some texts, that would be quite useful to me.
Thanks and regards, Leandro Alvarez, Caracas, Venezuela


Re: [GENERAL] Oracle purchases Sleepycat - is this the other shoe

2006-02-24 Thread Ned Lilly

Marc G. Fournier wrote:

On Fri, 24 Feb 2006, Bruce Momjian wrote:


MySQL already has major funding.  I don't see how it could get worse for
us if Oracle bought them.


I think that Leonard's point here is that if Oracle were to acquire them 
and market MySQL as 'the low-end alternative', that they have a huge 
marketing budget that they could bring to bear on this ... one that I 
imagine makes MySQL's look like pocket change ...


Greatbridge had major funding, and succeeded in burning it off in, 
what, 12 months?


Umm, I think MySQL has executed a little better than the late Great Bridge.  I 
should know.

I think Bruce's point was that any Oracle-MySQL combination would just be more 
of the same - highly visible low-end database, not really challenging Oracle 
10, single-company-directed, not really that open a development community.

Frankly, I think that would be pretty good news for PostgreSQL.  It would just underscore 
the strengths of the PG community.  Will be interesting to see if Oracle really does buy 
JBoss or Zend, and whether those communities fork in some way as has been widely 
forecast.  I think the point that it can't happen to PostgreSQL is pretty 
solid.

That's not to say marketing's not extremely important for PostgreSQL.  We're still the underdog in that fight, and would be more so if Oracle bought MySQL.  




Re: [GENERAL] How do I prevent binding to TCP/IP port outside of localhost?

2006-02-24 Thread Jim Buttafuoco

If it's Linux, use iptables to block the port.
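A hypothetical netfilter rule pair for this (interface and tooling may need adjusting for your distribution):

```shell
# Accept PostgreSQL connections arriving on the loopback interface,
# then drop anything else aimed at port 5432.
iptables -A INPUT -p tcp --dport 5432 -i lo -j ACCEPT
iptables -A INPUT -p tcp --dport 5432 -j DROP
```

With DROP, the remote `curl` test in the original message should time out instead of getting an empty reply, so no socket connection is ever accepted.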

-- Original Message ---
From: Karl Wright [EMAIL PROTECTED]
To: pgsql-general@postgresql.org
Sent: Thu, 23 Feb 2006 15:49:09 -0500
Subject: [GENERAL] How do I prevent binding to TCP/IP port outside of localhost?

 I have a situation where I need postgres to LISTEN and allow BINDs to 
 its TCP/IP port (5432) only to connections that originate from 
 localhost.  I need it to not accept *socket* connections if requests 
 come in from off-box.  If I try to set up pg_hba.conf such that it 
 rejects off-box requests, it seems to do this after it permits the 
 socket connection, and that won't do for our security geeks here.
 
 For example, here's the difference:
 
 [EMAIL PROTECTED]:~$ curl http://duck37:5432
 curl: (52) Empty reply from server
 [EMAIL PROTECTED]:~$ curl http://duck37:5433
 curl: (7) couldn't connect to host
 [EMAIL PROTECTED]:~$
 
 Note that the outside world seems to be able to connect to 5432 just 
 fine, although any *database* connections get (properly) rejected.
 
 I cannot turn off TCP/IP entirely because I have a Java application that 
 uses JDBC.
 
 Can somebody tell me whether this is an innate capability of postgres, 
 or whether I will need to modify the base code (and if so, WHERE I would 
 modify it?)
 
 Thanks,
 Karl Wright
 
--- End of Original Message ---




Re: [GENERAL] Oracle purchases Sleepycat - is this the other shoe

2006-02-24 Thread Tom Lane
Bruno Wolff III [EMAIL PROTECTED] writes:
   Marc G. Fournier [EMAIL PROTECTED] wrote:
 Greatbridge had major funding, and succeeded in burning it off in, what, 
 12 months?

 It's been a long time, but I thought they still had a significant amount
 of money left when Greatbridge was shut down.

Oh, certainly.  Lack of money was not the problem --- GB was performing
according to plan, which was to become profitable within three years,
but the board of directors got cold feet.

regards, tom lane



[GENERAL] checksum error from windows to linux

2006-02-24 Thread Antoine PERENNEC
Hi all,

I'm a newbie to postgres, and I'm facing a few problems starting my postgres server.

Actually, I've retrieved a PostgreSQL database from a PostgreSQL server that was running on Windows, and I'm trying to load this database on a PostgreSQL server that will be running on a Linux Mandrake platform. I downloaded the PostgreSQL 8.0.7 package, the same version as the Windows server, and copied the data directory from the Windows server into the /usr/local/pgsql/data directory on my Linux platform.

Then I tried to launch the server in the foreground with the command line:

/usr/local/pgsql/bin/postmaster -D /usr/local/pgsql/data

The error message said that the value "French_France" for the locale "lc_messages" wasn't recognized by postmaster. I then defined a locale alias in locale.alias for "French_France", but the checksum problem persists. I suppose there's a deeper cause to this problem, involving the way the checksum is computed.

Is there a way to overcome the problem simply, for example by re-computing the checksum before starting the server? Or may I initialize the server with an empty database with initdb, then start postmaster and then recover the data without the checksum? I'm really looking for this particular kind of recovery process.

Anyway, sorry for my difficulties of explanation; I'm really new to postgres and English isn't my mother tongue (I bet you've guessed it). Thanks a lot for any help!

Antoine

Re: [GENERAL] Oracle purchases Sleepycat - is this the other shoe

2006-02-24 Thread Ned Lilly

Without going into the particulars, let's just say the total amount spent was 
less than publicly announced figures, and the parent (sole investor) shut it 
down before coming close to those figures.


Bruno Wolff III wrote:

On Fri, Feb 24, 2006 at 10:52:53 -0400,
  Marc G. Fournier [EMAIL PROTECTED] wrote:

On Fri, 24 Feb 2006, Bruce Momjian wrote:

Greatbridge had major funding, and succeeded in burning it off in, what, 
12 months?


It's been a long time, but I thought they still had a significant amount
of money left when Greatbridge was shut down.



Re: [GENERAL] psql is very slow

2006-02-24 Thread Sean Davis



On 2/22/06 10:23 AM, Michal Merta [EMAIL PROTECTED] wrote:

 I have a psql 7.3.4, apache 2.0.40, perl  v5.8.0. Database is pretty big,
 (dump is about 100Megs).
 But all the operations are very, very slow.
 Is any possibility to make postgresql more quick? (don't tell me to cut the
 database :))

I assume that by psql you mean postgresql?

You'll probably have to be more specific about what you are doing, what your
database is like and how it is used, what you want to change, and what kinds
of time you are talking about.

Sean





Re: [GENERAL] How do I prevent binding to TCP/IP port outside of

2006-02-24 Thread Rich Doughty

Karl Wright wrote:
I have a situation where I need postgres to LISTEN and allow BINDs to 
its TCP/IP port (5432) only to connections that originate from 
localhost.  I need it to not accept *socket* connections if requests 
come in from off-box.  If I try to set up pg_hba.conf such that it 
rejects off-box requests, it seems to do this after it permits the 
socket connection, and that won't do for our security geeks here.


try listen_addresses = 'localhost' in your postgresql.conf
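A minimal postgresql.conf fragment, assuming PostgreSQL 8.0 or later (where listen_addresses replaced the older tcpip_socket-style settings):

```
# Listen for TCP connections on the loopback interface only.
# JDBC clients connecting to localhost:5432 still work; remote
# hosts are refused at the socket level, before pg_hba.conf runs.
listen_addresses = 'localhost'
```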



For example, here's the difference:

[EMAIL PROTECTED]:~$ curl http://duck37:5432
curl: (52) Empty reply from server
[EMAIL PROTECTED]:~$ curl http://duck37:5433
curl: (7) couldn't connect to host
[EMAIL PROTECTED]:~$

Note that the outside world seems to be able to connect to 5432 just 
fine, although any *database* connections get (properly) rejected.


I cannot turn off TCP/IP entirely because I have a Java application that 
uses JDBC.


Can somebody tell me whether this is an innate capability of postgres, 
or whether I will need to modify the base code (and if so, WHERE I would 
modify it?)


Thanks,
Karl Wright






--

  - Rich Doughty



Re: [GENERAL] upgrade PostgreSQL 8.x on production FreeBSD

2006-02-24 Thread Philippe Ferreira

Hi,

If the downtime is a concern, you could use Slony-1 for the migration.

Philippe Ferreira.


[EMAIL PROTECTED] wrote:
 


Could anybody point to an on-line resource about the steps involved with
upgrading of PostgreSQL 8.x on a production FreeBSD 5.4 with minimum
downtime (i.e. 1 - stop db so that no changes happen 2 - dump 3 -
upgrade (ports) 4 - import 5- start db).

   



here you are:
http://www.postgresql.org/docs/8.1/static/migration.html

If both PostgreSQL versions can't coexist on one host, you can use
another one, with possibly identical setup (HW/OS/etc) for the
migration, and transfer (use tar + netcat for minimal downtime) the
cluster afterwards.

 






Re: [GENERAL] checksum error from windows to linux

2006-02-24 Thread Tom Lane
Antoine PERENNEC [EMAIL PROTECTED] writes:
 Actually, I've retrieved a postgreSQL database from a postgreSQL
 server that was running on windows and I'm trying to load this
 database on a postgreSQL server that will be running on a linux
 mandrake platform.

You generally can't transfer the physical datafiles between
non-identical platforms.  Use pg_dump instead to create a portable
dump.

regards, tom lane



Re: [GENERAL] setting LC_NUMERIC

2006-02-24 Thread Tom Lane
[EMAIL PROTECTED] [EMAIL PROTECTED] writes:
 How do I change the numeric format?

You don't.  You can use to_char and to_number to do locale-aware
transformations, but the base data types only handle the SQL-standard
representations of numbers.
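Two workarounds follow from this: server-side, to_number with a D pattern honors lc_numeric (e.g. to_number('12,34', '99D99')); client-side, the string can be normalized before insertion. A minimal sketch of the latter, assuming '.' is only ever a group separator and ',' the decimal separator:

```python
def from_euro(number_text):
    """Rewrite a European-formatted number such as '1.234,56' into the
    SQL-standard form '1234.56' before it reaches a numeric column."""
    return number_text.replace(".", "").replace(",", ".")
```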

regards, tom lane



Re: [GENERAL] psql is very slow

2006-02-24 Thread A. Kretschmer
On 22.02.2006 at 16:23:16 +0100, Michal Merta wrote:
 Hi all,
 
 I have a psql 7.3.4, apache 2.0.40, perl  v5.8.0. Database is pretty big,
 (dump is about 100Megs).
 But all the operations are very, very slow.

- 7.3 is very, very old; newer versions are much faster!
- run vacuum!
- use explain
- tell us your slow queries

Are you from Germany? We have a German mailing list too.


HTH, Andreas
-- 
Andreas Kretschmer(Kontakt: siehe Header)
Heynitz:  035242/47215,  D1: 0160/7141639
GnuPG-ID 0x3FFF606C http://wwwkeys.de.pgp.net
 ===Schollglas Unternehmensgruppe=== 

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] setting LC_NUMERIC

2006-02-24 Thread Peter Eisentraut
On Wednesday, 22 February 2006 14:26, [EMAIL PROTECTED] wrote:
 unfortunately I did not set the locales correct, when I created my
 database. The result is, that the numeric type expects an american
 format like 12.34.

You can't change that.  The locale only affects the to_char function.

-- 
Peter Eisentraut
http://developer.postgresql.org/~petere/

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] Temporal Databases

2006-02-24 Thread Brad Nicholson

Simon Riggs wrote:


A much easier way is to start a serialized transaction every 10 minutes
and leave the transaction idle-in-transaction. If you decide you really
need to you can start requesting data through that transaction, since it
can see back in time and you already know what the snapshot time is
(if you record it). As time moves on you abort and start new
transactions... but be careful that this can effect performance in other
ways.

 



We're currently prototyping a system (still very much in its infancy)
that uses the Slony-I log shipping mechanism to build an off-line temporal
system for point-in-time reporting purposes.  The idea is that the
log shipping files will contain only the committed inserts, updates and
deletes.  Those log files are then applied to an off-line system which
has a trigger defined on each table that rewrites the statements, based
on the type of statement, into a temporally sensitive format.


If you want to get an exact point-in-time snapshot with this approach,
you are going to have to have timestamps on all tables in your source
database that contain the exact time of the statement.  Otherwise,
a best guess (based on the time the Slony sync was generated) is the
closest that you will be able to come.


--
Brad Nicholson  416-673-4106   
Database Administrator, Afilias Canada Corp.




---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] ltree + gist index performance degrades significantly over a night

2006-02-24 Thread CG
PostgreSQL 8.1.3

I'm trying to collect some hard numbers to show just how much it degrades and
over how long a time interval.

All I have now is anecdotal evidence, and I was hoping to save myself some
downtime by seeking advice early.

I have a search table which I use for partial-match text searches:

CREATE TABLE search
(
  id int4 NOT NULL DEFAULT nextval('search_id_seq'::regclass),
  item_id int8 NOT NULL,
  search_vector ltree NOT NULL,
  CONSTRAINT search_id_pkey PRIMARY KEY (id),
  CONSTRAINT search_item_id_fkey FOREIGN KEY (item_id)
  REFERENCES items (id) MATCH SIMPLE
  ON UPDATE CASCADE ON DELETE CASCADE
) 
WITH OIDS;

CREATE INDEX lsearch_vector_idx
  ON search
  USING gist
  (search_vector);

I have some triggers that insert rows into the search table as rows are
inserted into items.

I implemented this yesterday, and the immediate effect was a fantastic return
time for partial text searches in the sub-second range. By today, these queries
take 10 minutes sometimes... There are about 134000 rows in the table.

The table gets analyzed nightly. Should the frequency be more? There are about
1000 rows added a day, only about 30 or so rows removed, and nothing is ever
updated. There's not that much turnover.

The search vectors are built like this:

For a string "Hello World" the ltree is created like 'h.e.l.l.o.w.o.r.l.d' ...
If I wanted to find all rows with "orl" in them I would construct an lquery
like '*.o.r.l.*' and use the ~ operator in the where clause. I would link to
the table items by the item_id ... 
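The matching described here can be sanity-checked directly in psql (a sketch; assumes the contrib/ltree module is installed):

```sql
-- "Hello World" becomes the label path below; '*' in an lquery matches
-- zero or more labels, so '*.o.r.l.*' finds paths containing o.r.l.
SELECT 'h.e.l.l.o.w.o.r.l.d'::ltree ~ '*.o.r.l.*'::lquery;  -- true
SELECT 'h.e.l.l.o.w.o.r.l.d'::ltree ~ '*.x.y.z.*'::lquery;  -- false
```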

What could be making this go so wrong? Is there a better way to accomplish my
task?

CG

__
Do You Yahoo!?
Tired of spam?  Yahoo! Mail has the best spam protection around 
http://mail.yahoo.com 

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] pg_dump warning with -Fc option

2006-02-24 Thread Richard Huxton

Carlos Henrique Reimer wrote:

Hello,
   
  Pg_dump complains when I use the -Fc option with:
   
  pg_dump: [archiver] WARNING: requested compression not available in this installation -- archive will be uncompressed


  and the dump is not compressed... searching in the list I've found that there is something related with zlib. 
   
  How can I fix it? Should I recompile postgresql?


Yes - you probably want to install the zlib-dev package for your machine 
first (not just the zlib package).


--
  Richard Huxton
  Archonet Ltd


---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


Re: [GENERAL] Operator for int8 array

2006-02-24 Thread Tom Lane
K.Deepa [EMAIL PROTECTED] writes:
 I need operator for int8 array. I tried changing the code in contrib/ 
 and compiled.
 When I tried executing the query, it is dumping. Kindly tell me if there 
 is alternative
 to overcome the problem. I am using postgresql 7.1.3 version.

7.1.3?  Egad.  Get yourself onto some remotely modern version of PG.
7.1 is nearly five years old and has many known serious bugs, of
both data-loss and security flavors.

You didn't say exactly what you needed to do with an int8 array, but you
may well find that 8.1 can do it already.  The array support is far
superior now to what it was in 7.1.
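For instance, core 8.1 can already do things like the following on an int8 array without any contrib module (a sketch):

```sql
-- Membership test and length on an int8 array.
SELECT 42::int8 = ANY (ARRAY[1, 2, 42]::int8[]);   -- true
SELECT array_upper(ARRAY[10, 20, 30]::int8[], 1);  -- 3
```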

regards, tom lane

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Temporal Databases

2006-02-24 Thread Chris Browne
[EMAIL PROTECTED] (Bernhard Weisshuhn) writes:
 On Thu, Feb 23, 2006 at 03:34:36PM -0300, Rodrigo Sakai [EMAIL PROTECTED] 
 wrote:
 I'm focus on temporal databases (not temporary), and I want to know
 if anyone here is studying this tecnologies too. So, we can
 exchange knowlegment. Specifically, anyone who is trying to
 implement on postgresql the aspect of time (temporal).  These
 researches are lead by Richard Snodgrass. So, anyone who have
 something to share, please contact me!

 Not sure if I understand the problem correctly, but the
 contrib/spi/timetravel module does something which I think may be
 what you are talking about.

   
 http://developer.postgresql.org/cvsweb.cgi/~checkout~/pgsql/contrib/spi/README.timetravel

 The docs are a bit cryptic but you should be able to grasp if it
 suits your needs. Basically you can go back to any point in time for
 a timetravel table and make date-based comparisons.

If I recall properly, the last time one of our folks looked at the
timetravel module, they found that it hadn't been updated to be
compatible with modern versions of PostgreSQL.

FYI, we're interested in this too; one of the neato new features in
Slony-I 1.1 was log shipping, which had a number of alterations made
to it to ensure it would provide useful information for constructing
temporal databases.

Notably, log shipping includes the timestamp of the time of each SYNC
on the source system, which gives at least *approximate* temporal
information as to when updates took place.

The intent of that is to allow loading log shipping data into a
modified database schema where two changes take place:

 - Tables are augmented with start/end dates

 - Three triggers affect the three operations, populating those dates:

- INSERT sets start = time of SYNC, end = infinity
- UPDATE alters the last record to change the end date to time of SYNC, 
  and inserts the new row with start = time of SYNC, end = infinity
- DELETE alters the last record to change the end date to time of SYNC

That maps a stateful database onto a temporal form.
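A rough sketch of such a trigger is below. All table and column names are hypothetical, now() stands in for the SYNC timestamp carried in the log-shipping data, and superseded versions are moved to a side table (one way to avoid the trigger re-firing on its own INSERT):

```sql
-- Hypothetical replica table, augmented with a validity interval.
ALTER TABLE accounts ADD COLUMN valid_from timestamptz;
ALTER TABLE accounts ADD COLUMN valid_to   timestamptz DEFAULT 'infinity';

CREATE OR REPLACE FUNCTION temporalize() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'INSERT' THEN
        NEW.valid_from := now();        -- really: time of the SYNC
        NEW.valid_to   := 'infinity';
        RETURN NEW;
    ELSIF TG_OP = 'UPDATE' THEN
        -- close out the superseded version, open the new one
        INSERT INTO accounts_history (id, balance, valid_from, valid_to)
            VALUES (OLD.id, OLD.balance, OLD.valid_from, now());
        NEW.valid_from := now();
        NEW.valid_to   := 'infinity';
        RETURN NEW;
    ELSE  -- DELETE: close the validity interval in the history
        INSERT INTO accounts_history (id, balance, valid_from, valid_to)
            VALUES (OLD.id, OLD.balance, OLD.valid_from, now());
        RETURN OLD;
    END IF;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER accounts_temporal
    BEFORE INSERT OR UPDATE OR DELETE ON accounts
    FOR EACH ROW EXECUTE PROCEDURE temporalize();
```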

It doesn't provide a way to address making retroactive changes, but
seeing as how this is being fed by operational/production style
systems, retroactivity normally isn't something online systems cope
with terribly much anyways.
-- 
output = reverse(gro.gultn @ enworbbc)
http://cbbrowne.com/info/linux.html
Rules of  the Evil Overlord #76.  If the hero  runs up to my  roof, I
will not run up after him and  struggle with him in an attempt to push
him  over the  edge. I  will also  not  engage him  at the  edge of  a
cliff. (In the middle of a  rope-bridge over a river of molten lava is
not even worth considering.) http://www.eviloverlord.com/

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] ltree + gist index performance degrades

2006-02-24 Thread Scott Marlowe
On Fri, 2006-02-24 at 11:02, CG wrote:
 PostgreSQL 8.1.3
 
 I'm trying to collect some hard numbers to show just how much it degrades and
 over how long a time interval.
 
 All I have now is anecdotal evidence, and I was hoping to save myself some
 downtime by seeking advice early.
 
 I have a search table which I use for partial-match text searches:
 
 CREATE TABLE search
 (
   id int4 NOT NULL DEFAULT nextval('search_id_seq'::regclass),
   item_id int8 NOT NULL,
   search_vector ltree NOT NULL,
   CONSTRAINT search_id_pkey PRIMARY KEY (id),
   CONSTRAINT search_item_id_fkey FOREIGN KEY (item_id)
   REFERENCES items (id) MATCH SIMPLE
   ON UPDATE CASCADE ON DELETE CASCADE
 ) 
 WITH OIDS;
 
 CREATE INDEX lsearch_vector_idx
   ON search
   USING gist
   (search_vector);
 
 I have some triggers that insert rows into the search table as rows are
 inserted into items.
 
 I implemented this yesterday, and the immediate effect was a fantastic return
 time for partial text searches in the sub-second range. By today, these 
 queries
 take 10 minutes sometimes... There are about 134000 rows in the table.
 
 The table gets analyzed nightly. Should the frequency be more? There are about
 1000 rows added a day, only about 30 or so rows removed, and nothing is ever
 updated. There's not that much turnover.
 
 The search vectors are built like this:
 
 For a string Hello World the ltree is created like 'h.e.l.l.o.w.o.r.l.d' ...
 If I wanted to find all rows with orl in them i would construct an lquery
 like '*.o.r.l.*' and use the ~ operator in the where clause. I would link to
 the table items by the item_id ... 
 
 What could be making this go so wrong? Is there a better way to accomplish my
 task?

Are you vacuuming regularly, are your fsm settings high enough, and what
does vacuum verbose say?

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


[GENERAL] collation UTF-8

2006-02-24 Thread Tomi NA
I'm using PostgreSQL 8.1.2 on linux and want to load UTF-8 encoded varchars.
While I can store and get at stored text correctly, the ORDER BY places all
accented characters (Croatian, in this case - probably marked hr_HR) after
non-accented characters.

This is no showstopper, but it does affect the general perception of
application quality. Now, I've seen the issue mentioned in a number of places,
but often with fairly old versions of pgsql (8.0), in different circumstances
etc., so my question is:

is there an official way to set up UTF8 collation so that SELECT first_name
FROM persons ORDER BY first_name works as expected?

TIA,
Tomislav


Re: [GENERAL] collation UTF-8

2006-02-24 Thread Martijn van Oosterhout
On Fri, Feb 24, 2006 at 06:23:07PM +0100, Tomi NA wrote:
 I'm using PosgreSQL 8.1.2 on linux and want to load UTF-8 encoded varchars.
 While I can store and get at stored text correctly, the ORDER BY places all
 accented characters (Croatian, in this case - probably marked hr_HR) after
 non-accented characters.
 This is no showstopper, but it does affect the general perception of
 application quality.

Collation is a function of the OS. Basically, is the locale of your
database setup for UTF-8 collation? It would probably be called
hr_HR.UTF-8.

 is there an official way to set up UTF8 collation so that SELECT first_name
 FROM persons ORDER BY first_name works as expected?

Yes, setup the locale correctly. In general, postgresql should give the
same results as sort(1) on the command-line. Use that to experiment.

LC_ALL=hr_HR.UTF-8 sort < input > output
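On the database side, the collation locale the cluster was initialized with can be checked directly (these settings are fixed at initdb time):

```sql
SHOW lc_collate;  -- should report a UTF-8 Croatian locale, e.g. hr_HR.UTF-8
SHOW lc_ctype;
```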

Hope this helps,
-- 
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
 Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a
 tool for doing 5% of the work and then sitting around waiting for someone
 else to do the other 95% so you can sue them.


signature.asc
Description: Digital signature


Re: [GENERAL] ltree + gist index performance degrades significantly over a night

2006-02-24 Thread Martijn van Oosterhout
On Fri, Feb 24, 2006 at 09:02:04AM -0800, CG wrote:
 PostgreSQL 8.1.3
 
 I'm trying to collect some hard numbers to show just how much it degrades and
 over how long a time interval.
 
 All I have now is anecdotal evidence, and I was hoping to save myself some
 downtime by seeking advice early.

snip

 I implemented this yesterday, and the immediate effect was a fantastic return
 time for partial text searches in the sub-second range. By today, these 
 queries
 take 10 minutes sometimes... There are about 134000 rows in the table.
 
 The table gets analyzed nightly. Should the frequency be more? There are about
 1000 rows added a day, only about 30 or so rows removed, and nothing is ever
 updated. There's not that much turnover.

That's very odd. Like the other person said, do you vacuum and analyse?
But my question is: is it using the index? What does EXPLAIN / EXPLAIN
ANALYZE tell you?

Have a nice day,
-- 
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
 Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a
 tool for doing 5% of the work and then sitting around waiting for someone
 else to do the other 95% so you can sue them.


signature.asc
Description: Digital signature


Re: [GENERAL] ltree + gist index performance degrades significantly over a night

2006-02-24 Thread CG
--- Scott Marlowe [EMAIL PROTECTED] wrote:

 Are you vacuuming regularly, are your fsm settings high enough, and what
 does vacuum verbose say?

Autovacuum is running, but I do a nightly vacuum analyze. When I just do a
vacuum analyze on the table I get:

data=# vacuum analyze verbose search;
INFO:  vacuuming search
INFO:  index search_id_pkey now contains 1344661 row versions in 5134 pages
DETAIL:  9 index row versions were removed.
0 index pages have been deleted, 0 are currently reusable.
CPU 0.09s/0.07u sec elapsed 4.91 sec.
INFO:  index search_vector_idx now contains 1344672 row versions in 47725
pages
DETAIL:  9 index row versions were removed.
0 index pages have been deleted, 0 are currently reusable.
CPU 1.77s/0.37u sec elapsed 407.55 sec.
INFO:  index search_item_id_idx now contains 1344690 row versions in 6652
pages
DETAIL:  9 index row versions were removed.
0 index pages have been deleted, 0 are currently reusable.
CPU 0.24s/0.08u sec elapsed 45.62 sec.
INFO:  search: removed 9 row versions in 2 pages
DETAIL:  CPU 0.00s/0.00u sec elapsed 0.05 sec.
INFO:  letter_search: found 9 removable, 1344661 nonremovable row versions in
33984 pages
DETAIL:  0 dead row versions cannot be removed yet.
There were 141 unused item pointers.
0 pages are entirely empty.
CPU 2.41s/0.62u sec elapsed 483.06 sec.
INFO:  vacuuming pg_toast.pg_toast_174918394
INFO:  index pg_toast_174918394_index now contains 0 row versions in 1 pages
DETAIL:  0 index pages have been deleted, 0 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.01 sec.
INFO:  pg_toast_174918394: found 0 removable, 0 nonremovable row versions in
0 pages
DETAIL:  0 dead row versions cannot be removed yet.
There were 0 unused item pointers.
0 pages are entirely empty.
CPU 0.00s/0.00u sec elapsed 0.01 sec.
INFO:  analyzing search
INFO:  search: scanned 3000 of 33984 pages, containing 119035 live rows and 0
dead rows; 3000 rows in sample, 1348428 estimated total rows
VACUUM

max_fsm_pages = 6   # min max_fsm_relations*16, 6 bytes each
max_fsm_relations = 2000# min 100, ~70 bytes each




__
Do You Yahoo!?
Tired of spam?  Yahoo! Mail has the best spam protection around 
http://mail.yahoo.com 

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] ltree + gist index performance degrades significantly over a night

2006-02-24 Thread CG


--- Martijn van Oosterhout kleptog@svana.org wrote:

 That's very odd. Like the other person said, do you vacuum and analyse?
 But my question is: is it using the index? What does EXPLAIN / EXPLAIN
 ANALYZE tell you?

data=# explain select * from search where search_vector ~ '*.o.r.l.*'::lquery;
QUERY PLAN
---
 Bitmap Heap Scan on search  (cost=53.71..4566.65 rows=1345 width=161)
   Recheck Cond: (search_vector ~ '*.o.r.l.*'::lquery)
   ->  Bitmap Index Scan on search_vector_idx  (cost=0.00..53.71 rows=1345
width=0)
 Index Cond: (search_vector ~ '*.o.r.l.*'::lquery)
(4 rows)

data=# explain analyze select * from search where search_vector ~
'*.o.r.l.*'::lquery;
QUERY PLAN
--
 Bitmap Heap Scan on search  (cost=53.71..4566.65 rows=1345 width=161) (actual
time=183684.156..196997.278 rows=1655 loops=1)
   Recheck Cond: (search_vector ~ '*.o.r.l.*'::lquery)
   ->  Bitmap Index Scan on search_vector_idx  (cost=0.00..53.71 rows=1345
width=0) (actual time=183683.857..183683.857 rows=1655 loops=1)
 Index Cond: (search_vector ~ '*.o.r.l.*'::lquery)
 Total runtime: 197000.061 ms
(5 rows)

I appreciate you taking the time to help me out. Thank you all.


__
Do You Yahoo!?
Tired of spam?  Yahoo! Mail has the best spam protection around 
http://mail.yahoo.com 

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] collation UTF-8

2006-02-24 Thread Tomi NA
On 2/24/06, Martijn van Oosterhout kleptog@svana.org wrote:
 On Fri, Feb 24, 2006 at 06:23:07PM +0100, Tomi NA wrote:
  I'm using PostgreSQL 8.1.2 on linux and want to load UTF-8 encoded varchars.
  While I can store and get at stored text correctly, the ORDER BY places all
  accented characters (Croatian, in this case - probably marked hr_HR) after
  non-accented characters. This is no showstopper, but it does affect the
  general perception of application quality.

 Collation is a function of the OS. Basically, is the locale of your
 database setup for UTF-8 collation? It would probably be called
 hr_HR.UTF-8.

You were right about this:

LC_ALL=hr_HR.UTF-8 sort < test.txt

(seemingly) collates the same way that pgsql does. Accented letters at the
end of the alphabet. I've tried hr_HR.UTF8 as well, without results.

Btw, my database is created with

CREATE DATABASE mydb WITH OWNER = postgres
  ENCODING = 'UTF8' TABLESPACE = pg_default;

 Yes, setup the locale correctly. In general, postgresql should give the
 same results as sort(1) on the command-line. Use that to experiment.

 LC_ALL=hr_HR.UTF-8 sort < input > output

I'm very sorry to report it does not work. :( Btw,

set | grep LC_

returns nothing... is this a possible source of the problem?

Tomislav


Re: [GENERAL] ltree + gist index performance degrades significantly over a night

2006-02-24 Thread Tom Lane
CG [EMAIL PROTECTED] writes:
 INFO:  index search_vector_idx now contains 1344672 row versions in 47725 
 pages
 INFO:  letter_search: found 9 removable, 1344661 nonremovable row versions 
 in 33984 pages

Yikes ... the index is bigger than the table!  Seems like you've not
been vacuuming enough, or else gist has got a major bloat problem.
If you REINDEX, does the index get materially smaller?
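Something along these lines would show it (a sketch; REINDEX takes an exclusive lock on the table while it runs):

```sql
SELECT pg_size_pretty(pg_relation_size('search_vector_idx'));  -- before
REINDEX INDEX search_vector_idx;
SELECT pg_size_pretty(pg_relation_size('search_vector_idx'));  -- after
```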

regards, tom lane

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] collation UTF-8

2006-02-24 Thread Tom Lane
Tomi NA [EMAIL PROTECTED] writes:
 You were right about this:
  LC_ALL=hr_HR.UTF-8 sort < test.txt
 (seemingly) collates the same way that pgsql does. Accented letters at the
 end of the alphabet. I've tried hr_HR.UTF8 as well, without results.

If you're not sure what locales are available on your system, run
locale -a.  I don't think sort will complain about an unknown locale
setting, it'll probably just fall back to C locale.

regards, tom lane

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Oracle purchases Sleepycat - is this the other shoe

2006-02-24 Thread Bruce Momjian
Bruno Wolff III wrote:
 On Fri, Feb 24, 2006 at 10:52:53 -0400,
   Marc G. Fournier [EMAIL PROTECTED] wrote:
  On Fri, 24 Feb 2006, Bruce Momjian wrote:
  
  Greatbridge had major funding, and succeeded in burning it off in, what, 
  12 months?
 
 It's been a long time, but I thought they still had a significant amount
 of money left when Greatbridge was shut down.

Right, they closed the company before the full promised amount was
spent, but they did spend millions, for sure.


-- 
  Bruce Momjian   http://candle.pha.pa.us
  SRA OSS, Inc.   http://www.sraoss.com

  + If your life is a hard drive, Christ can be your backup. +

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


[GENERAL] Backup file extension

2006-02-24 Thread Michael Schmidt

I am writing a client GUI application and am adding backup/restore
features. I noticed that different backup file extensions are used
for PostgreSQL - pgAdmin uses .backup (possible problem because it is not
consistent with 8.3 file names) and PG Lightning Admin uses .bak (possible
problem because it is generic). To reduce the chance of the user making an
error, I was wondering if it would make sense to standardize PostgreSQL backup
file extension names - something like .pgb (PostgreSQL Backup). I Googled
pgb and it doesn't look like anything uses this extension.

Just a thought.

Michael Schmidt


Re: [GENERAL] Backup file extension

2006-02-24 Thread Scott Marlowe
On Fri, 2006-02-24 at 12:15, Michael Schmidt wrote:
 I am writing a client GUI application and am adding backup/restore
 features.  I noticed that different backup file extensions are used
 for PostgreSQL - pgAdmin uses .backup (possible problem because it is
 not consistent with 8.3 file names) and PG Lightning Admin uses .bak
 (possible problem because it is generic).  To reduce the chance of the
 user making an error, I was wondering if it would make sense to
 standardize PostgreSQL backup file extension names - something like
 .pgb (PostgreSQL Backup).  I Googled pgb and it doesn't look like
 anything uses this extension.
  

I've always used .sql.  Just saying.

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] ltree + gist index performance degrades significantly over a night

2006-02-24 Thread Martijn van Oosterhout
On Fri, Feb 24, 2006 at 09:44:37AM -0800, CG wrote:
 
 
 --- Martijn van Oosterhout kleptog@svana.org wrote:
 
  That's very odd. Like the other person said, do you vacuum and analyse?
  But my question is: is it using the index? What does EXPLAIN / EXPLAIN
  ANALYZE tell you?
 
 data=# explain select * from search where search_vector ~ '*.o.r.l.*'::lquery;

snip

 data=# explain analyze select * from search where search_vector ~
 '*.o.r.l.*'::lquery;

snip
  Total runtime: 197000.061 ms

Ouch! The index is obviously not very good in this case. Unfortunatly
it's not clear where the slowdown is. You'd probably need to recompile
postgresql with profiling to find exactly where it's going wrong.

Quick test though, if you disable the index (set enable_indexscan=off)
so it does a seqscan, is it faster or slower? By how much? i.e. is it
helping much.
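A sketch of that test; note the plan shown earlier in the thread is a bitmap scan, so that scan type has to be disabled as well to force a seqscan:

```sql
SET enable_indexscan = off;
SET enable_bitmapscan = off;
EXPLAIN ANALYZE
    SELECT * FROM search WHERE search_vector ~ '*.o.r.l.*'::lquery;
RESET enable_indexscan;
RESET enable_bitmapscan;
```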

Have a nice day,
-- 
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
 Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a
 tool for doing 5% of the work and then sitting around waiting for someone
 else to do the other 95% so you can sue them.


signature.asc
Description: Digital signature


[GENERAL] phantom backends

2006-02-24 Thread Ed L.
I have a bunch of 7.4.6 backend processes that show up via 
pg_stat_get_db_numbackends(), pg_stat_get_backend_idset(), etc, 
but do not show up in ps.  Any clues?

Thanks,
Ed

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] How much clustered?

2006-02-24 Thread Bruce Momjian
Tom Lane wrote:
 Carlos Henrique Reimer [EMAIL PROTECTED] writes:
  I would like to know how much clustered a table is relative to some
  index. How can I discover this?
 
 You could do
   select ctid from mytable order by indexcolumns
 and then do whatever sort of calculation strikes your fancy on the
 sequence of page numbers.  (It's probably fair to ignore the row
 numbers, considering an index to be fully clustered if the page
 reference sequence is perfect.)
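As a shortcut, ANALYZE already maintains a per-column clustering estimate that can be read straight from the statistics view (values near 1 or -1 mean the heap is well ordered on that column; the table name is hypothetical):

```sql
SELECT attname, correlation
FROM pg_stats
WHERE schemaname = 'public' AND tablename = 'mytable';
```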

Currently we output the ctid as a string:

    snprintf(buf, sizeof(buf), "(%u,%u)", blockNumber, offsetNumber);

Perhaps someday we should consider outputting that as an array or a
result set:

test=> select x from (select 1, 2) as x;
   x
---
 (1,2)
(1 row)

-- 
  Bruce Momjian   http://candle.pha.pa.us
  SRA OSS, Inc.   http://www.sraoss.com

  + If your life is a hard drive, Christ can be your backup. +

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


[GENERAL] Requesting LinuxWorld East staffers

2006-02-24 Thread Joshua D. Drake

Hello,

We need booth staffers for Linux World east. Please step up.

Joshua D. Drake

--
The PostgreSQL Company - Command Prompt, Inc. 1.503.667.4564
PostgreSQL Replication, Consulting, Custom Development, 24x7 support
Managed Services, Shared and Dedicated Hosting
Co-Authors: plPHP, plPerlNG - http://www.commandprompt.com/

---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] Wrong length of bytea field

2006-02-24 Thread Andrus Moor
I inserted two binary files into a bytea field named contents

file1  size was 7834649 bytes
file2 size was 5888534 bytes

select filename,
octet_length(contents),
length(contents)
from localfil

returns

file1  ;16777184;16777184
file2  ;15768893;15768893

It seems that for some reason postgres sometimes returns the escaped string
size (e.g. a single byte 0x00 uploaded as \\000 is reported as size 5).

Why is the result incorrect?
How do I get the actual size of a bytea field?

Using Postgres 8.1 in XP , cluster encoding is UTF-8

Andrus.




---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Wrong length of bytea field

2006-02-24 Thread Michael Fuhr
On Fri, Feb 24, 2006 at 10:32:58PM +0200, Andrus Moor wrote:
 It seems that for some reason postgres returns sometimes escaped string size 
 (eq. single byte 0x00 uploaded as \\000 size is returned as 5 )

I'd guess that some reason is because the data was over-escaped
when it was inserted -- that is, escaped and then escaped again.
Try inserting a small file and querying for the bytea column and
see what you get.

CREATE TABLE foo (data bytea);

INSERT INTO foo VALUES ($$\000$$);
INSERT INTO foo VALUES ($$\\000$$);
INSERT INTO foo VALUES ($$\\\000$$);
INSERT INTO foo VALUES ($$000$$);

SELECT data, octet_length(data) FROM foo;
  data   | octet_length 
-+--
 \000|1
 \\000   |4
 \\\000  |2
 000 |5
(4 rows)

-- 
Michael Fuhr

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


[GENERAL] How to tell how much of the database is being used for data.

2006-02-24 Thread Steve Oualline
Question: How can I tell how much free space is in the database itself?


Background: We want to keep as many records as possible in the
database. Currently we fill the database until the disk
usage reaches 80%, then delete and VACUUM FULL the tables.

We'd like to just VACUUM, but only VACUUM FULL is sure to
release disk space to the operating system. However, if we knew
how much space was free in the database itself, we could judge how
many new records we could dump into it.

Any help would be appreciated.
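For the "how big is it" half of the problem, 8.1's built-in size functions report allocated disk space (a sketch; older releases shipped similar functions in contrib/dbsize, and these report allocation, not internal free space - contrib/pgstattuple can estimate dead space per table):

```sql
SELECT pg_size_pretty(pg_database_size(current_database()));
SELECT pg_size_pretty(pg_relation_size('mytable'));  -- hypothetical table
```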





Re: [GENERAL] Requesting LinuxWorld East staffers

2006-02-24 Thread brew

Joshua.

 We need booth staffers for Linux World east. Please step up.

Is that for Command Prompt or postgreSQL?  Both, probably.

I'm confused about Linux World east, though.  I googled it and it looks
like it was in Boston last month.

brew

 ==
  Strange Brew   ([EMAIL PROTECTED])
  Check out my Stock Option Covered Call website  http://www.callpix.com
 and my Musician's Online Database Exchange http://www.TheMode.com
 ==


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Requesting LinuxWorld East staffers

2006-02-24 Thread Joshua D. Drake

[EMAIL PROTECTED] wrote:

Joshua.

  

We need booth staffers for Linux World east. Please step up.



Is that for Command Prompt or postgreSQL?  Both, probably.
  

It is for PostgreSQL.Org. I am coordinating the booth.


I'm confused about Linux World east, though.  I googled it and it looks
like it was in Boston last month.
  

It is next month.


Joshua D. Drake

brew

 ==
  Strange Brew   ([EMAIL PROTECTED])
  Check out my Stock Option Covered Call website  http://www.callpix.com
 and my Musician's Online Database Exchange http://www.TheMode.com
 ==


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly

  



---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] ...unknown user name or bad password error during install

2006-02-24 Thread Jean-Christophe Roux
Hi,

I am trying to install PostgreSQL 8.1.3 on Windows XP. This computer had
previous versions of postgresql installed but they have been uninstalled.
When the installer reaches the window Service Configuration, I keep the
default values and add a password. I am getting the error:

"Invalid username specified: Logon failure: unknown user name or bad password"

My understanding is that it is not normal. I searched a little and found
BUG #1873 by Lee Benson with an error quite similar to the one I am facing,
but that does not help me out much. Any idea on what I should do?

Thanks a lot in advance
JCR

Brings words and photos together (easily) with PhotoMail - it's free and
works with Yahoo! Mail.

[GENERAL] Another perplexity with PG rules

2006-02-24 Thread Ken Winter
I'm stumped on the following problem.  

Everything between the --- rows should be executable.

Please advise.

~ TIA
~ Ken

---

-- Here's a table:

CREATE TABLE public.person_h
(
person_id bigint DEFAULT nextval('pop_seq'::text),
effective_date_and_time timestamptz DEFAULT ('now'::text)::timestamp(6) with
time zone,
expiration_date_and_time timestamptz DEFAULT 'infinity'::timestamp with time
zone,
first_name varchar(255),
middle_names varchar(255),
last_name_prefix varchar(255),
last_name varchar(255),
name_suffix varchar(255),
preferred_full_name varchar(255),
preferred_business_name varchar(255),
user_name varchar(255),
_action varchar(32) DEFAULT 'preserve'::character varying,
CONSTRAINT pk_person_h_identifier_2 PRIMARY KEY (person_id,
effective_date_and_time)
);
-- Indexes
CREATE UNIQUE INDEX personal_data_px ON person_h USING btree (person_id,
effective_date_and_time);

-- Here's a view of that table plus a few ALTERs on the view:

CREATE OR REPLACE VIEW person AS
SELECT h.person_id AS person_id,
h.effective_date_and_time AS effective_date_and_time,
h.expiration_date_and_time AS expiration_date_and_time,
h.first_name AS first_name,
h.middle_names AS middle_names,
h.last_name_prefix AS last_name_prefix,
h.last_name AS last_name,
h.name_suffix AS name_suffix,
h.preferred_full_name AS preferred_full_name,
h.preferred_business_name AS preferred_business_name,
h.user_name AS user_name,
h._action AS _action
FROM person_h AS h
WHERE h.effective_date_and_time <= CURRENT_TIMESTAMP
AND h.expiration_date_and_time >= CURRENT_TIMESTAMP

ALTER TABLE person
ALTER COLUMN person_id
SET DEFAULT nextval('pop_seq'::text)
;
ALTER TABLE person
ALTER COLUMN effective_date_and_time
SET DEFAULT ('now'::text)::timestamp(6) with time zone
;
ALTER TABLE person
ALTER COLUMN expiration_date_and_time
SET DEFAULT 'infinity'::timestamp with time zone
;
ALTER TABLE person
ALTER COLUMN _action
SET DEFAULT 'preserve'::character varying

-- Here are a couple of rules on that view:

/*** Rule on_insert inserts the object's first history record into person_h.
***/
CREATE OR REPLACE RULE on_insert AS
ON INSERT TO person
DO INSTEAD (
/* Insert the row into the H table.
Effective and expiration dates take the defaults,
unless query overrides them. */
INSERT INTO person_h
( person_id,
effective_date_and_time,
first_name,
middle_names,
last_name_prefix,
last_name,
name_suffix,
preferred_full_name,
preferred_business_name,
user_name,
_action )
VALUES ( nextval('pop_seq'::text),
NEW.effective_date_and_time,
NEW.first_name,
NEW.middle_names,
NEW.last_name_prefix,
NEW.last_name,
NEW.name_suffix,
NEW.preferred_full_name,
NEW.preferred_business_name,
NEW.user_name,
NEW._action )
)
;

/*** Rule on_update_1_nothing meets the PostgreSQL requirement for one
unconditional UPDATE rule. ***/
CREATE OR REPLACE RULE on_update_1_nothing AS
ON UPDATE TO person
DO INSTEAD NOTHING
;
 
/*** Rule on_update_2_preserve_h inserts a new record with the old data into
history table person_h,
expires this record effective either now or at the effective time given in
the query,
and updates the current record as of the same time. ***/
CREATE OR REPLACE RULE on_update_2_preserve_h AS
ON UPDATE TO person
WHERE (
(OLD.person_id <> NEW.person_id
OR (OLD.person_id IS NULL AND NEW.person_id IS NOT NULL)
OR (OLD.person_id IS NOT NULL AND NEW.person_id IS NULL ))
OR (OLD.effective_date_and_time <> NEW.effective_date_and_time
OR (OLD.effective_date_and_time IS NULL AND
NEW.effective_date_and_time IS NOT NULL)
OR (OLD.effective_date_and_time IS NOT NULL AND
NEW.effective_date_and_time IS NULL ))
OR (OLD.first_name <> NEW.first_name
OR (OLD.first_name IS NULL AND NEW.first_name IS NOT NULL)
OR (OLD.first_name IS NOT NULL AND NEW.first_name IS NULL ))
OR (OLD.middle_names <> NEW.middle_names
OR (OLD.middle_names IS NULL AND NEW.middle_names IS NOT NULL)
OR (OLD.middle_names IS NOT NULL AND NEW.middle_names IS NULL ))
OR (OLD.last_name_prefix <> NEW.last_name_prefix
OR (OLD.last_name_prefix IS NULL AND NEW.last_name_prefix IS NOT
NULL)
OR (OLD.last_name_prefix IS NOT NULL AND NEW.last_name_prefix IS
NULL ))
OR (OLD.last_name <> NEW.last_name
OR (OLD.last_name IS NULL AND NEW.last_name IS NOT NULL)
OR (OLD.last_name IS NOT NULL AND NEW.last_name IS NULL ))
OR (OLD.name_suffix <> NEW.name_suffix
OR (OLD.name_suffix IS 
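
[Editorial aside: each of the verbose three-branch null checks in the rule's WHERE clause above can be collapsed with IS DISTINCT FROM, which treats NULL as an ordinary comparable value. A minimal sketch of the equivalence, assuming a server version that supports the operator:

```sql
-- Verbose form used in the rule:
--   OLD.first_name <> NEW.first_name
--   OR (OLD.first_name IS NULL AND NEW.first_name IS NOT NULL)
--   OR (OLD.first_name IS NOT NULL AND NEW.first_name IS NULL)
-- Compact equivalent:
--   OLD.first_name IS DISTINCT FROM NEW.first_name
SELECT NULL IS DISTINCT FROM 'a'  AS differ,  -- true: NULL vs. non-NULL
       NULL IS DISTINCT FROM NULL AS same;    -- false: both NULL
```

The same rewrite applies to every column pair in the condition.]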

Re: [GENERAL] How to specify infinity for intervals ?

2006-02-24 Thread Bruce Momjian
Karsten Hilbert wrote:
 Thanks to all for the suggestions.
 
 For the time being I will stay with using NULL.
 
 I will also stay with the hope that one day before long we
 will have 'infinite'::interval.

We have this TODO:

o Allow infinite dates just like infinite timestamps

Do we need to add intervals to this?
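
[Editorial note: a sketch of the asymmetry under discussion, as observed on servers of the 8.1 era:

```sql
SELECT 'infinity'::timestamptz;  -- accepted: timestamps have an infinity value
SELECT 'infinity'::date;         -- error: invalid input syntax (hence the TODO)
SELECT 'infinity'::interval;     -- error: invalid input syntax (the request here)
```
]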

-- 
  Bruce Momjian   http://candle.pha.pa.us
  SRA OSS, Inc.   http://www.sraoss.com

  + If your life is a hard drive, Christ can be your backup. +

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] psql is very slow

2006-02-24 Thread Christopher Browne
Oops! [EMAIL PROTECTED] (Michal Merta) was seen spray-painting on a wall:
 I have a psql 7.3.4, apache 2.0.40, perl  v5.8.0. Database is pretty
 big, (dump is about 100Megs).

 But all the operations are very, very slow.

 Is there any possibility to make postgresql quicker? (don't tell me to
 cut the database :))

Well, have you read the manual to follow the maintenance directions
that are recommended?

You're running a rather old version; 7.3 was neat enough stuff a
couple of years ago, but there have been several substantially
improved major versions since then.  An upgrade would seem wise.
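
[Editorial note: the maintenance directions referred to come down to regular vacuuming and statistics collection. A minimal routine, run as the database owner (the table name is a placeholder):

```sql
VACUUM ANALYZE;         -- reclaim dead rows and refresh planner statistics
REINDEX TABLE mytable;  -- optional: rebuild a bloated index (7.3 indexes do not
                        -- reclaim space on their own)
```
]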
-- 
let name=cbbrowne and tld=gmail.com in name ^ @ ^ tld;;
http://cbbrowne.com/info/slony.html
When marriage is outlawed, only outlaws will have inlaws. 

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] ecpg error -- DbPg_cre_sect_buf.pgc:1334: ERROR: syntax

2006-02-24 Thread Bruce Momjian
[EMAIL PROTECTED] wrote:
 I'm relatively new to database programming  I've inherited a system to
 maintain that contains lots of embedded SQL.  I'm attempting to port my C
 source code from RH 7.2 (with ecpg 2.8.0) to Mandrake 10.0 (with ecpg
 4.1.1).   ecpg 4.1.1 is generating an error which I did not get on the
 RH7.2 system with ecpg 2.8.0.  It's telling me I have a syntax error with
 a ROLLBACK RELEASE (DbPg_cre_sect_buf.pgc:1334: ERROR: syntax error at or
 near RELEASE).  Here's a snippet of the code in error...
 .
 .
 .
  1320 /* Ignore termination signal (if rcvd) - terminating anyway
 */
1321 signal(SIGTERM, SIG_IGN);
1322
1323 /* send a message to the client that download has
 terminated abnormally */
1324 DbgS1(10, DBG_DRW_3, sqlca.sqlerrm.sqlerrmc);
1325 DbgI1(10, DBG_DRW_3, -sqlca.sqlcode);
1326
1327 EXEC SQL WHENEVER SQLERROR CONTINUE;
1328 EXEC SQL CLOSE cms_sec_curr;
1329
1330 if (fpa_curs)
1331 {
1332 DbPgCC_sector_assign_status ();
1333 }
1334 EXEC SQL ROLLBACK RELEASE;
1335
1336 DbgS1(11, DBG_DRW_3,
1337 Closed Connection to the Database Due to Postgres
 Error);

Seems we had shift-reduce problems and had to remove this
Oracle-compatible syntax.  In preproc.y I see:

/*
 * for compatibility with ORACLE we will also allow the keyword RELEASE
 * after a transaction statement to disconnect from the database.
 */

/* We cannot do that anymore since it causes shift/reduce conflicts. */
ECPGRelease: TransactionStmt RELEASE

and we commented out the ECPGRelease state.  You can only use:

| RELEASE SAVEPOINT ColId
| RELEASE ColId  
| ROLLBACK opt_transaction TO SAVEPOINT ColId
| ROLLBACK opt_transaction TO ColId

Sorry about that.

-- 
  Bruce Momjian   http://candle.pha.pa.us
  SRA OSS, Inc.   http://www.sraoss.com

  + If your life is a hard drive, Christ can be your backup. +

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: Fw: [GENERAL] ecpg error -- DbPg_cre_sect_buf.pgc:1334: ERROR:

2006-02-24 Thread Bruce Momjian
[EMAIL PROTECTED] wrote:
 
 
 
 
 
 I think you should omit the word RELEASE after the
  ROLLBACK.
 
 Will this still result in the closed DB connection that the ROLLBACK
 RELEASE used to give me?

No, it will not.  I see no way to do that anymore using a single statement.
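
[Editorial note: the same effect can be had with two statements instead of one; a sketch, where DISCONNECT closes the connection that ROLLBACK RELEASE used to close:

```sql
EXEC SQL ROLLBACK;
EXEC SQL DISCONNECT CURRENT;
```
]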

-- 
  Bruce Momjian   http://candle.pha.pa.us
  SRA OSS, Inc.   http://www.sraoss.com

  + If your life is a hard drive, Christ can be your backup. +



Re: [GENERAL] phantom backends

2006-02-24 Thread Christopher Browne
Oops! [EMAIL PROTECTED] (Ed L.) was seen spray-painting on a wall:
 I have a bunch of 7.4.6 backend processes that show up via 
 pg_stat_get_db_numbackends(), pg_stat_get_backend_idset(), etc, 
 but do not show up in ps.  Any clues?

Possibly your statistics collector got overrun, and the last time it
was successfully updated, they were there?
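
[Editorial note: one way to cross-check what the collector last recorded against ps, using the same 7.4-era statistics functions (requires superuser and stats_command_string enabled for the activity column):

```sql
SELECT pg_stat_get_backend_pid(s.i)      AS pid,
       pg_stat_get_backend_activity(s.i) AS current_query
FROM pg_stat_get_backend_idset() AS s(i);
-- any pid listed here but absent from ps output is stale collector data
```
]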
-- 
(reverse (concatenate 'string moc.liamg @ enworbbc))
http://linuxdatabases.info/info/internet.html
If a man speaks in the forest and there is no woman to hear him, is he
still wrong?



Re: [GENERAL] Requesting LinuxWorld East staffers

2006-02-24 Thread Christopher Browne
Quoth [EMAIL PROTECTED]:
 Joshua.

 We need booth staffers for Linux World east. Please step up.

 Is that for Command Prompt or postgreSQL?  Both, probably.

 I'm confused about Linux World east, though.  I googled it and it looks
 like it was in Boston last month.

It could be one of two shows LinuxWorld is putting on:

1.  Boston, April 3-6
2.  Toronto, April 24-26

I expect they are referring to Boston...
-- 
output = (cbbrowne @ gmail.com)
http://linuxdatabases.info/info/postgresql.html
Sufficiently advanced incompetence is indistinguishable from malice.
-- Vernon Schryver



Re: [GENERAL] Backup file extension

2006-02-24 Thread Christopher Browne
After takin a swig o' Arrakan spice grog, [EMAIL PROTECTED] (Michael Schmidt) 
belched out:
 I am writing a client GUI application and am adding backup/restore
 features.  I noticed that different backup file extensions are used
 for PostgreSQL - pgAdmin uses .backup (possible problem because it
 is not consistent with 8.3 file names) and PG Lightning Admin uses
 .bak (possible problem because it is generic).  To reduce the chance
 of the user making an error, I was wondering if it would make sense
 to standardize PostgreSQL backup file extension names - something
 like .pgb (PostgreSQL Backup).  I Googled pgb and it doesn't look
 like anything uses this extension.

Well, Unix doesn't have any notion of extensions.  That's something
for legacy operating systems, like MVS, CP/M, MS/DOS, and such.

On modern OSes, they generally simply support having long names, and
you are free to use whatever prefix/suffix combinations you prefer.
-- 
let name=cbbrowne and tld=gmail.com in name ^ @ ^ tld;;
http://linuxdatabases.info/info/rdbms.html
Rules of the  Evil Overlord #128. I will not  employ robots as agents
of  destruction  if  there  is  any  possible way  that  they  can  be
re-programmed  or if their  battery packs  are externally  mounted and
easily removable. http://www.eviloverlord.com/



Re: [GENERAL] Error correction to FAQ

2006-02-24 Thread Bruce Momjian

Updated. Thanks.  I forgot that was up there.

---

Harald Armin Massa wrote:
 Within the FAQ, Point 4.5 there is:
 
 The size of the PostgreSQL database file containing this data can be
 estimated as 6.4 MB:
 
 685 database pages * 8192 bytes per page  =  5,611,520 bytes (5.6 MB)
 
 
 So, the detailed calculation results in 5,6MB, surrounding text is 6,4 MB.
 Please somebody with access change the first 6.4
 
 Thanks,
 
 Harald
 
 --
 GHUM Harald Massa
 persuadere et programmare
 Harald Armin Massa
 Reinsburgstraße 202b
 70197 Stuttgart
 0173/9409607
 -
 When I visit a mosque, I show my respect by taking off my shoes. I follow
 the customs, just as I do in a church, synagogue or other holy place. But if
 a believer demands that I, as a nonbeliever, observe his taboos in the
 public domain, he is not asking for my respect, but for my submission. And
 that is incompatible with a secular democracy.
 
-- 
  Bruce Momjian   http://candle.pha.pa.us
  SRA OSS, Inc.   http://www.sraoss.com

  + If your life is a hard drive, Christ can be your backup. +

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Requesting LinuxWorld East staffers

2006-02-24 Thread Joshua D. Drake

Christopher Browne wrote:

Quoth [EMAIL PROTECTED]:

Joshua.

We need booth staffers for Linux World east. Please step up.

Is that for Command Prompt or postgreSQL?  Both, probably.

I'm confused about Linux World east, though.  I googled it and it looks
like it was in Boston last month.

It could be one of two shows LinuxWorld is putting on:

1.  Boston, April 3-6

Boston

--
The PostgreSQL Company - Command Prompt, Inc. 1.503.667.4564
PostgreSQL Replication, Consulting, Custom Development, 24x7 support
Managed Services, Shared and Dedicated Hosting
Co-Authors: PLphp, PLperl - http://www.commandprompt.com/




Re: [GENERAL] ECPG and COPY

2006-02-24 Thread Bruce Momjian
Michael Meskes wrote:
 Am Freitag, 24. Februar 2006 05:26 schrieb Wes:
  I found this in the archives:
 
http://archives.postgresql.org/pgsql-interfaces/2004-04/msg5.php
 
  With 8.x, is it still true that with ECPG, it is not possible to use COPY
  inline - I'd have to write the data to a file, then 'COPY FROM filename'
  (or use direct libpq API calls instead of ECPG)?
 
 Yes, it's still an open report. Sorry, about that and all the other open bug 
 reports/feature requests. I do not have the time at the moment to even 
 reproduce bugs. Hopefully this will become better in the near future.

Should we add this to TODO?  Anything else?
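
[Editorial note: until inline COPY works from ECPG, the workaround described in the quoted message can be sketched as follows (the table name and path are placeholders; the server-side form requires superuser in 8.x):

```sql
-- server-side: the path is read by the backend on the server machine
COPY mytable FROM '/tmp/mytable.dat';
-- client-side alternative from psql, which streams a local file over the
-- connection without touching the server's filesystem:
-- \copy mytable FROM 'mytable.dat'
```
]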

-- 
  Bruce Momjian   http://candle.pha.pa.us
  SRA OSS, Inc.   http://www.sraoss.com

  + If your life is a hard drive, Christ can be your backup. +



Re: [GENERAL] How to specify infinity for intervals ?

2006-02-24 Thread Michael Glaesemann


On Feb 25, 2006, at 12:09 , Bruce Momjian wrote:


We have this TODO:

o Allow infinite dates just like infinite timestamps

Do we need to add intervals to this?


I think, to be consistent with the other datetime types, we might as
well. Then one would be able to get an answer to:


test=# select 'infinity'::timestamp - current_timestamp;
ERROR:  cannot subtract infinite timestamps

Michael Glaesemann
grzm myrealbox com

