Re: [GENERAL] Postgres mystery

2005-03-29 Thread Michael Fuhr
On Wed, Mar 30, 2005 at 09:11:09AM +0200, Shaun Clements wrote:
> Can anyone tell me what the problem is here:
> I am inserting into a table via a stored procedure, to a few columns within
> the table and postgres is throwing a 
> CANNOT EXECUTE NULL QUERY.
> 
> EXECUTE ''INSERT INTO table (column1, column2, column3,
> ''||quote_ident(column4)||'') values
> (''||quote_literal(RECORDNAME.column1)||'',''||quote_literal(RECORDNAME.column2)||'',stringvalue,''||quote_literal(RECORDNAME.column2)||'')'';

One of the operands to || is probably NULL, so the entire INSERT
string ends up being NULL.  Example:

SELECT 'abc' || 'def';
 ?column? 
----------
 abcdef
(1 row)

SELECT 'abc' || NULL;
 ?column? 
----------
 
(1 row)

Looks like you need to check for NULL or use COALESCE to convert
NULL to something else.
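For example, a sketch using the same hypothetical names as the query above (note that quote_literal() itself returns NULL for a NULL input, so the fallback must supply the keyword as a string):

```sql
-- Hypothetical sketch: COALESCE each piece so a NULL value becomes the
-- SQL keyword NULL instead of nulling out the whole assembled string.
EXECUTE ''INSERT INTO table (column1, column2) VALUES (''
    || COALESCE(quote_literal(RECORDNAME.column1), ''NULL'') || '', ''
    || COALESCE(quote_literal(RECORDNAME.column2), ''NULL'') || '')'';
```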

-- 
Michael Fuhr
http://www.fuhr.org/~mfuhr/

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] Postgres mystery

2005-03-29 Thread Richard Huxton
Shaun Clements wrote:
Can anyone tell me what the problem is here:
I am inserting into a table via a stored procedure, to a few columns within
the table and postgres is throwing a 
CANNOT EXECUTE NULL QUERY.


EXECUTE ''INSERT INTO table (column1, column2, column3,
''||quote_ident(column4)||'') values
(''||quote_literal(RECORDNAME.column1)||'',''||quote_literal(RECORDNAME.column2)||'',stringvalue,''||quote_literal(RECORDNAME.column2)||'')'';

Is this a bug, as I am ensuring that the NOT NULL columns are supplied, as
well as one other.
I'm guessing one of your variables is null. Try explicitly checking all 
of those.

--
  Richard Huxton
  Archonet Ltd


Re: [GENERAL] Postgres mystery

2005-03-29 Thread Klint Gore
On Wed, 30 Mar 2005 09:11:09 +0200, Shaun Clements <[EMAIL PROTECTED]> wrote:
> Can anyone tell me what the problem is here:
> I am inserting into a table via a stored procedure, to a few columns within
> the table and postgres is throwing a 
> CANNOT EXECUTE NULL QUERY.
> 
> 
> EXECUTE ''INSERT INTO table (column1, column2, column3,
> ''||quote_ident(column4)||'') values
> (''||quote_literal(RECORDNAME.column1)||'',''||quote_literal(RECORDNAME.column2)||'',stringvalue,''||quote_literal(RECORDNAME.column2)||'')'';
> 
> 
> Is this a bug, as I am ensuring that the NOT NULL columns are supplied, as
> well as one other.

At least one of column4, recordname.column1, recordname.column2,
recordname.column2 is null.  If you append a null to a string, the
result is null.

klint.

Klint Gore
EMail : [EMAIL PROTECTED]
Snail : A.B.R.I., University of New England,
        Armidale NSW 2351 Australia
Fax   : +61 2 6772 5376
"Non rhyming slang - the possibilities are useless" - L.J.J.



Re: [GENERAL] Zombie processes

2005-03-29 Thread Richard Huxton
Peterson, Bjorn wrote:
I am running Postgres 8.0.1 on a Windows 2000 server as a service, and the
query returns appropriate values after about 30 seconds when executed from a
psql console.  However, when running this query from a Java application on
the same machine through the postgres JDBC driver, the Java app hangs on
st.executeQuery().  The query shows up in pg_stat_activity for about 3
minutes and then goes away, but the process referenced by the pg_stat_activity
entry remains active and consumes 50% of the CPU indefinitely until I
kill it off.  I let it run for over an hour yesterday.  I can reproduce this
every time I run this query.
Two things you should start with:
1. Turn query-logging on in your postgresql.conf so we can see exactly 
what is happening
2. See if you can simulate this using PREPARE/EXECUTE for the query.

I'm guessing that the Java side of things is producing a prepared query 
and substituting parameters in. That might produce a different plan.

First thing is to turn query logging on though, that way you can check 
exactly what is being executed.
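Suggestion 2 could look something like this from psql (table, column, and parameter type are placeholders for your real query):

```sql
-- Prepare a parameterized form of the query, roughly what the JDBC
-- driver does, then compare its plan against the literal version.
PREPARE q(integer) AS SELECT * FROM some_table WHERE some_col = $1;
EXPLAIN EXECUTE q(42);
EXPLAIN SELECT * FROM some_table WHERE some_col = 42;
DEALLOCATE q;
```

If the two plans differ, that would point at the prepared-statement theory.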
--
  Richard Huxton
  Archonet Ltd



Re: [GENERAL] Upgrade data

2005-03-29 Thread Richard Huxton
josue wrote:
Hello list,
I need to upgrade my dbs from 7.4.3 to 8.0.1; current data size is around 
5GB. I've tried this way:

 ./pg_dump -d dbtest -p 9980 | ./psql -d template1 -p 9981
but it is too slow. Any idea or suggestion on how to properly upgrade my 
dbs? I also have blobs stored there.
Try pg_dump followed by pg_restore - both using version 8.0.1. If that's 
still too slow, the only other option is to dump/restore and then set up 
replication between the old and new databases to bring your new system 
up to date. Slony can be used for this.
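A sketch of the dump/restore route (ports and the db name are taken from your command; -Fc produces a custom-format archive that pg_restore can read, and -b includes blobs):

```shell
# Dump from the old cluster with the 8.0.1 binaries in custom format,
# then restore into the new cluster. Adjust ports/names as needed.
./pg_dump -Fc -b -p 9980 dbtest > dbtest.dump
./createdb -p 9981 dbtest
./pg_restore -p 9981 -d dbtest dbtest.dump
```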

--
  Richard Huxton
  Archonet Ltd


[GENERAL] Postgres mystery

2005-03-29 Thread Shaun Clements
Can anyone tell me what the problem is here:
I am inserting into a table via a stored procedure, to a few columns within the table and postgres is throwing a 
CANNOT EXECUTE NULL QUERY.



EXECUTE ''INSERT INTO table (column1, column2, column3, ''||quote_ident(column4)||'') values (''||quote_literal(RECORDNAME.column1)||'',''||quote_literal(RECORDNAME.column2)||'',stringvalue,''||quote_literal(RECORDNAME.column2)||'')'';




Is this a bug, as I am ensuring that the NOT NULL columns are supplied, as well as one other.


Kind Regards,
Shaun Clements





Re: [GENERAL] How to return a record and sets of record in plpython

2005-03-29 Thread Sim Zacks
As far as I understand (though I hope someone will post that I'm wrong),
plpython cannot return recordsets.
In general, I use plpgsql when I want recordsets and plpython when I want
functionality.
I have also called plpython functions from a plpgsql function when I wanted
a single value result.
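A sketch of that split, with all names hypothetical (8.0-era PL/pgSQL quoting shown): keep the set-returning part in PL/pgSQL with RETURN NEXT, and call a PL/Python function for the per-row work.

```sql
-- plpgsql does the set-returning part, calling a hypothetical
-- plpython scalar function py_bonus() for each row of EMP.
CREATE FUNCTION emp_with_bonus() RETURNS SETOF emp AS '
DECLARE
    r emp%ROWTYPE;
BEGIN
    FOR r IN SELECT * FROM emp LOOP
        r.salary := py_bonus(r.salary);  -- plpython scalar function
        RETURN NEXT r;
    END LOOP;
    RETURN;
END;
' LANGUAGE plpgsql;

SELECT * FROM emp_with_bonus();
```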



"RL" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]
> Hi,
>
> I am trying to implement a few functions in PL/Python (in PostgreSQL 8.0.1).
> However, I could not find the solution in the documentation, so I would
> appreciate your help. My question is how to return a record and sets of
> records in PL/Python, if it's possible. For instance, if we have a table
> named EMP
>
> CREATE TABLE EMP ( name text, salary integer, age integer )
>
> according to the documentation, the following code will return a composite
> type:
>
> CREATE FUNCTION new_emp() RETURNS EMP AS '
>SELECT text "None" AS name,
>   1000 AS salary,
>   25 AS age;
> ' LANGUAGE SQL;
>
> and
>
> "RETURN NEXT" in Pl/pgsql allows us to return multiple rows
>
> How am I going to do the same thing in plpython?
> Thanks,
>
> Ruey-Lung Hsiao






Re: [GENERAL] 8.0.2beta1

2005-03-29 Thread Bruce Momjian

Sorry, I have been in Boston since Sunday and return home tomorrow and
will then try to get the release notes done unless someone else gets to
it first.

---

Tom Lane wrote:
> "Cristian Prieto" <[EMAIL PROTECTED]> writes:
> > What are the differences, bugfixes, features, etc in the 8.0.2beta1? I
> > was looking around and I couldn't find anything related... Could
> > somebody help me?
> 
> Unfortunately we haven't prepared release notes yet.  If you want to do
> testing, what I would look for is performance changes.  The main stuff
> in 8.0.2 (other than bugfixes for very specific bugs) is:
> 
> * Replace ARC buffer management algorithm by 2Q to avoid pending IBM patent.
>   Testing so far says this causes a few percent degradation in some
>   cases and no visible impact in others.  We'd be interested to hear
>   more reports.
> 
> * Planner tweaks to avoid bad side-effects of the 8.0 change that made
>   the planner use the current physical table size instead of what
>   pg_class.relpages says.  That's usually a win but caused problems if
>   a plan is cached when the table is first created (and so is empty).
> 
>   regards, tom lane
> 

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  pgman@candle.pha.pa.us   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073



Re: [GENERAL] do I need replication or something else?

2005-03-29 Thread Christopher Browne
Centuries ago, Nostradamus foresaw when [EMAIL PROTECTED] (Caleb 
Simonyi-Gindele) would write:
> John Burger wrote:
 If it were me, and someone proposed a model where two-way
 replication was needed, I would tell them to rethink their model.
 It's broken.
>>>
>>> I would respectfully disagree that the requirement for two-way
>>> replication indicates a broken design.
>>
>>
>> I agree with your disagreement.  This design is present in lots of
>> non-RDB systems - CVS, IMAP, PDA syncing, etc.  It's clearly more
>> complicated, but can be made to work, and has been many times.  I
>> don't see anything about databases in general, or Postgres
>> specifically, that indicates it's a bad idea.
>>
>> - John D. Burger
>>   MITRE
>>
> Yes, we use it successfully with the SQL Server edition of our
> product. Does anyone know if this is available with Postgre?

There's no such thing as "Postgre," so there's a paucity of features
available for that...

If you're thinking of PostgreSQL, the only system I am aware of that
offers a similar form of "highly asynchronous multi master with
conflict avoidance/resolution" is PeerDirect's replication system.
-- 
let name="cbbrowne" and tld="gmail.com" in String.concat "@" [name;tld];;
http://linuxdatabases.info/info/slony.html
"Python's minimalism is attractive to people  who like minimalism.  It
is decidedly unattractive to people who  see Python's minimalism as an
exercise in masochism." -- Peter Hickman, comp.lang.ruby



Re: [GENERAL] Possible to run the server with ANSI/ISO string

2005-03-29 Thread Ken Johanson
The latest mysql build (5.0.3) now supports standard backslash behavior, 
using the below config option.

set-variable=sql-mode=PIPES_AS_CONCAT,ANSI_QUOTES,NO_BACKSLASH_ESCAPES

PG seems to be the last holdout. :-)
Ken
Ken Johanson wrote:
[snip]
I think most people agree that being SQL compliant is good. The question
is: is it worth the pain for existing users?

My guess is that it is worth it, if users are given the discretion of 
treading that water, and it would save them future pain by encouraging 
them to migrate toward 'other-db' compatibility (or merely to migrate to 
PreparedStatement to eliminate worry and *ensure* interop).

But where things are right now, I *know* a lot of apps are specially coded 
for PG (or mysql) --- using functions like PHP's escapeCslashes()... so 
they are NOT compatible with other DBs. Making the change would at 
least raise author awareness to use PreparedStatements instead (half 
the battle is won then, because when an admin turns on the new escape, 
their apps still work correctly), or to stop using escapeCslashes in favor 
of a sql-escape function (yes, not 'old pg' compatible, but able to 
claim interop with other dbs).

A configurable option does not make the pain disappear. Admins are
forced to choose one side (either sql compliant or c style) and exclude
the other applications. Any app developer that wants to support pre-8.1
apps will have to have a c-style app available. So even if you nip it in
the bud, it's not really gone yet because app developers want to support
old versions of postgres.

As was mentioned earlier, this may not be too much of an issue if the 
new drivers supported an option in the getConnection call that turned on 
the new escape, otherwise leaving old escape turned on by default. Sort 
of like the jdbc version/conformance level that jdbc drivers can report 
through function calls. In fact PG could forever use the old style 
escapes by default, except when a modern driver connected to it and they 
both agree to use the new style.

I know if we added the option and deprecated the old style, I would be
forced to choose between using deprecated syntax that may not be
supported for long, or doing a lot of work to convert and retest
applications.
Yes - and your app would be interoperable with Oracle, Sybase, etc., and 
have a wider audience (a moot point if you use prepared statements, 
obviously), especially in the enterprise... Very worthwhile, imo.


Besides, the version-deprecation / version requirements you mention 
exist in every piece of software I've ever seen. Sometimes they're 
okay with a really old version, sometimes only the newest will do. 
This is the very argument for getting PG to offer a (use-optional) 
escape behavior in line with the rest - to mitigate these version 
requirements down the road.


I think you may have misunderstood what I meant. I am not suggesting
that we don't change the database at all between versions, my argument
was showing the difficulties when one version has many different shapes
due to many incompatible options.

Sorry, I misunderstood. Your point is well taken, and I agree.
Thank you,
ken



Re: [GENERAL] sub query constraint

2005-03-29 Thread Dale Sykora
Yudie Pg wrote:
CREATE OR REPLACE FUNCTION validate_actions_insert() RETURNS OPAQUE AS $$
DECLARE
    rs RECORD;
BEGIN
    SELECT INTO rs * FROM user_data WHERE name = NEW.user AND write_access = 't';

    IF NOT FOUND THEN
        RAISE EXCEPTION 'writing access forbidden for user %', NEW.user;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER tg_actions BEFORE INSERT OR UPDATE ON actions
FOR EACH ROW EXECUTE PROCEDURE validate_actions_insert();
Yudie,
	I made a few minor modifications as shown above and the trigger 
function works great.  I think I'll also use triggers to keep a history 
of record changes for auditing.

Thanks,
Dale


Re: [GENERAL] Oracle Migration. Don't Care about the Corbomite Maneuver,

2005-03-29 Thread Joshua D. Drake
Mohan, Ross wrote:
Hello Postgres'ers, 

Oracle newbie to PG here, curious about best books and online
resources. A *lot* of what I am finding in google is one to four
year old p*ssing contest with MySQL vs. Postgres (don't care)
or "Why doesn't PostGres support five level hypergalactic triple ACID
Phase Inverters on the Debian/Slack Leenucks 3.0(a) kernel" (also
don't care, at least yet ;-)) 

Just have a small oracle db to move over to PG. And I want it
to go well, be robust, manageable etc. So...that's the kind
of book/community I am looking for. 

Any pointers? 
 

If you don't currently use stored procedures, your data set
should be fairly easy to move over. You just have to change
some data types to match correctly.
I had a paper around here somewhere about it; I will see if I can
dig it up.
Sincerely,
Joshua D. Drake

Thanks In Advance!
Ross Mohan
Postgres Rookie


--
Command Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC
Postgresql support, programming shared hosting and dedicated hosting.
+1-503-667-4564 - [EMAIL PROTECTED] - http://www.commandprompt.com
PostgreSQL Replicator -- production quality replication for PostgreSQL




Re: [GENERAL] do I need replication or something else?

2005-03-29 Thread Tony Caduto
If you are using Delphi you can use TClientDataSet, which has a 
briefcase-model system built in.
Or you can use one of the many middleware systems that are available for 
Delphi, all of which will do what you want.
www.remobjects.com
www.astatech.com
etc etc

Most of them work by creating a local XML dataset, then applying that XML 
dataset when the client logs back in; the middleware handles the conflict 
resolution etc.

Delphi really is one of THE best ways to develop database apps.
Tony
Tony
Scott Marlowe wrote:
> On Tue, 2005-03-29 at 16:06, Caleb Simonyi-Gindele wrote:
>> John Burger wrote:
>>>>> If it were me, and someone proposed a model where two-way replication
>>>>> was needed, I would tell them to rethink their model.  It's broken.
>>>>
>>>> I would respectfully disagree that the requirement for two-way
>>>> replication indicates a broken design.
>>>
>>> I agree with your disagreement.  This design is present in lots of
>>> non-RDB systems - CVS, IMAP, PDA syncing, etc.  It's clearly more
>>> complicated, but can be made to work, and has been many times.  I
>>> don't see anything about databases in general, or Postgres
>>> specifically, that indicates it's a bad idea.
>>>
>>> - John D. Burger
>>>   MITRE
>>
>> Yes, we use it successfully with the SQL Server edition of our product.
>> Does anyone know if this is available with Postgre?
>
> It's important to understand that what you're asking for is MORE than
> simple replication, it is replication with ((semi)automatic) conflict
> resolution.  If you use a simple replication system to try and do this,
> you are likely to wind up with inconsistent data.
>
> Just because SQL Server does it doesn't mean it does it right.  And the
> general philosophy of the PostgreSQL team seems to be do it right or
> don't bother.
>
> So, what are the chances that you'll have records on your sales folks'
> machines that have also been updated back at the home office?  What
> rules should be applied when conflicts arise?  These are the kinds of
> questions you need to answer before jumping feet first into the fire and
> getting burnt.
 




Re: [GENERAL] Oracle Migration. Don't Care about the Corbomite Maneuver, Spock.

2005-03-29 Thread Dann Corbit

Probably there is a more recent link.  This is the one that popped up in
Google:
http://www.postgresql.org/docs/7.3/interactive/plpgsql-porting.html

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Mohan, Ross
Sent: Tuesday, March 29, 2005 4:35 PM
To: pgsql-general@postgresql.org
Subject: [GENERAL] Oracle Migration. Don't Care about the Corbomite
Maneuver, Spock. 

Hello Postgres'ers, 

Oracle newbie to PG here, curious about best books and online
resources. A *lot* of what I am finding in google is one to four
year old p*ssing contest with MySQL vs. Postgres (don't care)
or "Why doesn't PostGres support five level hypergalactic triple ACID
Phase Inverters on the Debian/Slack Leenucks 3.0(a) kernel" (also
don't care, at least yet ;-)) 

Just have a small oracle db to move over to PG. And I want it
to go well, be robust, manageable etc. So...that's the kind
of book/community I am looking for. 

Any pointers? 


Thanks In Advance!


Ross Mohan
Postgres Rookie



Re: [GENERAL] Oracle Migration. Don't Care about the Corbomite Maneuver, Spock.

2005-03-29 Thread Dann Corbit
Here's a newer link:
http://www.postgresql.org/docs/8.0/interactive/plpgsql-porting.html

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Mohan, Ross
Sent: Tuesday, March 29, 2005 4:35 PM
To: pgsql-general@postgresql.org
Subject: [GENERAL] Oracle Migration. Don't Care about the Corbomite
Maneuver, Spock. 

Hello Postgres'ers, 

Oracle newbie to PG here, curious about best books and online
resources. A *lot* of what I am finding in google is one to four
year old p*ssing contest with MySQL vs. Postgres (don't care)
or "Why doesn't PostGres support five level hypergalactic triple ACID
Phase Inverters on the Debian/Slack Leenucks 3.0(a) kernel" (also
don't care, at least yet ;-)) 

Just have a small oracle db to move over to PG. And I want it
to go well, be robust, manageable etc. So...that's the kind
of book/community I am looking for. 

Any pointers? 


Thanks In Advance!


Ross Mohan
Postgres Rookie



Re: [GENERAL] Oracle Migration. Don't Care about the Corbomite Maneuver, Spock.

2005-03-29 Thread Guy Rouillier
Mohan, Ross wrote:
> Hello Postgres'ers,
> 
> Oracle newbie to PG here, curious about best books and online
> resources. A *lot* of what I am finding in google is one to four year
> old p*ssing contest with MySQL vs. Postgres (don't care) or "Why
> doesn't PostGres support five level hypergalactic triple ACID Phase
> Inverters on the Debian/Slack Leenucks 3.0(a) kernel" (also don't
> care, at least yet ;-)) 
> 
> Just have a small oracle db to move over to PG. And I want it to go
> well, be robust, manageable etc. So...that's the kind of
> book/community I am looking for.  
> 
> Any pointers?

Having recently done this myself, check out ora2pg.  Very helpful, and
it will do much of the work for you.  There are also sections in the
documentation dealing with this particular topic (since so many people
want to do it).

> 
> 
> Thanks In Advance!
> 
> 
> Ross Mohan
> Postgres Rookie
> 



-- 
Guy Rouillier




[GENERAL] Oracle Migration. Don't Care about the Corbomite Maneuver, Spock.

2005-03-29 Thread Mohan, Ross
Hello Postgres'ers, 

Oracle newbie to PG here, curious about best books and online
resources. A *lot* of what I am finding in google is one to four
year old p*ssing contest with MySQL vs. Postgres (don't care)
or "Why doesn't PostGres support five level hypergalactic triple ACID
Phase Inverters on the Debian/Slack Leenucks 3.0(a) kernel" (also
don't care, at least yet ;-)) 

Just have a small oracle db to move over to PG. And I want it
to go well, be robust, manageable etc. So...that's the kind
of book/community I am looking for. 

Any pointers? 


Thanks In Advance!


Ross Mohan
Postgres Rookie



[GENERAL] database monitor

2005-03-29 Thread Edson Vilhena de Carvalho
I'm a student and I am working on a project where I am developing a
database monitor; PostgreSQL is one of the databases that my
application will monitor. I have only started working with PostgreSQL
recently and I am liking it.

I would like to know what you think is important and useful to monitor
in a PostgreSQL database. You can also give me your opinions about
things you think are important to monitor in any database system.

Thank you very much
Edson Carvalho








Re: [GENERAL] do I need replication or something else?

2005-03-29 Thread Scott Marlowe
On Tue, 2005-03-29 at 16:06, Caleb Simonyi-Gindele wrote:
> John Burger wrote:
> 
> >>> If it were me, and someone proposed a model where two-way replication
> >>> was needed, I would tell them to rethink their model.  It's broken.
> >>
> >>
> >> I would respectfully disagree that the requirement for two-way 
> >> replication
> >> indicates a broken design.
> >
> >
> > I agree with your disagreement.  This design is present in lots of 
> > non-RDB systems - CVS, IMAP, PDA syncing,  etc.  It's clearly more 
> > complicated, but can be made to work, and has been many times.  I 
> > don't see anything about databases in general, or Postgres 
> > specifically, that indicates it's a bad idea.
> >
> > - John D. Burger
> >   MITRE
> >
> Yes, we use it successfully with the SQL Server edition of our product. 
> Does anyone know if this is available with Postgre?

It's important to understand that what you're asking for is MORE than
simple replication, it is replication with ((semi)automatic) conflict
resolution.  If you use a simple replication system to try and do this,
you are likely to wind up with inconsistent data.

Just because SQL Server does it doesn't mean it does it right.  And the
general philosophy of the PostgreSQL team seems to be do it right or
don't bother.

So, what are the chances that you'll have records on your sales folks
machines that have also been updated back at the home office?  What
rules should be applied when conflicts arise?  These are the kinds of
questions you need to answer before jumping feet first into the fire and
getting burnt.



Re: [GENERAL] do I need replication or something else?

2005-03-29 Thread Yudie Pg
On Tue, 29 Mar 2005 16:00:37 -0500, John Burger <[EMAIL PROTECTED]> wrote:
> >> If it were me, and someone proposed a model where two-way replication
> >> was needed, I would tell them to rethink their model.  It's broken.
> >
> > I would respectfully disagree that the requirement for two-way
> > replication
> > indicates a broken design.
> 
> I agree with your disagreement.  This design is present in lots of
> non-RDB systems - CVS, IMAP, PDA syncing,  etc.  It's clearly more
> complicated, but can be made to work, and has been many times.  I don't
> see anything about databases in general, or Postgres specifically, that
> indicates it's a bad idea.
> 

I would suggest that whenever changes are made on the main db, whether by
a sync or by an immediate update from a user, it is better to archive the
changes into a separate table.
Archiving is quite simple: create a rule on update or delete of the table
that inserts the old record into a separate table. It will be useful for
further reconciliation.
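That can be sketched with rules like these (table names hypothetical; the history table is assumed to have the same columns as the main one):

```sql
-- Archive the old version of a row whenever it is updated or deleted.
CREATE RULE archive_on_update AS ON UPDATE TO main_table
    DO INSERT INTO main_table_history SELECT OLD.*;
CREATE RULE archive_on_delete AS ON DELETE TO main_table
    DO INSERT INTO main_table_history SELECT OLD.*;
```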



[GENERAL] Driver's SQLAllocHandle on SQL_HANDLE_DBC failed

2005-03-29 Thread Amir Zicherman
hi guys.  

i'm using microsoft odbc and i close and open a lot of connections
through my code on a regular basis.  for some reason, sometimes, not
always, i get the following error when i try to open a new connection
to the DB:  "ERROR [IM005] [Microsoft][ODBC Driver Manager] Driver's
SQLAllocHandle on SQL_HANDLE_DBC failed".

on the db side, the postgresql error log shows nothing about this.
any ideas why this is happening?

my postgresql version is: 7.4.6

Thank You, amir



Re: [GENERAL] do I need replication or something else?

2005-03-29 Thread Caleb Simonyi-Gindele

John Burger wrote:
>>> If it were me, and someone proposed a model where two-way replication
>>> was needed, I would tell them to rethink their model.  It's broken.
>>
>> I would respectfully disagree that the requirement for two-way
>> replication indicates a broken design.
>
> I agree with your disagreement.  This design is present in lots of
> non-RDB systems - CVS, IMAP, PDA syncing, etc.  It's clearly more
> complicated, but can be made to work, and has been many times.  I
> don't see anything about databases in general, or Postgres
> specifically, that indicates it's a bad idea.
>
> - John D. Burger
>   MITRE

Yes, we use it successfully with the SQL Server edition of our product.
Does anyone know if this is available with Postgre?

Caleb




[GENERAL] Upgrade data

2005-03-29 Thread josue
Hello list,
I need to upgrade my dbs from 7.4.3 to 8.0.1; current data size is around 
5GB. I've tried this way:

 ./pg_dump -d dbtest -p 9980 | ./psql -d template1 -p 9981

but it is too slow. Any idea or suggestion on how to properly upgrade my 
dbs? I also have blobs stored there.

Thanks in advance,
--
Sinceramente,
Josué Maldonado.
... "La amistad, como todo verdadero encuentro, es dar y recibir." P. 
Guisar.



Re: [GENERAL] do I need replication or something else?

2005-03-29 Thread John Burger
>> If it were me, and someone proposed a model where two-way replication
>> was needed, I would tell them to rethink their model.  It's broken.
>
> I would respectfully disagree that the requirement for two-way
> replication indicates a broken design.

I agree with your disagreement.  This design is present in lots of
non-RDB systems - CVS, IMAP, PDA syncing, etc.  It's clearly more
complicated, but can be made to work, and has been many times.  I don't
see anything about databases in general, or Postgres specifically, that
indicates it's a bad idea.

- John D. Burger
  MITRE



Re: [GENERAL] do I need replication or something else?

2005-03-29 Thread Guy Rouillier
Dann Corbit wrote:

> If it were me, and someone proposed a model where two-way replication
> was needed, I would tell them to rethink their model.  It's broken. 

I'm relatively new to PostgreSQL so won't comment about that.  But some
DBMSs have this feature built in because it is a fairly common usage
model (think traveling salespeople.)  I've personally used Watcom (now
Sybase) SQL Anywhere which has this and it works quite well.  I would
respectfully disagree that the requirement for two-way replication
indicates a broken design.

-- 
Guy Rouillier




Re: [GENERAL] 8.0.1 in a non-standard location and tsearch2

2005-03-29 Thread Oleg Bartunov
On Tue, 29 Mar 2005, Ben wrote:
Yes by reinstalling to the default location. :)
When I get a chance I'll try this again, but I've been too swamped to give
it a go.
I just tried installing pgsql into a non-standard location myself and 
had no problem.


On Tue, 29 Mar 2005, Oleg Bartunov wrote:
Did you resolve your problem ?
On Tue, 29 Mar 2005, Ben wrote:
Yes, I did.
On Fri, 25 Mar 2005, Oleg Bartunov wrote:
Did you try 'make clean' first ?
On Thu, 24 Mar 2005, Ben wrote:
I'm trying to install tsearch2 into an empty database on a new 8.0.1 postgres
install. The machine already has an older 7.4 install of postgres on it, so I
gave configure a --prefix=/usr/local/pg801 option. Postgres installed and
started fine (after changing the port), and I was able to create my new empty
database without issues.
Now comes the part where I fail to install tsearch2. I go to the
contrib/tsearch2 directory, run make and make install without issues. make
installcheck tries to connect to the older postgres install (I don't see an
option to set the port it attempts to use), so I try to pipe tsearch2.sql
into the new database. It starts working fine, and then says:
ERROR:  could not find function "tsvector_cmp" in file
"/usr/local/pgsql/lib/tsearch2.so"
 which is interesting, because it's not trying to use /usr/local/pg801/
like it's supposed to.
Thoughts?
---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
 http://www.postgresql.org/docs/faq
Regards,
Oleg
_
Oleg Bartunov, sci.researcher, hostmaster of AstroNet,
Sternberg Astronomical Institute, Moscow University (Russia)
Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/
phone: +007(095)939-16-83, +007(095)939-23-83
---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
Regards,
Oleg
_
Oleg Bartunov, sci.researcher, hostmaster of AstroNet,
Sternberg Astronomical Institute, Moscow University (Russia)
Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/
phone: +007(095)939-16-83, +007(095)939-23-83

Regards,
Oleg
_
Oleg Bartunov, sci.researcher, hostmaster of AstroNet,
Sternberg Astronomical Institute, Moscow University (Russia)
Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/
phone: +007(095)939-16-83, +007(095)939-23-83
---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
 joining column's datatypes do not match


Re: [GENERAL] 8.0.1 in a non-standard location and tsearch2

2005-03-29 Thread Ben
Yes by reinstalling to the default location. :)

When I get a chance I'll try this again, but I've been too swamped to give 
it a go.

On Tue, 29 Mar 2005, Oleg Bartunov wrote:

> Did you resolve your problem ?
> 
> On Tue, 29 Mar 2005, Ben wrote:
> 
> > Yes, I did.
> >
> > On Fri, 25 Mar 2005, Oleg Bartunov wrote:
> >
> >> Did you try 'make clean' first ?
> >>
> >> On Thu, 24 Mar 2005, Ben wrote:
> >>
> >>> I'm trying to install tsearch2 into an empty database on a new 8.0.1 
> >>> postgres
> >>> install. The machine already has an older 7.4 install of postgres on it, 
> >>> so I
> >>> gave configure a --prefix=/usr/local/pg801 option. Postgres installed and
> >>> started fine (after changing the port), and I was able to create my new 
> >>> empty
> >>> database without issues.
> >>>
> >>> Now comes the part where I fail to install tsearch2. I go to the
> >>> contrib/tsearch2 directory, run make and make install without issues. make
> >>> installcheck tries to connect to the older postgres install (I don't see 
> >>> an
> >>> option to set the port it attempts to use), so I try to pipe tsearch2.sql
> >>> into the new database. It starts working fine, and then says:
> >>>
> >>> ERROR:  could not find function "tsvector_cmp" in file
> >>> "/usr/local/pgsql/lib/tsearch2.so"
> >>>
> >>>
> >>>  which is interesting, because it's not trying to use 
> >>> /usr/local/pg801/
> >>> like it's supposed to.
> >>>
> >>>
> >>> Thoughts?
> >>>
> >>>
> >>> ---(end of broadcast)---
> >>> TIP 5: Have you checked our extensive FAQ?
> >>>
> >>>  http://www.postgresql.org/docs/faq
> >>>
> >>
> >>Regards,
> >>Oleg
> >> _
> >> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,
> >> Sternberg Astronomical Institute, Moscow University (Russia)
> >> Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/
> >> phone: +007(095)939-16-83, +007(095)939-23-83
> >>
> >> ---(end of broadcast)---
> >> TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
> >>
> >
> >
> >
> > ---(end of broadcast)---
> > TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
> >
> 
>   Regards,
>   Oleg
> _
> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,
> Sternberg Astronomical Institute, Moscow University (Russia)
> Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/
> phone: +007(095)939-16-83, +007(095)939-23-83
> 



---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] 8.0.1 in a non-standard location and tsearch2

2005-03-29 Thread Oleg Bartunov
Did you resolve your problem ?
On Tue, 29 Mar 2005, Ben wrote:
Yes, I did.
On Fri, 25 Mar 2005, Oleg Bartunov wrote:
Did you try 'make clean' first ?
On Thu, 24 Mar 2005, Ben wrote:
I'm trying to install tsearch2 into an empty database on a new 8.0.1 postgres
install. The machine already has an older 7.4 install of postgres on it, so I
gave configure a --prefix=/usr/local/pg801 option. Postgres installed and
started fine (after changing the port), and I was able to create my new empty
database without issues.
Now comes the part where I fail to install tsearch2. I go to the
contrib/tsearch2 directory, run make and make install without issues. make
installcheck tries to connect to the older postgres install (I don't see an
option to set the port it attempts to use), so I try to pipe tsearch2.sql
into the new database. It starts working fine, and then says:
ERROR:  could not find function "tsvector_cmp" in file
"/usr/local/pgsql/lib/tsearch2.so"
 which is interesting, because it's not trying to use /usr/local/pg801/
like it's supposed to.
Thoughts?
---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
 http://www.postgresql.org/docs/faq
Regards,
Oleg
_
Oleg Bartunov, sci.researcher, hostmaster of AstroNet,
Sternberg Astronomical Institute, Moscow University (Russia)
Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/
phone: +007(095)939-16-83, +007(095)939-23-83
---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
Regards,
Oleg
_
Oleg Bartunov, sci.researcher, hostmaster of AstroNet,
Sternberg Astronomical Institute, Moscow University (Russia)
Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/
phone: +007(095)939-16-83, +007(095)939-23-83
---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
   (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


Re: [GENERAL] do I need replication or something else?

2005-03-29 Thread Dann Corbit
Sounds like you are begging for trouble.

Suppose that a customer calls in to the main office, and you update some
customer data.

The field salesman also updates data for this customer.

If you update the main office database with the field data, you will
lose information.

If you update the field database data with main office database data,
you will lose information.

Because we have two different changed records, it will be very difficult
to reconcile this data without human intervention.

In short, a disconnected system where data on both ends can change is
begging for trouble.

Now, if you never update the database in the main office except with
data from the field salesmen, it could be made to work.  But I doubt
that this is what you are after.
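
A minimal sketch of the one-way flow described above, with entirely hypothetical table names (customers at the main office, field_upload holding the returning laptop's rows):

```sql
-- One-way merge: the main office never edits these rows itself, so the
-- field copy is authoritative and can be applied wholesale.
BEGIN;
DELETE FROM customers
 WHERE customer_id IN (SELECT customer_id FROM field_upload);
INSERT INTO customers
SELECT * FROM field_upload;
COMMIT;
```

Any two-way variant needs conflict detection on top of this, which is exactly the hard part under debate here.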

If it were me, and someone proposed a model where two-way replication
was needed, I would tell them to rethink their model.  It's broken.

IMO-YMMV.

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Caleb
Simonyi-Gindele
Sent: Tuesday, March 29, 2005 10:58 AM
To: pgsql-general@postgresql.org
Subject: [GENERAL] do I need replication or something else?

We have a billing system and we want the ability to send users out into 
the field with an unconnected (no WAN, VPN etc) laptop containing our pg

db and software. Upon their return we need to synchronize changes to the

main db. We would like the ability to be able to have this accomplished 
at the db level rather than doing this in our app.

What is the best tool to accomplish this with?

Caleb

---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


[GENERAL] btree index bloat, prototype non-locking solution

2005-03-29 Thread Dave Chapeskie
The issue commonly referred to as 'btree index bloat' is better after
the release of 7.4 but still exists.  What follows is a description
of the problem (for the non-developers or those that have forgotten),
how it can occur in real tables, current work-arounds (REINDEX), and a
possible solution that doesn't hold long-term exclusive locks.  Skip to
the prototype solution section if you like.  I'd like some comments
from developers.

[Please CC me on any replies, I'll check the archives but I'm not
 currently subscribed to this list, thanks.]


Problem Description
===
The bloating issue is that when tuples are deleted index pages may
become sparsely populated, and under some circumstances this can affect
the performance of the index.  PostgreSQL-7.4 addressed part of this
problem by introducing the ability of VACUUM to identify and mark for
reuse completely free index pages.  Before 7.4 the possible size of an
index was completely unbounded, under extreme circumstances you could
generate an arbitrary large index file with a very small set of rows;
now the worst you can possibly get is a single index key per index page,
8 KB of index per row (still very bad, but at least it's bounded now).

Deletes should not be a problem for an index of some arbitrary
general-use columns since future updates/inserts for index keys in the
same range can re-use the space in existing index pages.  Indeed, this
is why REINDEX and CREATE INDEX don't pack the index pages full to
start with, to avoid needing to split pages right away on a following
insert or update.  The problem arises if the insert and delete patterns
are such that this space is never reused.  It's exactly this pattern
that many of the tables in the schema I work with use, and the resulting
sparse btree indexes are causing us issues on production databases.

The only current solution for this is that once the index becomes sparse
REINDEX is run; this locks readers and writers from accessing the table.
CREATE INDEX/DROP INDEX can be used to build a replacement index for
non-primary key indices instead with the benefit that this still allows
readers, but it still locks out writers.  With large tables either of
these operations can take hours which makes this undesirable (or in my
case unusable).

Real-Life Example
=
We have tables that hold timestamped data.  The data is inserted
with the current time stamp such that this field is monotonically
incrementing.  Additionally, there are serial fields that also
monotonically increment.  The granularity of the data during insert is
high (e.g. 1 row per item every 5 minutes) but as the data ages the
granularity can be lower (e.g. after a week we only need 1 row per item
every hour; and after a month we only need 1 row per item every day;
etc).  The tables are designed such that the granularity can be changed
by simply deleting a subset of rows (e.g. delete 11 of the 12 rows
inserted for an item during an hour).  Old rows are never updated.

Keys for indices starting with the time stamp or id field always get
inserted into the rightmost index page.  The current btree code splits
this page 70/30 (instead of the normal 50/50) such that after
repeated inserts the index pages are ~70% full of index keys (note
that a REINDEX fills index pages to 90%).  As the data ages we delete a
subset of the old data which makes those pages sparse (or in some cases
empty, those are reused).  Since no updates or inserts occur on old time
stamp or serial values these pages become and stay quite sparse.  The
attached example SQL script has a case where index pages become only
20% full on average.
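
Since the attached script was not preserved in the archive, here is a hedged sketch of the kind of pattern being described (table and column names are illustrative, not from the original script; generate_series is 8.0-era):

```sql
-- Monotonically increasing keys with aged-out deletes.
CREATE TABLE samples (
    id    serial PRIMARY KEY,                  -- monotonically incrementing
    stamp timestamptz NOT NULL DEFAULT now(),  -- also monotonically incrementing
    item  integer NOT NULL,
    value numeric
);
CREATE INDEX samples_stamp_idx ON samples (stamp);

-- Inserts always land in the rightmost page of samples_stamp_idx.
INSERT INTO samples (stamp, item, value)
SELECT now() + (n || ' minutes')::interval, n % 100, random()
FROM generate_series(1, 100000) AS g(n);

-- Thin out old data: keep 1 of every 12 rows.  The affected index pages
-- become sparse, and since no new keys ever fall in that old range,
-- they stay sparse.
DELETE FROM samples WHERE id % 12 <> 0;
```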


Prototype Solution
==
The solution I've toyed with to improve performance and make the space
re-usable but NOT lock out readers or writers for any length of time is
a naive btree index page merger.  This does the opposite of a index page
split.  The merger/compressor scans the pages of the index and compares
each page with its right-page.  If the items on the two pages can fit
in one page at a capacity less than some target amount (e.g. 90%) the
items from the left-page are moved into the right-page, the parent-page
updated to reflect this, and then all three pages written out.

The reason I call it naive is that it only considers pairs of pages,
not runs of several pages that could be merged (e.g. compress 3 pages
into 2).  The code locks the three pages in question but because
I wasn't sure if this would be safe in all cases it also grabs an
exclusive lock on the table and index during the scan.  However, unlike
re-index, the scan can be incremental, e.g. scan pages 1-1000 then
release the lock, then scan pages 1001-2000 and release the lock, etc,
so that writers are only periodically and briefly locked out.

After this a VACUUM of the table will put the now empty index pages on
the FSM (Free Space Map) for later re-use.  (E.g. this doesn't make the
size of the index smaller yet, it just helps li

[GENERAL] do I need replication or something else?

2005-03-29 Thread Caleb Simonyi-Gindele
We have a billing system and we want the ability to send users out into 
the field with an unconnected (no WAN, VPN etc) laptop containing our pg 
db and software. Upon their return we need to synchronize changes to the 
main db. We would like the ability to be able to have this accomplished 
at the db level rather than doing this in our app.

What is the best tool to accomplish this with?
Caleb
begin:vcard
fn:Caleb Simonyi-Gindele
n:Simonyi-Gindele;Caleb
org:Advanced Technology Corp.
adr:;;79 N. Franklin Turnpike;Ramsey;NJ;07446;USA
email;internet:[EMAIL PROTECTED]
title:R&D Specialist
tel;work:1-201-399-4374
tel;fax:1-201-399-4373
tel;home:1-306-482-5040
tel;cell:1-306-482-7050
x-mozilla-html:TRUE
url:http://www.vetstar.com
version:2.1
end:vcard


---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


Re: [GENERAL] pg_xlog disk full error, i need help

2005-03-29 Thread Janning Vygen
Am Dienstag, 29. März 2005 16:37 schrieb Tom Lane:
> Janning Vygen <[EMAIL PROTECTED]> writes:
> > Am Montag, 28. März 2005 18:06 schrieb Tom Lane:
> >> The only way for pg_xlog to bloat vastly beyond what it's supposed to be
> >> (which is to say, about twice your checkpoint_segments setting) is if
> >> checkpoints are somehow blocked from happening.  The only mechanism I
> >> know about for that is that in 7.4.* (maybe 7.3.* too) a very large
> >> btree CREATE INDEX or REINDEX operation can block checkpoints until it
> >> completes.  Did you have something like that going on?
> >
> > It looks like something/someone wrote so much data in my table that the
> > nightly clustering process just didn't succeed because of disk full
> > failure after writing too many pg_xlog files. The writing of so many
> > pg_xlog files now makes more sense to me when clustering 68 GByte of
> > data.
>
> Yeah, CLUSTER proceeds by rewriting the table and then invoking REINDEX
> on each index, so the checkpoint lockout problem will apply.  Tell you
> the truth, the best and perhaps only answer for you is to update to 8.0
> where that problem is solved.
>
> > How do i get the tablename using this filenode? (restarting the database
> > is not an option)
>
> Sure it is.  pg_resetxlog will allow you to restart ... possibly you
> will lose some transactions, but if the only thing going on was the
> CLUSTER, nothing of value will be lost.
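
The filenode-to-table mapping asked about above can, once a server is running, be recovered with a catalog query along these lines (a sketch; in 7.4/8.0 the on-disk file name under the database directory is pg_class.relfilenode):

```sql
-- Which table or index owns data file 1234567?  (filenode is hypothetical)
SELECT relname, relkind
FROM pg_class
WHERE relfilenode = 1234567;
```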

* I just can't restart it:

I zipped all my pg_xlog files in the crashed database to have enough space to 
get my backup running. 

As my database server is not in my LAN, I can't download 100 GB of files, and I 
can't gunzip all the pg_xlog files again to start it on the same server.

So I could delete all files in my pg_xlog directory and then try to start the 
database with another compiled instance of postgresql. But as this is a 
production database, it's not a good idea. 

And I can't move 100 GB (or only 60 GB in the base directory) to another server 
because no server has enough space, nor is transferring 60 GB very cheap. I 
have no idea how to get it running again.

* But what I really want to know is how to interpret the results of 
pg_filedump. I didn't find any documentation besides the README. Most output 
is easy to understand, some is not. For example: what does "Flags: USED" mean 
in a data item? 

* Anyway: I am still confused how my table could get this big overnight.

I stop all database activities by shutting down apache in a nightly cronjob 
and then my nightly job runs 

  pg_dump -Fc $DBNAME > $BACKUP_FILE
  psql -c 'SELECT update_tspt_aktuell();' $DBNAME
  psql -c 'CLUSTER;' $DBNAME

a) The dump file is just fine and the one table is not as big as 60 GB! The 
whole base directory after reinstalling is 1.4 GB.

b) The Function is this:

CREATE OR REPLACE FUNCTION update_tspt_aktuell () RETURNS integer LANGUAGE 
'plpgsql' AS '
DECLARE
  var_count integer;
BEGIN

UPDATE Tippspieltage SET tspt_aktuell = false WHERE tspt_aktuell;
UPDATE Tippspieltage SET tspt_aktuell = true
FROM
  (
SELECT DISTINCT ON (tspt2sp.tr_kurzname)
  tspt2sp.tr_kurzname,
  tspt2sp.tspt_sort,
  MIN(abs(EXTRACT(epoch FROM date_trunc(''day'', sp.sp_termin) - 
CURRENT_DATE))) AS timediff
FROM
  Tippspieltage2Spiele AS tspt2sp
  LEFT JOIN Spiele AS sp USING (sp_id)
GROUP BY
  tspt2sp.tr_kurzname,
  tspt2sp.tspt_sort
ORDER BY
  tspt2sp.tr_kurzname,
  timediff ASC,
  tspt2sp.tspt_sort DESC
  ) as tspt_akt
WHERE
  Tippspieltage.tr_kurzname = tspt_akt.tr_kurzname
  AND Tippspieltage.tspt_sort = tspt_akt.tspt_sort
;

GET DIAGNOSTICS var_count = ROW_COUNT;
RETURN var_count;
END;
';

my cron mail reports success and modified rows:

 update_tspt_aktuell 
-
5872
(1 row)

So there is no reason i can see that this function produced 60 GB of data.

c) After this function, CLUSTER fails, starting with 
"PANIC:  could not write to file 
"/home/postgres/data/pg_xlog/xlogtemp.24223": No space left on device"

The cron job took 2:15 hours to run and to report this failure. I guess the 
clustering is somewhat broken (maybe because I use many multi-column natural 
keys).

* Conclusion: I think I will stop clustering every night until I have upgraded 
to 8.0, but it leaves me very unsatisfied not to know the reason for this kind 
of db failure (or human error or whatever it is).

> What I would expect to be happening in a CLUSTER is that there would be
> an "old" file plus a "new" file of similar size, for both the table
> itself and each index that's been processed (up to the point of failure,
> where you will have a partially-written new index).  After restart with
> this method, you will find only the "old" files listed in pg_class.
> You'll want to manually delete the unreferenced "new" files.

Ok but the clustering should not scale the file from 500 MB to 64 GB 
(separated 

Re: [GENERAL] 8.0.2beta1

2005-03-29 Thread Michael Fuhr
On Tue, Mar 29, 2005 at 12:22:20PM -0500, Tom Lane wrote:
> "Cristian Prieto" <[EMAIL PROTECTED]> writes:
> > What are the differences, bugfixes, features, etc in the 8.0.2beta1? I 
> > was looking around and I couldn't find anything related... Could 
> > somebody help me?
> 
> Unfortunately we haven't prepared release notes yet.

For the gory details, you could search for "REL8_0_STABLE" in the
pgsql-committers archives for the last few months.

http://archives.postgresql.org/

-- 
Michael Fuhr
http://www.fuhr.org/~mfuhr/

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] 8.0.1 in a non-standard location and tsearch2

2005-03-29 Thread Ben
Yes, I did. 

On Fri, 25 Mar 2005, Oleg Bartunov wrote:

> Did you try 'make clean' first ?
> 
> On Thu, 24 Mar 2005, Ben wrote:
> 
> > I'm trying to install tsearch2 into an empty database on a new 8.0.1 
> > postgres 
> > install. The machine already has an older 7.4 install of postgres on it, so 
> > I 
> > gave configure a --prefix=/usr/local/pg801 option. Postgres installed and 
> > started fine (after changing the port), and I was able to create my new 
> > empty 
> > database without issues.
> >
> > Now comes the part where I fail to install tsearch2. I go to the 
> > contrib/tsearch2 directory, run make and make install without issues. make 
> > installcheck tries to connect to the older postgres install (I don't see an 
> > option to set the port it attempts to use), so I try to pipe tsearch2.sql 
> > into the new database. It starts working fine, and then says:
> >
> > ERROR:  could not find function "tsvector_cmp" in file 
> > "/usr/local/pgsql/lib/tsearch2.so"
> >
> >
> >  which is interesting, because it's not trying to use /usr/local/pg801/ 
> > like it's supposed to.
> >
> >
> > Thoughts?
> >
> >
> > ---(end of broadcast)---
> > TIP 5: Have you checked our extensive FAQ?
> >
> >  http://www.postgresql.org/docs/faq
> >
> 
>   Regards,
>   Oleg
> _
> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,
> Sternberg Astronomical Institute, Moscow University (Russia)
> Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/
> phone: +007(095)939-16-83, +007(095)939-23-83
> 
> ---(end of broadcast)---
> TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
> 



---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [GENERAL] Views!

2005-03-29 Thread Hrishikesh Deshmukh
Could you give me an example of how to do it in SQL?
CREATE TABLE myschema.mytable (
 table1, table2,table3..table17
);

Also, can I run queries against a schema/view?

Thanks,
Hrishi
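
A minimal sketch of what the schema approach looks like in SQL (all names here are made up; note that ALTER TABLE ... SET SCHEMA to move an existing table only arrived in PostgreSQL 8.1, so on 8.0 you would recreate the table in the new schema and copy the rows):

```sql
-- Group the 17 tables under their own namespace.
CREATE SCHEMA subset17;

-- On 8.1 and later, an existing table can simply be moved:
-- ALTER TABLE mytable17 SET SCHEMA subset17;

-- On 8.0, recreate and copy instead (illustrative):
CREATE TABLE subset17.mytable17 AS SELECT * FROM public.mytable17;

-- Queries can then address the schema explicitly...
SELECT count(*) FROM subset17.mytable17;
-- ...or pick it up via the search path:
SET search_path TO subset17, public;
```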


On Tue, 29 Mar 2005 17:16:33 +, Ragnar Hafstað <[EMAIL PROTECTED]> wrote:
> On Tue, 2005-03-29 at 11:48 -0500, Hrishikesh Deshmukh wrote:
> 
> [rearranged]
> 
> > On Tue, 29 Mar 2005 09:01:24 -0500, Sean Davis <[EMAIL PROTECTED]> wrote:
> > > On Mar 29, 2005, at 8:27 AM, Hrishikesh Deshmukh wrote:
> > >
> > > >
> > > > I have 254 tables,  i want to "subset" it in 237 and 17 tables?!
> > > > Is creating views the answer?/ Is there a better way to "subset" them?
> > >
> > > Have a look at schemas:
> >
> > I have gone through the docs; what you are suggesting is that I take 237
> > tables and add them into a schema!
> 
> maybe less work to move the other 17 tables.
> 
> gnari
> 
>

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] 8.0.2beta1

2005-03-29 Thread Tom Lane
"Cristian Prieto" <[EMAIL PROTECTED]> writes:
> What are the differences, bugfixes, features, etc in the 8.0.2beta1? I
> was looking around and I couldn't find anything related... Could
> somebody help me?

Unfortunately we haven't prepared release notes yet.  If you want to do
testing, what I would look for is performance changes.  The main stuff
in 8.0.2 (other than bugfixes for very specific bugs) is:

* Replace ARC buffer management algorithm by 2Q to avoid pending IBM patent.
  Testing so far says this causes a few percent degradation in some
  cases and no visible impact in others.  We'd be interested to hear
  more reports.

* Planner tweaks to avoid bad side-effects of the 8.0 change that made
  the planner use the current physical table size instead of what
  pg_class.relpages says.  That's usually a win but caused problems if
  a plan is cached when the table is first created (and so is empty).

regards, tom lane

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [GENERAL] Views!

2005-03-29 Thread Ragnar Hafstað
On Tue, 2005-03-29 at 11:48 -0500, Hrishikesh Deshmukh wrote:

[rearranged]

> On Tue, 29 Mar 2005 09:01:24 -0500, Sean Davis <[EMAIL PROTECTED]> wrote:
> > On Mar 29, 2005, at 8:27 AM, Hrishikesh Deshmukh wrote:
> > 
> > >
> > > I have 254 tables,  i want to "subset" it in 237 and 17 tables?!
> > > Is creating views the answer?/ Is there a better way to "subset" them?
> >
> > Have a look at schemas:
>
> I have gone through the docs; what you are suggesting is that I take 237
> tables and add them into a schema! 

maybe less work to move the other 17 tables.

gnari



---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] sub query constraint

2005-03-29 Thread Dale Sykora
Yudie Pg wrote:
One way to do this is to add a write_access column to actions and use
a constraint to force it to be true. 
Create a UNIQUE key of
(name, write_access) for user_data and then add a FOREIGN KEY
reference from (name, write_access) in actions to (name, write_access)
in user_data.

Yes, the name must be uniquely indexed, but that couldn't force write_access
to always be 'true'.
I may suggest creating a trigger function to validate inserts to table actions:
CREATE OR REPLACE FUNCTION validate_actions_insert() RETURNS TRIGGER AS '
DECLARE
rs RECORD;
BEGIN
  SELECT INTO rs * FROM user_data WHERE name = NEW.user and write_access = 't';
   
  IF NOT FOUND THEN
 RAISE EXCEPTION ''writing access forbidden for user %'', NEW.user;
  END IF;

  RETURN NEW;
END;
' LANGUAGE plpgsql;
CREATE TRIGGER tg_actions BEFORE INSERT OR UPDATE ON actions
FOR EACH ROW EXECUTE PROCEDURE validate_actions_insert();
You may need to create another trigger on table user_data, before update,
for reverse validation.
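A hedged sketch of what that reverse-direction trigger could look like (column names are assumed from the thread, and "user" is quoted because it is a reserved word):

```sql
-- Prevent revoking write_access while rows in actions still reference the user.
CREATE OR REPLACE FUNCTION validate_user_data_update() RETURNS TRIGGER AS '
BEGIN
  IF OLD.write_access AND NOT NEW.write_access THEN
    PERFORM 1 FROM actions WHERE actions."user" = OLD.name;
    IF FOUND THEN
      RAISE EXCEPTION ''cannot revoke write access for % while actions reference it'', OLD.name;
    END IF;
  END IF;
  RETURN NEW;
END;
' LANGUAGE plpgsql;

CREATE TRIGGER tg_user_data BEFORE UPDATE ON user_data
FOR EACH ROW EXECUTE PROCEDURE validate_user_data_update();
```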
Bruno and Yudie,
Thanks for the replies.  I will read up on triggers and give that a try.
Thanks,
Dale
---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
   (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


Re: [GENERAL] 8.0.2beta1 RPMs

2005-03-29 Thread Devrim GUNDUZ
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
Hi,
On Tue, 29 Mar 2005, Robin Ericsson wrote:
PostgreSQL RPM Building Project[1] has built RPMs for 8.0.2beta1. RPMs for 
Red Hat Linux 9, Red Hat Enterprise Linux Enterprise Server 3.0, Fedora Core 
1,2,3 are now available, and more to come later.
Are the Fedora Core packages compatible with packages from Fedora Core 
itself? I.e, files will be at the same places if I upgrade from FC packages?
Yes, the binaries,configuration files, libraries, etc. are at the same 
places.

We are trying to stay as close as possible to the Red Hat/Fedora RPMs. But I 
suggest you take a full backup before upgrading. Remember that you can upgrade 
only from 8.0.X.

Regards,
- --
Devrim GUNDUZ 
devrim~gunduz.org, devrim~PostgreSQL.org, devrim.gunduz~linux.org.tr
http://www.tdmsoft.com http://www.gunduz.org
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.2.1 (GNU/Linux)

iD8DBQFCSYehtl86P3SPfQ4RAmWhAJ92sB0Wnjr7HGMUqxLWGPpzHpXbUQCfS8RP
9hHL/BrJzJYByJ5Sx4btT4s=
=ILSV
-END PGP SIGNATURE-
---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
  http://www.postgresql.org/docs/faq


Re: [GENERAL] Views!

2005-03-29 Thread Hrishikesh Deshmukh
Hi,

I have gone through the docs; what you are suggesting is that I take 237
tables and add them into a schema! Could you please give a code
snippet?

Thanks,
Hrishi


On Tue, 29 Mar 2005 09:01:24 -0500, Sean Davis <[EMAIL PROTECTED]> wrote:
> Have a look at schemas:
> 
> http://www.postgresql.org/docs/current/static/ddl-schemas.html
> 
> Sean
> 
> On Mar 29, 2005, at 8:27 AM, Hrishikesh Deshmukh wrote:
> 
> > Hi All,
> >
> > I have 254 tables,  i want to "subset" it in 237 and 17 tables?!
> > Is creating views the answer?/ Is there a better way to "subset" them?
> >
> > Thanks,
> > Hrishi
> >
> > ---(end of
> > broadcast)---
> > TIP 1: subscribe and unsubscribe commands go to
> > [EMAIL PROTECTED]
> 
>

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [GENERAL] after gentoo emerge plpython failure

2005-03-29 Thread Guy Rouillier
Sim Zacks wrote:
> Question for gentoo people -
> Is it required to restart daemons, such as postgresql, after an
> emerge world?   

In general, yes, since typically system packages would be updated.  But
you'd have to see what packages are getting updated as a result of a
particular emerge to say absolutely.  --pretend will allow you to see
what will be updated without actually doing the update.

-- 
Guy Rouillier


---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


[GENERAL] Solaris 10 svc setup

2005-03-29 Thread Mark Greenbank
Hi,

Does anyone have a Solaris 10 smf script that they'd like to share for
controlling postgreSQL startup, shutdown, etc.?

Mark

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


[GENERAL] Zombie processes

2005-03-29 Thread Peterson, Bjorn
I’m running out of ideas for the following problem:

I have a moderately complex query as follows-
SELECT t.term_id, a.user_id, a.time_slot, a.status,
       SUM(CASE WHEN a.date >= t.start_date THEN 1 ELSE 0 END),
       COUNT(a.date)
FROM "Table1" a, "Table2" t, "Table2" ytd, "Table3" cu,
     "Table4" c, "Table5" co, "Table6" s
WHERE a.type = 1 AND a.status IN (1,2,3,4)
  AND a.date >= ytd.start_date AND a.date <= t.end_date
  AND a.date <= now()
  AND a.user_id = cu.user_id AND a.time_slot = cu.course_id
  AND cu.course_id = c.course_id
  AND co.course_offered_id = c.course_offered_id
  AND co.school_id = s.school_id
  AND s.district_id = 2
  AND ytd.term_id = t.top_term_id
GROUP BY a.user_id, a.time_slot, t.term_id, a.status
ORDER BY a.user_id, a.time_slot, t.term_id, a.status

I am running Postgres 8.0.1 on a Windows 2000 server as a
service, and the query returns appropriate values after about 30 seconds when
executed from a psql console.  However, when running this query from a
Java application on the same machine through the postgres JDBC driver, the Java
app hangs on st.executeQuery().  The query shows up in pg_stat_activity
for about 3 minutes then goes away, but the process referenced by the
pg_stat_activity remains active and consumes 50% of the CPU resources
indefinitely until I kill it off.  I let it run for over an hour
yesterday.  I can reproduce this every time I run this query.

Any ideas?

Bjorn Peterson

Software Engineer

Pearson School Technologies

Bloomington, MN

[EMAIL PROTECTED]

This email may contain confidential material. If you were not 
an intended recipient, Please notify the sender and delete all copies. 
We may monitor email to and from our network.


[GENERAL] 8.0.2beta1

2005-03-29 Thread Cristian Prieto
What are the differences, bugfixes, features, etc 
in the 8.0.2beta1? I was looking around and I couldn't find anything related... 
Could somebody help me?


[GENERAL] 8.0.2beta1 RPMs

2005-03-29 Thread Devrim GUNDUZ
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
Hi,
PostgreSQL RPM Building Project[1] has built RPMs for 8.0.2beta1. RPMs for 
Red Hat Linux 9, Red Hat Enterprise Linux Enterprise Server 3.0, Fedora 
Core 1,2,3 are now available, and more to come later.

They are available at:
http://www.postgresql.org/ftp/binary/v8.0.2beta1/linux/rpms
The SRPMs are also provided:
http://www.postgresql.org/ftp/binary/v8.0.2beta1/linux/srpms
[1]: http://pgfoundry.org/projects/pgsqlrpms
Regards,
- --
Devrim GUNDUZ 
devrim~gunduz.org, devrim~PostgreSQL.org, devrim.gunduz~linux.org.tr
http://www.tdmsoft.com http://www.gunduz.org
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.2.1 (GNU/Linux)

iD8DBQFCSXpgtl86P3SPfQ4RAhpjAJwOSRLde/yi2DxRS9f+EcRyCy+cUgCfadqb
rbXSuANrJ6p38zXrcyVz68s=
=+TFR
-END PGP SIGNATURE-
---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [GENERAL] Tracking row updates - race condition

2005-03-29 Thread Alex Adriaanse
Harald Fuchs wrote:
In article <[EMAIL PROTECTED]>,
Alex Adriaanse <[EMAIL PROTECTED]> writes:
 

Thanks for the input everyone.  I think Harald's approach will work
well...
   

I'm not so sure anymore :-(
Consider something like that:
UPDATE tbl SET col1 = 1 WHERE col2 = 1;
UPDATE tbl SET col1 = 2 WHERE col2 = 1;
with not much time in between.  By using NULL temporarily, you destroy
the ordering.  The client won't miss an UPDATE, but it might execute
the second one before the first.  Neither my idea nor yours appears to
take care of that.
 

Maybe I'm missing something, but I don't really see the problem.  If 
that second statement is executed before the transaction containing the 
first statement is committed, wouldn't the second statement block until 
the first statement is committed?  If the first one is committed before 
the second statement is executed, then I don't see how the client will 
see the updates out-of-order.

Alex

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


[GENERAL] Unable to restore table oids from pg_dump

2005-03-29 Thread dharana
Hello list,
I've got a problem. I have a table like this:
 id             | integer | not null
 item_table_oid | oid     |
 item_row_id    | oid     |
"id" is a serial field for this table,
"item_table_oid" is an oid referencing the oid of the table in the 
pg_catalog.pg_class table

"item_row_id" is an oid that references the oid of a data row in the 
table refered by item_table_oid

When I run pg_dumpall -o and then restore it in a clean cluster, every 
table gets new OIDs, so the stored item_table_oid values are invalid.

I did a "DELETE FROM pg_class" and I learnt that it's a quick way of 
erasing a database.

Isn't there any way of dumping the pg_class entries so the references 
would still work after restoring in a clean env?

Thanks in advance,
--
Juan Alonso
http://gamersmafia.com | http://laflecha.net
---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster
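
[For what it's worth (not a suggestion from the thread itself): pg_class
OIDs are assigned at CREATE time and cannot be preserved across a
dump/restore, so one workaround is to reference tables by name instead,
resolving to an OID only at query time. A sketch, with hypothetical
names; the runtime text-to-regclass cast works on reasonably recent
versions:]

```sql
-- Sketch: store the table's name, which survives dump/restore,
-- instead of its pg_class OID, which does not.
CREATE TABLE item_ref (
    id          serial PRIMARY KEY,
    item_table  text NOT NULL,   -- e.g. 'invoices' (hypothetical)
    item_row_pk integer          -- the row's primary key, not its row OID
);

-- When an OID is actually needed, resolve it at query time:
SELECT item_table::regclass::oid FROM item_ref;
```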


Re: [GENERAL] pg_xlog disk full error, i need help

2005-03-29 Thread Tom Lane
Janning Vygen <[EMAIL PROTECTED]> writes:
> Am Montag, 28. März 2005 18:06 schrieb Tom Lane:
>> The only way for pg_xlog to bloat vastly beyond what it's supposed to be
>> (which is to say, about twice your checkpoint_segments setting) is if
>> checkpoints are somehow blocked from happening.  The only mechanism I
>> know about for that is that in 7.4.* (maybe 7.3.* too) a very large
>> btree CREATE INDEX or REINDEX operation can block checkpoints until it
>> completes.  Did you have something like that going on?

> It looks like something/someone wrote so much data in my table that the 
> nightly clustering process just didn't succeed because of disk full failure 
> after writing too many pg_xlog files. The writing of so many pg_xlog files 
> now makes more sense to me when clustering 68 GByte of data.

Yeah, CLUSTER proceeds by rewriting the table and then invoking REINDEX
on each index, so the checkpoint lockout problem will apply.  Tell you
the truth, the best and perhaps only answer for you is to update to 8.0
where that problem is solved.

> How do I get the table name using this filenode? (restarting the
> database is not an option)

Sure it is.  pg_resetxlog will allow you to restart ... possibly you
will lose some transactions, but if the only thing going on was the
CLUSTER, nothing of value will be lost.

What I would expect to be happening in a CLUSTER is that there would be
an "old" file plus a "new" file of similar size, for both the table
itself and each index that's been processed (up to the point of failure,
where you will have a partially-written new index).  After restart with
this method, you will find only the "old" files listed in pg_class.
You'll want to manually delete the unreferenced "new" files.

regards, tom lane

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster
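
[To answer the filenode question directly: once the server is back up, a
filenode can be mapped to its table with a catalog query along these
lines — a sketch, where 1234567 stands in for the numeric file name
seen in the database directory:]

```sql
-- Which table (or index) owns a given on-disk file?
SELECT relname, relkind
FROM pg_class
WHERE relfilenode = 1234567;
```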


Re: [GENERAL] Views!

2005-03-29 Thread Sean Davis
Have a look at schemas:
http://www.postgresql.org/docs/current/static/ddl-schemas.html
Sean
On Mar 29, 2005, at 8:27 AM, Hrishikesh Deshmukh wrote:
Hi All,
I have 254 tables, and I want to "subset" them into groups of 237 and 17.
Is creating views the answer, or is there a better way to "subset" them?
Thanks,
Hrishi
---(end of 
broadcast)---
TIP 1: subscribe and unsubscribe commands go to 
[EMAIL PROTECTED]

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
 joining column's datatypes do not match
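
[A minimal sketch of the schema approach Sean suggests, with
hypothetical schema and table names; see the page linked above for
details:]

```sql
-- Group tables into two namespaces instead of one flat list.
CREATE SCHEMA big_group;    -- e.g. the 237 tables
CREATE SCHEMA small_group;  -- e.g. the 17 tables

CREATE TABLE big_group.measurements (id integer PRIMARY KEY);
CREATE TABLE small_group.lookups    (id integer PRIMARY KEY);

-- Address each group explicitly ...
SELECT * FROM small_group.lookups;
-- ... or implicitly, via the search path.
SET search_path TO big_group, public;
```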


Re: [GENERAL] Referential integrity using constant in foreign key

2005-03-29 Thread Richard Huxton
Thomas F. O'Connell wrote:
Referential integrity never dictates the need for "dummy" columns. If 
you have a column that you need to refer to a column in another table so 
strongly that you want the values always to be in sync, you create a 
foreign key, establishing referential integrity between a column (or 
columns) in the table with the foreign key and a column in another table 
(usually a primary key).

I don't understand what you're trying to accomplish well enough to be 
able to make a specific recommendation based on your examples that suits 
your needs.
I know what he's trying to do, because I do it myself. And the short 
answer Andrus is "no, there is no shortcut".

The typical usage is something like:
CREATE TABLE contract (con_id int PRIMARY KEY, con_type varchar, 
con_date ...)
CREATE TABLE purchase_details (con_id int, item_id int, qty int, ...)
CREATE TABLE rental_details (con_id int, rental_period interval, ...)

Now, you only want purchase_details to reference rows in contract where 
con_type="purchase". Likewise rental_details should only reference rows 
with con_type="rental".

We can't reference a view, and we can't add a constant to the 
foreign-key definition. So, the options are:

1. Don't worry about it (not good design).
2. Add a "dummy" column to purchase_details which only contains the 
value "purchase" so we can reference the contract table (wasteful)
3. Write your own foreign-key triggers to handle this (a fair bit of work)
4. Eliminate the con_type column and determine it from what tables you 
join to. But that means you now need to write a custom constraint across 
all the xxx_details tables so that you don't get a mixed purchase/rental 
table.

None of these are very attractive, but that's where we stand at the moment.
HTH
--
  Richard Huxton
  Archonet Ltd
---(end of broadcast)---
TIP 6: Have you searched our list archives?
  http://archives.postgresql.org
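
[For reference, option 2 above (the "dummy" column) is usually built on
a two-column UNIQUE constraint plus a CHECK — roughly this sketch of
Richard's example:]

```sql
CREATE TABLE contract (
    con_id   integer NOT NULL,
    con_type varchar NOT NULL,
    PRIMARY KEY (con_id),
    -- a second, redundant unique constraint so a composite FK can
    -- include con_type
    UNIQUE (con_id, con_type)
);

CREATE TABLE purchase_details (
    con_id   integer NOT NULL,
    -- the "dummy" column: always 'purchase', enforced by CHECK
    con_type varchar NOT NULL DEFAULT 'purchase'
             CHECK (con_type = 'purchase'),
    item_id  integer,
    qty      integer,
    FOREIGN KEY (con_id, con_type) REFERENCES contract (con_id, con_type)
);
```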


[GENERAL] Views!

2005-03-29 Thread Hrishikesh Deshmukh
Hi All,

I have 254 tables, and I want to "subset" them into groups of 237 and 17.
Is creating views the answer, or is there a better way to "subset" them?

Thanks,
Hrishi

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [GENERAL] plpython function problem workaround

2005-03-29 Thread Marco Colombo
On Tue, 29 Mar 2005, Marco Colombo wrote:
# escapes (expanded by PostgreSQL)
q3 = r"select count(f1) from test1 where f1 = 'this is a multi line string\r\nline2\r\nline3\r\n'"
curs.execute(q3)
^^
This line (no. 28) is useless (but harmless), please ignore it
(just a cut&paste error).
.TM.
--
  /  /   /
 /  /   /   Marco Colombo
___/  ___  /   /  Technical Manager
   /  /   /  ESI s.r.l.
 _/ _/  _/ [EMAIL PROTECTED]
---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [GENERAL] plpython function problem workaround

2005-03-29 Thread Marco Colombo
On Tue, 29 Mar 2005, Sim Zacks wrote:
The only "issue" that I have found with it is similar to an issue I
posted about multiline strings in general, which does not seem to be
considered a bug.
I've posted similar concerns in the past. The whole point is that
there are two possible approaches:
1) treat text as binary - as we do now;
2) do on the wire conversion - like FTP ASCII mode.
Both have disadvantages, and both lead to unexpected results.
As I wrote before, 2) is more problematic. You'd have to reject
any file with a bare \n coming from a Windows client; otherwise you
won't be able to process it correctly.
I think if you do:
insert into test (sometext) values ('Line one\nLine two\r\n');
-- with the literal chars, not the escape sequences
you're expecting exactly the same on output.  If the server
converts it to the Unix form:
'Line one\nLine two\n'
for storage, and then converts it back to the Windows form when you do:
select sometext from test; -- from a Windows client
you get:
Line one\r\nLine two\r\n
which is not what you entered.
I doubt FTP ASCII mode handles this correctly.
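
[The round-trip problem Marco describes can be shown in a few lines of
Python — a standalone illustration, not part of the original mails:]

```python
def to_unix(s):
    # collapse Windows CRLF line endings to bare LF
    return s.replace("\r\n", "\n")

def to_windows(s):
    # expand every LF back to CRLF
    return s.replace("\n", "\r\n")

# Mixed endings, as in the example above: a bare \n plus a \r\n.
original = "Line one\nLine two\r\n"
round_trip = to_windows(to_unix(original))

print(repr(round_trip))        # 'Line one\r\nLine two\r\n'
print(round_trip == original)  # False: the bare \n did not survive
```

The bare \n is silently promoted to \r\n by the round trip, which is
exactly why on-the-wire conversion cannot be lossless for mixed input.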
As for the examples you made (the python functions), it's a problem
with python string literals (just don't use them). Let's try this:
-- CUT HERE 8< 
#!/usr/bin/env python
import pgdb
db = pgdb.connect()
curs = db.cursor()
# this is only to emulate PGAdmin under Windows (I don't have it)
# (I ran the script with these uncommented on Linux)
#q = "create temp table test1(f1 varchar(50));"
#curs.execute(q)
#q = "insert into test1 values('this is a multi line string\r\nline2\r\nline3\r\n');"
#curs.execute(q)
# real test
# embedded in string literal
q1 = """select count(f1) from test1 where f1 = 'this is a multi line string
line2
line3
'"""
# escapes (expanded by python)
q2 = "select count(f1) from test1 where f1 = 'this is a multi line string\r\nline2\r\nline3\r\n'"
# escapes (expanded by PostgreSQL)
q3 = r"select count(f1) from test1 where f1 = 'this is a multi line string\r\nline2\r\nline3\r\n'"
curs.execute(q3)
# stating the obvious
print "Comparisons:"
print "%-10s%-10s%-10s" % ("q1 == q2", "q1 == q3", "q2 == q3")
print "%-10s%-10s%-10s" % (q1 == q2, q1 == q3, q2 == q3)
print "\nRunning tests..."
curs.execute(q1)
print "Test 1 (string literal):", curs.fetchone()[0]
curs.execute(q2)
print "Test 2 (Python escapes):", curs.fetchone()[0]
curs.execute(q3)
print "Test 3 (PG escapes):", curs.fetchone()[0]
# in case someone wonders, let's try using query parameters
astring = """this is a multi line string
line2
line3
"""
q = "select count(f1) from test1 where f1 = %(mystr)s"
curs.execute(q, { "mystr": astring })
print "Test 4 (parameters):", curs.fetchone()[0]
-- >8 CUT HERE 
This is the output (on Linux):
Comparisons:
q1 == q2  q1 == q3  q2 == q3
False     False     False
Running tests...
Test 1 (string literal): 0
Test 2 (Python escapes): 1
Test 3 (PG escapes): 1
Test 4 (parameters): 0
Which is consistent with your examples, that is, it works _only_
with explicit escapes (either at Python level or at PostgreSQL level).
If client-side python works this way, why are you expecting server-side
python to work differently?
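
[The difference between the three query strings boils down to who
expands the escapes — a small standalone illustration:]

```python
# q2-style: Python expands the escapes, so the string handed to the
# server already contains real CR and LF bytes.
q2 = "line2\r\nline3"

# q3-style: the r'' prefix keeps the backslashes; the four characters
# \ r \ n travel to the server, which expands them inside the SQL
# literal (pre-standard_conforming_strings behaviour).
q3 = r"line2\r\nline3"

print(len(q2))      # 12: five chars + CR + LF + five chars
print(len(q3))      # 14: five chars + '\', 'r', '\', 'n' + five chars
print(q2 == q3)     # False: different byte sequences on the wire
```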
.TM.
--
  /  /   /
 /  /   /   Marco Colombo
___/  ___  /   /  Technical Manager
   /  /   /  ESI s.r.l.
 _/ _/  _/ [EMAIL PROTECTED]
---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


[GENERAL] unsubscribe

2005-03-29 Thread [EMAIL PROTECTED]
unsubscribe
---(end of broadcast)---
TIP 6: Have you searched our list archives?
  http://archives.postgresql.org


Re: [GENERAL] Tracking row updates - race condition

2005-03-29 Thread Harald Fuchs
In article <[EMAIL PROTECTED]>,
Alex Adriaanse <[EMAIL PROTECTED]> writes:

> Thanks for the input everyone.  I think Harald's approach will work
> well...

I'm not so sure anymore :-(

Consider something like that:

UPDATE tbl SET col1 = 1 WHERE col2 = 1;
UPDATE tbl SET col1 = 2 WHERE col2 = 1;

with not much time in between.  By using NULL temporarily, you destroy
the ordering.  The client won't miss an UPDATE, but it might execute
the second one before the first.  Neither my idea nor yours appears to
take care of that.


---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org
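
[One partial remedy — an editor's sketch, not something proposed in the
thread — is to stamp each change with a value from a sequence and have
clients replay changes in revision order.  Note that this does not
fully close the race either: a transaction holding a lower revision can
still commit after a higher one, so readers must tolerate late-arriving
rows.  Assumes a table tbl with an integer revision column:]

```sql
CREATE SEQUENCE tbl_revision_seq;

-- Stamp every inserted/updated row with a strictly increasing revision.
CREATE OR REPLACE FUNCTION stamp_revision() RETURNS trigger AS '
BEGIN
    NEW.revision := nextval(''tbl_revision_seq'');
    RETURN NEW;
END;
' LANGUAGE plpgsql;

CREATE TRIGGER tbl_stamp_revision
    BEFORE INSERT OR UPDATE ON tbl
    FOR EACH ROW EXECUTE PROCEDURE stamp_revision();

-- Clients then fetch changes in a defined order:
--   SELECT * FROM tbl WHERE revision > <last_seen> ORDER BY revision;
```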


Re: [GENERAL] plpython function problem workaround

2005-03-29 Thread Sim Zacks
The only "issue" that I have found with it is similar to an issue I
posted about multiline strings in general, which does not seem to be
considered a bug.
I would say it would suffice if it were documented that any newlines in
a Python function, including embedded newlines, are *NIX newlines no
matter what operating system the function is created on.

As an example - Windows PGAdmin client. Linux Server:
create table test1(f1 varchar(50));

insert into test1 values('this is a multi line string
line2
line3
')

select * from test1 where f1='this is a multi line string
line2
line3
'
--returns 1 row

create or replace function testnewlines() returns int as
$$
x=plpy.execute("""select f1 from test1 where f1='this is a multi line string\r\nline2\r\nline3\r\n'""")
return x.nrows()
$$ language 'plpythonu'
--returns 1

create or replace function testnewlines() returns int as
$$
x=plpy.execute("""select f1 from test1 where f1='this is a multi line string
line2
line3
'""")
return x.nrows()
$$ language 'plpythonu'

--returns 0
Thank You
Sim Zacks




On Fri, Mar 18, 2005 at 10:12:05PM -0700, Michael Fuhr wrote:
> 
> I just submitted a small patch to convert CRLF => LF, CR => LF.

This patch is in 8.0.2beta1, so PL/Python users might want to test
it before 8.0.2 is released.  See the recent "8.0.2 Beta Available"
announcement:

http://archives.postgresql.org/pgsql-general/2005-03/msg01311.php

-- 
Michael Fuhr
http://www.fuhr.org/~mfuhr/

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster