Re: [GENERAL] max_fsm_relations

2009-02-27 Thread Gauthier, Dave
For the time being, I dropped a few tables in a scratch DB that I was 
experimenting with.  I just reran the app that gave me the messages before and 
this time no messages.  Tonight, I'll cycle the DB with the new fsm value.

Thanks for all the help!

(BTW, just have to say that the help I get here is faster and better than what 
we used to pay for from Oracle)

-dave

-Original Message-
From: Tom Lane [mailto:t...@sss.pgh.pa.us] 
Sent: Friday, February 27, 2009 4:53 PM
To: Bill Moran
Cc: Gauthier, Dave; pgsql-general@postgresql.org
Subject: Re: [GENERAL] max_fsm_relations 

Bill Moran  writes:
> In response to "Gauthier, Dave" :
>> There is no way I have 1000 tables/indexes.  But maybe it's counting 
>> table/index file extensions in the mix?  What's the metadata query to see 
>> these 1000 relations?

> Are you counting tables, indexes, sequences, pg_toast tables, system tables?

I think sequences don't count here, but toast tables and system catalogs
surely do.  Also, as you said, it's the total across all databases in
the installation that counts.

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] max_fsm_relations

2009-02-27 Thread Tom Lane
Bill Moran  writes:
> In response to "Gauthier, Dave" :
>> There is no way I have 1000 tables/indexes.  But maybe it's counting 
>> table/index file extensions in the mix?  What's the metadata query to see 
>> these 1000 relations?

> Are you counting tables, indexes, sequences, pg_toast tables, system tables?

I think sequences don't count here, but toast tables and system catalogs
surely do.  Also, as you said, it's the total across all databases in
the installation that counts.
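
As a rough way of counting those per database (a sketch - the relkind filter
is only an approximation of what the free space map tracks: ordinary tables,
indexes and toast tables; run it in each database and add up the results):

select count(*) from pg_class where relkind in ('r', 'i', 't');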

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] max_fsm_relations

2009-02-27 Thread Greg Smith

On Fri, 27 Feb 2009, Gauthier, Dave wrote:

Is there a way I can spin that in without rebooting the DB (and kicking 
my user off)?


Nope:

# select name,context from pg_settings where name='max_fsm_pages';
     name      |  context
---------------+------------
 max_fsm_pages | postmaster

That's the definitive way to answer questions like this.  See "When they 
take effect" at 
http://www.postgresqldocs.org/wiki/Tuning_Your_PostgreSQL_Server for a 
quick description of what the context stuff means.  "postmaster" means you 
need to restart the server, in this case (like many of the other 
parameters in that class) because the parameter changes a shared memory 
allocation, which is only done at startup.
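
The same query answers the question for the settings actually being discussed
here, and (as a sketch) lists everything else in the restart-only class:

# select name,context from pg_settings where name like 'max_fsm%';
# select name from pg_settings where context='postmaster' order by name;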


--
* Greg Smith gsm...@gregsmith.com http://www.gregsmith.com Baltimore, MD

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] max_fsm_relations

2009-02-27 Thread Bill Moran
In response to "Gauthier, Dave" :

> Ya, most of it's system stuff.  OK, I see where the 1000 comes from.  I 
> bumped it up to 1200 in postgresql.conf.  Is there a way I can spin that in 
> without rebooting the DB (and kicking my user off)?

No.  Unless something has changed that I'm not aware of, you have to do
a restart for PG to change that value, as it allocates the memory on
startup.

Unless you're expecting a considerable amount of table bloat in the
near future, you can just reset the value, then schedule the server
to restart during your next off-peak time.  The server's not going to
melt down immediately (unless you've got a _TON_ of update activity).

-- 
Bill Moran
http://www.potentialtech.com
http://people.collaborativefusion.com/~wmoran/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] max_fsm_relations

2009-02-27 Thread Gauthier, Dave
Ya, most of it's system stuff.  OK, I see where the 1000 comes from.  I bumped 
it up to 1200 in postgresql.conf.  Is there a way I can spin that in without 
rebooting the DB (and kicking my user off)?

-dave

-Original Message-
From: Bill Moran [mailto:wmo...@potentialtech.com] 
Sent: Friday, February 27, 2009 4:00 PM
To: Gauthier, Dave
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] max_fsm_relations

In response to "Gauthier, Dave" :

> There is no way I have 1000 tables/indexes.  But maybe it's counting 
> table/index file extensions in the mix?  What's the metadata query to see 
> these 1000 relations?

Are you counting tables, indexes, sequences, pg_toast tables, system tables?

SELECT relname,reltype from pg_class;

Make sure you do that query for every database and add them up.

-- 
Bill Moran
http://www.potentialtech.com
http://people.collaborativefusion.com/~wmoran/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] max_fsm_relations

2009-02-27 Thread Bill Moran
In response to "Gauthier, Dave" :

> There is no way I have 1000 tables/indexes.  But maybe it's counting 
> table/index file extensions in the mix?  What's the metadata query to see 
> these 1000 relations?

Are you counting tables, indexes, sequences, pg_toast tables, system tables?

SELECT relname,reltype from pg_class;

Make sure you do that query for every database and add them up.
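
A per-kind breakdown (just a sketch along the same lines) can make the total
easier to interpret - 'r' = table, 'i' = index, 't' = toast table,
'S' = sequence, 'v' = view:

SELECT relkind, count(*) FROM pg_class GROUP BY relkind ORDER BY relkind;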

-- 
Bill Moran
http://www.potentialtech.com
http://people.collaborativefusion.com/~wmoran/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] max_fsm_relations

2009-02-27 Thread Gauthier, Dave
There is no way I have 1000 tables/indexes.  But maybe it's counting 
table/index file extensions in the mix?  What's the metadata query to see these 
1000 relations?

-dave


-Original Message-
From: Bill Moran [mailto:wmo...@potentialtech.com] 
Sent: Friday, February 27, 2009 3:03 PM
To: Gauthier, Dave
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] max_fsm_relations

In response to "Gauthier, Dave" :

> I just did a Vacuum Analyze on a DB.  It worked OK, but I got...
> 
> 
> NOTICE: max_fsm_relations(1000) equals the number of relations checked
> HINT:  You have at least 1000 relations.  Consider increasing the 
> configuration parameter "max_fsm_relations"

Then it didn't work OK.

> I browsed around and learned that this has to do with keeping track of free 
> disk space.
> Q: Is there a problem with leaving this alone?  How serious is this if it is 
> a problem?

Vacuum is probably not doing all that it needs to.  Which means some tables
will grow until they use up all your disk space.

> Q: Is there a way I can shrink this number (reload the data to consume the 
> free space perhaps?)

Drop some databases, tables, sequences, etc.  That will reduce the number
of relations that PG has to track.

Or, just raise the parameter and restart postgres.

-- 
Bill Moran
http://www.potentialtech.com
http://people.collaborativefusion.com/~wmoran/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] max_fsm_relations

2009-02-27 Thread Bill Moran
In response to "Gauthier, Dave" :

> I just did a Vacuum Analyze on a DB.  It worked OK, but I got...
> 
> 
> NOTICE: max_fsm_relations(1000) equals the number of relations checked
> HINT:  You have at least 1000 relations.  Consider increasing the 
> configuration parameter "max_fsm_relations"

Then it didn't work OK.

> I browsed around and learned that this has to do with keeping track of free 
> disk space.
> Q: Is there a problem with leaving this alone?  How serious is this if it is 
> a problem?

Vacuum is probably not doing all that it needs to.  Which means some tables
will grow until they use up all your disk space.

> Q: Is there a way I can shrink this number (reload the data to consume the 
> free space perhaps?)

Drop some databases, tables, sequences, etc.  That will reduce the number
of relations that PG has to track.

Or, just raise the parameter and restart postgres.
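
For reference, a minimal sketch of that postgresql.conf edit (1200 is the value
mentioned earlier in the thread; the max_fsm_pages figure is purely
illustrative, not a recommendation):

max_fsm_relations = 1200
max_fsm_pages = 200000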

-- 
Bill Moran
http://www.potentialtech.com
http://people.collaborativefusion.com/~wmoran/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] max_fsm_relations

2009-02-27 Thread Joshua D. Drake
On Fri, 2009-02-27 at 12:37 -0700, Gauthier, Dave wrote:
> I just did a Vacuum Analyze on a DB.  It worked OK, but I got...
> 
>  
> 
>  
> 
> NOTICE: max_fsm_relations(1000) equals the number of relations checked
> 
> HINT:  You have at least 1000 relations.  Consider increasing the
> configuration parameter “max_fsm_relations”
> 
>  
> 
> I browsed around and learned that this has to do with keeping track of
> free disk space.
> 
> Q: Is there a problem with leaving this alone?  How serious is this if
> it is a problem?
> 
> Q: Is there a way I can shrink this number (reload the data to consume
> the free space perhaps?)

This is "relations" which means tables, indexes etc... So unless you
start dropping things, no you can't reduce it.

Just increase it a bit (say 20%) it won't hurt you.

Note it does use a little shared_memory,.
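
As a quick sketch of where you stand: the current values are visible with
SHOW, and the last few lines of a database-wide VACUUM VERBOSE report how much
of the free space map is actually in use.

SHOW max_fsm_relations;
SHOW max_fsm_pages;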

Joshua D. Drake


> 
>  
> 
> Thanks
> 
> -dave
> 
>  
> 
> 
-- 
PostgreSQL - XMPP: jdr...@jabber.postgresql.org
   Consulting, Development, Support, Training
   503-667-4564 - http://www.commandprompt.com/
   The PostgreSQL Company, serving since 1997


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] max_fsm_relations

2009-02-27 Thread Gauthier, Dave
I just did a Vacuum Analyze on a DB.  It worked OK, but I got...


NOTICE: max_fsm_relations(1000) equals the number of relations checked
HINT:  You have at least 1000 relations.  Consider increasing the configuration 
parameter "max_fsm_relations"

I browsed around and learned that this has to do with keeping track of free 
disk space.
Q: Is there a problem with leaving this alone?  How serious is this if it is a 
problem?
Q: Is there a way I can shrink this number (reload the data to consume the free 
space perhaps?)

Thanks
-dave



Re: [GENERAL] strange performance problem

2009-02-27 Thread Linos

Richard Huxton escribió:

Linos wrote:

Richard Huxton escribió:

Linos wrote:

2009-02-27 13:51:15 CET 127.0.0.1LOG:  duración: 4231.045 ms  sentencia:
SELECT "nombre", "subfamilia_id", "id_familia", "hasta", "foto",
"id_seccion", "id_categoria" FROM "modelo_subfamilia"
PSQL with \timing:
-development: Time: 72,441 ms
-server: Time: 78,762 ms
but if i load it from QT or from pgadmin i get more than 4 seconds in
server and ~100ms in develoment machime, if i try the query without the
"foto" column i get 2ms in development and 30ms in server

OK, so:
1. No "foto" - both quick
2. psql + "foto" - both slow
3. QT + "foto"   - slow only on server

1.No "foto" -both quick but still a noticeable
difference between them 2ms develoment - 30ms server
2. psql + "foto"-both quick really, they are about 70ms,
not bad giving that foto are bytea with small png images.


Ah, sorry - I read the psql timings as 72 thousand ms (72 seconds) - of
course you're using European decimal marks.


3. QT or WXWindows + "foto"   -slow only one server yes.


The bit that puzzles me is why both are slow in #2 and not in #3.


OK - well, the fact that both psql are fast means there's nothing too
wrong with your setup. It must be something to do with the application
libraries.


After the vacuum full verbose and reindex still the same problem (i had
tried the vacuum before).


OK. Worth ruling it out.


1- The same in the two machines, tcp/ip with localhost.


Hmm...


2- I am exactly the same code in the two machines and the same pgadmin3
version too.


Good. We can rule that out.


3- Ever the entire result set.


Good.


4- I am using es_ES.UTF8 in the two machines


Good.


What can be using wxwindows and QT to access postgresql that psql it is
not using, libpq?


Well, I'm pretty sure that pgadmin will be using libpq at some level,
even if there is other code above it.

Either:

1. One machine (the fast one) is actually using unix sockets and not
tcp/ip+localhost like you think.
2. The networking setup is different on each.
3. Something your code is doing with the bytea data is slower on one
machine than another. I seem to remember that pgadmin used to be quite
slow at displaying large amounts of data. They did some work on that,
but it might be that your use-case still suffers from it.

For #1 try the psql test again, but with "-h localhost" and "-h /tmp"
(or whatever directory your unix socket is in - might be
/var/run/postgresql or similar too).


Ok, thanks for the tip, now I know where to search. After trying with -h
localhost, psql is slow too on the server: from 80,361 ms with \timing to
4259,453 ms using -h localhost. Any ideas what the problem could be here? I am
going to do what you suggest and capture/analyze the traffic. After finding
this hole I have tried on another debian server with the same kernel 2.6.26 and
I have the same problem (my development machine is Arch Linux with 2.6.28).


Regards,
Miguel Angel.


For #2, you can always try timing "psql -h localhost ... > /dev/null" on
both machines. If you capture port 5432 with something like "tcpdump -w
ip.dump host localhost and port 5432" you can then use wireshark to see
exactly why it's slow.

For #3, I guess you'd need to reduce your code to just fetching the data
and time that. You may have already done this of course.

HTH




--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Question about no unchanging update rule + ALTER

2009-02-27 Thread Josh Trutwin
On Fri, 27 Feb 2009 09:34:08 +
Richard Huxton  wrote:

> > CREATE TRIGGER prevent_empty_updates BEFORE UPDATE ON test FOR
> > EACH ROW EXECUTE PROCEDURE prevent_empty_updates();
> > 
> > Actually after writing this, this TOO does not seem to work after
> > an ADD COLUMN.  :/  Any suggestions?  
> 
> Try disconnecting and reconnecting to the database - that should do
> it. The function will be "compiled" the first time it is called in
> a session , so the * is probably getting expanded then. There's
> been a lot of work done to provide automatic re-planning in these
> sort of situations, but maybe you're hitting a corner-case.

Thanks for the info - I'll test out the disconnect and report back.

Josh.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] strange performance problem

2009-02-27 Thread Richard Huxton
Linos wrote:
> Richard Huxton escribió:
>> Linos wrote:
>>> 2009-02-27 13:51:15 CET 127.0.0.1LOG:  duración: 4231.045 ms  sentencia:
>>> SELECT "nombre", "subfamilia_id", "id_familia", "hasta", "foto",
>>> "id_seccion", "id_categoria" FROM "modelo_subfamilia"
>>
>>> PSQL with \timing:
>>> -development: Time: 72,441 ms
>>> -server: Time: 78,762 ms
>>
>>> but if i load it from QT or from pgadmin i get more than 4 seconds in
>>> server and ~100ms in develoment machime, if i try the query without the
>>> "foto" column i get 2ms in development and 30ms in server
>>
>> OK, so:
>> 1. No "foto" - both quick
>> 2. psql + "foto" - both slow
>> 3. QT + "foto"   - slow only on server
> 
> 1.No "foto" -both quick but still a noticeable
> difference between them 2ms develoment - 30ms server
> 2. psql + "foto"-both quick really, they are about 70ms,
> not bad giving that foto are bytea with small png images.

Ah, sorry - I read the psql timings as 72 thousand ms (72 seconds) - of
course you're using European decimal marks.

> 3. QT or WXWindows + "foto"   -slow only one server yes.
> 
>> The bit that puzzles me is why both are slow in #2 and not in #3.

OK - well, the fact that both psql are fast means there's nothing too
wrong with your setup. It must be something to do with the application
libraries.

> After the vacuum full verbose and reindex still the same problem (i had
> tried the vacuum before).

OK. Worth ruling it out.

> 1- The same in the two machines, tcp/ip with localhost.

Hmm...

> 2- I am exactly the same code in the two machines and the same pgadmin3
> version too.

Good. We can rule that out.

> 3- Ever the entire result set.

Good.

> 4- I am using es_ES.UTF8 in the two machines

Good.

> What can be using wxwindows and QT to access postgresql that psql it is
> not using, libpq?

Well, I'm pretty sure that pgadmin will be using libpq at some level,
even if there is other code above it.

Either:

1. One machine (the fast one) is actually using unix sockets and not
tcp/ip+localhost like you think.
2. The networking setup is different on each.
3. Something your code is doing with the bytea data is slower on one
machine than another. I seem to remember that pgadmin used to be quite
slow at displaying large amounts of data. They did some work on that,
but it might be that your use-case still suffers from it.

For #1 try the psql test again, but with "-h localhost" and "-h /tmp"
(or whatever directory your unix socket is in - might be
/var/run/postgresql or similar too).

For #2, you can always try timing "psql -h localhost ... > /dev/null" on
both machines. If you capture port 5432 with something like "tcpdump -w
ip.dump host localhost and port 5432" you can then use wireshark to see
exactly why it's slow.

For #3, I guess you'd need to reduce your code to just fetching the data
and time that. You may have already done this of course.
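
One extra check that can help (a sketch; the table and column names are taken
from the query in this thread) is simply how much "foto" data the query has to
ship to the client:

SELECT count(*), pg_size_pretty(sum(octet_length(foto))::bigint)
FROM modelo_subfamilia;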

HTH

-- 
  Richard Huxton
  Archonet Ltd

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] when to use "execute" in plpgsql?

2009-02-27 Thread Merlin Moncure
On Fri, Feb 27, 2009 at 12:00 PM, Enrico Sirola  wrote:
> Hello,
>
> I'm having some troubles with the correct use of the execute plpgsql
> statement. Where I work, we have a postgresql db hosting a set of schemas
> all with the same tables and, from time to time, we upgrade the schemas to a
> new version coding a stored procedure like the following (pseudocode):
>
> ---> example use case <
>
> -- upgrade function, gets a schema name as input and upgrades it
> create function upgrade_to_new_version(schema_name name)
> returns void as $$
> begin

I'm not completely sure what the problem is because this is light on
detail, but here's what I bet the problem is.  Static (that is, not
EXECUTE-ed) queries in pl/pgsql functions convert table references in
the function body to fixed 'pointers' to actual tables that are always
schema qualified.  Once the function is run the first time and the
plan generated, changing the schema will have no bearing on which
tables are used.  Thus, the function will not 'float' with the current
schema search path setting.

dynamic sql functions however will always re-look up the tables based
on the search path because the plan is not saved off for the query.

If this is your problem, you have a couple of options:

*) always use dynamic sql in functions that are meant to apply to
multiple schemas in the same session
*) duplicate your function for each schema and make your tables fully
schema qualified
*) DISCARD your plans before running your function
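
A minimal sketch of the difference, using a hypothetical table name "t" (not
from your mail): the static query is planned once and stays bound to whatever
schema "t" resolved to on the first call, while the EXECUTE form re-resolves it
against the current search_path every time.

create or replace function count_static() returns bigint as $f$
begin
  return (select count(*) from t);  -- plan cached; schema fixed at first call
end;
$f$ language plpgsql;

create or replace function count_dynamic() returns bigint as $f$
declare
  n bigint;
begin
  execute 'select count(*) from t' into n;  -- re-planned; follows search_path
  return n;
end;
$f$ language plpgsql;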

merlin

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Postgresql selecting strange index for simple query

2009-02-27 Thread Tom Lane
Maxim Boguk  writes:
> Tom Lane wrote:
>> Could you send me a dump of this test_table off-list?  It seems like
>> there must be something strange about the stats of last_change_time,
>> but I don't feel like guessing about what it is ...

> Here attached is small part of table (1160 rows) wich show same wrong index 
> selection.

Thanks.  It turns out the funny behavior is not because of
last_change_time, but because wrong_index's *first* column is well
correlated with the table ordering.  In the test case it actually
gets a 1.0 correlation score because it's a constant, but I suppose
that in your real example the leading column is well correlated with
the row insertion order.

The planner is favoring the multicolumn index more than it should
because of the correlation, which is an estimation bug that I've
started a thread about on pgsql-hackers.  In the meantime, though,
it seems that the issue only occurs for a limited range of
random_page_cost settings.  At high random_page_cost the index page
fetch cost increases enough to make the "wrong" index not be favored,
and at low random_page_cost the index per-tuple costs do the same.
So as a workaround you might look into whether your fully-cached
situation is modeled better by having both random_page_cost and
seq_page_cost less than 1.0.  In the past we've seen some evidence
that setting them both to 0.1 or so produces results that are more
accurate for databases that are fully cached in RAM.  (Alternatively
you can jack up the various CPU costs, but there are more of them to
mess with.)
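
As a sketch of trying that workaround in one session (the query is only a
stand-in; test_table and last_change_time are the names from your example):

SET random_page_cost = 0.1;
SET seq_page_cost = 0.1;
EXPLAIN ANALYZE
  SELECT * FROM test_table ORDER BY last_change_time DESC LIMIT 10;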

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Standalone ODBC Driver

2009-02-27 Thread Joshua D. Drake
On Fri, 2009-02-27 at 19:06 +0100, Tim Tassonis wrote:
> Joshua D. Drake wrote:
> > On Fri, 2009-02-27 at 13:30 +0100, Tim Tassonis wrote:
> >> Hi all
> >>
> >> I remember, a while ago somebody mentioning an odbc driver for postgres 
> >> that is not dependant on a working postgres client installation. 
> >> Unfortunately I lost the link to it, can anybody remember?
> >>
> > 
> > ODBCng?
> > 
> > http://projects.commandprompt.com/public/odbcng
> 
> Yes, thanks, thats the one. Already up and running!
> 
> Sadly, the current svn versions seem to target only windows. No chance 
> to even compile it under linux, but the old version from 2007 works.

The linux version lags behind. We are working on a unixodbc issue with
newer versions; once that's resolved you will be able to build it on linux.

Joshua D. Drake


> 
> Bye
> Tim
> 
> 
> 
> 
> > 
> >> (
> >> I tested it then and it worked fine for simple task, but then switched 
> >> back to the standard driver, as the datatype information stuff was much 
> >> more complete.
> >>
> >> Now I need to run a postgresql odbc program on a few ancient machines 
> >> whith completely outdated client libraries and no chance of upgrading.
> >> So, the standalone driver would come very handy here again.
> >> )
> >>
> >>
> >>
> >> Bye
> >> Tim
> >>
> 
> 
-- 
PostgreSQL - XMPP: jdr...@jabber.postgresql.org
   Consulting, Development, Support, Training
   503-667-4564 - http://www.commandprompt.com/
   The PostgreSQL Company, serving since 1997


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Standalone ODBC Driver

2009-02-27 Thread Tim Tassonis

Joshua D. Drake wrote:

On Fri, 2009-02-27 at 13:30 +0100, Tim Tassonis wrote:

Hi all

I remember, a while ago somebody mentioning an odbc driver for postgres 
that is not dependant on a working postgres client installation. 
Unfortunately I lost the link to it, can anybody remember?




ODBCng?

http://projects.commandprompt.com/public/odbcng


Yes, thanks, that's the one. Already up and running!

Sadly, the current svn versions seem to target only windows. No chance 
to even compile it under linux, but the old version from 2007 works.


Bye
Tim







(
I tested it then and it worked fine for simple task, but then switched 
back to the standard driver, as the datatype information stuff was much 
more complete.


Now I need to run a postgresql odbc program on a few ancient machines 
whith completely outdated client libraries and no chance of upgrading.

So, the standalone driver would come very handy here again.
)



Bye
Tim




--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Hiding row counts in psql

2009-02-27 Thread Ben Chobot
Is there a way in psql to hide the row counts but keep the column headers? 
The man page talks about \t and --tuples-only, but both of those also 
suppress column headers.


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] when to use "execute" in plpgsql?

2009-02-27 Thread Fernando Moreno
Hi, check this out:
http://archives.postgresql.org/pgsql-general/2008-05/msg00938.php

I would say that execute is the only way to achieve some things
related to schemas and temp tables.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] foxpro, odbc, data types and unnecessary convertions

2009-02-27 Thread Fernando Moreno
Thank you very much for your advice, I guess I'm wasting my time in
this 'problem'. I'm going to check that class, it seems pretty useful.
And by the way...yes, this is a born-dead app (at least on the client
side) and it's likely to be ported to .NET in the future, but like I
said before, it's not my call.

Cheers.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] db_restore and xml data

2009-02-27 Thread Enrico Sirola

Hello,
I have a pgsql database hosting xml data in xml columns. The data have
XML declarations (<?xml ... ?>) at the beginning, so it is saved with

XMLPARSE (DOCUMENT ...)

When I try to restore a database from a dump, pg_restore complains
(correctly) because the data it tries to restore is not xml "content",
so what I usually do is to edit the postgresql.conf file, change
xmloption to 'document', reload the conf file and make
pg_restore happy.
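
(As a sketch, the workaround is just this one line in postgresql.conf plus a
reload; the same setting can also be applied per-database with
ALTER DATABASE ... SET xmloption = document.)

xmloption = 'document'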

However, the documentation (8.13.1) says

"The default is CONTENT, so all forms of XML data are allowed."

from this statement, it seems there are 'content' xmls which are not  
'document' xmls and vice versa, so how do you proceed in this case? It  
seems that potentially you could be unable to restore from a dump.

Thanks for your help,
Enrico


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] when to use "execute" in plpgsql?

2009-02-27 Thread Enrico Sirola

Hello,

I'm having some troubles with the correct use of the execute plpgsql  
statement. Where I work, we have a postgresql db hosting a set of  
schemas all with the same tables and, from time to time, we upgrade  
the schemas to a new version coding a stored procedure like the  
following (pseudocode):


---> example use case <

-- upgrade function, gets a schema name as input and upgrades it
create function upgrade_to_new_version(schema_name name)
returns void as $$
begin

-- become the schema owner
execute 'set role to ' || schema_name;

/* perform DDL and data transformations work here */
/* body here */
end;
language plpgsql volatile strict;

-- schemas_to_upgrade contains a column sname with the names of
-- the schemas needing an upgrade
select upgrade_to_new_version(sname) from schemas_to_upgrade;

->example end<-

The strange thing is that from time to time the function doesn't work.
Or, even worse, it works on one database but not on another. The
issue usually goes away if we rewrite the statements in /* body
here */ to be submitted via execute; apparently the behaviour is
reproducible given a database instance (i.e. it is not random), but it
is impossible (at least for us) to tell in advance if it will happen on
another database (with the same schemas and postgresql version number
which, by the way, is the official 8.3.5 on centos5/x86_64). The "safe"
way to do things (it never breaks) is to pass every statement via
execute, but we would like to dig into this.


What are we missing?
Thanks in advance for your help,
enrico

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Standalone ODBC Driver

2009-02-27 Thread Joshua D. Drake
On Fri, 2009-02-27 at 13:30 +0100, Tim Tassonis wrote:
> Hi all
> 
> I remember, a while ago somebody mentioning an odbc driver for postgres 
> that is not dependant on a working postgres client installation. 
> Unfortunately I lost the link to it, can anybody remember?
> 

ODBCng?

http://projects.commandprompt.com/public/odbcng

> (
> I tested it then and it worked fine for simple task, but then switched 
> back to the standard driver, as the datatype information stuff was much 
> more complete.
> 
> Now I need to run a postgresql odbc program on a few ancient machines 
> whith completely outdated client libraries and no chance of upgrading.
> So, the standalone driver would come very handy here again.
> )
> 
> 
> 
> Bye
> Tim
> 
-- 
PostgreSQL - XMPP: jdr...@jabber.postgresql.org
   Consulting, Development, Support, Training
   503-667-4564 - http://www.commandprompt.com/
   The PostgreSQL Company, serving since 1997


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PostgreSQL to Oracle

2009-02-27 Thread Martin Gainty

1) Strip all Postgres- and/or user-specific custom datatypes.
For example, if you see an IN or OUT variable declared as fubar, chances are
this won't map correctly in Oracle.

2) Get to know packages; they work well to aggregate and organise:
2a) Functions and procedures which are used for a specific purpose. For example,
HL7 events such as A01, A03 or A08 would be different implementations
of a package base class A00 (Health Level 7 Base Structure).
2b) Variables and/or attributes which support the package type; in my case
HL7_Event_Type would be A01, A03 or A08.

The safest way to transport functions is to code the required functionality in
Java, such as:

CREATE OR REPLACE PACKAGE java_pkg AS
 FUNCTION phone_fmt
  (p_phone IN VARCHAR2)
 RETURN VARCHAR2
 AS LANGUAGE JAVA
  NAME 'JavaPackageName.JavaClassName.getString(java.lang.String) return java.lang.String';
END;
/

/* contents of JavaPackageName.JavaClassName */
package JavaPackageName;

public class JavaClassName
{
  public static String aString = "fubar";

  public JavaClassName() { }  // default constructor

  public JavaClassName(String aString)  // constructor with one String argument
  {
    System.out.println("JavaClassName has one input which is a String, contents=" + aString);
  }

  // target of the call specification above; must be static to be callable from SQL
  public static String getString(String input_string)
  {
    aString = new java.lang.String(input_string);
    return aString;
  }
}

HTH
Martin 
__ 
Disclaimer and confidentiality note 
Everything in this e-mail and any attachments relates to the official business 
of Sender. This transmission is of a confidential nature and Sender does not 
endorse distribution to any party other than intended recipient. Sender does 
not necessarily endorse content contained within this transmission. 




Date: Thu, 26 Feb 2009 22:36:56 -0800
From: abr_...@yahoo.com
Subject: [GENERAL] PostgreSQL to Oracle
To: pgsql-general@postgresql.org



Hi All,

I want to migrate from PostgreSQL to Oracle and need any tool preferably open 
source. And I am specially concerned with stored procedures / functions.

Regards,
Abdul Rehman. 



Re: [GENERAL] strange performance problem

2009-02-27 Thread Linos

Richard Huxton escribió:

Linos wrote:

2009-02-27 13:51:15 CET 127.0.0.1LOG:  duración: 4231.045 ms  sentencia:
SELECT "nombre", "subfamilia_id", "id_familia", "hasta", "foto",
"id_seccion", "id_categoria" FROM "modelo_subfamilia"



PSQL with \timing:
-development: Time: 72,441 ms
-server: Time: 78,762 ms



but if i load it from QT or from pgadmin i get more than 4 seconds in
server and ~100ms in develoment machime, if i try the query without the
"foto" column i get 2ms in development and 30ms in server


OK, so:
1. No "foto" - both quick
2. psql + "foto" - both slow
3. QT + "foto"   - slow only on server


1.No "foto"	 -both quick but still a noticeable difference between them 2ms 
develoment - 30ms server
2. psql + "foto"-both quick really, they are about 70ms, not bad giving that 
foto are bytea with small png images.

3. QT or WXWindows + "foto"  -slow only one server yes.



The bit that puzzles me is why both are slow in #2 and not in #3.


First things first: run "VACUUM FULL VERBOSE modelo_subfamilia" on both
systems and see how many pages were being taken up. I'm guessing it will
be more on the server, but is it a *lot* more?

>

Then run "REINDEX TABLE modela_subfamilia" to clean up your indexes.

If it's still a problem that suggests the two systems are doing
something different with the bytea encoding/decoding. Check:

1. Connection settings - is one tcp/ip and the other unix sockets?
2. Binary/text mode - are you using a binary format for query results on
the development machine?
3. Are you fetching the entire result-set on the server and only the
first row(s) on your dev. machine?
4. Encoding/locale differences - can't see how this would matter for
bytea, but worth ruling out.



After the vacuum full verbose and reindex, still the same problem (I had tried
the vacuum before).


1- The same in the two machines, tcp/ip with localhost.
2- I am using exactly the same code in the two machines, and the same pgadmin3
version too.
3- Always the entire result set.
4- I am using es_ES.UTF8 in the two machines.

What can wxwindows and QT be using to access postgresql that psql is not
using - libpq?


Regards,
Miguel Angel.

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] strange performance problem

2009-02-27 Thread Richard Huxton
Linos wrote:
> 2009-02-27 13:51:15 CET 127.0.0.1LOG:  duración: 4231.045 ms  sentencia:
> SELECT "nombre", "subfamilia_id", "id_familia", "hasta", "foto",
> "id_seccion", "id_categoria" FROM "modelo_subfamilia"

> PSQL with \timing:
> -development: Time: 72,441 ms
> -server: Time: 78,762 ms

> but if i load it from QT or from pgadmin i get more than 4 seconds in
> server and ~100ms in develoment machime, if i try the query without the
> "foto" column i get 2ms in development and 30ms in server

OK, so:
1. No "foto" - both quick
2. psql + "foto" - both slow
3. QT + "foto"   - slow only on server

The bit that puzzles me is why both are slow in #2 and not in #3.


First things first: run "VACUUM FULL VERBOSE modelo_subfamilia" on both
systems and see how many pages were being taken up. I'm guessing it will
be more on the server, but is it a *lot* more?

Then run "REINDEX TABLE modela_subfamilia" to clean up your indexes.

If it's still a problem that suggests the two systems are doing
something different with the bytea encoding/decoding. Check:

1. Connection settings - is one tcp/ip and the other unix sockets?
2. Binary/text mode - are you using a binary format for query results on
the development machine?
3. Are you fetching the entire result-set on the server and only the
first row(s) on your dev. machine?
4. Encoding/locale differences - can't see how this would matter for
bytea, but worth ruling out.

-- 
  Richard Huxton
  Archonet Ltd

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] strange performance problem

2009-02-27 Thread Linos
Hello, I have the same table with the same data in my development machine and
in a small server in production. The table is this:


           Tabla «modelo_subfamilia»
    Columna    |         Tipo          | Modificadores
---------------+-----------------------+---------------
 nombre        | character varying(40) | not null
 subfamilia_id | character(4)          | not null
 id_familia    | character(4)          | not null
 hasta         | character(4)          | not null
 foto          | bytea                 |
 id_seccion    | integer               |
 id_categoria  | integer               |
Índices:
    «modelo_subfamilia_pkey» PRIMARY KEY, btree (subfamilia_id)
    «uq_hasta_index_modsubfam» UNIQUE, btree (hasta)
    «nombre_index_modsubfam» btree (nombre)

the column "foto" has a toast table, aside from the fact that it have in the 
server three triggers they are exactly the same, with the same data too, my 
development machine has version 8.3.6 (linux kernel 2.6.28) and production 
server has version 8.3.3 (linux kernel 2.6.26), the settings in postgresql.conf 
are nearly the same except for work_men (24 server, 36 development machine) and 
effective_cache_size (1024 server, 1536 development machine), they have the same 
sysctl settings and limits too, and the same mount options for the ext3 
filesystem that have the data, i have a single sata disk (wd velociraptor) in my 
development machine and the server it is using a linux software raid10 with 4 
sata disks.


I have detected that a simple query from the application i am developing in QT 
it is really fast in my machine and takes too much time in production server, i 
am logging the queries that spend more than 500ms so i have this in the log.


2009-02-27 13:51:15 CET 127.0.0.1LOG:  duración: 4231.045 ms  sentencia: SELECT 
"nombre", "subfamilia_id", "id_familia", "hasta", "foto", "id_seccion", 
"id_categoria" FROM "modelo_subfamilia"


So I have been testing the same query on my machine and on the server to see
the difference.


EXPLAIN ANALYZE:
-development:
Seq Scan on modelo_subfamilia  (cost=0.00..11.68 rows=368 width=73) (actual 
time=0.010..0.092 rows=368 loops=1)

Total runtime: 0.174 ms

-server:
Seq Scan on modelo_subfamilia  (cost=0.00..6.10 rows=368 width=69) (actual 
time=0.008..0.158 rows=368 loops=1)

Total runtime: 0.289 ms

PSQL with \timing:
-development: Time: 72,441 ms
-server: Time: 78,762 ms

But if I load it from QT or from pgadmin I get more than 4 seconds on the
server and ~100ms on the development machine. If I try the query without the
"foto" column I get 2ms in development and 30ms on the server, so the
difference is there anyway, but not on the psql command line - it seems to
happen only when accessing from a graphical front end. With the complete
query, foto column included, the postgresql process eats 90% of the cpu for
the full 4 seconds it takes to send me the result, so it does not seem to be a
problem with cpu usage in the graphic libs (neither QT nor WxWindows). How
could I debug this problem? Where should I begin to search? Thanks.


Regards,
Miguel Angel.

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] funny view/temp table problem with query

2009-02-27 Thread Grzegorz Jaśkiewicz
On Fri, Feb 27, 2009 at 11:57 AM, Gregory Stark  wrote:

> Uh, we get a lot of really mangled SQL and explain plans -- I don't see
> anything wrong with these. If the question was unclear it sounds like it's
> just because it's a fairly subtle problem and was hard to describe. Needing
> two cracks at describing the problem is pretty much par for the course here.
>
> I haven't tested the query to see what's going on but if the problem is due to
> random() then in 8.4 you could use WITH to guarantee that the subquery is
> executed precisely once and the results reused as-is subsequently.

In that case: actually I need to get a random array of packages that differs
as much as possible for every account-id.
I'll try to create a simpler example, with some script that would
generate data for you.



-- 
GJ

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Standalone ODBC Driver

2009-02-27 Thread Tim Tassonis

Hi all

I remember a while ago somebody mentioning an odbc driver for postgres 
that is not dependent on a working postgres client installation. 
Unfortunately I lost the link to it; can anybody remember?


(
I tested it then and it worked fine for simple tasks, but then switched 
back to the standard driver, as its datatype information stuff was much 
more complete.


Now I need to run a postgresql odbc program on a few ancient machines 
with completely outdated client libraries and no chance of upgrading.

So, the standalone driver would come very handy here again.
)



Bye
Tim

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] funny view/temp table problem with query

2009-02-27 Thread Gregory Stark
Alban Hertroys  writes:

> On Feb 26, 2009, at 11:02 AM, Grzegorz Jaśkiewicz wrote:
>
>> looks like you completely misunderstood my question.
>
> I'm not surprised. What do you expect with random capitalisation,  random 
> table
> alias names and random indentation combined with queries  getting wrapped by
> the mailing-list software? 

Uh, we get a lot of really mangled SQL and explain plans -- I don't see
anything wrong with these. If the question was unclear it sounds like it's
just because it's a fairly subtle problem and was hard to describe. Needing
two cracks at describing the problem is pretty much par for the course here.

I haven't tested the query to see what's going on but if the problem is due to
random() then in 8.4 you could use WITH to guarantee that the subquery is
executed precisely once and the results reused as-is subsequently.
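
A minimal sketch of that 8.4 WITH approach, using hypothetical table names
(accounts, packages - not taken from the original query): the CTE is evaluated
exactly once, so the join reuses one materialised random pick instead of
re-running random() per outer row.

WITH random_packages AS (
    SELECT id FROM packages ORDER BY random() LIMIT 10
)
SELECT a.account_id, rp.id
FROM accounts a
CROSS JOIN random_packages rp;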

-- 
  Gregory Stark
  EnterpriseDB  http://www.enterprisedb.com
  Ask me about EnterpriseDB's PostGIS support!

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] function to return rows as columns?

2009-02-27 Thread Linos

A. Kretschmer escribió:

In response to Linos :

Hello,
i have a query that returns a result set like this:

item | size | stock
123  | XL   | 10
123  | XXL | 5
123  | XS   | 3

and i would like get the results like this:

item | XL | XXL | XS
123  | 10  | 5 | 3


Other solution with plain SQL:

test=*# select * from linos ;
 item | size | stock
------+------+-------
  123 | XL   |    10
  123 | XXL  |     5
  123 | XS   |     3
(3 rows)

test=*# select item, sum(case when size='XL' then stock else 0 end) as
"XL", sum(case when size='XXL' then stock else 0 end) as "XXL", sum(case
when size='XS' then stock else 0 end) as "XS" from linos where item=123
group by item;
 item | XL | XXL | XS
------+----+-----+----
  123 | 10 |   5 |  3
(1 row)


Andreas


I think this approach has a problem (at least with my data): I have a somewhat
large number of different sizes, about 200 or so (although I presented a
limited example now). Thanks anyway for the alternative way to do it, Andreas.


Regards,
Miguel Angel.

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] function to return rows as columns?

2009-02-27 Thread A. Kretschmer
In response to Linos :
> Hello,
>   i have a query that returns a result set like this:
> 
> item | size | stock
> 123  | XL   | 10
> 123  | XXL | 5
> 123  | XS   | 3
> 
> and i would like get the results like this:
> 
> item | XL | XXL | XS
> 123  | 10  | 5 | 3

Other solution with plain SQL:

test=*# select * from linos ;
 item | size | stock
------+------+-------
  123 | XL   |    10
  123 | XXL  |     5
  123 | XS   |     3
(3 rows)

test=*# select item, sum(case when size='XL' then stock else 0 end) as
"XL", sum(case when size='XXL' then stock else 0 end) as "XXL", sum(case
when size='XS' then stock else 0 end) as "XS" from linos where item=123
group by item;
 item | XL | XXL | XS
------+----+-----+----
  123 | 10 |   5 |  3
(1 row)


Andreas
-- 
Andreas Kretschmer
Kontakt:  Heynitz: 035242/47150,   D1: 0160/7141639 (mehr: -> Header)
GnuPG-ID:   0x3FFF606C, privat 0x7F4584DA   http://wwwkeys.de.pgp.net

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] function to return rows as columns?

2009-02-27 Thread Linos

Thomas Kellerer escribió:

Linos, 27.02.2009 11:41:

Hello,
i have a query that returns a result set like this:

item | size | stock
123  | XL   | 10
123  | XXL | 5
123  | XS   | 3

and i would like get the results like this:

item | XL | XXL | XS
123  | 10  | 5 | 3

i have been thinking how to do it with a plpgsql function but the 
number of sizes depend on the item that it is queried so i can not 
create a type and return it, i could create it like a text 
concatenating the stock and size of every row and returning the 
complete line text but i would have to process it in the application 
anyway so i am searching a solution that lets me return it like a record.


I have been searching the list and maybe i could create the record 
type inside the function and to get the correct names and number of 
columns in the application side launching a query to get the number of 
sizes before call the function to specify the columns in the function 
call but maybe i am missing anything important here? any better (or 
more correct) way to do this? Thanks.


Check out the "crosstab" function in the "Tablefunc" module:

http://www.postgresql.org/docs/current/static/tablefunc.html




I knew I was missing something, hehehe. Thanks Thomas.



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] function to return rows as columns?

2009-02-27 Thread Thomas Kellerer

Linos, 27.02.2009 11:41:

Hello,
i have a query that returns a result set like this:

item | size | stock
123  | XL   | 10
123  | XXL | 5
123  | XS   | 3

and i would like get the results like this:

item | XL | XXL | XS
123  | 10  | 5 | 3

i have been thinking how to do it with a plpgsql function but the number 
of sizes depend on the item that it is queried so i can not create a 
type and return it, i could create it like a text concatenating the 
stock and size of every row and returning the complete line text but i 
would have to process it in the application anyway so i am searching a 
solution that lets me return it like a record.


I have been searching the list and maybe i could create the record type 
inside the function and to get the correct names and number of columns 
in the application side launching a query to get the number of sizes 
before call the function to specify the columns in the function call but 
maybe i am missing anything important here? any better (or more correct) 
way to do this? Thanks.


Check out the "crosstab" function in the "Tablefunc" module:

http://www.postgresql.org/docs/current/static/tablefunc.html
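
A minimal sketch with the sample data from this thread (it assumes the contrib
tablefunc module is installed, and guesses integer types for item and stock):

SELECT *
FROM crosstab(
    'SELECT item, size, stock FROM linos ORDER BY 1, 2',
    'SELECT DISTINCT size FROM linos ORDER BY 1'
) AS ct(item integer, "XL" integer, "XS" integer, "XXL" integer);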




--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] function to return rows as columns?

2009-02-27 Thread Linos

Hello,
I have a query that returns a result set like this:

item | size | stock
123  | XL   | 10
123  | XXL | 5
123  | XS   | 3

and I would like to get the results like this:

item | XL | XXL | XS
123  | 10  | 5 | 3

I have been thinking about how to do it with a plpgsql function, but the number
of sizes depends on the item that is queried, so I can not create a type and
return it. I could build it as text, concatenating the stock and size of every
row and returning the complete line as text, but I would have to process it in
the application anyway, so I am searching for a solution that lets me return it
as a record.


I have been searching the list, and maybe I could create the record type inside
the function and get the correct names and number of columns on the application
side by launching a query for the number of sizes before calling the function,
to specify the columns in the function call - but maybe I am missing something
important here? Any better (or more correct) way to do this? Thanks.


Regards,
Miguel Angel.

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] funny view/temp table problem with query

2009-02-27 Thread Scott Marlowe
On Fri, Feb 27, 2009 at 3:16 AM, Grzegorz Jaśkiewicz  wrote:
> 2009/2/27 Scott Marlowe :
>> Nope.
>
> as far as I can understand it, if I do the same thing in two steps,
> and in one step. And the latter is broken, because of some internal
> process/optimization/whatever - that's a bug to me.

You're assuming that creating a temp data set in a table and join
semantics are the same.  They are not.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] funny view/temp table problem with query

2009-02-27 Thread Grzegorz Jaśkiewicz
2009/2/27 Scott Marlowe :
> Nope.

As far as I can understand it, I do the same thing in two steps and in one
step, and the latter is broken because of some internal
process/optimization/whatever - that's a bug to me.

Unless I'm wrong to expect it to work, and it was just pure luck that it
worked in two steps...



-- 
GJ

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] funny view/temp table problem with query

2009-02-27 Thread Scott Marlowe
On Fri, Feb 27, 2009 at 3:10 AM, Grzegorz Jaśkiewicz  wrote:
> 2009/2/27 Scott Marlowe :
>> On Thu, Feb 26, 2009 at 3:02 AM, Grzegorz Jaśkiewicz  
>> wrote:
>>
>>> First of all, I wonder why the same query divided up in half - and
>>> using temporary table works as expected, and with everything together
>>
>> I'm betting it's your use of generate_series().  You can get some
>> weird side effects because it sometimes gets run multiple times not
>> just the once you expect.  I'm guessing that's what's biting you.
>
> in which case, wouldn't that be a postgresql's bug ?

Nope.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] funny view/temp table problem with query

2009-02-27 Thread Grzegorz Jaśkiewicz
2009/2/27 Scott Marlowe :
> On Thu, Feb 26, 2009 at 3:02 AM, Grzegorz Jaśkiewicz  
> wrote:
>
>> First of all, I wonder why the same query divided up in half - and
>> using temporary table works as expected, and with everything together
>
> I'm betting it's your use of generate_series().  You can get some
> weird side effects because it sometimes gets run multiple times not
> just the once you expect.  I'm guessing that's what's biting you.

in which case, wouldn't that be a postgresql's bug ?




-- 
GJ

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Question about no unchanging update rule + ALTER

2009-02-27 Thread Richard Huxton
Josh Trutwin wrote:
> I found the following on a blog post
> (http://www.depesz.com/index.php/2007/09/08/avoiding-empty-updates/)
> which had a rule to prevent empty updates:
> 
> CREATE RULE no_unchanging_updates AS
> ON UPDATE
> TO test_table
> WHERE ROW(OLD.*) IS NOT DISTINCT FROM ROW(NEW.*)
> DO INSTEAD NOTHING;
> 
> Works great, but problem comes when I alter the table and add a new
> column, it appears the rule doesn't allow an update after adding a
> new column via ALTER TABLE ADD COLUMN.
> 
> I created the rule above, then did:
> 
> ALTER TABLE test_table ADD COLUMN foo TEXT;
> 
> => UPDATE test_table SET foo = 'bar';
> UPDATE 0
> 
> When doing a \d on the table I notice the rule is expanded at the
> time of creation to include each column in an expression, but it is
> not updated from the ALTER TABLE command.
> 
> Do I have to drop and recreate this rule after every ALTER TABLE
> ADD/DELETE column? 

Quite possibly - I seem to remember that it *does* expand the * to an
explicit list of columns. That's what you want sometimes. If the whole
point of the view is to provide a stable interface to an application,
you don't want it changing when you change underlying tables.
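
A sketch of what that means in practice, reusing the rule from your mail: after
each column change you would re-issue something like

  ALTER TABLE test_table ADD COLUMN foo TEXT;
  DROP RULE no_unchanging_updates ON test_table;
  CREATE RULE no_unchanging_updates AS
    ON UPDATE TO test_table
    WHERE ROW(OLD.*) IS NOT DISTINCT FROM ROW(NEW.*)
    DO INSTEAD NOTHING;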

> Or would the following trigger (also found on
> blog post) be a better solution as my app is for a "plugin" builder
> where adding/deleting/changing fields is common:
> 
> CREATE OR REPLACE FUNCTION prevent_empty_updates() RETURNS trigger as
> $BODY$
> DECLARE
>   BEGIN
> IF ROW(OLD.*) IS DISTINCT FROM ROW(NEW.*) THEN
>   RETURN NEW;
> END IF;
> RETURN NULL;
>   END;
> $BODY$ language plpgsql;
> 
> CREATE TRIGGER prevent_empty_updates BEFORE UPDATE ON test FOR EACH
> ROW EXECUTE PROCEDURE prevent_empty_updates();
> 
> Actually after writing this, this TOO does not seem to work after an
> ADD COLUMN.  :/  Any suggestions?

Try disconnecting and reconnecting to the database - that should do it.
The function will be "compiled" the first time it is called in a session
, so the * is probably getting expanded then. There's been a lot of work
done to provide automatic re-planning in these sort of situations, but
maybe you're hitting a corner-case.

-- 
  Richard Huxton
  Archonet Ltd

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PostgreSQL to Oracle

2009-02-27 Thread Artacus

Hi All,

I want to migrate from PostgreSQL to Oracle and need any tool preferably 
open source. And I am specially concerned with stored procedures / 
functions.


Regards,
Abdul Rehman.



You were just converting from Oracle to Postgres two days ago, so it 
shouldn't take much to convert back.


These may help though.

http://wiki.postgresql.org/wiki/Oracle_to_Postgres_Conversion

http://www.postgresql.org/docs/8.3/interactive/plpgsql-porting.html

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general