Re: [GENERAL] does postgresql works on distributed systems?

2008-06-05 Thread Volkan YAZICI
On Wed, 4 Jun 2008, Gurjeet Singh [EMAIL PROTECTED] writes:
 If you search for "OpenSSI Postgres", you hit this link:

 http://wiki.openssi.org/go/PostgreSQL_on_OpenSSI_enabled_Knoppix

 I did this setup and ran a small test on it about a year ago. The
 performance was horrible. I cannot say for sure, but I think, as the
 OpenSSI FAQ mentions, it's because of the way Postgres works with
 shared memory.

Yes, I read that page and saw your name in the page's history. But
because of the lack of test results and further information, I couldn't
draw any conclusions from the claims written there. Would you mind
giving some more details about the test factors below:

- Which PostgreSQL version did you use? (Assuming you did the appropriate
  postgresql.conf configuration.) Operating system, file system, etc.?

- What were the system specifications of the machines in the cluster?
  (The network architecture, in particular, comes to mind.)

- What sort of tests did you run, and in which ones did you face serious
  bottlenecks?

- What was the reason for the bottlenecks that occurred? (Memory access over
  the network, which translates directly into raw network traffic?) If you did
  any monitoring, what other unhealthy statistics (suspicious changes) did
  you observe during the tests?

 I hope you find that article useful in starting your own
 experiment. Let's hope there's been some improvement since last year. Do
 let us all know the results.

BTW, can you comment on the activity of the OpenSSI project? A project
with a dead main page (see http://openssi.org) doesn't smell good to
me. Is there any active support on the mailing lists?


Regards.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Tripping up on my first attempt at building PG from source

2008-06-05 Thread Joshua D. Drake


On Wed, 2008-06-04 at 22:54 -0700, Richard Broersma wrote:
 I just bought a new Ubuntu laptop so that I could try out, and hopefully
 offer support for, a few of my favorite pgFoundry projects.
 
 Would anyone be able to give any direction on what I need to do to get
 past this error?

You don't have any build tools installed. Try:

apt-get install binutils gcc autoconf flex

Sincerely,

Joshua D. Drake





Re: [GENERAL] Tripping up on my first attempt at building PG from source

2008-06-05 Thread Tom Lane
Richard Broersma [EMAIL PROTECTED] writes:
 Would anyone be able to give any direction on what I need to do to get
 past this error?

 /usr/bin/ld: crt1.o: No such file: No such file or directory

Seems you've got an incomplete installation.  On my Fedora machine,
crt1.o is provided by the glibc-devel RPM ... dunno how Ubuntu
splits things up.

regards, tom lane



[GENERAL] Additional arguments to aggregate functions

2008-06-05 Thread Artacus
Is there a way to pass additional arguments when defining a custom
aggregate?


I wrote a MySQL-style group_concat aggregate. You get DISTINCT for
free. I want to be able to define whether the result is sorted and what
the separator is.
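
For what it's worth, aggregates in PostgreSQL (8.2 and later) can take more
than one argument, so the separator can be passed alongside the value. A
minimal, hedged sketch (the names group_concat2 and its state function are
invented here, not from the original post):

```sql
-- State function: receives the running state plus BOTH aggregate
-- arguments (value and separator).  NULL values are skipped.
CREATE OR REPLACE FUNCTION group_concat2_sfunc(state text, val text, sep text)
RETURNS text AS $$
    SELECT CASE
        WHEN val IS NULL   THEN state
        WHEN state IS NULL THEN val
        ELSE state || sep || val
    END;
$$ LANGUAGE sql IMMUTABLE;

-- Two-argument aggregate: group_concat2(value, separator).
CREATE AGGREGATE group_concat2(text, text) (
    SFUNC = group_concat2_sfunc,
    STYPE = text
);

-- Usage sketch:
--   SELECT group_concat2(file_name, '; ') FROM tbl_file;
```

Note that sorted output would still have to come from an ordered subquery
feeding the aggregate, since aggregate input order is otherwise unspecified.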


Art



Re: [GENERAL] Tripping up on my first attempt at building PG from source

2008-06-05 Thread Richard Broersma
On Wed, Jun 4, 2008 at 11:20 PM, Joshua D. Drake [EMAIL PROTECTED] wrote:
 You don't have any build tools installed. Try:

 apt-get install binutils gcc autoconf flex

Thanks gentlemen, I will give these ideas a try.


-- 
Regards,
Richard Broersma Jr.

Visit the Los Angeles PostgreSQL Users Group (LAPUG)
http://pugs.postgresql.org/lapug



Re: [GENERAL] Strange statistics

2008-06-05 Thread Henrik


On 3 Jun 2008, at 23:31, Joris Dobbelsteen wrote:


Henrik wrote:

Hi list,
I have a table with a lot of file names in it (approx. 3 million) in an
8.3.1 db.
This simple query shows that the statistics are way off, and I can't
get them right even when I raise the statistics target to 1000.

db=# alter table tbl_file alter file_name set statistics 1000;
ALTER TABLE
db=# analyze tbl_file;
ANALYZE
db=# explain analyze select * from tbl_file where lower(file_name) like lower('to%');

                               QUERY PLAN
 Bitmap Heap Scan on tbl_file  (cost=23.18..2325.13 rows=625 width=134) (actual time=7.938..82.386 rows=17553 loops=1)
   Filter: (lower((file_name)::text) ~~ 'to%'::text)
   ->  Bitmap Index Scan on tbl_file_idx  (cost=0.00..23.02 rows=625 width=0) (actual time=6.408..6.408 rows=17553 loops=1)
         Index Cond: ((lower((file_name)::text) ~>=~ 'to'::text) AND (lower((file_name)::text) ~<~ 'tp'::text))
 Total runtime: 86.230 ms
(5 rows)
How can the estimate be off by a factor of 28??


These are statistics, and they represent only an estimate! In this case,
the planner seems to be doing the right thing(tm) anyway.


Statistics is a frequently misunderstood subject and usually provides
excellent material for drawing plain wrong conclusions. There is a good
chance that, due to the physical layout of your data, the algorithms in
the statistics collector, the existence of uncertainty, and some more
unknown factors, your statistics will be biased. This is one of the
situations where you noticed it.


Running SELECT * FROM pg_stats; will give you the statistics the
planner uses and can provide some hints as to why the planner has
chosen these estimates.
The statistics will probably vary between ANALYZE runs. It's also
possible to try CLUSTER and friends. Try different queries and
look at the deviations.
Thanks Joris for your input. You are the second person to suggest
CLUSTER to me. Maybe I should take a look. The problem is that our
select queries are kind of random. Would CLUSTER help then as well?
Should I just CLUSTER on the most-used index, or?


Thanks
/henke





All in all, you should really start worrying when the planner starts
planning inefficient queries. Since it's a filename, it might be
highly irregular (random), and a low statistics target might be good
enough anyway.


Unfortunately I'm not a statistics expert...

- Joris





Re: [GENERAL] Database growing. Need autovacuum help.

2008-06-05 Thread Henrik


On 3 Jun 2008, at 16:06, Scott Marlowe wrote:


On Tue, Jun 3, 2008 at 7:41 AM, Henrik [EMAIL PROTECTED] wrote:


To be able to handle versions we always insert new folders even though
nothing has changed; it seemed like the best way to do it.

E.g

First run:
  tbl_file 500k new files.
  tbl_folder 50k new rows.
  tbl_file_folder 550k new rows.

Second run with no new files.
  tbl_file unchanged.
  tbl_folder 50k new rows
  tbl_file_folder 550k new rows.


One useful trick is to include a WHERE clause that prevents the extra
updates.


I.e. update table set field=123 where field <> 123;



Mmm, I'll keep that in mind. The problem is that on these tables I only
do INSERTs and DELETEs. :)


Maybe I can redesign it, but I can't find a good way... yet...

Thanks!



[GENERAL] how to sync the system table with pg_dump

2008-06-05 Thread Gary Fu

Hi,

I tried to use pg_dump to restore (sync) a database, but I noticed that
the system table pg_namespace was not synced.

I tried the following pg_dump command to restore just that table, without
success either.

Does pg_dump support the system tables, or is there something I missed?
Is there another way to sync the system tables?

Thanks,
Gary

% pg_dump -t pg_namespace -h nppdist nppsd3 | psql -h nppsds1 -d nppsd3
SET
SET
SET
SET
SET
SET
SET
SET
ERROR:  relation "pg_namespace" already exists
ALTER TABLE
ERROR:  duplicate key violates unique constraint "pg_namespace_nspname_index"
CONTEXT:  COPY pg_namespace, line 1: "pg_toast  10  \N"
ERROR:  permission denied: "pg_namespace" is a system catalog
ERROR:  permission denied: "pg_namespace" is a system catalog
REVOKE
REVOKE
GRANT



[GENERAL] Full vacuum really slowing query down

2008-06-05 Thread Jason Long
I have a query that takes 2.5 sec if I run it from a freshly restored
dump.  If I run a full vacuum on the database, it then takes 30 seconds.


Would someone please comment on why I would see over a 10x slowdown
from only vacuuming the DB?


I am using 8.3.1



[GENERAL] full vacuum really slows down query

2008-06-05 Thread Jason Long
I have a query that takes 2 sec if I run it from a freshly restored
dump.  If I run a full vacuum on the database, it then takes 30 seconds.


Would someone please comment on why I would see a 15x slowdown from
only vacuuming the DB?


I am using 8.3.1



Re: [GENERAL] Tripping up on my first attempt at building PG from source

2008-06-05 Thread Dimitri Fontaine
On Thursday, 05 June 2008, Joshua D. Drake wrote:
 You don't have any build tools installed. Try:
 apt-get install binutils gcc autoconf flex

Or even better:
  apt-get build-dep postgresql-8.3

-- 
dim




Re: [GENERAL] Tripping up on my first attempt at building PG from source

2008-06-05 Thread Craig Ringer

Tom Lane wrote:

Richard Broersma [EMAIL PROTECTED] writes:

Would anyone be able to give any direction on what I need to do to get
past this error?



/usr/bin/ld: crt1.o: No such file: No such file or directory


Seems you've got an incomplete installation.  On my Fedora machine,
crt1.o is provided by the glibc-devel RPM ... dunno how Ubuntu
splits things up.


$ dpkg -S /usr/lib/crt1.o
libc6-dev: /usr/lib/crt1.o

... so you're missing the libc6-dev package. However, you'll find
there's a lot else missing too, since you appear to have installed just
gcc with no supporting libraries etc.


You need to install at least the `build-essential' package, which will 
pull in all the core headers and libraries. You may also need other 
library -dev packages. As Ubuntu packages PostgreSQL, the easiest way to 
get everything you need is to tell it to install all the build 
dependencies of the postgresql source package:


sudo apt-get build-dep postgresql

When you configure your custom PostgreSQL build, make sure to use a
--prefix that points somewhere sensible. DO NOT configure it with
--prefix=/usr. A good option is something like
--prefix=/opt/postgresql83 so the whole app is neatly self-contained.
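
For example, a source build along those lines might look like this (the
prefix is just an illustration):

```shell
# From the unpacked PostgreSQL source tree: keep the whole build
# under one self-contained prefix instead of /usr.
./configure --prefix=/opt/postgresql83
make
sudo make install
```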


An alternative might be to `apt-get source postgresql' then modify the 
package and rebuild it by running 'debian/rules binary' from the package 
source dir. However, depending on the changes you're going to make this 
might be more hassle than it's worth.


--
Craig Ringer



Re: [GENERAL] How can I compare sql create script with running database?

2008-06-05 Thread Raymond O'Donnell

On 05/06/2008 10:52, Bjørn T Johansen wrote:

If I already have a running database, how can I compare the tables in
the database with the sql script to discover the differences?


You can use pg_dump with the -s option to dump the schema of the 
database, and run it through the diff tool of your choice.


Ray.

--
Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland
[EMAIL PROTECTED]
Galway Cathedral Recitals: http://www.galwaycathedral.org/recitals
--



[GENERAL] How can I compare sql create script with running database?

2008-06-05 Thread Bjørn T Johansen
If I already have a running database, how can I compare the tables in the 
database with the sql script to discover the differences?


Regards,

BTJ

-- 
---
Bjørn T Johansen

[EMAIL PROTECTED]
---
Someone wrote:
I understand that if you play a Windows CD backwards you hear strange Satanic 
messages
To which someone replied:
It's even worse than that; play it forwards and it installs Windows
---



Re: [GENERAL] How can I compare sql create script with running database?

2008-06-05 Thread Bjørn T Johansen
On Thu, 05 Jun 2008 11:06:36 +0100
Raymond O'Donnell [EMAIL PROTECTED] wrote:

 On 05/06/2008 10:52, Bjørn T Johansen wrote:
  If I already have a running database, how can I compare the tables in
  the database with the sql script to discover the differences?
 
 You can use pg_dump with the -s option to dump the schema of the 
 database, and run it through the diff tool of your choice.
 
 Ray.
 

Well, that's not really an option, because I don't think the dump will be
an exact match to the sql script.

BTJ



Re: [GENERAL] how to sync the system table with pg_dump

2008-06-05 Thread Albe Laurenz
Gary Fu wrote:
 I tried to use pg_dump to restore (sync) a database, but I noticed that
 the system table pg_namespace was not synced.

If you restore a database, entries in pg_namespace will be created if
the dump contains any CREATE SCHEMA statements, i.e. if there are
schemas in your original database.

Check if the dump was created and restored by a database user with
the appropriate permissions (a superuser ideally), and look out for
error messages.

Do not try to manually change pg_namespace. Just don't.

Yours,
Laurenz Albe



Re: [GENERAL] Tripping up on my first attempt at building PG from source

2008-06-05 Thread Richard Broersma
On Thu, Jun 5, 2008 at 2:15 AM, Craig Ringer
[EMAIL PROTECTED] wrote:
 sudo apt-get build-dep postgresql

Thanks, this works perfectly now!

-- 
Regards,
Richard Broersma Jr.

Visit the Los Angeles PostgreSQL Users Group (LAPUG)
http://pugs.postgresql.org/lapug



Re: [GENERAL] functions, transactions, key violations

2008-06-05 Thread Ioannis Tambouras
 
 Apart from concurrency issues, it is possible that you
have sequence generation problems. Depending on how
you inserted the original rows into the 'purchases' table, it is possible
that the nextval counter has not kept up and is lagging behind.

 You need to ensure that 'purchases_purchase_id_seq' is pointing
to the correct next value! That is, if its current nextval
is 100 but you already have 110 rows in the table (without gaps),
it is no wonder you receive primary key violations for the
next 10 inserts, though things will work fine afterwards. (Assuming, of
course, you are the only one receiving values from the sequence.)
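
If that turns out to be the case, one common fix is to point the sequence
at the table's current maximum with setval() (a hedged sketch; the column
name purchase_id is assumed from the sequence name):

```sql
-- After this, the next nextval() call returns max(purchase_id) + 1.
SELECT setval('purchases_purchase_id_seq',
              (SELECT max(purchase_id) FROM purchases));
```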


 Thanks
 Ioannis Tambouras



Re: [GENERAL] How can I compare sql create script with running database?

2008-06-05 Thread Tino Wildenhain

Hi,

Bjørn T Johansen wrote:

On Thu, 05 Jun 2008 11:06:36 +0100
Raymond O'Donnell [EMAIL PROTECTED] wrote:


On 05/06/2008 10:52, Bjørn T Johansen wrote:

If I already have a running database, how can I compare the tables in
the database with the sql script to discover the differences?
You can use pg_dump with the -s option to dump the schema of the 
database, and run it through the diff tool of your choice.


Ray.



Well, not really an option because I don't think the dump will be an exact 
match to the sql script



Unless you create all your objects with your script in a different
database on the server, then use pg_dump -s on both and compare the
results :-)
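
Concretely, that could look something like this (database and file names
are examples only):

```shell
# Load the create script into a scratch database, dump both schemas
# the same way, and diff the results.
createdb schema_check
psql -d schema_check -f create_script.sql
pg_dump -s schema_check > from_script.sql
pg_dump -s live_db > from_live.sql
diff -u from_script.sql from_live.sql
```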

Cheers
Tino





Re: [GENERAL] does postgresql works on distributed systems?

2008-06-05 Thread James B. Byrne
In-Reply-To: [EMAIL PROTECTED]

On: Thu, 05 Jun 2008 09:03:14 +0300, Volkan YAZICI [EMAIL PROTECTED] wrote:

 BTW, can you comment on the activity of the OpenSSI project? A project
 with a dead main page (see http://openssi.org) doesn't smell good to
 me. Is there any active support on the mailing lists?

The link http://openssi.org redirects to
http://openssi.org/cgi-bin/view?page=openssi.html and the most recent
(pre-)release is discussed here:
http://sourceforge.net/forum/forum.php?forum_id=768341





Re: [GENERAL] does postgresql works on distributed systems?

2008-06-05 Thread Volkan YAZICI
On Thu, 5 Jun 2008, James B. Byrne [EMAIL PROTECTED] writes:
 The link http://openssi.org redirects to
 http://openssi.org/cgi-bin/view?page=openssi.html and the most recent
 (pre-)release is discussed here:
 http://sourceforge.net/forum/forum.php?forum_id=768341

Hrm... It didn't 3-4 days ago. Anyway, thanks for the warning.


Regards.



[GENERAL] pgAdmin complains about vacuuming required after fresh 8.1 install

2008-06-05 Thread Collin Peters
Hi all - I am wondering if I can get a consensus on what to do about
this minor issue.  I have looked through the archives and can't find a
definitive answer.

So I have a new 8.1 install on Linux (have not yet been able to
upgrade to 8.3).  The documentation says that autovacuum is enabled by
default in 8.1, and sure enough I see messages in the logs that
autovacuum is processing database "postgres", etc.

In my postgresql.conf I see 'autovacuum = on', 'stats_start_collector
= on', and 'stats_row_level = on'

However, despite all this, pgAdmin still gives me messages on certain
tables recommending that a vacuum be run.  I see some messages saying
that you need to run a VACUUM ANALYZE every week or night to 'make
sure things are up to date', but then in the commits I see a comment:
"Update documentation to mention that autovacuum also does analyze so
we don't need to recommend nightly analyzes anymore unless autovacuum
is off."

So I am looking for the definitive answer on this.  Is pgAdmin wrong,
and should I ignore the messages?  Is autovacuum not fully running?
Do they just have different threshold values, and pgAdmin is a bit
pickier?

Regards,
Collin



Re: [GENERAL] How can I compare sql create script with running database?

2008-06-05 Thread Bjørn T Johansen
On Thu, 05 Jun 2008 15:28:55 +0200
Tino Wildenhain [EMAIL PROTECTED] wrote:

 Hi,
 
 Bjørn T Johansen wrote:
  On Thu, 05 Jun 2008 11:06:36 +0100
  Raymond O'Donnell [EMAIL PROTECTED] wrote:
  
  On 05/06/2008 10:52, Bjørn T Johansen wrote:
  If I already have a running database, how can I compare the tables in
  the database with the sql script to discover the differences?
  You can use pg_dump with the -s option to dump the schema of the 
  database, and run it through the diff tool of your choice.
 
  Ray.
 
  
  Well, not really an option because I don't think the dump will be an exact 
  match to the sql script
  
 
 Unless you create all your objects with your script into a different 
 Database on the server then use pg_dump -s on both and compare the
 result :-)
 
 Cheers
 Tino
 

That I could do :)

BTJ



[GENERAL] Temporary Tables and Web Application

2008-06-05 Thread Tim Tassonis

Hi all

I assume this is not an uncommon problem, but so far, I haven't been 
able to find a good answer to it.


I've got a table that holds log entries and fills up very fast during 
the day, it gets approx. 25 million rows per day. I'm now building a web 
application using apache/mod_php where you can query the database and 
then should be able to page through the results.


My idea was that whenever a user constructs a query, I create a 
temporary table holding the results and then page through this table, 
which should work very well in principle.


But from what I've been able to find out, temporary tables live only in
the Postgres session they were created in and are destroyed upon
session destruction.


Now, with apache/php in an MPM environment, I have no guarantee that a
user will get the same PostgreSQL session for a subsequent request, and
thus he will not see the temporary table.


Is there a way to create temporary tables in another way, so they are
visible between sessions, or do I need to create real tables for my
purpose? And is the performance penalty big for real tables, as they
are written to disk and read from disk?



Tim




Re: [GENERAL] Temporary Tables and Web Application

2008-06-05 Thread Bill Moran
In response to Tim Tassonis [EMAIL PROTECTED]:

 Hi all
 
 I assume this is not an uncommon problem, but so far, I haven't been 
 able to find a good answer to it.
 
 I've got a table that holds log entries and fills up very fast during 
 the day, it gets approx. 25 million rows per day. I'm now building a web 
 application using apache/mod_php where you can query the database and 
 then should be able to page through the results.
 
 My idea was that whenever a user constructs a query, I create a 
 temporary table holding the results and then page through this table, 
 which should work very well in principle.
 
 But from what I've been able to find out, temporary tables live only in
 the Postgres session they were created in and are destroyed upon
 session destruction.
 
 Now, with apache/php in an MPM environment, I have no guarantee that a
 user will get the same PostgreSQL session for a subsequent request, and
 thus he will not see the temporary table.
 
 Is there a way to create temporary tables in another way, so they are
 visible between sessions, or do I need to create real tables for my
 purpose? And is the performance penalty big for real tables, as they
 are written to disk and read from disk?

Build a framework that creates the tables in a special schema and then
accesses them through any session.  Use some method to generate unique
table names and store the names in the HTTP session.  Create some sort
of garbage collection routine that removes tables when they're no longer
needed.

The details of exactly how you pull this off are going to depend heavily
on the rest of your application architecture.
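
A rough sketch of such a framework's SQL side (every name below is invented
for illustration; the unique token would come from the HTTP session):

```sql
-- One schema holds all per-query result tables.
CREATE SCHEMA resultcache;

-- On each new query, materialize the results under a unique name.
CREATE TABLE resultcache.result_ab12cd AS
    SELECT * FROM log_entries
    WHERE logged_at >= '2008-06-05';

-- Any later request (from any backend) can page through it.
SELECT * FROM resultcache.result_ab12cd
ORDER BY logged_at
LIMIT 50 OFFSET 100;

-- Garbage collection: the application drops tables it no longer needs.
DROP TABLE resultcache.result_ab12cd;
```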

-- 
Bill Moran
Collaborative Fusion Inc.
http://people.collaborativefusion.com/~wmoran/

[EMAIL PROTECTED]
Phone: 412-422-3463x4023



Re: [GENERAL] Temporary Tables and Web Application

2008-06-05 Thread Tino Wildenhain

Hi,

Tim Tassonis wrote:

Hi all

I assume this is not an uncommon problem, but so far, I haven't been 
able to find a good answer to it.


I've got a table that holds log entries and fills up very fast during 
the day, it gets approx. 25 million rows per day. I'm now building a web 
application using apache/mod_php where you can query the database and 
then should be able to page through the results.


you should be aware that PHP isn't the only scripting language with an
apache module, and not necessarily the best choice among them.

My idea was that whenever a user constructs a query, I create a 
temporary table holding the results and then page through this table, 
which should work very well in principle.


That means you are more or less constructing materialized views :-)
But if you hold the session anyway, then see below.


But from what I've been able to find out, temporary tables live only in
the Postgres session they were created in and are destroyed upon
session destruction.


Now, with apache/php in an MPM environment, I have no guarantee that a
user will get the same PostgreSQL session for a subsequent request, and
thus he will not see the temporary table.


That's the problem, and if you have failover/load-balancing situations,
even more so.


Is there a way to create temporary tables in another way, so they are
visible between sessions, or do I need to create real tables for my
purpose? And is the performance penalty big for real tables, as they
are written to disk and read from disk?


To start with, you should avoid reconnecting to the database for every
request, not only because of losing the session context but also
to avoid connection overhead.

Usually this is done with connection pooling. You can then try to track
the user:connection relationship as much as possible through the
connection pool.

If you have that, there is actually no need for the temp tables. Instead
you can just use a regular cursor and scroll it as necessary.

Almost all frameworks should give you reasonable pool implementations,
some additional memory caching on top of them, and there are also other
tools that can help you with this, for example pgpool and pgbouncer.
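
If a request can be pinned to its pooled connection, the cursor approach
might look like this (table and column names here are assumptions):

```sql
BEGIN;
-- SCROLL lets the application page forwards and backwards.
DECLARE page_cur SCROLL CURSOR FOR
    SELECT * FROM log_entries ORDER BY logged_at;

FETCH 50 FROM page_cur;           -- first page
MOVE ABSOLUTE 100 IN page_cur;    -- jump ahead
FETCH 50 FROM page_cur;           -- rows 101-150

CLOSE page_cur;
COMMIT;
```

Declaring the cursor WITH HOLD would even let it survive the COMMIT, at the
cost of materializing the result set.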

Regards
Tino




Re: [GENERAL] Temporary Tables and Web Application

2008-06-05 Thread Bill Moran
In response to Tim Tassonis [EMAIL PROTECTED]:
 
 Bill Moran wrote:
  In response to Tim Tassonis [EMAIL PROTECTED]:
  
 
  Now, with apache/php in an MPM environment, I have no guarantee that a
  user will get the same PostgreSQL session for a subsequent request, and
  thus he will not see the temporary table.
 
  Is there a way to create temporary tables in another way, so they are
  visible between sessions, or do I need to create real tables for my
  purpose? And is the performance penalty big for real tables, as they
  are written to disk and read from disk?
  
  Build a framework that creates the tables in a special schema, and then
  can access them through any session.  Use some method to generate unique
  table names and store the names in the HTTP session.  Create some sort
  of garbage collection routines that removes tables when they're no longer
  needed.
  
  The details of exactly how you pull this off are going to depend heavily
  on the rest of your application architecture.
  
 
 What you describe is what I referred to as creating real tables. I've
 done that and it works, but I wondered if there's something similar
 built into Postgres, apart from classical temporary tables.

Not that I'm aware of.

If you keep the mailing list in the CC, others can answer as well.

-- 
Bill Moran
Collaborative Fusion Inc.
http://people.collaborativefusion.com/~wmoran/

[EMAIL PROTECTED]
Phone: 412-422-3463x4023



[GENERAL] Benchmarking best practices?

2008-06-05 Thread Francisco Reyes
At work I am creating a standard postgresql benchmark suite based on the
queries and operations that we commonly do.

A couple of questions
+ Should I shutdown/restart the DB between runs?

+ How much bigger than memory should my tables be to have a good benchmark?
One issue to keep in mind is that the benchmark DB will be only a subset of
the real DBs to make it easier to copy to multiple machines. Once we show
improvements in the benchmark subset after hardware/configuration/DB
redesign then we would validate against the full sized DBs in the different
machines.

The goals are to benchmark different settings and machines to work on
improving performance by changing the DB structures (ie index changes, DB
re-design) and by buying/upgrading hardware.




[GENERAL] Bizgrez dead?

2008-06-05 Thread Francisco Reyes
I don't see any activity in the project since 2006. Is the project dead?




[GENERAL] conditionally executing migration code

2008-06-05 Thread Michael P. Soulier
I'm using some simple migration code to execute individual fragments of 
SQL code based on the version of the schema. Is there a way to perform 
an ALTER TABLE conditionally?


Example:

I want to add column foo to table bar, but only if column foo does not 
exist already.


I'm trying to avoid such situations, but it's not always easy.
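
Up to 8.3 there is no DO statement, so the usual workaround is a throwaway
PL/pgSQL function that checks the catalogs first (a hedged sketch using the
foo/bar names from the question; the column type is invented):

```sql
CREATE OR REPLACE FUNCTION add_foo_if_missing() RETURNS void AS $$
BEGIN
    -- Only add the column when it is not already present.
    IF NOT EXISTS (
        SELECT 1
        FROM information_schema.columns
        WHERE table_name = 'bar' AND column_name = 'foo'
    ) THEN
        EXECUTE 'ALTER TABLE bar ADD COLUMN foo integer';
    END IF;
END;
$$ LANGUAGE plpgsql;

SELECT add_foo_if_missing();
DROP FUNCTION add_foo_if_missing();
```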

Thanks,
Mike
--
Michael P. Soulier [EMAIL PROTECTED], 613-592-2122 x2522
Any intelligent fool can make things bigger and more complex... It
takes a touch of genius - and a lot of courage to move in the opposite
direction. --Albert Einstein



Re: [GENERAL] Temporary Tables and Web Application

2008-06-05 Thread Tim Tassonis

Tino Wildenhain wrote:

Hi,

Tim Tassonis wrote:

Hi all

I assume this is not an uncommon problem, but so far, I haven't been 
able to find a good answer to it.


I've got a table that holds log entries and fills up very fast during 
the day, it gets approx. 25 million rows per day. I'm now building a 
web application using apache/mod_php where you can query the database 
and then should be able to page through the results.


you should be aware that PHP isn't the only scripting language with an
apache module and not necessarily the best choice among them.


There's no need to become insulting. I am aware of the truly astonishing
fact that there are other scripting languages apart from PHP, and that
not everybody loves PHP.


Apart from the sad fact that I quite like PHP, the problem is not the
choice of scripting language but the nature of apache MPM processing,
which ties the postgres connection to a particular apache process.




My idea was that whenever a user constructs a query, I create a 
temporary table holding the results and then page through this table, 
which should work very well in principle.


That means you are more or less constructing materialized views :-)


No, I want the data to remain fixed after the query is executed.


But if you hold the session anyway, then see below.


I don't hold the session, see above.




But from what I've been able to find out, temporary tables live only 
in the Postgres session they were created in and are destroyed upon 
session destruction.


Now, with apache/php in a mpm environment, I have no guarantee that a 
user will get the same postgresql session for a subsequent request, 
thus he will not see the temporary table.


That's the problem, and if you have failover/load-balancing situations, 
even more so.


Is there a way to create temporary tables in another way, so they are 
visible between sessions, or do I need to create real tables for my 
purpose? And is the performance penalty big for real tables, since they 
have to be written to and read from disk?


To start with, you should avoid reconnecting to the database for every
request, not only because of losing the session context but also
to avoid connection overhead.


I don't reconnect after every request, but the MPM doesn't guarantee 
that I get the same session/process. I might, but that's hardly what I'd 
call a stable application, even as a PHP programmer.




Usually this is done by connection pooling. You can then try to track
the user:connection relationship as much as possible through the connection pool.


As far as I can see, there is no implementation of a multi-client 
process connection pool in mod_php.
I admit that my interprocess communication know-how is not very deep, 
but that would mean the client's Postgres TCP/IP connection would have 
to be held somewhere in shared memory between the different Apache 
processes. From reading the documentation, PHP does not do that.




If you have that, there is actually no need for the temp tables. Instead
you can just use a regular cursor and scroll it as necessary.


My problem is that I don't have that.


Almost all frameworks should give you reasonable pool implementations,
some additional memory caching on top of it and there are also a lot
of other methods to help you with that, for example pgpool and
pgbouncer.


I'm afraid you somehow missed the point, but thanks for your response.

Bye
Tim




[GENERAL] PL/pgSQL graph enumeration function hangs

2008-06-05 Thread Charles F. Munat

I have a table of organizations that has a many-to-many relationship
with itself via another table called relationships. The relationships
table has a serial id primary key and parent_id and child_id integer
fields. The organizations table has a couple thousand records and the
maximum depth is around 7 or 8 levels. The graph is directed and is (or
will be) reconvergent.

Using pseudocode from Celko's SQL for Smarties book, I wrote the
following function that builds a path enumeration table. I hope to
trigger this function on the rare occasions that the organizations table
is updated. But when I run this function, it hangs.

Can anyone spot the problem? If not, is there a better solution?

I am returning a trigger. There are no arguments. I'm using VOLATILE,
CALLED ON NULL INPUT, and SECURITY INVOKER.

-

DECLARE
  oldsize integer NOT NULL;
  newsize integer NOT NULL;

BEGIN

  -- recreate the empty path_enum table
  DROP TABLE IF EXISTS organizations_path_enum;
  CREATE TABLE organizations_path_enum (
  parent_id integer NOT NULL,
  child_id integer NOT NULL,
  depth integer NOT NULL,
  CONSTRAINT depth_not_negative CHECK( depth >= 0 )
  );
  CREATE INDEX ope_parent_idx ON organizations_path_enum(parent_id);
  CREATE INDEX ope_child_idx ON organizations_path_enum(child_id);
  CREATE INDEX ope_parent_child_idx ON
organizations_path_enum(parent_id, child_id);
  CREATE INDEX ope_child_parent_idx ON
organizations_path_enum(child_id, parent_id);
  CREATE UNIQUE INDEX ope_uniq_row_idx ON organizations_path_enum
(parent_id, child_id, depth);

  -- load path of node to itself
  INSERT INTO organizations_path_enum
  SELECT DISTINCT child_id, child_id, 0 FROM relationships;

  -- load paths of length = 1 into table
  INSERT INTO organizations_path_enum
  SELECT DISTINCT parent_id, child_id, 1 FROM relationships;

  -- insert rows only while table grows
  oldsize := 0;
  SELECT COUNT(*) INTO newsize FROM organizations_path_enum;

  WHILE oldsize < newsize LOOP
  INSERT INTO organizations_path_enum
  SELECT o1.parent_id, r1.child_id, (o1.depth + 1)
  FROM organizations_path_enum o1, relationships r1
  -- advance existing paths by one level
  WHERE EXISTS (SELECT * FROM organizations_path_enum AS o2
WHERE r1.parent_id = o2.child_id)
  -- insert only new rows into the table
  AND NOT EXISTS (SELECT * FROM organizations_path_enum AS o3
WHERE o1.parent_id = o3.parent_id AND r1.child_id = o3.child_id);

  oldsize := newsize;
  SELECT COUNT(*) INTO newsize FROM organizations_path_enum;
  END LOOP;
END;

-

Thanks!

Charles Munat
Seattle




Re: [GENERAL] Bizgrez dead?

2008-06-05 Thread Joshua D. Drake


On Thu, 2008-06-05 at 14:10 -0400, Francisco Reyes wrote:
 Don't see any activity in the project since 2006. Is that project dead?
 

I think Greenplum would be a better place to ask but from what I can
tell, it's dead.

Joshua D. Drake

 




Re: [GENERAL] Temporary Tables and Web Application

2008-06-05 Thread Marco Bizzarri
On Thu, Jun 5, 2008 at 5:36 PM, Tim Tassonis [EMAIL PROTECTED] wrote:

 Is there a way to create temporary tables in another way, so they are
 visible between sessions, or do I need to create real tables for my purpose?
 And is the perfomance penalty big for real tables, as they have been written
 to disk/read from disk?

You could create a real table on disk, inserting just the primary keys
of the table; then, you could join on the main table, to get the real
results.
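
Marco's keys-only approach might be sketched roughly like this; the
log_entries table, its id column, and the result_id values are made-up
names for illustration:

```sql
-- One row per matching log entry; result_id identifies the user's query
-- and row_num preserves the result ordering for paging.
CREATE TABLE query_results (
    result_id integer NOT NULL,
    row_num   serial  NOT NULL,
    log_id    integer NOT NULL
);

-- Materialize just the matching primary keys once:
INSERT INTO query_results (result_id, log_id)
SELECT 42, id FROM log_entries WHERE level = 'ERROR';

-- Then page through by joining back to the main table:
SELECT l.*
  FROM query_results r
  JOIN log_entries l ON l.id = r.log_id
 WHERE r.result_id = 42
 ORDER BY r.row_num
 LIMIT 50 OFFSET 100;
```

Since only the keys are stored, the on-disk table stays small even when
the matched rows are wide.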

Regards
Marco

-- 
Marco Bizzarri
http://iliveinpisa.blogspot.com/



Re: [GENERAL] conditionally executing migration code

2008-06-05 Thread Scott Marlowe
Can't you just try to add the column and catch the error?  If you're
in a transaction, use a user-defined function to run it and catch the
exception in PL/pgSQL.
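
A minimal sketch of that, assuming PostgreSQL 8.x (which lacks
ADD COLUMN IF NOT EXISTS) and using Mike's foo/bar names; the wrapper
function name is invented:

```sql
CREATE OR REPLACE FUNCTION add_foo_if_missing() RETURNS void AS $$
BEGIN
    -- Raises an error if bar.foo is already there ...
    ALTER TABLE bar ADD COLUMN foo integer;
EXCEPTION
    -- ... which we swallow, making the migration step idempotent.
    WHEN duplicate_column THEN
        RAISE NOTICE 'column foo already exists, skipping';
END;
$$ LANGUAGE plpgsql;

SELECT add_foo_if_missing();
```

The function could just as well consult information_schema.columns first
instead of trapping the exception.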

On Thu, Jun 5, 2008 at 12:15 PM, Michael P. Soulier
[EMAIL PROTECTED] wrote:
 I'm using some simple migration code to execute individual fragments of SQL
 code based on the version of the schema. Is there a way to perform an ALTER
 TABLE conditionally?

 Example:

 I want to add column foo to table bar, but only if column foo does not exist
 already.

 I'm trying to avoid such situations, but it's not always easy.

 Thanks,
 Mike
 --
 Michael P. Soulier [EMAIL PROTECTED], 613-592-2122 x2522
 Any intelligent fool can make things bigger and more complex... It
 takes a touch of genius - and a lot of courage to move in the opposite
 direction. --Albert Einstein





Re: [GENERAL] Annoying messages when copy sql code to psql terminal

2008-06-05 Thread Alvaro Herrera
Tom Lane escribió:
 A B [EMAIL PROTECTED] writes:
  Whenever I use copy-paste to run code in a terminal window that is
  running psql, and the code contains a row like
  [...]

 Either avoid copying/pasting tabs, or turn off readline
 (-n option to psql, I think, but check the manual).
 
 There's probably a way to turn off tab-completion without
 disabling readline altogether, but I don't know how offhand.

This can be done by adding 

$if psql
   set disable-completion on
$endif

to .inputrc.

-- 
Alvaro Herrerahttp://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.



Re: [GENERAL] full vacuum really slows down query

2008-06-05 Thread Scott Marlowe
Have you run ANALYZE on the tables?  Bumped up the default statistics target and re-run ANALYZE?

Best way to send query plans is as attachments btw.



Re: [GENERAL] full vacuum really slows down query

2008-06-05 Thread Scott Marlowe
Oh, another point of attack.  Always test your queries under just
\timing.  You can wrap them up like this:

\timing
select count(*) from (subselect goes here);

I've been on a few machines where the cost of explain analyze itself
threw the timing way off.



[GENERAL] pgsql8.3.2 tentative release date

2008-06-05 Thread Vlad Kosilov

is there a tentative release date (week ... month) for postgres-8.3.2 ?
Thanks!
Vlad



Re: [GENERAL] full vacuum really slows down query

2008-06-05 Thread Jason Long

Thanks for the advice.  I will keep playing with it.  Can someone here
comment on EnterpriseDB or another company's paid support?  I may
consider this to quickly improve my performance.

Scott Marlowe wrote:

Have you run analyze on the tables? bumped up default stats and re-run analyze?

Best way to send query plans is as attachments btw.
  





[GENERAL] postgres connection problem via python pg DBI

2008-06-05 Thread Dan Joo
Hi all,

 

I have a problem connecting to postgres via the python pg module ONLY
from the cgi-scripts.

 

The command is:

 

db=pg.connect('aqdev','localhost',5432,None,None,'postgres',None)

 

From the commandline the connection works great, but from a cgi-script
it barfs with the following message:

 

InternalError: could not create socket: Permission denied 

 

Does anyone have any idea how I can get around this issue?  

 

Thanks a bunch!



Re: [GENERAL] postgres connection problem via python pg DBI

2008-06-05 Thread Dan Joo
Just solved it.  

 

For others, here is the solution.  

 

setsebool -P httpd_can_network_connect_db 1

 

From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Dan Joo
Sent: Thursday, June 05, 2008 4:18 PM
To: pgsql-general@postgresql.org
Subject: [GENERAL] postgres connection problem via python pg DBI

 

Hi all,

 

I have a problem connecting to postgres via the python pg module ONLY
from the cgi-scripts.

 

The command is:

 

db=pg.connect('aqdev','localhost',5432,None,None,'postgres',None)

 

From the commandline the connection works great, but from a cgi-script
it barfs with the following message:

 

InternalError: could not create socket: Permission denied 

 

Does anyone have any idea how I can get around this issue?  

 

Thanks a bunch!



Re: [GENERAL] postgres connection problem via python pg DBI

2008-06-05 Thread Colin Wetherbee

Dan Joo wrote:

db=pg.connect('aqdev','localhost',5432,None,None,'postgres',None)

From the commandline the connection works great, but from a
cgi-script it barfs with the following message:

*InternalError*: could not create socket: Permission denied


My (obvious, granted) guess is that you're running it from the command 
line as your own user, but the web server is running under another user 
who doesn't have the proper permissions (or ident response) to access 
the database.


Colin



[GENERAL] Application EventLog: could not write to log file: Bad file descriptor

2008-06-05 Thread Ati Rosselet
Using PostgreSQL 8.3 on Windows 2003 Server, I keep seeing this message in
my system log.  Checking the times, it seems to coincide with a log
rollover each time, almost as though the DB were trying to log something at
precisely the same time as it is closing the old file and before opening
the new one, so the write fails.  Does this make sense?  Has anyone else
seen this?  Solutions?  Ideas?  I reduced logging and disabled the
debugging plugin (I still don't know how that got enabled; I must have
missed something during the install).  Hope that helps.

Any ideas what is causing this error to be logged?
Cheers
Ati


Re: [GENERAL] Annoying messages when copy sql code to psql terminal

2008-06-05 Thread Merlin Moncure
On Tue, May 27, 2008 at 9:24 AM, A B [EMAIL PROTECTED] wrote:
 Whenever I use copy-paste to run code in a terminal window that is
 running psql, and the code contains a row like

 IF FOUND THEN

 then I get the words

 ABORTCHECKPOINT   COMMIT   DECLARE  EXECUTE

[...]

As others have noted, you have tabs in your sql source.  I'd advise if
possible, not to use the tab character in sql, for this and other
reasons.

merlin



[GENERAL] Extracting data from deprecated MONEY fields

2008-06-05 Thread Ken Winter
I understand from
http://www.postgresql.org/docs/8.0/static/datatype-money.html that the
money data type is deprecated.  

 

So I want to convert the data from my existing money columns into new
un-deprecated columns, e.g. with type decimal(10,2).  But every SQL
command I try tells me I can't cast or convert money data into any other
type I have tried, including decimal, numeric, varchar, and text.  

 

Is there any way to do this?

 

~ TIA 

~ Ken



Re: [GENERAL] Extracting data from deprecated MONEY fields

2008-06-05 Thread Joshua D. Drake

Ken Winter wrote:
I understand from 
http://www.postgresql.org/docs/8.0/static/datatype-money.html that the 
“money” data type is deprecated. 


Money is no longer deprecated in newer releases (specifically 8.3), 
although I do think it would be wise to push it to numeric.


I think the way to do it would be to back up the table and edit the table 
definition in the dump file, making the money column a numeric.  Then reload 
the table from the backup.


Sincerely,

Joshua D. Drake



Re: [GENERAL] Annoying messages when copy sql code to psql terminal

2008-06-05 Thread Gurjeet Singh
On Fri, Jun 6, 2008 at 7:58 AM, Merlin Moncure [EMAIL PROTECTED] wrote:

 On Tue, May 27, 2008 at 9:24 AM, A B [EMAIL PROTECTED] wrote:
  Whenever I use copy-paste to run code in a terminal window that is
  running psql, and the code contains a row like
 
  IF FOUND THEN
 
  then I get the words
 
  ABORTCHECKPOINT   COMMIT   DECLARE  EXECUTE

 [...]

 As others have noted, you have tabs in your sql source.  I'd advise if
 possible, not to use the tab character in sql, for this and other
 reasons.


Can you please elaborate on other reasons? I have never encountered any
_other_ reason!!

And 'using psql' is not reason enough to not indent your SQL code.

Best regards,
-- 
[EMAIL PROTECTED]
[EMAIL PROTECTED] gmail | hotmail | indiatimes | yahoo }.com

EnterpriseDB http://www.enterprisedb.com

Mail sent from my BlackLaptop device


Re: [GENERAL] Temporary Tables and Web Application

2008-06-05 Thread Brent Wood
Hi Tim,

Off the top of my head, and from somewhat left field: using filesystems to 
manage this sort of effect.

Would real tables in a tablespace defined on a ramdisk meet this need? That 
way you get the functionality/accessibility of a physical table, along with 
the performance of a filesystem actually residing in memory. Presumably 
viable if you have the memory to spare and know that the size of the temp 
tables won't exceed it.

You could also mount a tablespace on a physical disk with a filesystem that 
has delayed/deferred writes to disk, so that if a table is created and 
deleted quickly enough, it is never actually written to disk, but just sits 
in the cache. 
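
Under the assumption of a Linux host with a tmpfs mount (the path and
tablespace name here are illustrative), the ramdisk idea could be set up
like this:

```sql
-- Run as a superuser, after something like:
--   mount -t tmpfs -o size=512M tmpfs /mnt/pg_ram
--   chown postgres /mnt/pg_ram
CREATE TABLESPACE ram_space LOCATION '/mnt/pg_ram';

-- Scratch tables for query results then live in memory:
CREATE TABLE query_scratch (
    result_id integer NOT NULL,
    log_id    integer NOT NULL
) TABLESPACE ram_space;
```

Note that anything in such a tablespace is lost on reboot, and a missing
mount at startup can leave the cluster complaining about the tablespace,
so this only suits disposable scratch data.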


Cheers,

Brent Wood


 Bill Moran [EMAIL PROTECTED] 06/06/08 8:01 AM 
In response to Tim Tassonis [EMAIL PROTECTED]:
 
 Bill Moran wrote:
  In response to Tim Tassonis [EMAIL PROTECTED]:
  
 
  Now, with apache/php in a mpm environment, I have no guarantee that a 
  user will get the same postgresql session for a subsequent request, thus 
  he will not see the temporary table.
 
  Is there a way to create temporary tables in another way, so they are 
  visible between sessions, or do I need to create real tables for my 
  purpose? And is the perfomance penalty big for real tables, as they have 
  been written to disk/read from disk?
  
  Build a framework that creates the tables in a special schema, and then
  can access them through any session.  Use some method to generate unique
  table names and store the names in the HTTP session.  Create some sort
  of garbage collection routines that removes tables when they're no longer
  needed.
  
  The details of exactly how you pull this off are going to depend heavily
  on the rest of your application architecture.
  
 
 What you describe is what I referred to as create real tables. I've 
 done that and it works, but I wondered if there's something similar 
 built in postgres apart from classical temporary tables.

Not that I'm aware of.

If you keep the mailing list in the CC, others can answer as well.

-- 
Bill Moran
Collaborative Fusion Inc.
http://people.collaborativefusion.com/~wmoran/

[EMAIL PROTECTED]
Phone: 412-422-3463x4023







[GENERAL] Re: how to clean up temporary schemas (how to sync the system table with pg_dump)

2008-06-05 Thread Gary Fu



Gary Fu wrote:

I tried to use pg_dump to restore (sync) a database, but I noticed that
the system table pg_namespace was not synced.


If you restore a database, entries in pg_namespace will be created if
the dump contains any CREATE SCHEMA statements, i.e. if there are
schemas in your original database.

Check if the dump was created and restored by a database user with
the appropriate permissions (a superuser ideally), and look out for
error messages.

Do not try to manually change pg_namespace. Just don't.

Yours,
Laurenz Albe



Thanks for the response.  I think the problem is because there are
temporary schemas (pg_temp_1, ..) in the source db and the pg_dump
does not allow them to be restored (see below).

My question now is why those temporary schemas won't be cleaned
after I restart the db ?

Thanks,
Gary

% pg_dump -n pg_temp_1 -h nppdist
--
-- PostgreSQL database dump
--
SET client_encoding = 'LATIN1';
SET standard_conforming_strings = off;
SET check_function_bodies = false;
SET client_min_messages = warning;
SET escape_string_warning = off;

--
-- Name: pg_temp_1; Type: SCHEMA; Schema: -; Owner: postgres
--

CREATE SCHEMA pg_temp_1;


ALTER SCHEMA pg_temp_1 OWNER TO postgres;

--
-- PostgreSQL database dump complete
--

-
% pg_dump -n pg_temp_1 -h nppdist | psql -h nppsds1
SET
SET
SET
SET
SET
ERROR:  unacceptable schema name pg_temp_1
DETAIL:  The prefix pg_ is reserved for system schemas.
ERROR:  schema pg_temp_1 does not exist



Re: [GENERAL] Re: how to clean up temporary schemas (how to sync the system table with pg_dump)

2008-06-05 Thread Tom Lane
Gary Fu [EMAIL PROTECTED] writes:
 My question now is why those temporary schemas won't be cleaned
 after I restart the db ?

Just leave them alone and you'll be fine.  These tools actually have
had most of the bugs worked out of them ;-) ... if you think pg_dump is
omitting something, you are probably mistaken.

regards, tom lane
