Damien Dougan wrote:
> Basically we have a number of tables, which are exposed as 2 public
> views (say PvA and PvB). For each row in PvA, there are a number of
> related rows in PvB (this number is arbitrary, which is one of the
> reasons why it can't be expressed as additional columns in PvA - so we
> really do need the two separate views).
Thanks for the thanks!
Generally, when grouping stuff together, it is a good idea to have two
sorted lists, and to scan them simultaneously. I have already used this
solution several times outside of Postgres, and it worked very well (it
was with Berkeley DB and there were 3 lists to scan).
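A minimal sketch of that simultaneous scan in Python - both inputs are
assumed to be already sorted on the join key, and all names here are
illustrative rather than taken from the actual script:

def merge_groups(categories, items):
    """categories: (id, name) pairs sorted by id.
    items: (cat_id, name) pairs sorted by cat_id.
    Yields each category with the list of its item names, in a
    single pass over both inputs and no per-category query."""
    it = iter(items)
    item = next(it, None)
    for cat_id, cat_name in categories:
        group = []
        # Consume items while they belong to the current category;
        # assumes every item's cat_id appears in categories.
        while item is not None and item[0] == cat_id:
            group.append(item[1])
            item = next(it, None)
        yield (cat_id, cat_name), group

cats = [(1, "fruit"), (2, "tools")]
its = [(1, "apple"), (1, "pear"), (2, "hammer")]
for cat, names in merge_groups(cats, its):
    print(cat, names)
# (1, 'fruit') ['apple', 'pear']
# (2, 'tools') ['hammer']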
Pierre-Frederic, Paul,
Thanks for your fast response (especially for the Python code and
performance figures) - I'll chase this up as a solution - looks most
promising!
Cheers,
Damien
My simple Python program dumps 1653992 items in 1654000 categories in:

real    3m12.029s
user    1m36.720s
sys     0m2.220s
It was running on the same machine as postgresql (AthlonXP 2500).
I Ctrl-C'd it before it had dumped the whole database, but you get the idea.
If you don't know Python and generators, they are worth a look.
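(For anyone new to generators, the idea in one hypothetical helper -
the batch size and the DB-API cursor are assumptions, not the thread's
actual code:)

def rows_from(cursor, batchsize=10000):
    # A generator: yields rows one at a time while fetching from the
    # server in batches, so only one batch ever sits in client memory.
    while True:
        batch = cursor.fetchmany(batchsize)
        if not batch:
            return
        for row in batch:
            yield row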
There's a very simple solution using cursors.
As an example:

create table categories ( id serial primary key, name text );
create table items ( id serial primary key, cat_id integer references
categories(id), name text );
create index items_cat_idx on items( cat_id );

insert some test data, then …
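From there, a minimal sketch of the cursor loop in Python - psycopg2,
the DSN and the cursor name are my assumptions; the DECLARE/FETCH
pattern is the point:

import psycopg2

conn = psycopg2.connect("dbname=test")   # hypothetical DSN
cur = conn.cursor()

# psycopg2 opens a transaction implicitly, and a server-side cursor
# only lives inside one.
cur.execute("DECLARE items_cur CURSOR FOR "
            "SELECT cat_id, name FROM items ORDER BY cat_id")
while True:
    cur.execute("FETCH 10000 FROM items_cur")
    rows = cur.fetchall()
    if not rows:
        break
    for cat_id, name in rows:
        pass  # stream each row out here (XML, CSV, ...)
cur.execute("CLOSE items_cur")
conn.commit()

Ordering the cursor by cat_id is what makes the simultaneous-scan
grouping possible: open a second cursor over categories ordered by id
and merge the two streams client-side.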
On 13/09/2004 12:38 Damien Dougan wrote:
[snip]
> Are there any tools/tricks/tips with regard to extracting large volumes
> of data across related tables from Postgres? It doesn't have to export
> into XML, we can do post-processing on the extracted data as needed -
> the important thing is to keep the relationships between the tables intact.
Hi All,
I am having a performance problem extracting a large volume of data from
Postgres 7.4.2, and was wondering if there was a more cunning way to get
the data out of the DB...
This isn't a performance problem with any particular PgSQL operation,
it's more a question of strategy for getting large volumes of related
data out of the DB …