Hello all -

I'm working on a Postgres project after coming from a MySQL background
(no flames, please :). We are importing fairly large XML datasets
(10-20 MB of XML files per 'project', currently 5 projects) into the
database for querying.

We are using PHP to build a web interface where users can create
projects and upload their files, and we have a parsing routine that
gets the XML data into the database.
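To give an idea of what that routine does, here is a heavily simplified
sketch of the insert side; the table name, column names, file path, and
XML element names below are all made up for illustration:

    <?php
    // Heavily simplified sketch -- real table, column, and element names differ.
    $pdo = new PDO('pgsql:host=localhost;dbname=projects', 'username', 'password');
    $pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

    // Each project currently gets its own, identically structured table.
    $projectTable = 'project_5_records';

    // Placeholder path for the user's uploaded file (10-20 MB of XML).
    $uploadedFile = '/tmp/project_5.xml';
    $xml = simplexml_load_file($uploadedFile);

    $stmt = $pdo->prepare("INSERT INTO {$projectTable} (name, value) VALUES (?, ?)");

    // One INSERT per parsed element; this loop runs over the whole file.
    foreach ($xml->record as $record) {
        $stmt->execute([(string) $record->name, (string) $record->value]);
    }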

The parsing has turned out to be pretty intense: it takes about 10-20
minutes for any project, and while it is running it really slows down
the site's response. To narrow things down, I tested serving static
webpages from Apache and running endless loops in PHP, and neither of
those was the problem; the choke point seems to be running any other
Postgres query while building a PHP page during parsing.

As an example, the original designer specified a separate table for
each project. Since the tables all have the same structure, I suggested
combining them into a single table with a project_id column, but he
said that would make queries too slow. I was skeptical, but I went with
his design anyway.
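
To make the two layouts concrete (column names are just placeholders):

    <?php
    $pdo = new PDO('pgsql:host=localhost;dbname=projects', 'username', 'password');

    // Current design: one identically structured table per project,
    // created when the project is set up.
    $pdo->exec("CREATE TABLE project_5_records (name text, value text)");

    // The combined design I suggested: one shared table keyed by project.
    $pdo->exec("CREATE TABLE project_records (
                    project_id integer,
                    name       text,
                    value      text
                )");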

It turned out he was right for our current setup. When I needed to
empty a project's table to re-parse its data, a cascading delete could
take up to 10 minutes! I cut re-parsing time in half by simply dropping
the table and creating a new one, which was acceptable since the table
belonged to only one project anyway. But I hate to think how long a
delete that cascades to the child tables would have taken if all the
project data had been in a single table.
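
Concretely, the cleanup step before re-parsing went from the first form
below to the second (same made-up names as above):

    <?php
    $pdo = new PDO('pgsql:host=localhost;dbname=projects', 'username', 'password');

    // What I did at first: empty the project's table and let the
    // ON DELETE CASCADE foreign keys clean out the child tables.
    // This could take up to 10 minutes.
    $pdo->exec("DELETE FROM project_5_records");

    // What I do now: drop the table and recreate it empty before re-parsing.
    // (CASCADE also drops the foreign-key constraints pointing at this table,
    // so those have to be re-created along with it.)
    $pdo->exec("DROP TABLE project_5_records CASCADE");
    $pdo->exec("CREATE TABLE project_5_records (name text, value text)");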

Since I'm not an expert in Postgres database design, I'm assuming I've
done something suboptimal. Are there some common techniques for tuning
Postgres performance? Do we need beefier hardware?

Or is it a problem with how PHP or Apache pools connections to the
database?

