Hi all,

This may be a little off topic, but there are a lot of smart minds here that might know.

I have a plpgsql function that does some work to compute all the intersections with street names in a road network. It creates a vertex table (like in pgRouting) and an intersection table, populates both tables from a streets network, and finally updates the intersection table based on the number of connected segments that have street names.
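Roughly, the function looks like this (table and column names here are simplified placeholders, not my real schema):

CREATE OR REPLACE FUNCTION build_intersections() RETURNS void AS $$
BEGIN
    -- vertex table, one row per network node (like pgRouting's vertices table)
    CREATE TABLE vertices (
        id       serial PRIMARY KEY,
        the_geom geometry
    );

    -- one row per candidate intersection, with a count of the connected
    -- segments that carry a street name
    CREATE TABLE intersections (
        vertex_id integer PRIMARY KEY,
        num_named integer
    );

    -- ... populate vertices and intersections from the streets table ...

    -- ... final update of intersections (the slow part, see below) ...
END;
$$ LANGUAGE plpgsql;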

The performance issue I'm having is that if I include the final update in the procedure, it takes forever (well over 12 hours, at which point I aborted it), and postgres runs at 100% CPU the whole time. But if I comment out the update, the procedure takes about 1 hour to run, and if I then run the update by itself from the command line it takes about another hour, so roughly 2 hours total.
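For reference, the standalone update I run afterwards has roughly this shape (placeholder names again, assuming pgRouting-style source/target columns on the streets table; my real SQL is a bit more involved):

UPDATE intersections i
   SET num_named = s.cnt
  FROM (SELECT v AS vertex_id, count(*) AS cnt
          FROM (SELECT source AS v, name FROM streets
                UNION ALL
                SELECT target AS v, name FROM streets) e
         WHERE e.name IS NOT NULL
         GROUP BY v) s
 WHERE i.vertex_id = s.vertex_id;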

A plpgsql function runs in a single transaction, while breaking the work into two steps runs it in two transactions, so I figure that has something to do with it. And you cannot COMMIT inside a plpgsql function.

Any thoughts on how to fix this?
Can an SQL procedure do a COMMIT? If so, I might be able to reorganize the process into two plpgsql procedures that get called from a wrapper procedure.
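If a procedure is allowed to COMMIT, the reorg I'm picturing is roughly this, with the current function split into two pieces (all names here are placeholders, and I've written the wrapper in plpgsql too):

CREATE OR REPLACE PROCEDURE build_and_update()
LANGUAGE plpgsql AS $$
BEGIN
    CALL build_network();           -- first half: create and populate the tables
    COMMIT;                         -- close the first transaction
    CALL update_intersections();    -- second half: the final update, in its own transaction
END;
$$;

-- and then run it with:  CALL build_and_update();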

Thoughts?

-Steve