Hi,

many thanks for the new services!
I am currently importing an NPI file, but it has already taken 3 weeks. The DB
grew very rapidly to about 12 GB; since then it has been growing far more
slowly and has now reached 20 GB.
Granted, it is just a fast HDD and not an SSD, but otherwise a rather decent
machine (8 cores, 8 GB RAM).
However, as the bottleneck is IO, I wonder what can be done to speed up the
import.
There are a lot of Postgres IO reads (~55 MB/s) and very few writes
(~100 KB/s). Digging deeper, I found that Postgres is doing sequential scans
on the search_name table (~11 GB) about twice a minute. So this should be
one of the reasons (if not the main reason) for the massive slowdown.
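For anyone who wants to reproduce the diagnosis: the statistics collector can confirm the scans. A sketch against the live import database (the table name is from above, the interpretation is mine):

```sql
-- Cumulative sequential vs. index scans on search_name since the last
-- statistics reset; seq_scan climbing while the import runs confirms
-- that the planner is not using an index for these lookups.
SELECT relname, seq_scan, seq_tup_read, idx_scan, idx_tup_fetch
FROM pg_stat_user_tables
WHERE relname = 'search_name';
```

Running EXPLAIN on the statement currently shown in pg_stat_activity should likewise show a "Seq Scan on search_name" node.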

I guess there are, in general, two ways of tuning:
1. postgres.conf (I adopted the recommended settings for a Mapnik planet
import here, so autovacuum is off and the buffers are rather large; I guess
there is not much room for improvement).
2. proper indexes
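For comparison, a sketch of the kind of postgresql.conf fragment such import guides recommend for an 8 GB machine. These are illustrative values of my own, not necessarily the exact ones from the guide:

```
# Illustrative bulk-import settings for an 8 GB RAM machine.
shared_buffers = 2GB            # large buffer cache for the import
maintenance_work_mem = 1GB      # speeds up index builds
checkpoint_segments = 32        # fewer, larger checkpoints during bulk load
autovacuum = off                # re-enable after the import finishes!
fsync = off                     # only safe if you can restart the import
```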

So my questions are:
1. Is there a way around the sequential scans on the search_name table? Is
this a bug (maybe index creation has been neglected)?
2. What indexes would eliminate the table scans, and is it possible and
meaningful to create them while the import is still running?
3. Roughly, what is the expected size of the final DB (standard parameters,
no changes to the configuration found in SVN)?
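Regarding question 2, in case it helps others: if an index really is missing, it can be built without blocking the running import via CREATE INDEX CONCURRENTLY. A sketch; the column name name_vector is only my guess at what the lookups hit, so check the actual slow statement in pg_stat_activity first:

```sql
-- CONCURRENTLY avoids the write lock, so the import can keep running
-- (the build itself is slower than a normal CREATE INDEX, though).
-- 'name_vector' is a hypothetical column choice, verify against the
-- slow query before creating anything.
CREATE INDEX CONCURRENTLY idx_search_name_name_vector
    ON search_name USING gin (name_vector);
```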

Thanks
fatzopilot



--
View this message in context: 
http://gis.638310.n2.nabble.com/MapQuest-release-3-new-APIs-tools-XAPI-JXAPI-NPI-new-Broken-Poly-tool-new-tp6254289p6330330.html
Sent from the USA mailing list archive at Nabble.com.

_______________________________________________
Talk-us mailing list
Talk-us@openstreetmap.org
http://lists.openstreetmap.org/listinfo/talk-us
