(a) As you indicate later, you are drawing *ALL* your data, which
means the database is being used as nothing more than a big file.
For 400000 polygons this is not a "real world" use case, since the
map you draw will be essentially meaningless: a 1000x1000 image has
only 1M pixels, so your 400K polygons average 2.5 pixels each, and
that's the "best case" scenario (assuming they don't overlap at all).
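The case where PostGIS earns its keep is when a spatial filter lets
the GiST index prune most of the table. A minimal sketch, assuming a
hypothetical table "polygons" with a geometry column "the_geom" in
SRID 4326 (adjust names and extent to your data); MapServer generates
an equivalent bounding-box filter from the map extent on its own, so
zooming in to a subset is enough to trigger it:

    -- Fetch only polygons whose bounding boxes overlap the window;
    -- the && operator is answered by the GiST index, so only a small
    -- fraction of the 400K rows is ever read off disk.
    SELECT gid, the_geom
    FROM polygons
    WHERE the_geom && ST_SetSRID('BOX3D(11.5 45.0, 12.5 46.0)'::box3d, 4326);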
(b) Is it possible that your table is either very wide (100s of
columns) or your geometries very large (1000s of vertices)? If the
combined data per row exceeds 8K, each row will be TOASTed into a
side table, which dramatically increases the overhead of accessing
the data. That alone could easily account for a 20:1 performance
difference versus reading directly off a simple shapefile.
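Both are easy to check. A quick diagnostic sketch (table and column
names here are placeholders for yours; on older PostGIS releases
ST_NPoints is spelled npoints):

    -- Average and worst-case vertex counts: thousands of vertices
    -- per geometry is a strong hint the rows are being TOASTed.
    SELECT avg(ST_NPoints(the_geom)) AS avg_vertices,
           max(ST_NPoints(the_geom)) AS max_vertices
    FROM mytable;

    -- Size of the table's TOAST side table, if any; a large number
    -- here means most geometry bytes live outside the main heap.
    SELECT t.relname AS toast_table,
           pg_relation_size(t.oid) AS toast_bytes
    FROM pg_class c
    JOIN pg_class t ON t.oid = c.reltoastrelid
    WHERE c.relname = 'mytable';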
P.
On 17-Jul-07, at 11:49 PM, [EMAIL PROTECTED] wrote:
Dear Users,
I have been flirting with PostGIS data for a while, but now I have
come across a benchmark issue which baffles me. I loaded a big bunch
of polygon data (431094 rows into PostgreSQL 8.1) with the shp2pgsql
utility (thus with a GiST index and all)... I ran VACUUM ANALYZE on
the database, but the time MapServer takes to draw all the data is
about 20 times slower than with a shapefile.
Is this normal?
Thank you for your time.
Francesco Pirotti
_______________________________________________
postgis-users mailing list
[email protected]
http://postgis.refractions.net/mailman/listinfo/postgis-users