[EMAIL PROTECTED] wrote:

I have a fairly simple database design with a series of tables that reference 
graphical data.  Currently, there is one table that contains the following:

CREATE TABLE polygons (id INTEGER PRIMARY KEY, externalref INTEGER,
                       pointcount INTEGER, pointlist BLOB)

When I insert data into this table, I have to pack the x,y coordinates into a 
contiguous block of memory before storing it in the database, and getting the 
data back out requires the reverse conversion.  Typically these lists contain 
10-20 points, so they are not very large.
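
Roughly, the round trip looks something like this (a Python sketch just for 
illustration; the little-endian 32-bit packing and the sample values are 
arbitrary choices here, not necessarily what I actually use):

  import sqlite3
  import struct

  def pack_points(points):
      # Flatten (x, y) pairs into one contiguous byte string of
      # little-endian 32-bit signed ints (the exact format is arbitrary here).
      flat = [c for xy in points for c in xy]
      return struct.pack('<%di' % len(flat), *flat)

  def unpack_points(blob):
      # Reverse conversion: byte string back to a list of (x, y) tuples.
      coords = struct.unpack('<%di' % (len(blob) // 4), blob)
      return list(zip(coords[0::2], coords[1::2]))

  conn = sqlite3.connect(':memory:')
  conn.execute("CREATE TABLE polygons (id INTEGER PRIMARY KEY, "
               "externalref INTEGER, pointcount INTEGER, pointlist BLOB)")

  pts = [(10, 20), (30, 40), (50, 60)]
  cur = conn.execute("INSERT INTO polygons (externalref, pointcount, pointlist) "
                     "VALUES (?, ?, ?)", (42, len(pts), pack_points(pts)))

  blob = conn.execute("SELECT pointlist FROM polygons WHERE id = ?",
                      (cur.lastrowid,)).fetchone()[0]
  print(unpack_points(blob))   # -> [(10, 20), (30, 40), (50, 60)]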

This currently works, but I'm wondering if it would be better to create a 
separate table for the points that references the polygon's primary key:

CREATE TABLE pointlist (polyid INTEGER, x INTEGER, y INTEGER)

On average I'm managing 50,000+ polygons, so the pointlist table could easily 
exceed 1 million rows.  Points are not shared between polygons.
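
If I went that route, I imagine using it along these lines (again only a 
sketch; the index name and sample values are made up):

  import sqlite3

  conn = sqlite3.connect(':memory:')
  conn.execute("CREATE TABLE pointlist (polyid INTEGER, x INTEGER, y INTEGER)")
  # Presumably an index on polyid would be needed so that fetching one
  # polygon's points does not scan a million-row table.
  conn.execute("CREATE INDEX pointlist_polyid ON pointlist(polyid)")

  polyid = 42
  pts = [(10, 20), (30, 40), (50, 60)]
  conn.executemany("INSERT INTO pointlist (polyid, x, y) VALUES (?, ?, ?)",
                   [(polyid, x, y) for (x, y) in pts])

  rows = conn.execute("SELECT x, y FROM pointlist WHERE polyid = ?",
                      (polyid,)).fetchall()
  print(rows)   # the polygon's points (without an ORDER BY the order is not guaranteed)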

Does anyone have a recommendation as to which approach is better, both for 
space and for processing time?  In the worst case I could insert over 1 million 
polygons, which would put up to 20 million rows in the point list table.

Thanks,
John

You will need some form of spatial indexing if you want to retrieve your data faster. I use a simple quad-tree, but I'm sure better designs exist.
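
Just to illustrate the idea, a point quad-tree can be as small as the sketch 
below (generic Python, not my actual implementation; the capacity and bounds 
are arbitrary):

  class QuadTree:
      # Minimal point quad-tree: rectangular bounds, points kept in leaves,
      # a leaf splits into four children once it holds more than 'capacity'.

      def __init__(self, bounds, capacity=8):
          self.bounds = bounds          # (xmin, ymin, xmax, ymax)
          self.capacity = capacity
          self.points = []              # list of (x, y, payload)
          self.children = None          # four sub-trees once split

      def insert(self, x, y, payload):
          xmin, ymin, xmax, ymax = self.bounds
          if not (xmin <= x <= xmax and ymin <= y <= ymax):
              return False              # point outside this node
          if self.children is None:
              self.points.append((x, y, payload))
              if len(self.points) > self.capacity:
                  self._split()
              return True
          # any() stops at the first child that accepts the point
          return any(c.insert(x, y, payload) for c in self.children)

      def _split(self):
          xmin, ymin, xmax, ymax = self.bounds
          mx, my = (xmin + xmax) / 2.0, (ymin + ymax) / 2.0
          self.children = [QuadTree(b, self.capacity) for b in
                           [(xmin, ymin, mx, my), (mx, ymin, xmax, my),
                            (xmin, my, mx, ymax), (mx, my, xmax, ymax)]]
          for (x, y, payload) in self.points:
              any(c.insert(x, y, payload) for c in self.children)
          self.points = []

      def query(self, rect):
          # Return payloads of all points inside rect = (xmin, ymin, xmax, ymax).
          qxmin, qymin, qxmax, qymax = rect
          xmin, ymin, xmax, ymax = self.bounds
          if qxmax < xmin or qxmin > xmax or qymax < ymin or qymin > ymax:
              return []                 # query rectangle misses this node
          if self.children is None:
              return [p for (x, y, p) in self.points
                      if qxmin <= x <= qxmax and qymin <= y <= qymax]
          out = []
          for c in self.children:
              out.extend(c.query(rect))
          return out

  # Typical use: one entry per polygon, keyed by a representative point
  # (centroid or bounding-box corner), then query with the view rectangle.
  tree = QuadTree((0, 0, 1000, 1000))
  tree.insert(10, 20, 'poly-1')
  tree.insert(500, 600, 'poly-2')
  print(tree.query((0, 0, 100, 100)))   # -> ['poly-1']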



--
Noël Frankinet
Gistek Software SA
http://www.gistek.net


