Hi,
as a last check before I start coding the datastore,
I've written down some design decisions and a rough outline
of how it would work.

If anybody is interested in reviewing it, it would
be much appreciated.

General design
---------------------------------------------

The basic idea is simple: have a datastore
that can list all the "featureTypes" contained
in a directory by delegating all the work to
other datastores that can handle specific file
types.

To locate the datastores that can handle files,
we ask the datastores themselves to accept two
parameters:
- either a URL or a File
- and a namespace
The URL/File DataAccessFactory.Param must have
as associated metadata either FileDataStore.IS_FILE
or FileDataStore.IS_DIRECTORY, which tells us
whether the param is intended to be a file or a directory.
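As a sketch, the flag lookup could be as simple as checking the Param metadata map. Everything here is invented for illustration (the key names, the class, the string values); the real IS_FILE/IS_DIRECTORY constants don't exist yet:

```java
import java.util.Map;

// Hypothetical sketch: how the directory datastore could inspect a
// factory's URL/File parameter metadata. Key names are invented here.
public class ParamFlags {
    public static final String IS_FILE = "isFile";
    public static final String IS_DIRECTORY = "isDirectory";

    /** True if the param metadata marks the parameter as accepting single files. */
    public static boolean acceptsFile(Map<String, ?> metadata) {
        return Boolean.TRUE.equals(metadata.get(IS_FILE));
    }

    /** True if the param metadata marks the parameter as accepting directories. */
    public static boolean acceptsDirectory(Map<String, ?> metadata) {
        return Boolean.TRUE.equals(metadata.get(IS_DIRECTORY));
    }
}
```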

The FileDataStore interface would be implemented by
datastores willing to help the directory datastore (but
it would not be required), and would have a single method:
Collection<File> getFilesForType(String typeName)

that returns the files that make up a certain feature type.
This allows us to skip testing the .dbf, .prj, and so on,
for a shapefile for example (if we do test them, the shapefile
datastore factory will actually accept them and create a new
shapefile datastore, which is wasteful).
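For reference, a sketch of what the proposed interface would look like (the name and signature come straight from the proposal above; nothing here is released API):

```java
import java.io.File;
import java.util.Collection;

// The single-method interface proposed above. Implementing it is
// optional; the directory datastore just takes advantage of it when
// present, so it can skip probing companion files.
public interface FileDataStore {
    /**
     * Returns the files that make up the named feature type, e.g. for a
     * shapefile "roads": roads.shp, roads.dbf, roads.shx, roads.prj, ...
     */
    Collection<File> getFilesForType(String typeName);
}
```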


DirectoryDataStore ideas and user stories
-----------------------------------------------

On datastore creation:
- scan the SPI for appropriate datastores that:
   - support namespaces
   - have a URL or File parameter with the proper flags
- pass the directory to all directory handlers and
   each file to the file/url handlers; if they
   can handle it:
   - open the datastore
   - get all the feature type names
   - associate each feature type with
     - the file/directory used to generate this entry
     - the factory that created the datastore
     - the datastore itself, as a soft reference
       subclass that disposes of the datastore on clear()
       (using WeakCollectionCleaner to have clear()
        called... more about this in a separate mail)
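The per-type bookkeeping above could be sketched like this (names invented; the SoftReference subclass with dispose-on-clear() and the WeakCollectionCleaner hookup are left out on purpose):

```java
import java.io.File;
import java.lang.ref.SoftReference;

// Hypothetical entry in the feature type -> datastore map. In the real
// thing the SoftReference would be a subclass that disposes the datastore
// on clear(), with WeakCollectionCleaner making sure clear() gets called.
public class TypeEntry<F, D> {
    final File source;                // file/directory that generated this entry
    final F factory;                  // factory that created the datastore
    final SoftReference<D> datastore; // the datastore itself, softly held

    public TypeEntry(File source, F factory, D datastore) {
        this.source = source;
        this.factory = factory;
        this.datastore = new SoftReference<>(datastore);
    }

    /** The cached datastore, or null if the soft reference was cleared. */
    public D getDataStore() {
        return datastore.get();
    }
}
```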

On getTypeNames():
- check for directory updates
- return the keys of the type -> datastore map

On getSchema(String typeName):
- check for directory updates
- grab the datastore for the specific type name and
   return the requested schema directly (since
   the datastore is supposed to support namespaces
   we don't need any more headaches, right?)
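Put together, getTypeNames() and getSchema() reduce to lookups in the type map. A minimal sketch, where Delegate stands in for the real datastore and a String stands in for the schema, just to keep the example self-contained (the update check is stubbed out, see the directory updates section below):

```java
import java.util.Map;
import java.util.Set;

// Sketch of the delegation described above; all names are stand-ins.
public class DirectoryDelegation {
    /** Stand-in for a delegate datastore that can answer schema requests. */
    public interface Delegate {
        String getSchema(String typeName);
    }

    private final Map<String, Delegate> typeMap;

    public DirectoryDelegation(Map<String, Delegate> typeMap) {
        this.typeMap = typeMap;
    }

    public Set<String> getTypeNames() {
        // the keys of the type -> datastore map
        return typeMap.keySet();
    }

    public String getSchema(String typeName) {
        Delegate ds = typeMap.get(typeName);
        if (ds == null)
            throw new IllegalArgumentException("Unknown feature type: " + typeName);
        // the delegate supports namespaces, so its schema is returned as-is
        return ds.getSchema(typeName);
    }
}
```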

On getFeatureSource():
- check for directory updates
- grab the datastore for the specific feature type, if
   any
- wrap it in a feature source that will return the
   proper datastore back (the directory one, not the
   original one)
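The wrapping step can be sketched as overriding just the owner accessor. FeatureSource here is a tiny two-method stand-in for the real interface, not the GeoTools one:

```java
// Sketch of the feature-source wrapper: everything delegates except
// getDataStore(), which reports the directory datastore instead of the
// original one.
public class Wrapping {
    public interface FeatureSource {
        Object getDataStore();
        String getName();
    }

    public static FeatureSource wrap(FeatureSource delegate, Object directoryStore) {
        return new FeatureSource() {
            public Object getDataStore() {
                return directoryStore;     // the directory store, not the delegate's
            }
            public String getName() {
                return delegate.getName(); // everything else just delegates
            }
        };
    }
}
```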

Check for directory updates:
- use the last modified date of the directory
   to check whether the type name list should be updated
- if so, redo the scan. To try and avoid
   killing all the already opened datastores, try the
   following:
     - check if any of the type map entries is linked
       to the current file; if so, keep it
     - if an existing entry's file is gone, remove the
       entry
     - if a new file appears, create a new entry
- to avoid repeating the checks over and over, maybe
   have a ds parameter that controls how often we
   check the filesystem for updates (with a
   default value of 1 second). If we do this, also have
   a method to force the update of the cache.
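The throttled check could be sketched like this. The 1 second default and the force method match the proposal; the class and everything else in it are invented, and the actual rescan/diff of type entries is left out:

```java
import java.io.File;

// Hypothetical throttled update check: hit the filesystem at most once
// per interval, and report a change only when the directory's
// last-modified timestamp differs from the one seen last time.
public class UpdateChecker {
    private final File directory;
    private final long checkIntervalMillis;  // proposed ds parameter, default 1000
    private long lastCheck = Long.MIN_VALUE; // MIN_VALUE = never checked
    private long lastModifiedSeen = -1;

    public UpdateChecker(File directory, long checkIntervalMillis) {
        this.directory = directory;
        this.checkIntervalMillis = checkIntervalMillis;
    }

    /** Returns true if the type list should be rescanned. */
    public boolean checkForUpdates(long nowMillis) {
        if (lastCheck != Long.MIN_VALUE
                && nowMillis - lastCheck < checkIntervalMillis) {
            return false;                    // throttled: trust the cache
        }
        lastCheck = nowMillis;
        long modified = directory.lastModified();
        if (modified != lastModifiedSeen) {
            lastModifiedSeen = modified;
            return true;                     // directory changed: rescan
        }
        return false;
    }

    /** Forces the next checkForUpdates() call to hit the filesystem again. */
    public void forceCheck() {
        lastCheck = Long.MIN_VALUE;
    }
}
```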

For all the DataStore reader/write operations, same
as above.

For createSchema()... I guess we have to refuse, as
we have no way to specify which datastore to use to
actually create the file.

That's all folks. Let me know.
Cheers
Andrea

-- 
Andrea Aime
OpenGeo - http://opengeo.org
Expert service straight from the developers.

_______________________________________________
Geotools-devel mailing list
Geotools-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/geotools-devel
