Hello,

About the implementation of JDBC for the shapefile driver: I have been stuck for more than a year now, even after studying Calcite a bit (I did not really understand it, because I do not know enough about parsers in general). Providing complete support for complex WHERE clauses will be difficult. Currently the driver only handles simple conditions (with operators >, <, <=, >=, etc.), but not the AND or OR operators. It should at least handle those to be really useful, so that the most common requests can find matching features without going further. I have been looking for another approach for months. Could other tools like QueryDSL help, or would they only move the problem elsewhere? I don't know.
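Once the individual conditions are parsed, combining them with AND / OR does not necessarily require a full SQL parser: the simple comparisons the driver already supports could become predicates, and the logical operators plain predicate composition. Below is a minimal sketch of that idea, assuming a record's attributes are available as a map; the names are hypothetical and this is not the driver's actual code.

    import java.util.HashMap;
    import java.util.Map;
    import java.util.function.Predicate;

    // Hypothetical sketch: each simple comparison becomes a Predicate over the
    // record's attribute values, and AND / OR are plain predicate composition.
    public final class SimpleWhereSketch {

        // Builds a predicate for "attribute op value" on a numeric column.
        static Predicate<Map<String, Object>> comparison(String attribute, String op, double value) {
            return record -> {
                double actual = ((Number) record.get(attribute)).doubleValue();
                switch (op) {
                    case "=":  return actual == value;
                    case "<":  return actual <  value;
                    case "<=": return actual <= value;
                    case ">":  return actual >  value;
                    case ">=": return actual >= value;
                    default: throw new IllegalArgumentException("Unsupported operator: " + op);
                }
            };
        }

        public static void main(String[] args) {
            // WHERE population > 10000 AND (area < 50 OR density >= 200)
            Predicate<Map<String, Object>> where =
                    comparison("population", ">", 10000)
                    .and(comparison("area", "<", 50)
                    .or(comparison("density", ">=", 200)));

            Map<String, Object> record = new HashMap<>();
            record.put("population", 25000);
            record.put("area", 80);
            record.put("density", 310);
            System.out.println(where.test(record));   // prints: true
        }
    }

The hard part remains turning the WHERE text into such a tree of conditions; Calcite or QueryDSL would only help with that parsing step, not remove it.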
However, the good side is that our driver does not need to support INSERT, UPDATE or DELETE operations, as these should be done through a FeatureStore.

Regarding exception management, the JDBC driver carries many checked exceptions. This gives it great accuracy and lets it handle every kind of trouble, but we have seen that in everyday use checked exceptions are not always easy to live with. I discovered that Spring offers the SQLErrorCodeSQLExceptionTranslator utility class (https://docs.spring.io/spring/docs/current/javadoc-api/org/springframework/jdbc/support/SQLErrorCodeSQLExceptionTranslator.html), which converts an SQLException into a DataAccessException (or one of its subclasses), and these are RuntimeExceptions. Moving from SQLException to DataAccessException means leaving the legacy SQLException behind, and could be a good idea (a small sketch of that translation is appended after the quoted message below).

Regards,

Marc Le Bihan

2017-07-31 14:47 GMT+02:00 johann sorel <[email protected]>:

> Hello,
>
> The current DataStore and FeatureStore API in SIS is limited; it is a
> draft in an internal package.
> Since there is no specification on how to design such an API, the proposed
> API results from our work over the last years on our projects at Geomatys.
> It copies some aspects from the Metadata specification (ISO 19115) and JCR
> (Java Content Repository).
>
> + DataStore would be defined by the methods:
> - Metadata getMetadata()
> - Resource getRootResource()
> - Resource findResource(String name)
>
> + Resource:
> - Metadata getMetadata()
> - Envelope getEnvelope()
>
> + DataSet extends Resource:
> - Collection<Resource> getResources();
>
> + FeatureResource extends Resource:
> - Stream<Feature> read()
> - FeatureResource subset(Query)
>
> The main concept is 'Resource', similar to what is abstractly explained in
> the metadata specification.
> The subtype DataSet is an aggregation of resources which can again be
> DataSets, so we obtain a tree structure that can be used to map a folder
> hierarchy, database schemas/tables, or anything else.
>
> A Resource can be of multiple types; so far we can imagine several subtypes:
> - FeatureResource (or FeatureCollection?) for features, obviously
> - CoverageResource for images and OGC map services: WMS, WMTS, WCS ...
> - MetadataResource (or Catalog) for services such as OGC CSW
> - SensorResource for data like NMEA or OGC SOS
> - a custom implementation when nothing matches
>
> For now I propose only a FeatureResource implementation because we don't
> have anything yet for the other types.
> The proposed interfaces are very light and leave a lot of space for future
> evolutions, for example having FeatureResource extend the JDBC API later on.
>
> The objective is to obtain a first usable datastore API which would match
> any known format: files, databases and services.
>
> If you have any comments, suggestions or preferences, discussion is open :)
>
> Thank you
>
> Johann Sorel
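Coming back to the exception question above, here is a minimal sketch of the translation I have in mind, assuming a plain DataSource; the class name and the SQL text are placeholders, not part of the SIS driver.

    import java.sql.Connection;
    import java.sql.SQLException;
    import java.sql.Statement;
    import javax.sql.DataSource;

    import org.springframework.dao.DataAccessException;
    import org.springframework.jdbc.support.SQLErrorCodeSQLExceptionTranslator;
    import org.springframework.jdbc.support.SQLExceptionTranslator;

    // Hypothetical sketch: wrap statement execution so that the checked SQLException
    // never reaches the caller, only Spring's unchecked DataAccessException hierarchy.
    public final class ExceptionTranslationSketch {

        private final DataSource dataSource;
        private final SQLExceptionTranslator translator;

        public ExceptionTranslationSketch(DataSource dataSource) {
            this.dataSource = dataSource;
            // The translator derives vendor error codes from the DataSource metadata.
            this.translator = new SQLErrorCodeSQLExceptionTranslator(dataSource);
        }

        // Executes a statement; callers only ever see unchecked exceptions.
        public void execute(String sql) {
            try (Connection connection = dataSource.getConnection();
                 Statement statement = connection.createStatement()) {
                statement.execute(sql);
            } catch (SQLException e) {
                // translate() wraps the SQLException in a DataAccessException
                // (a RuntimeException); the null check is only defensive.
                DataAccessException translated = translator.translate("execute", sql, e);
                throw (translated != null) ? translated : new RuntimeException(e);
            }
        }
    }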

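For the record, the proposal quoted above could be transcribed literally into Java interfaces as below. This is only an illustration of the quoted text, not a committed API; the Query type is a placeholder since the proposal does not define it, and Metadata, Envelope and Feature are assumed to be the GeoAPI interfaces (the exact packages depend on the GeoAPI version SIS builds against).

    import java.util.Collection;
    import java.util.stream.Stream;

    import org.opengis.feature.Feature;
    import org.opengis.geometry.Envelope;
    import org.opengis.metadata.Metadata;

    // Placeholder: the query type is not defined in the quoted proposal.
    interface Query {
    }

    interface DataStore {
        Metadata getMetadata();
        Resource getRootResource();
        Resource findResource(String name);
    }

    interface Resource {
        Metadata getMetadata();
        Envelope getEnvelope();
    }

    // An aggregation of resources, possibly nested, giving the tree structure
    // described in the proposal (folder hierarchies, database schemas/tables, ...).
    interface DataSet extends Resource {
        Collection<Resource> getResources();
    }

    // A resource whose content is a set of features, readable as a stream and
    // restrictable to a subset through a query.
    interface FeatureResource extends Resource {
        Stream<Feature> read();
        FeatureResource subset(Query query);
    }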