Good day,

I have considered making this functionality before, but have always resisted, for two reasons:

1. It does not scale; I often find projects prototyped very quickly this way and then running into memory limitations later.
2. A detached FeatureCollection still looks like a FeatureCollection, and as long as you are programming against that interface there is no reason to break the API and ruin your scalability.
So let me change the question: "Now that FeatureCollection can be used for editing, can we remove addFeatures, modifyFeatures and removeFeatures?" Let's take that one at a time:

1. addFeatures: yes, it can be removed. featureStore.getFeatures().add( Collection ) works fine.
2. removeFeatures: yes, it can be removed. featureStore.getFeatures( filter ).delete() works fine.
3. modifyFeatures: nope, it cannot be smoothly removed; we would need FeatureCollection.update( AttributeType[], Object[] ). I need to review what was done on the FeatureModel branch to see the specifics.

That answers the question from an API perspective (is the code complete). The other reason you could ask for this is from a performance perspective. I would still ask that you consider using the API as it stands, and spend time working on a FeatureCollection implementation that acts as a cache (or buffer) of modification commands, rather than ducking around the API and making a solution that does not scale. Yes, the existing TransactionStateDiff used by Shapefile offers some of this functionality, but it is not something that is under programmer control (the difference between an internal cache and an external cache).

To sum up:

1. The FeatureStore API is only there to be optimized into SQL; it is not intended to be complete. For ad hoc modification, FeatureWriter and FeatureCollection are both available.
2. I am trying to move us to use optimized FeatureCollection implementations.
3. Making a FeatureCollection that is aware of Transaction state changes (and feature modification events) will accomplish your goal within the bounds of the existing API.
4. Staying within the bounds of the existing API will allow your code to scale transparently to large data sizes.

This does come with a word of warning: if you talk to Corey, he can tell you about using the GeoTools shapefile datastore in the recommended manner and running into scaling problems.
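To make the mapping concrete, here is a minimal, self-contained sketch of the idea. The Feature and FeatureStore classes here are toy stand-ins invented for illustration, not the real GeoTools classes; the point is just to show how addFeatures and removeFeatures collapse onto operations on the collection returned by getFeatures():

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Predicate;

public class CollectionEditingSketch {

    /** Toy feature: a FID plus attribute values (NOT the GeoTools Feature). */
    static class Feature {
        final String fid;
        final Map<String, Object> attributes = new HashMap<>();
        Feature(String fid) { this.fid = fid; }
    }

    /** Toy store backed by a list; getFeatures() hands out a live collection. */
    static class FeatureStore {
        private final List<Feature> contents = new ArrayList<>();

        // addFeatures( collection ) becomes getFeatures().add( ... )
        List<Feature> getFeatures() { return contents; }

        // removeFeatures( filter ) becomes getFeatures( filter ).delete();
        // modeled here with a Predicate standing in for a Filter
        void deleteFeatures(Predicate<Feature> filter) {
            contents.removeIf(filter);
        }
    }

    public static void main(String[] args) {
        FeatureStore store = new FeatureStore();
        Feature road = new Feature("road.1");
        road.attributes.put("name", "Main St");

        store.getFeatures().add(road);                        // replaces addFeatures
        store.deleteFeatures(f -> f.fid.equals("road.1"));    // replaces removeFeatures

        System.out.println(store.getFeatures().size());       // prints 0
    }
}
```

The design point is that once the collection itself is editable, the store needs no per-operation methods for add and remove; only the batch-attribute update has no obvious collection-level equivalent yet.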
They responded by producing several optimized Collections (i.e. spanning the spatial indexes, etc.). We need a great deal more serious use out of our DataStore implementations (and more optimized collections); this process will occur as we use GeoTools for real work (rather than just serving and displaying information).

You may also be interested in the earlier idea of an "Operation", i.e. a unit of code that does some kind of work against Features:

   interface Operation {
       /** Needed to construct target FeatureCollection, or FeatureStore */
       FeatureType processSchema( FeatureType fromSchema );
       /** Needed to process Features one at a time */
       Feature process( Feature from );
   }

A simpler API can be used to process features in place; the idea being that you can construct a "chain" of operations and have a fun time. I am not against the idea of Operations (we use them in uDig to great effect), but I am starting to think something at the FeatureCollection level will be of more use, especially now that FeatureVisitor is defined and used to great effect for aggregate functions, showing the way to back some operations into raw SQL while not breaking object-oriented encapsulation.

Cheers,
Jody

Vitali Diatchkov wrote:
> The FeatureStore interface is, from some points of view, quite narrow. I will try to explain a use
> case.
>
> Modifications are restricted to a set of methods like:
>
> modifyFeatures(AttributeType[] type, Object[] value, Filter filter)
>
> This does not give enough flexibility to work with a "detached" set of
> features (I use Hibernate's term, thinking it is good enough to
> characterize that behavior): features that were requested from a DataStore
> and put into memory for some processing. Quite complex
> processing may be performed over them, various attributes may be modified,
> etc. (except FIDs, of course; they are non-modifiable entities, keys
> by which the "detached" set can be bound ("attached") again to the
> external data store later).
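For the curious, a "chain" of Operations is just composition; here is a runnable sketch of how it might fit together. The Feature stand-in and the UPPERCASE_NAME example operation are invented for illustration and are not GeoTools types:

```java
import java.util.HashMap;
import java.util.Map;

public class OperationChainSketch {

    /** Toy stand-ins so the sketch runs without GeoTools on the classpath. */
    static class Feature {
        final Map<String, Object> values = new HashMap<>();
    }
    static class FeatureType { }

    /** The Operation idea from the email: schema once, then features one at a time. */
    interface Operation {
        FeatureType processSchema(FeatureType fromSchema);
        Feature process(Feature from);
    }

    /** Chain two operations into one: apply the first, then the second. */
    static Operation chain(Operation first, Operation second) {
        return new Operation() {
            public FeatureType processSchema(FeatureType s) {
                return second.processSchema(first.processSchema(s));
            }
            public Feature process(Feature f) {
                return second.process(first.process(f));
            }
        };
    }

    /** Hypothetical example operation: upper-case the "name" attribute in place. */
    static final Operation UPPERCASE_NAME = new Operation() {
        public FeatureType processSchema(FeatureType s) { return s; }
        public Feature process(Feature f) {
            f.values.put("name", String.valueOf(f.values.get("name")).toUpperCase());
            return f;
        }
    };

    public static void main(String[] args) {
        Feature f = new Feature();
        f.values.put("name", "main st");
        Operation op = chain(UPPERCASE_NAME, UPPERCASE_NAME);
        System.out.println(op.process(f).values.get("name")); // prints MAIN ST
    }
}
```

Because an Operation transforms both the schema and the features, a chain built this way can be handed to anything that walks a FeatureCollection, which is what makes the pattern composable.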
> So I would like to request a set of features, "detach" them from the data store,
> perform arbitrary modifications, then update. I would like FeatureStore to have
> the functionality to pass a collection of "detached" features
> (through any collection interface for features, speaking abstractly) to update
> them.
>
> Right now I am restricted to the modifyFeatures(..) method, which I have to call
> multiple times. The problem is that there is no way to perform batch
> updating of features, which would improve the performance of this use case
> significantly (just take the JDBC case to imagine it). Each feature has a FID,
> and we have a FIDMapper in the JDBC case, so we can always reconstruct the connection
> between a feature in the external data store and a feature in the "detached" set; I don't
> see a problem here.
>
> What does the current implementation of FeatureStore.modifyFeatures(..) give us?
> It lets us specify a filter to request the features to be updated, and specify
> which attribute types and values to update: a kind of batch update for
> the case when the whole set of features must be updated with the same attribute values.
> But what if I have various features requested at different
> stages of my business process, and in each feature I modified different attribute
> values? I want a method to pass in a collection of features; their IDs
> are native, so we can reconstruct the connection with the features in the external
> data store. In the JDBC case, for example, we can prepare an UPDATE statement and
> perform the update of all features one by one with a single prepared statement.
>
> Most likely I see just one side, and there are other points and reasons to
> have such restricted modifyFeatures(..) capabilities. But from the described use
> case's point of view, this design leaves few opportunities for optimizing
> updates.
>
> But I would like to discuss that issue :)
>
> Regards, Vitali Diatchkov.
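(For illustration, the detach / modify / reattach-by-FID pattern Vitali describes could be sketched as below. Every class here is a hypothetical in-memory stand-in: a plain Map keyed by FID plays the role the external database would play, and the single write-back loop stands in for the one JDBC PreparedStatement batch a real implementation would drive.)

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class DetachedUpdateSketch {

    /** Toy feature: an immutable FID plus mutable attribute values. */
    static class Feature {
        final String fid;
        final Map<String, Object> values = new HashMap<>();
        Feature(String fid) { this.fid = fid; }
    }

    /** Toy backing store keyed by FID, standing in for the external database. */
    static final Map<String, Feature> STORE = new HashMap<>();

    /** "Detach": copy the requested features out of the store into memory. */
    static List<Feature> detach(Collection<String> fids) {
        List<Feature> out = new ArrayList<>();
        for (String fid : fids) {
            Feature copy = new Feature(fid);
            copy.values.putAll(STORE.get(fid).values);
            out.add(copy);
        }
        return out;
    }

    /** "Attach": write every modified feature back in one pass, matched on FID.
     *  A JDBC implementation would add one batch entry per feature here and
     *  execute the whole batch with a single prepared UPDATE statement. */
    static void update(Collection<Feature> detached) {
        for (Feature f : detached) {
            STORE.get(f.fid).values.putAll(f.values);
        }
    }
}
```

The key property is the one Vitali points out: because FIDs are stable, the detached set needs no bookkeeping beyond the features themselves to find its way back to the store.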
>
>
> -------------------------------------------------------------------------
> Using Tomcat but need to do more? Need to support web services, security?
> Get stuff done quickly with pre-integrated technology to make your job easier
> Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo
> http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642
> _______________________________________________
> Geotools-devel mailing list
> Geotools-devel@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/geotools-devel