Jesse (and Dan offline),

I'm glad I'm not the only one questioning this.  I wondered if I was doing
something wrong.  Dan suggested an ON CONFLICT UPDATE solution on the db
side, which I'm looking at, but we may also go with the locking mechanism.
Thanks for the feedback.
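
For anyone following along, the db-side upsert Dan described would look
roughly like this -- just a sketch, assuming a database with PostgreSQL-style
INSERT ... ON CONFLICT support (MySQL's ON DUPLICATE KEY UPDATE would be the
equivalent) and a made-up user_prefs table keyed on (owner_id, pref_name):

public void savePreference(Connection conn, long ownerId, String name,
        String value) throws SQLException {
    // let the database resolve the race: insert the row, or update it if
    // another request already created it
    String sql = "INSERT INTO user_prefs (owner_id, pref_name, pref_value) "
               + "VALUES (?, ?, ?) "
               + "ON CONFLICT (owner_id, pref_name) "
               + "DO UPDATE SET pref_value = EXCLUDED.pref_value";
    PreparedStatement ps = conn.prepareStatement(sql);
    try {
        ps.setLong(1, ownerId);
        ps.setString(2, name);
        ps.setString(3, value);
        ps.executeUpdate();
    } finally {
        ps.close();
    }
}

That way concurrent SETs for the same pref become last-write-wins instead of
a constraint violation.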

doug


On 9/14/11 11:11 AM, "Ciancetta, Jesse E." <jc...@mitre.org> wrote:

>> We've implemented our own userprefs osapi service (yes, we dumped the
>> whole idea of sitting on top of appdata).  As I understand it, the calls
>> to our service will be asynchronous.  Recently Dan Dumont made some
>> changes to the common container layer to support a callback for
>> GET_PREFERENCES so that we didn't try to render the gadget before the GET
>> had completed.
>> 
>> I need some help understanding the SET side of things.  A gadget could
>> call SET for each userpref in rapid succession.  It's possible two of
>> these could be handled simultaneously at the service layer (it's not only
>> possible, we are hitting it EVERY TIME with the horoscope gadget).  Thus
>> it's possible we could get a constraint violation on the database side
>> (we are creating a master record that references the userpref values),
>> since two values could try to create the master record simultaneously.
>> Even if we flattened that out and didn't do the master record, the gadget
>> COULD still set the same value twice in rapid succession and we'd run
>> into the same problem.
>> 
>> So, is it the right solution to return a 409 (conflict) from the service
>> and have our container userprefs layer retry?
> 
> Huh -- while reading through your question and thinking through how I might
> solve it, I realized that I have this same exact problem in at least two
> places!  :-)
> 
> I think the way I'm planning to solve it is to introduce some intelligent
> synchronization.  So within Rave we have a method in our RegionWidgetService
> layer to save a RegionWidgetPreference which looks something like this:
> 
> public RegionWidgetPreference saveRegionWidgetPreference(long regionWidgetId,
>         RegionWidgetPreference preference) {
>     // fetch the region widget from the database
>     // pull the preferences out of it and add or update the preference we were passed
>     // save the region widget with its updated preferences back to the database
> }
> 
> So I think my solution is going to look something like this:
> 
> public RegionWidgetPreference saveRegionWidgetPreference(long regionWidgetId,
>         RegionWidgetPreference preference) {
>     // fetch a lock object for this RegionWidget instance using the
>     // regionWidgetId as a key (I'm looking at the
>     // java.util.concurrent.locks.Lock interface)
>     // lock on the lock object
>     // fetch the region widget from the database
>     // pull the preferences out of it and add or update the preference we were passed
>     // save the region widget with its updated preferences back to the database
>     // release the lock on the lock object
> }
> 
> Fetching/releasing the lock will be delegated off to a generic LockService
> which will have synchronized methods for fetching/releasing the locks -- this
> way the only synchronization overhead that *all* threads will incur is
> fetching/releasing the locks -- and the only time two (or more) threads will
> need to wait for each other's database operations is if they are trying to
> set preferences for the same RegionWidget instance.
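> 
> Something like this rough sketch is what I have in mind -- the class/method
> names are just placeholders, the imports (java.util and
> java.util.concurrent.locks) are omitted, and for simplicity I'm leaving locks
> in the map forever, which a real implementation would want to clean up:
> 
> public class LockService {
>     // one lock per RegionWidget id -- synchronized so every thread sees the
>     // same Lock instance for a given id
>     private final Map<Long, Lock> locks = new HashMap<Long, Lock>();
> 
>     public synchronized Lock getLock(long regionWidgetId) {
>         Lock lock = locks.get(regionWidgetId);
>         if (lock == null) {
>             lock = new ReentrantLock();
>             locks.put(regionWidgetId, lock);
>         }
>         return lock;
>     }
> }
> 
> public RegionWidgetPreference saveRegionWidgetPreference(long regionWidgetId,
>         RegionWidgetPreference preference) {
>     // lockService would be injected into the service layer
>     Lock lock = lockService.getLock(regionWidgetId);
>     lock.lock();
>     try {
>         // fetch the region widget from the database
>         // pull the preferences out of it and add or update the preference
>         // we were passed
>         // save the region widget with its updated preferences back to the
>         // database
>         return preference;
>     } finally {
>         lock.unlock();
>     }
> }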
> 
> I'll post a follow up when I have this implemented in Rave with pointers to
> the implementation for reference.
> 
>> And doesn't this same issue exist for gadgets that are calling appdata (or
>> any osapi write service for that matter)?  Should our services be trying
>> to prevent this or only reporting errors?  Is there some shindig construct
>> that helps with this?
> 
> I think all the code in Shindig that manipulates persistent data ends up doing
> so by delegating the actual data manipulation work off to an implementation of
> one of their SPI interfaces -- so I think it would be up to the implementer of
> those interfaces to do something like what I've outlined above.
> 
>> Your help is appreciated.
>> 
>> Thanks,
>> Doug
> 
> 

