On Wed, 2016-09-21 at 11:18 +0200, Carsten Ziegeler wrote:
> > 
> > On Wed, 2016-09-21 at 10:21 +0200, Carsten Ziegeler wrote:
> > > > 
> > > > On 21.9.16 9:14, Carsten Ziegeler wrote:
> > > > > 
> > > > > > On 21.9.16 8:50, Carsten Ziegeler wrote:
> > > > > > > 
> > > > > > > > On 21.9.16 8:33, Carsten Ziegeler wrote:
> > > > > > > > > 
> > > > > > > > > > Pushing filters as much as possible into Oak has many
> > > > > > > > > > performance advantages compared to filtering messages
> > > > > > > > > > after delivery, though. Also, Oak would easily be able
> > > > > > > > > > to support the delete use case described above.
> > > > > > > > > > 
> > > > > > > > > In all cases, always, guaranteed?
> > > > > > > > 
> > > > > > > > For some definition of "all cases, always, guaranteed":
> > > > > > > > yes ;-)
> > > > > > > 
> > > > > > > :) So there is no compaction, ever?
> > > > > > 
> > > > > > There isn't if you configure it that way. It's up to you.
> > > > > > 
> > > > > > But this is completely irrelevant here. If compaction caused
> > > > > > events to be lost, there would be nothing you could do about
> > > > > > it in Sling, regardless of whether you implement an ad-hoc DIY
> > > > > > filter in Sling or use Oak filters.
> > > > > > 
> > > > > I agree.
> > > > > 
> > > > > Just to clarify: if I delete "/libs/foo", I get Oak observation
> > > > > events for all nodes that were under it, with the removed
> > > > > properties of each node, right?
> > > > 
> > > > No, just for the root of the removed tree.
> > > > 
> > > > See
> > > > https://issues.apache.org/jira/browse/OAK-1459?focusedCommentId=13911484&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13911484
> > > > 
> > > ah... memories :)
> > > 
> > > Ok, but that proves my point that glob filtering does not work for
> > > removals.
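
To make that concrete, here is a minimal sketch against the plain JCR
observation API (the class name and the /libs/foo path are made up for
illustration): a deep NODE_REMOVED listener gets a single event for the
root of the removed subtree, so a listener registered with a **.jsp glob
never sees the jsps that were removed together with their parent.

import javax.jcr.Session;
import javax.jcr.observation.Event;
import javax.jcr.observation.EventListener;
import javax.jcr.observation.ObservationManager;

// Illustration only: registers a deep NODE_REMOVED listener at "/".
// Removing the subtree /libs/foo results in exactly one event, for
// /libs/foo itself - descendants such as /libs/foo/bar.jsp do not get
// their own remove events, so a "**.jsp" glob cannot match them.
public class SubtreeRemovalExample {

    public static void register(Session session) throws Exception {
        ObservationManager om =
                session.getWorkspace().getObservationManager();
        EventListener listener = events -> {
            while (events.hasNext()) {
                Event event = events.nextEvent();
                try {
                    System.out.println("removed: " + event.getPath());
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        };
        om.addEventListener(listener,
                Event.NODE_REMOVED, // event types
                "/",                // absolute path to observe
                true,               // isDeep: include the whole subtree
                null, null,         // no uuid / node type restrictions
                false);             // noLocal
    }
}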
> > 
> > Is that a hard blocker? I can imagine that it's more convenient for
> > the application to get discrete change events for each removal, but
> > if we slightly change the contract to follow Oak's approach, would it
> > be more performant?
> > 
> > Without looking at it in more detail, I would imagine that the
> > application usually needs to clear caches or stop doing work when
> > such an instance is removed. The application can then do this for all
> > resources with a common parent. Sure, it's slightly more verbose and
> > might require a slight rearrangement of in-memory data structures,
> > but overall doable.
> 
> It's not a problem in general - the problem is that we don't specify
> the behaviour correctly. Right now, code registering a listener with a
> glob pattern of **.jsp expects to get remove events in all cases for
> exactly the removed jsps. But this is only true if the jsp is removed
> directly, not if any parent is removed.
> 
> So we a) need to clarify the contract and b) think about what we do if
> someone registers for a glob pattern. Do we send removal of parents
> automatically, or do we expect the listener to register at /?

I would say that we should send removal of parents automatically; it's
simpler for the clients.
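
To sketch what sending removals of parents automatically could look like
on our side, here is a rough illustration; GlobRemovalExpander, the
known-resource set and the glob-to-regex translation are invented for
this example and are not existing Sling API:

import java.util.List;
import java.util.Set;
import java.util.function.Consumer;
import java.util.regex.Pattern;
import java.util.stream.Collectors;

// Hypothetical sketch, not existing Sling API: when a removal event
// arrives for some ancestor path, synthesize per-resource removals for
// every previously observed descendant that matches the glob.
public class GlobRemovalExpander {

    // Paths the listener has seen so far, e.g. from earlier ADDED events.
    private final Set<String> knownResources;

    public GlobRemovalExpander(Set<String> knownResources) {
        this.knownResources = knownResources;
    }

    // Very simplified glob handling, just enough for patterns like "**.jsp".
    static Pattern globToRegex(String glob) {
        return Pattern.compile(glob.replace(".", "\\.").replace("**", ".*"));
    }

    // Called for every removal event the repository actually delivers.
    public void onRemoved(String removedPath, String glob,
                          Consumer<String> notify) {
        Pattern pattern = globToRegex(glob);
        List<String> affected = knownResources.stream()
                .filter(p -> p.equals(removedPath)
                        || p.startsWith(removedPath + "/"))
                .filter(p -> pattern.matcher(p).matches())
                .collect(Collectors.toList());
        for (String path : affected) {
            knownResources.remove(path);
            notify.accept(path); // e.g. /libs/foo/bar.jsp when /libs/foo goes
        }
    }
}

Whether such a known-resource index lives in each listener or centrally
in the dispatcher is exactly the contract question above; the point is
only that the expansion is mechanical once we say that removing a parent
implies removal of its matching descendants.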

Robert

> 
> Regards
> Carsten
> > 
> > Stefan also echoed this concern - sometimes it's not possible (or
> > performant/scalable) to have such fine-grained change notifications.
> > 
> > Robert
> > 
> > > 
> > > Carsten
> > > > 
> > > > ;-)
> > > > 
> > > > Michael
> > > > 
> > > > > Carsten