We could have a second implementation of org.apache.wicket.pageStore.IPageStore
and you could switch between them by overriding
org.apache.wicket.DefaultPageManagerProvider#newPageStore.
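
For example, something like this in the application class (just a quick,
untested sketch; AlternativeSecondLevelPageStore is a placeholder for such a
second implementation, not an existing class):

import org.apache.wicket.DefaultPageManagerProvider;
import org.apache.wicket.pageStore.IDataStore;
import org.apache.wicket.pageStore.IPageStore;
import org.apache.wicket.protocol.http.WebApplication;
import org.apache.wicket.serialize.ISerializer;

public class MyApplication extends WebApplication
{
    // getHomePage() etc. omitted for brevity

    @Override
    protected void init()
    {
        super.init();

        // swap only the page store; the data store (disk) and the page
        // manager stay the same
        setPageManagerProvider(new DefaultPageManagerProvider(this)
        {
            @Override
            protected IPageStore newPageStore(IDataStore dataStore)
            {
                ISerializer serializer = getFrameworkSettings().getSerializer();
                // placeholder for the hypothetical second IPageStore impl
                return new AlternativeSecondLevelPageStore(serializer, dataStore);
            }
        });
    }
}

The same hook exists in 6.x, so this should also work as the switch Nick asks
about below.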

The question is: how useful is this second level cache, really?

The first level cache (all pages touched in the last request cycle being kept
in the HTTP session) is useful because every Ajax interaction re-uses the live
page instance from this cache.

The second and third level caches are used when the user goes back in the
browser history.
Until now the second level cache (the one we are discussing) has kept the last
N *serialized* pages for all users.
If the second level cache has no entry for the sessionId+pageId key, then the
third level (the disk) is used.
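
Simplified, the read path for a page looks roughly like this (the names only
approximate what DefaultPageStore does today, this is not the actual code):

// rough illustration only: 'serializedPagesCache' is the in-memory list of
// the last N serialized pages (second level), 'dataStore' is the disk
// (third level)
public IManageablePage getPage(String sessionId, int pageId)
{
    byte[] data = serializedPagesCache.getPage(sessionId, pageId); // 2nd level
    if (data == null)
    {
        data = dataStore.getData(sessionId, pageId); // 3rd level: the disk
    }
    return data == null ? null : (IManageablePage) serializer.deserialize(data);
}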

In our application we see a very low read ratio for the second and third level
caches. Our users mostly navigate forward, i.e. they use the links in the
application.

On Fri, Mar 7, 2014 at 4:47 PM, Nick Pratt <[email protected]> wrote:

> Would it be possible to make this change available as a switchable option
> in 6.x (in addition to 7.x), leaving the current store/mechanism in place?
>
> The reason I ask is that this seems to be a very significant, fundamental
> change to Wicket, and while the analysis by the original poster was very
> thorough and detailed, and the changes discussed so far sound very
> promising, I'd like to try it out on existing apps without being forced to
> upgrade to 7.x due to the core nature of the changes being proposed. So if
> we could switch the implementations (existing vs. proposed) via a setting
> in Application::init, this would be extremely helpful for testing memory
> usage and performance impacts.
>
> N
>
>
>
> On Fri, Mar 7, 2014 at 9:14 AM, Tom Götz <[email protected]> wrote:
>
> >
> > On 07.03.2014, at 14:14, Martin Grigorov <[email protected]> wrote:
> >
> > > On Fri, Mar 7, 2014 at 2:51 PM, Andrea Del Bene <[email protected]> wrote:
> > >
> > >> Just my considerations (hopefully not too obvious) on caching inside
> > >> DefaultPageStore.SerializedPagesCache.
> > >>
> > >> - Using a LinkedList instead of an ArrayList would give better
> > >> performance, because we would avoid recompacting (when adding/removing
> > >> the first/last element).
> > >> - With a deeper refactoring we could use one LinkedList per session and
> > >> store all of them in a ConcurrentHashMap, which supports non-blocking
> > >> reads. In addition, if a list for a given session is not yet present,
> > >> we could use putIfAbsent to ensure that it is added only once.
> > >> With this solution the limit of N pages would apply per session list
> > >> and not as a global value.
> > >>
> > >
> > > This changes the behavior completely, but I tend to agree with you.
> > > At the moment I use ConcurrentSkipListMap<SessionId,
> > > ConcurrentSkipListMap<PageId, IManageablePage>>, where the SessionId
> > > class wraps the String sessionId and has a modificationTime that is used
> > > to sort the entries, so I can remove the oldest one when the limit is
> > > reached. Same for PageId.
> > > The problem is that with this approach I will always remove the oldest
> > > Page from the oldest Session. But there may be an even older Page in
> > > another Session that was added later. So it is not correct...
> >
> > An additional consideration: what about getPage(String sessionId, int
> > pageId), should this also update the modification time (or maybe better:
> > "lastAccessTime") of the returned page? Pages that are requested more
> > frequently should perhaps be removed from the cache later ...!?
> >
> >
> > > Your suggestion would be much simpler to implement, but the total size
> > > of this cache will be dynamic and will depend on the number of active
> > > sessions. But since this cache uses SoftReference for the values (the
> > > pages), I think this won't be a problem.
> >
> > We could have a (configurable) global limit for all cached pages in all
> > sessions; if that limit is reached, the least recently accessed/modified
> > page could be removed ...
> >
> >
> > Cheers,
> >    -Tom
> >
> >
> >
>
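
Just to illustrate Andrea's per-session idea from the quoted mail above, here
is a minimal sketch with made-up names. It uses a ConcurrentLinkedDeque
instead of a plain LinkedList so that the per-session list itself is
thread-safe, and it leaves out the global limit and the SoftReference handling
Tom mentions:

import java.util.Deque;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedDeque;
import java.util.concurrent.ConcurrentMap;

// Sketch only: one bounded list of serialized pages per session, kept in a
// ConcurrentHashMap for non-blocking reads. SerializedPage is a stand-in for
// DefaultPageStore.SerializedPage.
public class PerSessionSerializedPagesCache
{
    private final ConcurrentMap<String, Deque<SerializedPage>> cache =
        new ConcurrentHashMap<String, Deque<SerializedPage>>();

    private final int maxPagesPerSession;

    public PerSessionSerializedPagesCache(int maxPagesPerSession)
    {
        this.maxPagesPerSession = maxPagesPerSession;
    }

    public void storePage(String sessionId, SerializedPage page)
    {
        Deque<SerializedPage> pages = cache.get(sessionId);
        if (pages == null)
        {
            Deque<SerializedPage> newList = new ConcurrentLinkedDeque<SerializedPage>();
            // putIfAbsent guarantees the per-session list is created only once
            Deque<SerializedPage> existing = cache.putIfAbsent(sessionId, newList);
            pages = (existing != null) ? existing : newList;
        }
        pages.addFirst(page);
        // the N-pages limit applies per session, not globally; size() is O(n)
        // on ConcurrentLinkedDeque, which is good enough for a sketch
        while (pages.size() > maxPagesPerSession)
        {
            pages.pollLast();
        }
    }

    public SerializedPage getPage(String sessionId, int pageId)
    {
        Deque<SerializedPage> pages = cache.get(sessionId);
        if (pages != null)
        {
            for (SerializedPage page : pages)
            {
                if (page.pageId == pageId)
                {
                    return page;
                }
            }
        }
        return null;
    }

    public void removeSession(String sessionId)
    {
        cache.remove(sessionId);
    }

    // minimal stand-in for DefaultPageStore.SerializedPage
    public static final class SerializedPage
    {
        final int pageId;
        final byte[] data;

        public SerializedPage(int pageId, byte[] data)
        {
            this.pageId = pageId;
            this.data = data;
        }
    }
}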
