Hi Thomas,

I'm wondering whether you're running into
https://issues.apache.org/jira/browse/WICKET-6702

I've been working on a solution to that problem, which is caused by pages being 
asynchronously serialized while another request is already coming in.
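
For reference, the effect is reproducible with plain JDK classes, outside of Wicket. The sketch below (purely illustrative, no Wicket types) serializes an object graph in one thread while another thread keeps mutating it, which typically surfaces as a ConcurrentModificationException inside writeObject, much like the errors described further down:

import java.io.ByteArrayOutputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.ArrayList;
import java.util.List;

public class AsyncSerializationRace {

    // Stand-in for a page holding a mutable component list.
    static class FakePage implements Serializable {
        final List<String> components = new ArrayList<>();
    }

    public static void main(String[] args) throws Exception {
        FakePage page = new FakePage();

        // "Next request": keeps mutating the page.
        Thread request = new Thread(() -> {
            for (int i = 0; i < 1_000_000; i++) {
                page.components.add("component-" + i);
                page.components.remove(0);
            }
        });
        request.start();

        // "Asynchronous page store": serializes the same page concurrently.
        for (int i = 0; i < 10_000; i++) {
            try (ObjectOutputStream out = new ObjectOutputStream(new ByteArrayOutputStream())) {
                out.writeObject(page); // usually fails with ConcurrentModificationException
            }
        }
        request.join();
    }
}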

Or maybe it is something different.
Could you create a quickstart?

Sven

On 25 February 2020 at 22:12:46 CET, Thomas Heigl <tho...@umschalt.com> wrote:
>Hi again,
>
>I investigated a bit and it does not seem to have anything to do with the
>PerSessionPageStore. I implemented a completely synchronized version of it
>and the problems still exist.
>
>If I switch to the default second-level cache that stores serialized pages
>in application scope, everything works as expected. Only the non-serialized
>pages in PerSessionPageStore seem to be affected by concurrent ajax
>modifications.
>
>What is the difference between keeping pages in the session and keeping the
>same pages in the PerSessionPageStore? Is there some additional locking
>done for pages in the session?
>
>Best,
>
>Thomas
>
>On Tue, Feb 25, 2020 at 8:25 PM Thomas Heigl <tho...@umschalt.com>
>wrote:
>
>> Hi all,
>>
>> I'm currently experimenting with PerSessionPageStore as a second-level
>> cache. We are moving our page store from memory (i.e. the session) to
>> Redis, and keeping 1-2 pages per session in memory speeds up ajax requests
>> quite a bit because network roundtrips and (de)serialization can be skipped
>> for cached pages.
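
As a rough sketch of that layering (hypothetical names such as KeyValueStore and CachedPageStore, not the real Wicket or Redis client APIs), a hit on the in-memory map skips both the Redis round trip and the (de)serialization, while a miss falls through to Redis:

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.concurrent.ConcurrentHashMap;

// Stand-in for a Redis client; hypothetical interface.
interface KeyValueStore {
    byte[] get(String key);
    void set(String key, byte[] value);
}

class CachedPageStore {

    private final KeyValueStore redis;
    // Live page instances, keyed by sessionId:pageId.
    private final ConcurrentHashMap<String, Serializable> hot = new ConcurrentHashMap<>();

    CachedPageStore(KeyValueStore redis) {
        this.redis = redis;
    }

    private static String key(String sessionId, int pageId) {
        return sessionId + ":" + pageId;
    }

    Serializable getPage(String sessionId, int pageId) throws Exception {
        Serializable page = hot.get(key(sessionId, pageId));
        if (page != null) {
            return page; // hit: no network round trip, no deserialization
        }
        byte[] bytes = redis.get(key(sessionId, pageId));
        if (bytes == null) {
            return null;
        }
        try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return (Serializable) in.readObject(); // miss: pay the full cost
        }
    }

    void storePage(String sessionId, int pageId, Serializable page) throws Exception {
        hot.put(key(sessionId, pageId), page); // keep the live instance for the next ajax request
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(buffer)) {
            out.writeObject(page);
        }
        redis.set(key(sessionId, pageId), buffer.toByteArray());
    }
}

A real implementation would of course also evict entries so that only 1-2 pages per session stay in memory, and drop them when the session is invalidated.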
>>
>> Our application is very ajax-heavy (it is basically a single-page
>> application with lots of lazy loading). While rapidly clicking around and
>> firing as many parallel ajax requests as possible, I noticed that it is
>> quite easy to trigger exceptions that I have never seen before:
>> ConcurrentModificationExceptions during serialization,
>> MarkupNotFoundExceptions, exceptions about components already being
>> dequeued, etc.
>>
>> So I had a look at the implementation of PerSessionPageStore and noticed
>> that it does not do any kind of synchronization and does not use atomic
>> operations when updating the cache. It seems to me that the second-level
>> cache is not really usable in a concurrent ajax environment.
>>
>> I think that writing pages to the second-level cache store should either
>> synchronize on sessionId+pageId or attempt to use atomic operations
>> provided by ConcurrentHashMap.
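
One way to get that atomicity without an explicit lock is to let ConcurrentHashMap do the read-modify-write. Here is a minimal sketch, assuming a per-session map that is rebuilt inside compute() so readers only ever see fully built snapshots (class and constant names are made up, not the actual PerSessionPageStore internals):

import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class PerSessionCacheSketch {

    private static final int MAX_PAGES_PER_SESSION = 2;

    // sessionId -> (pageId -> page); all updates go through compute(),
    // which makes the read-modify-write for one session atomic.
    private final ConcurrentHashMap<String, LinkedHashMap<Integer, Object>> cache =
            new ConcurrentHashMap<>();

    void storePage(String sessionId, int pageId, Object page) {
        cache.compute(sessionId, (id, pages) -> {
            // Build a fresh copy so concurrent readers always see a consistent snapshot.
            LinkedHashMap<Integer, Object> updated =
                    pages == null ? new LinkedHashMap<>() : new LinkedHashMap<>(pages);
            updated.remove(pageId); // re-insert to move the page to the newest position
            updated.put(pageId, page);
            Iterator<Integer> oldest = updated.keySet().iterator();
            while (updated.size() > MAX_PAGES_PER_SESSION) {
                oldest.next();
                oldest.remove();
            }
            return updated;
        });
    }

    Object getPage(String sessionId, int pageId) {
        Map<Integer, Object> pages = cache.get(sessionId); // snapshot, never mutated after publication
        return pages == null ? null : pages.get(pageId);
    }

    void removeSession(String sessionId) {
        cache.remove(sessionId);
    }
}

Synchronizing on a sessionId+pageId key would work as well, but compute() keeps the locking inside the map and avoids managing monitors by hand.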
>>
>> Did anyone else ever run into these issues? The code of
>> PerSessionPageStore is quite complex because of soft references, skip-list
>> maps, etc., so I'm not sure what the right approach to address these
>> problems would be.
>>
>> Best regards,
>>
>> Thomas
>>
