I'm still not sure about not limiting the number of pages kept in the session; even DiskPageStore has some sort of limit. In my opinion, not having a limit exposes us to the possibility of a single malicious user grinding the system to a halt. Yes, Terracotta will persist pages to disk if need be, but if that session is in active use then it will be paging to and from disk all the time.
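For what it's worth, something along these lines is roughly what I have in mind; just an untested sketch, and maxPages is a made-up setting rather than anything from either attached file:

import java.util.LinkedHashMap;
import java.util.Map;

/**
 * Sketch of an oldest-first cap on stored pages. Key and value types are
 * placeholders for whatever the real store keeps per session/pagemap.
 */
public class BoundedPageMap<K, V> extends LinkedHashMap<K, V> {

    private final int maxPages;

    public BoundedPageMap(int maxPages) {
        // accessOrder = false keeps insertion order, so the eldest entry
        // is simply the page that was stored first
        super(16, 0.75f, false);
        this.maxPages = maxPages;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // once the cap is exceeded, silently drop the oldest page first,
        // which is the behaviour DiskPageStore gives us today
        return size() > maxPages;
    }
}

As discussed further down, this only really works with one such map per pagemap (or at least per session); otherwise heavy use of one pagemap evicts fresh pages from another.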
I would like to get the opinion of some other people about this.

Also, I don't see how the -1 ajax version can work: in the disk based store, -1 is treated the same as in getPage, where it just looks for the highest version. In our case, however, it will construct a key with the -1 value in it, i.e. it will only find a page whose ajax version number is actually -1. Since that can't happen, containsPage won't work. We could probably use the helper structure to simplify this though (see the sketch I've appended after the quoted text below).

Richard


Stefan Fußenegger wrote:
>
> 1+2) well, it will only add pages as long as the session is alive. if a
> page isn't used frequently it will be moved to and later persisted by the
> TC server and finally GCed together with its session. therefore I don't
> think deleting old pages is necessary. or do you have a special use case
> where this could be problematic? maybe a bot crawling thousands of pages
> could generate tons of serialized pages? but is this really a problem?
>
> 3) okay, didn't see that little piece of javadoc. I think an extra
> structure keeping track of the most recent versions of pageIds could help
> to make these searches efficient.
>
> I changed my code:
> - one store per PageMapName, making deletes more efficient
> - version info stored for all pageIds (HashMap<Integer,VersionInfo>),
>   where VersionInfo has a pointer to the most recent page and the highest
>   ajaxVersionNumber
>
> Comments?
>
> New file (untested!):
> http://www.nabble.com/file/p18314611/MyTerracottaPageStore.java
> MyTerracottaPageStore.java
>
>
> richardwilko wrote:
>>
>> Hi Stefan,
>>
>> Looking through your code I see a couple of issues:
>>
>> 1) There is no limit on the number of pages stored in the pagemap; pages
>> could get added forever. I feel there needs to be a way to limit the
>> number of pages stored, with the oldest ones discarded first. This is
>> how DiskPageStore works.
>>
>> 2) Following on from point 1, a HashMap does not keep insertion order,
>> so it is not possible to remove the oldest ones easily. A simple change
>> to LinkedHashMap would solve this and make point 1 easy to implement.
>> However, storing all the pagemaps together does mean that the most
>> recent pages from one pagemap could get removed due to high use of
>> another pagemap. In this case, when the user goes back to the other
>> pagemap he/she will encounter an exception.
>>
>> 3) Your getPage code is not general enough; from the javadocs for
>> getPage in IPageStore:
>> * If ajaxVersionNumber is -1 and versionNumber is specified, the page
>>   store must return the page with the highest ajax version.
>> * If both versionNumber and ajaxVersionNumber are -1, the page store
>>   must return the last touched (saved) page version with the given id.
>> Your method of constructing a key object wouldn't work in these
>> situations, as it would only find exact matches, and so getPage would
>> require iterating through the entire HashMap and looking at every entry.
>>
>> This issue is the reason why I went for the nested structure I used. I
>> do agree that a single storage map would ideally be better, especially
>> as this makes it easier to manage the number of pages stored, but I'm
>> not sure if it is the most efficient method of storage for the complex
>> getPage requirements. By efficient I mean execution time rather than
>> memory usage.
>>
>> Thoughts?
>>
>> Richard
>>
>>
>> Stefan Fußenegger wrote:
>>>
>>> Hi Richard,
>>>
>>> I had a thorough look at your code.
>>> I have the following remarks:
>>>
>>> - yes, SerializedPage must be clustered and should therefore implement
>>>   IClusterable (it is already Serializable, so it should be okay to
>>>   change)
>>> - I found two problems with your implementation:
>>>   1) unbind() is called during invalidation of a session. getPageStore()
>>>      will therefore result in an NPE, as there is no WebRequest
>>>   2) according to the JavaDoc of DiskPageStore#removePage(SessionEntry,
>>>      String, int) ("page id to remove or -1 if the whole pagemap should
>>>      be removed"), calling removePage(String, String, int) with an id of
>>>      -1 should delete all pages of a pageMap (however, that's not
>>>      documented in the JavaDoc of IPageStore!)
>>> - I feel that all pages could be kept in a single HashMap (rather than
>>>   using 3 levels of nested HashMaps and HashSets). I therefore
>>>   implemented my own PageStore based on your ideas to confirm my
>>>   feelings (using a single HashMap per session, with fewer Hash(Map|Set)
>>>   iterations; access synchronized using a ReentrantReadWriteLock, which
>>>   I think has quite good performance with TC). Please have a look. We
>>>   can probably merge our ideas for best results!
>>>
>>> http://www.nabble.com/file/p18312624/MyTerracottaPageStore.java
>>> MyTerracottaPageStore.java
>>>
>>> Regards
>>> Stefan
>>>
>>
>
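P.S. To make the "helper structure" idea a bit more concrete, here is a rough, untested sketch of what I mean. The names (VersionedPageIndex, PageKey, the VersionInfo fields and so on) are made up for illustration and are not taken from either attached file; synchronization and anything Terracotta-specific are left out:

import java.io.Serializable;
import java.util.HashMap;
import java.util.Map;

/**
 * Sketch only: one instance per session (or per pagemap). The helper map
 * lets getPage resolve the -1 cases without scanning every stored page.
 */
public class VersionedPageIndex {

    /** Stand-in for the real serialized page representation. */
    public interface SerializedPage extends Serializable { }

    /** Composite key: pageId + versionNumber + ajaxVersionNumber. */
    public static final class PageKey implements Serializable {
        final int pageId, versionNumber, ajaxVersionNumber;

        public PageKey(int pageId, int versionNumber, int ajaxVersionNumber) {
            this.pageId = pageId;
            this.versionNumber = versionNumber;
            this.ajaxVersionNumber = ajaxVersionNumber;
        }

        @Override
        public boolean equals(Object o) {
            if (!(o instanceof PageKey)) {
                return false;
            }
            PageKey k = (PageKey) o;
            return pageId == k.pageId && versionNumber == k.versionNumber
                    && ajaxVersionNumber == k.ajaxVersionNumber;
        }

        @Override
        public int hashCode() {
            return 31 * (31 * pageId + versionNumber) + ajaxVersionNumber;
        }
    }

    /** Per-pageId bookkeeping so -1 lookups never iterate the page map. */
    private static final class VersionInfo implements Serializable {
        PageKey lastTouched; // key of the most recently stored page
        // versionNumber -> highest ajaxVersionNumber stored for it
        final Map<Integer, Integer> highestAjax = new HashMap<Integer, Integer>();
    }

    private final Map<Integer, VersionInfo> versions = new HashMap<Integer, VersionInfo>();
    private final Map<PageKey, SerializedPage> pages = new HashMap<PageKey, SerializedPage>();

    public void storePage(int pageId, int versionNumber, int ajaxVersionNumber, SerializedPage page) {
        PageKey key = new PageKey(pageId, versionNumber, ajaxVersionNumber);
        pages.put(key, page);

        VersionInfo info = versions.get(pageId);
        if (info == null) {
            info = new VersionInfo();
            versions.put(pageId, info);
        }
        info.lastTouched = key;
        Integer highest = info.highestAjax.get(versionNumber);
        if (highest == null || ajaxVersionNumber > highest) {
            info.highestAjax.put(versionNumber, ajaxVersionNumber);
        }
    }

    public SerializedPage getPage(int pageId, int versionNumber, int ajaxVersionNumber) {
        VersionInfo info = versions.get(pageId);
        if (info == null) {
            return null;
        }
        if (versionNumber == -1) {
            // both -1: return the last touched (saved) version of this pageId
            return pages.get(info.lastTouched);
        }
        if (ajaxVersionNumber == -1) {
            // exact versionNumber requested, highest ajax version wanted
            Integer highest = info.highestAjax.get(versionNumber);
            if (highest == null) {
                return null;
            }
            ajaxVersionNumber = highest;
        }
        return pages.get(new PageKey(pageId, versionNumber, ajaxVersionNumber));
    }
}

With something like this, the two -1 cases from the IPageStore javadoc become plain map lookups instead of a scan over every entry, and a containsPage-style check can be answered from the helper map as well.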
