Re: [vote] release Wicket 1.4-m3

2008-07-07 Thread Johan Compagner
[X] release 1.4-m3

On Sun, Jul 6, 2008 at 4:22 PM, Martijn Dashorst <[EMAIL PROTECTED]>
wrote:

> I've created and uploaded a release candidate for 1.4-m3 to my p.a.o
> space. I'm a little time constrained, so I didn't test the release in
> a container. I do think that a 1.4-m3 release is due, because we need
> feedback on whether this is the way to go forward.
>
> You can find the release here:
>
> http://people.apache.org/~dashorst/releases/apache-wicket-1.4-m3
>
> There's also a rat report available.
>
> [ ] don't release 1.4-m3
>
> Martijn
>
> --
> Become a Wicket expert, learn from the best: http://wicketinaction.com
> Apache Wicket 1.3.4 is released
> Get it now: http://www.apache.org/dyn/closer.cgi/wicket/1.3.
>


Re: Terracotta integration

2008-07-07 Thread Stefan Fußenegger

Hi Richard,

I had a thorough look at your code and have the following remarks:

- yes, SerializedPage must be clustered and should therefore implement
IClusterable (it is already Serializable, so it should be okay to
change)
- I found two problems with your implementation:
  1) unbind() is called during the invalidation of a session; getPageStore()
will therefore result in an NPE, as there is no WebRequest
  2) according to the JavaDoc of DiskPageStore#removePage(SessionEntry,
String, int) ("page id to remove or -1 if the whole pagemap should be
removed") calling removePage(String, String, int) with an id of -1 should
delete all pages of a pageMap (however, that's not documented in the JavaDoc
of IPageStore!)
- I feel that all pages could be kept in a single HashMap (rather than using 3
levels of nested HashMaps and HashSets). I therefore implemented my own
PageStore based on your ideas to confirm my feeling (using a single HashMap
per session and fewer Hash(Map|Set) iterations; access is synchronized using
a ReentrantReadWriteLock, which I think performs quite well with TC).
Please have a look; we can probably merge our ideas for best results!

http://www.nabble.com/file/p18312624/MyTerracottaPageStore.java
MyTerracottaPageStore.java
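For reference, the read/write-lock scheme described above could look roughly like this (a generic sketch only; the class and method names are illustrative and not taken from the attached file):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Hypothetical sketch: one flat map per session, guarded by a
// ReentrantReadWriteLock so concurrent reads don't block each other
// while writes still get exclusive access.
public class LockedPageMap {
    private final Map<String, byte[]> pages = new HashMap<>();
    private final ReadWriteLock lock = new ReentrantReadWriteLock();

    public void store(String key, byte[] serializedPage) {
        lock.writeLock().lock();
        try {
            pages.put(key, serializedPage);
        } finally {
            lock.writeLock().unlock();
        }
    }

    public byte[] get(String key) {
        lock.readLock().lock();
        try {
            return pages.get(key);
        } finally {
            lock.readLock().unlock();
        }
    }
}
```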

Regards
Stefan



richardwilko wrote:
> 
> Hi again,
> 
> I have put together a second version which does away with the need to
> instrument TerracottaPageStore and AbstractPageStore, but not
> AbstractPageStore$SerializedPage (no getting away from that).
> 
> I have also improved the synchronisation stuff (I think; it's not my strong
> point) and added a few more comments.
> 
> In the end I did make the classes static inner classes; I moved all the
> calls to the methods in AbstractPageStore to other places.
> 
> Please take a look and tell me what you think.
> 
> Richard
> 
>  http://www.nabble.com/file/p18280052/TerracottaPageStore.java
> TerracottaPageStore.java 
> 
> 
> 
> 
> richardwilko wrote:
>> 
>> It does add a slight overhead, but I don't think 2 extra classes would
>> be noticed.
>> 
>> I can't use static inner classes because the methods in AbstractPageStore
>> aren't static.  
>> 
>> Your suggestion would work, with a slight modification:
>> 
>> (TerracottaPageStore)
>> ((SecondLevelCacheSessionStore) Application.get().getSessionStore()).getStore();
>> 
>> So I will have a look at implementing it that way instead.
>> 
>> Cheers,
>> 
>> Richard
>> 
>> 
>> 
>> Stefan Fußenegger wrote:
>>> 
>>> I don't understand the problem. Is it just the visibility of those
>>> methods? If yes, TerracottaPageStore could allow public access to any
>>> protected method if needed. Or you could use static inner classes to
>>> remove this (hidden) reference and use lazy property initialization to
>>> get your hands on the current JVM's TerracottaSessionStore using:
>>> 
>>> 
>>> private transient TerracottaSessionStore _tss;
>>> 
>>> public TerracottaSessionStore getTerracottaSessionStore() {
>>>   if (_tss == null) _tss = (TerracottaSessionStore)
>>> Application.get().getSessionStore();
>>>   return _tss;
>>> }
>>> 
>>> 
>>> However, doesn't instrumenting classes that aren't meant to be shared sound
>>> like unnecessary overhead?
>>> 
>>> best regards
>>> Stefan
>>> 
>>> 
>>> 
>>> 
>> 
>> 
> 
> 


--
Stefan Fußenegger
http://talk-on-tech.blogspot.com // looking for a nicer domain ;)
-- 
View this message in context: 
http://www.nabble.com/Terracotta-integration-tp18168616p18312624.html
Sent from the Wicket - Dev mailing list archive at Nabble.com.



Re: Terracotta integration

2008-07-07 Thread richardwilko

Hi Stefan,

Looking through your code I see a couple of issues:

1) There is no limit on the number of pages stored in the pagemap; pages
could get added forever.  I feel there needs to be a way to limit the number
of pages stored, with the oldest ones discarded first.  This is how
DiskPageStore works.

2) Following on from point 1, a HashMap does not keep insertion order, so it
is not possible to remove the oldest entries easily.  A simple change to
LinkedHashMap would solve this and make point 1 easy to implement.  However,
storing all the pagemaps together does mean that the most recent pages from
one pagemap could get removed due to heavy use of another pagemap.  In that
case, when the user goes back to the other pagemap he/she will encounter an
exception.

3) Your getPage code is not general enough; from the javadoc for getPage in
IPageStore:
* If ajaxVersionNumber is -1 and versionNumber is specified, the page store
must return the page with the highest ajax version.
* If both versionNumber and ajaxVersionNumber are -1, the page store must
return the last touched (saved) page version with the given id.
Your method of constructing a key object wouldn't work in these situations,
as it would only find exact matches, so getPage would require iterating
through the entire HashMap and looking at every entry.

This issue is the reason why I went for the nested structure I used.  I do
agree that a single storage map would ideally be better, especially as it
makes it easier to manage the number of pages stored, but I'm not sure it is
the most efficient structure for the complex getPage requirements.  By
efficient I mean execution time rather than memory usage.
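To illustrate the lookup problem (all names here are hypothetical, not from either attachment): an exact-match key only works when both version numbers are concrete, so the -1 wildcards force a scan of the candidate entries unless an auxiliary index is kept:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical illustration of the getPage() contract from IPageStore:
// -1 acts as a wildcard, so a plain key lookup cannot satisfy it and we
// fall back to scanning all entries for the page id.
public class PageLookup {
    public record Entry(int pageId, int version, int ajaxVersion) {}

    private final List<Entry> entries = new ArrayList<>();

    public void add(Entry e) { entries.add(e); }

    // version == -1: last stored entry for the id wins;
    // ajaxVersion == -1: highest ajax version for that version wins.
    public Entry get(int pageId, int version, int ajaxVersion) {
        Entry best = null;
        for (Entry e : entries) {
            if (e.pageId() != pageId) continue;
            if (version == -1) { best = e; continue; } // last touched wins
            if (e.version() != version) continue;
            if (ajaxVersion == -1) {
                if (best == null || e.ajaxVersion() > best.ajaxVersion()) best = e;
            } else if (e.ajaxVersion() == ajaxVersion) {
                return e; // exact match
            }
        }
        return best;
    }
}
```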

Thoughts?

Richard



-- 
View this message in context: 
http://www.nabble.com/Terracotta-integration-tp18168616p18313078.html
Sent from the Wicket - Dev mailing list archive at Nabble.com.



Re: Terracotta integration

2008-07-07 Thread richardwilko

Ok,

I have adapted your code in the following ways:

1+2) There is a configurable limit to the number of pages per page map, and
page maps are stored separately; this is to combat the problem I found in
point 2.
3) I have removed the pagemap from the PageKey class.
4) Adapted getPage to fit the API doc.
5) Moved serialization/de-serialization higher up so that we don't need to
store a transient TerracottaPageStore.

What do you think? I'm adding some debug output code and will test it in a
clustered environment and report back.

http://www.nabble.com/file/p18314496/OurTerracottaPageStore.java
OurTerracottaPageStore.java 

I'm still not entirely sure about the containsPage method either; it might
require iterating through the map, because I'm not sure the
DEFAULT_AJAX_VERSION_NUMBER approach will work.

Richard 



-- 
View this message in context: 
http://www.nabble.com/Terracotta-integration-tp18168616p18314496.html
Sent from the Wicket - Dev mailing list archive at Nabble.com.



Re: Terracotta integration

2008-07-07 Thread Stefan Fußenegger

1+2) Well, it will only add pages as long as the session is alive. If a page
isn't used frequently, it will be moved to and later persisted by the TC
server and finally GCed together with its session. Therefore I don't think
deleting old pages is necessary. Or do you have a special use case where
this could be problematic? Maybe a bot crawling thousands of pages could
generate tons of serialized pages? But is this really a problem?

3) Okay, I didn't see that little piece of javadoc. I think an extra
structure keeping track of the most recent versions of pageIds could help
make these searches efficient.

I changed my code:
- one store per PageMapName, making deletes more efficient
- version info stored for all pageIds (a HashMap keyed by pageId), where
VersionInfo has a pointer to the most recent page and the highest
ajaxVersionNumber
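A rough sketch of that VersionInfo idea (class and field names are my own, not taken from the attached file):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: per pageId, keep a pointer to the most recent
// version and the highest ajaxVersionNumber seen so far, so the -1
// wildcard lookups don't need to scan the whole store. Assumes pages
// are inserted in version order.
public class VersionIndex {
    public static class VersionInfo {
        int latestVersion = -1;
        int highestAjaxVersion = -1;
    }

    private final Map<Integer, VersionInfo> byPageId = new HashMap<>();

    public void pageStored(int pageId, int version, int ajaxVersion) {
        VersionInfo info = byPageId.computeIfAbsent(pageId, id -> new VersionInfo());
        info.latestVersion = version; // last insert wins (in-order assumption)
        if (ajaxVersion > info.highestAjaxVersion) {
            info.highestAjaxVersion = ajaxVersion;
        }
    }

    public VersionInfo info(int pageId) { return byPageId.get(pageId); }
}
```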

Comments?

New file: (untested!) 
http://www.nabble.com/file/p18314611/MyTerracottaPageStore.java
MyTerracottaPageStore.java 





--
Stefan Fußenegger
http://talk-on-tech.blogspot.com // looking for a nicer domain ;)
-- 
View this message in context: 
http://www.nabble.com/Terracotta-integration-tp18168616p18314611.html
Sent from the Wicket - Dev mailing list archive at Nabble.com.



Re: Terracotta integration

2008-07-07 Thread richardwilko

I'm still not sure about not limiting the number of pages to keep in the
session; even DiskPageStore has some sort of limit. IMO, not having a limit
exposes us to the possibility of a single malicious user grinding the system
to a halt.  Yes, Terracotta will persist it to disk if need be, but if that
session is in current active use then it will be paging to and from disk all
the time.

I would like to get the opinion of some other people about this.

Also, I don't see how the -1 ajax version can work; the disk-based store
treats the -1 the same as in getPage, where it just looks for the highest
version. In our case it will construct a key with the -1 value in it, i.e.
it will only find a page whose ajax version number is -1.  Since that can't
happen, containsPage won't work.  We could probably use the helper structure
to simplify this, though.

Richard




Re: [vote] release Wicket 1.4-m3

2008-07-07 Thread Janne Hietamäki
>
>
> [x] release 1.4-m3
> [ ] don't release 1.4-m3
>


Re: [vote] release Wicket 1.4-m3

2008-07-07 Thread Frank Bille
On Sun, Jul 6, 2008 at 4:22 PM, Martijn Dashorst <[EMAIL PROTECTED]>
wrote:

> [x] release 1.4-m3
> [ ] don't release 1.4-m3
>

Checked:

* RAT report, looks good
* mvn clean install in zip dist (MAC)
* wicket-examples (win/ie6, win/ie7, win/ff2, mac/ff2)

Frank


Re: Swarm & Wicket 1.4 m2

2008-07-07 Thread Korbinian Bachl - privat

Hi Maurice,

Thanks for the info. I'm using the "use old wicket authentication thingy"
workaround then :)


It would be cool if Wicket 1.4m3 or later were compatible with Swarm...
however, do you think it would be possible for you to do a Wicket 1.4
branch of Swarm/WASP? I'm quite busy, but maybe someone can grab a bit of
time to dig in, as I somehow like the idea of SWARM for Wicket apps
(after reading the doc and your presentation several times), especially as
it's something Wicket really needs.


Is there a reason why Swarm/WASP is not going into core for 1.4?

Best,

Korbinian


Maurice Marrink schrieb:

Sorry, atm WASP/Swarm is not yet compatible with Wicket 1.4.
I am waiting for at least a beta of WASP/Swarm 1.3.1 before I start
working on version 1.4.
I realize 1.3.1 is long overdue and will try to get it out as soon as
possible. In the meantime, sorry for the inconvenience.
I am afraid your only option at this time is to check out the entire
wicket-security project from svn and patch it to compile against
Wicket 1.4.
Again, sorry for the inconvenience.

Maurice

On Sun, Jul 6, 2008 at 8:19 PM, Korbinian Bachl - privat
<[EMAIL PROTECTED]> wrote:

Hello,

I just spent some time changing my app (on Wicket 1.4) to use the SWARM
implementation of WASP.

However, it seems that WASP 1.3.0 as well as the current 1.3-SNAPSHOT
(1.3.1) won't work with 1.4; the error is nearly always the same:


1.3.1:
"
java.lang.NoSuchMethodError:
org.apache.wicket.MetaDataKey.&lt;init&gt;(Ljava/lang/Class;)V

 
org.apache.wicket.security.log.AuthorizationErrorKey.&lt;init&gt;(AuthorizationErrorKey.java:41)

 
org.apache.wicket.security.strategies.WaspAuthorizationStrategy.&lt;init&gt;(WaspAuthorizationStrategy.java:57)

 
org.apache.wicket.security.swarm.strategies.SwarmStrategyFactory.newStrategy(SwarmStrategyFactory.java:80)
   org.apache.wicket.security.WaspSession.&lt;init&gt;(WaspSession.java:48)

 
org.apache.wicket.security.WaspWebApplication.newSession(WaspWebApplication.java:71)
   org.apache.wicket.Session.findOrCreate(Session.java:231)
   org.apache.wicket.Session.findOrCreate(Session.java:214)
   org.apache.wicket.Session.get(Session.java:253)
   org.apache.wicket.RequestCycle.getSession(RequestCycle.java:436)

 
org.apache.wicket.request.AbstractRequestCycleProcessor.resolveHomePageTarget(AbstractRequestCycleProcessor.java:315)

 
org.apache.wicket.protocol.http.WebRequestCycleProcessor.resolve(WebRequestCycleProcessor.java:159)
   org.apache.wicket.RequestCycle.step(RequestCycle.java:1246)
   org.apache.wicket.RequestCycle.steps(RequestCycle.java:1366)
   org.apache.wicket.RequestCycle.request(RequestCycle.java:499)

 org.apache.wicket.protocol.http.WicketFilter.doGet(WicketFilter.java:387)

 org.apache.wicket.protocol.http.WicketFilter.doFilter(WicketFilter.java:199)

"

and 1.3.0:

"
java.lang.NoSuchMethodError:
org.apache.wicket.MetaDataKey.&lt;init&gt;(Ljava/lang/Class;)V
   org.apache.wicket.security.checks.WaspKey.&lt;init&gt;(WaspKey.java:41)

 
org.apache.wicket.security.components.SecureComponentHelper.getSecurityCheck(SecureComponentHelper.java:55)

 
org.apache.wicket.security.strategies.WaspAuthorizationStrategy.getSecurityCheck(WaspAuthorizationStrategy.java:185)

 
org.apache.wicket.security.strategies.WaspAuthorizationStrategy.isActionAuthorized(WaspAuthorizationStrategy.java:159)
   org.apache.wicket.Component.isActionAuthorized(Component.java:1983)
   org.apache.wicket.Page.renderPage(Page.java:855)

 
org.apache.wicket.request.target.component.BookmarkablePageRequestTarget.respond(BookmarkablePageRequestTarget.java:241)

 
org.apache.wicket.request.AbstractRequestCycleProcessor.respond(AbstractRequestCycleProcessor.java:104)

 org.apache.wicket.RequestCycle.processEventsAndRespond(RequestCycle.java:1194)
   org.apache.wicket.RequestCycle.step(RequestCycle.java:1265)
   org.apache.wicket.RequestCycle.steps(RequestCycle.java:1366)
   org.apache.wicket.RequestCycle.request(RequestCycle.java:499)

 org.apache.wicket.protocol.http.WicketFilter.doGet(WicketFilter.java:387)

 org.apache.wicket.protocol.http.WicketFilter.doFilter(WicketFilter.java:199)

"

Is there any workaround Maurice?

Best,

Korbinian



Re: Terracotta integration

2008-07-07 Thread Stefan Fußenegger

Ok, I now used a LinkedHashMap and a limit of 1000 pages per PageMap. This
should give sufficient protection, and eviction should rarely happen.
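The capped LinkedHashMap can be sketched like this (a generic illustration of the technique, not the attached implementation):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch: cap the number of stored pages per PageMap and
// let LinkedHashMap evict the oldest entry automatically via
// removeEldestEntry, so the eldest page is discarded first.
public class BoundedPageMap<K, V> extends LinkedHashMap<K, V> {
    private final int maxPages;

    public BoundedPageMap(int maxPages) {
        super(16, 0.75f, false); // false = iterate in insertion order
        this.maxPages = maxPages;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxPages; // drop the oldest page once over the limit
    }
}
```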

You were right about the -1 ajaxVersionNumber. I fixed that.

I also fixed the reference to the highestAjaxVersion, as there needs to be
such a reference for each version, not only for each pageId. There is now an
additional HashMap. So finally this implementation requires two HashMaps and
a LinkedHashMap per PageMap.

For this implementation, I assumed that pages are inserted in order
(according to their versions). Could somebody confirm that? Otherwise, the
map pointing to the highest ajaxVersion would need to be updated when the
currently highest ajaxVersion is deleted due to an exceeded max-pages limit
(one would have to search for a lower ajaxVersion and point to that page).
Other than that, I'd say we are quite close to the DiskPageStore
implementation (apart from not being asynchronous and not implementing
ISerializationAwarePageStore, which is only used for Wicket's session
clustering, right?)

regards

http://www.nabble.com/file/p18318100/MyTerracottaPageStore.java
MyTerracottaPageStore.java 



Re: Terracotta integration

2008-07-07 Thread richardwilko

I'm not sure pages are always inserted in version-number order; for example,
if you go back to a previous page and start doing something on it again, it
will start inserting pages with a lower version number (I think, anyway).

I also have a modified version; it seems a bit simpler than yours, and it
will work no matter when pages are inserted or deleted.  It uses an
additional TreeSet of PageKeys; using the TreeSet's ordering it is easy to
quickly find specific versions, or the highest ajax version for a normal
version, or to check whether a version exists.

I also added some debug code, and made the number of pages limit optional.

See what you think

http://www.nabble.com/file/p18318811/OurTerracottaPageStore.java
OurTerracottaPageStore.java 

Richard
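To illustrate the TreeSet idea (this is a rough sketch with a hypothetical PageKey holding pageId/version/ajaxVersion fields, not the attached OurTerracottaPageStore code): with the keys ordered by (pageId, version, ajaxVersion), the highest ajax version for a given page version is just the greatest key below the start of the next version, with no full scan needed.

```java
import java.util.NavigableSet;
import java.util.TreeSet;

// Sketch only: hypothetical PageKey, ordered by pageId, then version,
// then ajaxVersion, so a TreeSet can answer "highest ajax version for
// (pageId, version)" with a single lower() lookup.
public class PageKeyIndex {

    static final class PageKey implements Comparable<PageKey> {
        final int pageId;
        final int versionNumber;
        final int ajaxVersionNumber;

        PageKey(int pageId, int versionNumber, int ajaxVersionNumber) {
            this.pageId = pageId;
            this.versionNumber = versionNumber;
            this.ajaxVersionNumber = ajaxVersionNumber;
        }

        @Override
        public int compareTo(PageKey o) {
            if (pageId != o.pageId) return Integer.compare(pageId, o.pageId);
            if (versionNumber != o.versionNumber)
                return Integer.compare(versionNumber, o.versionNumber);
            return Integer.compare(ajaxVersionNumber, o.ajaxVersionNumber);
        }

        @Override
        public String toString() {
            return pageId + ":" + versionNumber + ":" + ajaxVersionNumber;
        }
    }

    private final NavigableSet<PageKey> keys = new TreeSet<PageKey>();

    void add(int pageId, int version, int ajaxVersion) {
        keys.add(new PageKey(pageId, version, ajaxVersion));
    }

    // Greatest key strictly below the first key of the next version, i.e.
    // the highest ajax version stored for (pageId, version); null if none.
    PageKey highestAjaxVersion(int pageId, int version) {
        return keys.lower(new PageKey(pageId, version + 1, Integer.MIN_VALUE));
    }

    public static void main(String[] args) {
        PageKeyIndex idx = new PageKeyIndex();
        idx.add(1, 0, 0);
        idx.add(1, 0, 1);
        idx.add(1, 0, 2);
        idx.add(1, 1, 0);
        System.out.println(idx.highestAjaxVersion(1, 0)); // prints 1:0:2
    }
}
```

The same ordering also answers the "does this exact version exist" and "find a specific version" queries via contains() and subSet() views.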




Stefan Fußenegger wrote:
> 
> Ok, i now used a LinkedHashMap and a limit of 1000 pages per PageMap. This
> should give sufficient protection and rarely happen.
> 
> You were right with the -1 ajaxVersionNumber. I fixed that.
> 
> I also fixed the reference to the highestAjaxVersion as there needs to be
> such a reference for each version, not only each pageId. There is now an
> additional HashMap. So finally this implementation requires two HashMaps
> and a LinkedHashMap per PageMap.
> 
> For this implementation, I assumed that pages are inserted in order
> (according to their versions). Could somebody confirm that? Otherwise, the
> map pointing to the highest ajaxVersion would need to be updated when the
> currently highest ajaxVersion is deleted due to an exceeded max pages
> limit (one would have to search for a lower ajaxVersion and point to that
> page). Otherwise, I'd say we are quite close to the DiskPageStore
> implementation (not being asynchronous and not implementing
> ISerializationAwarePageStore - which is only used for Wicket's session
> clustering, right?)
> 
> regards
> 
>  http://www.nabble.com/file/p18318100/MyTerracottaPageStore.java
> MyTerracottaPageStore.java 
> 
> 
> richardwilko wrote:
>> 
>> I'm still not sure about not limiting the number of pages to keep in
>> session, even DiskPageStore has some sort of limit, imo not having a
>> limit exposes us to the possibility of a single malicious user grinding
>> the system to a halt.  Yes terracotta will persist it to disk if needs
>> be, but if that session is in current active use then it will be paging
>> to and from disk all the time.
>> 
>> I would like to get the opinion of some other people about this.
>> 
>> Also I don't see how the -1 ajax version can work; in disk based
>> store it treats the -1 the same as in getPage, where it just looks for
>> the highest version, in our case it will construct a key with the -1
>> value in it, i.e. it will only find the page where ajax version number is
>> -1.  Since this can't happen, containsPage won't work.  We could probably
>> use the helper structure thing to simplify this though.
>> 
>> Richard
>> 
>> 
>> 
>> Stefan Fußenegger wrote:
>>> 
>>> 1+2) well, it will only add pages as long as the session is alive. if a
>>> page isn't used frequently it will be moved to and later persisted by
>>> the TC server and finally GCed together with its session. therefore i
>>> don't think deleting old pages is necessary. or do you have a special
>>> use case where this could be problematic? Maybe a bot crawling thousands
>>> of pages could generate tons of serialized pages? But is this really a
>>> problem?
>>> 
>>> 3) okay, didn't see that little piece of javadoc. I think an extra
>>> structure keeping track of most recent versions of pageIds could help to
>>> make these searches efficient.
>>> 
>>> I changed my code:
>>> - one store per PageMapName, making deletes more efficient
>>> - version info stored for all pageIds (HashMap)
>>> where VersionInfo has a pointer to the most recent page and highest
>>> ajaxVersionNumber
>>> 
>>> Comments?
>>> 
>>> New file: (untested!) 
>>> http://www.nabble.com/file/p18314611/MyTerracottaPageStore.java
>>> MyTerracottaPageStore.java 
>>> 
>>> 
>>> 
>>> richardwilko wrote:
 
 Hi Stefan,
 
 Looking through your code I see a couple of issues:
 
 1) There is no limit on the number of pages stored in the pagemap,
 pages could get added forever.  I feel there needs to be a way to limit
 the number of pages stored, with oldest ones discarded first.  This is
 how DiskPageStore works.
 
 2) Following on from point 1, a HashMap does not keep insertion order
 so it is not possible to remove the oldest ones easily.  A simple
 change to LinkedHashMap would solve this and make point 1 easy to
 implement.  However storing all the pagemaps together does mean that
 the most recent pages from one pagemap could get removed due to high
 use of another pagemap.  In this case when the user goes back to the
 other pagemap he/she will encounter an exception.
 
 3) Your getPage code is not general enough; from the javadocs for
 getPage in IPageStore:
 * If

Re: Swarm & Wicket 1.4 m2

2008-07-07 Thread Maurice Marrink
On Mon, Jul 7, 2008 at 3:39 PM, Korbinian Bachl - privat
<[EMAIL PROTECTED]> wrote:
> Hi Maurice,
>
> thx for the info. I'm using the "use old wicket authentication thingy"
> workaround then :)
>
> Would be cool if wicket 1.4m3 or later would be compatible with swarm...
> however, do you think it would be possible for you to do a wicket 1.4 branch
> of swarm/wasp then? I'm quite busy, but maybe someone can grab a bit of time
> to dig in a bit, as I somehow like the idea - after reading the doc and your
> presentation several times - of SWARM for wicket apps (especially as it's
> something wicket really needs).

I know, I'm actually a bit pissed at myself for not having it done
yet. Really need to make some time for this.

>
> Is there a reason why swarm/wasp is not going into core for 1.4?

No particular reason other than that we are trying to make the
migration from wicket 1.3 to wicket 1.4 as small as possible.
There has not been an official vote on this yet but the plan is to
integrate it in wicket 1.5 which we will begin working on shortly
after 1.4 has been released.

Maurice

>
> Best,
>
> Korbinian
>
>
> Maurice Marrink schrieb:
>>
>> Sorry, atm wasp/swarm is not yet compatible with wicket 1.4.
>> I am waiting for at least a beta of wasp/swarm 1.3.1 before i start
>> working on version 1.4.
>> I realize 1.3.1 is long overdue and try to get it out as soon as possible.
>> In the meantime sorry for the inconvenience.
>> I am afraid your only option at this time is to check out the entire
>> wicket security project from svn and patch it to compile against
>> wicket 1.4
>> Again sorry for the inconvenience.
>>
>> Maurice
>>
>> On Sun, Jul 6, 2008 at 8:19 PM, Korbinian Bachl - privat
>> <[EMAIL PROTECTED]> wrote:
>>>
>>> Hello,
>>>
>>> I just spend some time to change my app (on wicket 1.4) to use the SWARM
>>> implementation of WASP.
>>>
>>> However, it seems that WASP 1.3.0 as well as the current 1.3-Snapshot
>>> (1.3.1) wont work with 1.4; The error is nearly allways the same:
>>>
>>>
>>> 1.3.1:
>>> "
>>> java.lang.NoSuchMethodError:
>>> org.apache.wicket.MetaDataKey.<init>(Ljava/lang/Class;)V
>>>   org.apache.wicket.security.log.AuthorizationErrorKey.(AuthorizationErrorKey.java:41)
>>>   org.apache.wicket.security.strategies.WaspAuthorizationStrategy.(WaspAuthorizationStrategy.java:57)
>>>   org.apache.wicket.security.swarm.strategies.SwarmStrategyFactory.newStrategy(SwarmStrategyFactory.java:80)
>>>   org.apache.wicket.security.WaspSession.(WaspSession.java:48)
>>>   org.apache.wicket.security.WaspWebApplication.newSession(WaspWebApplication.java:71)
>>>   org.apache.wicket.Session.findOrCreate(Session.java:231)
>>>   org.apache.wicket.Session.findOrCreate(Session.java:214)
>>>   org.apache.wicket.Session.get(Session.java:253)
>>>   org.apache.wicket.RequestCycle.getSession(RequestCycle.java:436)
>>>   org.apache.wicket.request.AbstractRequestCycleProcessor.resolveHomePageTarget(AbstractRequestCycleProcessor.java:315)
>>>   org.apache.wicket.protocol.http.WebRequestCycleProcessor.resolve(WebRequestCycleProcessor.java:159)
>>>   org.apache.wicket.RequestCycle.step(RequestCycle.java:1246)
>>>   org.apache.wicket.RequestCycle.steps(RequestCycle.java:1366)
>>>   org.apache.wicket.RequestCycle.request(RequestCycle.java:499)
>>>   org.apache.wicket.protocol.http.WicketFilter.doGet(WicketFilter.java:387)
>>>   org.apache.wicket.protocol.http.WicketFilter.doFilter(WicketFilter.java:199)
>>> "
>>>
>>> and 1.3.0:
>>>
>>> "
>>> java.lang.NoSuchMethodError:
>>> org.apache.wicket.MetaDataKey.<init>(Ljava/lang/Class;)V
>>>   org.apache.wicket.security.checks.WaspKey.(WaspKey.java:41)
>>>   org.apache.wicket.security.components.SecureComponentHelper.getSecurityCheck(SecureComponentHelper.java:55)
>>>   org.apache.wicket.security.strategies.WaspAuthorizationStrategy.getSecurityCheck(WaspAuthorizationStrategy.java:185)
>>>   org.apache.wicket.security.strategies.WaspAuthorizationStrategy.isActionAuthorized(WaspAuthorizationStrategy.java:159)
>>>   org.apache.wicket.Component.isActionAuthorized(Component.java:1983)
>>>   org.apache.wicket.Page.renderPage(Page.java:855)
>>>   org.apache.wicket.request.target.component.BookmarkablePageRequestTarget.respond(BookmarkablePageRequestTarget.java:241)
>>>   org.apache.wicket.request.AbstractRequestCycleProcessor.respond(AbstractRequestCycleProcessor.java:104)
>>>   org.apache.wicket.RequestCycle.processEventsAndRespond(RequestCycle.java:1194)
>>>   org.apache.wicket.RequestCycle.step(RequestCycle.java:1265)
>>>   org.apache.wicket.RequestCycle.steps(RequestCycle.java:1366)
>>>   org.apache.wicket.RequestCycle.request(RequestCycle.java:499)
>>>   org.apache.wicket.protocol.http.WicketFilter.doGet(WicketFilter.java:387)
>>>   org.apache.wicket.protocol.http.

Re: Swarm & Wicket 1.4 m2

2008-07-07 Thread James Carman
On Mon, Jul 7, 2008 at 11:38 AM, Maurice Marrink <[EMAIL PROTECTED]> wrote:

> No particular reason other than that we are trying to make the
> migration from wicket 1.3 to wicket 1.4 as small as possible.
> There has not been an official vote on this yet but the plan is to
> integrate it in wicket 1.5 which we will begin working on shortly
> after 1.4 has been released.

I'm not trying to flame here or anything, but from what I've read of
Swarm/Wasp, it's quite complicated and that would go against the
spirit of Wicket, IMHO.  Also, it uses external files for
configuration.  Again, this goes against the spirit of Wicket.
Perhaps if there were a programmatic way of configuring everything?
Again, I've never used it, but I've seen responses on the lists about
how to do things and it just scared me away from it.  That's just my
$0.02.  To be fair, maybe I should play with it a bit to see it for
myself, but I haven't had the cycles.  Sorry.


Re: Swarm & Wicket 1.4 m2

2008-07-07 Thread Korbinian Bachl - privat

Hi James,

that's no flaming - that's just human :) - I mean, when I first saw it I 
thought "WTF!?! - this is a big mess..." but after spending some time on 
it, it turned out that it's really a cool thing.


IMHO the biggest mess Maurice made was the naming of its parts... I mean 
WASP, SWARM, HIVE, Principal & co: Man! This sounds like some sort of a 
B-class horror movie... but soon after you understand the meanings in 
the context you see how neat and cool this is.


I mean, security is usually never beloved nor easy - I know of an SAP 
system having more than 1000 defined roles (!) in over 10 levels of 
usage each - but it's just necessary. If you look at the current one we 
have with Wicket, it's best described as: easy - and somewhat 
useless in bigger contexts. Just imagine if every user may do everything 
once he's logged in - we authenticate but don't really authorize, nor can we 
easily secure information or parts of it.


I mean, with Swarm you get some really cool things like SecureModels or 
the possibility to take over security from other places/apps. This of 
course is not so easy anymore, but it is powerful and necessary in many 
applications.


If you want to get a good grip on it and what it can do just look this 
presentation:


http://www.slideshare.net/mrmean/wicket-security-presentation/

and grab the examples: 
http://wicketstuff.org/confluence/display/STUFFWIKI/Wicket-Security+Examples


(to see them in action: http://wicketstuff.org/wicketsecurity/ )

Best,

Korbinian


James Carman schrieb:

On Mon, Jul 7, 2008 at 11:38 AM, Maurice Marrink <[EMAIL PROTECTED]> wrote:


No particular reason other than that we are trying to make the
migration from wicket 1.3 to wicket 1.4 as small as possible.
There has not been an official vote on this yet but the plan is to
integrate it in wicket 1.5 which we will begin working on shortly
after 1.4 has been released.


I'm not trying to flame here or anything, but from what I've read of
Swarm/Wasp, it's quite complicated and that would go against the
spirit of Wicket, IMHO.  Also, it uses external files for
configuration.  Again, this goes against the spirit of Wicket.
Perhaps if there were a programmatic way of configuring everything?
Again, I've never used it, but I've seen responses on the lists about
how to do things and it just scared me away from it.  That's just my
$0.02.  To be fair, maybe I should play with it a bit to see it for
myself, but I haven't had the cycles.  Sorry.


Re: Swarm & Wicket 1.4 m2

2008-07-07 Thread Guðmundur Bjarni Ólafsson
On Mon, Jul 7, 2008 at 5:50 PM, James Carman <[EMAIL PROTECTED]>
wrote:

> I'm not trying to flame here or anything, but from what I've read of
> Swarm/Wasp, it's quite complicated and that would go against the
> spirit of Wicket, IMHO.  Also, it uses external files for
> configuration.  Again, this goes against the spirit of Wicket.
> Perhaps if there were a programmatic way of configuring everything?
> Again, I've never used it, but I've seen responses on the lists about
> how to do things and it just scared me away from it.  That's just my
> $0.02.  To be fair, maybe I should play with it a bit to see it for
> myself, but I haven't had the cycles.  Sorry.
>


I cooked up a small API that does just this. Simply put, it's just an
implementation of a HiveFactory which acts as a Builder. I plan to publish
it some time in the near future, but there are several small issues that I'd
like to solve first.

Right now the usage of the API looks like this:

BuilderHiveFactory hiveFactory = new BuilderHiveFactory();
Principal principal = new SimplePrincipal("whatever");
Set<Class<? extends WaspAction>> actions = ...;
actions.add(Inherit.class);
actions.add(Render.class);
actions.add(Enable.class);

hiveFactory.addComponentPermission(principal, MySecurePage.class, actions);

I would love to get your thoughts and input on this kind of HiveFactory.

regards,
Guðmundur Bjarni


Re: Terracotta integration

2008-07-07 Thread Stefan Fußenegger

First of all, I like the new name ;)

- Using a TreeSet with subsets is a great idea! 

- I wouldn't use an extra class just to wrap a HashMap of PageStores. I would
just put them into the plain session. But finally, this is just a matter of
taste. I even think that this class lacks proper synchronization. Doesn't
Terracotta complain about modifying an instance outside of a transaction?
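For what it's worth, the ReentrantReadWriteLock approach from the earlier MyTerracottaPageStore can be reduced to this minimal sketch (a generic wrapper, not the attached code): concurrent readers proceed in parallel, writers get exclusive access, and under Terracotta the lock()/unlock() pairs would also demarcate the clustered transaction.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Sketch only: a map guarded by a ReentrantReadWriteLock. Reads share
// the read lock; writes take the exclusive write lock.
public class GuardedPageMap<K, V> {
    private final Map<K, V> pages = new HashMap<K, V>();
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

    public V get(K key) {
        lock.readLock().lock();
        try {
            return pages.get(key);
        } finally {
            lock.readLock().unlock();
        }
    }

    public void put(K key, V value) {
        lock.writeLock().lock();
        try {
            pages.put(key, value);
        } finally {
            lock.writeLock().unlock();
        }
    }

    public static void main(String[] args) {
        GuardedPageMap<String, Integer> map = new GuardedPageMap<String, Integer>();
        map.put("page:1:0", 42);
        System.out.println(map.get("page:1:0")); // prints 42
    }
}
```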

- The way getPage(...) with versionNumber -1 is implemented isn't really
nice. Too bad that LinkedHashMap doesn't maintain a pointer to the end of
the list although it is a doubly linked one :-( that would make the task much
faster. Another possibility would be to make a TreeMap out
of the current TreeSet (which is backed by a TreeMap anyway). The integer
value would be a counter that indicates the insertion order. One would then
only have to iterate over a subMap() of the pages containing all PageKeys
with the pageId in question (implemented but not tested yet - not gonna
happen today).

http://www.nabble.com/file/p18321680/OurTerracottaPageStore.java
OurTerracottaPageStore.java 
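The TreeMap-with-insertion-counter idea can be sketched like this (a toy version, not the attached file: keys are encoded as non-negative longs for brevity, whereas the real store would use PageKey objects): the last-touched entry for a pageId is found by scanning only that pageId's subMap() instead of the whole map.

```java
import java.util.Map;
import java.util.TreeMap;

// Sketch only: TreeMap ordered by (pageId, version, ajaxVersion), with an
// insertion counter as the value, so "last touched page for pageId" only
// scans that pageId's key range.
public class OrderedPageIndex {
    // Key encoded as a long for brevity; assumes all three parts >= 0.
    private final TreeMap<Long, Integer> pages = new TreeMap<Long, Integer>();
    private int insertionCounter = 0;

    static long key(int pageId, int version, int ajaxVersion) {
        return ((long) pageId << 32) | ((long) version << 16) | ajaxVersion;
    }

    public void touch(int pageId, int version, int ajaxVersion) {
        pages.put(key(pageId, version, ajaxVersion), insertionCounter++);
    }

    // Last-touched key for pageId: the entry with the highest counter in the
    // half-open range [(pageId,0,0), (pageId+1,0,0)); null if none stored.
    public Long lastTouched(int pageId) {
        Long best = null;
        int bestCounter = -1;
        for (Map.Entry<Long, Integer> e
                : pages.subMap(key(pageId, 0, 0), key(pageId + 1, 0, 0)).entrySet()) {
            if (e.getValue() > bestCounter) {
                bestCounter = e.getValue();
                best = e.getKey();
            }
        }
        return best;
    }

    public static void main(String[] args) {
        OrderedPageIndex idx = new OrderedPageIndex();
        idx.touch(1, 0, 0);
        idx.touch(1, 1, 0);
        idx.touch(1, 0, 1); // user went back: a lower version is touched last
        System.out.println(idx.lastTouched(1).equals(key(1, 0, 1))); // prints true
    }
}
```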

I think we are quite close to something really cool ;) 
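As an aside, the per-PageMap page limit discussed in this thread (with oldest pages discarded first) maps naturally onto LinkedHashMap's eviction hook; a minimal sketch, not the attached implementation:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch only: a LinkedHashMap that evicts its eldest entry once the
// configured limit is exceeded, so the oldest stored page goes first.
public class BoundedPageMap<K, V> extends LinkedHashMap<K, V> {
    private final int maxPages;

    public BoundedPageMap(int maxPages) {
        this.maxPages = maxPages;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Called by put(); returning true removes the oldest entry.
        return size() > maxPages;
    }

    public static void main(String[] args) {
        BoundedPageMap<Integer, String> pages = new BoundedPageMap<Integer, String>(2);
        pages.put(1, "page 1");
        pages.put(2, "page 2");
        pages.put(3, "page 3"); // evicts page 1
        System.out.println(pages.keySet()); // prints [2, 3]
    }
}
```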



richardwilko wrote:
> 
> I'm not sure that pages are always inserted in version-number order; for
> example, if you go back to a previous page and start doing something on it
> again, it will start inserting pages with a lower version number (I think,
> anyway).
> 
> I also have a modified version; it seems a bit simpler than yours and it
> will work no matter when pages are inserted or deleted.  It uses an
> additional TreeSet of the PageKeys; using the TreeSet's ordering it is
> easy to quickly find specific versions, or the highest ajax version for a
> normal version, or to find whether a version exists.
> 
> I also added some debug code, and made the number of pages limit optional.
> 
> See what you think
> 
>  http://www.nabble.com/file/p18318811/OurTerracottaPageStore.java
> OurTerracottaPageStore.java 
> 
> Richard
> 
> 
> 
> 
> Stefan Fußenegger wrote:
>> 
>> Ok, i now used a LinkedHashMap and a limit of 1000 pages per PageMap.
>> This should give sufficient protection and rarely happen.
>> 
>> You were right with the -1 ajaxVersionNumber. I fixed that.
>> 
>> I also fixed the reference to the highestAjaxVersion as there needs to be
>> such a reference for each version, not only each pageId. There is now an
>> additional HashMap. So finally this implementation requires two HashMaps
>> and a LinkedHashMap per PageMap.
>> 
>> For this implementation, I assumed that pages are inserted in order
>> (according to their versions). Could somebody confirm that? Otherwise,
>> the map pointing to the highest ajaxVersion would need to be updated when
>> the currently highest ajaxVersion is deleted due to an exceeded max pages
>> limit (one would have to search for a lower ajaxVersion and point to that
>> page). Otherwise, I'd say we are quite close to the DiskPageStore
>> implementation (not being asynchronous and not implementing
>> ISerializationAwarePageStore - which is only used for Wicket's session
>> clustering, right?)
>> 
>> regards
>> 
>>  http://www.nabble.com/file/p18318100/MyTerracottaPageStore.java
>> MyTerracottaPageStore.java 
>> 
>> 
>> richardwilko wrote:
>>> 
>>> I'm still not sure about not limiting the number of pages to keep in
>>> session, even DiskPageStore has some sort of limit, imo not having a
>>> limit exposes us to the possibility of a single malicious user grinding
>>> the system to a halt.  Yes terracotta will persist it to disk if needs
>>> be, but if that session is in current active use then it will be paging
>>> to and from disk all the time.
>>> 
>>> I would like to get the opinion of some other people about this.
>>> 
>>> Also I don't see how the -1 ajax version can work; in disk based
>>> store it treats the -1 the same as in getPage, where it just looks for
>>> the highest version, in our case it will construct a key with the -1
>>> value in it, i.e. it will only find the page where ajax version number
>>> is -1.  Since this can't happen, containsPage won't work.  We could
>>> probably use the helper structure thing to simplify this though.
>>> 
>>> Richard
>>> 
>>> 
>>> 
>>> Stefan Fußenegger wrote:
 
 1+2) well, it will only add pages as long as the session is alive. if a
 page isn't used frequently it will be moved to and later persisted by
 the TC server and finally GCed together with its session. therefore i
 don't think deleting old pages is necessary. or do you have a special
 use case where this could be problematic? Maybe a bot crawling
 thousands of pages could generate tons of serialized pages? But is this
 really a problem?
 
 3) okay, didn't see that little piece of javadoc. I think an extra
 structure keeping track of most recent versions of pageIds could help
 to make these searches efficient.
 
 I changed my code:
 - one store per PageMapName, making deletes more efficient
 - version info store

Re: Swarm & Wicket 1.4 m2

2008-07-07 Thread James Carman
On Mon, Jul 7, 2008 at 12:01 PM, Guðmundur Bjarni Ólafsson
<[EMAIL PROTECTED]> wrote:
> I cooked up a small API that does just this. Simply put, it's just an
> implementation of a HiveFactory which acts as a Builder. I plan to publish
> some time in the near future but there are several small issues that I'd
> like to solve first.
>
> Right now the usage of the API looks like this:
>
> BuilderHiveFactory hiveFactory = new BuilderHiveFactory();
> Principal principal = new SimplePrincipal("whatever");
> Set<Class<? extends WaspAction>> actions = ...;
> actions.add(Inherit.class);
> actions.add(Render.class);
> actions.add(Enable.class);
>
> hiveFactory.addComponentPermission(principal, MySecurePage.class, actions);
>
> I would love to get your thoughts and input on this kind of HiveFactory.

Well, that's exactly what I'm talking about!  I really hated putting
stuff into external files when one of the biggest selling points about
Wicket was the fact that there are no configuration files necessary
(aside from the simple hook in the web.xml file of course).  I'm not
familiar enough with the configuration stuff yet, but the idea is just
what I'm looking for.


Re: Swarm & Wicket 1.4 m2

2008-07-07 Thread Maurice Marrink
On Mon, Jul 7, 2008 at 6:01 PM, Guðmundur Bjarni Ólafsson
<[EMAIL PROTECTED]> wrote:
> On Mon, Jul 7, 2008 at 5:50 PM, James Carman <[EMAIL PROTECTED]>
> wrote:
>
>> I'm not trying to flame here or anything, but from what I've read of
>> Swarm/Wasp, it's quite complicated and that would go against the
>> spirit of Wicket, IMHO.  Also, it uses external files for
>> configuration.  Again, this goes against the spirit of Wicket.
>> Perhaps if there were a programmatic way of configuring everything?
>> Again, I've never used it, but I've seen responses on the lists about
>> how to do things and it just scared me away from it.  That's just my
>> $0.02.  To be fair, maybe I should play with it a bit to see it for
>> myself, but I haven't had the cycles.  Sorry.
>>
>
>
> I cooked up a small API that does just this. Simply put, it's just an
> implementation of a HiveFactory which acts as a Builder. I plan to publish
> some time in the near future but there are several small issues that I'd
> like to solve first.
>
> Right now the usage of the API looks like this:
>
> BuilderHiveFactory hiveFactory = new BuilderHiveFactory();
> Principal principal = new SimplePrincipal("whatever");
> Set<Class<? extends WaspAction>> actions = ...;
> actions.add(Inherit.class);
> actions.add(Render.class);
> actions.add(Enable.class);
>
> hiveFactory.addComponentPermission(principal, MySecurePage.class, actions);
>
> I would love to get your thoughts and input on this kind of HiveFactory.

Sounds like a usable contribution :)
The whole idea of Wasp, and Swarm for that matter, is that if you
don't like a part of it you can easily implement it yourself.
Configuration of all the permissions is only a first step.
Even I don't use the default policy file reader; we do use policy
files, but I added some extra scripting support (haven't decided yet if
it is something that is really re-usable by others).
So if anyone has anything that he/she thinks is reusable, just let me know.

Maurice

>
> regards,
> Guðmundur Bjarni
>