Re: Efficiency of 1.5 MountMapper weighted/matching algorithm

2011-10-18 Thread Erik van Oosten

 I realize it may not be a hotspot but, intuitively at least, it sounds like 
 your new caching improvement will help reduce CPU load. It should also reduce 
 the workload of the garbage collector by not having, in the above example, 
 2800 objects allocated and then dereferenced with every page render.


Don't trust intuition; measure. CPUs and garbage collectors behave differently 
than you might think.

Introducing caching means introducing thread collaboration. Any synchronisation 
on modern CPUs quickly takes more time than any business processing you can 
program. In addition, short-lived objects cost almost nothing; the GC can easily 
handle millions of them per second.
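
As an illustration of that synchronisation cost (a hypothetical sketch, not code from this thread): a cache guarded by a single coarse lock serialises every render thread, even on cache hits:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical illustration: a cache behind one coarse lock makes every
// caller queue on the same monitor, even for cache hits, so on a busy
// server the contention can outweigh the cost of just recomputing.
public class CoarseLockCache {
    private final Map<String, String> map = new HashMap<>();

    // synchronized: only one render thread at a time can touch the cache
    public synchronized String get(String key) {
        return map.computeIfAbsent(key, k -> k.toUpperCase()); // stand-in "work"
    }
}
```

A lock-free structure such as `ConcurrentHashMap.computeIfAbsent` avoids the coarse lock, but the advice to measure first still applies.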

In other words: don't add caching unless you are willing to measure the 
throughput gains (on the intended target systems).

Regards,
 Erik.

--
Erik van Oosten
http://day-to-day-stuff.blogspot.com

On 17 Oct 2011, at 22:06, Chris Colman wrote:

 We have already discussed this problem before.
 I agree that caching will improve the performance and I'll try to
 implement it soon.
 
 Cool!
 
 Topicus' (Martijn's day job) biggest application has ~700 mounted
 pages, and this code is not a hotspot for them; that's why no one has spent
 time optimizing it so far.
 
 Wow, that's big! If you had a variety of main and sidebar menus on each page 
 with links to, say, 40 or so bookmarkable pages, each page render would build 
 40 weight/mount pair collections, each with 700 entries - and then throw them 
 away after each bookmarkable page link has been created.
 
 I realize it may not be a hotspot but, intuitively at least, it sounds like 
 your new caching improvement will help reduce CPU load. It should also reduce 
 the workload of the garbage collector by not having, in the above example, 
 2800 objects allocated and then dereferenced with every page render.
 
 
 -
 To unsubscribe, e-mail: users-unsubscr...@wicket.apache.org
 For additional commands, e-mail: users-h...@wicket.apache.org
 



RE: Efficiency of 1.5 MountMapper weighted/matching algorithm

2011-10-18 Thread Chris Colman
I will definitely try to get up to speed with WicketTester and see if
it's possible to run it in a multi-threaded way to simulate a massive
load test.

Any suggestions as to which profiler to use while it's running? It would
need to be one that can measure the CPU usage of the GC, not just the
application.

I agree: having some metrics is preferable, but in the given example, if
a site is serving 100 pages per second, that makes 280,000 objects
created and thrown away every second. I know that the GC could *probably*
handle it, but even in a C++ program with no GC I still wouldn't want to
perform 280,000 memory allocations (new/malloc) per second if there was
an easy way to avoid it. Even if the GC could handle it, I'd much rather
devote the CPU to other tasks like rendering or performing database
queries.
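
The arithmetic behind that figure, as a trivial sketch (the numbers are the ones quoted in the thread, not measurements):

```java
// Back-of-envelope check of the allocation-rate figure quoted above.
public class AllocationEstimate {
    // objects allocated per second = pages served per second * objects per render
    static long objectsPerSecond(long pagesPerSecond, long objectsPerPage) {
        return pagesPerSecond * objectsPerPage;
    }

    public static void main(String[] args) {
        // 100 pages/s with 2,800 short-lived entries per render (thread's example)
        System.out.println(objectsPerSecond(100, 2_800)); // prints 280000
    }
}
```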

-Original Message-
From: Erik van Oosten [mailto:e.vanoos...@grons.nl]
Sent: Tuesday, 18 October 2011 6:06 PM
To: users@wicket.apache.org
Subject: Re: Efficiency of 1.5 MountMapper weighted/matching algorithm


 I realize it may not be a hotspot but, intuitively at least, it sounds like
 your new caching improvement will help reduce CPU load. It should also reduce
 the workload of the garbage collector by not having, in the above example,
 2800 objects allocated and then dereferenced with every page render.


Don't trust intuition; measure. CPUs and garbage collectors behave
differently than you might think.

Introducing caching means introducing thread collaboration. Any
synchronisation on modern CPUs quickly takes more time than any business
processing you can program. In addition, short-lived objects cost almost
nothing; the GC can easily handle millions of them per second.


In other words: don't add caching unless you are willing to measure the
throughput gains (on the intended target systems).

Regards,
 Erik.

--
Erik van Oosten
http://day-to-day-stuff.blogspot.com


Re: Efficiency of 1.5 MountMapper weighted/matching algorithm

2011-10-17 Thread Martin Grigorov
We have already discussed this problem before.
I agree that caching will improve the performance and I'll try to
implement it soon.

Topicus' (Martijn's day job) biggest application has ~700 mounted
pages, and this code is not a hotspot for them; that's why no one has
spent time optimizing it so far.






-- 
Martin Grigorov
jWeekend
Training, Consulting, Development
http://jWeekend.com




RE: Efficiency of 1.5 MountMapper weighted/matching algorithm

2011-10-17 Thread Chris Colman
We have already discussed this problem before.
I agree that caching will improve the performance and I'll try to
implement it soon.

Cool!

Topicus' (Martijn's day job) biggest application has ~700 mounted
pages, and this code is not a hotspot for them; that's why no one has
spent time optimizing it so far.

Wow, that's big! If you had a variety of main and sidebar menus on each page 
with links to, say, 40 or so bookmarkable pages, each page render would build 
40 weight/mount pair collections, each with 700 entries - and then throw them 
away after each bookmarkable page link has been created.

I realize it may not be a hotspot but, intuitively at least, it sounds like 
your new caching improvement will help reduce CPU load. It should also reduce 
the workload of the garbage collector by not having, in the above example, 2800 
objects allocated and then dereferenced with every page render.





RE: Efficiency of 1.5 MountMapper weighted/matching algorithm

2011-10-16 Thread Chris Colman
I'll try to get some time to build a test to get some timings.




Efficiency of 1.5 MountMapper weighted/matching algorithm

2011-10-15 Thread Chris Colman
While debugging a problem during our migration from 1.4 to 1.5, I noticed
that quite an extensive set of operations occurs for each bookmarkable
page link that we create.
 
We have pages that can have 20-40 links to bookmarkable pages. We also
have about 40 different page classes, each mounted with a different
path.
 
What I noticed was that when each BookmarkablePageLink was established
the following occurred:
 
1. Every mounted page was compared with the URL of the link; this involved
   many URL segment comparisons.
2. A matching score/weight was associated with each mounted page.
3. The score/page pairs were added to a collection.
4. After all mounted pages had been evaluated, the collection was sorted.
5. The page with the best match was then used for further processing of the
   BookmarkablePageLink.
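
The steps above can be sketched roughly as follows. This is an illustration of the idea only, not Wicket's actual MountMapper code, and the scoring rule (count of shared leading URL segments) is a simplification:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Rough sketch of the score-and-sort pass described above: one fresh
// collection of score/mount pairs is built and sorted per lookup.
public class MountScorer {
    // Weight a mount against a target URL by shared leading segments
    static int score(String mountPath, String url) {
        String[] m = mountPath.split("/");
        String[] u = url.split("/");
        int s = 0;
        while (s < m.length && s < u.length && m[s].equals(u[s])) s++;
        return s;
    }

    // Score every mount, collect the pairs, sort, take the best - the
    // work the thread suggests happens for every link rendered
    static String bestMount(List<String> mounts, String url) {
        List<Map.Entry<String, Integer>> scored = new ArrayList<>();
        for (String mount : mounts) {
            scored.add(Map.entry(mount, score(mount, url)));
        }
        scored.sort((a, b) -> Integer.compare(b.getValue(), a.getValue()));
        return scored.get(0).getKey();
    }
}
```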
 
Given the number of links we have on each page and the number of page
classes we have mounted and the processor intensive work in performing
the above steps I was wondering if the information collected above could
be cached to avoid performing all of the above steps each time a
BookmarkablePageLink is created.
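
A minimal sketch of the caching idea (hypothetical names, not Wicket's API; it assumes the set of mounts is fixed after startup, so no invalidation is needed):

```java
import java.util.List;
import java.util.concurrent.ConcurrentHashMap;

// Memoise the winning mount per target URL so the full score-and-sort
// pass runs once per distinct target instead of once per link rendered.
public class BestMountCache {
    private final ConcurrentHashMap<String, String> cache = new ConcurrentHashMap<>();
    private final List<String> mounts;

    public BestMountCache(List<String> mounts) {
        this.mounts = mounts;
    }

    public String bestFor(String url) {
        // computeIfAbsent: at most one scoring pass per distinct URL,
        // without a coarse lock around the whole lookup
        return cache.computeIfAbsent(url, this::scoreAndSort);
    }

    // Stand-in for the expensive pass: longest mount that prefixes the URL
    private String scoreAndSort(String url) {
        return mounts.stream()
                     .filter(url::startsWith)
                     .max((a, b) -> Integer.compare(a.length(), b.length()))
                     .orElse("");
    }
}
```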
 
Obviously this isn't a problem during debug with a single user, but when
1000s of pages need to be rendered each minute, the time spent performing
the above operations may become significant. I haven't done any
benchmark testing, but from experience, the frequent allocation and
building of collections, and the sorting, can get CPU expensive, and
switching to a caching alternative usually leads to significant
performance improvements.
 
Yours sincerely,
 
Chris Colman
 
Pagebloom Team Leader,
Step Ahead Software

 
pagebloom - your business & your website growing together
 
Sydney: (+61 2) 9656 1278 Canberra: (+61 2) 6100 2120 
Email: chr...@stepahead.com.au 
Website: http://www.pagebloom.com 
http://develop.stepaheadsoftware.com
 


Re: Efficiency of 1.5 MountMapper weighted/matching algorithm

2011-10-15 Thread Jeremy Thomerson
On Sat, Oct 15, 2011 at 8:28 PM, Chris Colman
chr...@stepaheadsoftware.comwrote:

 Obviously this isn’t a problem during debug with a single user but when
 1000s of pages need to be rendered each minute the time spent performing the
 above operations may become significant. I haven’t done any benchmark
 testing but from experience, the frequent allocation and compiling of
 collections and sorting can get CPU expensive and switching to a caching
 alternative usually leads to significant performance improvements.


It'd definitely be worth optimizing if we can prove it's a bottle-neck.  But
we try to avoid premature optimization.  Can you put together some numbers
to see what kind of processing load we're talking about?  I'd be interested
in seeing % of overall processing time under load.  Something like with X
clients browsing Y pages per minute, each page render took an average R
milliseconds, and Z milliseconds of this was in creating link URLs.  Or
something like that.

-- 
Jeremy Thomerson
http://wickettraining.com
*Need a CMS for Wicket?  Use Brix! http://brixcms.org*