Thanks, this is important work. Are you warming up your pages (e.g. by
contributing to PagePreloader) in production, and do you know if it makes
any difference?

Kalle

On Tue, Jul 19, 2016 at 2:41 AM, Michael Mikhulya <m.mikhu...@gmail.com>
wrote:

> Hello,
>
> I would like to discuss one issue and idea how to fix it.
>
> Currently, page loading takes a long time. I understand that it is
> only done once, but:
> 1) if we start the webserver under serious load, many HTTP requests
> queue up before it has fully started, and many concurrency problems
> arise, including starvation. It is often not easy to understand
> what is going wrong in such situations.
> 2) for the test environment, I prefer to load all pages in parallel during
> startup to catch validation errors (trivial mistakes in the tml, etc.).
>
> The startup time of the test environment is important to me.
>
> Here are the startup logs:
>
> 12.05.2016 15:25:06
> org.apache.tapestry5.modules.InternalModule.PageLoader.invoke
> INFO: Loaded page 'admin/mail/TemplateManagement' (ru_US) in 2367,497 ms
> ...
> 12.05.2016 15:25:10
> org.apache.tapestry5.modules.InternalModule.PageLoader.invoke
> INFO: Loaded page 'account/Questionary' (ru_US) in 2273,285 ms
> 12.05.2016 15:25:10
> org.apache.tapestry5.modules.InternalModule.PageLoader.invoke
> INFO: Loaded page 'account/Register' (ru_US) in 31,681 ms
>
> Loading all the pages of a very simple project takes about 6-7
> seconds.
>
> I decided to spend some time checking whether some easy fixes could be
> applied. First of all, I wanted to improve concurrency, so I checked lock
> contention.
>
> Here are the worst offenders:
> grep "waiting to lock" load_page.log | sort | uniq -c | sort -g -r | head -n 7
>      46         - waiting to lock <0x00000000d486f5e8> (a
> org.apache.tapestry5.internal.plastic.PlasticClassLoader)
>      35         - waiting to lock <0x00000000db2fbf80> (a
> org.apache.tapestry5.internal.services.ComponentInstantiatorSourceImpl)
>      28         - waiting to lock <0x00000000d49028a0> (a
> org.apache.tapestry5.internal.services.ComponentInstantiatorSourceImpl)
>      15         - waiting to lock <0x00000000d1880688> (a
> org.apache.tapestry5.ioc.internal.services.JustInTimeObjectCreator)
>       5         - waiting to lock <0x00000000dc4a2930> (a
> org.apache.tapestry5.ioc.internal.services.JustInTimeObjectCreator)
>       5         - waiting to lock <0x00000000d59c7168> (a
> org.apache.tapestry5.ioc.internal.services.JustInTimeObjectCreator)
>       3         - waiting to lock <0x00000000d49b9638> (a
> org.apache.tapestry5.ioc.internal.services.PropertyAccessImpl)
>
>
> The worst offenders are ComponentInstantiatorSourceImpl and PlasticClassLoader.
>
> Fixing ComponentInstantiatorSourceImpl was trivial; see
> https://issues.apache.org/jira/browse/TAP5-2557.
>
> Another patch is here: https://issues.apache.org/jira/browse/TAP5-2558
>
> Could anybody please review and apply these patches?
>
>
> The situation with PlasticClassLoader is not so simple, and I would like
> to discuss it with you.
>
> There are only 4 hot spots: the synchronized methods of the
> PlasticClassLoader class, and PlasticClassPool, which uses "synchronized
> (loader) {" critical sections.
>
> I have an interesting idea for splitting the one global lock into many
> small ones: use one lock per class.
> It is easily achievable with the following approach:
>
> +++ b/plastic/src/main/java/org/apache/tapestry5/internal/plastic/PlasticClassLoader.java
> -   protected synchronized Class<?> loadClass(String name, boolean resolve) throws ClassNotFoundException
> +   protected Class<?> loadClass(String name, boolean resolve) throws ClassNotFoundException
>      {
> +       synchronized (name.intern()) {
>
> Most servlet containers have recently added support for parallel class
> loading:
> https://bugs.eclipse.org/bugs/show_bug.cgi?id=464442
> https://bz.apache.org/bugzilla/show_bug.cgi?id=57681
> I believe Tapestry should support it too.
> It shouldn't be too hard with synchronization on the interned class name.
> Maybe even this synchronization can be avoided...
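> As a minimal sketch of the per-class-name locking idea (illustrative
> only, not Tapestry's actual code): since JDK 7, a ClassLoader subclass
> can register itself as parallel-capable and override
> getClassLoadingLock(String) to return one lock object per class name,
> which avoids both the single global lock and any reliance on
> String.intern() (whose lock objects live in a JVM-wide pool):

```java
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical loader name; shows the standard JDK mechanism for
// per-class-name locking instead of synchronizing on the loader itself.
public class PerNameLockingLoader extends ClassLoader {
    static {
        // Tells the JVM this loader may load different classes concurrently.
        ClassLoader.registerAsParallelCapable();
    }

    // One dedicated lock object per class name; computeIfAbsent is atomic.
    private final ConcurrentHashMap<String, Object> locks = new ConcurrentHashMap<>();

    public PerNameLockingLoader(ClassLoader parent) {
        super(parent);
    }

    @Override
    protected Object getClassLoadingLock(String name) {
        return locks.computeIfAbsent(name, k -> new Object());
    }

    @Override
    protected Class<?> loadClass(String name, boolean resolve) throws ClassNotFoundException {
        // Two threads loading *different* classes no longer contend here.
        synchronized (getClassLoadingLock(name)) {
            return super.loadClass(name, resolve);
        }
    }
}
```

> (A small caveat of this sketch: the lock map grows by one entry per
> loaded class; the JDK's own parallel-capable loaders behave the same way.)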
>
> Could anybody check this idea?
>
>
> Wbr,
> Michael.
