Re: mod_jk 1.2.28 on i5/OS

2009-08-12 Thread Rainer Jung

On 05.08.2009 12:37, Henri Gomez wrote:

Hi Rainer.

With your latest patch, it seems to work.

Maybe the problem wasn't a thread collision but just a pool
problem with double inits.

I'll do more tests and also a stress load.




The same patches were needed to make it work on i5/OS V6R1.

When is a new release of mod_jk planned (1.2.29?)


Hi Henri,

I collected a couple of ideas for a 1.3 branch, so doing a 1.2.29 in the 
next 2-4 weeks before branching seems reasonable, because we have some 
small things patched since 1.2.28. I'm a bit busy right now, but could 
move towards releasing next week.


Is there any additional change you want to provide or that is needed for i5?

Regards,

Rainer

-
To unsubscribe, e-mail: dev-unsubscr...@tomcat.apache.org
For additional commands, e-mail: dev-h...@tomcat.apache.org



Re: mod_jk 1.2.28 on i5/OS

2009-08-05 Thread Henri Gomez
 Hi Rainer.

 With your latest patch, it seems to work.

 Maybe the problem wasn't a thread collision but just a pool
 problem with double inits.

 I'll do more tests and also a stress load.



The same patches were needed to make it work on i5/OS V6R1.

When is a new release of mod_jk planned (1.2.29?)

Regards




Re: mod_jk 1.2.28 on i5/OS

2009-05-15 Thread Henri Gomez
Hi Rainer.

With your latest patch, it seems to work.

Maybe the problem wasn't a thread collision but just a pool
problem with double inits.

I'll do more tests and also a stress load.


Regards

2009/5/14 Henri Gomez henri.go...@gmail.com:
 I'll try the new patch today.

 Thanks for your time on this !

 On 13 May 09 at 15:51, Rainer Jung rainer.j...@kippdata.de wrote:

 Sorry for the broken patch. Besides the not so nice multiple registration
 of the cleanup, the real problem for the crash after the patch is that
 clear() on a pool already calls the cleanup. So I had to register the
 cleanup for the parent pool (pconf) and not for the pool itself.

 I'll think about the thread-safety next, but as I said that is not the
 cause for your crashes.

 Regards,

 Rainer

 On 13.05.2009 14:56, Henri Gomez wrote:

 Some comments on your latest provided patch:

    if (!jk_resolv_pool) {
        if (apr_pool_create(&jk_resolv_pool, (apr_pool_t *)pool)
            != APR_SUCCESS) {
            JK_TRACE_EXIT(l);
            return JK_FALSE;
        }
    }
    /* We need to clear the pool reference, if the pool gets destroyed
     * via its parent pool. */
    apr_pool_cleanup_register(jk_resolv_pool, &jk_resolv_pool,
                              jk_resolv_cleanup, jk_resolv_cleanup);
    apr_pool_clear(jk_resolv_pool);
    if (apr_sockaddr_info_get
        (&remote_sa, host, APR_UNSPEC, (apr_port_t) port, 0, jk_resolv_pool)
        != APR_SUCCESS) {
        JK_TRACE_EXIT(l);
        return JK_FALSE;
    }

 Why not just register the cleanup right where the pool is created?

    if (!jk_resolv_pool) {
        if (apr_pool_create(&jk_resolv_pool, (apr_pool_t *)pool)
            != APR_SUCCESS) {
            JK_TRACE_EXIT(l);
            return JK_FALSE;
        }

        /* We need to clear the pool reference, if the pool gets destroyed
         * via its parent pool. */
        apr_pool_cleanup_register(jk_resolv_pool, &jk_resolv_pool,
                                  jk_resolv_cleanup, jk_resolv_cleanup);
    }

    apr_pool_clear(jk_resolv_pool);
    if (apr_sockaddr_info_get
        (&remote_sa, host, APR_UNSPEC, (apr_port_t) port, 0, jk_resolv_pool)
        != APR_SUCCESS) {
        JK_TRACE_EXIT(l);
        return JK_FALSE;
    }

 Also, what could happen if many threads call jk_resolve() at the same time?

 jk_connect.patch




Re: mod_jk 1.2.28 on i5/OS

2009-05-14 Thread Henri Gomez

I'll try the new patch today.

Thanks for your time on this !

On 13 May 09 at 15:51, Rainer Jung rainer.j...@kippdata.de wrote:

Sorry for the broken patch. Besides the not so nice multiple registration
of the cleanup, the real problem for the crash after the patch is that
clear() on a pool already calls the cleanup. So I had to register the
cleanup for the parent pool (pconf) and not for the pool itself.

I'll think about the thread-safety next, but as I said that is not the
cause for your crashes.

Regards,

Rainer

On 13.05.2009 14:56, Henri Gomez wrote:

Some comments on your latest provided patch:

    if (!jk_resolv_pool) {
        if (apr_pool_create(&jk_resolv_pool, (apr_pool_t *)pool)
            != APR_SUCCESS) {
            JK_TRACE_EXIT(l);
            return JK_FALSE;
        }
    }
    /* We need to clear the pool reference, if the pool gets destroyed
     * via its parent pool. */
    apr_pool_cleanup_register(jk_resolv_pool, &jk_resolv_pool,
                              jk_resolv_cleanup, jk_resolv_cleanup);
    apr_pool_clear(jk_resolv_pool);
    if (apr_sockaddr_info_get
        (&remote_sa, host, APR_UNSPEC, (apr_port_t) port, 0, jk_resolv_pool)
        != APR_SUCCESS) {
        JK_TRACE_EXIT(l);
        return JK_FALSE;
    }

Why not just register the cleanup right where the pool is created?

    if (!jk_resolv_pool) {
        if (apr_pool_create(&jk_resolv_pool, (apr_pool_t *)pool)
            != APR_SUCCESS) {
            JK_TRACE_EXIT(l);
            return JK_FALSE;
        }

        /* We need to clear the pool reference, if the pool gets destroyed
         * via its parent pool. */
        apr_pool_cleanup_register(jk_resolv_pool, &jk_resolv_pool,
                                  jk_resolv_cleanup, jk_resolv_cleanup);
    }

    apr_pool_clear(jk_resolv_pool);
    if (apr_sockaddr_info_get
        (&remote_sa, host, APR_UNSPEC, (apr_port_t) port, 0, jk_resolv_pool)
        != APR_SUCCESS) {
        JK_TRACE_EXIT(l);
        return JK_FALSE;
    }

Also, what could happen if many threads call jk_resolve() at the same time?

jk_connect.patch




Re: mod_jk 1.2.28 on i5/OS

2009-05-13 Thread Henri Gomez
Hi Rainer.

The new patch didn't fix it:

User Trace Dump for job 680894/QTMHHTTP/DAPSERVER. Size: 300K, Wrapped
0 times.
--- 05/13/2009 10:35:17 ---
 0018:292544 apr_palloc: WARNING --
 0018:292568 apr_palloc() called with NULL pool.
 0018:292592 requested size to allocate = 16.
 0018:292624 Stack Dump For Current Thread
 0018:292632 Stack:  apr_pools.c : Current stack -
 0018:292696 Stack:  Library/ Program Module  Stmt
Procedure
 0018:292744 Stack:  QSYS   / QCMD455   :
 0018:292776 Stack:  QHTTPSVR   / QZHBMAINZHBMAIN 0 :
_CXX_PEP__Fv
 0018:292808 Stack:  QHTTPSVR   / QZHBMAINZHBMAIN 18:
main
 0018:292848 Stack:  QHTTPSVR   / QZHBMAINZHBMAIN 234   :
BigSwitch__FiPPc
 0018:292872 Stack:  QHTTPSVR   / QZSRHTTPQZSRMAIN0 :
_CXX_PEP__Fv
 0018:292904 Stack:  QHTTPSVR   / QZSRHTTPQZSRMAIN2 :
main
 0018:301392 Stack:  QHTTPSVR   / QZSRCOREMAIN718   :
apache_main
 0018:311408 Stack:  QHTTPSVR   / QZSRCOREHTTP_CONFI  5 :
ap_run_post_config
 0018:311688 Stack:  QHTTPSVR   / MOD_JK1229  MOD_JK  60:
jk_post_config
 0018:311720 Stack:  QHTTPSVR   / MOD_JK1229  MOD_JK  35:
init_jk
 0018:312088 Stack:  QHTTPSVR   / MOD_JK1229  JK_WORKER   34:
wc_open
0018:312128 Stack:  QHTTPSVR   / MOD_JK1229  JK_WORKER   9 :
build_worker_map
0018:312152 Stack:  QHTTPSVR   / MOD_JK1229  JK_WORKER   28:
wc_create_worker
0018:312296 Stack:  QHTTPSVR   / MOD_JK1229  JK_AJP13_W  5 :
validate
0018:312320 Stack:  QHTTPSVR   / MOD_JK1229  JK_AJP_COM  29:
ajp_validate
0018:312352 Stack:  QHTTPSVR   / MOD_JK1229  JK_CONNECT  21:
jk_resolve
0018:328256 Stack:  QHTTPSVR   / QZSRAPR SOCKADDR5 :
find_addresses
0018:328296 Stack:  QHTTPSVR   / QZSRAPR APR_STRING  4 :
apr_pstrdup
0018:331816 Stack:  QHTTPSVR   / QZSRAPR APR_POOLS   11:
apr_palloc
0018:331848 Stack:  QHTTPSVR   / QZSRAPR OS400TRACE  7 :
apr_dstack_CCSID
0018:341576 Stack:  QSYS   / QP0ZCPA QP0ZUDBG3 :
Qp0zDumpStack
0018:358784 Stack:  QSYS   / QP0ZSCPAQP0ZSDBG2 :
Qp0zSUDumpStack
0018:358816 Stack:  QSYS   / QP0ZSCPAQP0ZSDBG12:
Qp0zSUDumpTargetStack
0018:362632 Stack:  Completed



  TRCTCPAPP Output
0018:382128 Stack:  QHTTPSVR   / QZSRHTTPQZSRMAIN0 :
_CXX_PEP__Fv
0018:382136 Stack:  QHTTPSVR   / QZSRHTTPQZSRMAIN2 :
main
0018:382144 Stack:  QHTTPSVR   / QZSRCOREMAIN718   :
apache_main
0018:382160 Stack:  QHTTPSVR   / QZSRCOREHTTP_CONFI  5 :
ap_run_post_config
0018:382168 Stack:  QHTTPSVR   / MOD_JK1229  MOD_JK  60:
jk_post_config
0018:382176 Stack:  QHTTPSVR   / MOD_JK1229  MOD_JK  35:
init_jk
0018:382192 Stack:  QHTTPSVR   / MOD_JK1229  JK_WORKER   34:
wc_open
0018:382200 Stack:  QHTTPSVR   / MOD_JK1229  JK_WORKER   9 :
build_worker_map
0018:382208 Stack:  QHTTPSVR   / MOD_JK1229  JK_WORKER   28:
wc_create_worker
0018:382216 Stack:  QHTTPSVR   / MOD_JK1229  JK_AJP13_W  5 :
validate
0018:382232 Stack:  QHTTPSVR   / MOD_JK1229  JK_AJP_COM  29:
ajp_validate
0018:382240 Stack:  QHTTPSVR   / MOD_JK1229  JK_CONNECT  21:
jk_resolve
0018:382248 Stack:  QHTTPSVR   / QZSRAPR SOCKADDR5 :
find_addresses
0018:391552 Stack:  QHTTPSVR   / QZSRAPR APR_STRING  5 :
apr_pstrdup
0018:391576 Stack:  QHTTPSVR   / QZSRCOREMAIN18:
Main_Excp_Handler
0018:391584 Stack:  QHTTPSVR   / QZSRAPR OS400TRACE  7 :
apr_dstack_CCSID
0018:391600 Stack:  QSYS   / QP0ZCPA QP0ZUDBG3 :
Qp0zDumpStack
0018:391608 Stack:  QSYS   / QP0ZSCPAQP0ZSDBG2 :
Qp0zSUDumpStack



2009/5/12 Henri Gomez henri.go...@gmail.com:
 I'll try it tomorrow !

 2009/5/12 Rainer Jung rainer.j...@kippdata.de:
 Here's the patch keeping the original structure but using a cleanup to
 destroy the pool reference. If it works, I would like that better.

 Regards,

 Rainer





Re: mod_jk 1.2.28 on i5/OS

2009-05-13 Thread Henri Gomez
Some comments on your latest provided patch:

    if (!jk_resolv_pool) {
        if (apr_pool_create(&jk_resolv_pool, (apr_pool_t *)pool)
            != APR_SUCCESS) {
            JK_TRACE_EXIT(l);
            return JK_FALSE;
        }
    }
    /* We need to clear the pool reference, if the pool gets destroyed
     * via its parent pool. */
    apr_pool_cleanup_register(jk_resolv_pool, &jk_resolv_pool,
                              jk_resolv_cleanup, jk_resolv_cleanup);
    apr_pool_clear(jk_resolv_pool);
    if (apr_sockaddr_info_get
        (&remote_sa, host, APR_UNSPEC, (apr_port_t) port, 0, jk_resolv_pool)
        != APR_SUCCESS) {
        JK_TRACE_EXIT(l);
        return JK_FALSE;
    }

Why not just register the cleanup right where the pool is created?

    if (!jk_resolv_pool) {
        if (apr_pool_create(&jk_resolv_pool, (apr_pool_t *)pool)
            != APR_SUCCESS) {
            JK_TRACE_EXIT(l);
            return JK_FALSE;
        }

        /* We need to clear the pool reference, if the pool gets destroyed
         * via its parent pool. */
        apr_pool_cleanup_register(jk_resolv_pool, &jk_resolv_pool,
                                  jk_resolv_cleanup, jk_resolv_cleanup);
    }

    apr_pool_clear(jk_resolv_pool);
    if (apr_sockaddr_info_get
        (&remote_sa, host, APR_UNSPEC, (apr_port_t) port, 0, jk_resolv_pool)
        != APR_SUCCESS) {
        JK_TRACE_EXIT(l);
        return JK_FALSE;
    }

Also, what could happen if many threads call jk_resolve() at the same time?




Re: mod_jk 1.2.28 on i5/OS

2009-05-13 Thread Rainer Jung
On 13.05.2009 14:56, Henri Gomez wrote:
 Some comments on your latest provided patch:

    if (!jk_resolv_pool) {
        if (apr_pool_create(&jk_resolv_pool, (apr_pool_t *)pool)
            != APR_SUCCESS) {
            JK_TRACE_EXIT(l);
            return JK_FALSE;
        }
    }
    /* We need to clear the pool reference, if the pool gets destroyed
     * via its parent pool. */
    apr_pool_cleanup_register(jk_resolv_pool, &jk_resolv_pool,
                              jk_resolv_cleanup, jk_resolv_cleanup);
    apr_pool_clear(jk_resolv_pool);
    if (apr_sockaddr_info_get
        (&remote_sa, host, APR_UNSPEC, (apr_port_t) port, 0, jk_resolv_pool)
        != APR_SUCCESS) {
        JK_TRACE_EXIT(l);
        return JK_FALSE;
    }

 Why not just register the cleanup right where the pool is created?


You are totally right. I had it in the error branch of the following if,
and then moved it up one too many blocks. Although that makes it more
correct: if the patch doesn't help, the changed patch won't help either.
Or did you try? We need to add some logging to the cleanup to see why
it doesn't work (check whether it gets called).

    if (!jk_resolv_pool) {
        if (apr_pool_create(&jk_resolv_pool, (apr_pool_t *)pool)
            != APR_SUCCESS) {
            JK_TRACE_EXIT(l);
            return JK_FALSE;
        }

        /* We need to clear the pool reference, if the pool gets destroyed
         * via its parent pool. */
        apr_pool_cleanup_register(jk_resolv_pool, &jk_resolv_pool,
                                  jk_resolv_cleanup, jk_resolv_cleanup);
    }

    apr_pool_clear(jk_resolv_pool);
    if (apr_sockaddr_info_get
        (&remote_sa, host, APR_UNSPEC, (apr_port_t) port, 0, jk_resolv_pool)
        != APR_SUCCESS) {
        JK_TRACE_EXIT(l);
        return JK_FALSE;
    }

 Also, what could happen if many threads call jk_resolve() at the same time?

To make it absolutely correct we need to use a mutex, but the crash you
experience is not due to multi-threading issues.

jk_resolve() is called during startup and also later, in case you change
a worker address via the status worker. The whole startup is done
single-threaded; the init code only runs on one thread.

Changing addresses could be done from multiple threads, if you do it using
parallel requests to the status worker. This is something we should also
fix, but it is not the problem you observe.

You can set the JkLogLevel to trace; then you will get a log line for
each entry and exit of jk_resolve(). You'll notice that there will be no
entries without leaving first.

Regards,

Rainer





Re: mod_jk 1.2.28 on i5/OS

2009-05-13 Thread Rainer Jung
Gimme a few minutes, there's something non-i5-specific wrong with the
patch ...

On 13.05.2009 14:56, Henri Gomez wrote:
 Some comments on your latest provided patch:

    if (!jk_resolv_pool) {
        if (apr_pool_create(&jk_resolv_pool, (apr_pool_t *)pool)
            != APR_SUCCESS) {
            JK_TRACE_EXIT(l);
            return JK_FALSE;
        }
    }
    /* We need to clear the pool reference, if the pool gets destroyed
     * via its parent pool. */
    apr_pool_cleanup_register(jk_resolv_pool, &jk_resolv_pool,
                              jk_resolv_cleanup, jk_resolv_cleanup);
    apr_pool_clear(jk_resolv_pool);
    if (apr_sockaddr_info_get
        (&remote_sa, host, APR_UNSPEC, (apr_port_t) port, 0, jk_resolv_pool)
        != APR_SUCCESS) {
        JK_TRACE_EXIT(l);
        return JK_FALSE;
    }

 Why not just register the cleanup right where the pool is created?

    if (!jk_resolv_pool) {
        if (apr_pool_create(&jk_resolv_pool, (apr_pool_t *)pool)
            != APR_SUCCESS) {
            JK_TRACE_EXIT(l);
            return JK_FALSE;
        }

        /* We need to clear the pool reference, if the pool gets destroyed
         * via its parent pool. */
        apr_pool_cleanup_register(jk_resolv_pool, &jk_resolv_pool,
                                  jk_resolv_cleanup, jk_resolv_cleanup);
    }

    apr_pool_clear(jk_resolv_pool);
    if (apr_sockaddr_info_get
        (&remote_sa, host, APR_UNSPEC, (apr_port_t) port, 0, jk_resolv_pool)
        != APR_SUCCESS) {
        JK_TRACE_EXIT(l);
        return JK_FALSE;
    }

 Also, what could happen if many threads call jk_resolve() at the same time?




Re: mod_jk 1.2.28 on i5/OS

2009-05-13 Thread Rainer Jung
Sorry for the broken patch. Besides the not so nice multiple registration
of the cleanup, the real problem for the crash after the patch is that
clear() on a pool already calls the cleanup. So I had to register the
cleanup for the parent pool (pconf) and not for the pool itself.

I'll think about the thread-safety next, but as I said that is not the
cause for your crashes.

Regards,

Rainer

On 13.05.2009 14:56, Henri Gomez wrote:
 Some comments on your latest provided patch:

    if (!jk_resolv_pool) {
        if (apr_pool_create(&jk_resolv_pool, (apr_pool_t *)pool)
            != APR_SUCCESS) {
            JK_TRACE_EXIT(l);
            return JK_FALSE;
        }
    }
    /* We need to clear the pool reference, if the pool gets destroyed
     * via its parent pool. */
    apr_pool_cleanup_register(jk_resolv_pool, &jk_resolv_pool,
                              jk_resolv_cleanup, jk_resolv_cleanup);
    apr_pool_clear(jk_resolv_pool);
    if (apr_sockaddr_info_get
        (&remote_sa, host, APR_UNSPEC, (apr_port_t) port, 0, jk_resolv_pool)
        != APR_SUCCESS) {
        JK_TRACE_EXIT(l);
        return JK_FALSE;
    }

 Why not just register the cleanup right where the pool is created?

    if (!jk_resolv_pool) {
        if (apr_pool_create(&jk_resolv_pool, (apr_pool_t *)pool)
            != APR_SUCCESS) {
            JK_TRACE_EXIT(l);
            return JK_FALSE;
        }

        /* We need to clear the pool reference, if the pool gets destroyed
         * via its parent pool. */
        apr_pool_cleanup_register(jk_resolv_pool, &jk_resolv_pool,
                                  jk_resolv_cleanup, jk_resolv_cleanup);
    }

    apr_pool_clear(jk_resolv_pool);
    if (apr_sockaddr_info_get
        (&remote_sa, host, APR_UNSPEC, (apr_port_t) port, 0, jk_resolv_pool)
        != APR_SUCCESS) {
        JK_TRACE_EXIT(l);
        return JK_FALSE;
    }

 Also, what could happen if many threads call jk_resolve() at the same time?
Index: common/jk_connect.c
===================================================================
--- common/jk_connect.c (revision 763986)
+++ common/jk_connect.c (working copy)
@@ -35,7 +35,7 @@
 #include "apr_errno.h"
 #include "apr_general.h"
 #include "apr_pools.h"
-static apr_pool_t *jk_apr_pool = NULL;
+static apr_pool_t *jk_resolv_pool = NULL;
 #endif
 
 #ifdef HAVE_SYS_FILIO_H
@@ -58,6 +58,13 @@
 typedef const char* SET_TYPE;
 #endif
 
+static apr_status_t jk_resolv_cleanup(void *d)
+{
+    /* Clean up pointer content */
+    *(apr_pool_t **)d = NULL;
+    return APR_SUCCESS;
+}
+
 /** Set socket to blocking
  * @param sd  socket to manipulate
  * @return    errno: fcntl returns -1 (!WIN32)
@@ -343,15 +350,19 @@
     apr_sockaddr_t *remote_sa, *temp_sa;
     char *remote_ipaddr;
 
-    if (!jk_apr_pool) {
-        if (apr_pool_create(&jk_apr_pool, (apr_pool_t *)pool)
-            != APR_SUCCESS) {
+    if (!jk_resolv_pool) {
+        if (apr_pool_create(&jk_resolv_pool, (apr_pool_t *)pool)
+            != APR_SUCCESS) {
             JK_TRACE_EXIT(l);
             return JK_FALSE;
         }
+        apr_pool_cleanup_register((apr_pool_t *)pool, &jk_resolv_pool,
+                                  jk_resolv_cleanup, jk_resolv_cleanup);
     }
-    apr_pool_clear(jk_apr_pool);
+    /* We need to clear the pool reference, if the pool gets destroyed
+     * via its parent pool. */
+    apr_pool_clear(jk_resolv_pool);
     if (apr_sockaddr_info_get
-        (&remote_sa, host, APR_UNSPEC, (apr_port_t) port, 0, jk_apr_pool)
+        (&remote_sa, host, APR_UNSPEC, (apr_port_t) port, 0,
+         jk_resolv_pool)
         != APR_SUCCESS) {
         JK_TRACE_EXIT(l);
         return JK_FALSE;
     }


Re: mod_jk 1.2.28 on i5/OS

2009-05-12 Thread Henri Gomez
FYI.

If I comment out the apr_pool_clear() call, I don't get the initialisation error.

2009/5/12 Henri Gomez henri.go...@gmail.com:
 Hi to all,

 I rebuild the mod_jk 1.2.28 on our i5/OS and Apache instance failed.

 Here is the stack trace :

 0009:259448 Stack:  Library    / Program     Module      Stmt
 Procedure
 0009:259488 Stack:  QSYS       / QCMD                    455   :
 0009:259520 Stack:  QHTTPSVR   / QZHBMAIN    ZHBMAIN     0     :
 _CXX_PEP__Fv
 0009:259552 Stack:  QHTTPSVR   / QZHBMAIN    ZHBMAIN     18    :
 main
 0009:259576 Stack:  QHTTPSVR   / QZHBMAIN    ZHBMAIN     234   :
 BigSwitch__FiPPc
 0009:259608 Stack:  QHTTPSVR   / QZSRHTTP    QZSRMAIN    0     :
 _CXX_PEP__Fv
 0009:259640 Stack:  QHTTPSVR   / QZSRHTTP    QZSRMAIN    2     :
 main
 0009:267440 Stack:  QHTTPSVR   / QZSRCORE    MAIN        868   :
 apache_main
 0009:287992 Stack:  QHTTPSVR   / QZSRCORE    HTTP_CONFI  5     :
 ap_run_post_config
 0009:288288 Stack:  QHTTPSVR   / MOD_JK1228  MOD_JK      60    :
 jk_post_config
 0009:288320 Stack:  QHTTPSVR   / MOD_JK1228  MOD_JK      35    :
 init_jk
 0009:288688 Stack:  QHTTPSVR   / MOD_JK1228  JK_WORKER   34    :
 wc_open
 0009:288720 Stack:  QHTTPSVR   / MOD_JK1228  JK_WORKER   9     :
 build_worker_map
 0009:296848 Stack:  QHTTPSVR   / MOD_JK1228  JK_WORKER   28    :
 wc_create_worker
 0009:298192 Stack:  QHTTPSVR   / MOD_JK1228  JK_AJP13_W  5     :
 validate
 0009:298208 Stack:  QHTTPSVR   / MOD_JK1228  JK_AJP_COM  29    :
 ajp_validate
 0009:298216 Stack:  QHTTPSVR   / MOD_JK1228  JK_CONNECT  19    :
 jk_resolve
 0009:316840 Stack:  QHTTPSVR   / QZSRAPR     APR_POOLS   13    :
 apr_pool_clear
 0009:316864 Stack:  QHTTPSVR   / QZSRAPR     APR_POOLS   8     :
 allocator_free
 0009:316880 Stack:  QHTTPSVR   / QZSRCORE    MAIN        18    :
 Main_Excp_Handler
 0009:316888 Stack:  QHTTPSVR   / QZSRAPR     OS400TRACE  7     :
 apr_dstack_CCSID
 0009:326912 Stack:  QSYS       / QP0ZCPA     QP0ZUDBG    3     :
 Qp0zDumpStack
 0009:346808 Stack:  QSYS       / QP0ZSCPA    QP0ZSDBG    2     :
 Qp0zSUDumpStack
 0009:346824 Stack:  QSYS       / QP0ZSCPA    QP0ZSDBG    12    :
 Qp0zSUDumpTargetStack
 0009:346824 Stack:  Completed
 0009:407280 apr_dump_trace(): dump for job
 678302/QTMHHTTP/DAPSERVER
                                                 TRCTCPAPP Output

 The problem appears in jk_resolve() just after apr_pool_create().

 What happens if two threads enter jk_resolve() at the same time?

        if (!jk_apr_pool) {
            if (apr_pool_create(&jk_apr_pool, (apr_pool_t *)pool)
                != APR_SUCCESS) {
                JK_TRACE_EXIT(l);
                return JK_FALSE;
            }
        }
        apr_pool_clear(jk_apr_pool);
        if (apr_sockaddr_info_get
            (&remote_sa, host, APR_UNSPEC, (apr_port_t) port, 0, jk_apr_pool)
            != APR_SUCCESS) {
            JK_TRACE_EXIT(l);
            return JK_FALSE;
        }





Re: mod_jk 1.2.28 on i5/OS

2009-05-12 Thread Rainer Jung
Hi Henri,

can you try the patch below? It replaces the global pool with a
function-local one, which is OK, because the resolver calls are not in
the performance-critical path (mostly startup initialization and
reconfiguration).

Why do you think it is possible that multiple threads will enter
jk_resolve() in parallel?

Regards,

Rainer

Index: jk_connect.c
===================================================================
--- jk_connect.c        (revision 763986)
+++ jk_connect.c        (working copy)
@@ -35,7 +35,6 @@
 #include "apr_errno.h"
 #include "apr_general.h"
 #include "apr_pools.h"
-static apr_pool_t *jk_apr_pool = NULL;
 #endif
 
 #ifdef HAVE_SYS_FILIO_H
@@ -342,17 +341,16 @@
 #ifdef HAVE_APR
         apr_sockaddr_t *remote_sa, *temp_sa;
         char *remote_ipaddr;
+        apr_pool_t *jk_apr_pool = NULL;
 
-        if (!jk_apr_pool) {
-            if (apr_pool_create(&jk_apr_pool, (apr_pool_t *)pool) !=
-                APR_SUCCESS) {
-                JK_TRACE_EXIT(l);
-                return JK_FALSE;
-            }
+        if (apr_pool_create(&jk_apr_pool, (apr_pool_t *)pool) !=
+            APR_SUCCESS) {
+            JK_TRACE_EXIT(l);
+            return JK_FALSE;
         }
-        apr_pool_clear(jk_apr_pool);
         if (apr_sockaddr_info_get
             (&remote_sa, host, APR_UNSPEC, (apr_port_t) port, 0,
              jk_apr_pool)
             != APR_SUCCESS) {
+            apr_pool_destroy(jk_apr_pool);
             JK_TRACE_EXIT(l);
             return JK_FALSE;
         }
@@ -367,12 +365,17 @@
         if (NULL != temp_sa)
             remote_sa = temp_sa;
         else {
+            apr_pool_destroy(jk_apr_pool);
             JK_TRACE_EXIT(l);
             return JK_FALSE;
         }
 
         apr_sockaddr_ip_get(&remote_ipaddr, remote_sa);
 
+        /* No more use of data allocated from the jk_apr_pool
+         * APR pool below this line */
+        apr_pool_destroy(jk_apr_pool);
+
         laddr.s_addr = jk_inet_addr(remote_ipaddr);
 
 #else /* HAVE_APR */


On 12.05.2009 13:04, Henri Gomez wrote:
 FYI.
 
 If I comment out the apr_pool_clear() call, I don't get the initialisation error.
 
 2009/5/12 Henri Gomez henri.go...@gmail.com:
 Hi to all,

 I rebuild the mod_jk 1.2.28 on our i5/OS and Apache instance failed.

 Here is the stack trace :

 0009:259448 Stack:  Library/ Program Module  Stmt
 Procedure
 0009:259488 Stack:  QSYS   / QCMD455   :
 0009:259520 Stack:  QHTTPSVR   / QZHBMAINZHBMAIN 0 :
 _CXX_PEP__Fv
 0009:259552 Stack:  QHTTPSVR   / QZHBMAINZHBMAIN 18:
 main
 0009:259576 Stack:  QHTTPSVR   / QZHBMAINZHBMAIN 234   :
 BigSwitch__FiPPc
 0009:259608 Stack:  QHTTPSVR   / QZSRHTTPQZSRMAIN0 :
 _CXX_PEP__Fv
 0009:259640 Stack:  QHTTPSVR   / QZSRHTTPQZSRMAIN2 :
 main
 0009:267440 Stack:  QHTTPSVR   / QZSRCOREMAIN868   :
 apache_main
 0009:287992 Stack:  QHTTPSVR   / QZSRCOREHTTP_CONFI  5 :
 ap_run_post_config
 0009:288288 Stack:  QHTTPSVR   / MOD_JK1228  MOD_JK  60:
 jk_post_config
 0009:288320 Stack:  QHTTPSVR   / MOD_JK1228  MOD_JK  35:
 init_jk
 0009:288688 Stack:  QHTTPSVR   / MOD_JK1228  JK_WORKER   34:
 wc_open
 0009:288720 Stack:  QHTTPSVR   / MOD_JK1228  JK_WORKER   9 :
 build_worker_map
 0009:296848 Stack:  QHTTPSVR   / MOD_JK1228  JK_WORKER   28:
 wc_create_worker
 0009:298192 Stack:  QHTTPSVR   / MOD_JK1228  JK_AJP13_W  5 :
 validate
 0009:298208 Stack:  QHTTPSVR   / MOD_JK1228  JK_AJP_COM  29:
 ajp_validate
 0009:298216 Stack:  QHTTPSVR   / MOD_JK1228  JK_CONNECT  19:
 jk_resolve
 0009:316840 Stack:  QHTTPSVR   / QZSRAPR APR_POOLS   13:
 apr_pool_clear
 0009:316864 Stack:  QHTTPSVR   / QZSRAPR APR_POOLS   8 :
 allocator_free
 0009:316880 Stack:  QHTTPSVR   / QZSRCOREMAIN18:
 Main_Excp_Handler
 0009:316888 Stack:  QHTTPSVR   / QZSRAPR OS400TRACE  7 :
 apr_dstack_CCSID
 0009:326912 Stack:  QSYS   / QP0ZCPA QP0ZUDBG3 :
 Qp0zDumpStack
 0009:346808 Stack:  QSYS   / QP0ZSCPAQP0ZSDBG2 :
 Qp0zSUDumpStack
 0009:346824 Stack:  QSYS   / QP0ZSCPAQP0ZSDBG12:
 Qp0zSUDumpTargetStack
 0009:346824 Stack:  Completed
 0009:407280 apr_dump_trace(): dump for job
 678302/QTMHHTTP/DAPSERVER
 TRCTCPAPP Output

 The problem appears in jk_resolve() just after apr_pool_create().

 What happens if two threads enter jk_resolve() at the same time?

        if (!jk_apr_pool) {
            if (apr_pool_create(&jk_apr_pool, (apr_pool_t *)pool)
                != APR_SUCCESS) {
                JK_TRACE_EXIT(l);
                return JK_FALSE;
            }
        }
        apr_pool_clear(jk_apr_pool);
        if (apr_sockaddr_info_get
            (&remote_sa, host, APR_UNSPEC, (apr_port_t) port, 0, jk_apr_pool)
            != APR_SUCCESS) {
            JK_TRACE_EXIT(l);

Re: mod_jk 1.2.28 on i5/OS

2009-05-12 Thread Henri Gomez
I see you take a similar approach :)

Could you attach the patch file?

2009/5/12 Rainer Jung rainer.j...@kippdata.de:
 Hi Henri,

 can you try the below patch? It replaces the global pool by a function
 local one, which is OK, because the resolver calls are not in the
 performance critical path (mostly startup initialization and
 reconfiguration).

 Why do you think it is possible that multiple threads will enter
 jk_resolve() in parallel?

 Regards,

 Rainer

 Index: jk_connect.c
 ===================================================================
 --- jk_connect.c        (revision 763986)
 +++ jk_connect.c        (working copy)
 @@ -35,7 +35,6 @@
  #include "apr_errno.h"
  #include "apr_general.h"
  #include "apr_pools.h"
 -static apr_pool_t *jk_apr_pool = NULL;
  #endif
 
  #ifdef HAVE_SYS_FILIO_H
 @@ -342,17 +341,16 @@
  #ifdef HAVE_APR
          apr_sockaddr_t *remote_sa, *temp_sa;
          char *remote_ipaddr;
 +        apr_pool_t *jk_apr_pool = NULL;
 
 -        if (!jk_apr_pool) {
 -            if (apr_pool_create(&jk_apr_pool, (apr_pool_t *)pool) !=
 -                APR_SUCCESS) {
 -                JK_TRACE_EXIT(l);
 -                return JK_FALSE;
 -            }
 +        if (apr_pool_create(&jk_apr_pool, (apr_pool_t *)pool) !=
 +            APR_SUCCESS) {
 +            JK_TRACE_EXIT(l);
 +            return JK_FALSE;
          }
 -        apr_pool_clear(jk_apr_pool);
          if (apr_sockaddr_info_get
              (&remote_sa, host, APR_UNSPEC, (apr_port_t) port, 0,
               jk_apr_pool)
              != APR_SUCCESS) {
 +            apr_pool_destroy(jk_apr_pool);
              JK_TRACE_EXIT(l);
              return JK_FALSE;
          }
 @@ -367,12 +365,17 @@
          if (NULL != temp_sa)
              remote_sa = temp_sa;
          else {
 +            apr_pool_destroy(jk_apr_pool);
              JK_TRACE_EXIT(l);
              return JK_FALSE;
          }
 
          apr_sockaddr_ip_get(&remote_ipaddr, remote_sa);
 
 +        /* No more use of data allocated from the jk_apr_pool
 +         * APR pool below this line */
 +        apr_pool_destroy(jk_apr_pool);
 +
          laddr.s_addr = jk_inet_addr(remote_ipaddr);
 
  #else /* HAVE_APR */


 On 12.05.2009 13:04, Henri Gomez wrote:
 FYI.

 If I comment out the apr_pool_clear() call, I don't get the initialisation error.

 2009/5/12 Henri Gomez henri.go...@gmail.com:
 Hi to all,

 I rebuild the mod_jk 1.2.28 on our i5/OS and Apache instance failed.

 Here is the stack trace :

 0009:259448 Stack:  Library    / Program     Module      Stmt
 Procedure
 0009:259488 Stack:  QSYS       / QCMD                    455   :
 0009:259520 Stack:  QHTTPSVR   / QZHBMAIN    ZHBMAIN     0     :
 _CXX_PEP__Fv
 0009:259552 Stack:  QHTTPSVR   / QZHBMAIN    ZHBMAIN     18    :
 main
 0009:259576 Stack:  QHTTPSVR   / QZHBMAIN    ZHBMAIN     234   :
 BigSwitch__FiPPc
 0009:259608 Stack:  QHTTPSVR   / QZSRHTTP    QZSRMAIN    0     :
 _CXX_PEP__Fv
 0009:259640 Stack:  QHTTPSVR   / QZSRHTTP    QZSRMAIN    2     :
 main
 0009:267440 Stack:  QHTTPSVR   / QZSRCORE    MAIN        868   :
 apache_main
 0009:287992 Stack:  QHTTPSVR   / QZSRCORE    HTTP_CONFI  5     :
 ap_run_post_config
 0009:288288 Stack:  QHTTPSVR   / MOD_JK1228  MOD_JK      60    :
 jk_post_config
 0009:288320 Stack:  QHTTPSVR   / MOD_JK1228  MOD_JK      35    :
 init_jk
 0009:288688 Stack:  QHTTPSVR   / MOD_JK1228  JK_WORKER   34    :
 wc_open
 0009:288720 Stack:  QHTTPSVR   / MOD_JK1228  JK_WORKER   9     :
 build_worker_map
 0009:296848 Stack:  QHTTPSVR   / MOD_JK1228  JK_WORKER   28    :
 wc_create_worker
 0009:298192 Stack:  QHTTPSVR   / MOD_JK1228  JK_AJP13_W  5     :
 validate
 0009:298208 Stack:  QHTTPSVR   / MOD_JK1228  JK_AJP_COM  29    :
 ajp_validate
 0009:298216 Stack:  QHTTPSVR   / MOD_JK1228  JK_CONNECT  19    :
 jk_resolve
 0009:316840 Stack:  QHTTPSVR   / QZSRAPR     APR_POOLS   13    :
 apr_pool_clear
 0009:316864 Stack:  QHTTPSVR   / QZSRAPR     APR_POOLS   8     :
 allocator_free
 0009:316880 Stack:  QHTTPSVR   / QZSRCORE    MAIN        18    :
 Main_Excp_Handler
 0009:316888 Stack:  QHTTPSVR   / QZSRAPR     OS400TRACE  7     :
 apr_dstack_CCSID
 0009:326912 Stack:  QSYS       / QP0ZCPA     QP0ZUDBG    3     :
 Qp0zDumpStack
 0009:346808 Stack:  QSYS       / QP0ZSCPA    QP0ZSDBG    2     :
 Qp0zSUDumpStack
 0009:346824 Stack:  QSYS       / QP0ZSCPA    QP0ZSDBG    12    :
 Qp0zSUDumpTargetStack
 0009:346824 Stack:  Completed
 0009:407280 apr_dump_trace(): dump for job
 678302/QTMHHTTP/DAPSERVER
                                                 TRCTCPAPP Output

 The problem appears in jk_resolve() just after apr_pool_create().

 What happens if two threads enter jk_resolve() at the same time?

        if (!jk_apr_pool) {
            if (apr_pool_create(&jk_apr_pool, (apr_pool_t *)pool)
                != APR_SUCCESS) {
                JK_TRACE_EXIT(l);
                return JK_FALSE;
            }
        }
        apr_pool_clear(jk_apr_pool);
        if 

Re: mod_jk 1.2.28 on i5/OS

2009-05-12 Thread Rainer Jung
On 12.05.2009 15:31, Henri Gomez wrote:
 I see you take a similar approach :)

Yes, but based on your analysis.

 Could you attach the patch file ?

Attached.

 2009/5/12 Rainer Jung rainer.j...@kippdata.de:
 Hi Henri,

 can you try the patch below? It replaces the global pool with a
 function-local one, which is OK, because the resolver calls are not in
 the performance-critical path (mostly startup initialization and
 reconfiguration).

 Why do you think it is possible that multiple threads will enter
 jk_resolve() in parallel?

 Regards,

 Rainer

 Index: jk_connect.c
 ===================================================================
 --- jk_connect.c        (revision 763986)
 +++ jk_connect.c        (working copy)
 @@ -35,7 +35,6 @@
  #include "apr_errno.h"
  #include "apr_general.h"
  #include "apr_pools.h"
 -static apr_pool_t *jk_apr_pool = NULL;
  #endif
 
  #ifdef HAVE_SYS_FILIO_H
 @@ -342,17 +341,16 @@
  #ifdef HAVE_APR
          apr_sockaddr_t *remote_sa, *temp_sa;
          char *remote_ipaddr;
 +        apr_pool_t *jk_apr_pool = NULL;
 
 -        if (!jk_apr_pool) {
 -            if (apr_pool_create(&jk_apr_pool, (apr_pool_t *)pool) !=
 -                APR_SUCCESS) {
 -                JK_TRACE_EXIT(l);
 -                return JK_FALSE;
 -            }
 +        if (apr_pool_create(&jk_apr_pool, (apr_pool_t *)pool) !=
 +            APR_SUCCESS) {
 +            JK_TRACE_EXIT(l);
 +            return JK_FALSE;
          }
 -        apr_pool_clear(jk_apr_pool);
          if (apr_sockaddr_info_get
              (&remote_sa, host, APR_UNSPEC, (apr_port_t) port, 0,
               jk_apr_pool)
              != APR_SUCCESS) {
 +            apr_pool_destroy(jk_apr_pool);
              JK_TRACE_EXIT(l);
              return JK_FALSE;
          }
 @@ -367,12 +365,17 @@
          if (NULL != temp_sa)
              remote_sa = temp_sa;
          else {
 +            apr_pool_destroy(jk_apr_pool);
              JK_TRACE_EXIT(l);
              return JK_FALSE;
          }
 
          apr_sockaddr_ip_get(&remote_ipaddr, remote_sa);
 
 +        /* No more use of data allocated from the jk_apr_pool
 +         * APR pool below this line */
 +        apr_pool_destroy(jk_apr_pool);
 +
          laddr.s_addr = jk_inet_addr(remote_ipaddr);
 
  #else /* HAVE_APR */


 On 12.05.2009 13:04, Henri Gomez wrote:
 FYI.

 If I comment out the apr_pool_clear() call, I don't get the
 initialisation error.

 2009/5/12 Henri Gomez henri.go...@gmail.com:
 Hi to all,

 I rebuilt mod_jk 1.2.28 on our i5/OS and the Apache instance failed.

 Here is the stack trace :

 0009:259448 Stack:  Library    / Program     Module      Stmt  :  Procedure
 0009:259488 Stack:  QSYS       / QCMD                    455   :
 0009:259520 Stack:  QHTTPSVR   / QZHBMAIN    ZHBMAIN     0     :
 _CXX_PEP__Fv
 0009:259552 Stack:  QHTTPSVR   / QZHBMAIN    ZHBMAIN     18    :
 main
 0009:259576 Stack:  QHTTPSVR   / QZHBMAIN    ZHBMAIN     234   :
 BigSwitch__FiPPc
 0009:259608 Stack:  QHTTPSVR   / QZSRHTTP    QZSRMAIN    0     :
 _CXX_PEP__Fv
 0009:259640 Stack:  QHTTPSVR   / QZSRHTTP    QZSRMAIN    2     :
 main
 0009:267440 Stack:  QHTTPSVR   / QZSRCORE    MAIN        868   :
 apache_main
 0009:287992 Stack:  QHTTPSVR   / QZSRCORE    HTTP_CONFI  5     :
 ap_run_post_config
 0009:288288 Stack:  QHTTPSVR   / MOD_JK1228  MOD_JK      60    :
 jk_post_config
 0009:288320 Stack:  QHTTPSVR   / MOD_JK1228  MOD_JK      35    :
 init_jk
 0009:288688 Stack:  QHTTPSVR   / MOD_JK1228  JK_WORKER   34    :
 wc_open
 0009:288720 Stack:  QHTTPSVR   / MOD_JK1228  JK_WORKER   9     :
 build_worker_map
 0009:296848 Stack:  QHTTPSVR   / MOD_JK1228  JK_WORKER   28    :
 wc_create_worker
 0009:298192 Stack:  QHTTPSVR   / MOD_JK1228  JK_AJP13_W  5     :
 validate
 0009:298208 Stack:  QHTTPSVR   / MOD_JK1228  JK_AJP_COM  29    :
 ajp_validate
 0009:298216 Stack:  QHTTPSVR   / MOD_JK1228  JK_CONNECT  19    :
 jk_resolve
 0009:316840 Stack:  QHTTPSVR   / QZSRAPR     APR_POOLS   13    :
 apr_pool_clear
 0009:316864 Stack:  QHTTPSVR   / QZSRAPR     APR_POOLS   8     :
 allocator_free
 0009:316880 Stack:  QHTTPSVR   / QZSRCORE    MAIN        18    :
 Main_Excp_Handler
 0009:316888 Stack:  QHTTPSVR   / QZSRAPR     OS400TRACE  7     :
 apr_dstack_CCSID
 0009:326912 Stack:  QSYS       / QP0ZCPA     QP0ZUDBG    3     :
 Qp0zDumpStack
 0009:346808 Stack:  QSYS       / QP0ZSCPA    QP0ZSDBG    2     :
 Qp0zSUDumpStack
 0009:346824 Stack:  QSYS       / QP0ZSCPA    QP0ZSDBG    12    :
 Qp0zSUDumpTargetStack
 0009:346824 Stack:  Completed
 0009:407280 apr_dump_trace(): dump for job
 678302/QTMHHTTP/DAPSERVER
 TRCTCPAPP Output

 The problem appears in jk_resolve just after apr_pool_create.

 What happens if two threads enter jk_resolve() at the same time?

if (!jk_apr_pool) {
            if (apr_pool_create(&jk_apr_pool, (apr_pool_t *)pool) != APR_SUCCESS) {
JK_TRACE_EXIT(l);
return 

Re: mod_jk 1.2.28 on i5/OS

2009-05-12 Thread Henri Gomez

 Why do you think it is possible that multiple threads will enter
 jk_resolve() in parallel?

It seems to be the case at least on the i5/OS implementation.

This one is heavily multi-threaded.

-
To unsubscribe, e-mail: dev-unsubscr...@tomcat.apache.org
For additional commands, e-mail: dev-h...@tomcat.apache.org



Re: mod_jk 1.2.28 on i5/OS

2009-05-12 Thread Henri Gomez
 On 12.05.2009 15:31, Henri Gomez wrote:
 I see you take a similar approach :)

 Yes, but based on your analysis.


It works :)

If nobody objects, you should commit it; a static apr_pool in a
multi-threaded application is evil ;-(




Re: mod_jk 1.2.28 on i5/OS

2009-05-12 Thread Rainer Jung
On 12.05.2009 15:57, Henri Gomez wrote:
 On 12.05.2009 15:31, Henri Gomez wrote:
 I see you take a similar approach :)
 Yes, but based on your analysis.

 
 It works :)
 
 If nobody objects, you should commit it; a static apr_pool in a
 multi-threaded application is evil ;-(

I will. But it's still unclear why multiple threads should call it. The
whole initialization is done single-threaded, and I remember that the only
other i5-related problem we had was when the life cycle of one of the pools
was different. So it may be more related to the way the double
initialization of httpd is done on i5, and not directly to concurrency.

Nevertheless I think the local variable is cleaner/safer unless
jk_resolve() ever moves into the performance-critical path, where
we might then switch back to the global variable, but then also using
correct locking.

The pool clean was introduced last October in

http://svn.eu.apache.org/viewvc/tomcat/connectors/trunk/jk/native/common/jk_connect.c?r1=706039&r2=745136&diff_format=h

Don't know exactly what Mladen's motivation for it was, but the locally
created and destroyed pool will release resources as well.

Thanks for reporting and breaking it down to jk_resolve() and the pool.

Regards,

Rainer




Re: mod_jk 1.2.28 on i5/OS

2009-05-12 Thread Henri Gomez
Under i5/OS, the IBM HTTP server uses Apache but with a different strategy.

From the IBM documentation (SG246716)

The HTTP Server (powered by Apache) has its own multi-process model.
Each HTTP server starts two (or three) processes under the QHTTPSVR
subsystem:
- The manager process
- The primary process
- The backup process, when configured with the HotBackup directive
Each child process maintains its own thread pool independently.


...

Tip: Asynchronous I/O is one of many enhancements to the standard
Apache server as
delivered to IBM Rochester by the Apache Software Foundation (ASF).
This is just one of
the many reasons that the parenthetical phrase (powered by Apache)
means integration.


It's really an IBM HTTP server powered by Apache.

2009/5/12 Rainer Jung rainer.j...@kippdata.de:
 On 12.05.2009 15:57, Henri Gomez wrote:
 On 12.05.2009 15:31, Henri Gomez wrote:
 I see you take a similar approach :)
 Yes, but based on your analysis.


 It works :)
 
 If nobody objects, you should commit it; a static apr_pool in a
 multi-threaded application is evil ;-(

 I will. But it's still unclear why multiple threads should call it. The
 whole initialization is done single-threaded, and I remember that the only
 other i5-related problem we had was when the life cycle of one of the pools
 was different. So it may be more related to the way the double
 initialization of httpd is done on i5, and not directly to concurrency.
 
 Nevertheless I think the local variable is cleaner/safer unless
 jk_resolve() ever moves into the performance-critical path, where
 we might then switch back to the global variable, but then also using
 correct locking.

 The pool clean was introduced last October in

 http://svn.eu.apache.org/viewvc/tomcat/connectors/trunk/jk/native/common/jk_connect.c?r1=706039&r2=745136&diff_format=h

 Don't know exactly what Mladen's motivation for it was, but the locally
 created and destroyed pool will release resources as well.

 Thanks for reporting and breaking it down to jk_resolve() and the pool.

 Regards,

 Rainer







Re: mod_jk 1.2.28 on i5/OS

2009-05-12 Thread Henri Gomez
 http://svn.eu.apache.org/viewvc/tomcat/connectors/trunk/jk/native/common/jk_connect.c?r1=706039&r2=745136&diff_format=h

 Don't know exactly what Mladen's motivation for it was, but the locally
 created and destroyed pool will release resources as well.

 Thanks for reporting and breaking it down to jk_resolve() and the pool.

Thanks to you for the quick fix.

I hope I'm the only one with this problem, but I suspect the Windows and
Unix threaded implementations may also be affected ;(




Re: mod_jk 1.2.28 on i5/OS

2009-05-12 Thread Rainer Jung
Before committing I gave it a second thought. My theory is motivated by
the crash around main_log we fixed in 2007. There we learned, that on i5
the two initialization passes of httpd are done in the same process. We
also saw, that the pool pconf got invalidated after the first pass. That
was a problem for the global static main_log, because we allocated it
from pconf, and if the pointer was not NULL, we didn't allocate it again
during the second initialization and instead used its now invalid contents.

In the file mod_jk.c there is no other memory allocated from an apr pool
in the first initialization run and reused in the second.

Outside of mod_jk.c we use apr pools only in jk_resolve() in file
jk_connect.c. And yes, again it is based on pconf!

So what happens is, that the static apr pool jk_apr_pool in jk_connect.c
is created as a sub pool of pconf during the first init run, and then
destroyed at the beginning of the second init run when pconf gets
cleared. During the second init run we then reuse jk_apr_pool although
it is destroyed and no longer valid. So the patch I sent should be
OK, because it doesn't reuse the pool, but it would be more correct to
register a cleanup that invalidates the pointer to the pool. I'll see
how that goes and maybe send you that as a patch too.

Regards,

Rainer

On 12.05.2009 17:55, Henri Gomez wrote:
 http://svn.eu.apache.org/viewvc/tomcat/connectors/trunk/jk/native/common/jk_connect.c?r1=706039&r2=745136&diff_format=h

 Don't know exactly what Mladen's motivation for it was, but the locally
 created and destroyed pool will release resources as well.

 Thanks for reporting and breaking it down to jk_resolve() and the pool.
 
 Thanks to you for the quick fix.
 
 I hope I'm the only one with this problem, but I suspect the Windows and
 Unix threaded implementations may also be affected ;(




Re: mod_jk 1.2.28 on i5/OS

2009-05-12 Thread Rainer Jung
Here's the patch keeping the original structure but using a cleanup to
destroy the pool reference. If it works, I would like that better.

Regards,

Rainer
Index: jk_connect.c
===================================================================
--- jk_connect.c	(revision 763986)
+++ jk_connect.c	(working copy)
@@ -35,7 +35,7 @@
 #include "apr_errno.h"
 #include "apr_general.h"
 #include "apr_pools.h"
-static apr_pool_t *jk_apr_pool = NULL;
+static apr_pool_t *jk_resolv_pool = NULL;
 #endif
 
 #ifdef HAVE_SYS_FILIO_H
@@ -58,6 +58,13 @@
 typedef const char* SET_TYPE;
 #endif
 
+static apr_status_t jk_resolv_cleanup(void *d)
+{
+    /* Clean up pointer content */
+    *(apr_pool_t **)d = NULL;
+    return APR_SUCCESS;
+}
+
 /** Set socket to blocking
  * @param sd  socket to manipulate
  * @return    errno: fcntl returns -1 (!WIN32)
@@ -343,15 +350,18 @@
     apr_sockaddr_t *remote_sa, *temp_sa;
     char *remote_ipaddr;
 
-    if (!jk_apr_pool) {
-        if (apr_pool_create(&jk_apr_pool, (apr_pool_t *)pool) != APR_SUCCESS) {
+    if (!jk_resolv_pool) {
+        if (apr_pool_create(&jk_resolv_pool, (apr_pool_t *)pool) != APR_SUCCESS) {
             JK_TRACE_EXIT(l);
             return JK_FALSE;
         }
     }
-    apr_pool_clear(jk_apr_pool);
+    /* We need to clear the pool reference, if the pool gets destroyed
+     * via its parent pool. */
+    apr_pool_cleanup_register(jk_resolv_pool, &jk_resolv_pool,
+                              jk_resolv_cleanup, jk_resolv_cleanup);
+    apr_pool_clear(jk_resolv_pool);
     if (apr_sockaddr_info_get
-        (&remote_sa, host, APR_UNSPEC, (apr_port_t) port, 0, jk_apr_pool)
+        (&remote_sa, host, APR_UNSPEC, (apr_port_t) port, 0, jk_resolv_pool)
         != APR_SUCCESS) {
         JK_TRACE_EXIT(l);
         return JK_FALSE;



Re: mod_jk 1.2.28 on i5/OS

2009-05-12 Thread Henri Gomez
I'll try it tomorrow!

2009/5/12 Rainer Jung rainer.j...@kippdata.de:
 Here's the patch keeping the original structure but using a cleanup to
 destroy the pool reference. If it works, I would like that better.

 Regards,

 Rainer



