Forgot to report: when no crash occurs, the reported slowdown can be observed at around 680
requests (= Rexx interpreter instances), with each subsequent request/interpreter instance then
taking around a second.

One time this slowdown already occurred at around request # 350, so I am not sure what
influences that.

---rony


On 02.12.2021 17:53, Rony G. Flatscher wrote:
>
> Tested r12333, there is a different crash at:
>
>     ... cut ...
>
>     ---------------  T H R E A D  ---------------
>
>     Current thread (0x00007f5a00e0a5f0):  JavaThread "RexxCleanupRef.clean_1" daemon [_thread_in_native, id=158097, stack(0x00007f5a243e0000,0x00007f5a244e1000)]
>
>     Stack: [0x00007f5a243e0000,0x00007f5a244e1000],  sp=0x00007f5a244df630,  free space=1021k
>     Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
>     C  [librexx.so.4+0x20fe12]  SysActivity::equals(unsigned long)+0x14
>     C  [librexx.so.4+0x20fe72]  Activity::isThread(unsigned long)+0x2e
>     C  [librexx.so.4+0x20f1f8]  ActivityManager::findActivity(unsigned long)+0x7c
>     C  [librexx.so.4+0x20f291]  ActivityManager::findActivity()+0x15
>     C  [librexx.so.4+0x20f706]  ActivityManager::attachThread()+0x20
>     C  [librexx.so.4+0x27884f]  InterpreterInstance::attachThread()+0x71
>     C  [librexx.so.4+0x2787b4]  InterpreterInstance::attachThread(RexxThreadContext_*&)+0x20
>     C  [librexx.so.4+0x1ae43a]  AttachThread+0x4a
>
>     Java frames: (J=compiled Java code, j=interpreted, Vv=VM code)
>     J 5173  org.rexxla.bsf.engines.rexx.RexxCleanupRef.jniUnregisterRexxObject(Ljava/lang/String;)I (0 bytes) @ 0x00007f5a53eef774 [0x00007f5a53eef720+0x0000000000000054]
>     J 5174 c1 org.rexxla.bsf.engines.rexx.RexxCleanupRef.finalizeRexxObject()V (82 bytes) @ 0x00007f5a4cf84954 [0x00007f5a4cf842a0+0x00000000000006b4]
>     J 5171 c1 org.rexxla.bsf.engines.rexx.RexxCleanupRef.access$100(Lorg/rexxla/bsf/engines/rexx/RexxCleanupRef;)V (5 bytes) @ 0x00007f5a4cf83e94 [0x00007f5a4cf83e40+0x0000000000000054]
>     j  org.rexxla.bsf.engines.rexx.RexxCleanupRef$1.run()V+17
>     j  java.lang.Thread.run()V+11 [email protected]
>     v  ~StubRoutines::call_stub
>
>     ... cut ...
>
> Full log file attached.
>
> ---rony
>
>
> On 30.11.2021 12:35, Rony G. Flatscher wrote:
>>
>> O.K., I set up the Tomcat server differently (without systemctl, which intercepted the crash
>> and restarted Tomcat), then started Tomcat manually with the same setup (Java 17LTS, debug
>> version of ooRexx r12330) and got the following hs_err*.log file:
>>
>>     ... cut ...
>>     ---------------  T H R E A D  ---------------
>>
>>     Current thread (0x00007f5b20003800):  JavaThread "http-nio-42857-exec-2" daemon [_thread_in_native, id=124916, stack(0x00007f5b1e6ed000,0x00007f5b1e7ee000)]
>>
>>     Stack: [0x00007f5b1e6ed000,0x00007f5b1e7ee000],  sp=0x00007f5b1e7eacc0,  free space=1015k
>>     Native frames: (J=compiled Java code, A=aot compiled Java code, j=interpreted, Vv=VM code, C=native code)
>>     C  [librexx.so.4+0x20ffee]  Activity::hasRunPermission()+0x10
>>     C  [librexx.so.4+0x20e8b7]  ActivityManager::dispatchNext()+0x53
>>     C  [librexx.so.4+0x20f2c3]  ActivityManager::releaseAccess()+0x2d
>>     C  [librexx.so.4+0x20e4b3]  ActivityManager::addWaitingActivity(Activity*, bool)+0x89
>>     C  [librexx.so.4+0x1d8372]  ActivityManager::relinquishIfNeeded(Activity*)+0x33
>>     C  [librexx.so.4+0x1cd0b6]  RexxActivation::run(RexxObject*, RexxString*, RexxObject**, unsigned long, RexxInstruction*, ProtectedObject&)+0x4fc
>>     C  [librexx.so.4+0x1da047]  RexxCode::call(Activity*, RoutineClass*, RexxString*, RexxObject**, unsigned long, RexxString*, RexxString*, ActivationContext, ProtectedObject&)+0x99
>>     C  [librexx.so.4+0x1d9fa6]  RexxCode::call(Activity*, RoutineClass*, RexxString*, RexxObject**, unsigned long, ProtectedObject&)+0x64
>>     C  [librexx.so.4+0x18ace0]  RoutineClass::call(Activity*, RexxString*, RexxObject**, unsigned long, ProtectedObject&)+0x68
>>     C  [librexx.so.4+0x21af09]  CallRoutineDispatcher::run()+0x7b
>>     C  [librexx.so.4+0x1deed8]  NativeActivation::run(ActivityDispatcher&)+0x8e
>>     C  [librexx.so.4+0x219c57]  Activity::run(ActivityDispatcher&)+0xbb
>>     C  [librexx.so.4+0x1b150d]  CallRoutine+0x82
>>
>>     Java frames: (J=compiled Java code, j=interpreted, Vv=VM code)
>>     J 4545  org.rexxla.bsf.engines.rexx.RexxAndJava.jniRexxRunProgram
>>     ... cut ...
>>
>> The full log file is enclosed.
>>
>> ---
>>
>> Regarding the repeated e-mails: sorry, this is SourceForge again! :(
>>
>> ---rony
>>
>>
>>
>>
>>
>> On 30.11.2021 12:09, Rony G. Flatscher wrote:
>>>
>>> Tested r12330. It seems to get a little further (ca. 50 more requests before slowing down);
>>> occasionally there are still crashes of ooRexx in the Tomcat server.
>>>
>>> ---rony
>>>
>>>
>>> On 29.11.2021 13:33, Rony G. Flatscher wrote:
>>>>
>>>> Tested r12326 in the "debug" configuration, which did not change the behavior.
>>>>
>>>> After spending quite some time on it, a few corrections/remarks:
>>>>
>>>>   * It seems that the number of successful requests depends on the ooRexx configuration,
>>>>     whether it is "debug" or "release". The "debug" version serves about 50 fewer successful
>>>>     (quickly served) requests than the "release" configuration that I used yesterday.
>>>>
>>>>   * From time to time there are crashes which tear down Tomcat 9; systemd then restarts the
>>>>     service, so that after a short while the responses are quick again (for up to 680-710
>>>>     requests, when the slowdown sets in).
>>>>
>>>>   * If there are no crashes, then the requests slow down to taking a little more than a
>>>>     second each.
>>>>
>>>> Looking at the Tomcat log files, there are no traces from ooRexx there.
>>>>
>>>> ---rony
>>>>
>>>>
>>>>
>>>> On 28.11.2021 20:39, Rony G. Flatscher wrote:
>>>>> Tested with today's r12325. The slow-down to approx. one second per request still occurs;
>>>>> however, it seems that approx. 50 more instances (over r12317, now around 730 vs. 680) can
>>>>> be created before the requests slow down and quickly drop to a little more than one second
>>>>> each from that moment on. Restarting Tomcat gives quick responses again, before it drops
>>>>> down again after approx. 730 requests.
>>>>>
>>>>> ---rony
>>>>>
>>>>> On 25.11.2021 13:58, Rony G. Flatscher wrote:
>>>>>> Tested with r12317. The slow-down to approx. one second per request still occurs; however,
>>>>>> it seems that approx. 30-40 more instances can be created before the requests slow down and
>>>>>> quickly drop to a little more than one second each.
>>>>>>
>>>>>> Maybe one observation with regard to Tomcat 9: it usually uses the same thread to service
>>>>>> client requests for the same JSP (which is the case in the test here); a small sketch for
>>>>>> checking this from within the JSP follows below.
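>>>>>>
>>>>>> A minimal sketch, just for illustration (the helper class name is made up; "out" stands for
>>>>>> the JSP's implicit response writer):
>>>>>>
>>>>>>     import java.io.PrintWriter;
>>>>>>
>>>>>>     // Hypothetical helper: print which Tomcat worker thread is currently
>>>>>>     // servicing the JSP request, to check Tomcat's thread reuse.
>>>>>>     public class ServingThreadLogger {
>>>>>>         public static void logServingThread(PrintWriter out) {
>>>>>>             Thread t = Thread.currentThread();
>>>>>>             out.println("served by thread: " + t.getName() + " (id " + t.getId() + ")");
>>>>>>         }
>>>>>>     }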
>>>>>>
>>>>>> ---rony
>>>>>>
>>>>>> On 23.11.2021 16:12, Rony G. Flatscher wrote:
>>>>>>> Tested 64-bit Tomcat+ooRexx on Linux, using the same Java 17LTS as on Windows, a debug
>>>>>>> build of ooRexx from trunk (r12316), and the same web application that uses ooRexx in JSPs
>>>>>>> as on the Windows Tomcat server.
>>>>>>>
>>>>>>> The test constantly requests the same ooRexx JSP page from the Linux server for ten
>>>>>>> seconds and then repeats that once. Each request causes a Rexx interpreter instance (RII)
>>>>>>> to be created in the Tomcat server to execute all Rexx scripts in the requested JSP. (This
>>>>>>> RII also gets its .output redirected to the 'out' Java object for that JSP run, so that
>>>>>>> the output of say statements gets sent back as normal text to the client.) A rough sketch
>>>>>>> of this per-request pattern follows below.
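>>>>>>>
>>>>>>> The sketch only illustrates the idea via the standard javax.script (JSR-223) API; the
>>>>>>> engine name "rexx" and the assumption that the engine honors the context writer are
>>>>>>> illustrative and not the exact BSF4ooRexx code used in the web application:
>>>>>>>
>>>>>>>     import javax.script.ScriptEngine;
>>>>>>>     import javax.script.ScriptEngineManager;
>>>>>>>     import javax.script.ScriptException;
>>>>>>>     import java.io.PrintWriter;
>>>>>>>
>>>>>>>     // Rough per-request pattern: create a fresh Rexx engine (and thereby a new
>>>>>>>     // Rexx interpreter instance), redirect its output to the response writer
>>>>>>>     // ("out" in a JSP), then run the Rexx script(s) embedded in the page.
>>>>>>>     public class RexxJspRequestSketch {
>>>>>>>         public static void serveRequest(PrintWriter out) throws ScriptException {
>>>>>>>             // the engine name "rexx" is an assumption; BSF4ooRexx registers its own name(s)
>>>>>>>             ScriptEngine engine = new ScriptEngineManager().getEngineByName("rexx");
>>>>>>>             engine.getContext().setWriter(out);  // so "say" output reaches the client
>>>>>>>             engine.eval("say 'Hello from a fresh ooRexx interpreter instance'");
>>>>>>>         }
>>>>>>>     }
>>>>>>>
>>>>>>> Every HTTP request goes through something like serveRequest(out), i.e. one
>>>>>>> engine/interpreter instance per request.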
>>>>>>>
>>>>>>> After restarting Tomcat, the very first run of fetching the ooRexx test JSP as quickly as
>>>>>>> possible is rather quick (approx. 13 msecs/req) up to request # 400 (sometimes up to
>>>>>>> # 600), after which the requests get visibly slower and slower and then (after approx. 30
>>>>>>> more requests) start to need a little more than a second each (e.g. 1179 msecs/req) to
>>>>>>> return, as if something blocks ooRexx from serving the JSP. From this point on each
>>>>>>> request needs a little more than a second (a timeout?) until Tomcat gets restarted.
>>>>>>>
>>>>>>> Not sure how to debug this, so any advice, ideas and hints are very welcome!
>>>>>>>
>>>>>>> ---rony
>>>>>>>
>>>>>>> P.S.: Tested with three builds of ooRexx from trunk (release, relWithDebInfo, debug), all
>>>>>>> exhibiting the same behaviour.
>>>>>>> P.P.S.: There are no errors in the various Linux Tomcat log files.
>>>>>>> P.P.P.S.: Since yesterday's fix, Tomcat on the Windows server remains responsive no matter
>>>>>>> how many requests with ooRexx in JSPs get sent to it.
