Bonjour Jean Louis,

On 04.01.2026 08:27, Jean Louis Faucher wrote:


On 3 Jan 2026, at 23:03, P.O. Jonsson <[email protected]> wrote:

Hi Rony,

Have you monitored the memory usage while running the test? In the cases where I have seen similar problems, it has invariably been a lack of physical memory, with the machine starting to swap memory out to disk.
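
For example, something along these lines could watch the paging counters while the test runs (a minimal sketch, assuming macOS, where vm_stat reports pageout/swap activity; SysSleep ships with ooRexx):

-- sample the macOS paging/swapping counters every 5 seconds
do 12                                    -- about one minute of samples
   'vm_stat | grep -E "Pageouts|Swap"'   -- command string goes to the default shell
   call SysSleep 5                       -- pause 5 seconds between samples
end

Steadily growing pageouts/swapouts during the import would confirm the swapping theory.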



Tested on Apple M1 Pro 32 GB


created test_01.json: 00:00:00.030275
wrote test_01.json: 00:00:00.000987
created test_02.json: 00:00:00.052254
wrote test_02.json: 00:00:00.001881
---
importing test_02.json ...
importing test_02.json lasted: 00:00:19.435259
test_02.json: res~items: 180000
importing test_02.json ...
importing test_02.json lasted: 00:02:06.677061
test_02.json: res~items: 180000
importing test_01.json ...
importing test_01.json lasted: 00:00:51.895894
test_01.json: res~items: 90000
importing test_01.json ...
importing test_01.json lasted: 00:00:13.390069
test_01.json: res~items: 90000


If you add drop res, then it's much better:

do fn over "test_02.json", "test_02.json", "test_01.json", "test_01.json"
   res=read_file(fn)
   say fn":" "res~items:" res~items
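   -- release the previous parse result so the next GC cycle no longer marks its objects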
   drop res
end


created test_01.json: 00:00:00.023453
wrote test_01.json: 00:00:00.000948
created test_02.json: 00:00:00.043462
wrote test_02.json: 00:00:00.002323
---
importing test_02.json ...
importing test_02.json lasted: 00:00:19.998360
test_02.json: res~items: 180000
importing test_02.json ...
importing test_02.json lasted: 00:00:24.779940
test_02.json: res~items: 180000
importing test_01.json ...
importing test_01.json lasted: 00:00:01.770389
test_01.json: res~items: 90000
importing test_01.json ...
importing test_01.json lasted: 00:00:01.420197
test_01.json: res~items: 90000





Repeating the read of test_02.json takes approx. five (!) times longer (first read 01:10.770, repeat 06:03.864 minutes)! Also, the first read of test_01.json takes five times longer than the second one (first read 02:15.433, second read 00:43.929). In addition, the latest "json.cls" is enclosed (committed to trunk today); it seems to read almost twice as fast as the release version (test.rex was run with the enclosed version of json.cls).

Now, the test data did not change and the code did not change, so it is strange that such a big variation can be observed (unless I am missing something obvious).

So maybe creating 180,000 (90,000) directory objects and storing them in an array has some heavy effect on ooRexx?
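
To isolate that effect from the JSON parsing itself, a standalone stress test along these lines might help (a minimal sketch; .directory, .array, and TIME's elapsed option are standard ooRexx, and the 180000 count just mirrors the test data):

arr = .array~new
call time 'R'                    -- reset the elapsed-time clock
do i = 1 to 180000
   d = .directory~new            -- one small directory per simulated JSON object
   d["id"] = i
   arr~append(d)
end
say "created" arr~items "directories in" time('E') "seconds"

Running that loop twice in the same process (with and without a drop arr in between) should show whether the slowdown reproduces without json.cls.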

What might be the reason? What can be done about it?


Given the impact of drop res, I would say it is the GC marking that slows down the execution: as long as res still references the previous parse result, every mark phase has to traverse all of those live objects again.

Attached:
profiling without drop.txt
profiling with drop.txt

I use (or rather ChatGPT uses) DeadObjectPool::findFit as an example to explain how to interpret the profiling output. The problem doesn't stem from findFit itself, but from the fact that it is called very frequently, which explains its presence at the top of the list.

*wow*, thank you for this great analysis and the insights you have been able to come up with!

Kudos!

Best regards

---rony

_______________________________________________
Oorexx-devel mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/oorexx-devel
