Dear P.O.,
On 04.01.2026 10:49, ooRexx wrote:
I ran the test twice with the json.cls appended to Rony's email. I do not see any difference or
“slow-down” the second time; the two runs show more or less the same timings.
Hmm, I was not clear enough about what I meant: "second" refers to the second read of
"test_02.json" in the program's loop. In your case (first overall run) the first read of
"test_02.json" lasted "00:00:41.931019", whereas the second read in the loop took "00:01:48.512751",
which is about 2.5 times slower.
So your machine seems to be much faster, yet the program exhibits the same behaviour: the second
read of "test_02.json" is much slower than the first. The observation on reading "test_01.json"
twice after that also matches what I see here: reading "test_01.json" the first time (third loop
iteration) takes "00:00:51.876754", whereas reading it another time (fourth loop iteration)
takes only about a quarter of that!
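For reference, a rough sketch of what the import loop in test.rex might look like; read_file,
the file slurping, and the json.cls usage below are my reconstruction (assumptions), not the
code from the attachment:

   do fn over "test_02.json", "test_02.json", "test_01.json", "test_01.json"
      say "importing" fn "..."
      before = .dateTime~new                     -- timestamp before the import
      res = read_file(fn)                        -- parse the JSON file into ooRexx objects
      say "importing" fn "lasted:" (.dateTime~new - before)
      say fn":" "res~items:" res~items
   end
   exit

   ::routine read_file                           -- hypothetical helper, not the original
     use strict arg fileName
     stream = .stream~new(fileName)
     text = stream~charIn(1, stream~chars)       -- slurp the whole file
     stream~close
     return .json~new~fromJson(text)             -- parse with json.cls
   ::requires "json.cls"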
Thank you for your testing!
Best regards
---rony
The last iteration is even faster than the earlier ones, so I guess either the GC has done
something or (more probably) my machine has cached the data and learned.
Mac, macOS Sequoia 15.7.2
3.2 GHz 16-Core Intel Xeon W
96 GB 2933 MHz DDR4
jenkins@POs-16-Core-Pro json % rexx test.rex
created test_01.json: 00:00:00.044627
wrote test_01.json: 00:00:00.001806
created test_02.json: 00:00:00.082784
wrote test_02.json: 00:00:00.005616
---
importing test_02.json ...
importing test_02.json lasted: 00:00:41.931019
test_02.json: res~items: 180000
importing test_02.json ...
importing test_02.json lasted: 00:01:48.512751
test_02.json: res~items: 180000
importing test_01.json ...
importing test_01.json lasted: 00:00:51.876754
test_01.json: res~items: 90000
importing test_01.json ...
importing test_01.json lasted: 00:00:14.082174
test_01.json: res~items: 90000
jenkins@POs-16-Core-Pro json % rexx test.rex
created test_01.json: 00:00:00.046251
wrote test_01.json: 00:00:00.001825
created test_02.json: 00:00:00.084495
wrote test_02.json: 00:00:00.005528
---
importing test_02.json ...
importing test_02.json lasted: 00:00:41.617933
test_02.json: res~items: 180000
importing test_02.json ...
importing test_02.json lasted: 00:01:48.699853
test_02.json: res~items: 180000
importing test_01.json ...
importing test_01.json lasted: 00:00:50.627889
test_01.json: res~items: 90000
importing test_01.json ...
importing test_01.json lasted: 00:00:13.548549
test_01.json: res~items: 90000
Hälsningar/Regards/Grüsse,
ooRexx
[email protected]
On 04.01.2026 at 08:27, Jean Louis Faucher <[email protected]> wrote:
Hi Rony, P.O.
On 3 Jan 2026, at 23:03, P.O. Jonsson <[email protected]> wrote:
Hi Rony,
Have you monitored the memory usage while running the test? In the cases where I have had similar
problems, it has invariably been a lack of physical memory, with the machine starting to swap
memory out to disk.
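Just an illustrative sketch, not part of the original test: on macOS one can watch physical
memory and swap usage from a small Rexx helper that shells out to the stock vm_stat and sysctl
tools while the test runs:

   /* sample memory and swap statistics a few times during the test */
   do 3
      "vm_stat"                  -- page-level memory statistics
      "sysctl vm.swapusage"      -- current swap file usage
      call SysSleep 10           -- wait 10 seconds between samples
   end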
Tested on Apple M1 Pro 32 GB
created test_01.json: 00:00:00.030275
wrote test_01.json: 00:00:00.000987
created test_02.json: 00:00:00.052254
wrote test_02.json: 00:00:00.001881
---
importing test_02.json ...
importing test_02.json lasted: 00:00:19.435259
test_02.json: res~items: 180000
importing test_02.json ...
importing test_02.json lasted: 00:02:06.677061
test_02.json: res~items: 180000
importing test_01.json ...
importing test_01.json lasted: 00:00:51.895894
test_01.json: res~items: 90000
importing test_01.json ...
importing test_01.json lasted: 00:00:13.390069
test_01.json: res~items: 90000
If you add drop res, it gets better:
do fn over "test_02.json", "test_02.json", "test_01.json", "test_01.json"
   res = read_file(fn)
   say fn":" "res~items:" res~items
   drop res                -- release the reference before the next iteration
end
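With drop in place, the reference from the previous iteration is released before the next import
starts, so the parsed object graph becomes collectible and the GC has far fewer live objects to
mark: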
created test_01.json: 00:00:00.023453
wrote test_01.json: 00:00:00.000948
created test_02.json: 00:00:00.043462
wrote test_02.json: 00:00:00.002323
---
importing test_02.json ...
importing test_02.json lasted: 00:00:19.998360
test_02.json: res~items: 180000
importing test_02.json ...
importing test_02.json lasted: 00:00:24.779940
test_02.json: res~items: 180000
importing test_01.json ...
importing test_01.json lasted: 00:00:01.770389
test_01.json: res~items: 90000
importing test_01.json ...
importing test_01.json lasted: 00:00:01.420197
test_01.json: res~items: 90000
Reading test_02.json a second time lasts approximately five (!) times longer (01:10.770 vs.
06:03.864 minutes)! Reading test_01.json the first time also lasts about three times longer than
the second time (02:15.433 vs. 00:43.929). Also, the latest "json.cls" is enclosed (committed to
trunk today), which seems to read almost twice as fast as the release version (test.rex was
carried out with the enclosed version of json.cls).
Now, the test data did not change and the code did not change, so it is strange that such a big
variation can be observed (unless I am missing something obvious).
So maybe there is some heavy effect on ooRexx from creating 180.000 (90.000) directory objects
and storing them in an array?
What might be the reason? What can be done about it?
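One way to probe that hypothesis (a sketch of mine, not part of test.rex; the field names are
invented) is to create the same number of directory objects directly, without any JSON parsing,
and time it:

   arr = .array~new(180000)
   before = .dateTime~new
   do i = 1 to 180000
      d = .directory~new           -- one directory object per would-be JSON item
      d["id"]   = i                -- invented fields, just to give the object some content
      d["name"] = "item" i
      arr[i] = d                   -- keep every object alive in the array
   end
   say "creating" arr~items "directory objects lasted:" (.dateTime~new - before)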
Given the impact of drop res, I would say it's the GC marking that may slow
down the execution.
Attached:
profiling without drop.txt
profiling with drop.txt
I use (or rather ChatGPT uses) DeadObjectPool::findFit as an example to explain how to interpret
the profiling output.
The problem doesn't stem from findFit itself, but from the fact that it's called frequently,
which explains its presence at the top of the list.
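In case someone wants to capture a comparable profile on macOS (an assumption on my side, the
mail does not say which profiler produced the attachments), the stock sample tool can record
call stacks of the running interpreter, e.g. from a Rexx one-liner:

   "sample rexx 30 -file profiling.txt"    -- sample the rexx process for 30 seconds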