I know some find COBOL repugnant, but I do wonder, with the latest COBOL from IBM being used for applications work, whether it would be quicker to develop in COBOL, turn on the options that have the compiler exploit the instructions available on the latest hardware, and whether that would give us a performance boost by not having cache lines marked as "invalid"...

The compiler is capable of analyzing the code, flagging bad-to-poor statements, and optimizing whatever passed the parse/scan/analysis steps...

It could be made 64-bit capable; it would automatically use DCBEs, allowing I/O buffers to be above the line... VSAM buffers would be above as well...

Just throwing this out for comments.

And for those who don't know, COBOL now allows some "hex" operations. It also supports JSON and a certain amount of methodization...

Just thinking about the future of z/Arch systems, given the amount of bias I have found in AI when deciding whether cloud or a z/xx machine is better...

I asked what the bus widths were for the CPUs being used for cloud, then told it what they are between RAM and the CPUs (and cache) for z/Arch, and the answers still pushed cloud, even though it supposedly knew that a z15 or z16 uses a lot less electricity...

--
Regards,
Steve Thompson

Make Mainframes Great Again
They use far less Electricity than Clouds and can do more work




On 8/21/2025 8:03 PM, Jon Perryman wrote:
On Tue, 19 Aug 2025 21:14:31 +0000, Seymour J Metz <sme...@gmu.edu> wrote:

I suspect that the double MVC is more of a performance issue than the cache hit.
I typically use LOCTR to avoid the duplicate execution.
I suspect the LOCTR is inefficient for an EX'd MVC, for a few reasons (a sketch of both idioms follows the list).

1. You want relevant instructions in the pipeline so that decoding can begin as 
soon as possible. Both EX and MVC must be decoded.

2. Using LOCTR means the MVC is not in the 6-instruction pipeline and must be 
fetched.

3. As someone mentioned, a J around the MVC is supposedly slower. But realize the 
MVC is ignored by the pipeline because it saw the J. This is faster than LOCTR 
because the MVC is still in the L1 cache.

4. The LOCTR'd MVC most likely must be retrieved from L2, L3, L4 or RAM. If so, 
the MVC must first be fetched before it can be decoded, causing an additional 
delay before the fetch of the source and destination data.

5. Instruction fetch is slower than instruction execution. Think about Peter's 
comments about modifying the L1 instruction cache causing a refetch, most likely 
from L2. Speed is impacted by the speed of L1, L2, L3, L4 and RAM.
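
For anyone who has not coded this lately, here is a minimal HLASM sketch (mine, 
not from the thread) of the two idioms being compared: an EX'd MVC placed inline 
behind a J, versus the same MVC placed out of line under a second location 
counter via LOCTR. The labels, the LOCTR name REMOTE, and the 16-byte length are 
purely illustrative.

EXDEMO   CSECT
R1       EQU   1                   register equates
R14      EQU   14
R15      EQU   15
         USING EXDEMO,R15          assume entry address is in R15
CODE     LOCTR                     mainline instructions collect here
*
* Idiom 1: EX target inline, with a J around it.  The MVC stays
* next to the EX in the instruction stream, but the J is needed
* so the MVC is not executed a second time in line.
         LH    R1,MOVELEN          run-time length to move
         BCTR  R1,0                minus 1 gives the MVC length code
         EX    R1,MOVE1            execute the MVC with that length
         J     NEXT1               jump around the EX target
MOVE1    MVC   OUTAREA(0),INAREA   length byte supplied by EX
NEXT1    DS    0H
*
* Idiom 2: EX target under a second location counter.  Code under
* REMOTE assembles after everything under CODE, so no jump is
* needed, but the MVC may land in a different cache line.
         LH    R1,MOVELEN
         BCTR  R1,0
         EX    R1,MOVE2            target is out of line (below)
         BR    R14                 fall straight through and return
*
REMOTE   LOCTR                     out-of-line EX targets go here
MOVE2    MVC   OUTAREA(0),INAREA
*
CODE     LOCTR                     resume the mainline counter
MOVELEN  DC    H'16'               length to move (illustrative)
INAREA   DC    CL64'source data'
OUTAREA  DS    CL64
         END   EXDEMO

The point of contention in items 3 and 4 is simply where the EX target ends up: 
MOVE1 shares the instruction stream (and usually the cache line) with the EX, 
while MOVE2 is assembled after everything under the CODE counter.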

With the impact of C and C-based languages, is it worth the effort to be super 
efficient?
