Re: Structured Programming Macros

2016-05-12 Thread Rob Van der Heij

> Thanks, having had the privilege of developing code under VM/CMS in the
> past I do have knowledge of the MACLIB format and would be comfortable
> writing the needed Rexx to convert the libraries should I need them.
>
> Working them into the official development environment for approved use
> is a different kettle of fish entirely.
>
> Is the request you mentioned a SHARE requirement or an IBM RFE?  In any
> case, can you point us where to vote on that request?

John referred to this RFE, I think

https://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe&CR_ID=47699


Re: Structured Programming Macros

2016-05-18 Thread Rob van der Heij
I had the pleasure of doing a lot of work with the Structured Programming
macros that we have for CMS Pipelines.

Unlike the SPM from HLASM Toolkit, constructs like IF / THEN / ELSE expect
the condition code to be produced by normal instructions, so the argument
for IF is a condition-code mask. I like that because it keeps instructions
with possible side effects out of your condition. There's also a shortcut
for an IF statement with just one statement in the body and no ELSE clause.
And some of the others take an extra check on the condition code too. The
exception is probably the CASE statement, where you provide the skeleton
and the compare and branch instructions are generated for each entry.
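
As a sketch of that flavor (the operand syntax and the names KEY, LIMIT and
STATE are illustrative, not the actual macro library):

         CLC   KEY,LIMIT          a normal instruction sets the condition code
         IF    NH                 the IF argument is just the condition to test
           MVC   STATE,=CL4'OK'
         ELSE
           MVC   STATE,=CL4'HIGH'
         ENDIF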

What really helps is having nested procedures with a stack for local
variables that is pre-computed based on static nesting where possible, so
you don't incur the overhead of all that dynamic memory allocation. This
avoids code duplication in many cases and helps make the modules more
compact, so you don't run out of base registers as quickly.

It really works. I have not needed to code branch instructions or invent
labels, and it helped me produce good-quality code more quickly than without.
http://www.rvdheij.nl/Presentations/2009-s8134.pdf

Rob


On 18 May 2016 at 13:36, Bernd Oppolzer  wrote:

> Am 18.05.2016 um 13:15 schrieb Steve Hobson:
>
>> >From Bernd Oppolzer:
>>
>> If you specify this global "baseless" parameter (specified at the
>>> startup macro and controlling
>>> all SP macros and the lightweight CALL and PROC macros), there is
>>> only one base register
>>> which covers the area after ENDPROC, allowing for up to 4 k of local
>>> static variables
>>>
>> FWIW, if you can use the long-displacement instructions then you can
>> access a pretty much unlimited amount of local static with one base
>> register.
>>
>> Best regards, Steve Hobson
>>
>>
>> Thank you for your comments on this.
>
> The macros I mentioned in my original post had their origins in a time
> when all instructions
> had 12 bit offsets, so that the range covered by one base register was 4 k
> only. Anyway: because
> every function block has its own 4 k local static area and you can have an
> unlimited number of them
> in one module, the module size could be really large, in those long gone
> times already, if you used
> those (procedure) macros and the SP macros from the start. All function
> blocks shared the same base
> registers (one for the code area = 4k, one for the static area = 4k) and
> then there were two for the
> code area of the main line, two for the global static area, and this still
> left some for other addressing
> and computing tasks. Some large programs ran out of registers, anyway.
> Going "baseless" made things
> much easier.
>
> Kind regards
>
> Bernd
>


Re: Please use meaningful subject (was: ASSEMBLER-LIST Digest - 6 Jun 2016 to 20 Jun 2016 (#2016-62))

2016-06-21 Thread Rob van der Heij
On 22 June 2016 at 08:07, Peter Hunkeler  wrote:

> I'd appreciate people responding to digest message would take a little
> care of adding a meaningful subject. And include some snippets of the text
> you're referring to.
>
> --
> Peter Hunkeler
>

An interesting idea might be to sign up for the web interface at
listserv.uga.edu and use that to respond to the individual post, instead of
trying to reply to the digest mail. Even when you modify the subject, your
response would not thread into the discussion for next day's digest, nor
for those who follow the list by individual posts. It might also save us
some of the quoted style sheets that some mail agents inject in the reply.

Rob


Re: Please use meaningful subject (was: ASSEMBLER-LIST Digest - 6 Jun 2016 to 20 Jun 2016 (#2016-62))

2016-06-22 Thread Rob van der Heij
On 22 June 2016 at 09:48, Paul Gilmartin <
0014e0e4a59b-dmarc-requ...@listserv.uga.edu> wrote:

> On 2016-06-22, at 00:50, Rob van der Heij wrote:
> >
> > An interesting idea might be to sign up for the web interface at
> > listserv.uga.edu and use that to respond to the individual post,
> instead of
> > trying to reply to the digest mail. Even when you modify the subject,
> your
> > response would not thread into the discussion for next day's digest, nor
> >
> Are messages threaded by Subject or by In-Reply-to?  I think
> I've observed both behaviors.
>

RFC 4021 describes the Message-ID: and References: headers and how they can
be used to define the thread. Most mail agents use that approach, though it
can be a challenge for the mail agent to reconstruct a context from the
various threads that were forked.

Google Mail originally was supposed to thread messages by content. I am not
sure how much of that still happens, but I find gmail very effective for
following mailing lists. It's easy to skip or mute the remainder of a
thread. And I really like that the entire thread comes back into your
inbasket when a new message arrives, so you can still review the material
and don't need every post to include the full chain of the conversation.
And you do have a fair amount of control over the layout of your response,
so you can chop off all but the paragraph you respond to.

But there may be good reasons you don't want to use gmail for your
correspondence, even when mailing lists present fewer privacy issues.

Rob


Re: SIIS "issue" after upgrade to z13 machine.

2016-11-11 Thread Rob van der Heij
The example that I have seen was a customer with a linkage model for small
subroutines that used a static save area and local storage after a branch
at the start of the program. That's painful for small routines because a
lot of the code is in the same cache line and gets hit each time you store
something. If you can't change it all, it would be interesting to see how
much can be undone by some slack space between the save area and the code,
to ensure they end up in different cache lines. And you would only have to
do that with the 10% of the code that accounts for 90% of the overhead.
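
As a sketch of that idea (the labels and the padding size are illustrative;
what matters is that the stores into the save area land in a different
cache line than the instructions):

SUBRTN   B     PASTDATA           old style: branch around local storage
SAVEAREA DS    18F                static save area, stored into on each call
         DS    XL256              slack space to push the code into the
*                                 next cache line
PASTDATA STM   R14,R12,12(R13)    first real instructions of the routine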

Rob

On 12 November 2016 at 05:52, Philippe Cloarec 
wrote:

> Hi,
> As you may know, there is some kind of performance issue because of
> SIIS (Store Into Instruction Stream) after an upgrade to a z13 machine in
> some scenarios.  A CPU increase of 30% can be seen in some cases, so it
> may be good to perform the related changes to keep the issue from
> recurring.  I started to work on this case some time ago.  Here we are
> talking about code written years ago, hundreds of programs for which there
> is no time to rewrite them as RENT, so I am interested to talk with anyone
> having some experience with this.  More generally, dealing with old, even
> very old code, I would like to change the code beyond the SIIS case to
> improve the performance at execution time.  I mean by using newer
> instructions and optimizing the code to save some CPU cycles.  I did find
> some interesting documents but want to discuss any scenario we can think
> about.  TTYL then. Philippe   (philippe.cloa...@gmail.com)
>


Re: SIIS "issue" after upgrade to z13 machine.

2016-11-12 Thread Rob van der Heij
On 12 November 2016 at 10:10, Philippe Cloarec 
wrote:

>
> Since we are talking of CPU cycle savings here, I will check for AGI cases
> and their resolution and try to implement instruction grouping as much as
> I can.
>
> From my humble point of view this is a real topic, and all z13 sites with
> old production Batch programs should perform some action.
>

I would reduce expectations of the benefits of "typical" things like
loading registers early and avoiding AGI. Our CPU is not really that
typical. Since a lot of our instructions have been coded in the Language of
our Fathers, we can't simply recompile and take advantage of such tricks
(even if customers had the source code and wished to accept the business
risk of recompiles). Instead, our CPU is very good at figuring out those
things on its own with Out-of-Order Execution. Even better if you can take
advantage of SMT.

On z/OS you should be able to use hardware profiling to find the pieces
that are worth a closer look (since z/VM does not virtualize that support,
I had to write my own profiler in software). I have had numerous cases
where I expected low-hanging fruit and found that I could not outrun our
CPU. For some critical parts it does help to unroll a loop a little bit,
but something extreme like 16-fold unrolling with register swapping
actually made it slower. Hopefully you find a spot that is worth spending
some time to optimize. If you're looking at touching all code to scrape off
10%, it may be wiser to look higher up in the stack.

I will be the first to admit that you can achieve impressive results by
carefully coding a critical part using the right instructions. In one case
we had an end-user transaction take 700 ms; when I was done it was down to
7 ms. This was somewhat unique in that it did modular multiplication for
cryptography. The code had been written with the assumption that operations
on words twice as long take 4 times more time. But on our CPU it takes just
log(2) times longer, so going from 16-bit multiply to 64-bit saves you a
lot.
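
As an illustrative sketch (not the actual crypto code): a single 64-bit
multiply yields a 128-bit product in an even-odd register pair, where the
old code composed it from many 16-bit partial products. FACTOR1 and FACTOR2
are made-up names:

         LG    R5,FACTOR1         multiplicand in the odd register of the pair
         LG    R7,FACTOR2         multiplier
         MLGR  R4,R7              128-bit product of FACTOR1*FACTOR2 in R4:R5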

Rob


Re: EXECUTE Instruction and location of its target instruction

2016-11-23 Thread Rob van der Heij
It's not as if we walk all the way to the location, so "farther away" is
not by itself expensive. Either the target is in the same cache line or
it's not. I have a macro to generate the target in-line to ensure that
HLASM knows the USING that applies to the target, accepting the fact that I
sometimes need to branch over it. Like
MOVE     MVC   A(0),B
         EX    R1,MOVE
Rumor was that walking over the target before executing it was so common
that the CPU exploited the fact that the instruction was still in the
pipeline, or at least that this outweighed the branch over it. But I have
not been able to show the difference other than in code length. On some
models it was not good to have the target in the literal pool, and I did
not feel like doing an extra code section for them.
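
Spelled out a bit more (the labels are illustrative), the in-line target
with the branch over it might look like:

         J     AFTMOVE            branch over the in-line EX target
MOVE     MVC   A(0),B             target; HLASM applies the active USING
AFTMOVE  EX    R1,MOVE            execute the MVC with the length from R1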

Rather than dreaming of huge savings, measure before you make changes. And
understand things will be different with the next model. Most likely for
90% of the code you will never gain back the time you spend on this.

Rob

On 23 November 2016 at 10:15, Philippe Cloarec 
wrote:

> Hi,
>
> My understanding is that we should keep the EXECUTE instruction and its
> target instruction as close as possible...the EXECUTE instruction being
> greedy enough in terms of CPU use...to be clear, dozens of cycles are
> needed to complete its execution.
>
> Reviewing old Assembler programs, I am surprised to see quite often that
> the target instructions for ALL the EXECUTE instructions coded in the
> programs are grouped mostly at the end...
>
> Sometimes I can see the offset between the EXECUTE and its target
> instruction to be quite BIG, like below!!:
> >
> 001214 4490 B942        01942         1142          EX    R9,LIBCLE2
> ..
> ..
> 001942 D200 BC76 BD3F   01C76 01D3F   1607 LIBCLE2  MVC   SCLE2(0),ZONLIB
> >
>
> I did read many articles, and often read that EX should be close to its
> target instruction, but no recommendation in terms of the offset between
> the two?!?  From my humble understanding, the farther the target
> instruction is from the EX, the more costly its execution will be - right?
> So, knowing that dozens of cycles are "normally" required to complete one
> EX instruction, changing the program to minimize the offset between EX and
> its target instruction CAN GREATLY reduce CPU use by comparison with the
> previous code - right?
>
> Thx in advance for any input you may have.  Regards, Philippe
>


Re: Finding Dave Bond

2016-12-08 Thread Rob van der Heij
On 8 December 2016 at 19:35, Ed Jaffe  wrote:

> He lives in Switzerland and works for L^z Labs -- the latest John Moores
> kill-the-mainframe endeavor...
>

Shaken, not stirred. License to kill the mainframe?


Re: 50 year old assembler code still running.

2017-03-11 Thread Rob van der Heij
On 11 March 2017 at 13:52, Tony Thigpen  wrote:

> I am working on some REALLY old code. Some of the code has dates back in
> 1967! The oldest date found is 5/9/67.
>
> This code is still running daily. That's as good as 50 years later. The
> only reason we are touching the code is because we are migrating this
> application from z/VSE to z/OS.
>

I think that's a very cool find. I wonder how long the original developer
expected the code to be in active use. I read that even a tiny bit of code
could be used to determine eating habits and such...

Also kind of strange to see it branch to 4 bytes beyond the routine
address. And does &NAME show that we already had macros or is that just a
label?
Maybe if we had IBM Watson analyze early code snippets from various places,
we could recognize patterns that influenced developers and find out where
the first assembler programmers came from.


Re: Structured programminng macros

2017-05-15 Thread Rob van der Heij
On 15 May 2017 at 08:41, Pieter Wiid  wrote:


> With the IF structure, given the normal ratio of comment to "real" data,
> you will have a very high percentage of pipeline flush due to incorrect
> branch prediction.
>

Why would prediction of a branch in a loop be notoriously bad? You can't
surprise the CPU more than once, can you? Unless you'd test on the
rightmost bit of a counter, in which case unrolling the loop a bit might be
attractive.

Rob


Re: Structured programminng macros

2017-05-15 Thread Rob van der Heij
On 15 May 2017 at 10:09, Pieter Wiid  wrote:

> Think about my example where you could have cards with or without an "*"
> in col 1.
>
> First card - "*", so the "IF" branch is not taken, followed by
> unconditional branch.
>
> next 5 cards the branch is taken, then another "*" invalidates the BPT,
> etc.
>

True. I thought you were talking about the branch to iterate or leave the
loop.


Re: random quest

2017-05-17 Thread Rob van der Heij
On 17 May 2017 at 01:33, MELVYN MALTZ  wrote:

>
> There are several statistical tests for randomness, perhaps the easiest to
> calculate is MSSD (mean squared successive differences) and you are on the
> right track
>

Neat. I could not resist and wrote a pipeline to try the random order with
a SHA256 hash:

PIPE literal | dup 9 | spec number from 0 1.5 r | digest sha256 append
| sort 6-* | chop 5 | mssd | cons

Interesting was that I did not need to sort on the full 32-byte hash; just
the first 2 bytes were pretty good, and beyond 5 there was no change in the
first 12 digits of the MSSD.
But while this particular sequence was pretty random, the next run will
produce the same one. So maybe I need an HMAC with the TOD clock as key...


Re: Question about CPUs

2017-08-01 Thread Rob van der Heij
> The term you are groping for here is "memory interlock". This was coined
> by IBM in regard to the TS instruction in that it imposes a lock on its
> target byte to prevent any other processor in the SMP configuration from
> manipulating that byte until its operation is complete.
>

I believe the thread has been ignoring the earlier reference to "block
concurrent" that the Principles of Operation uses to explain the issue.
Redefining the concept of interrupts does not make it clearer, in my
opinion. Mind you, on a single-CPU system we still have channels that
access memory, though some of that access is self-inflicted and
predictable. Oh, and while it does not apply to CPs, you can have your IFL
in SMT mode...

I grunted a lot at the early gcc port that was using CDS to store a
doubleword, inspired by architectures where such operations were not
atomic. There's more "common knowledge" from the other side that does not
apply to IBM Z.
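
A minimal sketch of the point: on z/Architecture an aligned doubleword
store is already block-concurrent, so no compare-and-swap loop is needed
just to store a value (DBLWORD is an illustrative name, assumed to be
doubleword-aligned):

         LG    R1,NEWVAL          value to publish
         STG   R1,DBLWORD         a single aligned STG is observed as one
*                                 unit by other CPUs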


Re: KMx vs PCC crypto instructions

2017-08-11 Thread Rob van der Heij
On 10 August 2017 at 22:33, Farley, Peter x23353 <
peter.far...@broadridge.com> wrote:

> OT: I would disagree that ICSF is the "overall better choice".  IMHO, if
> you do not need unique Crypto Express co-processor functions or completely
> and totally secure keys then ICSF is just wasted overhead compared to using
> the native CPACF instructions.
>
> I have measured the difference between using native CPACF and the ICSF
> equivalents for "clear key" computations and it is many, many orders of
> magnitude slower to use ICSF.  That is unacceptable.
>

With an asynchronous interface to CPEX you would need to poll and introduce
some latency unless you wish to spin while CPEX is processing. For
operations where the CPEX is much better, spinning can be justified with a
single threaded workload. Obviously when you need to have the CPEX do the
symmetric encryption because you don't have the keys yourself, then it's
another matter.

Overhead can be an issue. When measuring Linux on Z crypto, I found that
the flexible crypto engine layer introduced so much overhead that it was
cheaper to do some easier symmetric encryption (on short blocks) in
software than to involve the engine that uses CPACF to do it. This changed
when the openssl project got a developer with an interest in S/390
architecture who put the native CPACF instructions in the mainline code.

Rob


Re: interesting, to me, new z14 instruction: BIC

2017-09-15 Thread Rob van der Heij
And I thought the BIC instruction was for when you press hard, that you
could write several cache lines at once :-)


Re: Detection of Compile-Time Self-Modifying Code

2017-10-09 Thread Rob van der Heij
On 10 October 2017 at 07:05, Paul Gilmartin <
0014e0e4a59b-dmarc-requ...@listserv.uga.edu> wrote:

>
> Does HLASM know whether you're changing an instruction or data?  Should it
> simply
> notify of all negative ORGs?
>

That sounds a bit rude. While we can argue that macros can be rewritten to
avoid it, I don't think it's evil when composing a translate table, for
example.

If the code path is conditional, for example with a slower fall-back
routine for when the machine type has been tested during execution, then
Ed's approach with ACONTROL might be justified. But there might be more
than just the question whether the CPU can do it; maybe you also want your
development tools (and developers) well enough aware of the new
instructions to avoid mistakes.

Rob


Re: Rehabilitated TROT Routine (Was: Detection of Compile-Time Self-Modifying Code)

2017-10-11 Thread Rob van der Heij
On 12 October 2017 at 08:03, Pieter Wiid  wrote:

> Correction:
>
> DOWHILE,TROT,R14,R2,B'0001',1
> ENDDO
>
>
Constructs where evaluation of the condition has a side effect (or
completely relies on the side effect) are often more a convenience to the
writer than to the reader.
When working in a language that encourages it, I try to restrain myself to
common constructs like testing a null pointer with the assignment. There's
something to say for languages where it's left to the compiler to optimize
that.

Rob


Re: Load module

2017-11-11 Thread Rob van der Heij
You should really study the class material rather than asking others on the
list.

On Sat, 11 Nov 2017 at 20:20, Sudershan Ravi 
wrote:

> How can I say that the module is dynamic or Static? where can I find the
> info?
>


Re: A modest proposal: LE enabled HLASM + C runtime

2017-11-16 Thread Rob van der Heij
On 15 November 2017 at 15:58, John McKown 
wrote:


> This has led me into the world of the weird. Where possible, instead of
> using QSAM to do my printing to SYSPRINT, I can do a fopen() and
> fprintf() with fclose() at the end. These routines free me from the
> necessity of having a DCB in my program. I can simply write code like:
>

I feel spoiled; with CMS Pipelines filters a macro does that for me in my
assembler code:
  PRINTF 'IUCV call failed RC=%d',(R3)
The macro figures out the number of substitutions and builds the parameter
list on the stack. It even does indirection
  PRINTF  'Socket %>d and reason %>d',GSSOCK,GSREAS

The runtime environment provided by CMS Pipelines has the logic to process
the format string and substitute the values.

Rob


Re: Any real need for sequence numbers in 73-80 any more?

2017-12-12 Thread Rob van der Heij
On 11 December 2017 at 20:36, Dave Wade  wrote:


> As I generally only play with old versions of VM, I don't have ISPF. In
> old versions of VM, line numbers on the assembler source are integral to
> the way the system is built.
> So the build process keeps each change as a separate source update and the
> base source is never updated. The VM UPDATE command uses the line numbers
> to work out how to update and assemble the source.
> In later versions of VM, XEDIT will handle all of this under the covers so
> you just see the merged updates...
>
>
Indeed, sequence numbers are used in UPDATE. This all comes from the days
when you would not dream of keeping another copy of a stack of cards when
only a few lines had changed. And we still have processes and tools that
handle the base and individual updates to see different versions of the
source or merge updates in. To me, the only reason to stick with it is not
to break all those things and lose history.

Suggesting anything but git as an alternative will probably make you a
local attraction for folks visiting the site. I spent a Friday night moving
a few hundred K of base and update files into git commits (dropping the
sequence numbers and update identifiers from the files, and joining
continued lines into single lines). The result was elegant and pleasant to
see with web-based repository tools like GitHub. Obviously that's a one-way
process, though I could certainly generate individual update files again,
like SUPERC would create them.

It would be liberating to write HLASM wider than 61 columns, especially
when doing structured assembler. It's an easy pipeline to fold the lines
back into columns 1-71 for HLASM, which could put the original line number
times 100 or so in as sequence numbers. I normally assemble within XEDIT,
using the listing to steer the editor to the first line causing the error.
Rob


Re: Any real need for sequence numbers in 73-80 any more?

2017-12-12 Thread Rob van der Heij
On 12 December 2017 at 16:53, Paul Gilmartin <
0014e0e4a59b-dmarc-requ...@listserv.uga.edu> wrote:


> > sequence numbers. I normally assemble within XEDIT using the listing to
> > steer the editor to the first line causing the error.
> >
> Can you share?  Can it be adapted for ISPF Edit?
>
>

It's mostly Melinda's HLA XEDIT, which uses CMS Pipelines' "xedit" stage to
get the data and "hlasm" to process it.
http://vm.marist.edu/~pipeline/#PupHla
When we'd have CMS Pipelines there, the next challenge would be an
interface to the editor :-(

Rob


Re: Any real need for sequence numbers in 73-80 any more?

2017-12-12 Thread Rob van der Heij
On 12 December 2017 at 17:45, Paul Gilmartin <
0014e0e4a59b-dmarc-requ...@listserv.uga.edu> wrote:


> Perhaps not a great challenge.  Long ago, I wrote a bimodal (XEDIT and
> ISPF Edit) macro.  It relied on a few interface subroutines:
> Get a line, Put a line, UP, DOWN, SAVE, ...  No pipelines; none on
> MVS in that era, at least not for me.
>

CMS Pipelines support for XEDIT is in reading and replacing/appending
lines in the current file, as well as issuing XEDIT commands to manipulate
the file if needed. Combined with your own XEDIT macros written in REXX
(which can invoke pipes again), it is very effective.

I honestly never looked under the hood of the ISPF editor. Even a small
challenge can be rewarding ;)

Rob


Re: Address of a =LITERAL

2017-12-14 Thread Rob van der Heij
On 14 December 2017 at 08:17, Windt, W.K.F. van der (Fred) <
0782fe4a8c02-dmarc-requ...@listserv.uga.edu> wrote:

> > It surprises me that no one has pointed out that having complex tables
> > assembled/compiled into a program is generally a Bad Idea. Almost always
> > such tables have to be updated and that tends to require a programmer,
> > rather than anyone with a text editor, if the tables are read into
> storage at
> > program start, which generally is not going to be a significant
> overhead. Then
> > one is also not limited by whatever restrictions are in the
> > compiler/assembler.
>
> I completely agree but we make a difference between tables with
> 'functional' data and 'technical' data. Functional data is stuff like
> country codes, currency codes, holidays etc. Technical data is stuff like
> program parameters, system configuration and the like. Maintaining this
> technical data requires a dev engineer anyway.
>
> My =literal problem does not occur in a *table*  but a (technical) data
> *structure* that cannot easily/efficiently be represented by a table...
>

Indeed, that makes a lot of sense. Some time ago I replaced 15 KLOC of
"evolved" flat code by a small 1 KLOC table-driven engine. My table was
constructed from 3 different external sources, so I ended up with a
pipeline that generates the HLASM input to build the tables. I expected it
would burn more CPU in return for easier maintenance, but it turned out to
be cheaper to run (likely some of the redundant pieces were now skipped,
and probably the small code base sits better in cache).

Rob


Re: Fair comparison C vs HLASM

2018-01-22 Thread Rob van der Heij
On 22 January 2018 at 07:47, Jon Perryman  wrote:

> I find it amazing how C programmers believe in the superiority despite
> overwhelming evidence to the contrary. Surprisingly, the psychological term
> for this is "motivated reasoning" and I never believed it until now. Below
> actually transpired yet they still believe that C is superior (even with an
> examples).
>

I hope you also have other New Year's resolutions, like giving up smoking;
probably easier than a programming language fight :-)

HLASM (with the macro libraries written over the past 40 years) goes beyond
what many people think of when you mention assembler language. Many with a
strong opinion base that on hearsay or lack of familiarity. A Structured
Programming macro library can help avoid goto-spaghetti. The one I use for
CMS Pipelines also adds a procedure concept for flexible calling of
subroutines. I would see no benefit in using C except when I can take code
written by someone else. If you want to disagree about "superior" you might
first have to agree on the criteria. The least number of non-blank lines or
characters? Shortest execution time? If C were superior, why have so many
new languages been developed based on it? ;-)

Many say the strong point of C is the availability of libraries to call.
Obviously assembler routines can call subroutines and most of us have
extensive libraries available. You can even write HLASM macros to check
parameter lists similar to function prototypes in C. If you put in some
effort to initialize the runtime environment, you can call those C routines
from assembler code as well. But this is nothing compared to the function
call mechanisms in Python for example.

There's probably a good use case for each of the thousands of programming
languages, but how would you pick the right tool for the job? I would say
that when you are comfortable with half a dozen languages in different
classes, you have a good basis to pick the proper tool for the job. As for
your example, I would not try to parse XML in either C or assembler, unless
it's really a very trivial case. You're probably better off with a generic
XML parser to build an abstract tree in memory, or specify the program for
an XML stream processor. Or have lex and yacc generate the code to parse
your grammar if you don't expect it to be extended often.

Rob


Re: Fair comparison C vs HLASM

2018-01-24 Thread Rob van der Heij
On 24 January 2018 at 18:33, Charles Mills  wrote:

>
> The reality is that cycle times are not getting any faster. A z14 does not
> execute z10 machine instructions significantly (any?) faster than a z10.
>

The second sentence does not follow from the first one. While a single
instruction may take (in many cases) the same number of clock cycles, the
improved pipeline and out-of-order execution, with deeper and wider cache,
mean that a series of instructions often does run quicker than before.
Workloads that just didn't fit in cache anymore before may well fit now.
I'm frequently surprised by the number of extra instructions that you can
sneak into the code without slowing it down. Add to that the SMT support
for specialty engines.

One of the reasons for investing in those aspects of a CPU is that a fair
amount of the instructions executed are in programs that may never get
compiled again. It would be interesting to know what percentage of the
executed instructions are plain old S/370 ones...

Rob


Re: Fair comparison C vs HLASM

2018-01-25 Thread Rob van der Heij
On 25 January 2018 at 10:18, Dave Wade  wrote:


> I don't know if Fortran H does loop unrolling, but some compilers that are
> targeted at Vector Processors do.
> FORTRAN can be a pig of a language though, especially with a poor
> compiler.  Even worse, in two-dimensional arrays the data is stored as
> columns, not rows.
> (I think that's correct.) So if you vary the last subscript, you leap all
> over storage. Typically Fortran users want to write code where the last
> subscript varies fastest, resulting in excessive paging.
> Reversing the subscript order in such programs can produce a significant
> performance improvement.
>

Long ago we evaluated the vector feature and got the relevant FORTRAN
compiler with it. The outcome was that we passed on the vector feature, but
kept the compiler, as it got 90% of the improvement on the non-vector CPU
too :-)  In those days I could also speed up a PASCAL/VS program by a
factor of 5 when I added 6 unused elements to all arrays of 250 elements
and defined the index as integer rather than 1..250.

Rob


Re: Fair comparison C vs HLASM

2018-02-01 Thread Rob van der Heij
On 29 January 2018 at 20:16, Paul Gilmartin <
0014e0e4a59b-dmarc-requ...@listserv.uga.edu> wrote:

> On 2018-01-29, at 11:55:56, Seymour J Metz wrote:
>
> > While the DOS I/O was very device dependent, there was the DTFDI with
> limited device independence.
> >
> Insofar as "device independence" means restricting every device
> type to the capabilities of a card reader/punch.
>
> CMS is similarly limited.  Pipelines adds some flexibility.
>
>
I knew we could drag this on into February ;-)

Indeed, traditional CMS programs all have their own logic to identify data
sources, though we can access Shared File System directories as if it were
a mini disk and have most programs handle the data there. Exploitation of
FILEDEF and NAMEDEF is minimal, as far as I know.

CMS Pipelines allows programs to be chained together like stdin and stdout
let you do on UNIX. It comes with a suite of efficient built-in programs
and provides a programming framework to write your own (REXX) programs that
operate on input and output streams. CMS Pipelines goes beyond UNIX pipes
with a multi-stream pipeline topology and coordinated error handling, to
write real-world applications with pipes.
https://en.wikipedia.org/wiki/CMS_Pipelines

When you write your business logic as a pipeline (even when done as a
monolithic piece of procedural REXX logic), the same logic can be used
independent of where the data resides. This is also convenient during
development and testing of applications because you can run the logic
against some test data or capture intermediate results of the process. And
if you have CMS Pipelines on z/OS, you can run the same business logic
there and just provide a small wrapper to identify the data sources.

Sir Rob the Plumber


Re: Fair comparison C vs HLASM

2018-02-01 Thread Rob van der Heij
On 1 February 2018 at 16:40, Paul Gilmartin <
0014e0e4a59b-dmarc-requ...@listserv.uga.edu> wrote:


> > with a multi-stream pipeline topology ...
>
> That restriction is a myth.  C programs can deal with multi-stream
> pipe topologies.  In shell that requires named pipes.
>

Because CMS Pipelines does not buffer the data, the flow of records in
different segments of the pipeline is predictable. Without that, even
simple plumbing does not work as I would expect.
With named pipes you have a bunch of programs using each other's output, and
you don't really care when they do it.
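To make the difference concrete, here is a minimal sketch (Python generators standing in for pipeline stages; this is not CMS Pipelines itself) of record-at-a-time flow: each record is pulled through every stage before the next one enters, so ordering across segments of the chain is deterministic, with no buffering between stages.

```python
# Sketch only: generator-based stages pass one record at a time,
# so no records accumulate between stages.

def locate(needle, records):
    for rec in records:          # selection stage
        if needle in rec:
            yield rec

def xlate_upper(records):
    for rec in records:          # transformation stage
        yield rec.upper()

records = ["alpha", "bravo", "charlie", "delta"]
out = list(xlate_upper(locate("l", records)))
print(out)   # ['ALPHA', 'CHARLIE', 'DELTA']
```

The same pull-driven structure is what makes the flow of records predictable in the scenario above.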

Rob


Re: Fair comparison C vs HLASM

2018-02-01 Thread Rob van der Heij
On 1 February 2018 at 20:10, Paul Gilmartin <
0014e0e4a59b-dmarc-requ...@listserv.uga.edu> wrote:

> On 2018-02-01, at 10:28:47, Kirk Wolf wrote:
>
> > and you can also get a completion status array ("PIPESTATUS[i]")
> > from a multi-stage pipe.
> >
> Valuable indeed.  I often wish for it.  (How would I do that
> with CMS Pipelines?
>

I'm not deep enough in UNIX to know, and I'm not sure oranges and bananas
compare, but it sounds like 'streamstate all' is where you look.
It's mostly useful to write stages that behave differently when some
streams are not connected (but it's just a matter of time before I regret
such a design).
Even more subtle is when you act on relative timing differences between
streams (like arrival of a new key on the secondary of a cipher stage). I
don't see an analogy with buffered streams.

Sir Rob the Plumber


Re: Fair comparison C vs HLASM

2018-02-01 Thread Rob van der Heij
On 2 February 2018 at 03:11, Paul Raulerson  wrote:

>
> Timing is usually done with signal and/or semaphores - or better yet with
> message
> queues. :)
>

With 'relative timing' I mean the flow of records in two parallel paths,
for example selecting a subset of the records to modify, and then combine
the two streams again. When stages would buffer records, you would need to
number records so you merge in the right order.

I don't suppose you would seriously suggest implementing a messaging system
to pass records between stages in the pipeline that each take a few dozen
instructions to do their work. For one of the things I work on, the
application runs some 15,000 stages in parallel (just a 64K buffer between
them would add up to 1 GB).
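The two-path flow described above can be sketched as follows (a hypothetical Python stand-in, not CMS Pipelines): records selected by a predicate take the "modify" path, the rest pass through unchanged, and the streams recombine. Because each record is handled one at a time with no buffering, the original order survives without numbering the records.

```python
# Sketch: select a subset of records, modify them, merge back in order.
# Record-at-a-time handling means no sequence numbers are needed.

def split_modify_merge(records, select, modify):
    for rec in records:
        yield modify(rec) if select(rec) else rec

data = ["keep 1", "fix 2", "keep 3", "fix 4"]
merged = list(split_modify_merge(
    data,
    select=lambda r: r.startswith("fix"),
    modify=str.upper))
print(merged)  # ['keep 1', 'FIX 2', 'keep 3', 'FIX 4']
```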

It's true that systems are 3-4 orders of magnitude larger than when the
dispatcher in CMS Pipelines was written, but it has been demonstrated that
a 'straightforward' implementation in Java even today is too resource
intensive to be practical.

Rob


Re: Pascal

2018-02-02 Thread Rob van der Heij
On 2 February 2018 at 14:28, Martin Ward  wrote:

>
> Incidentally, perl strings can be over 4GB in length: in fact,
> any size which will fit in memory (including swap space).
>

Just don't let the ASN.1 folks come closer, or you end up with variable
length length fields... ;-)

I would think that when things get long enough, the requirement for
consecutive storage of the characters becomes restrictive. Implementing a
string as a series of (address,length) pairs would solve that, and would
also make for elegant string concatenation.
Considering that most strings are shorter than 4GB, I would be tempted to
use a negative length to imply the extra indirection, and a positive
length for characters that follow the length field. If functions return such
strings, you'd need a garbage collector as well... sigh.
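A sketch of that non-contiguous representation, using explicit (buffer, offset, length) segments rather than the negative-length encoding suggested above (names and shape are my own, just to illustrate the idea):

```python
# Sketch: a "rope"-style string as a list of (buffer, offset, length)
# segments. Concatenation joins segment lists; character data is only
# copied when the string is materialized.

class SegString:
    def __init__(self, segments):
        self.segments = list(segments)   # (buffer, offset, length)

    @classmethod
    def from_bytes(cls, data):
        return cls([(data, 0, len(data))])

    def concat(self, other):
        # no copying of character data here, just the segment lists
        return SegString(self.segments + other.segments)

    def materialize(self):
        return b"".join(buf[off:off + ln]
                        for buf, off, ln in self.segments)

hello = SegString.from_bytes(b"Hello, ")
world = SegString.from_bytes(b"world!")
print(hello.concat(world).materialize())   # b'Hello, world!'
```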

Rob


Re: Count Words?

2018-06-17 Thread Rob van der Heij
On 17 June 2018 at 03:57, Farley, Peter x23353 
wrote:


> Didn't think of that, but you are probably right - pipeline stalls are
> quite expensive and tough to benchmark.
>

But pipeline stalls at least are consistent and show in a profile, even
though it may not show the exact spot.

I find cache effects much more frustrating, as you can't tell what the
sibling CPUs are doing even though you share the higher-level cache with
them. I find the number of CPU seconds may vary by a factor of 2, which
makes my 10% tweaking hard to do...

Rob


Re: An idea I got when researching a bug - warnings when a specific register is changed

2018-06-25 Thread Rob van der Heij
On 25 June 2018 at 21:49, Phil Smith III  wrote:

>
> Seriously, l like it and would use it. I'd prefer it not be tied to USING
> because there are other reasons to not use a register (I think; can't come
> up with any offhand, but I feel like there are?). Maybe:
>
I suppose there are plenty of cases outside the USING where it is harmful
to modify a register. Don't forget the base register, for those who don't
write baseless yet. And reading the value from the wrong register can also
keep you entertained for a while. I find these bugs mostly get in when I go
back and change the original choice of registers because it appears handy
to pick another one.

Quite often the USING remains valid but the register points to another
object. I can see how you would put the USING and DROP within the loop when
the pointer is incremented. When you're picking up the next pointer from a
chain, you end up having to qualify the field outside the USING scope
again. If anything, I would want to tie this to the static nesting scope or
block structure rather than manually having to free up the protection.

Instead of having this done during assembly, you could also have it as
post-processing on the LISTING or ADATA. I inherited something that goes
through the assembly listing and frowns at known concerns. Personally, I
don't think this one is highest on my facepalm list.

Rob


Re: EQU * considered harmful

2018-08-01 Thread Rob van der Heij
I very often use it to define location and length of a composite set of
variables. Your END idea would not help me. And don’t we do plain constants
like hash table size? Don’t think length is always known early enough. And
bits in a flag byte?

Rob

On Wed, 1 Aug 2018 at 18:34, Steve Smith  wrote:

> EQU * is a very common idiom in assembler programming.  I'd like to submit
> for your consideration that it is wrong, 100% of the time.
>
> Any symbol referencing memory should always be defined with DS/DC, so the
> correct alignment can be specified.  * per se, is a very useful concept,
> just not on EQU.  But as far as I can see, every EQU * is a bug, either
> latent or actual.
>
> The most acceptable usage would be to generate the length of an area (*-X),
> but even that can easily be done by defining an 'end' symbol, so that EQU
> X-Y is available.
>
> If I'm overlooking something, I hardly have to ask... but tell me if
> there's no better way for some example.
>
> --
> sas
>


Re: EQU * considered harmful

2018-08-02 Thread Rob van der Heij
On Fri, 3 Aug 2018 at 03:32, Phil Smith III  wrote:

> Hobart Spitz wrote:
>
> >can't endorse either DS 0H or EQU *; use structured macros instead.
>
> Why "can't endorse"? I'm not getting your point.
>

I'm with Hobart there. I have *never* had the problem that I was coding a
branch to a data field. With structured programming macros you don't have
to deal with that. I also like my COND macro that is a short-hand for a
conditional branch over a single instruction, like
  COND MINUS,LHI,R0,0

Cute. I never realized that it made type 'I' and that one could validate
that on the target of a branch. That looks like a sign that we were never
meant to code DS 0H for labels...

Rob


Re: EQU * considered harmful

2018-08-03 Thread Rob van der Heij
I’m afraid those sequences only make sense when you wrote them, not much
later. I inherited similar attempts to code the length of data. Just don’t.

On Fri, 3 Aug 2018 at 18:03, Tony Thigpen  wrote:

> I was taught that to make it easy to read, do the following:
>BL   *+4+2
> LR  R1,R2
> or
>BL   *+4+2+4
> LR  R1,R2
> LA  R3,0(,r1)
> It may not look right in your email, but the branched around
> instructions are indented one extra character.
>
> Tony Thigpen
>
> Phil Smith III wrote on 08/03/2018 10:40 AM:
> > Peter Relson wrote:
> >
> > I don't remember who taught me the technique, though it must have been
> at UofW in the early 80s. I internalized it as "This isn't a 'real'
> branch-that is, we aren't going very far, just skipping a single
> instruction". And I would never, ever, ever consider doing it for more than
> one instruction.
> >
> >
>


Re: EQU * considered harmful

2018-08-03 Thread Rob van der Heij
I mean, if you would code a macro and use
  ERROR NOTLOW, 30303
It would be a different discussion. I think some 4+4+2+4 keeps me busy way
longer.

Rob

On Fri, 3 Aug 2018 at 19:05, Tony Thigpen  wrote:

> Those were just made up instructions, not real code.
>
> The point was that I was taught that when you branch around multiple
> instructions, to make it clear to someone else, you can:
> 1) Write the *+ information so that it was obvious that there were
> multiple instructions and the length you expected them to be, and,
> 2) Indent the instructions being branched around so that, again, it is
> obvious that something needed to be looked at if the code was modified.
>
> Personally, adding an extra label when branching around one, or two
> instructions just makes the program more cluttered.
>
> One place I used this a lot is when handling errors.
>  BL *+4+4+4
>   L R15,ERRNO_30303
>   B General_error_routine
>  MVC   X,Y
> (Again, just typing some example code, not actual code.)
>
> Of course, I also use GOTO in COBOL, so maybe I am just a non-standard
> person.
>
> Tony Thigpen
>
> Rob van der Heij wrote on 08/03/2018 12:13 PM:
> > I’m afraid those sequences only make sense when you wrote them, not much
> > later. I inherited similar attempts to code the length of data. Just
> don’t.
> >
> > On Fri, 3 Aug 2018 at 18:03, Tony Thigpen  wrote:
> >
> >> I was taught that to make it easy to read, do the following:
> >> BL   *+4+2
> >>  LR  R1,R2
> >> or
> >> BL   *+4+2+4
> >>  LR  R1,R2
> >>  LA  R3,0(,r1)
> >> It may not look right in your email, but the branched around
> >> instructions are indented one extra character.
> >>
> >> Tony Thigpen
> >>
> >> Phil Smith III wrote on 08/03/2018 10:40 AM:
> >>> Peter Relson wrote:
> >>>
> >>> I don't remember who taught me the technique, though it must have been
> >> at UofW in the early 80s. I internalized it as "This isn't a 'real'
> >> branch-that is, we aren't going very far, just skipping a single
> >> instruction". And I would never, ever, ever consider doing it for more
> than
> >> one instruction.
> >>>
> >>>
> >>
> >
> >
>


Re: EX

2018-08-06 Thread Rob van der Heij
On Mon, 6 Aug 2018 at 16:35, Ed Jaffe  wrote:

> We use 'Jxx *+2' which disturbs no registers and is guaranteed to fail
> with an 0C1.
>
> I'm very fond of having an extra code for the type of assert, so I can
already blush before I see the listing ;-)
I suppose I could have the macro generate *+(code*2) to give me room for
256 different codes and still fall over the X'00' opcode.

Rob


Re: EX

2018-08-06 Thread Rob van der Heij
Ouch, obviously not as Jonathan points out... :facepalm:

On Mon, 6 Aug 2018 at 17:23, Rob van der Heij  wrote:

> On Mon, 6 Aug 2018 at 16:35, Ed Jaffe  wrote:
>
>> We use 'Jxx *+2' which disturbs no registers and is guaranteed to fail
>> with an 0C1.
>>
>> I'm very fond of having an extra code for the type of assert, so I can
> already blush before I see the listing ;-)
> I suppose I could have the macro generate *+(code*2) and give me room for
> 256 different codes and still falls over the X'00' opcode.
>
> Rob
>
>


Re: Macro Behavior

2018-10-21 Thread Rob van der Heij
On Fri, 19 Oct 2018 at 19:04, Peter Relson  wrote:

> There is no "should" in this sort of situation. There is a "could". There
> is a "wouldn't it be nice if".
>
> Could it be done? Sure. Would having done so have helped this case? Sure.
> Would doing so be a better use of limited resources than doing something
> else?
>
> There is no requirement that a macro do any more than what it is
> documented to do, which is to provide the correct expansion when you have
> provided valid syntax. Most macros do far more than that.
>

I have inherited an extensive macro library where each API macro has an
extra &X parameter to catch any excess positional operands and misspelled
keyword operands. It's just a single internal macro to validate that &X is
empty or complain otherwise. There is a lot of value in catching the
mistake as early as possible. It has already saved me several times, and I
am sure it avoided hair-pulling in the decades since the investment was made.
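The same collect-then-complain pattern exists in other languages. A sketch in Python (operand names are hypothetical; note that Python rejects unknown keywords on its own, whereas a macro language silently ignores them, which is exactly why the explicit &X check earns its keep):

```python
# Sketch: collect any excess operands and fail at the earliest point,
# mirroring the &X parameter check in the macro library described above.

def open_dataset(name, *, mode="read", recfm="V", **excess):
    if excess:
        # plays the role of the internal macro validating that &X is empty
        raise TypeError("unrecognized operands: %s" % sorted(excess))
    return (name, mode, recfm)

print(open_dataset("MY.DATA", mode="write"))   # ('MY.DATA', 'write', 'V')
try:
    open_dataset("MY.DATA", mod="write")       # misspelled keyword
except TypeError as err:
    print(err)                                 # unrecognized operands: ['mod']
```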

Rob


Re: Multi CPU interlock question

2019-01-09 Thread Rob van der Heij
On Wed, 9 Jan 2019 at 11:29, Joe Owens  wrote:

> A 4 byte address field in virtual storage has one updater and many readers
>
> If using load and store instuctions, will the readers always see a
> complete (valid) address, or could a CPU see a partially updated field
> while a store is in progress on another CPU?
>
> Is the answer any different for other instructions, like MVC?
>
> The terminology in the Principles of Operation to look for is
"block-concurrent references"  (Chapter 5). That section explains MVC and
the conditions under which it appears to do a double word at a time.

Rob


Re: Multi CPU interlock question

2019-01-09 Thread Rob van der Heij
On Wed, 9 Jan 2019 at 12:17, Martin Truebner  wrote:

> Joe,
>
> Rob's answer already says everything, but let me give you
> some more details.
>
> the load (or store) will always do it on a fullword, BUT to do it
> properly would require doing it with a CS.
>

As long as the operand is properly aligned...


> MVC might on some CPUs appear to do it 4 byte wise (or other multiples
> thereof) -
>
> And again the question: why not do it proper in a code-segment
> that is fully aware of the multi-CPU environment.
>

The CPU cache is the other motivation to stay away from heavy use of shared
variables and keep things per CPU with a low-frequency distribution
process. When you keep the per-CPU objects far enough apart, you avoid
frequently invalidating cache lines on the sibling CPUs.
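The layout idea can be sketched as follows (using 256 bytes, the IBM Z cache line size; the point is only the offset arithmetic, since Python itself cannot demonstrate the cache behavior):

```python
# Sketch: pad each per-CPU slot out to a full cache line so two CPUs
# never update the same line ("false sharing").

CACHE_LINE = 256     # IBM Z cache line size in bytes
COUNTER_SIZE = 8     # one 64-bit counter per CPU

def line_of(offset):
    return offset // CACHE_LINE

def slot_offset(cpu, padded=True):
    # padded: each per-CPU slot occupies a full cache line
    return cpu * (CACHE_LINE if padded else COUNTER_SIZE)

# packed counters: CPUs 0..31 all land in the same cache line
assert line_of(slot_offset(0, padded=False)) == line_of(slot_offset(31, padded=False))
# padded counters: every CPU gets its own line, no false sharing
assert len({line_of(slot_offset(c)) for c in range(32)}) == 32
```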

Rob


Re: Fwd: Assembler III at Latham, NY 12110, USA

2019-03-19 Thread Rob van der Heij
On Tue, 19 Mar 2019 at 20:01, Seymour J Metz  wrote:

> No worse than Monster.
>
> "When the only tool in your toolbox is a pipe, everything looks like a
> filter."
>

I don't really mind a job offer as Piping Engineer, but I don't like
off-shore work :-)  It bothers me more if they match VMware with z/VM.

It's not unusual for these recruiters to be one of the 100 offering a
candidate for a position with a resume written after the vacancy profile
(sometimes with the same typos). When the employer responds, they go
looking for candidates.

Rob


Re: I Want An OPTABLE Built-In Function

2019-04-03 Thread Rob van der Heij
On Wed, 3 Apr 2019 at 09:58, Jonathan Scott 
wrote:

To check whether a machine operation code is supported in the
> current OPTABLE, use the operation code attribute, O'opcode.
>

I refer to the "new" instructions as the Welsh instruction set, and this
checks whether it's Irish :-)

Rob


Re: I Want An OPTABLE Built-In Function

2019-04-03 Thread Rob van der Heij
On Wed, 3 Apr 2019 at 16:35, Farley, Peter x23353 <
peter.far...@broadridge.com> wrote:

> As opposed to the Scots instruction set, MC'opcode?
>
> That was a good chuckle, thanks for making my morning brighter!
>

Why didn't I think of that? All these things like EQU, ORG, DS, and
PRINT... that's the Scots instruction set that only lives in HLASM.


Re: Fwd: Assembler III at Latham, NY 12110, USA

2019-04-10 Thread Rob van der Heij
On Wed, 10 Apr 2019 at 13:20, Don Higgins  wrote:

> I am interested in the Assembler III position even though I do not own
> steel toed shoes.


I consider "signal on novalue" in REXX as my steel toed shoes, but I
decline offshore roles.

Sir Rob the Plumber


Re: BIC documentation - unclear?

2019-06-21 Thread Rob van der Heij
On Thu, 20 Jun 2019 at 16:53, John McKown 
wrote:


> Especially CFC & UPT. IIRC, those were put in explicitly for DFSORT. And,
> used improperly CFC causes ozone depletion {grin}. But the latest one that
> caused me to laugh was "Perform Random Number Operation" (PRNO -- porno).
> BRAS is also mildly amusing to me. But then, I'm rather juvenile. I do like
> ICY, OI, SLAG, and the fast TROT.
>

.. and I guessed that BIC was an instruction that wouldn't write the first
time you use it, but needed you to breathe onto it and try in a scratch
area first... the relation with GSF is no surprise then :-)


Re: Questionable Instructions in Obtaining EAX documentation

2019-11-10 Thread Rob van der Heij
On Sat, 9 Nov 2019 at 20:51, Kerry Liles  wrote:

> Old habits die hard... I still just use LA 1,256, although now I
> might just code it as LA 1,256(,0)
>

I'm more worried about old code and old programmers using LA 1,1(,1) to do
arithmetic.

Rob


Re: Questionable Instructions in Obtaining EAX documentation

2019-11-11 Thread Rob van der Heij
On Mon, 11 Nov 2019 at 14:56, Charles Mills  wrote:

> Works better than it used to! It's good to ~2 billion now, right? Was only
> good to ~16 million when they coded it.
>
> I'm not confused on how LA works in AMODE 31, am I? I never use it for
> integer arithmetic anymore so I could be off base here.
>

No, I was bitten by old code that was made to run in AMODE64. The developer
believed that using old instructions it would not hit the upper half and
thus not needed to save and restore it. We should never have made the old
code run in AMODE64, but I lost that fight.

Rob


Re: BASR to AMODE 64

2019-11-26 Thread Rob van der Heij
On Sat, 23 Nov 2019 at 01:11, Bernd Oppolzer 
wrote:


> As others have pointed out, the different OSes have different strategies;
> z/OS never allocates virtual addresses from 0x8000 to 0x.
>

I sometimes wish the Principles of Operation would, along those lines, avoid
allocating some of the pages in its preface. Somehow reading phrases like
"described in Reference [23.] on page xxx." makes me program check each
time :-)
It would actually be enough to keep such a page blank so that we don't have
references to it.

Rob


Re: *-*

2020-04-30 Thread Rob van der Heij
On Thu, 30 Apr 2020 at 14:40, Seymour J Metz  wrote:

> Empirically, it *is* necessary to explain that equating R0 though R15 to
> register numbers other than the ones implicit in the names is a cardinal
> sin.
>

I ran into some code where the programmer decided to know better and had
defined RA-RF and used that where he made his changes (and have me scratch
my head about something keeping an address as return code).  I did not
friend him.

Rob


Re: Code visitation

2020-08-05 Thread Rob van der Heij
On Wed, 5 Aug 2020 at 09:49, Keven  wrote:

Are there any of y’all out there who, like me, sometimes have a
> wistful yearning to see some code they wrote some number of years ago at a
> company they no longer work for..for no reason other than simply
> wanting to look at it?  Maybe also to scroll some of it up and down a few
> times for extra fuzzies?
> Keven
>

Some of the code I wrote does not want to see me anymore; bad things have
been said about me, blaming me for all the evil that happened.

Rob


Re: Clearing a register

2020-08-11 Thread Rob van der Heij
Speed hardly matters unless you have to do it very often. I had something
like that when shifting 32 bits into 64, and I was lucky enough to have half
a spare register so I could do an LR (which I suspect might be something the
CPU has some tricks for).


Re: Deep cuts

2020-09-05 Thread Rob van der Heij
On Fri, 4 Sep 2020 at 19:46, Seymour J Metz  wrote:

> VM uses a token of 8X'FF' at the end of the (R1) parameter list. I don't
> recall what the convention is for the (R0) extended parameter list
> introduced by VM/SP.


The extended parameter list uses begin and end pointers, 31-bit since CMS
does its things below the bar. In zCMS an application can get buffers above
the bar with full-page granularity.



Re: Structured Programming Macro Package

2020-09-14 Thread Rob van der Heij
On Sun, 13 Sep 2020 at 13:18, FancyDancer 
wrote:


> The main claim to fame that this package has is that it is able to
> override the normal precedence of AND conjunctions and have OR conjunctions
> be evaluated at a higher priority, by adding an extra pair of parentheses
> around two or more logic expressions conjoined by OR:
>
> IF (CLC,A,Z,NE),AND,((CLC,B,Z,NE),OR,(CLC,C,Z,NE))
>

I dislike conditional expressions with built-in side effects. My concern is
that you often use the condition code from other types of instructions, not
just things like CLC. It gets ugly when you can't predict which part of the
expression may be executed or the order in which the clauses will run.

I am using structured programming in HLASM as well, but use John Hartmann's
set of macros that use simple condition codes as expressions, with the
preceding instructions developing the condition code.

Rob


Re: 9121 TCM Teardown

2020-10-06 Thread Rob van der Heij
On Tue, 6 Oct 2020 at 19:58, Ray Mansell  wrote:

> What fun! However, I feel this should have been posted to the
> DisAssembler list :-)


If you have the stamina for more destructive review of devices we value:
https://youtu.be/CBjoWMA5d84


Re: Vector Instructions

2021-09-29 Thread Rob van der Heij
On Wed, 29 Sept 2021 at 06:06, Dan Greiner  wrote:

> I have put together a series of PowerPoint files illustrating the
> operation of the vector-facility instructions ... a sort of graphic-novel
> version of Chapters 21-25 of the PoO. Since the Assembler List doesn't
> accept file uploads, you can find the material on my Google drive:
> https://drive.google.com/file/d/1O_RWJJGMX-tLR0AxEYk4QARxJhi_0MVV/view?usp=sharing


Very nice! I really miss you presenting this and explaining why an
instruction makes sense and how it makes life better. For example when I'm
looking at LCBB and wonder how often we need that and how much harder it is
to mask and subtract. I briefly thought this might be nice for bound
checking, but noticed the weird setting of CC.
Thank you for creating this, Rob


Re: Will z/OS be obsolete in 5 years?

2023-07-20 Thread Rob van der Heij
On Thu, 20 Jul 2023 at 02:15, Jon Perryman  wrote:


> > Why do that. It would result in a huge loss of hardware revenue.
> > IFLs for running UNIX are much cheaper than the CPUs needed to run z/OS.
>
> IFL's are discounted because Linux runs poorly on z16. Every CPU in a z16
> is the same so IBM will never discount an entire z16 just for Linux. Linux
> customers don't want z/OS so z16 is not an option for Linux only customers.
> If IBM wants to increase the z16 market share, they must make RHEL perform
> as well as z/OS and charge full price for CPUs.


It would be interesting to see your evidence of IBM Z not performing well
with Linux. That was probably true 20 years ago with the early CMOS CPUs,
but not anymore. My experience is that z16 CPUs are very effective running
enterprise application workloads in Linux at high levels of utilization.
IBM contributions to the various open source projects like the gcc
toolchain let you generate code that is optimized to take advantage of the
CPU architecture, the zlib compression library takes advantage of the
built-in compression instruction, the openssl libraries exploit CPACF
instructions when compiled for s390x, java applications in Linux and in
z/OS compete well with other platforms, the entire machine learning suite
exploits the built-in neural network instruction of the Telum chip.

Pricing is too complicated for techies. You get a CPU rather than IFL to
run licensed IBM software, which suggests that the price difference for the
hardware is for operating system software revenue not recovered by MLC. The
same holds for the other specialty engine types that run workloads that do
not have to contribute to the operating system software revenue; java runs
as fast on a zIIP as on a CP, so that's no reason for the rebate on a
zIIP. If you don't need any licensed IBM software to run, you get a machine
with only IFL.

Rob


Re: Will z/OS be obsolete in 5 years?

2023-08-08 Thread Rob van der Heij
On Tue, 8 Aug 2023 at 00:06, Jon Perryman  wrote:

> > On Thu, 20 Jul 2023 at 09:01, Rob van der Heij 
> wrote:
> > It would be interesting to see your evidence of IBM Z not performing
> well with Linux.
>
> Linux on z performs better than Linux on most other hardware. My point is
> that Linux wastes much of z hardware.
>
> Since I haven't seen Linux on z, I have to make some assumptions. It's
> probably fair to say the Linux filesystem still uses block allocation.
> Let's say it's a 10 disk filesystem and 100 people are writing 1 block
> repeatedly at the same time. After each writes 10 blocks, where are the 10
> blocks for a specific user. In z/OS you know exactly where those blocks
> would be in the file. If you read that file are these blocks located
> sequentially. While the filesystem can make a few decisions, it's nothing
> close to the planning provided by SMS, HSM, SRM and other z/OS tools. Like
> MS Windows disks, Linux filesystems can benefit from defrag.  Also consider
> when Linux needs more CPUs than available. Clustering must be implemented
> on Linux to increase the number of CPU which does not share the filesystem.
> In z/OS, a second box has full access to all files because of Sysplex.
>

I used to say that with several layers of virtualization, performance is
rarely intuitive, and often counterintuitive to the uninformed. The famous
case is where Linux "CPU wait" goes down when you give it *less* virtual
CPUs. Not having looked at it may not give you the best foundation for an
opinion.

Linux (on any platform) uses a "lazy write" approach where data is kept in
memory (page cache) briefly after a change, to see whether it's going to be
changed again. A typical case would be where you're copying a lot of files
in a directory, and for each file added, the operating system modifies the
(in memory) directory. Eventually, the "dirty" blocks are written to disk
(we may worry about data loss around an outage, but that's a different
discussion - there are mechanisms to ensure data is persistent before you
continue with destructive changes). Because Linux will write out blocks at
its own convenience, the device driver can order the data to create runs of
consecutive blocks in a single I/O operation.
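The write-out step described above can be sketched as a simple coalescing pass (a conceptual illustration, not the actual Linux block layer): given the set of dirty block numbers sitting in the page cache, sort them and merge consecutive blocks into runs, issuing one I/O per run instead of one per block.

```python
# Sketch: coalesce dirty block numbers into runs of consecutive
# blocks, so each run can be written with a single I/O operation.

def coalesce(dirty_blocks):
    runs = []
    for blk in sorted(dirty_blocks):
        if runs and blk == runs[-1][1] + 1:
            runs[-1][1] = blk          # extend the current run
        else:
            runs.append([blk, blk])    # start a new run
    return [(start, end) for start, end in runs]

# blocks dirtied in arbitrary order by many writers
print(coalesce({7, 3, 4, 5, 12, 13, 2}))
# [(2, 5), (7, 7), (12, 13)]
```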

Most serious use cases use journaling file systems on Linux and stripe the
file systems over multiple disks, so I'm not entirely sure what you aim at
with the blocks of a single extent being close together. Yes, I used to
worry about the typical stripe that does not align with the 3390 track
length, but as 3390 stopped rotating 30 years ago, the control unit cache
is not aligned by track either. I don't think anyone on Linux will defrag a
file system, especially not because a lot is either on SSD or virtualized
on RAID devices. The data-heavy applications often use FCP (SCSI) rather
than FICON attached disk because the logical I/O model doesn't take full
advantage of the complexity and cost of running your own channel programs.

The common scenario is to run Linux in a guest on z/VM so you can size the
virtual machine to meet the application requirements. And z/VM Single
System Image lets you move the running virtual machine from one member of
the cluster to the other to exploit the full capacity available in multiple
physically separate IBM Z hardware configurations. Since Linux is popular
on small devices, a lot of applications scale horizontally rather than
vertically: when your web traffic increases, you fire up a few more Linux
guests to spread the load, rather than triple the size of a single Linux
instance and expect everything in the application to scale. It is rare to
have a Linux application that can consume a full IBM Z configuration.


> I'm sure IBM has made improvements but some design limitations will be
> difficult to resolve without the correct tools. For instance,  can DB2 for
> Linux on z share a database across multiple z frames. It's been a while
> since I last looked but DB2 for z/OS was used because it outperformed DB2
> for Linux on z.
>

I expect "outperformed" depends on the type of workload and the API. When
you have a COBOL application intimately connected to DB2 to the point where
they share the same buffers and such, that's different from an API that is
transaction based and communicates through DRDA over TCP/IP as if the
application and the data could be in different places. You get away with a
lot of bad things in the application design when latency is negligible.
Customers have Linux applications use DB2 on z/OS because the data is
there, not because of performance.

Rob


Re: z Linux assembler relative or friend or foe?

2010-07-07 Thread Rob van der Heij
On Wed, Jul 7, 2010 at 8:10 AM, Miklos Szigetvari
 wrote:
>Hi
>
> My colleague is porting some assembler code to z Linux (gnc glx compiler
> ? )
> and got some "invalid op code" assembler error for "EPSW" (extract psw)

There's probably -march=z9-109 and similar options that would tell it
for which machine you generate code.

But the two abbreviations don't ring a bell with me in this context. I
believe 'glx' is the GNU framework to talk to your PC video card...

The Linux toolchain has 'gcc' create intermediate assembler source,
and 'gas' to take that and produce the object code. If you're brave,
you could write source for 'gas' by hand, but it lacks all the
features that you need for serious assembler projects.
David Bond's presentation explains the issues:
http://www.tachyonsoft.com/s8131db.pdf

Rob


Re: z Linux assembler relative or friend or foe?

2010-07-13 Thread Rob van der Heij
On Tue, Jul 13, 2010 at 7:55 PM, Tony Harminc  wrote:

> Linux may well have facilities that I am not familiar with to scan
> modules for certain byte sequences, but any such static scan is going
> to be easily foolable by even a slightly motivated programmer. Linux
> could in theory, but we know does not in practice, interpret most
> programs at run time, and thus catch any behaviour it doesn't like, at
> an extreme cost in performance. What it cannot do is alter the
> architecture of the machine it is running on in violation of the
> Principles of Operation.

I think your assessment is accurate, though I would probably have used far
less polite words for it...
Clearly there *is* something in Linux that is involved when a user
program is taken into execution, and you could envision to extend that
with code to "screen" the program (ignoring for the moment that Linux
does not actually load the code, but merely maps the module into the
address space and relies on demand paging to bring it in). The problem
with most malware however is that it uses "normal" instructions to do
bad things.
But when you run a program as a non-root user, there are still limits to
what you can break. And z/Linux has some lucky aspects that make it
less likely a program can acquire root privileges.

Screening program code for bad things is fuzzy, as the frequent
updates of my antivirus software demonstrate. And it's not even
searching for the malicious code itself, but for fingerprints of known
malicious programs.

Rob


Re: z Linux assembler relative or friend or foe?

2010-07-14 Thread Rob van der Heij
On Wed, Jul 14, 2010 at 1:54 AM, Paul Gilmartin  wrote:

> Gatekeeper, for early Macintosh OS worked quite effectively by
> intercepting suspected program behavior.  Until the malware authors
> got smarter.

And for the moment z/Linux has the advantage that attempts to store
particular sequences of x86 instructions beyond the end of a buffer are
not likely to have the desired effect on s390. :-)  Developing such
exploits for s390 is harder because of the memory model of z/Linux,
and less attractive because there are far fewer potential victims (which
is IMHO one of the reasons the Mac is less targeted as well).

PS I would also question the business model for such exploits. After
all, we know that 1000 compromised virtual Linux guests on the same
z/VM system will not be able to send out 1000 times as much spam ;-)

Rob


Re: z Linux assembler relative or friend or foe?

2010-07-14 Thread Rob van der Heij
On Wed, Jul 14, 2010 at 4:33 PM, David Bond  wrote:

> z/Linux does have the security advantage over most (all?) other Linux
> implementations that the kernel is not mapped into the user address space.
> But even if the kernel is visible to user processes there is still no sneaky
> way to acquire root privileges if the process owner does not have them.
> This is not luck.  It is intrinsic to the design.

Perspective of the observer does a lot. (and language - I probably
meant "fortunate" when I wrote "lucky" )

I believe the motivation for this memory layout was the missing
upper bit in 31-bit address mode. This looked bad for Linux for S/390
because "them" had 4G. By using different address spaces, Linux for
S/390 could have a 2G+2G model, which got closer to the 3G+1G that
most others were using.

Rob


Re: z Linux assembler relative or friend or foe?

2010-07-15 Thread Rob van der Heij
On Thu, Jul 15, 2010 at 8:47 AM, Binyamin Dissen
 wrote:

> And branch instructions.
>
> And the choice of the base register to use to point to the instruction. The
> instruction would have to be interpreted to determine what register is
> available.

Very true. While the idea of using EX sounds neat, I think you're
right that it's just not practical in the end. It is probably more
robust to have PER intercept all instructions and maintain both
contexts for you.

The Hercules people have shown that you can do a full interpreter if
you give up an order of magnitude. It's probably similar to what you
could achieve to run x86 programs on our CPU. Some smart tricks may
get a factor 2-3 back again for some applications.
The result may be a feasible business case for some. Just like writing
applications in high level frameworks sometimes makes sense, even if
it takes 1-2 orders of magnitude more CPU.

| Rob


Re: z Linux assembler relative or friend or foe?

2010-07-15 Thread Rob van der Heij
On Thu, Jul 15, 2010 at 12:03 PM, robin  wrote:

>>And branch instructions.
>
> What do you mean?
> A branch instruction is EXecuted just like the others.
> The only difference is that the branch address needs checking prior to 
> execution.

Hmm... I would expect EX of a branch never to return back to your
simulator, but resume execution outside your simulator. Wouldn't you
need to "interpret" the branch such that you load the effective
address into the register that you're using to point into the program
code that you're playing?

It's an interesting thought experiment, but more for the Friday
afternoon...  (and that's not even yet in .au )

| Rob


Re: SPMs and Ease of Maintenance

2010-07-21 Thread Rob van der Heij
On Wed, Jul 21, 2010 at 4:47 PM, Andreas F. Geissbuehler
 wrote:

> No apology necessary or expected. The idea was to prevent collisions
> and intentionally coded, "hand-written" jump/branch statements to labels
> generated by macros.

I knew there's an end to how far one can go to protect people against
their own foolishness...  next focus would be to require larger
pencils so people can't swallow them ;-)

Is there even a valid reason for coding a branch when you do SPM's?
Could we OPSYN all branch instructions to DONKEY (other than branches
generated by macros)?

| Rob

PS In my youth we had an assembler that allowed for "local" labels
(starting with a period) whose scope was limited by real labels. We
frequently used the .th and .el and .fi label to code an if/then/else.
But SPM's remove the need for such things entirely.


Re: SPMs and Ease of Maintenance

2010-07-21 Thread Rob van der Heij
On Wed, Jul 21, 2010 at 6:06 PM, Andreas F. Geissbuehler
 wrote:

> I couldn't agree more. But those dummies' bosses just don't want to
> comprehend and arguing about human qualities invariably gets
> reworded into further justification against having Assembler "...hey
> Fred made a mistake, he is the best (Java-)Programmer we have!"
> I posted this in response to the dummies issue but also see benefits
> elsewhere, i.e. one less chance for a shot in the foot. And I'd make
> heavy use of "double-dash" =LABEL's because it would be so much
> neater than &SYSNDX on all labels.

I did hear of another case where the "best assembler programmer in the
area" was also working with the assembler listing with expanded macros
to understand the flow of the program that used SPM's.

Yes, for writing macros it would certainly be cool to have a scope per
nesting level. It also avoids the messy cross reference for the labels
(I assume).

I'm thinking I also want the USING to follow the scope of blocks...
In one of my current projects converting ancient code to use SPM's, I
found several cases where blocks of code were copied and not all
labels changed along with that. It's amazing how much code still works
when you do that :-)

| Rob


Re: SPMs and Ease of Maintenance

2010-07-21 Thread Rob van der Heij
On Wed, Jul 21, 2010 at 8:00 PM, Bill Fairchild  wrote:

> Another wrong way of thinking is that any hand-coded label at all is 
> dogmatically bad, as some structured programmers apparently think.

I'm not saying all is bad, I just can't think of cases where you need
them. Especially if we consider a "go to exit" as legitimate (because
a cascade of IF statements testing various exit conditions is just not
making things easier).

| Rob


Re: Mainframe Assembler Coding Contest - new problem #22

2010-07-31 Thread Rob van der Heij
On Sat, Jul 31, 2010 at 2:13 AM, Steve Comstock
 wrote:

> Huh? 8 bits per byte, right? so l'string * 8 should do it
> (use sla for the multiply)

ROTFLA. :-)   You missed the parity bits...


Re: Mainframe Assembler Coding Contest - new problem #22

2010-08-01 Thread Rob van der Heij
On Sat, Jul 31, 2010 at 1:36 AM, Don Higgins  wrote:

> A new problem #22 - Code fastest instruction sequence to count bits in an
> arbitrary string of bytes using currently available z/Architecture
> instructions prior to new instruction coming with z196 which is estimated
> to be 5 times faster.

I've been wondering about the type of algorithms that this instruction
was meant for. I know of some cases where you're interested in "any
ones" but OC would do that. Or is emphasis on "string" as
null-terminated sequence of bytes? But counting bits in a text string
does not make much sense either...

| Rob


Re: Mainframe Assembler Coding Contest - new problem #22

2010-08-01 Thread Rob van der Heij
On Mon, Aug 2, 2010 at 8:23 AM, Fred van der Windt
 wrote:
>> I've been wondering about the type of algorithms that this
>> instruction was meant for. I know of some cases where you're
>> interested in "any ones" but OC would do that. Or is emphasis
>> on "string" as null-terminated sequence of bytes? But
>> counting bits in a text string does not make much sense either...
>
> Probably for managing bitmaps for memory- or file allocation tables?

Right you are. I was thinking they only did new instructions to help
compiler writers...

| Rob


Re: Assembler Coding Contest - new problem #22‏‏

2010-08-02 Thread Rob van der Heij
2010/8/2 john gilmore :
> The question--What use are one-bit counts for a bit string?--would occur only 
> to someone who was unfamiliar with bit maps and their uses.

I'm devastated to find myself revealed as a complete moron in front of
my peers by asking such a silly question.

I am well aware of how bit maps are used. However, most legitimate
cases I could come up with (like others, apparently) were situations
where I would also want to know *which* bit was the first one set
(where TRT would be appropriate). Or cases where I'd just want to know
that not all bits are on or off. But to have the hardware actually
bother to count the bits seems to me pure luxury.

While I see that it would help to count the free slots in an
allocation map, you'd certainly hope that the code that updates the
bitmap also keeps counters so that we don't have to count the bits too
often.
And one could compute parity, but I'm not sure that's often done on
long strings anymore.

| Rob


Re: Selecting which instruction(s) to use. Thoughts for discussioni

2010-08-23 Thread Rob van der Heij
On Mon, Aug 23, 2010 at 2:56 AM, John McKown  wrote:

Your points make a lot of sense, and I think every programmer should
have such a rationale to motivate choices he makes in programming.

> 1) KISS - when possible, use simply to understand instructions. Just to
> be kind to yourself and others who will come later. Don't do the
> following, even though it is "neat".

I've seen this argument being abused too often. Simply the fact that
the programmer did not think about it when he wrote the code does not
mean someone reading the code later does not have to think about it
either. Along those lines are redundant instructions "added for
clarity", like an LTR when the instruction that set the register
already sets the condition code.

Sure, it does take some effort to change things when you find that
your initial choice of registers was unlucky. One I found in my
inherited code is using an LM to load length and address, and then using
3 XR instructions to swap the registers so you can do an MVCL. A
straight path is so much easier to follow.

| Rob


Re: Efficient Memory List

2010-08-23 Thread Rob van der Heij
On Mon, Aug 23, 2010 at 4:45 PM, Patrick Roehl  
wrote:

> 3) The binary search from step 1 indicates where the new entry should be
> inserted.  To add the entry to the list, individual entries are moved one
> at a time (to avoid overlapping moves) to open a spot in the list for the
> new entry.

That's probably going to be an expensive process. Have you considered
using a binary tree rather than an array that is kept sorted? Doing a
balanced tree is not trivial, but would make searches much faster.
If you're not too tight on memory, you could also consider a hash
table. You don't want to let the hash table get too full. If the
objects are large compared to the index numbers, one would stow the
objects in a table as they come, and use a hash table to find them.
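The cost difference is easy to see in a sketch: the binary search itself is cheap, but keeping the array sorted means shifting entries on every insert. A hedged Python illustration (names are mine, not the original poster's code):

```python
import bisect

def insert_sorted(entries: list, key: int) -> None:
    """Sorted-array insert: O(log n) search, but the insert
    shifts every entry after the slot -- the expensive part."""
    i = bisect.bisect_left(entries, key)
    if i == len(entries) or entries[i] != key:   # skip duplicates
        entries.insert(i, key)                   # moves up to n entries

entries = []
for k in (42, 7, 99, 7, 13):
    insert_sorted(entries, k)
# entries is now sorted; a balanced tree or hash table would avoid
# the per-insert shifting at the cost of extra memory and complexity
```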

| Rob


Re: Efficient Memory List

2010-08-24 Thread Rob van der Heij
On Tue, Aug 24, 2010 at 2:44 PM, Paul Gilmartin  wrote:

> Is there a hashing technique that works well without a priori
> knowledge of the list size?

No. The purpose of the hash is to distribute the keys over the entire
table, so you need to know the output range of the hash function. The
original poster mentioned that often the list is small, but sometimes
it is very large. You could start with a hash table that handles all
the small cases. When it fills, you allocate one 4 or 8 times as large
and rehash the existing entries from the small table (the rehash can
be avoided when you can tell early that you're getting a large list).

I understood from the first post that the object also has a payload,
but the discussion about sparse bitmap suggests there isn't. If that's
the case, rather than building a chain in each hash table entry, you
could just store the key in the entry and rehash on collision (takes
some time on collision, but allows a table twice as big, so collisions
are less likely).
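The grow-and-rehash scheme can be sketched as follows; this is an illustrative Python model (all names are mine), using open addressing so the table stores bare keys without chains:

```python
class GrowingSet:
    """Open-addressing hash set that starts small and rehashes into a
    table 4x as large when about 2/3 full -- a sketch of the scheme
    described above, not production code."""
    def __init__(self, initial=16):
        self.slots = [None] * initial
        self.count = 0

    def _probe(self, key, slots):
        # Linear probing: on collision, try the next slot.
        i = hash(key) % len(slots)
        while slots[i] is not None and slots[i] != key:
            i = (i + 1) % len(slots)
        return i

    def add(self, key):
        if (self.count + 1) * 3 > len(self.slots) * 2:   # ~2/3 full
            bigger = [None] * (len(self.slots) * 4)
            for k in self.slots:                          # rehash entries
                if k is not None:
                    bigger[self._probe(k, bigger)] = k
            self.slots = bigger
        i = self._probe(key, self.slots)
        if self.slots[i] is None:
            self.slots[i] = key
            self.count += 1

    def __contains__(self, key):
        return self.slots[self._probe(key, self.slots)] == key
```

Because the table is never allowed to fill, a probe always ends at either the key or an empty slot.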

| Rob


Re: number of new instructions

2010-09-01 Thread Rob van der Heij
On Wed, Sep 1, 2010 at 4:14 AM, John Dravnieks  wrote:

> For your information and possible amusement, I have a printed copy of
> SA22-7832-06 (February 2008) on my desk and it is 2.5 inches thick (6.5 cm)

Hopefully that has been coordinated with someone in POK for balance
purposes, to avoid impact on the axial precession of the earth... If
not, IBM should maybe find you an office on the South Pole...

I believe someone (Kristine?) claimed that IBM Assembler Programming
is what makes the world go round. We might add that the printed copies
of The Principles is what makes the world wobble ;-)

| Rob - (envy! My latest printed copy is GA22-7000-6 - 16mm when I
remove the punch cards used as book markers)


Re: Instruction Set Architecture

2010-09-09 Thread Rob van der Heij
Boys! You stop fighting or I stop the car and put the assembler away
until we get home.

| Rob


Re: 16-bytes the same

2010-10-05 Thread Rob van der Heij
On Tue, Oct 5, 2010 at 3:29 PM, Bill Fairchild  wrote:

> I couldn't think of a "useful" reason for such code either, other than 
> perhaps as a teaching exercise.  But I was going to reply "Who cares what the 
> value of it is?  Why not just answer the original question?  A long time ago 
> someone wondered "What's the use of this strange thing that always points in 
> the same direction if we put it on top of a piece of wood so it will float on 
> water?  It's no good.  Let's throw it overboard."

I too appreciate the opportunity to learn. It would also be
interesting to hear about possible performance implications...
Such ideas often make me think (and ask) about where to apply the new
knowledge, or maybe just to learn more.
In this case I realized that the exact comparison might be affected by
code page. So you would compare one byte with the original, and the
others with each other. That made sense to me since we don't have
another easy way to compare a string with a character (or do we?).

> Then Frank Ramaekers' reply reminded me that I, too, have written code to 
> replace 16 (or hundreds, or thousands, of) consecutive identical bytes with 
> the one phrase "Same as above."  And I have done so more than once.

But that would involve comparing two strings, rather than a string
with a character?  I would be pretty annoyed reading a dump and get
portions omitted with "... these 32 bytes are all the same, guess what
they are..."
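The overlapping-compare idea (each byte checked against its neighbour) has an analogue outside assembler too; a hedged Python sketch of both variants discussed here:

```python
def all_same(buf: bytes) -> bool:
    """True when every byte equals its predecessor -- the shifted
    self-comparison, analogous to an overlapping compare of
    buf+1 against buf."""
    return buf[1:] == buf[:-1]

def same_as(buf: bytes, ch: int) -> bool:
    """Compare one byte with the wanted character and the rest with
    each other -- the code-page-safe variant mentioned above."""
    return len(buf) > 0 and buf[0] == ch and all_same(buf)
```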

| Rob


Re: z/OS IARV64

2010-12-09 Thread Rob van der Heij
On Thu, Dec 9, 2010 at 4:36 PM, Tom Marchant  wrote:
> On Thu, 9 Dec 2010 06:23:45 -0700, Paul Gilmartin wrote:
>
>>I understand that on z/OS Java has a special dispensation to use
>>storage within the bar.
>
> I wish you wouldn't write "within the bar".  It suggests that

Yes, programming in the bar is frowned upon...  In fact, some people
already get upset when they just hear assembler instructions being
discussed in the bar...

Couldn't help thinking of the "A Virus Walks Into a Bar..." series by
Brian Malow  :-)


Re: prize for a good replacement for "baseless"

2010-12-20 Thread Rob van der Heij
On Sun, Dec 19, 2010 at 1:04 PM, John McKown  wrote:
> How about IC-relative code? Where "IC" is short for "Instruction
> Counter"? Or "IA-relative" for "Instruction Address Relative"? As

Eeks!  I have some code still that might get you into Intensive Care
when you're unlucky... ;-)

| Rob


Re: prize for a good replacement for "baseless"

2010-12-23 Thread Rob van der Heij
On Thu, Dec 23, 2010 at 12:00 PM, Kevin Lynch
 wrote:

> The idea of calling it 'debased' brought a smile to my face.

Looks like all good ones are taken. I thought 'unbased' might work
(short enough to be useful) but you don't want your spell checker
trying to correct you into 'unbiased' either... when nothing else is left, I
sometimes favor the politically tricky choices, like we called our
Linux guests "topless penguins"  (that did not have "top" installed)

| Rob


Re: Best (or any) practices to rewrite "spaghetti"

2011-02-03 Thread Rob van der Heij
On Thu, Feb 3, 2011 at 6:29 PM, Tony Thigpen  wrote:

> Personally, I consider a 'branch to abnormal exit' much better than
> trying to unwind all the 'perform' levels, be it COBOL or Assembler.
>
> I have seen programs where they attempted to unwind everything during an
> error and ended up processing code unintentionally.

My father used to say that 'anything with "too" in it is bad' and I
had to think of that when I was recently trying to untangle some
ancient code. I allow myself a premature exit to the end of the routine
as well, but it's a slippery slope. At least I find that when you
don't give it enough thought, it's easy to end up with a lot of them
and make following the flow much harder.

My dialect of Structured Assembler does a combined "call with
automatic exit on nonzero return code". Though this is the same as
coding that extra branch, the simplification to me is that it's always
the same NZ that does the exit. If necessary, I will put a number of
those diverse tests in a subroutine and let that conclude Z or NZ.
That avoids things like this:
  CLC   DEVADDR,RANGELO
  BL EXIT
  CLC   DEVADDR,RANGEHI
  BH EXIT
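The "let a subroutine conclude Z or NZ" idea maps naturally onto a single predicate in a high-level sketch (all names here are hypothetical, echoing the DEVADDR/RANGELO/RANGEHI snippet above):

```python
def out_of_range(devaddr: int, rangelo: int, rangehi: int) -> bool:
    """Bundle the diverse exit tests into one routine that answers
    yes/no, so the caller codes a single exit test instead of
    separate BL and BH branches to EXIT."""
    return devaddr < rangelo or devaddr > rangehi

def process(devaddr: int) -> str:
    RANGELO, RANGEHI = 0x0100, 0x01FF   # illustrative device range
    if out_of_range(devaddr, RANGELO, RANGEHI):
        return "exit"                    # the single "NZ" exit
    return "processed"                   # ... normal processing
```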

| Rob


Re: Best (or any) practices to rewrite "spaghetti"

2011-02-04 Thread Rob van der Heij
On Fri, Feb 4, 2011 at 4:40 PM, Thomas Berg  wrote:

> I don't quite understand Your problems with SIGNAL.  AFAICS, You use SIGNAL
> when the situation is such that You can't handle it within Your REXX routine
> logic/context.  That is, You must abort all processing and (normally) give
> a comprehensive error message, maybe also log it.

The "abort all processing" is the magic. At best show some diagnostic
information to diagnose the problem. But not everyone can resist the
temptation...

I found some code where a bright soul had put a "restart:" somewhere
in the REXX program and then a "signal restart" in the "error:"
handler. Unfortunately the trapped error was in a "procedure" so once
the program went through that path, it continued with part of the REXX
variables hidden outside the scope (but filling up memory).

| Rob


Re: IEEE floating points

2011-03-24 Thread Rob van der Heij
On Thu, Mar 24, 2011 at 8:46 AM, Miklos Szigetvari
 wrote:
>Hi
>
>We have always ignored these small rounding differences, but here
> the "end effect" was that
>  the generated document was different, as "hyphenation" , word
> splitting worked differently.

You must have a huge canvas that you're using floating point to
position characters in a line. Missing 1 pixel on 8" with 4800 dpi
would be an error of 1e-6. Think I can do a lot of computation even in
4-byte floats and not get close to that.

Sounds more like some programming that tricked the compiler to fold an
intermediate to integer or so...  Maybe even optimizer level?  We had
something where a 64-bit integer was  M = N * 2**20 - 1  and we wanted
(M+1) in a float F. Doing float(M)+1 does not work well for large
N, while float(M+1) can be done very accurately...
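The convert-then-add pitfall is easy to reproduce with IEEE doubles (Python floats); the constants below are mine, chosen at the 2**53 precision boundary rather than Rob's N*2**20 case:

```python
M = 2**53 + 1            # one past the last integer a double holds exactly

f1 = float(M) + 1.0      # convert first: M already rounded to 2**53,
                         # and the +1.0 is then absorbed below the ulp
f2 = float(M + 1)        # add in integer arithmetic first: 2**53 + 2
                         # is exactly representable, so this is exact
```

Here f1 ends up at 2.0**53 while f2 is the exact 2.0**53 + 2: doing the arithmetic in integers before converting preserves the result.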

Rob - wondering whether rounding errors also get my Sudoku puzzles wrong :-)


Re: CPU: ASSM vs ENTERPRISE COBOL

2011-04-05 Thread Rob van der Heij
On Tue, Apr 5, 2011 at 2:27 PM, Fred van der Windt
 wrote:

> Which means it is unfortunate that Angel isn't able to post the code.
> Something must be awfully wrong with the assembler code if it is twice as 
> slow as the Enterprise COBOL code...

Guess it is possible that the assembler code implemented a (slightly)
different algorithm than what was written in COBOL.

| Rob


Re: ASM vs HLL (Was: CPU: ASSM vs ENTERPRISE COBOL - SOLVED!)

2011-04-08 Thread Rob van der Heij
On Fri, Apr 8, 2011 at 1:35 PM, Peter Relson  wrote:

> An interesting, but almost inevitable, phenomenon is that if you started
> with two programs, one in asssembler, one in a HLL, both highly optimized,
> over time, as that program is modified, the assembler one usually gets
> worse, and the HLL one gets better.

In this context, you could claim that the compiler "rewrites" (and
optimizes) the entire program code with each change. You would not do
that with a program in assembler because it takes too much time and
because we don't want to introduce new bugs beyond the intended
changes.

Another phenomenon I observed is that some teams using C have no
reservations about rewriting entire programs when they change programmers.
At the same time dropping functions or requirements they don't fully
understand...  (in another universe this week someone proposed a major
simplification because "a track always contains 12 blocks" .. sigh).

| Rob


Re: ASM vs HLL (Was: CPU: ASSM vs ENTERPRISE COBOL - SOLVED!)

2011-04-08 Thread Rob van der Heij
On Fri, Apr 8, 2011 at 4:33 PM, Edward Jaffe
 wrote:

> There is also a propensity for programmers to suggest that 'everything'
> should
> be rewritten in whatever their language "du jour" happens to be. I'm always
> astonished when someone suggests with a straight face that several million
> lines
> of working code they don't understand be 'scrapped' and rewritten. ROTFLMAO!

I once had source from a working program that could go through either
the Pascal compiler or the FORTRAN compiler. I did offer that idea as
a way to avoid religious wars about programming languages...

We put way too much value in upward compatibility. It's hard to believe I
took someone seriously when he reported a problem saying "You must have
changed something. It worked last time I ran this (which was 10 years
ago)". Fashion is to change APIs for no reason other than to "shake
off any lazy developers who are unable to keep up with change"   Or
applications that don't run after an upgrade because the developer
decided you need to specify a new option to demonstrate you really
looked at all the new options... (these are real examples)

| Rob


Re: ASM vs HLL (Was: CPU: ASSM vs ENTERPRISE COBOL - SOLVED!)

2011-04-10 Thread Rob van der Heij
On Sat, Apr 9, 2011 at 11:28 PM, John Ehrman  wrote:
> Kirk Wolf noted:
>
>> Cheating is always encouraged :-), but I think that the point was that
> neither programmer had such a routine "in his toolbox".   The C programmer
> will find something that is easily reused, the assembler guy probably
> won't.
>
> I thought the CBT tape actively encouraged "cheating".

I like the qualification of "cheating" for borrowing others' code.
Which IMHO is not the same as software reuse.

Software reuse is hard work. The implementation must be done
generically enough that it can be used in all places, and changes to that
common code need to be handled with great care. Copying (and modifying)
borrowed code is less likely to improve quality because you may be
dragging in stuff that you did not need but are still impacted by bugs
in it.

Rob


Re: IBM manuals

2011-06-08 Thread Rob van der Heij
On Wed, Jun 8, 2011 at 5:34 PM, Bill Fairchild  wrote:

> Adding paper updates (TNLs) to paper manuals (SRL) every few months was a 
> major agony (PITA).

Right... we introduced "demand paging" for manuals in the 80's. The
TNL was simply put in front of the binder, leaving the merge to the
first one who actually needed the book (and was kind enough to take
the duty). It's amazing how many books we discarded with the TNL's
unmerged...

The only one I really missed were the control block books. I never
managed to get used to softcopy versions of sticking punch cards
between the relevant pages while reading a dump :-)

Rob


Re: A Basic Question

2011-08-10 Thread Rob van der Heij
On Wed, Aug 10, 2011 at 1:20 PM, Baraniecki, Ray
 wrote:

> I never said I did not know of Google. I tried a Google search but came up 
> with many references to the IRCTC, an Indian Railway reference.

:-)  And be aware that Google is way past where it returns the same
results for everyone ... Based on your search history, profile,
geographical location and lots of other things, you get what Google
thinks is best for you. When there are corporate NAT gateways involved,
you may end up getting what's best for your colleagues.

"Let's search for it... your Google or mine?"


Re: HLASM manuals

2011-08-19 Thread Rob van der Heij
I am way too much of a newbie to dare comment. Without adequate
training, my need is mostly less than that of the average subscriber on the
list. I normally fail to predict whether the answer is in the language ref
or programmers guide.
When looking in the PoO I often just need a bit of a reminder, not always the
full detail. Like to fix my mistake of writing SLR when I want SRL... Or to
see whether the instruction modifies the cc or not. Maybe a book like the pocket
ref but in a font I can read. Bonus for a book small enough that it does not
have to be PDF only.

Certainly, most of us lost skill and patience to read anything more than a
page. It is not fair to blame the book. Read "The Shallows" while you still
can. :-)
On Aug 19, 2011 8:13 PM, "David Cole"  wrote:


OT: You know it's time to do less serious work when...

2011-12-02 Thread Rob van der Heij
From the loader building my module:

DMSLIO201W The following names are undefined:
 R15

Time to start the weekend :-)


Re: OT: You know it's time to do less serious work when...

2011-12-02 Thread Rob van der Heij
On Fri, Dec 2, 2011 at 5:58 PM, Martin Truebner  wrote:

> You must have really goofed it
>
> I can not even imagine the (weird) circumstances where such an error
> would occur.
>
> Who (in a non-Friday mode) would code
>    L     R15,=V(R15)
>    BASR  R14,R15
>
> or what caused it?

Hey, I went for the booze *after* this, not before...  Martin was very
close, except that I did not code the mishap myself but have a macro
to do it.

There's a CALL macro we inherited from a very ancient MVS release. It
takes a symbol (to generate the =V(xxx) operand) or a register in
parentheses. The other part is that I have a convenient COND macro
that does an IF / FI for a single instruction. Unfortunately HLASM
unraveled the parentheses and complained about my COND
NOTZERO,CALL,(15), and silly enough I coded it as COND
NOTZERO,CALL,(R15), which made HLASM happy. Obviously that generated
the reference to a routine R15, which was not intended...

Weekend! Rob


Re: OT: You know it's time to do less serious work when...

2011-12-04 Thread Rob van der Heij
On Sat, Dec 3, 2011 at 7:44 PM, Binyamin Dissen
 wrote:

> Bright side is that the linker will catch it (unless you DO have a subroutine
> R15)

Right, the linker caught me. Having an entry point R15 would be scary
;-)  Sort of like naming a file "-f ./" in Unix (I believe you can)

Rob


Re: Assistive Technology

2011-12-08 Thread Rob van der Heij
On Thu, Dec 8, 2011 at 10:34 PM, esst...@juno.com  wrote:
> I noticed in the back of several IBM publications several sentences regarding 
> "Assistive Technologies". For Example - MVS USING THE SUBSYSTEM INTERFACE, 
> the Appendix contains a section on Assistive Technologies.
>
> So what constitutes "Assistive Technology" ?
> Is any OEM Program Product consider "Assistive Technology" ?
> What criteria determines "Assistive Technology" vs a Program product ?

http://en.wikipedia.org/wiki/Assistive_technology

Isn't that like printing ABEND in all-caps for the elder sysprogs with
bi-focals?  ;-)


Re: ASM Program to copy a file

2011-12-09 Thread Rob van der Heij
On Fri, Dec 9, 2011 at 9:44 AM, Dougie Lawson  wrote:

>>> :>And MVC INOUTBUF,=CL(L'INOUTBUF)='Test record' would have worked as
>>> :>well.

> Hey, that's a neat trick. I wish the folks who taught me assembler
> back in 1981 had shown me that one. But for some odd reason they were
> against using literals in ANY way shape or form. We were taught to
> write all items in static storage with a label.

Yes, very neat trick. It helps to avoid redundant constants in the
code. Since I try such things immediately, I guess Martin does not use
it as often as he should (the second "=" should not be there... )

Rob

PS I recently used something like   IC R0,=C'0123456789ABCDEF'(R1)
and was pleased to see that it actually works...


Re: ASM Program to copy a file

2011-12-09 Thread Rob van der Heij
On Fri, Dec 9, 2011 at 11:50 AM, Martin Truebner  wrote:
> Rob,
>
> I felt bad about posting it... it does waste storage under a
> base register (which is still in most cases 4K)

Hey, you obviously should not use this when the output field is
seriously larger than the string. I would only do this when I
otherwise had padded the string with blanks myself (or maybe still).
When you put an 8-byte string into a 80 byte record, I'm tempted to
see initialization as something separate.

If someone changes the size of the output field a year later, I'd
appreciate an assembler error more than weird problems at runtime.

Rob


Re: mixed case in assembler

2011-12-30 Thread Rob van der Heij
On Fri, Dec 30, 2011 at 3:43 PM, John Gilmore
 wrote:

> This makes the case for using the break character, '_', instead of
> camel case that Shane and I tried to make earlier in this thread.
>
> The question which looks better is clearly subjective; the question
> which is more error-prone is settled.

Ha! That's my kind of logic... ;-)   If one option is error-prone,
then the alternative must be better...

IMHO I think our main objective is that the machine understands the
program correctly. So we must not make typos in our programming.
I'd love to see statistics, but my assumption is that typing errors
are proportional to the amount you type. The justification for
typing more than the bare minimum (say symbols of only length 1 and 2)
is to make the namespace larger and have the assembler detect your
typing errors. Practical reasons normally keep you from using very
long names. And DSECTs used in many different modules keep you from
using very short names.

But making the symbols longer will only help if it increases entropy.
So the longer names must be substantially different to prevent a
single error from producing a valid (but different) identifier. With HLASM
we're mostly on our own since very few semantic errors can be
detected. While it may sound attractive to have a "standard" that
suggests to use PFOO for the pointer to FOO, the assembler will often
not catch it when you drop the P by mistake.

I think camel case and very long identifiers have become more popular
through the use of IDE's where you pick the identifier from a
pull-down menu. With this kind of program-by-number development it
helps to have descriptive long names to select from. Because you don't
type them yourself, the longer names are not more error prone. The
frequent need of singleton instances encourages the "standard" to name
variables similar to their type - the IDE has the context info to
avoid presenting you with the wrong ones in the menu. You can't assume
that such practices do well when you have to type it all yourself.

Rob


Re: Lacunæ

2012-01-08 Thread Rob van der Heij
On Sun, Jan 8, 2012 at 10:55 PM, John P. Baker  wrote:
> Tony,
>
> On my system it appeared as "lacunae", which is a term used to indicate a 
> cavity or depression, or in this context, an omission in the design.

More precisely, the plural of "lacuna" with the ligature "ae" as Tony
himself already pointed out. Some of those 309 hits are probably due
to the same transformations. I wonder how many medical students will
end up writing it as "lacunć" just because of broken mail agents :-)
Just like you find lots of hits for the word "cow-orker" in Google,
and worse: http://wackypedia.wikia.com/wiki/Cow-orker

I configured Windows so I can switch my US keyboard to German layout
so I can type the ü and ß where applicable. That also swaps the Z and
Y keys (among some others) and sometimes I am too layz to switch the
kezboard and continue to tzpe English with the German lazout :-)  That
also creates some unique words...

Rob


Re: Lacunæ

2012-01-09 Thread Rob van der Heij
On Mon, Jan 9, 2012 at 6:17 PM, John Gilmore  wrote:

> o  'ffl', 'fi' and the like are typesetters' ligatures, usually found
> only in 'expert' fonts and not having their own code points, probably
> because needs for them vary from font to font.

Since we seem to have a rather long Friday ;-)  At least there was no
need for ligature "ff" on Facebook :-)
http://www.zdnet.com/blog/facebook/the-facebook-battle-for-effin-is-over-but-the-war-rages-on/6924

