What you said is basically what I told the programmer to do. What he
really wants the READ ... INTO to do is read the record in "somewhere"
(i.e. the I/O buffer). The ODO value is within the record itself. So he
wants COBOL to effectively do a MOVE of the ODO variable first, then do
the rest of the MOVE based on the just-updated value of the ODO
variable. Like a "two stage" move. I told him that I didn't think COBOL
worked that way, but he basically insisted that it was what he wanted
and that he was going to get it or die (ABEND) trying. I don't know why
he doesn't just READ without the INTO and then MOVE from the 01 in the FD
to the 01 in the LINKAGE SECTION (sorry, I said WORKING-STORAGE in
previous posts). He just doesn't want to. I have noted the program name
and will just shrug if/when it starts abending in production (if it gets
that far).
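For the record, the READ-without-INTO approach (plus the clamping Joel
suggests below) would look something like the sketch that follows. All
the data names (IN-FILE, IN-REC, LS-AREA, WS-MAX-COUNT, the 88 level,
the PIC sizes) are invented for illustration, not taken from his actual
program:

```cobol
      * Hypothetical sketch only -- names and sizes are made up.
      * SELECT/ASSIGN, OPEN, and the END-OF-FILE 88 level are assumed.
       FD  IN-FILE.
       01  IN-REC.
           05  IN-COUNT          PIC S9(5) COMP.
           05  IN-ENTRY          OCCURS 1 TO 1000 TIMES
                                 DEPENDING ON IN-COUNT
                                 PIC X(80).

       LINKAGE SECTION.
       01  LS-AREA.
           05  LS-COUNT          PIC S9(5) COMP.
           05  LS-ENTRY          OCCURS 1 TO 1000 TIMES
                                 DEPENDING ON LS-COUNT
                                 PIC X(80).

       PROCEDURE DIVISION USING LS-AREA.
      * READ without INTO: the record lands in the buffer, and
      * IN-COUNT now holds the count carried in the record itself.
           READ IN-FILE
               AT END SET END-OF-FILE TO TRUE
           END-READ
      * Clamp the receiving ODO to what the caller really has room
      * for, so the group MOVE cannot run past the caller's area.
           IF IN-COUNT > WS-MAX-COUNT
               MOVE WS-MAX-COUNT TO LS-COUNT
           ELSE
               MOVE IN-COUNT TO LS-COUNT
           END-IF
      * The receiving length of this MOVE is based on LS-COUNT as
      * it stands now, before any data moves.
           MOVE IN-REC TO LS-AREA
```

One caveat: that final group MOVE overwrites LS-COUNT with the record's
IN-COUNT, so if the record held more occurrences than the caller's area
can take, he would want to re-store the clamped count after the MOVE.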


On Wed, Sep 11, 2013 at 1:13 PM, Joel C. Ewing <jcew...@acm.org> wrote:

> On 09/11/2013 12:02 PM, John McKown wrote:
> > A programmer came by today with a problem. He is sometimes getting a
> > S0C4-4 abend in a COBOL program. This is a subroutine. One of the
> > parameters passed in is a data area, which can be of various lengths.
> > It is defined with an OCCURS DEPENDING ON with a data element within
> > the area. I.e. the first 05 level is PIC S9(5) COMP. The subroutine
> > does a READ of a data set into this area. This is where the abend
> > occurs. The reason is that the OCCURS DEPENDING ON maximum size is
> > significantly larger than what the caller is passing it, and the READ
> > to the 01 is trying to pad the entire possible 01 level with blanks.
> >
> > The problem is how do I describe this to a COBOL programmer who just
> > doesn't "get it". He expects COBOL to _not_ pad the "non-existent"
> > occurrences with blanks and, in fact, to not even reference the area
> > wherein they would have resided, had they existed. I'm just getting
> > "deer in headlights" looks. I'm not using the correct words, somehow.
> >
>
> Presumably the "area" in question is the target of INTO as in "READ...
> INTO area".
>
> The manuals say data movement for READ to the INTO "area" is governed by
> the rules for MOVE, and the semantics of MOVE say any length
> evaluation of the receiving field is determined just "before" any data
> is moved.  Is the DEPENDING ON variable in the receiving group item
> initialized to the proper expected value, or to the maximum value
> that the calling program can accept, prior to the READ?
>
> The way I read the manuals, the implicit MOVE of the READ instruction
> will replace the DEPENDING ON value in the receiving structure, so
> afterwards it should reflect the actual number of occurrences present,
> but the length of the MOVE and any padding of the receiving field as
> part of that MOVE will be based on contents of the receiving field's
> DEPENDING ON variable prior to the move.
>
> If the programmer is expecting COBOL to *assume* that the length of the
> receiving field is the length of the source field (in this case, the
> record just read), the manuals seem to explicitly indicate that is not
> the way things work.
>
> If my understanding is correct, a more efficient way to avoid the
> unnecessary padding would be to do a READ without "INTO", then set the
> DEPENDING ON value in the receiving area to the minimum of the maximum
> count the caller's area can hold and the DEPENDING ON value in the
> record actually read, and finally MOVE the file record to the
> receiving area.
>
> --
> Joel C. Ewing,    Bentonville, AR       jcew...@acm.org
>
> ----------------------------------------------------------------------
> For IBM-MAIN subscribe / signoff / archive access instructions,
> send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
>



-- 
As of next week, passwords will be entered in Morse code.

Maranatha! <><
John McKown
