Re: COBOL 2014 dynamic capacity tables

2016-08-09 Thread Bernd Oppolzer

At the moment I only look at one HANC (heap segment),
which is of limited size, depending on the HEAP parameters
on enclave startup. Typical size is from some KB to some MB.
For example, today I tested with 64 KB heap segment size,
which is pretty small.

That is, if you allocate several small areas which take (much) less than
64 KB, all is fine, and you will get a significant delta listing. Today
there were only 11 areas of (in sum) 600 bytes which were not freed on
the interesting function call. (This could anyway be a problem if this
function is called several million times per day in an online environment;
in fact it is a function that gets a customer address from the DB2 database.)

If a new heap segment is started between two calls, the delta listing
(as it is implemented at the moment) will not look so good. You can control
this by defining larger segment sizes.

Because the snapshot and the delta listing are limited to one HANC, they are
pretty fast.

You can take a snapshot of the whole heap chain, but there is no
delta listing of the whole heap chain at the moment. The delta listing
of the actual HANC seemed sufficient as a first shot.
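
The snapshot-and-compare idea can be sketched in ordinary C++ (a hypothetical illustration of the technique only, not the actual implementation, which walks the LE heap control blocks in place):

```cpp
#include <cstddef>
#include <map>

// Record every allocated area (address, length) at two points in time and
// report the areas present in the second snapshot but not the first, i.e.
// allocated in the meantime and not freed. A std::map stands in here for
// the walk over the heap control blocks.
using Snapshot = std::map<void*, std::size_t>;  // address -> length

Snapshot delta(const Snapshot& before, const Snapshot& after) {
    Snapshot leaked;
    for (const auto& area : after)
        if (before.find(area.first) == before.end())
            leaked.insert(area);  // new since the first snapshot
    return leaked;
}
```

Calling this before and after a suspect function yields exactly the "unfreed" areas described above.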

Other parameters of the heap snapshot function are:

- which file handle for output? Default is stderr
- do you want size summary or not? (number of occupied and free areas,
total size)
- how much of each occupied area do you want dumped
(nothing at all, only the address and the length, or the first n bytes)?
- only the actual HANC or the whole heap chain?
- simple hex dump of the HANCs as a whole, or edited HANC (header info
with field names, individual areas and so on)
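
As a hypothetical illustration only (the struct and field names are invented for this sketch, not the actual interface), those parameters could map onto a C-style options struct:

```cpp
#include <cstddef>
#include <cstdio>

// Invented names; each field mirrors one of the parameters listed above.
struct heap_snap_opts {
    FILE*       out;          // output file handle (default stderr)
    bool        summary;      // counts and total sizes of used/free areas
    std::size_t dump_bytes;   // 0 = address/length only, n = first n bytes
    bool        whole_chain;  // false = actual HANC only, true = whole chain
    bool        raw_hex;      // true = plain hex dump, false = edited HANC
};

heap_snap_opts default_opts() {
    return { stderr, true, 0, false, false };
}
```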

The implementation of the LE heap seems to be pretty stable over time;
the documentation is from 1997 and 2005, and the information is still valid,
so I guess there will not be many changes in the future. The algorithm is
from the IBM Watson Research Center; see the presentation below.

Kind regards

Bernd



David Crayford schrieb:
Cool! So you walk the heap control blocks and take snapshots? Does that
work well when lots of memory is being allocated?



On 9/08/2016 9:16 PM, bernd.oppol...@t-online.de wrote:

No, I don't instrument at all.

I use my knowledge of the LE heap implementation (see the presentation
mentioned below) to store a map of all allocated and free areas at a
given time (area addresses and lengths); then - some microseconds later -
I compare that with the new image and build a delta listing which shows
all the areas that have changed in the meantime, that is, which have been
allocated and not freed.

If you call this before and after a critical function, you will see
exactly what areas are left "unfreed" by this function. This proved to be
very helpful in our environment, especially if the critical function is
written by another team. (I don't need to "instrument" the functions of
the foreign team.)


Kind regards

Bernd



-----Original Message-----
Subject: Re: COBOL 2014 dynamic capacity tables
Date: 2016-08-09T13:53:15+0200
From: "David Crayford" 
To: "IBM-MAIN@LISTSERV.UA.EDU" 

I use the HEAPCHK(ON,1,0,10,10) LE runtime option to find my memory
leaks. How is your instrumentation different? I assume you wrap
malloc()/free() calls?


On 9/08/2016 1:50 AM, Bernd Oppolzer wrote:

IMO there is no need to create additional heaps
to support dynamic tables in COBOL.

I did some research some days ago on the LE heap
implementation and found an old IBM presentation (from 2005) on
this topic called "Stacks and Heaps" (you will find it using
Google; the full title reads something like "Stacks are easy,
heaps are fun"). There are COBOL examples included which
use the LE functions CEEGTST (get storage) and CEEFRST
(free storage) - I hope I recall the names correctly.

Based on these functions (which are malloc and free, after all),
you could do all sorts of dynamic allocations from COBOL, using
the normal LE "user heap", which is sufficient, IMO.

BTW: I wrote some functions in the last few days which dump all
the allocated heap areas and - more interestingly - write reports on
how the heap has changed since the last call. This is very interesting:
if you have a function that you think does not free its heap areas
properly, you can call this heap report function before and after
this function call and find out by looking at the so-called heap delta
listing.

If you are interested, feel free to contact me offline.

Kind regards

Bernd





--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN





---

Re: COBOL 2014 dynamic capacity tables

2016-08-09 Thread David Crayford

On 9/08/2016 11:42 PM, Frank Swarbrick wrote:

Neither here nor there, but I did not know that C++ enums could have 
non-numeric values!  Are they limited to a single character?  Just wondering...


With C/C++, anything in single quotes is a char, which equates to an
integer. You cannot switch() on strings; that makes no sense, as a switch
is a branch table. Having said that, C# supports switch on strings, but
that's syntactic sugar - very nice, though.
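
A minimal C++ illustration of the point (character literals are just integral values, so they are legal enum initializers and switch case labels):

```cpp
#include <cstring>

// Character literals used as enum values: each is simply the character's
// integer code, so it behaves like any other integral constant in a switch.
enum method { method_new = 'N', method_insert = 'I', method_find = 'F' };

const char* describe(method m) {
    switch (m) {
    case method_new:    return "create";
    case method_insert: return "insert";
    case method_find:   return "find";
    }
    return "unknown";
}
```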



From: IBM Mainframe Discussion List  on behalf of David 
Crayford 
Sent: Monday, August 8, 2016 9:01 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: COBOL 2014 dynamic capacity tables

On 9/08/2016 12:36 AM, Farley, Peter x23353 wrote:

David,

Not so easy to write as you might think.  The COBOL LE environment and the 
C/C++ LE environment are very different.  Calling C/C++ runtime routines (other 
than the Metal C ones resident in the system, but even some of those require 
some memory-allocation initialization) requires that the C/C++ RTL environment 
be set up, but you do not want to do that for every call, so you have to have 
at least a name/token pair to save the (created once) C/C++ environment.

#pragma linkage(...,fetchable) takes care of the ILC linkage. LE will
dynamically load the C++ module and use the same LE environment for both
the COBOL and the C++ program. I've done this before for ILC calls
between HLASM->C++ and it works well. It's very fast, the call overhead
is just a few instructions in the FECB glue code.

I wrote a test with a COBOL main that calls a C++ subroutine 10,000,000
times. It ran in 01.33 CPU seconds, roughly the same as C++ calling a
statically linked C++ routine.

 identification division.

 program-id.  cobilc.

 data division.

 working-storage section.

 01  set-module-name  pic x(08) value "PCOLSET ".

 procedure division.

 perform 10000000 times
 call set-module-name
 end-perform

 goback.

#pragma linkage(PCOLSET,fetchable)

extern "C" int PCOLSET()
{
  return 0;
}

HTRT01I                                       CPU (Total)  Elapsed      CPU (TCB)    CPU (SRB)    Service
HTRT02I Jobname  Stepname ProcStep RC  I/O    hh:mm:ss.th  hh:mm:ss.th  hh:mm:ss.th  hh:mm:ss.th  Units
HTRT03I COBILC   C                 00  73     01.33        02.24        01.33        00.00        29549


And probably impossible for any RTL routine that requires POSIX ON, though I 
don't suppose the data collection routines fall into that category.

One of my colleagues wrote a POSIX(ON) COBOL program a few weeks ago. It
uses the new callable services for HTTP web services and POSIX(ON) is a
requirement. No problems.


I investigated this once upon a time and decided that with the amount of work 
required, it would probably be better to wait for IBM to provide it.  Maybe 
COBOL V6++ will do that.  :)

From what I can tell it would be quite easy. It's simple to write a C++
module to wrap an STL container. I would design it to take one argument,
a COBOL record (C struct) with a request type, the set handle and a
record buffer - for example, for a dictionary (std::set): NEW, TERM,
INSERT, FIND, REPLACE, DELETE etc. I would store records in the set as
C++ strings to simplify memory management and write a custom comparator
function for comparing keys. One constraint would be that record keys
must be fixed length, but this is COBOL, right, so that's nothing new.
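
A minimal sketch of that design (assumed, not the poster's actual code): store each record as a std::string and compare only a fixed-length key prefix, so a lookup needs only the key:

```cpp
#include <cstddef>
#include <set>
#include <string>

// Comparator that orders records by the first `keylen` bytes only, so a
// find() with just the key locates the full record.
struct key_prefix_less {
    std::size_t keylen;
    bool operator()(const std::string& a, const std::string& b) const {
        return a.compare(0, keylen, b, 0, keylen) < 0;
    }
};

using RecordSet = std::set<std::string, key_prefix_less>;
```

With a 4-byte key, `RecordSet s(key_prefix_less{4});` lets `s.find("KEY1")` retrieve the whole record inserted as `"KEY1 payload"`.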

Excuse my COBOL - it's been a while - but something like this, which would
have a C/C++ structure mapping in the API:

01 stl-set.
   05 stl-set-token       pic x(4) value low-values.
   05 stl-set-method      pic x.
      88 stl-set-method-new     value 'N'.
      88 stl-set-method-insert  value 'I'.
      88 stl-set-method-find    value 'F'.
      88 stl-set-method-update  value 'U'.
      88 stl-set-method-delete  value 'D'.
      88 stl-set-method-term    value 'T'.
   05 stl-set-key-length  pic 9(8) binary.
   05 stl-set-rec-length  pic 9(8) binary.
   05 stl-set-rec-ptr     pointer.

struct stl_set
{
  void * token;            // ptr to std::set instance
  enum                     // request type
  {
    method_new    = 'N',   // create a new set
    method_insert = 'I',   // insert a record
    method_find   = 'F',   // find a record
    method_update = 'U',   // update a record
    method_delete = 'D',   // delete a record
    method_term   = 'T'    // destroy the set
  } method;
  size_t keylen;           // [input]  the fixed key length
  size_t reclen;           // [in/out] the record length
  void * rec;              // [in/out] the record buffer
};


Peter

-----Original Message-----
From: IBM M

Re: COBOL 2014 dynamic capacity tables

2016-08-09 Thread Frank Swarbrick
Neither here nor there, but I did not know that C++ enums could have 
non-numeric values!  Are they limited to a single character?  Just wondering...


From: IBM Mainframe Discussion List  on behalf of 
David Crayford 
Sent: Monday, August 8, 2016 9:01 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: COBOL 2014 dynamic capacity tables

On 9/08/2016 12:36 AM, Farley, Peter x23353 wrote:
> David,
>
> Not so easy to write as you might think.  The COBOL LE environment and the 
> C/C++ LE environment are very different.  Calling C/C++ runtime routines 
> (other than the Metal C ones resident in the system, but even some of those 
> require some memory-allocation initialization) requires that the C/C++ RTL 
> environment be set up, but you do not want to do that for every call, so you 
> have to have at least a name/token pair to save the (created once) C/C++ 
> environment.

#pragma linkage(...,fetchable) takes care of the ILC linkage. LE will
dynamically load the C++ module and use the same LE environment for both
the COBOL and the C++ program. I've done this before for ILC calls
between HLASM->C++ and it works well. It's very fast, the call overhead
is just a few instructions in the FECB glue code.

I wrote a test with a COBOL main that calls a C++ subroutine 10,000,000
times. It ran in 01.33 CPU seconds, roughly the same as C++ calling a
statically linked C++ routine.

identification division.

program-id.  cobilc.

data division.

working-storage section.

01  set-module-name  pic x(08) value "PCOLSET ".

procedure division.

perform 10000000 times
call set-module-name
end-perform

goback.

#pragma linkage(PCOLSET,fetchable)

extern "C" int PCOLSET()
{
 return 0;
}

HTRT01I                                       CPU (Total)  Elapsed      CPU (TCB)    CPU (SRB)    Service
HTRT02I Jobname  Stepname ProcStep RC  I/O    hh:mm:ss.th  hh:mm:ss.th  hh:mm:ss.th  hh:mm:ss.th  Units
HTRT03I COBILC   C                 00  73     01.33        02.24        01.33        00.00        29549

> And probably impossible for any RTL routine that requires POSIX ON, though I 
> don't suppose the data collection routines fall into that category.

One of my colleagues wrote a POSIX(ON) COBOL program a few weeks ago. It
uses the new callable services for HTTP web services and POSIX(ON) is a
requirement. No problems.

> I investigated this once upon a time and decided that with the amount of work 
> required, it would probably be better to wait for IBM to provide it.  Maybe 
> COBOL V6++ will do that.  :)

From what I can tell it would be quite easy. It's simple to write a C++
module to wrap an STL container. I would design it to take one argument,
a COBOL record (C struct) with a request type, the set handle and a
record buffer - for example, for a dictionary (std::set): NEW, TERM,
INSERT, FIND, REPLACE, DELETE etc. I would store records in the set as
C++ strings to simplify memory management and write a custom comparator
function for comparing keys. One constraint would be that record keys
must be fixed length, but this is COBOL, right, so that's nothing new.

Excuse my COBOL - it's been a while - but something like this, which would
have a C/C++ structure mapping in the API:

01 stl-set.
   05 stl-set-token       pic x(4) value low-values.
   05 stl-set-method      pic x.
      88 stl-set-method-new     value 'N'.
      88 stl-set-method-insert  value 'I'.
      88 stl-set-method-find    value 'F'.
      88 stl-set-method-update  value 'U'.
      88 stl-set-method-delete  value 'D'.
      88 stl-set-method-term    value 'T'.
   05 stl-set-key-length  pic 9(8) binary.
   05 stl-set-rec-length  pic 9(8) binary.
   05 stl-set-rec-ptr     pointer.

struct stl_set
{
  void * token;            // ptr to std::set instance
  enum                     // request type
  {
    method_new    = 'N',   // create a new set
    method_insert = 'I',   // insert a record
    method_find   = 'F',   // find a record
    method_update = 'U',   // update a record
    method_delete = 'D',   // delete a record
    method_term   = 'T'    // destroy the set
  } method;
  size_t keylen;           // [input]  the fixed key length
  size_t reclen;           // [in/out] the record length
  void * rec;              // [in/out] the record buffer
};

> Peter
>
> -----Original Message-----
> From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On 
> Behalf Of David Crayford
> Sent: Monday, August 08, 2016 7:49 AM
> To: IBM-MAIN@LISTSERV.UA.EDU
> Subject: Re: COBOL 2014 dynamic capacity tables
>
> On 5/08/2016 11:11 PM,

Re: AW: COBOL 2014 dynamic capacity tables

2016-08-09 Thread David Crayford
Cool! So you walk the heap control blocks and take snapshots? Does that
work well when lots of memory is being allocated?



On 9/08/2016 9:16 PM, bernd.oppol...@t-online.de wrote:

No, I don't instrument at all.

I use my knowledge of the LE heap implementation (see the presentation
mentioned below) to store a map of all allocated and free areas at a given
time (area addresses and lengths); then - some microseconds later - I compare
that with the new image and build a delta listing which shows all the areas
that have changed in the meantime, that is, which have been allocated and not
freed.

If you call this before and after a critical function, you will see exactly
what areas are left "unfreed" by this function. This proved to be very
helpful in our environment, especially if the critical function is written
by another team. (I don't need to "instrument" the functions of the foreign
team.)

Kind regards

Bernd



-----Original Message-----
Subject: Re: COBOL 2014 dynamic capacity tables
Date: 2016-08-09T13:53:15+0200
From: "David Crayford" 
To: "IBM-MAIN@LISTSERV.UA.EDU" 

I use the HEAPCHK(ON,1,0,10,10) LE runtime option to find my memory
leaks. How is your instrumentation different? I assume you wrap
malloc()/free() calls?


On 9/08/2016 1:50 AM, Bernd Oppolzer wrote:

IMO there is no need to create additional heaps
to support dynamic tables in COBOL.

I did some research some days ago on the LE heap
implementation and found an old IBM presentation (from 2005) on
this topic called "Stacks and Heaps" (you will find it using
Google, the full title reads something like "Stacks are easy,
heaps are fun"). There are COBOL examples included, which
use the LE functions CEEGTST (get storage) and CEEFRST
(free storage) - I hope I recall the names correctly.

Based on these functions (which are malloc and free, after all),
you could do all sorts of dynamic allocations from COBOL, using
the normal LE "user heap", which is sufficient, IMO.

BTW: I wrote some functions in the last few days which dump all
the allocated heap areas and - more interestingly - write reports on
how the heap has changed since the last call. This is very interesting:
if you have a function that you think does not free its heap areas
properly, you can call this heap report function before and after
this function call and find out by looking at the so-called heap delta
listing.

If you are interested, feel free to contact me offline.

Kind regards

Bernd









AW: COBOL 2014 dynamic capacity tables

2016-08-09 Thread bernd.oppol...@t-online.de
No, I don't instrument at all.

I use my knowledge of the LE heap implementation (see the presentation
mentioned below) to store a map of all allocated and free areas at a given
time (area addresses and lengths); then - some microseconds later - I compare
that with the new image and build a delta listing which shows all the areas
that have changed in the meantime, that is, which have been allocated and not
freed.

If you call this before and after a critical function, you will see exactly
what areas are left "unfreed" by this function. This proved to be very
helpful in our environment, especially if the critical function is written
by another team. (I don't need to "instrument" the functions of the foreign
team.)

Kind regards

Bernd



-----Original Message-----
Subject: Re: COBOL 2014 dynamic capacity tables
Date: 2016-08-09T13:53:15+0200
From: "David Crayford" 
To: "IBM-MAIN@LISTSERV.UA.EDU" 

I use the HEAPCHK(ON,1,0,10,10) LE runtime option to find my memory 
leaks. How is your instrumentation different? I assume you wrap 
malloc()/free() calls?


On 9/08/2016 1:50 AM, Bernd Oppolzer wrote:
> IMO there is no need to create additional heaps
> to support dynamic tables in COBOL.
>
> I did some research some days ago on the LE heap
> implementation and found an old IBM presentation (from 2005) on
> this topic called "Stacks and Heaps" (you will find it using
> Google, the full title reads something like "Stacks are easy,
> heaps are fun"). There are COBOL examples included, which
> use the LE functions CEEGTST (get storage) and CEEFRST
> (free storage) - I hope I recall the names correctly.
>
> Based on these functions (which are malloc and free, after all),
> you could do all sorts of dynamic allocations from COBOL, using
> the normal LE "user heap", which is sufficient, IMO.
>
> BTW: I wrote some functions in the last few days which dump all
> the allocated heap areas and - more interestingly - write reports on
> how the heap has changed since the last call. This is very interesting:
> if you have a function that you think does not free its heap areas
> properly, you can call this heap report function before and after
> this function call and find out by looking at the so-called heap delta
> listing.
>
> If you are interested, feel free to contact me offline.
>
> Kind regards
>
> Bernd
>
>




Re: COBOL 2014 dynamic capacity tables

2016-08-09 Thread David Crayford
I use the HEAPCHK(ON,1,0,10,10) LE runtime option to find my memory 
leaks. How is your instrumentation different? I assume you wrap 
malloc()/free() calls?



On 9/08/2016 1:50 AM, Bernd Oppolzer wrote:

IMO there is no need to create additional heaps
to support dynamic tables in COBOL.

I did some research some days ago on the LE heap
implementation and found an old IBM presentation (from 2005) on
this topic called "Stacks and Heaps" (you will find it using
Google, the full title reads something like "Stacks are easy,
heaps are fun"). There are COBOL examples included, which
use the LE functions CEEGTST (get storage) and CEEFRST
(free storage) - I hope I recall the names correctly.

Based on these functions (which are malloc and free, after all),
you could do all sorts of dynamic allocations from COBOL, using
the normal LE "user heap", which is sufficient, IMO.

BTW: I wrote some functions in the last few days which dump all
the allocated heap areas and - more interestingly - write reports on
how the heap has changed since the last call. This is very interesting:
if you have a function that you think does not free its heap areas
properly, you can call this heap report function before and after
this function call and find out by looking at the so-called heap delta
listing.


If you are interested, feel free to contact me offline.

Kind regards

Bernd



Frank Swarbrick schrieb:
By "heap pool" are you referring to using CEECRHP to create 
additional LE heaps?  I am doing that upon creation of the first 
"dynamic table" within a program.  (Just using the defaults of 0 for 
each of the CEECRHP parameters at the moment.)  Are you thinking it 
might make sense to use a separate heap for each table?  I have no 
idea what phi is (I took neither Greek nor higher mathematics), but 
I'll take a look at it.


I personally would like COBOL to have most of those "collection
classes" you refer to.  But I'm not sure how user friendly these ILC
wrappers you refer to would be.  Feel free to develop them, though!
:-)  We don't have access to the C/C++ compiler, and thus I will not
be playing around with that.


Frank








Re: COBOL 2014 dynamic capacity tables

2016-08-08 Thread David Crayford

On 9/08/2016 11:01 AM, David Crayford wrote:


Not so easy to write as you might think.  The COBOL LE environment 
and the C/C++ LE environment are very different. Calling C/C++ 
runtime routines (other than the Metal C ones resident in the system, 
but even some of those require some memory-allocation initialization) 
requires that the C/C++ RTL environment be set up, but you do not 
want to do that for every call, so you have to have at least a 
name/token pair to save the (created once) C/C++ environment.


#pragma linkage(...,fetchable) takes care of the ILC linkage. LE will 
dynamically load the C++ module and use the same LE environment for 
both the COBOL and the C++ program. I've done this before for ILC 
calls between HLASM->C++ and it works well. It's very fast, the call 
overhead is just a few instructions in the FECB glue code.


I wrote a test with a COBOL main that calls a C++ subroutine 
10,000,000 times. It ran in 01.33 CPU seconds, roughly the same as C++ 
calling a statically linked C++ routine. 


It turns out that statically linking the C++ subroutine not only works,
it's faster.


   identification division.

   program-id.  cobilc.

   data division.

   working-storage section.

   01 set-module-namepic x(8) value 'PCOLSET'.

   01 stl-set.
      05 stl-set-token       pic x(4) value low-values.
      05 stl-set-method      pic x.
         88 stl-set-method-new     value 'N'.
         88 stl-set-method-insert  value 'I'.
         88 stl-set-method-find    value 'F'.
         88 stl-set-method-update  value 'U'.
         88 stl-set-method-delete  value 'D'.
         88 stl-set-method-term    value 'T'.
      05 stl-set-key-length  pic 9(8) binary.
      05 stl-set-rec-length  pic 9(8) binary.
      05 stl-set-rec-ptr     pointer.

   procedure division.

   perform 10000000 times
   call "PCOLSET" using stl-set
   end-perform

   goback.

HTRT01I                                       CPU (Total)  Elapsed      CPU (TCB)    CPU (SRB)    Service
HTRT02I Jobname  Stepname ProcStep RC  I/O    hh:mm:ss.th  hh:mm:ss.th  hh:mm:ss.th  hh:mm:ss.th  Units
HTRT03I COBPSET  C                 00  144    00.33        00.86        00.33        00.00        7256




Re: COBOL 2014 dynamic capacity tables

2016-08-08 Thread David Crayford

On 9/08/2016 12:36 AM, Farley, Peter x23353 wrote:

David,

Not so easy to write as you might think.  The COBOL LE environment and the 
C/C++ LE environment are very different.  Calling C/C++ runtime routines (other 
than the Metal C ones resident in the system, but even some of those require 
some memory-allocation initialization) requires that the C/C++ RTL environment 
be set up, but you do not want to do that for every call, so you have to have 
at least a name/token pair to save the (created once) C/C++ environment.


#pragma linkage(...,fetchable) takes care of the ILC linkage. LE will 
dynamically load the C++ module and use the same LE environment for both 
the COBOL and the C++ program. I've done this before for ILC calls 
between HLASM->C++ and it works well. It's very fast, the call overhead 
is just a few instructions in the FECB glue code.


I wrote a test with a COBOL main that calls a C++ subroutine 10,000,000 
times. It ran in 01.33 CPU seconds, roughly the same as C++ calling a 
statically linked C++ routine.


   identification division.

   program-id.  cobilc.

   data division.

   working-storage section.

   01  set-module-name  pic x(08) value "PCOLSET ".

   procedure division.

   perform 10000000 times
   call set-module-name
   end-perform

   goback.

#pragma linkage(PCOLSET,fetchable)

extern "C" int PCOLSET()
{
return 0;
}

HTRT01I                                       CPU (Total)  Elapsed      CPU (TCB)    CPU (SRB)    Service
HTRT02I Jobname  Stepname ProcStep RC  I/O    hh:mm:ss.th  hh:mm:ss.th  hh:mm:ss.th  hh:mm:ss.th  Units
HTRT03I COBILC   C                 00  73     01.33        02.24        01.33        00.00        29549



And probably impossible for any RTL routine that requires POSIX ON, though I 
don't suppose the data collection routines fall into that category.


One of my colleagues wrote a POSIX(ON) COBOL program a few weeks ago. It 
uses the new callable services for HTTP web services and POSIX(ON) is a 
requirement. No problems.



I investigated this once upon a time and decided that with the amount of work 
required, it would probably be better to wait for IBM to provide it.  Maybe 
COBOL V6++ will do that.  :)


From what I can tell it would be quite easy. It's simple to write a C++
module to wrap an STL container. I would design it to take one argument,
a COBOL record (C struct) with a request type, the set handle and a
record buffer - for example, for a dictionary (std::set): NEW, TERM,
INSERT, FIND, REPLACE, DELETE etc. I would store records in the set as
C++ strings to simplify memory management and write a custom comparator
function for comparing keys. One constraint would be that record keys
must be fixed length, but this is COBOL, right, so that's nothing new.


Excuse my COBOL - it's been a while - but something like this, which would
have a C/C++ structure mapping in the API:


01 stl-set.
   05 stl-set-token       pic x(4) value low-values.
   05 stl-set-method      pic x.
      88 stl-set-method-new     value 'N'.
      88 stl-set-method-insert  value 'I'.
      88 stl-set-method-find    value 'F'.
      88 stl-set-method-update  value 'U'.
      88 stl-set-method-delete  value 'D'.
      88 stl-set-method-term    value 'T'.
   05 stl-set-key-length  pic 9(8) binary.
   05 stl-set-rec-length  pic 9(8) binary.
   05 stl-set-rec-ptr     pointer.

struct stl_set
{
  void * token;            // ptr to std::set instance
  enum                     // request type
  {
    method_new    = 'N',   // create a new set
    method_insert = 'I',   // insert a record
    method_find   = 'F',   // find a record
    method_update = 'U',   // update a record
    method_delete = 'D',   // delete a record
    method_term   = 'T'    // destroy the set
  } method;
  size_t keylen;           // [input]  the fixed key length
  size_t reclen;           // [in/out] the record length
  void * rec;              // [in/out] the record buffer
};


Peter

-----Original Message-----
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of David Crayford
Sent: Monday, August 08, 2016 7:49 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: COBOL 2014 dynamic capacity tables

On 5/08/2016 11:11 PM, Frank Swarbrick wrote:

That's good to know.  I searched the internet and found a page about
implementing dynamic arrays in C and he was using "double", but 1.5 also
sounds reasonable.  I wonder if perhaps there should be some sort of
ratcheting down as the number of rows gets very large.

The C++ runtime library on z/OS is a commercial offering from Dinkumware.
Interestingly, they use phi as the growth factor. A lot of the choices
seem to be based on the pro
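
For background, the growth-factor trade-off under discussion can be sketched as follows (an illustration of the general argument, not Dinkumware's code): with factor 2 the sum of all previously freed blocks is always smaller than the next allocation, so freed space can never be reused in place, while factors below phi (about 1.618) eventually allow such reuse.

```cpp
#include <cstddef>

// Each reallocation multiplies the capacity by the chosen growth factor;
// the factor trades reallocation frequency against memory reuse.
std::size_t grow(std::size_t capacity, double factor) {
    std::size_t next = static_cast<std::size_t>(capacity * factor);
    return next > capacity ? next : capacity + 1;  // always make progress
}
```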

Re: COBOL 2014 dynamic capacity tables

2016-08-08 Thread Frank Swarbrick
I used a single separate heap so that the RPTSTG(ON) report will
report "my" storage actions separately from COBOL's:

HEAP statistics:
  Initial size:32768
  Increment size:  32768
  Total heap storage used (sugg. initial size):   368656
  Successful Get Heap requests:   11
  Successful Free Heap requests:   2
  Number of segments allocated:2
  Number of segments freed:0

Additional Heap statistics:
  Successful Create Heap requests: 1
  Successful Discard Heap requests:0
  Total heap storage used: 30088
  Successful Get Heap requests:   20
  Successful Free Heap requests:  16
  Number of segments allocated:2
  Number of segments freed:0



From: IBM Mainframe Discussion List  on behalf of 
Bernd Oppolzer 
Sent: Monday, August 8, 2016 11:50 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: COBOL 2014 dynamic capacity tables

IMO there is no need to create additional heaps
to support dynamic tables in COBOL.

I did some research some days ago on the LE heap
implementation and found an old IBM presentation (from 2005) on
this topic called "Stacks and Heaps" (you will find it using
Google, the full title reads something like "Stacks are easy,
heaps are fun"). There are COBOL examples included, which
use the LE functions CEEGTST (get storage) and CEEFRST
(free storage) - I hope I recall the names correctly.

Based on these functions (which are malloc and free, after all),
you could do all sorts of dynamic allocations from COBOL, using
the normal LE "user heap", which is sufficient, IMO.

BTW: I wrote some functions in the last few days which dump all
the allocated heap areas and - more interestingly - write reports on
how the heap has changed since the last call. This is very interesting:
if you have a function that you think does not free its heap areas
properly, you can call this heap report function before and after
this function call and find out by looking at the so-called heap delta
listing.

If you are interested, feel free to contact me offline.

Kind regards

Bernd



Frank Swarbrick schrieb:
> By "heap pool" are you referring to using CEECRHP to create additional LE 
> heaps?  I am doing that upon creation of the first "dynamic table" within a 
> program.  (Just using the defaults of 0 for each of the CEECRHP parameters at 
> the moment.)  Are you thinking it might make sense to use a separate heap for 
> each table?  I have no idea what phi is (I took neither Greek nor higher 
> mathematics), but I'll take a look at it.
>
> I personally would like COBOL to have most of those "collection classes" you 
> refer to.  But I'm not sure how user friendly these ILC wrappers you refer 
> to would be.  Feel free to develop them, though!  :-)  We don't have access 
> to the C/C++ compiler, and thus I will not be playing around with that.
>
> Frank
>
>




Re: COBOL 2014 dynamic capacity tables

2016-08-08 Thread Bernd Oppolzer

IMO there is no need to create additional heaps
to support dynamic tables in COBOL.

I did some research some days ago on the LE heap
implementation and found an old IBM presentation (from 2005) on
this topic called "Stacks and Heaps" (you will find it using
Google, the full title reads something like "Stacks are easy,
heaps are fun"). There are COBOL examples included, which
use the LE functions CEEGTST (get storage) and CEEFRST
(free storage) - I hope I recall the names correctly.

Based on these functions (which are malloc and free, after all),
you could do all sorts of dynamic allocations from COBOL, using
the normal LE "user heap", which is sufficient, IMO.

BTW: I wrote some functions in the last few days which dump all
the allocated heap areas and - more interesting - write reports on
how the heap has changed since the last call. This is very useful:
if you have a function that you suspect does not free its heap areas
properly, you can call this heap report function before and after
that function call and find out by looking at the so-called heap delta
listing.
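[Editor's sketch] The heap-delta idea described above can be pictured in plain C. This is not Bernd's code; the names (t_malloc, t_free, heap_snapshot, heap_delta) are invented for illustration, and a real LE version would walk the HANC chain rather than keep its own allocation list:

```c
#include <stdio.h>
#include <stdlib.h>

/* Track live allocations so two snapshots can be compared.  Purely a
   sketch of the concept; not how the LE heap is actually implemented. */
typedef struct alloc_rec {
    void  *addr;
    size_t len;
    unsigned long seq;            /* allocation sequence number */
    struct alloc_rec *next;
} alloc_rec;

static alloc_rec *live = NULL;    /* list of areas not yet freed */
static unsigned long seq = 0;
static unsigned long snap_seq = 0; /* sequence number at last snapshot */

void *t_malloc(size_t n) {
    void *p = malloc(n);
    if (p) {
        alloc_rec *r = malloc(sizeof *r);
        r->addr = p; r->len = n; r->seq = ++seq;
        r->next = live; live = r;
    }
    return p;
}

void t_free(void *p) {
    alloc_rec **pp = &live;
    while (*pp && (*pp)->addr != p) pp = &(*pp)->next;
    if (*pp) { alloc_rec *r = *pp; *pp = r->next; free(r); }
    free(p);
}

void heap_snapshot(void) { snap_seq = seq; }

/* Delta listing: areas allocated since the last snapshot and still not
   freed.  Returns the total size of such areas. */
size_t heap_delta(FILE *out) {
    size_t total = 0;
    for (alloc_rec *r = live; r; r = r->next)
        if (r->seq > snap_seq) {
            fprintf(out, "not freed: %p  %zu bytes\n", r->addr, r->len);
            total += r->len;
        }
    return total;
}
```

Calling heap_snapshot() before the suspect function and heap_delta() after it gives exactly the "before/after" leak report described above, just without the HANC-level detail.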


If you are interested, feel free to contact me offline.

Kind regards

Bernd



Frank Swarbrick wrote:

By "heap pool" are you referring to using CEECRHP to create additional LE heaps?  I am 
doing that upon creation of the first "dynamic table" within a program.  (Just using the 
defaults of 0 for each of the CEECRHP parameters at the moment.)  Are you thinking it might make 
sense to use a separate heap for each table?  I have no idea what phi is (I took neither Greek nor 
higher mathematics), but I'll take a look at it.

I personally would like COBOL to have most of those "collection classes" you 
refer to.  But I'm not sure how user friendly these ILC wrappers you refer to would be.  
Feel free to develop them, though!  :-)  We don't have access to the C/C++ compiler, and 
thus I will not be playing around with that.

Frank




--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: COBOL 2014 dynamic capacity tables

2016-08-08 Thread Farley, Peter x23353
David,

Not as easy to write as you might think.  The COBOL LE environment and the 
C/C++ LE environment are very different.  Calling C/C++ runtime routines (other 
than the Metal C ones resident in the system, but even some of those require 
some memory-allocation initialization) requires that the C/C++ RTL environment 
be set up, but you do not want to do that for every call, so you have to have 
at least a name/token pair to save the (created once) C/C++ environment.

And probably impossible for any RTL routine that requires POSIX ON, though I 
don't suppose the data collection routines fall into that category.

I investigated this once upon a time and decided that with the amount of work 
required, it would probably be better to wait for IBM to provide it.  Maybe 
COBOL V6++ will do that.  :)

Peter

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of David Crayford
Sent: Monday, August 08, 2016 7:49 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: COBOL 2014 dynamic capacity tables

On 5/08/2016 11:11 PM, Frank Swarbrick wrote:
> That's good to know.  I searched the internet and found a page about 
> implementing dynamic arrays in C and he was using "double", but 1.5 also 
> sounds reasonable.  I wonder if perhaps there should be some sort of 
> ratcheting down as the number of rows gets very large.

The C++ runtime library on z/OS is a commercial offering from Dinkumware. 
Interestingly they use phi as the growth factor. A lot of the choices seem to 
be based on the properties of the memory allocator.  Modern allocators, 
including z/OS LE are configurable, so if you plan to use a growth factor of 2 
then you should look into using heap pools.

I have to admire what you're doing. I used to be an application programmer a long 
time ago, and COBOL seriously lacks the collection classes that we take for granted 
in modern languages.
It would be trivial to write a thin ILC wrapper around the C++ STL to enable 
COBOL to use the C++ container classes like vectors, linked lists, heaps, 
stacks, queues, maps and hash maps. I'm not sure how much demand there is for 
that on the mainframe though.

> 
> From: IBM Mainframe Discussion List  on 
> behalf of David Crayford 
> Sent: Thursday, August 4, 2016 8:41 PM
> To: IBM-MAIN@LISTSERV.UA.EDU
> Subject: Re: COBOL 2014 dynamic capacity tables
>
> On 4/08/2016 2:52 AM, Frank Swarbrick wrote:
>> Even in the case where it does increase the actual allocated capacity, it 
>> does not do it "one row at a time".  Rather, it doubles the current physical 
>> capacity and "reallocates" (using CEECZST) the storage to the new value.  
>> This may or may not actually cause LE storage control to reallocate out of a 
>> different area (copying the existing data from the old allocated area).  If 
>> there is enough room already it does nothing except increase the amount 
>> reserved for your allocation.  And even then, LE has already allocated a 
>> probably larger area prior to this from actual OS storage, depending on the 
>> values in the HEAP runtime option.
> Almost all the dynamic array implementations that I'm aware of, C++ 
> std::vector, Java ArrayList, Python lists, Lua tables, use a growth 
> factor of 1.5. Apparently it's a golden ratio.
--

This message and any attachments are intended only for the use of the addressee 
and may contain information that is privileged and confidential. If the reader 
of the message is not the intended recipient or an authorized representative of 
the intended recipient, you are hereby notified that any dissemination of this 
communication is strictly prohibited. If you have received this communication 
in error, please notify us immediately by e-mail and delete the message and any 
attachments from your system.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: COBOL 2014 dynamic capacity tables

2016-08-08 Thread Frank Swarbrick
By "heap pool" are you referring to using CEECRHP to create additional LE 
heaps?  I am doing that upon creation of the first "dynamic table" within a 
program.  (Just using the defaults of 0 for each of the CEECRHP parameters at 
the moment.)  Are you thinking it might make sense to use a separate heap for 
each table?  I have no idea what phi is (I took neither Greek nor higher 
mathematics), but I'll take a look at it.

I personally would like COBOL to have most of those "collection classes" you 
refer to.  But I'm not sure how user friendly these ILC wrappers you refer to 
would be.  Feel free to develop them, though!  :-)  We don't have access to the 
C/C++ compiler, and thus I will not be playing around with that.

Frank


From: IBM Mainframe Discussion List  on behalf of 
David Crayford 
Sent: Monday, August 8, 2016 5:49 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: COBOL 2014 dynamic capacity tables

On 5/08/2016 11:11 PM, Frank Swarbrick wrote:
> That's good to know.  I searched the internet and found a page about 
> implementing dynamic arrays in C and he was using "double", but 1.5 also 
> sounds reasonable.  I wonder if perhaps there should be some sort of 
> ratcheting down as the number of rows gets very large.

The C++ runtime library on z/OS is a commercial offering from
Dinkumware. Interestingly they use phi as the growth factor. A lot of
the choices seem to be based on the
properties of the memory allocator.  Modern allocators, including z/OS
LE are configurable, so if you plan to use a growth factor of 2 then you
should look into using heap pools.

I have to admire what you're doing. I used to be an application programmer
a long time ago, and COBOL seriously lacks the collection classes that we
take for granted in modern languages.
It would be trivial to write a thin ILC wrapper around the C++ STL to
enable COBOL to use the C++ container classes like vectors, linked
lists, heaps, stacks, queues, maps and hash maps. I'm not sure how much
demand there is for that on the mainframe though.

> 
> From: IBM Mainframe Discussion List  on behalf of 
> David Crayford 
> Sent: Thursday, August 4, 2016 8:41 PM
> To: IBM-MAIN@LISTSERV.UA.EDU
> Subject: Re: COBOL 2014 dynamic capacity tables
>
> On 4/08/2016 2:52 AM, Frank Swarbrick wrote:
>> Even in the case where it does increase the actual allocated capacity, it 
>> does not do it "one row at a time".  Rather, it doubles the current physical 
>> capacity and "reallocates" (using CEECZST) the storage to the new value.  
>> This may or may not actually cause LE storage control to reallocate out of a 
>> different area (copying the existing data from the old allocated area).  If 
>> there is enough room already it does nothing except increase the amount 
>> reserved for your allocation.  And even then, LE has already allocated a 
>> probably larger area prior to this from actual OS storage, depending on the 
>> values in the HEAP runtime option.
> Almost all the dynamic array implementations that I'm aware of, C++
> std::vector, Java ArrayList, Python lists, Lua tables, use a growth
> factor of 1.5. Apparently it's a golden ratio.
>
> --
> For IBM-MAIN subscribe / signoff / archive access instructions,
> send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
>

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN



Re: COBOL 2014 dynamic capacity tables

2016-08-08 Thread David Crayford

On 5/08/2016 11:11 PM, Frank Swarbrick wrote:

That's good to know.  I searched the internet and found a page about implementing dynamic 
arrays in C and he was using "double", but 1.5 also sounds reasonable.  I 
wonder if perhaps there should be some sort of ratcheting down as the number of rows gets 
very large.


The C++ runtime library on z/OS is a commercial offering from 
Dinkumware. Interestingly they use phi as the growth factor. A lot of 
the choices seem to be based on the
properties of the memory allocator.  Modern allocators, including z/OS 
LE are configurable, so if you plan to use a growth factor of 2 then you 
should look into using heap pools.


I have to admire what you're doing. I used to be an application programmer
a long time ago, and COBOL seriously lacks the collection classes that we
take for granted in modern languages.
It would be trivial to write a thin ILC wrapper around the C++ STL to
enable COBOL to use the C++ container classes like vectors, linked
lists, heaps, stacks, queues, maps and hash maps. I'm not sure how much
demand there is for that on the mainframe though.




From: IBM Mainframe Discussion List  on behalf of David 
Crayford 
Sent: Thursday, August 4, 2016 8:41 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: COBOL 2014 dynamic capacity tables

On 4/08/2016 2:52 AM, Frank Swarbrick wrote:

Even in the case where it does increase the actual allocated capacity, it does not do it "one 
row at a time".  Rather, it doubles the current physical capacity and "reallocates" 
(using CEECZST) the storage to the new value.  This may or may not actually cause LE storage 
control to reallocate out of a different area (copying the existing data from the old allocated 
area).  If there is enough room already it does nothing except increase the amount reserved for 
your allocation.  And even then, LE has already allocated a probably larger area prior to this from 
actual OS storage, depending on the values in the HEAP runtime option.

Almost all the dynamic array implementations that I'm aware of, C++
std::vector, Java ArrayList, Python lists, Lua tables, use a growth
factor of 1.5. Apparently it's a golden ratio.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN



Re: COBOL 2014 dynamic capacity tables

2016-08-05 Thread Mike Schwab
Define a file with a record containing the maximum array size for each table.
Open file, read record, set sizes to 10% over maximum, Close file.
Process main file.
If any maximum array size exceeded, Set new maximum, Open file for
output, Write record, Close file.
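[Editor's sketch] Mike's scheme, sketched in C for illustration only: the function names, the file format, and the plain-text file are invented here; the 10% headroom figure comes from the description above, and a COBOL shop would naturally use a sequential file and COBOL I/O instead.

```c
#include <stdio.h>
#include <stdlib.h>

/* Read the persisted high-water mark for a table and return it with
   10% headroom; fall back to a default if the file does not exist. */
size_t read_size_hint(const char *path, size_t fallback) {
    FILE *f = fopen(path, "r");
    size_t hint = fallback;
    if (f) {
        unsigned long v;
        if (fscanf(f, "%lu", &v) == 1) hint = (size_t)v;
        fclose(f);
    }
    return hint + hint / 10;      /* "10% over maximum" */
}

/* After processing: if the old maximum was exceeded, persist the new
   high-water mark for the next run. */
void save_size_hint(const char *path, size_t high_water) {
    FILE *f = fopen(path, "w");
    if (f) {
        fprintf(f, "%lu\n", (unsigned long)high_water);
        fclose(f);
    }
}
```

The appeal of this approach is that the table behaves like a static one on every run except the rare run that sets a new record, which pays the reallocation cost once and then updates the hint.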

On Fri, Aug 5, 2016 at 10:25 AM, Bill Woodger  wrote:
> Yes, good to know. I realised when I was writing my earlier comment that 
> there is not much downside to defining "extra" storage, it is what we do 
> already with a static table :-)
>
> However, I'd still go horses-for-courses. Make the increment for the unbounded 
> table a size which relates to the data and its usage. It'd avoid 50 unbounded 
> tables sucking up all the storage available when only 2/3 of the storage was 
> expected to be used.
>
> --
> For IBM-MAIN subscribe / signoff / archive access instructions,
> send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN



-- 
Mike A Schwab, Springfield IL USA
Where do Forest Rangers go to get away from it all?

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


COBOL 2014 dynamic capacity tables

2016-08-05 Thread Bill Woodger
Yes, good to know. I realised when I was writing my earlier comment that there 
is not much downside to defining "extra" storage, it is what we do already with 
a static table :-)

However, I'd still go horses-for-courses. Make the increment for the unbounded 
table a size which relates to the data and its usage. It'd avoid 50 unbounded 
tables sucking up all the storage available when only 2/3 of the storage was 
expected to be used.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: COBOL 2014 dynamic capacity tables

2016-08-05 Thread Frank Swarbrick
That's good to know.  I searched the internet and found a page about 
implementing dynamic arrays in C whose author was using "double", but 1.5 also 
sounds reasonable.  I wonder if perhaps there should be some sort of ratcheting 
down as the number of rows gets very large.
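[Editor's sketch] One way to picture the "ratcheting down" Frank muses about. The thresholds and factors below are invented for illustration and not taken from any real implementation:

```c
#include <stddef.h>

/* Grow aggressively while the table is small, more cautiously once it
   is large, so a huge table never over-commits by a large fraction. */
size_t next_capacity(size_t cap) {
    if (cap < 16)     return 16;              /* minimum allocation   */
    if (cap < 4096)   return cap * 2;         /* small: double        */
    if (cap < 262144) return cap + cap / 2;   /* medium: grow by 1.5x */
    return cap + cap / 8;                     /* large: grow by 1.125x */
}
```

The trade-off is classic: a smaller factor means more reallocations (and more copying) but a tighter bound on wasted space at the top end.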


From: IBM Mainframe Discussion List  on behalf of 
David Crayford 
Sent: Thursday, August 4, 2016 8:41 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: COBOL 2014 dynamic capacity tables

On 4/08/2016 2:52 AM, Frank Swarbrick wrote:
> Even in the case where it does increase the actual allocated capacity, it 
> does not do it "one row at a time".  Rather, it doubles the current physical 
> capacity and "reallocates" (using CEECZST) the storage to the new value.  
> This may or may not actually cause LE storage control to reallocate out of a 
> different area (copying the existing data from the old allocated area).  If 
> there is enough room already it does nothing except increase the amount 
> reserved for your allocation.  And even then, LE has already allocated a 
> probably larger area prior to this from actual OS storage, depending on the 
> values in the HEAP runtime option.

Almost all the dynamic array implementations that I'm aware of, C++
std::vector, Java ArrayList, Python lists, Lua tables, use a growth
factor of 1.5. Apparently it's a golden ratio.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN



Re: COBOL 2014 dynamic capacity tables

2016-08-04 Thread David Crayford

On 4/08/2016 2:52 AM, Frank Swarbrick wrote:

Even in the case where it does increase the actual allocated capacity, it does not do it "one 
row at a time".  Rather, it doubles the current physical capacity and "reallocates" 
(using CEECZST) the storage to the new value.  This may or may not actually cause LE storage 
control to reallocate out of a different area (copying the existing data from the old allocated 
area).  If there is enough room already it does nothing except increase the amount reserved for 
your allocation.  And even then, LE has already allocated a probably larger area prior to this from 
actual OS storage, depending on the values in the HEAP runtime option.


Almost all the dynamic array implementations that I'm aware of (C++ 
std::vector, Java ArrayList, Python lists, Lua tables) use a growth 
factor of 1.5. Apparently it's close to the golden ratio.
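[Editor's sketch] For illustration, a minimal C version of the 1.5x policy discussed above, with malloc/realloc standing in for CEEGTST/CEECZST; the type and function names are invented:

```c
#include <stdlib.h>

typedef struct {
    int   *rows;      /* table storage */
    size_t used;      /* rows in use */
    size_t capacity;  /* rows allocated */
} dyn_table;

/* Append one row, growing by 1.5x (minimum 8 rows) when full, rather
   than one row at a time.  Returns 0 on success, -1 on allocation
   failure.  Note realloc, like CEECZST, may or may not move the data. */
int dyn_append(dyn_table *t, int value) {
    if (t->used == t->capacity) {
        size_t newcap = t->capacity < 8 ? 8 : t->capacity + t->capacity / 2;
        int *p = realloc(t->rows, newcap * sizeof *p);
        if (!p) return -1;
        t->rows = p;
        t->capacity = newcap;
    }
    t->rows[t->used++] = value;
    return 0;
}
```

Because capacity grows geometrically, appending n rows costs only O(n) copying in total, which is the point of not reallocating "one row at a time".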


--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


COBOL 2014 dynamic capacity tables

2016-08-04 Thread Bill Woodger
Peter, it becomes a large and wide topic, then, and the discussion of the 
dynamic table gets lost in it. To discuss it, best it is another topic.

As you have made clear, it is not (always) possible to discern the stupid from 
the ignorant. For instance, there is/are the person(s) themselves, who, 
factually, are one, both or neither, and there is the perception (because they 
have done something stupid).

Business logic may require sophisticated solutions, but that does not stop me 
wanting to divorce the "technical" from the "business". When 
Problem-determining why Mrs Squirrel (ret.) received 300sqm of living turf 
instead of a potted plant I want to see the business logic, I want to be able 
to "read" the program without the technicalities "getting in the way". If it is 
already clear, or becomes clear, that it is a fault in the "technical", I want 
to see that in isolation (as much as possible), not to have to untangle it from 
business logic.

If you look on the internet, or in the right places (because LinkedIn is no 
longer searchable from "outside") you'll see (I hope) that I have publicised 
many things.

This particular topic is not about publicising techniques (until Frank's useful 
example) but about whether it is reasonable or not for IBM to reject or 
consider the RFE.

I think a considerable amount of what is new in COBOL 2014 will never make it 
to Enterprise COBOL. I think there was a convergence towards the 1985 Standard 
by IBM a few years ago (they threw out a whole load of IBM Extensions) but I 
think the trend now is towards divergence again. Unless or until we need to 
stop knowing what COBOL does, because of "charging", there's a lot that just 
doesn't make sense.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: COBOL 2014 dynamic capacity tables

2016-08-04 Thread Farley, Peter x23353
Bill,

Sorry, but I must humbly disagree with you.  The technical prowess (or lack 
thereof) of regular Mainframe programmers is positively on-topic for this issue.

In response to your hypothetical question, IMHO anyone who lights a match in 
that cellar counts as "stupid" and deserves the result that they get.  Natural 
selection is brutal, only the smart and aware survive.  Continuous 
self-education is a life skill that all need to learn for survival in a 
constantly changing world.

I have never believed in "hiding away" technical sophistication.  Management 
may or may not decide that they want such techniques to be used (I have 
personally had one such technique rejected by management in the last 
half-decade), but (again IMHO) not publicizing the sophisticated technique is 
never a good idea.  "Business logic" is not a homogeneous entity; sometimes 
business needs require sophisticated solutions, and programmers should be made 
aware of all the tools at their disposal.  Whether and when they are ultimately 
used is a different set of issues entirely.

To the topic at hand, I think I understand IBM's rejection of this part of the 
new COBOL standard at this point in time, for all the reasons described in this 
thread.  Many questions and not many good answers yet.

Frank S.,

You may be surprised at how many "ordinary" programmers will grasp the concepts 
and pitfalls of dynamic storage allocation if you only give them a clear 
exposition of the ideas and details of how to use it.  I have done that here 
more than once with reasonable success.  Plus once you introduce them to COBOL 
POINTER variables, many other techniques will become possible and easier to 
understand, including ways to avoid moving large chunks of storage around when 
using a pointer will do the job, thus improving performance in non-trivial ways 
by eliminating unnecessary MOVE's.

Try it, you may find that you (and they) like it.
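[Editor's sketch] Peter's point about eliminating unnecessary MOVEs can be pictured in C terms. This is illustrative only (the COBOL analogue would use POINTER items and SET ... TO ADDRESS OF); the names and the 32 KB record size are invented:

```c
#include <string.h>
#include <stdlib.h>

#define REC_SIZE 32768

typedef struct { char data[REC_SIZE]; } big_rec;

/* Costly: exchanging two records by value copies 3 x 32 KB. */
void swap_by_move(big_rec *a, big_rec *b) {
    big_rec tmp;
    memcpy(&tmp, a, sizeof tmp);
    memcpy(a, b, sizeof *a);
    memcpy(b, &tmp, sizeof *b);
}

/* Cheap: exchanging two pointers moves no record data at all. */
void swap_by_pointer(big_rec **a, big_rec **b) {
    big_rec *tmp = *a;
    *a = *b;
    *b = tmp;
}
```

The same trick is what makes pointer-based table handling attractive: the "big" data stays put and only small addresses move.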

Regards,

Peter

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Bill Woodger
Sent: Wednesday, August 03, 2016 7:16 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: COBOL 2014 dynamic capacity tables

Well, Peter, there is much in what you say, but be careful of quotes.

"Mmm... I smell gas in this dark cellar, has anyone got a match...?" - was the 
person ignorant of the rapid combustion of said gas when a flame is introduced, 
or just stupid? Same question for the match provider, and the others with them. 
Given the chance to question the fleeing ghosts, you'd probably hear "we needed 
light, we've always done it that way".

How to improve Mainframe COBOL programmers is way off this topic.

Yes, explain, but also hide it away. I normally dislike the idea that "then 
some magic happens" in programming, but for the out-of-the-ordinary which is 
not part of the business logic, stick it in a sub-program (can be embedded 
these days, and included within a copybook, and the nice compiler will even be 
able to consider it for "inlining") so you may be able to have your cake and eat 
it.
--




--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


COBOL 2014 dynamic capacity tables

2016-08-04 Thread Bill Woodger
The key to your code being useful *and* performant is the knowledge to allocate 
new "spare" slots in chunks which are neither too small (more shifting of 
data) nor too large (actually, much less of a problem). 

If you have a native COBOL way to do that, fine.

I have a table with 400,000 entries. I'd like the program to survive if 400,003 
entries happen to turn up, I'd prefer not to shift 400,000 entries even three 
times, but it's not too bad. But what I don't want is it to allow for 600,000 
entries and keep coming, because I know that is wrong.

Now, if I want to set the whole thing up in a "parameter file", so that I can 
generalise the use... how would I do that in native COBOL? Ever?

So, in answer to your question, in this specific instance, No.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: COBOL 2014 dynamic capacity tables

2016-08-03 Thread Frank Swarbrick
And wouldn't it be even nicer if you didn't need to call subroutines, but could 
just use features of the language itself?!  :-)

Frank


From: IBM Mainframe Discussion List  on behalf of 
Bill Woodger 
Sent: Wednesday, August 3, 2016 5:15 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: COBOL 2014 dynamic capacity tables

Well, Peter, there is much in what you say, but be careful of quotes.

"Mmm... I smell gas in this dark cellar, has anyone got a match...?" - was the 
person ignorant of the rapid combustion of said gas when a flame is introduced, 
or just stupid? Same question for the match provider, and the others with them. 
Given the chance to question the fleeing ghosts, you'd probably hear "we needed 
light, we've always done it that way".

How to improve Mainframe COBOL programmers is way off this topic.

Yes, explain, but also hide it away. I normally dislike the idea that "then 
some magic happens" in programming, but for the out-of-the-ordinary which is 
not part of the business logic, stick it in a sub-program (can be embedded 
these days, and included within a copybook, and the nice compiler will even be 
able to consider it for "inlining") so you may be able to have your cake and eat 
it.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN



Re: COBOL 2014 dynamic capacity tables

2016-08-03 Thread Frank Swarbrick
I don't look down on "mere programmers", and in fact am often quite offended 
when people on this list appear to be disparaging them.  But I do have to be 
realistic to know what most COBOL programmers (at least where I work) are not 
"techies", but rather people who "implement automated business logic".  And 
indeed there is nothing wrong with this, most are quite good at it, even if I 
often cringe at the resulting code.  But any time I come up with a solution to 
something that "seems too technical", it's generally not implemented.


My "roll my own" dynamic tables have not yet been presented in-house, but along 
with some copybooks I've developed to make it even more "developer friendly", I 
have a good feeling that people will respond well to it (especially considering 
how often they are called in the middle of the night because a "table ran out 
of room"!).  We shall see.


As much as I am happy with my solution, I would still prefer a COBOL language 
solution.


Frank


From: IBM Mainframe Discussion List  on behalf of 
Farley, Peter x23353 
Sent: Wednesday, August 3, 2016 4:03 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: COBOL 2014 dynamic capacity tables

Now, now gentlemen, let's not look down on mere programmers (I resemble that 
remark!)  -- Remember the saying from author Robert Heinlein's "The Moon is a 
Harsh Mistress":

"Ignorance is curable, only stupidity is fatal."

If employers have not provided sufficient training time and dollars, what else 
but ignorance do you expect?

Try issuing a technical note or two to the ignorant telling them about the 
wonderful things the system can do for them (best if the notes include examples 
the ignoranti can try for themselves of course).  We do that here (well, at 
least Victor and I do) and call them DYKT's (Did You Know That . . . ).  The 
response is sometimes quite positive.

Peter

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Frank Swarbrick
Sent: Wednesday, August 03, 2016 3:00 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: COBOL 2014 dynamic capacity tables

Because your average COBOL programmer, in my experience, knows "bupkis" about 
dynamic memory allocation.  Perhaps I am wrong, but as far as I know I am the 
only one in our shop who ever uses it.  As for your concern about serializing 
storage between multiple concurrent tasks, I don't know of situations that 
would require this.  This is intended for use within a single run-unit.  But 
even if a table was shared between multiple tasks (within a CICS region I can 
only imagine) you have this concern with or without "dynamic tables".  You'd 
have to serialize updates in either case, so I don't see why dynamic capacity 
tables would cause any additional heartache.


Frank


From: IBM Mainframe Discussion List  on behalf of 
Victor Gil 
Sent: Wednesday, August 3, 2016 8:30 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: COBOL 2014 dynamic capacity tables

I am not sure why you would want the compiler to handle such a general case of 
maintaining a dynamic-size table, while this can be easily programmed by using 
the "Get heap storage" calls [LE function CEEGTST] and even encapsulated in a 
callable service.

We do this kind of dynamic table management all the time, under CICS [i.e. 
using EXEC CICS GETMAIN/FREEMAIN calls] and the main problem here  is how to 
safely re-size a table which has reached its allocated capacity. This shouldn't 
be an issue in batch where the storage is owned by a single task because it can 
just wait for the call to come back while the subroutine allocates a new larger 
size table and populates it from the old one.

So if you ask the compiler to perform such a function it would have to know how 
to serialize storage access between multiple concurrent tasks and, moreover, 
let the in-flight transactions keep accessing the old table while the new 
one is being allocated and populated.  Pretty difficult for a general case.

-Victor-

--
Last year I submitted an RFE 
(https://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe&CR_ID=73693) 
that Enterprise COBOL be enhanced to support dynamic capacity tables as defined 
in the COBOL 2014 ISO standard.  It was declined: "Thank you for bringing this 
RFE to our attention. But we believe that if implemented, it will be really 
slow and error prone. Some clients may have restrictions on using this. This 
is not in our multi-year strategy. Hence, we'll have to reject this RFE."


Since the year has passed I have resubmitted the RFE, now with the following 
comments that I hope might address IBM's concerns:


"This RFE was declined b

COBOL 2014 dynamic capacity tables

2016-08-03 Thread Bill Woodger
Well, Peter, there is much in what you say, but be careful of quotes.

"Mmm... I smell gas in this dark cellar, has anyone got a match...?" - was the 
person ignorant of the rapid combustion of said gas when a flame is introduced, 
or just stupid? Same question for the match provider, and the others with them. 
Given the chance to question the fleeing ghosts, you'd probably hear "we needed 
light, we've always done it that way".

How to improve Mainframe COBOL programmers is way off this topic.

Yes, explain, but also hide it away. I normally dislike the idea that "then 
some magic happens" in programming, but for the out-of-the-ordinary which is 
not part of the business logic, stick it in a sub-program (can be embedded 
these days, and included within a copybook, and the nice compiler will even be 
able to consider it for "inlining") so you may be able to have your cake and eat 
it. 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: COBOL 2014 dynamic capacity tables

2016-08-03 Thread Farley, Peter x23353
Now, now gentlemen, let's not look down on mere programmers (I resemble that 
remark!)  -- Remember the saying from author Robert Heinlein's "The Moon is a 
Harsh Mistress":

"Ignorance is curable, only stupidity is fatal."

If employers have not provided sufficient training time and dollars, what else 
but ignorance do you expect?

Try issuing a technical note or two to the ignorant telling them about the 
wonderful things the system can do for them (best if the notes include examples 
the ignoranti can try for themselves of course).  We do that here (well, at 
least Victor and I do) and call them DYKT's (Did You Know That . . . ).  The 
response is sometimes quite positive.

Peter

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Frank Swarbrick
Sent: Wednesday, August 03, 2016 3:00 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: COBOL 2014 dynamic capacity tables

Because your average COBOL programmer, in my experience, knows "bupkis" about 
dynamic memory allocation.  Perhaps I am wrong, but as far as I know I am the 
only one in our shop who ever uses it.  As for your concern about serializing 
storage between multiple concurrent tasks, I don't know of situations that 
would require this.  This is intended for use within a single run-unit.  But 
even if a table was shared between multiple tasks (within a CICS region I can 
only imagine) you have this concern with our without "dynamic tables".  You'd 
have to serialize updates in either case, so I don't see why dynamic capacity 
tables would cause any additional heartache.


Frank


From: IBM Mainframe Discussion List  on behalf of 
Victor Gil 
Sent: Wednesday, August 3, 2016 8:30 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: COBOL 2014 dynamic capacity tables

I am not sure why you would want the compiler to handle such a general case of 
maintaining a dynamic-size table, when this can be easily programmed by using 
the "Get heap storage" calls [LE function CEEGTST] and even encapsulated in a 
callable service.

We do this kind of dynamic table management all the time under CICS [i.e. 
using EXEC CICS GETMAIN/FREEMAIN calls], and the main problem here is how to 
safely re-size a table which has reached its allocated capacity. This shouldn't 
be an issue in batch, where the storage is owned by a single task, because it 
can just wait for the call to come back while the subroutine allocates a new, 
larger table and populates it from the old one.

So if you ask the compiler to perform such a function, it would have to know 
how to serialize storage access between multiple concurrent tasks and, 
moreover, let the in-flight transactions keep accessing the old table while the 
new one is being allocated and populated.  Pretty difficult for a general case.

-Victor-

--
Last year I submitted an RFE 
(https://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe&CR_ID=73693) 
that Enterprise COBOL be enhanced to support dynamic capacity tables as defined 
in the COBOL 2014 ISO standard.  It was declined: "Thank you for bringing this 
RFE to our attention. But we believe that if implemented, it will be really 
slow and error prone. Some clients may have restrictions on using this. This 
is not in our multi-year strategy. Hence, we'll have to reject this RFE."


Since the year has passed I have resubmitted the RFE, now with the following 
comments that I hope might address IBM's concerns:


"This RFE was declined based on concerns about performance.  I would like to 
submit the following possibilities for consideration:

Would IBM be more amenable to a partial implementation?  While you did not 
indicate in the declined RFE what performance issues you foresee, I can 
hazard some guesses.  One is the requirement in the standard at 8.5.1.6.3.2 
Implicit changes in capacity: "When a data item in a dynamic-capacity table is 
referenced as a receiving item and the value of the subscript exceeds the 
current capacity of the table, a new element is automatically created and the 
capacity of the table is increased to the value given by the subscript. If this 
new capacity is more than one greater than the previous capacity, new 
intermediate occurrences are implicitly created."  I believe this would require 
a runtime check in each of these cases to see if the subscript is greater than 
the current capacity, and if so to increase the current capacity.

The current capacity can also be increased explicitly.  8.5.1.6.3.3 states 
"If the OCCURS clause specifies a CAPACITY phrase, the capacity of the 
dynamic-capacity table may be increased or decreased explicitly by means of the 
dynamic-capacity-table format SET statement."

COBOL 2014 dynamic capacity tables

2016-08-03 Thread Bill Woodger
I think that is the correct way to do it, Frank. The chunk-size of "20" is 
obviously determined by whatever best fits the data use. I'd go for the old 
"table size you expect, and then a bit", but do the "and then a bit" by 
extending it.

For batch it doesn't matter, but for other usage you do need to be aware that 
the address of the start of the table may change. If "something else" has the 
old address (when there is an old address) then something is going to behave 
less than optimally.

The difference between your example and what you mentioned earlier is that the 
storage is entirely contiguous, and, since it is "defined" in the LINKAGE 
SECTION (a mapping of storage, with storage acquired for it), nothing else 
gets upset.

If you remember from a while ago I thought the UNBOUNDED *was* IBM's 
implementation of dynamic tables.

Here's another example of what I presume your code is doing, generally 
(although you have the better idea with the chunks): 
http://enterprisesystemsmedia.com/article/leveraging-cobol-language-environment-services

The actual dynamic tables (or the possibilities) described in the current 
Standard I regard as a nightmare. Even if we assume that all the dynamic table 
storage is consecutive: you can get them nested, and you can mix them with 
fixed or ODO tables under the same group item. Yes, you could implicitly 
untangle that, but at what cost? What about REDEFINES? Yes, you can ban it, 
like for an ODO, but for an ODO you can still achieve REDEFINES. Try to do that 
with the proposed junk, and there's a mess.

IBM has just rewritten the compiler, including work on WORKING-STORAGE and 
LOCAL-STORAGE. Anything with a dynamic capacity table would have to have its 
own storage allocated, separate from the WORKING-STORAGE/LOCAL-STORAGE 
allocation (so you end up with multiples of each).

Let's not get into how programmers may (ab)use it. 

Nor sites which "ban" its use.

So, since UNBOUNDED does what you want in this example, you need a very strong 
case to get dynamic tables addressed, and I don't think such a case exists, 
given the inherent drawbacks.

If it doesn't fit into IBM's plans, and they stated it doesn't when rejecting 
the RFE the first time, then you're going to get it rejected again.

I don't doubt that it would be great fun to play with, outside the Mainframe, 
and really welcome anything you can produce for GnuCOBOL, but I'm not sure 
you're going to get anywhere with Enterprise COBOL.



Re: COBOL 2014 dynamic capacity tables

2016-08-03 Thread Frank Swarbrick
#1) One should always code for thread safety regardless...

#2) yes, I did 'code the required service'.  Which doesn't make the requirement 
superfluous by any means.


From: IBM Mainframe Discussion List  on behalf of 
Victor Gil 
Sent: Wednesday, August 3, 2016 2:06 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: COBOL 2014 dynamic capacity tables

If the "update a table row" logic has no embedded CICS commands it won't be 
interrupted by CICS, so the updater will have no competition, thus no 
serialization is required [I am talking about "regular" CICS tasks, dispatched 
on the QR TCB, not the fancy ones running on "open TCBs"].
However, serialization is required during the table re-allocation, as the logic 
does need to call CICS for memory re-allocation [which may or may not cause the 
task to get re-dispatched].

Anyway, looks like you have already coded the required service.

-Victor-

---
Because your average COBOL programmer, in my experience, knows "bupkis" about 
dynamic memory allocation.  Perhaps I am wrong, but as far as I know I am the 
only one in our shop who ever uses it.  As for your concern about serializing 
storage between multiple concurrent tasks, I don't know of situations that 
would require this.  This is intended for use within a single run-unit.  But 
even if a table was shared between multiple tasks (within a CICS region, I can 
only imagine) you have this concern with or without "dynamic tables".  You'd 
have to serialize updates in either case, so I don't see why dynamic capacity 
tables would cause any additional heartache.


Frank



Re: COBOL 2014 dynamic capacity tables

2016-08-03 Thread Victor Gil
If the "update a table row" logic has no embedded CICS commands it won't be 
interrupted by CICS, so the updater will have no competition, thus no 
serialization is required [I am talking about "regular" CICS tasks, dispatched 
on the QR TCB, not the fancy ones running on "open TCBs"].
However, serialization is required during the table re-allocation, as the logic 
does need to call CICS for memory re-allocation [which may or may not cause the 
task to get re-dispatched].
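
The re-allocation step described above might look roughly like this under 
CICS. This is only a sketch: all the data names are invented, a real routine 
needs RESP checking and the serialization discussed above, and the copy 
assumes the row data is DISPLAY-usage so group reference modification is valid.

```cobol
      *    Acquire a larger area; the task may lose control here,
      *    so readers must keep using the old table until the swap.
           EXEC CICS GETMAIN SET (NEW-PTR)
                     FLENGTH (NEW-BYTES)
                     RESP (WS-RESP)
           END-EXEC
           SET ADDRESS OF NEW-TABLE TO NEW-PTR
      *    Copy the populated rows from the old table.
           MOVE OLD-TABLE (1:OLD-BYTES) TO NEW-TABLE (1:OLD-BYTES)
      *    Publish the new address, then release the old storage.
           SET TABLE-ANCHOR TO NEW-PTR
           EXEC CICS FREEMAIN DATAPOINTER (OLD-PTR) END-EXEC
```

The ordering matters: the new address is published only after the copy 
completes, and the old area is freed last, so in-flight readers never see a 
half-built table.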

Anyway, looks like you have already coded the required service.

-Victor- 

---
Because your average COBOL programmer, in my experience, knows "bupkis" about 
dynamic memory allocation.  Perhaps I am wrong, but as far as I know I am the 
only one in our shop who ever uses it.  As for your concern about serializing 
storage between multiple concurrent tasks, I don't know of situations that 
would require this.  This is intended for use within a single run-unit.  But 
even if a table was shared between multiple tasks (within a CICS region, I can 
only imagine) you have this concern with or without "dynamic tables".  You'd 
have to serialize updates in either case, so I don't see why dynamic capacity 
tables would cause any additional heartache.


Frank



Re: COBOL 2014 dynamic capacity tables

2016-08-03 Thread Frank Swarbrick
Because your average COBOL programmer, in my experience, knows "bupkis" about 
dynamic memory allocation.  Perhaps I am wrong, but as far as I know I am the 
only one in our shop who ever uses it.  As for your concern about serializing 
storage between multiple concurrent tasks, I don't know of situations that 
would require this.  This is intended for use within a single run-unit.  But 
even if a table was shared between multiple tasks (within a CICS region, I can 
only imagine) you have this concern with or without "dynamic tables".  You'd 
have to serialize updates in either case, so I don't see why dynamic capacity 
tables would cause any additional heartache.


Frank


From: IBM Mainframe Discussion List  on behalf of 
Victor Gil 
Sent: Wednesday, August 3, 2016 8:30 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: COBOL 2014 dynamic capacity tables

I am not sure why you would want the compiler to handle such a general case of 
maintaining a dynamic-size table, when this can be easily programmed by using 
the "Get heap storage" calls [LE function CEEGTST] and even encapsulated in a 
callable service.

We do this kind of dynamic table management all the time under CICS [i.e. 
using EXEC CICS GETMAIN/FREEMAIN calls], and the main problem here is how to 
safely re-size a table which has reached its allocated capacity. This shouldn't 
be an issue in batch, where the storage is owned by a single task, because it 
can just wait for the call to come back while the subroutine allocates a new, 
larger table and populates it from the old one.

So if you ask the compiler to perform such a function, it would have to know 
how to serialize storage access between multiple concurrent tasks and, 
moreover, let the in-flight transactions keep accessing the old table while the 
new one is being allocated and populated.  Pretty difficult for a general case.

-Victor-

--
Last year I submitted an RFE 
(https://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe&CR_ID=73693) 
that Enterprise COBOL be enhanced to support dynamic capacity tables as defined 
in the COBOL 2014 ISO standard.  It was declined: "Thank you for bringing this 
RFE to our attention. But we believe that if implemented, it will be really 
slow and error prone. Some clients may have restrictions on using this. This 
is not in our multi-year strategy. Hence, we'll have to reject this RFE."


Since the year has passed I have resubmitted the RFE, now with the following 
comments that I hope might address IBM's concerns:


"This RFE was declined based on concerns about performance.  I would like to 
submit the following possibilities for consideration:

Would IBM be more amenable to a partial implementation?  While you did not 
indicate in the declined RFE what performance issues you foresee, I can 
hazard some guesses.  One is the requirement in the standard at 8.5.1.6.3.2 
Implicit changes in capacity: "When a data item in a dynamic-capacity table is 
referenced as a receiving item and the value of the subscript exceeds the 
current capacity of the table, a new element is automatically created and the 
capacity of the table is increased to the value given by the subscript. If this 
new capacity is more than one greater than the previous capacity, new 
intermediate occurrences are implicitly created."  I believe this would require 
a runtime check in each of these cases to see if the subscript is greater than 
the current capacity, and if so to increase the current capacity.

The current capacity can also be increased explicitly.  8.5.1.6.3.3 states 
"If the OCCURS clause specifies a CAPACITY phrase, the capacity of the 
dynamic-capacity table may be increased or decreased explicitly by means of the 
dynamic-capacity-table format SET statement."  The "implicit changes" was one 
of the arguments I've seen against implementation of dynamic capacity tables, 
with the concern that one might have a bug that set a subscript to an incorrect 
and possibly very large value, which would cause the table to be increased to 
that value "improperly".

So why not eliminate that requirement as part of the implementation?  I can't 
see any problem with a simple "SET tbl-capacity UP BY 1" when intentionally 
adding a new row to the table.


One other feature I can see that could be bypassed, at least initially, would 
be the behavior of the MOVE of a group containing a dynamic capacity table.  
Because a d.c. table would most likely not be "physically contiguous" with the 
rest of the items in the group, a MOVE of the entire group would at the very 
least be "less efficient".  So how about a restriction that you can't do a 
group MOVE where the group contains one or more dynamic capacity tables?

Re: COBOL 2014 dynamic capacity tables

2016-08-03 Thread Frank Swarbrick
I intend to post the source code for these routines to the "IBM COBOL Cafe" as 
soon as I can get a good "test case" that is not an actual production program!  
:-)

I'm sure most people have not read this far, but if you have I welcome any 
comments.

None of this eliminates my desire for IBM to implement language support for 
dynamic capacity tables.  I just felt I'd waited long enough that I might as 
well develop my own interim solution.

Frank


From: IBM Mainframe Discussion List  on behalf of 
Bill Woodger 
Sent: Wednesday, August 3, 2016 2:08 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: COBOL 2014 dynamic capacity tables

If you are expecting this type of table to be non-contiguous in any way, then 
that breaks things.

You couldn't use REDEFINES (you could "ban" it. No you can't: REDEFINES is 
banned for OCCURS DEPENDING ON, yet I use it a lot).

OK, for SEARCH, you could have special versions of the library routines (great, 
two places to maintain stuff). But what about INSPECT, STRING, UNSTRING? Ban 
those as well.

What about the code

an-entry ( a-subscript + n )

where + is either + or -, and n is a literal numeric value. OK, ban it.

What about CALL? Ban it.

What about anything except the particulars of the dynamic capacity table?

So, forget non-contiguous storage. So make the performance issue that of 
keeping the table contiguous, implicitly. Like with any acquiring of storage, 
"adding to it" can be a heavy process. An implicit process. Which newly-trained 
CS people are not used to either knowing about or being concerned about.

Where in the DATA DIVISION would it be? LINKAGE SECTION again (non-Standard). 
WORKING-STORAGE or LOCAL-STORAGE (breaks how they work currently)?

I don't think everything (by a long stretch) in the current COBOL Standard 
(2014, replacing 2002) is a "good fit" for what we (that's me saying "I" and 
hoping not to look entirely isolated) expect for a Mainframe COBOL.

Also, performance was not the only reason. There is "error prone" and also 
these: "Some clients may have restrictions on using this. This is not in our 
multi-year strategy."

It would be good, but perhaps not possible, if IBM were to outline their 
multi-year strategy for COBOL. Avoids the rejection of RFEs which stood no 
chance for that reason.



Re: COBOL 2014 dynamic capacity tables

2016-08-03 Thread Victor Gil
I am not sure why you would want the compiler to handle such a general case of 
maintaining a dynamic-size table, when this can be easily programmed by using 
the "Get heap storage" calls [LE function CEEGTST] and even encapsulated in a 
callable service.

We do this kind of dynamic table management all the time under CICS [i.e. 
using EXEC CICS GETMAIN/FREEMAIN calls], and the main problem here is how to 
safely re-size a table which has reached its allocated capacity. This shouldn't 
be an issue in batch, where the storage is owned by a single task, because it 
can just wait for the call to come back while the subroutine allocates a new, 
larger table and populates it from the old one.

So if you ask the compiler to perform such a function, it would have to know 
how to serialize storage access between multiple concurrent tasks and, 
moreover, let the in-flight transactions keep accessing the old table while the 
new one is being allocated and populated.  Pretty difficult for a general case.
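
In batch under LE, that allocate/re-size pattern can be sketched with the 
CEEGTST (get heap storage) and CEECZST (re-size heap storage) callable 
services. This is only an illustration: the data names are invented, OCCURS 
UNBOUNDED needs Enterprise COBOL V5 or later, and real code must test the 
feedback code after each call.

```cobol
       WORKING-STORAGE SECTION.
       01  HEAP-ID        PIC S9(9) COMP VALUE 0.   *> 0 = initial heap
       01  TBL-BYTES      PIC S9(9) COMP.
       01  TBL-CAPACITY   PIC S9(9) COMP VALUE 1000.
       01  TBL-PTR        POINTER.
       01  FC             PIC X(12).                *> LE feedback code
       LINKAGE SECTION.
       01  CUST-TABLE.
           05  CUST-ROW OCCURS 1 TO UNBOUNDED TIMES
                   DEPENDING ON TBL-CAPACITY.
               10  CUST-ID    PIC 9(9)  COMP.
               10  CUST-NAME  PIC X(40).
       PROCEDURE DIVISION.
      *    Initial allocation from the LE heap.
           COMPUTE TBL-BYTES = TBL-CAPACITY
                   * FUNCTION LENGTH (CUST-ROW (1))
           CALL 'CEEGTST' USING HEAP-ID TBL-BYTES TBL-PTR FC
           SET ADDRESS OF CUST-TABLE TO TBL-PTR
      *    ...later, when the table is full, double its capacity.
      *    CEECZST preserves the existing contents but may move the
      *    block, so the table must be re-addressed afterwards.
           COMPUTE TBL-CAPACITY = TBL-CAPACITY * 2
           COMPUTE TBL-BYTES = TBL-CAPACITY
                   * FUNCTION LENGTH (CUST-ROW (1))
           CALL 'CEECZST' USING TBL-PTR TBL-BYTES FC
           SET ADDRESS OF CUST-TABLE TO TBL-PTR
```

Wrapping these calls in one callable service, as suggested above, keeps the 
heap "magic" out of the business logic.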

-Victor-

--
Last year I submitted an RFE 
(https://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe&CR_ID=73693) 
that Enterprise COBOL be enhanced to support dynamic capacity tables as defined 
in the COBOL 2014 ISO standard.  It was declined: "Thank you for bringing this 
RFE to our attention. But we believe that if implemented, it will be really 
slow and error prone. Some clients may have restrictions on using this. This 
is not in our multi-year strategy. Hence, we'll have to reject this RFE."


Since the year has passed I have resubmitted the RFE, now with the following 
comments that I hope might address IBM's concerns:


"This RFE was declined based on concerns about performance.  I would like to 
submit the following possibilities for consideration:

Would IBM be more amenable to a partial implementation?  While you did not 
indicate in the declined RFE what performance issues you foresee, I can 
hazard some guesses.  One is the requirement in the standard at 8.5.1.6.3.2 
Implicit changes in capacity: "When a data item in a dynamic-capacity table is 
referenced as a receiving item and the value of the subscript exceeds the 
current capacity of the table, a new element is automatically created and the 
capacity of the table is increased to the value given by the subscript. If this 
new capacity is more than one greater than the previous capacity, new 
intermediate occurrences are implicitly created."  I believe this would require 
a runtime check in each of these cases to see if the subscript is greater than 
the current capacity, and if so to increase the current capacity.

The current capacity can also be increased explicitly.  8.5.1.6.3.3 states 
"If the OCCURS clause specifies a CAPACITY phrase, the capacity of the 
dynamic-capacity table may be increased or decreased explicitly by means of the 
dynamic-capacity-table format SET statement."  The "implicit changes" was one 
of the arguments I've seen against implementation of dynamic capacity tables, 
with the concern that one might have a bug that set a subscript to an incorrect 
and possibly very large value, which would cause the table to be increased to 
that value "improperly".

So why not eliminate that requirement as part of the implementation?  I can't 
see any problem with a simple "SET tbl-capacity UP BY 1" when intentionally 
adding a new row to the table.


One other feature I can see that could be bypassed, at least initially, would 
be the behavior of the MOVE of a group containing a dynamic capacity table.  
Because a d.c. table would most likely not be "physically contiguous" with the 
rest of the items in the group, a MOVE of the entire group would at the very 
least be "less efficient".  So how about a restriction that you can't do a 
group MOVE where the group contains one or more dynamic capacity tables?  I 
don't see too many use cases where this would cause an issue, and if we can 
get implementation of the most important features, that is better than nothing 
at all?"


I still believe this would be one of the most useful enhancements that COBOL 
could have.  Please vote if you agree.  (There is also a similar SHARE 
requirement.)


My resubmitted RFE: 
https://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe&CR_ID=92391

Share Requirement: http://www.share.org/index.php?mo=is&op=vi&iid=67&type=23


Frank Swarbrick

Principal Analyst, Mainframe Applications

FirstBank -- Lakewood, CO USA



COBOL 2014 dynamic capacity tables

2016-08-03 Thread Bill Woodger
If you are expecting this type of table to be non-contiguous in any way, then 
that breaks things.

You couldn't use REDEFINES (you could "ban" it. No you can't: REDEFINES is 
banned for OCCURS DEPENDING ON, yet I use it a lot).

OK, for SEARCH, you could have special versions of the library routines (great, 
two places to maintain stuff). But what about INSPECT, STRING, UNSTRING? Ban 
those as well.

What about the code 

an-entry ( a-subscript + n ) 

where + is either + or -, and n is a literal numeric value. OK, ban it.

What about CALL? Ban it.

What about anything except the particulars of the dynamic capacity table?

So, forget non-contiguous storage. So make the performance issue that of 
keeping the table contiguous, implicitly. Like with any acquiring of storage, 
"adding to it" can be a heavy process. An implicit process. Which newly-trained 
CS people are not used to either knowing about or being concerned about.

Where in the DATA DIVISION would it be? LINKAGE SECTION again (non-Standard). 
WORKING-STORAGE or LOCAL-STORAGE (breaks how they work currently)? 

I don't think everything (by a long stretch) in the current COBOL Standard 
(2014, replacing 2002) is a "good fit" for what we (that's me saying "I" and 
hoping not to look entirely isolated) expect for a Mainframe COBOL.

Also, performance was not the only reason. There is "error prone" and also 
these: "Some clients may have restrictions on using this. This is not in our 
multi-year strategy."

It would be good, but perhaps not possible, if IBM were to outline their 
multi-year strategy for COBOL. Avoids the rejection of RFEs which stood no 
chance for that reason.



Re: COBOL 2014 dynamic capacity tables

2016-08-02 Thread Windt, W.K.F. van der (Fred)
You got my vote!

Fred!






COBOL 2014 dynamic capacity tables

2016-08-02 Thread Frank Swarbrick
Last year I submitted an RFE 
(https://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe&CR_ID=73693) 
that Enterprise COBOL be enhanced to support dynamic capacity tables as defined 
in the COBOL 2014 ISO standard.  It was declined: "Thank you for bringing this 
RFE to our attention. But we believe that if implemented, it will be really 
slow and error prone. Some clients may have restrictions on using this. This 
is not in our multi-year strategy. Hence, we'll have to reject this RFE."


Since the year has passed I have resubmitted the RFE, now with the following 
comments that I hope might address IBM's concerns:


"This RFE was declined based on concerns about performance.  I would like to 
submit the following possibilities for consideration:

Would IBM be more amenable to a partial implementation?  While you did not 
indicate in the declined RFE what performance issues you foresee, I can 
hazard some guesses.  One is the requirement in the standard at 8.5.1.6.3.2 
Implicit changes in capacity: "When a data item in a dynamic-capacity table is 
referenced as a receiving item and the value of the subscript exceeds the 
current capacity of the table, a new element is automatically created and the 
capacity of the table is increased to the value given by the subscript. If this 
new capacity is more than one greater than the previous capacity, new 
intermediate occurrences are implicitly created."  I believe this would require 
a runtime check in each of these cases to see if the subscript is greater than 
the current capacity, and if so to increase the current capacity.

The current capacity can also be increased explicitly.  8.5.1.6.3.3 states 
"If the OCCURS clause specifies a CAPACITY phrase, the capacity of the 
dynamic-capacity table may be increased or decreased explicitly by means of the 
dynamic-capacity-table format SET statement."  The "implicit changes" was one 
of the arguments I've seen against implementation of dynamic capacity tables, 
with the concern that one might have a bug that set a subscript to an incorrect 
and possibly very large value, which would cause the table to be increased to 
that value "improperly".

So why not eliminate that requirement as part of the implementation?  I can't 
see any problem with a simple "SET tbl-capacity UP BY 1" when intentionally 
adding a new row to the table.


One other feature I can see that could be bypassed, at least initially, would 
be the behavior of the MOVE of a group containing a dynamic capacity table.  
Because a d.c. table would most likely not be "physically contiguous" with the 
rest of the items in the group, a MOVE of the entire group would at the very 
least be "less efficient".  So how about a restriction that you can't do a 
group MOVE where the group contains one or more dynamic capacity tables?  I 
don't see too many use cases where this would cause an issue, and if we can 
get implementation of the most important features, that is better than nothing 
at all?"
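
For readers who have not seen the 2014 syntax, my reading of the standard is 
that a dynamic-capacity table with an explicit SET would look something like 
the sketch below. This is not compilable in Enterprise COBOL today, and the 
data names are of course invented.

```cobol
       01  CUST-TBL.
           05  CUST-ROW OCCURS DYNAMIC CAPACITY CUST-CAP
                   INITIALIZED.
               10  CUST-ID    PIC 9(9).
               10  CUST-NAME  PIC X(40).
      * ...
      *    Grow by exactly one row before storing into it,
      *    avoiding any reliance on implicit capacity changes.
           SET CUST-CAP UP BY 1
           MOVE WS-ID   TO CUST-ID   (CUST-CAP)
           MOVE WS-NAME TO CUST-NAME (CUST-CAP)
```

With the explicit SET, the runtime subscript-versus-capacity check that 
concerned IBM would not be needed on every receiving-item reference.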


I still believe this would be one of the most useful enhancements that COBOL 
could have.  Please vote if you agree.  (There is also a similar SHARE 
requirement.)


My resubmitted RFE: 
https://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe&CR_ID=92391

Share Requirement: http://www.share.org/index.php?mo=is&op=vi&iid=67&type=23


Frank Swarbrick

Principal Analyst, Mainframe Applications

FirstBank -- Lakewood, CO USA
