That's good to know.  I searched the internet and found a page about
implementing dynamic arrays in C whose author doubled the capacity on each
growth, but 1.5 also sounds reasonable.  I wonder whether there should be some
sort of ratcheting down of the growth factor as the number of rows gets very
large.
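
Something along these lines is what I have in mind.  This is only a rough C
sketch; the break points, names and increments are invented purely for
illustration:

#include <stdlib.h>

/* Hypothetical growth policy: double while the table is small, grow by
   1.5x in the middle range, then fall back to fixed-size increments
   once it gets very large.  The thresholds are arbitrary. */
static size_t next_capacity(size_t current)
{
    if (current == 0)
        return 16;                      /* initial allocation */
    if (current < 1024)
        return current * 2;             /* small: double */
    if (current < 1024 * 1024)
        return current + current / 2;   /* medium: grow by 1.5x */
    return current + 256 * 1024;        /* large: fixed-size steps */
}

/* Grow the buffer (if necessary) so it holds at least 'needed' elements.
   Returns the possibly-moved buffer, or NULL if reallocation fails,
   in which case the caller's original buffer is still valid. */
static void *ensure_capacity(void *buf, size_t *capacity,
                             size_t needed, size_t elem_size)
{
    size_t cap = *capacity;
    while (cap < needed)
        cap = next_capacity(cap);
    if (cap != *capacity) {
        void *p = realloc(buf, cap * elem_size);
        if (p == NULL)
            return NULL;
        buf = p;
        *capacity = cap;
    }
    return buf;
}

realloc, like CEECZST as you describe it, may be able to extend the block in
place; otherwise it copies the data to a new area, which is the cost the
growth factor is trying to amortize.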

________________________________
From: IBM Mainframe Discussion List <IBM-MAIN@LISTSERV.UA.EDU> on behalf of 
David Crayford <dcrayf...@gmail.com>
Sent: Thursday, August 4, 2016 8:41 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: COBOL 2014 dynamic capacity tables

On 4/08/2016 2:52 AM, Frank Swarbrick wrote:
> Even in the case where it does increase the actual allocated capacity, it
> does not do it "one row at a time".  Rather, it doubles the current physical
> capacity and "reallocates" (using CEECZST) the storage to the new value.
> This may or may not cause LE storage control to reallocate out of a
> different area (copying the existing data from the old allocated area).  If
> there is already enough room, it does nothing except increase the amount
> reserved for your allocation.  And even then, LE has probably already
> obtained a larger area from actual OS storage, depending on the values in
> the HEAP runtime option.

Almost all the dynamic array implementations that I'm aware of (C++
std::vector, Java ArrayList, Python lists, Lua tables) use a growth
factor of 1.5. Apparently it's related to the golden ratio: with a growth
factor below phi (about 1.618), the blocks freed by earlier expansions
eventually add up to more than the next request, so an allocator that
coalesces adjacent free blocks can reuse that space.
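
Here's a toy C illustration of that reuse argument (it isn't any particular
runtime's algorithm, and the starting size of 16 is arbitrary).  It prints,
for growth factors 1.5 and 2.0, the point at which the space freed by earlier
blocks first exceeds the next request:

#include <stdio.h>

int main(void)
{
    const double factors[] = { 1.5, 2.0 };
    for (int f = 0; f < 2; f++) {
        double size  = 16.0;    /* current (live) block */
        double freed = 0.0;     /* total space freed by earlier blocks */
        printf("growth factor %.1f:\n", factors[f]);
        for (int step = 1; step <= 8; step++) {
            double next = size * factors[f];
            printf("  request %7.1f, previously freed %7.1f%s\n",
                   next, freed,
                   freed >= next ? "  <-- fits in freed space" : "");
            freed += size;      /* the old block is freed after the copy */
            size = next;
        }
    }
    return 0;
}

With a factor of 2 the freed space always sums to less than the next request,
so every expansion needs fresh storage; with 1.5 reuse becomes possible after
a handful of expansions.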

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
