In UV at least (not sure about UD), attribute processing has improved
dramatically since "the old days," to the point where it is essentially
instant.

Try this:

>
>CT BP ATTRIBUTE.TEST

     ATTRIBUTE.TEST
0001 * test attributes vs values in large array
0002 * 05-12-05 asb
0003
0004 DIM DLIST(10000)
0005 FOR N = 1 TO 10000
0006   DLIST(N) = N
0007 NEXT N
0008
0009 PRINT "COUNTING DIMS.........":
0010 T = SYSTEM(9)
0011 FOR N = 9000 TO 10000
0012   X = DLIST(N)
0013 NEXT N
0014 PRINT SYSTEM(9)-T
0015
0016 LIST = ""
0017 FOR N = 1 TO 10000
0018   LIST<N> = N
0019 NEXT N
0020
0021 PRINT "COUNTING ATTRIBUTES...":
0022 T = SYSTEM(9)
0023 FOR N = 9000 TO 10000
0024   X = LIST<N>
0025 NEXT N
0026 PRINT SYSTEM(9)-T
0027
0028
0029 CONVERT @AM TO @VM IN LIST
0030 PRINT "COUNTING VALUES.......":
0031 T = SYSTEM(9)
0032 FOR N = 9000 TO 10000
0033   X = LIST<1,N>
0034 NEXT N
0035 PRINT SYSTEM(9)-T
0036
>

>RUN BP ATTRIBUTE.TEST
COUNTING DIMS.........0
COUNTING ATTRIBUTES...0
COUNTING VALUES.......280
>.X
01 RUN BP ATTRIBUTE.TEST
COUNTING DIMS.........0
COUNTING ATTRIBUTES...0
COUNTING VALUES.......250
>.X
01 RUN BP ATTRIBUTE.TEST
COUNTING DIMS.........0
COUNTING ATTRIBUTES...0
COUNTING VALUES.......240
>.X
01 RUN BP ATTRIBUTE.TEST
COUNTING DIMS.........0
COUNTING ATTRIBUTES...0
COUNTING VALUES.......250
>.L RELLEVEL

     RELLEVEL
001 X
002 9.6.1.14
003 PICK
004 PICK.FORMAT
005 9.6.1.14

It seems to me that the speed advantage of dimensioned arrays is no
longer valid. BTW, converting VMs to AMs in even the largest array (say
100,000 elements) is nearly instant, and the speed benefit of
processing attributes vs. values is so large that if you have old
applications with VM-delimited lists (I used to structure lists that
way), you should (if possible) convert to AMs first, process the data,
then convert back to VMs before putting it back.
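A rough sketch of that convert/process/convert-back pattern, in UniVerse
BASIC. The file variable F.CUST, the record layout (a VM-delimited list
in attribute 3), and the field names are all hypothetical, not from the
original post:

```
* Sketch: process a VM-delimited list by converting it to AMs first.
* Assumes REC<3> holds a value-mark-delimited list.
READ REC FROM F.CUST, ID ELSE STOP
LIST = REC<3>
CONVERT @VM TO @AM IN LIST    ;* now attribute-delimited: fast <N> access
CNT = DCOUNT(LIST, @AM)
FOR N = 1 TO CNT
   X = LIST<N>                ;* attribute reference, not a value scan
   * ... work with X here ...
NEXT N
CONVERT @AM TO @VM IN LIST    ;* restore the original delimiters
REC<3> = LIST
WRITE REC ON F.CUST, ID
```

Per the timings above, the two CONVERTs cost almost nothing, while the
attribute references inside the loop avoid the value-scan penalty.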

Scott Ballinger
Pareto Corporation
Edmonds WA USA
206 713 6006


-----Original Message-----
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Roger Glenfield
Sent: Friday, May 13, 2005 12:47 PM
To: u2-users@listserver.u2ug.org
Subject: Re: [U2] I'm in an Array quandry, any suggestions...
>Have you ever compared performance between dynamic and dimensioned
>arrays, or are you just saying that you've never noticed problems but
>have never tried dimensioned arrays?  I've seen it make a HUGE
>difference in Pick, UniVerse, and UniData.  If you reference many
>elements of a dynamic array many times, you'll burn a lot of CPU cycles
>just to locate the data.  When you reference an element of a
>dimensioned array, it's stored in separate address space, and is
>immediately referenced.
>
>I have a standard way to avoid problems with the last attribute folding
>into the highest array element.  Just dimension the array one element
>larger than the highest attribute you reference in the program.  So if
>the highest attribute you reference is number 72, dimension the array
>at 73 or higher.  Where I used to work, we had an automated process
>that created file definitions, including standard equates and the code
>to dimension arrays.  We always created the arrays at one more than the
>highest attribute, and never had problems.  This won't be necessary in
>environments where the extra attributes are placed on element zero, but
>it won't hurt anything, either.  That way your code will be portable.
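A minimal sketch of that oversizing technique in UniVerse BASIC. The
file name, equate, and field names are illustrative assumptions, not
from the quoted post:

```
* Highest attribute referenced is 72, so dimension at 73 so any
* trailing attributes fold into the spare element, not the last
* one the program actually uses.
EQU MAX.ATTR TO 72
DIM CUST.REC(MAX.ATTR + 1)
OPEN 'CUSTOMER' TO F.CUST ELSE STOP 201, 'CUSTOMER'
MATREAD CUST.REC FROM F.CUST, ID ELSE MAT CUST.REC = ''
NAME = CUST.REC(1)    ;* attribute 1, referenced directly
```

The spare element costs one extra slot but keeps the code portable
between environments that fold overflow into element zero and those
that fold it into the highest element.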
>
>  
>
Payback during 2nd generation Pick was 10-20 attributes.  Back then, the
problem was to not oversize, because that slowed down the reading and
writing of the blank attributes.

Didn't we hear/read recently that the new compiler and/or runtime
machine is keeping track of individual attribute marks in dynamic
arrays, so that a full string search is not necessary every time?


Roger
-------
u2-users mailing list
u2-users@listserver.u2ug.org
To unsubscribe please visit http://listserver.u2ug.org/