> What I don't understand is 
> why you'd want a split load of something like 20 in any circumstance. 
> Perhaps it's my misunderstanding of split load, but I take it to mean that 
> when a group reaches 20% full, it will split it into two different groups.

For KEYONLY dynamic files, the split and merge load factors refer to how 
full the group is of keys, not keys plus data.  It's possible to fill a 
block with keys and not get any data in the primary block.  In those cases 
you can use smaller factors to force the data into the same blocks as the 
keys.  You may want to go even lower - maybe 10/5.
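
For example, the factors on an existing dynamic file can be changed with 
CONFIGURE.FILE.  A rough sketch - the file name and values are 
placeholders, and the exact syntax may vary by release:

    CONFIGURE.FILE BIGFILE SPLIT.LOAD 10 MERGE.LOAD 5

Running ANALYZE.FILE against the file afterward should confirm the new 
settings took.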

If you're using a KEYDATA dynamic file, the factors are based on the size 
of keys plus data.  Try it as a KEYONLY file with small factors and see if 
you get better results.
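
If you end up rebuilding the file instead, the same options can go on the 
create - again just a sketch with placeholder names, so check the syntax 
on your release:

    CREATE.FILE BIGFILE.NEW DYNAMIC KEYONLY SPLIT.LOAD 10 MERGE.LOAD 5
    COPY FROM BIGFILE TO BIGFILE.NEW ALL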

> Of course, I'm dealing with a headache of a dynamic file that just isn't 
> happy.  In short, it's got about 33,000,000 records, each under 512 bytes 
> in size, with purely integer keys...
> When I copy records from the original to the new dynamic file and get 
> somewhere around 20,000,000 records, it starts creating ovr segments like 
> crazy.  I end up with 4 dat segments and 5 ovr segments.

How close to 512 is the average record size?  You may be close enough that 
two records won't fit into the primary block.  Have you tried a larger 
block size?  For fun, you may want to create a small version of the file 
as static, copy in a reasonable number of records, and see what guide 
recommends for the modulo and block size.  While those won't necessarily 
translate to dynamics, the exercise may give you some ideas.  Besides, you 
may get better performance with a larger block size (maybe 4 KB), 
depending on how your disks are set up.  On the other hand, don't go too 
large arbitrarily, especially if you're running with RFS.  All that being 
said, the block size *probably* wouldn't be creating the overflow - it 
should be forcing splits, but it's something to consider.
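
If you try the static experiment, it might look something like the 
following.  The names, modulo, and sample size are placeholders, and the 
second number on CREATE.FILE is the block size multiplier (units of 512 
bytes on the releases I've worked with, so 8 would be 4 KB - verify on 
yours):

    CREATE.FILE BIGFILE.TEST 10007,8
    SELECT BIGFILE SAMPLE 100000
    COPY FROM BIGFILE TO BIGFILE.TEST

COPY honors the active select list, so only the sampled records come 
across.  Then run guide against BIGFILE.TEST from the OS prompt (its 
invocation varies by platform) and see what modulo and block size it 
recommends.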

If you haven't already, take a look at the GROUP.STAT output for the file. 
Since you have sequential keys, the records should be spread pretty evenly 
among the groups - if not, something else may be going on.  Also look at 
the number of bytes per group to see if it's fairly consistent.
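
The check itself is quick - same placeholder name as above:

    GROUP.STAT BIGFILE

On a 33-million-record file the listing will be long, so you may want to 
capture it with COMO before scanning the per-group record and byte counts.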

Tim Snyder
Consulting I/T Specialist
U2 Lab Services
Information Management, IBM Software Group