I agree, but I would add that you also want to look at it from a locking point of view. Locks are hashed into shared memory using the group number, so the more records in one group, the more will hash into a single lock row. IIRC, record locks are promoted to group locks if you have more than one record locked in the group. So you can minimize group lock contention by making the file "wide and shallow". Not only is there no gain from having 30+ records in a group, there's a potential penalty.
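To make the point concrete, here's a small sketch (in Python, not actual UniVerse internals) of why record-per-group counts drive lock contention: records hash to a group, and locking is effectively per group, so everything in the same group contends for the same lock slot. The hash function and moduli below are made up for illustration only.

```python
import zlib
from collections import defaultdict

def group_of(key: str, modulo: int) -> int:
    # Hypothetical stand-in for the file's hashing algorithm:
    # CRC32 of the key, folded into the file's modulo.
    return zlib.crc32(key.encode()) % modulo

def lock_contention(keys, modulo):
    """Map group -> record count for groups holding more than one record.

    Every record beyond the first in a group contends for the same
    group-level lock once record locks get promoted.
    """
    groups = defaultdict(int)
    for k in keys:
        groups[group_of(k, modulo)] += 1
    return {g: n for g, n in groups.items() if n > 1}

keys = [f"REC{i:04d}" for i in range(1000)]

# "Deep" file: tiny modulo, so many records pile into each group lock.
deep = lock_contention(keys, modulo=7)

# "Wide and shallow" file: large modulo, so few records share a lock.
wide = lock_contention(keys, modulo=1009)

# The deepest group in the small-modulo file sees far more contention.
assert max(deep.values()) > max(wide.values() or [0])
```

This is only a model of the hashing behaviour described above; the real promotion threshold and lock-table layout are platform details, but the shape of the trade-off is the same.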

I don't believe so. The principal criterion in separation is to get the I/O as efficient (i.e., as minimal) as possible. With today's hardware it is neither here nor there whether you have 30+ records per group to scan through in memory.
-------
u2-users mailing list
u2-users@listserver.u2ug.org
To unsubscribe please visit http://listserver.u2ug.org/
