UniData has a 2 gig limit on hashed files. What is the exact limit?
Is it really 2,147,483,648 (2**31), or is "2 gig" a loose term? What
is the true threshold: is it truly 2**31, or is it something else?
We will need to do something soon, but how soon is what I am trying
to determine.
The problem occurs within a few bytes of 2**31, depending on the block
size and a few other things. Strictly, it is the write that pushes a
block offset past 2**31 that fails.
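
To pin down the arithmetic: a signed 32-bit offset tops out at
2**31 - 1 = 2,147,483,647, so the last block that can be written is the
one starting at 2**31 minus the block size. Here is a minimal sketch of
that arithmetic in plain Python; the block sizes are assumed examples,
and this models a generic signed 32-bit offset, not UniData's actual
internals:

    # Generic signed 32-bit offset arithmetic; an illustration,
    # not UniData's actual on-disk logic.
    MAX_SIGNED_32 = 2**31 - 1      # 2,147,483,647: the "2 gig" ceiling

    def last_safe_block_offset(block_size: int) -> int:
        """Starting offset of the last block that can be written
        before a block offset would reach or pass 2**31."""
        return 2**31 - block_size

    for bs in (1024, 2048, 4096, 8192):
        print(f"block size {bs:5}: last writable block starts at "
              f"{last_safe_block_offset(bs):,}")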
Be aware that, if you're anywhere close to 2 GB, you may be closer to
going over the top than you realize. If you have a nice, sequential key,
it's quite possible that the file size could double rather quickly as all
of the groups go into overflow in rapid fashion. Once you go over, writes
will begin to fail, and data loss is imminent.
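
A toy simulation makes the sequential-key hazard visible. Under a
simple modulo hash (an assumption for illustration; UniData's hashing
types are more involved), sequential keys fill every group at almost
exactly the same rate, so every group overflows in the same narrow
window:

    # Toy simulation of sequential keys under a simple modulo hash.
    # NUM_GROUPS, RECORD_SIZE, and BLOCK_SIZE are assumed values.
    from collections import Counter

    NUM_GROUPS = 1000         # assumed modulo of the file
    RECORD_SIZE = 100         # assumed average record size, in bytes
    BLOCK_SIZE = 4096         # assumed group (block) size, in bytes

    counts = Counter(key % NUM_GROUPS for key in range(100_000))
    sizes = [n * RECORD_SIZE for n in counts.values()]
    print(f"min group: {min(sizes):,} bytes, max group: {max(sizes):,} bytes")
    # Both numbers come out identical: sequential keys load every group
    # at the same rate, so all NUM_GROUPS groups cross BLOCK_SIZE in the
    # same narrow window, each taking an overflow block at nearly the
    # same moment. The file grows by roughly NUM_GROUPS * BLOCK_SIZE
    # almost overnight.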
When I see a file approaching 1 GB, I start thinking about converting it
to dynamic. When it's at about 1.5 GB, I consider it time to convert it
at the next available opportunity. If you're at the point that you're
worried about the difference between 2*10**9 and 2**31, I suggest that
you're close enough to warrant action rather soon.
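
One way to automate that rule of thumb is a periodic size scan. The
sketch below is hypothetical: it flags every OS-level file over the
thresholds, whereas a real check would restrict itself to the account's
hashed-file datafiles:

    # Hypothetical watchdog: flag OS-level files approaching the
    # 2**31 ceiling. Thresholds follow the advice above.
    import os
    import sys

    THINK_ABOUT_IT = 1 * 1024**3        # ~1 GB: start planning a conversion
    CONVERT_SOON = int(1.5 * 1024**3)   # ~1.5 GB: convert at next opportunity

    root = sys.argv[1] if len(sys.argv) > 1 else "."
    for dirpath, _, names in os.walk(root):
        for name in names:
            path = os.path.join(dirpath, name)
            try:
                size = os.path.getsize(path)
            except OSError:
                continue                # unreadable file; skip it
            if size >= CONVERT_SOON:
                print(f"CONVERT SOON: {path} ({size:,} bytes)")
            elif size >= THINK_ABOUT_IT:
                print(f"watch:        {path} ({size:,} bytes)")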
Tim Snyder
Consulting I/T Specialist
U2 Lab Services
Information Management, IBM Software Group