> > :> actually being used, while providing instant lookups.
> > :> 
> > :> The single file would be about 96G addressable bytes... But the actual
> > :> block count would be much lower.  I suppose I will have to create a series
> > :> of these files and divide the problem into < 4GB chunks, but one
> > :> lookup/store will still be done with one calculation and one disk seek
> > :> (just more filehandles open).
> > :> 
> > :> Deletes seem problematic.  My question is, does the operating system
> > :> provide any way to free blocks used in the middle of a file?
> > :> 
> > :> Must I periodically re-create these files (long and slow process, but not
> > :> so bad if done infrequently) to reclaim space, or is there a way to free
> > :> arbitrary blocks in a file in userland (WITHOUT trashing the filesystem?
> > :> :-)
> > :> 
> > :> - Ryan
> > :> 
> > :> -- 
> > :>   Ryan Thompson <[EMAIL PROTECTED]>
> > :>   Network Administrator, Accounts
> > :>   Phone: +1 (306) 664-1161
> > :> 
> > :>   SaskNow Technologies     http://www.sasknow.com
> > :>   #106-380 3120 8th St E   Saskatoon, SK  S7H 0W2
> > 
> >     I would strongly recommend using a B+Tree instead of a hash table.  With
> >     a hash table slightly different lookups will seek all over the place.
> >     With a B+Tree lookups will stay more localized.
> 
> Right... That's a good point, but (and, no, I really didn't mention this
> in my original post), "sequential" access is never important with the
> system I have described.  Lookups are always random, and they are almost
> always done one at a time (multiple lookups only done for data swaps...
> very rare, like 1/1e7 accesses... and those are still O(1) (well,
> O(2), but who's counting ;-)))...


> Remember, though, I'm not proposing a hash table.  I have been lucky
> enough to design a "hash" function that will crunch down all possible
> values into about 64M unique keys--so, no collisions.  I propose to use
> this function with address calculation.  So, O(1) everything, for single
> random lookups, which is (virtually) all this data structure must deal
> with.  Sorting isn't necessary, and that could be accomplished in O(n)
> anyway.
> 
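
For concreteness, that O(1) scheme comes down to something like the
following - a minimal sketch in C, assuming fixed-size slots (1536 bytes
here, so that 64M slots come to the 96G mentioned) and a collision-free
hash() as described:

    #include <stdint.h>
    #include <unistd.h>

    #define SLOT_SIZE 1536          /* assumed: 64M slots * 1536 = 96G */

    /* hash() is assumed: the collision-free function described above,
     * mapping every possible key to one of ~64M unique slot numbers. */
    extern uint32_t hash(const char *key);

    ssize_t
    lookup(int fd, const char *key, void *buf)
    {
            off_t off = (off_t)hash(key) * SLOT_SIZE;  /* one calculation */
            return (pread(fd, buf, SLOT_SIZE, off));   /* one seek + read */
    }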


A way to use hashing for variable-length records:

Use an intermediate 'file' whose records are simply pointers to the location
of the real data.

Finding a record goes like this:

    ACnumber = getRecord("KEY");    /* key lookup returns a record number  */
    blockNumber = AC[ACnumber];     /* address convertor: record number -> */
                                    /* disk block address                  */

then read the block at blockNumber.

The AC idea I pinched from Adabas, which (back in the early 80s, dunno what
it does now) had an address converter.  It changed record numbers (returned
by getRecord() here) to a disk address.  Its size is n, where n is the
number of records you support.
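
The AC itself can be nothing more than a flat file of block numbers, one
fixed-size entry per record number.  A sketch, assuming 4-byte block
numbers (names are made up):

    #include <stdint.h>
    #include <unistd.h>

    /* Read one address-convertor entry: record number -> block number. */
    uint32_t
    ac_lookup(int ac_fd, uint32_t recno)
    {
            uint32_t blockno;

            if (pread(ac_fd, &blockno, sizeof(blockno),
                (off_t)recno * sizeof(blockno)) != (ssize_t)sizeof(blockno))
                    return (0);     /* assume 0 means "no such record" */
            return (blockno);
    }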

In data storage (another Adabas term), you have fixed-length blocks, each
containing several records.

Adabas stored the Adabas record number and the data in the data files;
numbers were stored as binary, and character fields were (usually) preceded
with a (one-byte) length so trailing spaces could be dropped.
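
That compression is simple to do by hand; a sketch of packing one
character field with its one-byte size prefix (assumes fields are at most
255 bytes):

    #include <stdint.h>
    #include <string.h>

    /* Pack a character field: drop trailing spaces, prefix the length. */
    size_t
    pack_field(uint8_t *dst, const char *src, size_t len)
    {
            while (len > 0 && src[len - 1] == ' ')
                    len--;                  /* drop trailing blanks    */
            dst[0] = (uint8_t)len;          /* one-byte size prefix    */
            memcpy(dst + 1, src, len);
            return (len + 1);               /* bytes used in the block */
    }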

Inserted records go into empty blocks (except in load mode, where blocks
are filled to a user-specified percentage), and presumably into a block in
the buffer pool with space.

Blocks are compressed before being rewritten - all the data is packed to
the front.  The first field in the data storage (DS) block is the offset
to free space in the block.
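
In other words, something like this (the block size and field widths are
my assumptions, not Adabas's):

    #include <stdint.h>
    #include <string.h>

    #define BLOCKSIZE 4096                  /* assumed */

    struct ds_block {
            uint16_t free_off;              /* offset of first free byte */
            uint8_t  data[BLOCKSIZE - 2];   /* records, packed to front  */
    };

    /* Add a packed record if the block has room; the caller rewrites
     * the (re-compressed) block afterwards. */
    int
    ds_append(struct ds_block *b, const uint8_t *rec, size_t len)
    {
            if ((size_t)b->free_off + len > sizeof(b->data))
                    return (-1);            /* full; pick another block */
            memcpy(&b->data[b->free_off], rec, len);
            b->free_off += len;
            return (0);
    }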

Adabas also had a work dataset, where data are recorded pending end of
transaction; EOT can be acknowledged when the information is safely
recorded in work, as Adabas can always recover from there.  It also has
transaction logging, either to tape (alternating between two drives) or to
two files on disk.
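
The work-dataset trick in miniature: force the changes to the work file
before acknowledging EOT, so recovery can always replay from work.  A
sketch only; the record format is whatever suits you:

    #include <unistd.h>

    int
    commit(int work_fd, const void *rec, size_t len)
    {
            if (write(work_fd, rec, len) != (ssize_t)len)
                    return (-1);
            if (fsync(work_fd) != 0)        /* safely recorded in work */
                    return (-1);
            /* only now is it safe to acknowledge end of transaction */
            return (0);
    }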
