Umm, the reason the hash is slow is that it only hashes string-type 
values. The test you were using had numeric values. Try your tests with 
strings and you should see a significant difference. I'll add numeric 
values to the hash as well for the next release... However, I'm not 
planning on hashing block values...
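
For anyone who wants to check this, a rough way to compare (a sketch, 
not tested here; I'm assuming now/time/precise is available in your 
build, and the key names are just for illustration):

    strings: make hash! []
    numbers: make hash! []
    repeat i 10000 [
        append strings reduce [join "key" i i]   ; string keys: hashed
        append numbers reduce [i i]              ; numeric keys: not hashed yet
    ]
    t: now/time/precise
    loop 1000 [select strings "key5000"]
    print ["string keys: " now/time/precise - t]
    t: now/time/precise
    loop 1000 [select numbers 5000]
    print ["numeric keys:" now/time/precise - t]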

  - jim

At 06:31 PM 7/7/2000 -0500, you wrote:
>Ladislav wrote:
>>My experiment was rather related to Brian's Memoize, or other
>>simple associative structures. The result is that for structures
>>containing more than 4096 elements, the native Find, Select, ...
>>can successfully be replaced by appropriately designed
>>approaches.
>
>Associative structures are best accessed with select, not
>find. The hash! type optimizes select, but with find it is
>no faster than a plain block, as far as I can tell.
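>
>For example (a quick sketch; the values here are just for
>illustration):
>
>    data: ["one" 1 "two" 2 "three" 3]
>    table: make hash! data
>    select table "two"       ; == 2, hashed lookup
>    select data "two"        ; == 2, linear scan of the block
>    second find data "two"   ; == 2, find returns the position,
>                             ; so you step to the value yourself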
>
>Your bfind is faster than find on sorted data, as find
>uses a linear search. When working with associative lists
>you also need to consider the overhead of sorting the
>data. That means either resorting the list after every
>insert, using an incremental insertion sort, or making
>sure the data is inserted in order (see the sketch after
>this paragraph). Any of these may be efficient to do
>manually in some circumstances. Another advantage of your
>binary search is that you can search records based on keys
>(with a little rewriting), making it more appropriate for
>record lookup than find.
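>
>For the inserted-in-order case, something along these lines
>could keep the list sorted as you go (an untested sketch;
>the names are just for illustration):
>
>    sorted-insert: func [
>        "Insert a key/value pair, keeping the block sorted by key."
>        series [block!] key value /local pos
>    ][
>        pos: series
>        ; scan forward by record until we pass the insertion point
>        while [all [not tail? pos key > first pos]] [pos: skip pos 2]
>        insert pos reduce [key value]
>        series
>    ]
>
>    db: copy []
>    sorted-insert db "carol" 3
>    sorted-insert db "alice" 1
>    sorted-insert db "bob" 2
>    ; db is now ["alice" 1 "bob" 2 "carol" 3], ready for bfind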
>
>Memoizing is only an advantage when the set of arguments
>is small and a lot of repeated calls with the same value
>are common. Otherwise the overhead of inserts, whether from
>hashing or from an incremental insertion sort (so you can
>use bfind), outweighs the benefit. Even then memoizing is
>only appropriate when the cost of calculating the return
>value is greater than the cost of memoizing it, and when
>no side effects occur. You must also consider the memory
>requirements of the lookup table.
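>
>As a toy example of the good case (the recursion repeats
>the same arguments over and over, the result is expensive
>to compute, and there are no side effects), a sketch:
>
>    fib-cache: make hash! []
>    fib: func [n [integer!] /local val] [
>        ; per Jim's note above, numeric keys aren't hashed in
>        ; the current release, so this select is a linear scan
>        if val: select fib-cache n [return val]
>        val: either n < 2 [n] [(fib n - 1) + (fib n - 2)]
>        append fib-cache reduce [n val]
>        val
>    ]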
>
>Overall I've found hash! tables to be the most efficient
>for association lists, using select to do the lookup.
>Lookup tables are only a good idea when lookups outnumber
>inserts by enough to make up for the insert overhead. Any
>other case makes memoization a bad idea anyway.
>
>Any other kind of record lookup, say of records with more
>than 2 fields, would be more efficient with your bfind
>function modified to handle other record lengths via a
>/skip refinement, like sort's.
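>
>Just to make that concrete, here is one shape it might
>take (a rough sketch, not your actual bfind; it assumes
>the block is sorted by the first field of each record,
>as sort/skip would leave it):
>
>    bfind: func [
>        "Binary search a sorted block; returns the record position or none."
>        series [block!] value
>        /skip size [integer!] "Record length (like sort/skip)"
>        /local lo hi mid item
>    ][
>        size: either skip [size] [1]
>        lo: 0
>        hi: ((length? series) / size) - 1
>        while [lo <= hi] [
>            mid: to integer! (lo + hi) / 2
>            item: pick series (mid * size) + 1   ; first field of record
>            if item = value [return at series (mid * size) + 1]
>            either item < value [lo: mid + 1] [hi: mid - 1]
>        ]
>        none
>    ]
>
>    people: sort/skip ["carol" 34 "alice" 29 "bob" 41] 2
>    bfind/skip people "bob" 2   ; position at the "bob" record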
>
>Brian Hawley
