On 2012-09-10, Dan Goodman <dg.gm...@thesamovar.net> wrote:
> On 04/09/2012 03:54, Roy Smith wrote:
>> Let's assume you're testing two strings for equality.  You've already
>> done the obvious quick tests (i.e they're the same length), and you're
>> down to the O(n) part of comparing every character.
>>
>> I'm wondering if it might be faster to start at the ends of the strings
>> instead of at the beginning?  If the strings are indeed equal, it's the
>> same amount of work starting from either end.  But, if it turns out that
>> for real-life situations, the ends of strings have more entropy than the
>> beginnings, the odds are you'll discover that they're unequal quicker by
>> starting at the end.
>
>  From the rest of the thread, it looks like in most situations it won't 
> make much difference as typically very few characters need to be 
> compared if they are unequal.
>
> However, if you were in a situation with many strings which were almost 
> equal, the most general way to improve the situation might be store a 
> hash of the string along with the string, i.e. store (hash(x), x) and 
> then compare equality of this tuple. Almost all of the time, if the 
> strings are unequal the hash will be unequal. Or, as someone else 
> suggested, use interned versions of the strings. This is basically the 
> same solution but even better. In this case, your startup costs will be 
> higher (creating the strings) but your comparisons will always be instant.
>

Computing the hash always requires iterating over all of the characters in the
string, so it is O(N) even in the best case, whereas string comparison is O(1)
in the best case (and often in the average case).
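To make the difference concrete, here's a rough sketch of character-by-character
equality that also counts how many characters were examined (the `eq_count`
helper is mine, just for illustration -- CPython does the real comparison with a
memcmp-style loop in C):

```python
def eq_count(a, b):
    """Char-by-char equality, returning (result, characters examined)."""
    if len(a) != len(b):
        # Length mismatch: no characters need to be compared at all.
        return False, 0
    for i, (ca, cb) in enumerate(zip(a, b)):
        if ca != cb:
            # Early exit at the first differing character.
            return False, i + 1
    return True, len(a)

# Strings differing at the first character: one comparison suffices.
print(eq_count("xbcdef", "abcdef"))   # (False, 1)
# Equal strings: all N characters are examined -- the same cost as hashing.
print(eq_count("abcdef", "abcdef"))   # (True, 6)
```

Hashing, by contrast, has to touch all N characters whether or not the strings
are equal, so it only pays off when the same strings are compared repeatedly.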

Also, so far as I know the hash value, once computed, is stored on the string
object itself [1] and reused for subsequent comparisons, so there's no need to
do that in your own code.
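The interning suggestion from the earlier message can be demonstrated directly
with `sys.intern` (a sketch; the "typically False" behaviour of the first `is`
check is a CPython implementation detail, not a language guarantee):

```python
import sys

# Build equal strings at runtime so CPython doesn't auto-intern them.
a = "".join(["hello", "_", "world"])
b = "".join(["hello", "_", "world"])
print(a == b)      # True: same characters
print(a is b)      # typically False in CPython: two distinct objects

# After interning, equal strings share one object, so equality
# checks can succeed with a pointer comparison alone.
ia, ib = sys.intern(a), sys.intern(b)
print(ia is ib)    # True
```

As noted above, the interning cost is paid once up front when the strings are
created, and every comparison after that is effectively instant.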

Oscar

[1] http://hg.python.org/cpython/file/71d94e79b0c3/Include/unicodeobject.h#l293

-- 
http://mail.python.org/mailman/listinfo/python-list