On Tue, 26 Nov 2013 10:33:06 +0000
Oscar Benjamin <oscar.j.benja...@gmail.com> wrote:

> On 26 November 2013 06:18, John O'Hagan <resea...@johnohagan.com>
> wrote:
> >
[...]
> >
> > def _multicombs(prepend, words, r, chkstr):
> >     """chkstr is the string of remaining available characters"""
> >     if r == 0:
> >         yield prepend, chkstr
> >         return
> >     (word, count, rem), *remaining = words
> >     for k in range(max(r - rem, 0), min(count, r) + 1):
> >         # test k copies of word against a fresh copy of the
> >         # available characters each time round
> >         newstr = chkstr
> >         for letter in word * k:
> >             if letter in newstr:
> >                 newstr = newstr.replace(letter, '', 1)
> >             else:
> >                 return
> >         yield from _multicombs(prepend + (word,) * k, remaining,
> >                                r - k, newstr)
> 
> Further micro-optimisations are possible. One is to inline the lower
> recursion levels. For example, if the termination condition is checked
> immediately before recursing, you can eliminate a redundant generator
> function call.
> 
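If I follow you, that would look something like this (untested sketch;
it assumes the function is never called with r == 0 at the top level):

def _multicombs(prepend, words, r, chkstr):
    """chkstr is the string of remaining available characters"""
    (word, count, rem), *remaining = words
    for k in range(max(r - rem, 0), min(count, r) + 1):
        newstr = chkstr
        for letter in word * k:
            if letter in newstr:
                newstr = newstr.replace(letter, '', 1)
            else:
                return
        if r == k:
            # termination handled inline: no generator call whose only
            # job is to yield once and return
            yield prepend + (word,) * k, newstr
        else:
            yield from _multicombs(prepend + (word,) * k, remaining,
                                   r - k, newstr)
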
> > By progressively checking against the available characters, it
> > avoids fruitless recursion branches. I'm not 100% sure I'm doing
> > this right yet but so far it's promising. This is the quickest
> > non-redundant approach I've used so far (although strangely, the
> > original product-only version which produces a lot of redundant
> > results is still by far the quickest.)
> 
> Bear in mind that the interpreter does a lot of "redundant" things, so
> what you might think of as non-redundant code can actually be highly
> redundant once interpreter overhead is accounted for.

I don't mind redundancy as long as I don't have to see it in the output!

> Recently I've been trying out PyPy in my own work and it is *really
> fast*. In fact it is so much faster that I've given up on the idea of
> speed-optimising code for CPython: just use PyPy instead - although it
> is worth optimising highly intensive code for PyPy.

I'll have a look at it.

> > I'll try to explain better what I'm actually doing. It's quite long,
> > but you seem to be wondering, or maybe someone else is!
> 
> [snip]
> 
> Okay I understand now. I was confused because I think of anagrams as
> being words of the same length. I didn't understand how your multiword
> version works but I think I do now. In my own words: You want to
> generate all sequences of English words having the same letters as
> some input string, ignoring the spaces in both the input and output
> strings (so that the number or length of the individual words need not
> be the same).
> 
> [snip]
> >
> > For example, take the phrase "break beat". I make a wordlength
> > dictionary of sub-alphabet words, using the system dictionary file,
> > like this:
> >
> >  {1: ['a'],
> >   2: ['at', 'be', 're'],
> >   3: ['are', 'ark', 'art', 'ate', 'baa', 'bar', 'bat', 'bee', 'bet',
> >       'bra', 'ear', 'eat', 'ebb', 'eke', 'era', 'ere', 'eta', 'rat',
> >       'tab', 'tar', 'tea', 'tee'],
> >   4: ['abet', 'area', 'babe', 'bake', 'barb', 'bare', 'bark', 'bate',
> >       'beak', 'bear', 'beat', 'beer', 'beet', 'beta', 'brat', 'rake',
> >       'rate', 'reek', 'take', 'tare', 'teak', 'tear', 'tree', 'trek'],
> >   5: ['abate', 'baker', 'beret', 'brake', 'break', 'eater', 'karat',
> >       'kebab', 'taker'],
> >   6: ['aerate', 'beaker', 'beater', 'berate', 'betake', 'karate',
> >       'rebate', 'retake']}
> 
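(The construction could look roughly like this; a sketch rather than my
exact code, and the wordlist path is an assumption:)

from collections import Counter

def wordlength_dict(phrase, wordlist_path='/usr/share/dict/words'):
    """Group every wordlist word whose letters form a sub-multiset of
    the phrase's letters, keyed by word length."""
    available = Counter(phrase.replace(' ', ''))
    by_length = {}
    with open(wordlist_path) as f:
        for line in f:
            word = line.strip().lower()
            # Counter subtraction keeps only positive counts, so an
            # empty result means every letter of word fits the phrase
            if word.isalpha() and not (Counter(word) - available):
                by_length.setdefault(len(word), []).append(word)
    return by_length
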
> Okay, how about the following (pseudoish-code)?
> 
> def word_sequences(prepend, target, subwords):
>     # subwords is a list rather than a dict
>     for n in range(1, len(subwords)):
>         for word in subwords[n]:
>             if equal_in_a_multiset_sense(word, target):
>                 yield prepend + ' ' + target
>             elif is_a_submultiset(word, target):
>                 recurse_target = multiset_subtract(target, word)
>                 subsubwords = list_of_subwords_by_length[:len(recurse_target)]
>                 if any(subsubwords):
>                     yield from word_sequences(prepend + ' ' + word,
>                                               recurse_target, subsubwords)
> 

[...]

That is very clever: direct, economical, and it does effectively the
same pruning job as my approach without all the combinatorial
brainstrain. I got it working by changing "for n in range..." to "for
words in subwords" (otherwise it missed the one-letter words), and the
first yield needed to be "prepend + ' ' + word".
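
For anyone trying Oscar's version as posted, the multiset helpers can
be written with collections.Counter; these are my guesses at the
intended semantics:

from collections import Counter

def equal_in_a_multiset_sense(word, target):
    # same letters with the same multiplicities
    return Counter(word) == Counter(target)

def is_a_submultiset(word, target):
    # every letter of word is available, with multiplicity, in target
    return not (Counter(word) - Counter(target))

def multiset_subtract(target, word):
    # remove word's letters from target, keeping target's order
    needed = Counter(word)
    out = []
    for c in target:
        if needed[c] > 0:
            needed[c] -= 1
        else:
            out.append(c)
    return ''.join(out)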

I simplified it a bit more to this:

def word_sequences(prepend, target, subwords):
    """subwords is a list of lists of subwords grouped by length,
    in order of length"""
    for words in subwords:
        for word in words:
            recurse_target = subset_subtract(target, word)
            if recurse_target:
                # letters left over: extend the phrase, offering only
                # word lists short enough to fit what remains
                yield from word_sequences(prepend + ' ' + word,
                        recurse_target, subwords[:len(recurse_target)])
            elif recurse_target == '':
                # exact fit: a complete anagram phrase
                yield prepend + ' ' + word

with just one function to do the subset testing:

def subset_subtract(target, word):
    # Return target minus word's letters if word is a sub-multiset of
    # target ('' on an exact fit), or None if any letter is unavailable.
    for i in word:
        if i in target:
            target = target.replace(i, '', 1)
        else:
            return
    return target
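
A driver for it can look like this (a sketch; wordlength_dict is the
hypothetical builder from earlier in this mail):

# Flatten the dict into a list of lists indexed by length - 1, padding
# missing lengths with [] so that slicing by length works.
wordlist = wordlength_dict('break beat')
subwords = [wordlist.get(n, []) for n in range(1, max(wordlist) + 1)]

target = 'breakbeat'
for phrase in word_sequences('', target, subwords[:len(target)]):
    print(phrase.strip())   # phrases such as "bat beaker" come out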

Speed-wise it is somewhat faster than any of my non-duplicate-producing
attempts, but still way slower than the current champion, the redundant
cartesian product-only version. 

However, because it iterates through all the remaining words on each
recursion, it seems to produce n! copies of each unique result, where n
is the number of words in the result, so this is the new champion as
far as redundancy is concerned. I'll keep working on it; the totally
different approach is interesting.
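
One way I can see to kill the n! duplicates (untested sketch): fix a
canonical order by flattening the word lists into one list, shortest
words first, and passing a start index down the recursion so that the
word indices along any result are non-decreasing:

def word_sequences_unique(prepend, target, flat_words, start=0):
    """flat_words is every candidate word in one list, shortest first;
    each unique multiset of words is then produced exactly once."""
    for i in range(start, len(flat_words)):
        word = flat_words[i]
        if len(word) > len(target):
            break    # sorted by length, so nothing later fits either
        recurse_target = subset_subtract(target, word)
        if recurse_target:
            # pass i rather than i + 1 so a word may be used twice
            yield from word_sequences_unique(prepend + ' ' + word,
                                             recurse_target, flat_words, i)
        elif recurse_target == '':
            yield prepend + ' ' + word

flat_words = [word for words in subwords for word in words]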

--

John 