[EMAIL PROTECTED] wrote:


> Is it worth the effort to pre-process the pattern?
>
> For example:
>
>     %% -> %

This is already done, required by spec.

>     %_ -> _%
>
> If applied recursively, this would automatically cover:
>
>     %_%  -> _%
>     _%_  -> __%
>
> The 'benefit' would be that the pattern matching code would not
> need an inner if statement?

I doubt it's worth the trouble.
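
For the record, a rewriting pass along those lines could be as small as the
sketch below (untested, ignores the escape character, and not taken from the
real like_match.c code):

    #include <stdbool.h>

    /*
     * Hypothetical pre-processing sketch: collapse runs of % and push _
     * ahead of %, so the matcher never sees _ or % immediately after a %.
     * Escape handling is deliberately omitted.
     */
    static void
    preprocess_like_pattern(const char *src, char *dst)
    {
        while (*src)
        {
            if (*src == '%' || *src == '_')
            {
                bool        saw_percent = false;

                /* emit every _ in this wildcard run, note any % */
                while (*src == '%' || *src == '_')
                {
                    if (*src == '_')
                        *dst++ = '_';
                    else
                        saw_percent = true;
                    src++;
                }
                /* an entire run of % collapses to a single % */
                if (saw_percent)
                    *dst++ = '%';
            }
            else
                *dst++ = *src++;
        }
        *dst = '\0';
    }

Running that over %_% gives _%, and over _%_ gives __%, matching the
examples above.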

> Also - I didn't see a response to my query with regard to treating UTF-8
> as a two-pass match: a first pass treating the data as bytes, and, if the
> first pass matches, a second pass doing a full analysis. In the case of
> low selectivity this will be a win, as the primary filter would be the
> full-speed byte-based matching.

All matching will now be done byte-wise. CHAREQ is dead.

Advancing will also be done byte-wise except for:
 . where text matching is against _ for UTF8
 . where text matching is against % or _ for other multi-byte charsets.

So two passes doesn't sound like much of a win.
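
To make the distinction concrete, here is a rough sketch of byte-wise matching
where the only UTF8-specific work is advancing over a whole character when the
pattern byte is _ (names and structure are illustrative only, not the actual
like_match.c code):

    #include <stdbool.h>

    /* bytes in the UTF-8 character starting at *s (assumes valid UTF-8) */
    static int
    utf8_char_len(const char *s)
    {
        unsigned char c = (unsigned char) *s;

        if (c < 0x80)
            return 1;
        if (c < 0xE0)
            return 2;
        if (c < 0xF0)
            return 3;
        return 4;
    }

    static bool
    like_match_utf8(const char *t, int tlen, const char *p, int plen)
    {
        while (plen > 0)
        {
            if (*p == '%')
            {
                p++, plen--;
                /* try the rest of the pattern at every byte position */
                for (;;)
                {
                    if (like_match_utf8(t, tlen, p, plen))
                        return true;
                    if (tlen <= 0)
                        return false;
                    t++, tlen--;    /* byte-wise advance after % */
                }
            }
            else if (*p == '_')
            {
                int     w;

                if (tlen <= 0)
                    return false;
                w = utf8_char_len(t);   /* _ consumes one whole character */
                t += w, tlen -= w;
                p++, plen--;
            }
            else
            {
                /* everything else is a plain byte comparison - no CHAREQ */
                if (tlen <= 0 || *t != *p)
                    return false;
                t++, tlen--;
                p++, plen--;
            }
        }
        return tlen == 0;
    }

The byte-wise advance after % is safe for UTF8 because lead bytes and
continuation bytes occupy disjoint ranges, so a comparison starting in the
middle of a multi-byte character can never succeed.
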
> I had also asked why the focus would be on high selectivity. Why would
> the primary filter criterion for a properly designed select statement be
> a like with high selectivity? The only time I have ever used like is
> when I expect low selectivity. Is there a reasonable case I am missing?



I think you'd need to show something close to a Pareto improvement: nobody worse off and some people better off. If you can do that, then send in a patch.

However, I'm trying to minimise special-case processing for UTF8, not create a whole new code path for it. The fewer special cases we have, the easier it will be to maintain.


cheers

andrew
