Easily fixed by installing one of the alternate regex libraries.

re performance and its edge cases have been discussed endlessly. Please look 
those discussions up before restarting them.

Top-posted from my Windows phone

From: Franklin? Lee
Sent: Thursday, February 8, 2018 2:46
To: Serhiy Storchaka
Cc: Python-Ideas
Subject: Re: [Python-ideas] Complicate str methods

On Feb 7, 2018 17:28, "Serhiy Storchaka" <storch...@gmail.com> wrote:
04.02.18 00:04, Franklin? Lee wrote:

Let s be a str. I propose to allow these existing str methods to take params in 
new forms.

s.replace(old, new):
     Allow passing in a collection of olds.
     Allow passing in a single argument, a mapping of olds to news.
     Allow the olds in the mapping to be tuples of strings.
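
For illustration, a rough pure-Python emulation of the mapping form (the
helper name is hypothetical, not part of the proposal):

    import re

    def replace_multi(s, mapping):
        # Replace every occurrence of any key with its mapped value,
        # scanning left to right; longer keys win at the same position.
        if not mapping:
            return s
        keys = sorted(mapping, key=len, reverse=True)
        pattern = re.compile("|".join(map(re.escape, keys)))
        return pattern.sub(lambda m: mapping[m.group(0)], s)

    # replace_multi("a-b_c", {"-": "+", "_": "+"})  ->  "a+b+c"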

s.split(sep), s.rsplit, s.partition:
     Allow sep to be a collection of separators.
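
A rough emulation (again, the name is hypothetical), assuming longer
separators should win when they overlap:

    import re

    def split_multi(s, seps):
        # Split on any of several separator strings.
        if not seps:
            return [s]
        seps = sorted(seps, key=len, reverse=True)
        return re.split("|".join(map(re.escape, seps)), s)

    # split_multi("a,b;c", [",", ";"])  ->  ['a', 'b', 'c']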

s.startswith, s.endswith:
     Allow argument to be a collection of strings.
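
These two already accept a tuple of strings, so this would presumably
generalize that to other collections. A trivial shim:

    # Works today:
    "script.py".endswith((".py", ".pyw"))   # -> True

    # Hypothetical generalization to arbitrary iterables:
    def startswith_any(s, prefixes):
        return s.startswith(tuple(prefixes))

    # startswith_any("spam", {"sp", "ham"})  ->  True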

s.find, s.index, s.count, x in s:
     Similar.
     These methods are also in `list`, which can't distinguish between items, 
subsequences, and subsets. However, `str` is already inconsistent with `list` 
here: list.M looks for an item, while str.M looks for a subsequence.
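
A sketch of the multi-needle find (a hypothetical helper, not a proposed
spelling):

    def find_any(s, needles):
        # Lowest index at which any needle occurs, or -1 (like s.find).
        hits = [i for i in (s.find(n) for n in needles) if i != -1]
        return min(hits, default=-1)

    # find_any("spam and eggs", ["egg", "and"])  ->  5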

s.[r|l]strip:
     Sadly, these functions already interpret their str arguments as 
collections of characters.

The name for complicated str methods is regular expressions. To perform these 
operations efficiently, you need to convert the arguments into a special 
optimized form. That is what re.compile() does. If that compilation happened on 
every invocation of a str method, it would add too much overhead and kill 
performance.
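
(The module-level re functions already amortize this cost with an internal
pattern cache; a user-level sketch of the same idea, with hypothetical names:)

    import functools
    import re

    @functools.lru_cache(maxsize=None)
    def _pattern(needles):
        # Compiled once per distinct tuple of needles -- the "special
        # optimized form" a plain str method would have to rebuild on
        # every call.
        return re.compile("|".join(map(re.escape, needles)))

    def search_any(s, needles):
        m = _pattern(tuple(needles)).search(s)
        return m.start() if m else -1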

Even for a simple string search, a regular expression can be more efficient 
than a str method.

$ ./python -m timeit -s 'import re; p = re.compile("spam"); s = "spa"*100+"m"' -- 'p.search(s)'
500000 loops, best of 5: 680 nsec per loop

$ ./python -m timeit -s 's = "spa"*100+"m"' -- 's.find("spam")'
200000 loops, best of 5: 1.09 usec per loop

That's an odd result. Python regexes use backtracking, not a DFA. I gave a 
timing test earlier in the thread:
https://mail.python.org/pipermail/python-ideas/2018-February/048879.html
I compared using repeated .find()s against a precompiled regex, then against a 
pure-Python, unoptimized tree-based algorithm.
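
That tree-based approach was roughly of this shape (a reconstruction for
readers who don't follow the link, not the code from that post):

    def trie_find_any(s, needles):
        # Unoptimized trie-based multi-needle search: build a trie of
        # the needles, then walk it from each starting position.
        trie = {}
        for needle in needles:
            node = trie
            for ch in needle:
                node = node.setdefault(ch, {})
            node[""] = True              # end-of-needle marker
        for start in range(len(s)):
            node = trie
            for ch in s[start:]:
                if "" in node:           # a needle ends here
                    return start
                if ch not in node:
                    break
                node = node[ch]
            else:
                if "" in node:
                    return start
        return -1

    # trie_find_any("spam and eggs", ["egg", "and"])  ->  5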

Could it be that re uses an optimization that can also be used in str? CPython 
uses a modified Boyer-Moore for str.find:
https://github.com/python/cpython/blob/master/Objects/stringlib/fastsearch.h
http://effbot.org/zone/stringlib.htm
Maybe there's a minimum length after which it's better to precompute a table.

In any case, once the regex has branches, which are necessary to emulate these 
features, it will start to slow down, because in the worst case it has to 
explore every branch.
