Quoting Xavier Noria f...@hashref.com:
Hi gents,
I am playing around with an idea to improve the performance of singularize
and pluralize for Rails 4.0. In my proof of concept I see some 5x boost,
but it relies on an assumption that I'd like to consult with you all. Let me
explain.
As you
On Sun, Feb 12, 2012 at 6:45 AM, Aaron Patterson
tenderl...@ruby-lang.org wrote:
Interesting. Have you investigated expanding the regular expressions
and doing hash-based replacement via gsub!? Since we can know the
replacements in advance, it's possible to compile a hash and use it for
the
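The hash-based replacement Aaron describes might look something like the sketch below; the word list and the `pluralize_fast` name are illustrative, not Active Support's actual tables. `String#gsub` accepts a hash as its second argument, replacing each matched substring with its hash value:

```ruby
# Sketch of hash-based replacement: compile the known words into one
# alternation plus a replacement hash, then let gsub look matches up.
# REPLACEMENTS and pluralize_fast are illustrative names.
REPLACEMENTS = {
  "person" => "people",
  "ox"     => "oxen",
  "mouse"  => "mice"
}.freeze

# Anchor with word boundaries so "box" is not turned into "boxen".
PATTERN = /\b(?:#{Regexp.union(REPLACEMENTS.keys).source})\b/

def pluralize_fast(word)
  # gsub with a hash: each matched substring is replaced by hash[match]
  word.gsub(PATTERN, REPLACEMENTS)
end

pluralize_fast("ox")    # => "oxen"
pluralize_fast("mouse") # => "mice"
```

A real inflector would anchor at end-of-word rather than use plain boundaries, but the shape of the lookup is the same.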
I have implemented the variant that uses regular captures rather than named
captures, based on the group counter hack.
We have a similar improvement of 6x, and the code reads better, I believe:
https://gist.github.com/1808549
Also, I've added a sanity check to the script that ensures the
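The group-counter hack can be sketched as follows; this is an assumed shape, not the gist's exact code. Each rule's regexp is wrapped in its own capture group inside one big union, and counting the captures each rule contributes tells us which group numbers belong to it, so a single match reveals which rule fired:

```ruby
# Count the capture groups in a regexp by unioning it with an empty
# regexp (which always matches) and inspecting the resulting MatchData.
def count_captures(regexp)
  Regexp.union(regexp, //).match('').captures.length
end

# Illustrative [regexp, replacement] pairs.
RULES = [
  [/(ax|test)is$/i, '\1es'],
  [/s$/i, 's']
].freeze

# Wrap each rule in its own group and remember the group number that
# wraps it, accounting for the rule's internal captures.
group = 0
OFFSETS = []
sources = RULES.map do |regexp, _|
  OFFSETS << group + 1
  group += 1 + count_captures(regexp)
  "(#{regexp.source})"
end
UNION = Regexp.new(sources.join('|'), Regexp::IGNORECASE)

def rule_for(word)
  md = UNION.match(word) or return nil
  RULES[OFFSETS.index { |i| md[i] }]
end

rule = rule_for("axis")
rule && "axis".sub(rule[0], rule[1]) # => "axes"
```

Applying the matched rule's own regexp for the substitution sidesteps renumbering the backreferences in the replacement strings.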
Back in the day I implemented a patch for the inflector that gave (iirc)
something like 10x improvement. It was based on the Merb inflector, and was
mostly accomplished by more aggressive caching. My argument at the time was
that inflections tend to be used over and over again in a given app
So, knowing the key place to improve in the current implementation,
I've rewritten that test this way:
https://github.com/rails/rails/commit/d3071db1200e90c0533f75b967c4afb519656d00
which exploits the fact that uncountables are not regexps, but words.
It is not entirely backwards compatible
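Since the commit linked above relies on uncountables being words rather than regexps, the check presumably reduces to a membership test; this is a sketch with an illustrative word list, not the commit's actual code:

```ruby
require 'set'

# Uncountables are plain words, so a Set lookup replaces scanning a
# list of regexps. The word list here is illustrative.
UNCOUNTABLES = Set.new(%w(equipment information money species))

def uncountable?(word)
  UNCOUNTABLES.include?(word.downcase)
end

uncountable?("money")  # => true
uncountable?("monkey") # => false
```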
Hi gents,
I am playing around with an idea to improve the performance of singularize
and pluralize for Rails 4.0. In my proof of concept I see some 5x boost,
but it relies on an assumption that I'd like to consult with you all. Let me
explain.
As you know, inflection rules have a lhs which is a
On Sat, Feb 11, 2012 at 01:10:27PM +0100, Xavier Noria wrote:
Hi gents,
I am playing around with an idea to improve the performance of singularize
and pluralize for Rails 4.0. In my proof of concept I see some 5x boost,
but it relies on an assumption that I'd like to consult with you all. Let
On Sun, Feb 12, 2012 at 2:10 AM, Aaron Patterson
tenderl...@ruby-lang.org wrote:
Hi there!
It's possible to count the number of captures in a given regexp:
def count_captures(regexp)
  Regexp.union([regexp, //]).match('').captures.length
end
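For example, repeating that definition so the snippet runs on its own:

```ruby
def count_captures(regexp)
  Regexp.union([regexp, //]).match('').captures.length
end

# The union always matches '' via the empty alternative, yet the
# MatchData still reports one (nil) capture per group in the regexp.
count_captures(/(\d+)-(\d+)/)   # => 2
count_captures(/(ax|test)is$/)  # => 1
count_captures(/s$/)            # => 0
```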
Yeah, Active Support had that some time
Ah, of course, another assumption here is that regexps are *in practice*
simple.
Throughout, I have in mind the actual sets of regexps, and the words
they are applied to, that we find in typical Rails applications.
In particular in practice I don't expect any backtracking explosion due to
quantification +
Nah, forget that last mail about backtracking. If there's excessive
backtracking in some regexp, it will be present in both approaches; the way
we build the alternation does not add to it.
--
You received this message because you are subscribed to the Google Groups Ruby
on Rails: Core group.
To
On Sun, Feb 12, 2012 at 03:14:34AM +0100, Xavier Noria wrote:
On Sun, Feb 12, 2012 at 2:10 AM, Aaron Patterson
tenderl...@ruby-lang.org wrote:
Hi there!
It's possible to count the number of captures in a given regexp:
def count_captures regexp
Regexp.union([regexp,
On Sun, Feb 12, 2012 at 3:43 AM, Aaron Patterson
tenderl...@ruby-lang.org wrote:
Ya, but if we're going to put arbitrary restrictions on the type of
matches people can do (i.e. no backreferences), you may as well use an
engine that can execute in O(n) time (n = string length). Otherwise,
On Sun, Feb 12, 2012 at 4:12 AM, Xavier Noria f...@hashref.com wrote:
Nowadays long strings get a performance boost. That does not make sense;
statistically speaking, English words should be the fast ones.
Indeed, running the benchmark against /usr/share/dict/words gives an
overall speedup of
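A minimal shape for such a benchmark, assuming a linear rule scan like today's inflector; the rule set and word list below are illustrative, and a real run would read /usr/share/dict/words:

```ruby
require 'benchmark'

# Illustrative rules in most-specific-first order, ending in a
# catch-all, as the inflector expects.
RULES = [
  [/(ax|test)is$/i,     '\1es'],
  [/([^aeiouy]|qu)y$/i, '\1ies'],
  [/s$/i,               's'],
  [/$/,                 's']
].freeze

# Today's approach: try each rule in turn until one matches.
def apply_first(word)
  RULES.each do |regexp, replacement|
    return word.sub(regexp, replacement) if word =~ regexp
  end
  word
end

# Stand-in for File.readlines('/usr/share/dict/words', chomp: true).
words = %w(axis category bus word)

Benchmark.bm(12) do |x|
  x.report('linear scan:') do
    10_000.times { words.each { |w| apply_first(w) } }
  end
end
```

Plugging the alternation-based RuleSet in as a second report would give the apples-to-apples comparison over the dictionary.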
Ugh, I started the RuleSet class as a replacement for what the inflector
does today, and prepending rules was actually reversing them in the script
:D.
Appending does slow it down a bit, but we are still around 6x.
On Sun, Feb 12, 2012 at 04:31:49AM +0100, Xavier Noria wrote:
On Sun, Feb 12, 2012 at 4:12 AM, Xavier Noria f...@hashref.com wrote:
Nowadays long strings get a performance boost. That does not make sense;
statistically speaking, English words should be the fast ones.
Indeed, running the