> > It is good to hear from everyone about common code patterns and what the
> > common scenarios are, and what needs to be targeted.  But, the problem
> > often is that ensuring correctness for the 1% (or even 0.1%) uncommon
> > case might effectively block good performance for the 99% common case.
> > The specifics will depend on the specific case being considered.
>
> There are two schools of thought here... one is that we could explicitly
> define the optimization characteristics of the system and say "if you
> do X after point Y, your changes won't be visible to compiled code."
> In the 0.1 or 0.01% cases, this may be acceptable, and perhaps nobody
> will ever be impacted by it. But no matter how small the likelihood,
> we can't claim that our optimizations are 100% non-damaging to normal,
> expected Ruby behavior. Whether we can bend the rules of what is
> "normal" or "expected" is more a political debate than a technical
> one.
>
> The other school of thought is that we must be slavishly 100%
> compatible all the time. I think our lack of an aliasable "eval"
> proves that's not the case; there *are* things that people simply *do
> not do*, and we do not need to always allow them to penalize
> performance. And taking an even stronger position, we can always say
> "this is how JRuby works; it's not compatible, but it's what we needed
> to do to get performance for the 99% case" such as we did with
> un-aliasable "eval". Generally people won't complain, and if they do
> they won't actually be affected by it.


I probably lean towards the latter.  But, insofar as all implementations have
bugs and specs are incomplete, you probably have some leeway.  In addition,
I am not sure there is a solid language spec for Ruby, which also leaves
the playing field a bit hazy.

In that sense, the spec is what is implemented, and in the case of Ruby, it
might be whatever is verifiable through the RubySpec tests, if that is what
all the language implementations settle on as the de facto standard.  So you
might be right in saying that this is JRuby, and not Ruby, and this might be
a political negotiation between the various implementations.
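
To make the eval example concrete: as I understand it (a sketch, not
verified against JRuby), the pattern being ruled out is calling eval through
another name, so a compiler can no longer recognize the call site statically:

    module Kernel
      alias_method :my_eval, :eval    # hypothetical alias
    end

    x = 42
    my_eval("x")    # MRI evaluates this in the caller's binding and finds x;
                    # an implementation that only special-cases literal calls
                    # to `eval` may have no reason to keep x reachable here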

In any case, if (a) expectations of what is unsupported are clear upfront,
and (b) violations of assumptions are detected and flagged in some obvious
fashion rather than failing silently or mysteriously, that might still be
acceptable behavior.
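
As a purely hypothetical illustration of (a) and (b), in the spirit of the
"X after point Y" wording quoted above (the class and method names here are
made up by me):

    class Point
      def norm; 1.0; end
    end

    def length_of(p)
      p.norm
    end

    1_000_000.times { length_of(Point.new) }   # length_of gets hot here

    class Point
      def norm; 2.0; end      # redefined after length_of may have been compiled
    end

    length_of(Point.new)      # expected Ruby answer: 2.0; an optimizer that
                              # inlined the old norm and never deoptimizes
                              # could keep returning 1.0, and (b) would ask
                              # that such a case be flagged rather than silent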

Ultimately, this might be an academic discussion, but probably worth having
:-)


> There's also another key point Tom constantly reminds me of: the
> majority of Ruby application performance is not lost due to Ruby code
> execution speed, but due to the speed of the core classes. If only 10%
> of system performance relates to Ruby code execution, and we double
> it, we've only gained a measly 5%. But if we double the performance of
> the remaining 90% (presumably core classes), we improve overall perf
> by 45%. It's a much bigger job, of course, but it helps put things in
> perspective. It's probably better for us to be moderately
> underoptimized than to have dismally inefficient core classes, if we
> had to choose.


This is true if the problem is at the level of the source-code / algorithmic
implementation of the core classes.  But if the core classes perform poorly
because of the language implementation itself, it is not.  For example, the
optimizations you implement for the language might lead to good performance
for the core classes too.  Obviously, I am speaking hypothetically since I
don't know much about what the performance bottlenecks in the core classes
actually are.
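
For reference, the arithmetic quoted above works out like this, assuming the
time really does split 10/90 and "improvement" means total time saved:

    ruby_exec    = 0.10    # fraction of total time executing Ruby code
    core_classes = 0.90    # fraction of total time in the core classes

    # Doubling a component's speed halves the time it contributes.
    ruby_exec / 2 + core_classes    # => 0.95, i.e. about 5% of the time saved
    ruby_exec + core_classes / 2    # => 0.55, i.e. about 45% of the time saved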

Subbu.
