Nathan Torkington wrote:
> 
> And there's no law that says some areas can't run *faster* than 10%.

"...where all the children are above average.". 10% across the board
demands that, unless you overclock by 10%. :-)

> But I think we have to be realistic.  We all want a programming
> language that doesn't require you to do anything and runs in zero time,
> but it ain't going to happen.  10% -- that's at least in the realms
> of possibility.

If you get a 10% speedup, then I guarantee that I can write a benchmark
that will get a 100% speedup. The question is whether real-world tasks
will ever get that 100% speedup, or if they'll all get the 20% slowdown
that makes the average work out.
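To make that arithmetic concrete, here's a minimal sketch (with invented
numbers, not real measurements) of how a headline "10% average speedup" can
come entirely from one inflated benchmark while every other task gets slower:

```python
# Hypothetical per-benchmark speedups, in percent. One benchmark doubled
# in speed; the other three each slowed down by 20%.
speedups = [100, -20, -20, -20]

# The aggregate looks like the promised 10%...
average = sum(speedups) / len(speedups)
print(f"average speedup: {average:.0f}%")  # → average speedup: 10%

# ...even though only one task in four actually got faster.
wins = [s for s in speedups if s > 0]
print(f"benchmarks that got faster: {len(wins)} of {len(speedups)}")
```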

I just don't know if I'd bother to switch to Perl6 for a 10% speedup. If
it made it possible for me to get a 40% speedup on a task I cared about,
even if I had to tell the optimizer exactly what machine instructions to
use, then I'd switch.

I'm okay with the 10% speedup goal, since it'll probably lead to the
larger speedups without the effort of pinpointing realistic,
real-world benchmarks. It just bothers me a bit that it would be
possible to hit 10% without anyone really benefiting. But it's
measurable, so it's a decent compromise.

> It's not CGI people who find speed a problem, it's the folks dealing
> with shitloads of data.  They do the same operations again and again

Agreed. This is exactly where I have prototypes that I can't use as
production components. I regularly traverse 90GB of data, and it can be
a long wait with my perl scripts. If I could shift to using perl for
production tasks with the 90GB, and start to use perl for prototyping
things on the 2.5TB that I don't touch with anything but C now, THAT
would make a difference.

> on 500 million lines of data, and need to squeeze every microsecond
> out of the operations because one microsecond each line corresponds to
> 8 minutes of real-world time.  If the bulk of your program is regexp
> manipulations of text, a 10% speedup in the RE engine corresponds to
> nearly a 10% speedup of your program (for perspective, they could do
> another 50 million lines in that time).

But for me, that means shortening a 20 hour run to an 18 hour run. I'm
not going to wake up any sooner to go check on it.
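The quoted figures do check out; a quick back-of-the-envelope sketch using
only the numbers already stated in the thread:

```python
# 500 million lines at one microsecond saved per line:
lines = 500_000_000
per_line_saving = 1e-6  # seconds
minutes_saved = lines * per_line_saving / 60
print(f"{minutes_saved:.1f} minutes saved")  # → 8.3 minutes saved

# A 10% speedup on a 20-hour batch run:
run_hours = 20
print(f"{run_hours * 0.9:.0f} hour run")  # → 18 hour run
```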

Sorry, I'm straying too far from measuring goals.

> Once again, I'd tend to compile a set of ten problems that represent
> common XS operations (wrapping a simple function with automatic type
> conversion, defining custom types, mixing Perl-allocated data with
> C-allocated data, all the way up to wrapping C++ objects as blessed
> tied hashes).

Oops. Did you suggest this somewhere and I missed it? This sounds great.
