Since nobody else has chimed in with the obvious (to me, anyways):

I've worked with some code that uses disgustingly huge (>512 MB) arrays; the largest was a single 2.5 GB array (before we took the offending programmer into a room and had a... chat).

I'd be interested in seeing some metrics on the extra CPU ticks needed to determine whether an array (or array sub-element) is static or dynamic under this scheme, as well as the extra memory needed to store the extra value(s).

That's not exactly what I had in mind: I'm not proposing to split whole arrays into static and dynamic parts (that could be an entirely orthogonal optimization, similar to what Chrome's V8 engine does currently).

We simply know a lot more at compile time about member lookups whose key is a string literal:

$obj->foo(),   $obj->foo   and   $arr['foo']

as opposed to fully dynamic accesses, which we can't resolve at parse time:

$obj->$bar(),   $obj->$bar   and   $arr[$bar]

And we can optimize based on that. To be honest, I've not spent significant time with the PHP source code yet, but if no one else is willing to look at it, I could start tinkering with it in my free time and post patches if I arrive at anything. Then we can test how it impacts real-world code.

Regards, Stan Vassilev

--
PHP Internals - PHP Runtime Development Mailing List
To unsubscribe, visit: http://www.php.net/unsub.php
