On 01/04/2012 11:09 PM, Stas Malyshev wrote:
> Hi!
> 
>> I really don't think this manual per-feature limiting is going to cut it
>> here. The real problem is the predictability of the hashing which we can
>> address by seeding it with a random value. That part is easy enough, the
>> hard part is figuring out where people may be storing these hashes
>> externally and providing some way of associating the seed with these
>> external caches so they will still work.
> 
> I think we'd need an API to access the seed value and to calculate the
> hash for a given seed value. That would probably allow extensions that
> store hashes to do it properly with some additional work. Though it
> needs more detailed investigation.

Yes, but we still need an actual case to look at. Opcode caches
shouldn't be a problem unless they store some representation on disk
that lives across server restarts. In the APC world, nobody does that.
Is there something in common use out there that actually needs this?

Let's do just the GPC fix (Dmitry's version) for 5.3, turn on
ignore_repeated_errors during startup, and get it out there. That
takes care of the most obvious attack vector for existing users.
Leaving this in place is fine regardless of what we do in 5.4.
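For the record, the GPC fix amounts to capping how many GET/POST/COOKIE
variables get parsed per request, so an attacker can't feed the engine
millions of deliberately colliding keys. In php.ini terms (the directive
name and default here are what shipped in 5.3.9; treat the rest as an
illustrative sketch):

```ini
; Cap the number of input variables accepted per request.
; Requests exceeding the limit have the excess variables dropped,
; which bounds the worst-case hash-collision work per request.
max_input_vars = 1000
```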

I think for 5.4 we should take a couple of days to dig into what would
actually break from seeding the hash. This seems like a much more
elegant solution compared to trying to add limits to all the other
places manually. This manual limit checking also wouldn't cover
third-party extensions or even userspace code that might be vulnerable
to the same thing. The only way to fix those cases is with a central
hash fix.

Another alternative to seeding would be to use a different hashing
algorithm altogether. That would solve the cross-server issues, at the
likely cost of slower hashing.

-Rasmus

-- 
PHP Internals - PHP Runtime Development Mailing List
To unsubscribe, visit: http://www.php.net/unsub.php
