On Oct 1, 2008, at 1:24 AM, David Hyatt wrote:
> On Oct 1, 2008, at 2:52 AM, Darin Fisher wrote:
>
>> I can appreciate that you aren't interested in revisiting this
>> problem after having resolved it finally by adding the clamp. I
>> believe you when you say you had compelling evidence too.
>
> We are interested in revisiting the problem or we wouldn't be
> suggesting a new high resolution timer API.
I'm with Hyatt. The reason we are having this thread is precisely to
revisit the problem.
I don't know how clear I was in the previous email, but basically it
can take a long time before you see problems. What happens is that a
site makes a change, screws up, and ships an unintentional setTimeout
loop, which then pegs the CPU of any browser with no clamp. They don't
discover it because every major browser has a pretty high clamp. When
we had these issues, they'd crop up one site at a time, every so
often. The good news is that the sites would usually fix the problem;
the bad news is that it could take a while, and in the meantime angry
users would be switching to Firefox.
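To make the failure mode concrete, here is a minimal sketch of the kind
of mistake I mean (my own illustration, not code from any of the sites
in question): a typo or a dropped delay argument turns an animation
callback into a busy loop.

  // Hypothetical page code illustrating a runaway timer.
  // The author intends a ~30 fps animation but passes a delay of 0
  // (or forgets the second argument), so the callback reschedules
  // itself as fast as the browser will allow.
  function animateStep(): void {
    // ...update the DOM a little...
    setTimeout(animateStep, 0); // intended: setTimeout(animateStep, 33)
  }
  animateStep();

With a 10ms clamp that mistake costs at most roughly 100 wakeups per
second; with a 1ms clamp, roughly 1000; with no clamp at all the page
spins a core flat out, and the user blames the browser, not the site.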
That is what I was alluding to when I said it took us 3.5 years to
realize we needed the clamp at all. The problems come and go, but they
recur consistently, and it can take a while to recognize each one.
However, the bug Mike cited does seem to mention problems with the 1ms
limit on some real sites:
<http://code.google.com/p/chromium/issues/detail?id=792>. At least
five sites are mentioned, including nytimes.
I think we are converging on some good solutions (a somewhat lower
basic clamp, plus a new high-resolution timer API), and I regret it if
this thread has felt hostile to anyone.
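To sketch what that split could look like in page terms (my own sketch;
the clamp value and the setHighResTimeout name are assumptions, not
anything we have agreed on): plain setTimeout keeps its floor, and
code that genuinely needs fine-grained timing has to ask for it.

  // Illustrative only: MIN_TIMEOUT_MS and setHighResTimeout are
  // assumptions for the sake of the example, not shipping APIs.
  const MIN_TIMEOUT_MS = 10;

  type HighResTimeoutFn = (callback: () => void, ms: number) => number;

  function clampedSetTimeout(callback: () => void, requestedMs: number): number {
    // Ordinary path: the engine enforces a floor on the requested
    // delay, so a runaway zero-delay loop stays bounded.
    return window.setTimeout(callback, Math.max(requestedMs, MIN_TIMEOUT_MS));
  }

  function scheduleTick(callback: () => void, ms: number): number {
    // Pages that really need fine-grained timers opt in explicitly;
    // everything else stays behind the clamp.
    const highRes =
      (window as Window & { setHighResTimeout?: HighResTimeoutFn }).setHighResTimeout;
    return highRes ? highRes(callback, ms) : clampedSetTimeout(callback, ms);
  }

The point of the split is that the safety valve stays on by default,
and only code that explicitly asks for high resolution can wake up
faster than the clamp allows.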
Regards,
Maciej