On 2/25/14 3:08 PM, Robert O'Callahan wrote:
> That's a good point, but Acid3, and to a lesser extent Acid2, are about
> testing edge cases and the presence of obscure features. I don't think
> they tell you anything significant about parallelism in the mass of
> pages on the Web. No single page can, but Acid3 is probably even worse
> than picking a page at random. You'd be much better off picking, say, a
> Wikipedia page (which I know you've done!) or the HTML5 single-page spec.

So I agree that Acid3 in particular isn't especially interesting for parallelism. (Well, except maybe the SVG stuff, but I don't think the real-world risk of having to make SVG slow affects Servo very much.)

Wikipedia is actually a bit tricky to use as a test case in my experience, because it makes heavy use of floats the way they were designed to be used--for floating objects--and as a result it achieves poor parallelism right now. (Mobile Wikipedia is much faster, because it doesn't use floats.) The Alexa Top 50 pages that I've been able to test generally achieve significantly better parallelism than Wikipedia does. This is why I think it's important to have some breadth in our tests.
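
To make the float hazard concrete, here's a minimal Rust sketch of the dependency (this is not Servo's actual layout code; Block, layout, and layout_siblings are made-up names for illustration). The point is just that floated siblings feed into the available width of every later sibling, forcing an in-order walk, while float-free siblings are independent and can be laid out concurrently:

    use std::thread;

    #[derive(Clone)]
    struct Block {
        is_float: bool,
        intrinsic_width: f32,
    }

    fn layout(block: &Block, available_width: f32) -> f32 {
        // Pretend "layout": a float consumes part of the line, a normal
        // block just takes whatever width it is given.
        if block.is_float {
            block.intrinsic_width.min(available_width)
        } else {
            available_width
        }
    }

    fn layout_siblings(blocks: &[Block], container_width: f32) {
        if blocks.iter().any(|b| b.is_float) {
            // With floats, each sibling's available width depends on the
            // floats placed before it, so the children must be walked in order.
            let mut remaining = container_width;
            for b in blocks {
                let used = layout(b, remaining);
                if b.is_float {
                    remaining -= used;
                }
            }
        } else {
            // Without floats, every sibling sees the same available width,
            // so the work is independent and can be fanned out. (Servo uses
            // a work-stealing scheduler; plain spawned threads here are just
            // to show the independence.)
            let handles: Vec<_> = blocks
                .iter()
                .cloned()
                .map(|b| thread::spawn(move || layout(&b, container_width)))
                .collect();
            for h in handles {
                h.join().unwrap();
            }
        }
    }

    fn main() {
        let children = vec![
            Block { is_float: true, intrinsic_width: 120.0 },
            Block { is_float: false, intrinsic_width: 0.0 },
        ];
        layout_siblings(&children, 960.0);
    }

Desktop Wikipedia's layout looks like the first branch; mobile Wikipedia (and most of the Alexa Top 50 pages I've measured) looks much more like the second.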

> That's a good point too, but the problem is that key parts of the Web
> like GMail, Youtube, Facebook etc. require so much work for full support
> that I don't think we can draw broad conclusions until very far in the
> future, when it will be far too difficult to make architectural changes.
> If you must discover those conclusions earlier, then we should probably
> piggyback some analysis on an existing browser engine instead of doing
> it in Servo.

> The good news is that I think CSS layout is by far the largest piece of
> the Web where we have implicit parallelism that is subject to
> unpredictable hazards. Other big chunks of work, like almost all DOM
> APIs, are either obviously not parallelizable or obviously parallelizable.

Right, I'm more interested in the legacy CSS layout stuff that's so prevalent that we can't realistically push back on its use. You do have a good point about it being hard to test things like Gmail or Facebook due to the sheer number of dynamic HTML features they use, but maybe we can leverage existing engines to produce static versions of their pages to test in Servo. (I actually already did this to test YouTube.) Based on my (limited) experience trying to guess how much parallelism we'll gain by performing analyses in Gecko (in the previous thread about measuring parallel hazards), I think it's difficult to perform these analyses directly in existing engines without involving our actual implementation in Servo somehow. The chaotic nature of parallel scheduling with work stealing makes Servo's parallel layout difficult to model and simulate accurately.
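
As a rough illustration of that last point (a toy shared queue, not Servo's actual work-stealing scheduler; the "subtree" representation is made up), the assignment of layout subtrees to workers depends entirely on runtime contention, so the schedule differs from run to run even on the same page:

    use std::collections::VecDeque;
    use std::sync::{Arc, Mutex};
    use std::thread;

    fn main() {
        // Pretend each number is a layout subtree waiting to be processed.
        let queue: Arc<Mutex<VecDeque<u32>>> =
            Arc::new(Mutex::new((0..16).collect()));
        let mut handles = Vec::new();

        for worker in 0..4 {
            let queue = Arc::clone(&queue);
            handles.push(thread::spawn(move || {
                let mut processed = Vec::new();
                loop {
                    // Whichever worker wins the lock "steals" the next
                    // subtree; that contention is the nondeterminism.
                    let task = queue.lock().unwrap().pop_front();
                    match task {
                        Some(subtree) => processed.push(subtree),
                        None => break,
                    }
                }
                println!("worker {} handled {:?}", worker, processed);
            }));
        }

        for h in handles {
            h.join().unwrap();
        }
    }

Run it a few times and the per-worker assignments shuffle, which is exactly why trying to simulate the schedule offline in another engine is unreliable.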

Patrick

_______________________________________________
dev-servo mailing list
dev-servo@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-servo
