I'm working on getting the Intl2 test set from the page cycler up and
running on the Mac; right now that run crashes very quickly. It turns out
that one of the test pages has hundreds of images on it, and we
simultaneously make hundreds of URLRequestFileJobs to load them. Each
of those uses a SharedMemory for communication, and each SharedMemory
requires a file descriptor. This test page generates enough requests at once
that we blow out the file descriptor limit (which defaults to 256 on
the Mac) and fall apart.
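
(For what it's worth, that limit is just the per-process RLIMIT_NOFILE
soft limit; a standalone check like the one below -- not Chromium code,
just plain POSIX -- is enough to confirm the 256 figure on a stock Mac:)

  #include <stdio.h>
  #include <sys/resource.h>

  int main() {
    struct rlimit lim;
    if (getrlimit(RLIMIT_NOFILE, &lim) == 0) {
      // In a default Mac shell this reports a soft limit of 256.
      printf("soft fd limit: %llu, hard fd limit: %llu\n",
             (unsigned long long)lim.rlim_cur,
             (unsigned long long)lim.rlim_max);
    }
    return 0;
  }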

It's tempting to say that we should just
  a) bump up the limit, and
  b) make failure to create a SharedMemory non-fatal
At least some degree of b) is probably a good idea, but it's not
entirely clear that we *want* all the layers involved to silently
accept failure. Even if we do, local pages with more images than
whatever limit we set in a) won't load correctly, and making that
limit too high can get ugly.
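
(For concreteness, a) would basically be a setrlimit() call early in
startup, something like the sketch below. I believe the Mac kernel also
caps RLIMIT_NOFILE at OPEN_MAX, so there's a hard ceiling no matter what
number we pick -- which is part of why I'm skeptical of relying on it.)

  #include <sys/resource.h>

  // Sketch only: raise the soft fd limit toward |desired|, clamped to
  // the hard limit. Where this would live and what |desired| should be
  // are exactly the open questions.
  static void BumpFdLimit(rlim_t desired) {
    struct rlimit lim;
    if (getrlimit(RLIMIT_NOFILE, &lim) != 0)
      return;
    if (lim.rlim_max != RLIM_INFINITY && desired > lim.rlim_max)
      desired = lim.rlim_max;
    if (desired > lim.rlim_cur) {
      lim.rlim_cur = desired;
      setrlimit(RLIMIT_NOFILE, &lim);
    }
  }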

A seemingly better option would be to limit the number of simultaneous
URLRequestFileJobs we will allow. I assume we have plumbing in place
to deal with limiting the number of simultaneous URLRequestJobs we
make per server; is it flexible enough that it could be extended to
handle file URLs as well? If so, is there any reason that would be a
bad idea? (And can someone point me to the relevant code?)
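
(To make the question concrete: what I have in mind is nothing fancier
than a counting throttle along these lines. The names are made up, not
actual Chromium classes, and it assumes everything happens on one thread
-- the IO thread -- so there's no locking:)

  #include <cstddef>
  #include <functional>
  #include <queue>
  #include <utility>

  // Hypothetical throttle: at most |max_jobs| file jobs run at once;
  // the rest wait in FIFO order until a running job finishes.
  class FileJobThrottle {
   public:
    explicit FileJobThrottle(size_t max_jobs) : max_jobs_(max_jobs) {}

    // Called when a URLRequestFileJob wants to start. |start| actually
    // kicks off the job (opens the file, creates the SharedMemory, etc.).
    void OnJobReady(std::function<void()> start) {
      if (running_ < max_jobs_) {
        ++running_;
        start();
      } else {
        pending_.push(std::move(start));
      }
    }

    // Called when a job completes, whether it succeeded or failed.
    void OnJobDone() {
      if (!pending_.empty()) {
        std::function<void()> next = std::move(pending_.front());
        pending_.pop();
        next();  // running_ is unchanged: one job out, one job in.
      } else {
        --running_;
      }
    }

   private:
    const size_t max_jobs_;
    size_t running_ = 0;
    std::queue<std::function<void()>> pending_;
  };

If the existing per-server limiting already gives us something
equivalent, I'd much rather hook into that than add a new mechanism.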

-Stuart
