On Mon, Aug 17, 2009 at 11:43 AM, bsmedberg <bsmedb...@gmail.com> wrote:

>
> At Mozilla we're currently working on implementing multi-process
> plugin hosts similar to the model used by Chromium. However, we're
> having trouble working through the many potential race conditions
> introduced by having multiple processes (and therefore multiple flows
> of control). I've read
> http://dev.chromium.org/developers/design-documents/plugin-architecture
> but that doesn't seem to address most of the issues we're facing.
>

For a detailed view of how we solve these issues, the best place to look is
the code; specifically, plugin_channel.cc and ipc_sync_channel.cc.

To avoid deadlock, when one process is sending a synchronous message, it
responds to other synchronous messages in the meantime.  However, to avoid
unnecessary reentrancy, we disable this for synchronous messages from the
plugin process unless that process is itself responding to a synchronous
message.  We did this because otherwise a plugin process could try to force
layout while layout is already happening, which is not expected in WebKit.
You can find more about this in PluginChannel's constructor, where we call
SendUnblockingOnlyDuringDispatch, which makes it so that synchronous
messages from the plugin process unblock the channel only while it is
dispatching a message itself.
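
To make this concrete, here's a heavily simplified, single-file sketch of
the policy (these are not Chromium's real classes; the transport calls are
stubbed out, and names like SyncChannelSketch and restrict_dispatch_ are
illustrative only):

  #include <deque>
  #include <functional>

  struct Message {
    bool is_sync = false;
    bool sender_was_dispatching = false;  // set by the peer if it sent this
                                          // while handling a sync message
    std::function<void()> handler;
  };

  class SyncChannelSketch {
   public:
    // Models the effect of SendUnblockingOnlyDuringDispatch(): incoming
    // sync messages are dispatched while we block in Send() only if the
    // peer sent them while it was itself dispatching a sync message.
    void set_restrict_dispatch(bool restrict_dispatch) {
      restrict_dispatch_ = restrict_dispatch;
    }

    // Blocks until the reply arrives, pumping eligible incoming sync
    // messages in the meantime; this is what avoids the deadlock when
    // both processes send a synchronous message at once.
    void Send(const Message& msg) {
      DeliverToPeer(msg);
      while (!reply_received_) {
        DispatchEligibleIncoming();
        WaitForActivity();  // assumed: blocks until a message or reply
      }
    }

   private:
    void DispatchEligibleIncoming() {
      for (auto it = incoming_.begin(); it != incoming_.end();) {
        if (it->is_sync &&
            (!restrict_dispatch_ || it->sender_was_dispatching)) {
          dispatching_ = true;   // messages we send now carry this flag
          it->handler();
          dispatching_ = false;
          it = incoming_.erase(it);
        } else {
          ++it;
        }
      }
    }

    void DeliverToPeer(const Message&) {}  // assumed transport, not shown
    void WaitForActivity() {}              // assumed transport, not shown

    std::deque<Message> incoming_;
    bool reply_received_ = false;
    bool restrict_dispatch_ = false;
    bool dispatching_ = false;
  };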

>
> The most obvious problem is that both processes may send a synchronous
> IPC message at the same time. Assuming that these don't deadlock, the
> native stack for the two calls would end up interleaved. What happens
> when the renderer process and the plugin process send conflicting
> messages at roughly the same time? For example: the browser finishes a
> network request with NPP_DestroyStream and the plugin (responding to a
> UI event, perhaps) calls NPN_DestroyStream simultaneously? I can't
> imagine that a plugin would expect to receive an NPP_DestroyStream
> message after it has already called NPN_DestroyStream, and this is
> likely to cause erratic plugin behavior.
>

This specific case is not really a problem.  If you look at our
implementation of PluginInstance::NPP_DestroyStream(), we set NPStream.ndata
to NULL, and the second call exits early if it's already NULL.
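
As a hedged sketch of that guard (assuming the standard NPAPI types from
npapi.h; the wrapper name and the CallPluginDestroyStream() hook below are
illustrative, not our actual code):

  #include "npapi.h"

  // Assumed hook that forwards to the plugin's real NPP_DestroyStream.
  NPError CallPluginDestroyStream(NPP npp, NPStream* stream, NPReason reason);

  NPError HostDestroyStream(NPP npp, NPStream* stream, NPReason reason) {
    // If the stream is already gone (e.g. the plugin raced ahead with
    // NPN_DestroyStream), ndata is NULL; exit early so the plugin never
    // sees a second, unexpected NPP_DestroyStream.
    if (!stream || !stream->ndata)
      return NPERR_NO_ERROR;

    NPError rv = CallPluginDestroyStream(npp, stream, reason);
    stream->ndata = NULL;  // mark destroyed so any racing call no-ops
    return rv;
  }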


> Are the IPC delegates/stubs responsible for checking the state of each
> call and avoiding improper nesting? Do you have any procedure/system
> for detecting racy improper nesting? For example, racing pairs
> NPP_Write/NPN_DestroyStream and NPN_RequestRead/NPP_DestroyStream are
> equally unexpected. And all these examples come only from the NPStream
> interface; I haven't even begun to examine potential message races in
> the core NPP APIs or the NPObject APIs.
>

We did run into a bunch of these issues early on, but they were all easy to
work around (e.g. by always checking first whether the stream has already
been destroyed, since a stream message can arrive after destruction in the
situations you describe).
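
The same defensive pattern works on the receiving side of any racy stream
message. For example (a sketch only; g_live_streams and CallPluginWrite()
are hypothetical helpers, not our actual code):

  #include <set>
  #include "npapi.h"

  std::set<NPStream*> g_live_streams;  // updated on stream create/destroy

  // Assumed hook that forwards to the plugin's real NPP_Write.
  int32_t CallPluginWrite(NPP npp, NPStream* stream, int32_t offset,
                          int32_t len, void* buffer);

  int32_t OnWriteFromBrowser(NPP npp, NPStream* stream, int32_t offset,
                             int32_t len, void* buffer) {
    // The browser's NPP_Write may have been in flight when the plugin
    // called NPN_DestroyStream; drop it rather than hand the plugin a
    // dead stream.
    if (g_live_streams.count(stream) == 0)
      return -1;  // a negative return tells the browser to abort the stream
    return CallPluginWrite(npp, stream, offset, len, buffer);
  }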

The hardest issues we've had to solve are related to performance.  The
initial release had poor performance when scrolling pages with lots of
windowed/windowless plugins.  To solve this, we moved to an asynchronous
painting/scrolling model.  While it adds extra complexity and memory usage,
the user experience when scrolling is much better.  The techniques we used
should be applicable in your implementation as well; we can talk more about
this when you're ready (plugin lunch? :) ).
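
To give a rough idea of the shape of the asynchronous model (every name
here is illustrative, not our actual interface; assume pixels points at a
memory-mapped buffer shared between the two processes):

  #include <cstdint>

  struct DamageRect { int x, y, width, height; };

  struct SharedBackingStore {
    uint32_t* pixels;  // assumed: shared-memory back buffer, ARGB
    int width, height;
  };

  void SendAsyncDamage(const DamageRect& rect);  // assumed async IPC send
  void ScheduleRepaint();                        // assumed compositor poke

  // Plugin process: draw into the shared buffer, then notify with a
  // fire-and-forget message instead of a synchronous paint round trip.
  void PluginDidPaint(SharedBackingStore* store, const DamageRect& dirty) {
    // ... plugin has already rendered into store->pixels ...
    SendAsyncDamage(dirty);
  }

  // Renderer process: just record the damage; the blit from shared memory
  // happens on the next compositing pass, so a scroll can proceed with the
  // last completed frame instead of blocking on the plugin process.
  DamageRect g_pending_damage = {0, 0, 0, 0};

  void OnAsyncDamage(const DamageRect& rect) {
    g_pending_damage = rect;  // real code would union the dirty rects
    ScheduleRepaint();
  }

The extra copy of the plugin's output held in shared memory is the memory
cost mentioned above.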


> --BDS
>
