[chromium-dev] Re: Plugin control flow and race conditions

2009-08-18 Thread Peter Kasting
Explicitly adding jam to make him notice this.
(I don't know the answer to your question.  As far as I know we try to avoid
implementing things with synchronous messages as much as possible.)

PK




[chromium-dev] Re: Plugin control flow and race conditions

2009-08-18 Thread Amanda Walker

On Mon, Aug 17, 2009 at 2:43 PM, bsmedberg <bsmedb...@gmail.com> wrote:
 The most obvious problem is that both processes may send a synchronous
 IPC message at the same time. Assuming that these don't deadlock, the
 native stack for the two calls would end up interleaved.

While there may be a possibility of deadlock (I haven't looked; jam will
know more), the native stack on the plugin process side should not get
interleaved, at least using the Chromium IPC mechanism. In particular, the
plugin process will not process incoming IPC requests while a synchronous
IPC call of its own is outstanding. So in the case you've described, if
the plugin has called NPN_DestroyStream, it will not even see the
NPP_DestroyStream until its NPN_DestroyStream call has completed. At that
point, the plugin host process may be able to determine that the stream
has already been destroyed, and not forward the call to the plugin at all.
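
In rough C++, that host-side short-circuit might look something like the
sketch below (invented names, not the actual Chromium implementation):

    #include <set>

    struct NPStream;  // opaque NPAPI stream handle

    class PluginHostSketch {
     public:
      // Called when the plugin sends NPN_DestroyStream over IPC.
      void OnNPNDestroyStream(NPStream* stream) {
        live_streams_.erase(stream);
        // ... tear down the network request, reply to the plugin ...
      }

      // Called when the browser wants to deliver NPP_DestroyStream.
      void MaybeForwardNPPDestroyStream(NPStream* stream) {
        // If the plugin already destroyed this stream, drop the call
        // instead of forwarding it, so the plugin never observes the
        // reversed ordering.
        if (live_streams_.erase(stream) == 0)
          return;
        // ... send the NPP_DestroyStream IPC to the plugin process ...
      }

     private:
      std::set<NPStream*> live_streams_;
    };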

In addition, a number of operations that are normally synchronous in
in-process NPAPI are asynchronous in Chromium (painting, for example).
This does add complexity, but it helps avoid deadlock and improves
performance.

--Amanda

-- 
Portability is generally the result of advance planning rather than trench
warfare involving #ifdef -- Henry Spencer (1992)




[chromium-dev] Re: Plugin control flow and race conditions

2009-08-18 Thread John Abd-El-Malek
On Mon, Aug 17, 2009 at 11:43 AM, bsmedberg <bsmedb...@gmail.com> wrote:


 At Mozilla we're currently working on implementing multi-process
 plugin hosts similar to the model used by Chromium. However, we're
 having trouble working through the many potential race conditions
 introduced by having multiple processes (and therefore multiple flows
 of control). I've read
 http://dev.chromium.org/developers/design-documents/plugin-architecture
 but that doesn't seem to address most of the issues we're facing.


For a detailed view of how we solve these issues, the best place to look is
the code, specifically plugin_channel.cc and ipc_sync_channel.cc.

To avoid deadlock, when one process is sending a synchronous message, it
responds to other incoming synchronous messages in the meantime.  However,
to avoid unnecessary reentrancy, we disable this for synchronous messages
from the plugin process unless that process is itself responding to a
synchronous message.  We did this because otherwise a plugin process could
try to force layout while layout is already happening, which is not
expected in WebKit.  You can find more about this in PluginChannel's
constructor: we call SendUnblockingOnlyDuringDispatch, which makes it so
that synchronous messages from the plugin unblock the renderer only when
the plugin sends them while it is itself dispatching a synchronous message.
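
In very rough terms (invented names; the real logic in ipc_sync_channel.cc
is more involved), the rule looks like:

    class SyncChannelSketch {
     public:
      explicit SyncChannelSketch(bool unblock_only_during_dispatch)
          : unblock_only_during_dispatch_(unblock_only_during_dispatch),
            dispatch_depth_(0) {}

      // Send a synchronous message and block until its reply arrives.
      void SendSyncAndWait() {
        while (!ReplyArrived()) {
          // Normally we dispatch the peer's incoming synchronous messages
          // while blocked, so two simultaneous sync sends can't deadlock.
          // With the restriction on, we only do that when we're already
          // inside such a dispatch; otherwise the peer's request waits.
          if (!unblock_only_during_dispatch_ || dispatch_depth_ > 0)
            DispatchOneIncomingSyncMessage();
        }
      }

     private:
      bool ReplyArrived() { return true; }  // stub: really checks the reply queue

      void DispatchOneIncomingSyncMessage() {
        ++dispatch_depth_;
        // ... run the handler for the peer's request, send its reply ...
        --dispatch_depth_;
      }

      bool unblock_only_during_dispatch_;
      int dispatch_depth_;
    };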


 The most obvious problem is that both processes may send a synchronous
 IPC message at the same time. Assuming that these don't deadlock, the
 native stack for the two calls would end up interleaved. What happens
 when the renderer process and the plugin process send conflicting
 messages at roughly the same time? For example: the browser finishes a
 network request with NPP_DestroyStream and the plugin (responding to a
 UI event, perhaps) calls NPN_DestroyStream simultaneously? I can't
 imagine that a plugin would expect to receive an NPP_DestroyStream
 message after it has already called NPN_DestroyStream, and this is
 likely to cause erratic plugin behavior.


This specific case is not really a problem.  If you look at our
implementation of PluginInstance::NPP_DestroyStream(), we set
NPStream.ndata to NULL there; a second call early-exits if it is already
NULL.
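
In sketch form (simplified; not the verbatim Chromium code), the guard is
just:

    #include <cstddef>

    // Minimal NPAPI-ish declarations for the sketch:
    typedef short NPError;
    const NPError NPERR_NO_ERROR = 0;
    const NPError NPERR_INVALID_PARAM = 9;
    struct NPStream { void* ndata; /* other NPAPI fields elided */ };

    NPError DestroyStreamOnce(NPStream* stream) {
      // Whichever of the racing NPP_DestroyStream / NPN_DestroyStream
      // paths gets here second sees ndata == NULL and exits without
      // touching the already-destroyed stream.
      if (stream == NULL || stream->ndata == NULL)
        return NPERR_INVALID_PARAM;

      // ... forward the destroy to the plugin / network layer ...
      stream->ndata = NULL;  // mark the stream dead for later callers
      return NPERR_NO_ERROR;
    }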


 Are the IPC delegates/stubs responsible for checking the state of each
 call and avoiding improper nesting? Do you have any procedure/system
 for detecting racy improper nesting? For example, the racing pairs
 NPP_Write/NPN_DestroyStream and NPN_RequestRead/NPP_DestroyStream are
 equally unexpected. And all these examples come only from the NPStream
 interface; I haven't even begun to examine potential message races in
 the core NPP APIs or the NPObject APIs.


We did run into a bunch of these issues early on, but they were all easy to
work around (e.g. always checking first whether the stream has already been
destroyed, since the call can arrive after destruction in the situations
you describe).

The hardest issues we've had to solve are related to performance.  The
initial release had poor performance when scrolling pages with lots of
windowed/windowless plugins.  To solve this, we moved to an asynchronous
painting/scrolling model.  While this adds extra complexity and memory
usage, the user experience when scrolling is much better.  The techniques
we used should be applicable to your implementation as well; we can talk
more about this when you're ready (plugin lunch? :) ).
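
To give a flavor of the model (invented names; the real code is
considerably more involved), the renderer-side proxy looks roughly like:

    struct SharedBitmap {};  // stand-in for a shared-memory backing store

    class PluginProxySketch {  // renderer-side proxy for one plugin
     public:
      // Post a paint request and return immediately; scrolling never
      // blocks waiting on the plugin process.
      void AsyncPaint(SharedBitmap* back_buffer) {
        paint_pending_ = true;
        back_buffer_ = back_buffer;
        // ... send an asynchronous "paint into back_buffer" IPC ...
      }

      // Runs when the plugin's "paint done" message comes back.
      void OnPaintAck() {
        paint_pending_ = false;
        front_buffer_ = back_buffer_;
      }

      // While a paint is outstanding, keep compositing the last completed
      // buffer: the extra buffer costs memory but keeps scrolling smooth.
      SharedBitmap* BufferToComposite() const { return front_buffer_; }

     private:
      bool paint_pending_ = false;
      SharedBitmap* back_buffer_ = nullptr;
      SharedBitmap* front_buffer_ = nullptr;
    };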


 --BDS

 

