On Apr 6, 2012, at 2:50 PM, Justin Dolske wrote:

> On 4/2/12 5:07 PM, Lucas Adamski wrote:
>> I have updated the feature page with proposed workflows based upon
>> previous discussions and relatively little recent feedback.
>> 
>> https://wiki.mozilla.org/Opt-in_activation_for_plugins
>> 
>> To summarize, I am proposing:
> 
> 
>> • Some software installs a plugin the user is not aware of. The first
>>   time the plugin is activated by a given page:
>>   • the user is given a warning and must opt in before enabling
> 
> I'm not sure CTP is the right mechanism for this... We already have UI
> for dealing with externally-installed extensions, seems like we should
> just recycle that. (Though there may be some technical quirks around
> exactly when we can scan/detect new plugins.)
> 
> Alerting at time-of-use is a good idea for outdated plugins (as is being
> discussed elsewhere), but doing that for new installs means that you
> might not be notified for a long time (days? weeks? more?), and are then
> left wondering where it came from.

Good point.  Unless someone feels strongly otherwise I'm happy to take this out 
of scope.

>> • User has an up-to-date version of an "uncommon" plugin or one they
>>   have not encountered in the last X days:
>>   • plugin is click-to-play to reduce resource consumption and risk of
>>     zero-day security exploits
>> or
> 
> There are probably some UX issues to sort out, but I think the basic
> idea is sound... We should limit the exposure of plugins that are
> infrequently used and/or only used on a handful of sites.
> 
> [Resource usage shouldn't be an issue, though. We don't launch plugins
> until something tries to use them.]

I meant "tries to use it" as actually desiring to interact with a piece of 
plugin content, rather than simply having all plugin content run when the page 
is loaded (even in the background).  From my informal observations it seems 
like most Flash CPU usage in Firefox is due to content I don't intend/want to 
interact with, but maybe I'm misunderstanding something about the plugin 
implementation model.

> 
>> • User has a vulnerable plugin with a known security issue, but no
>>   update available:
>>   • User can only run plugin after very scary warning
>> • User has a vulnerable plugin with a known security issue, and an
>>   update is available:
>>   • User can run plugin after very scary warning to update first
> 
> We should carefully think through the UX here, since users are likely to
> just ignore warnings. Especially for plugins constantly going through
> the exploit-update cycle. :( Upgrading plugins automagically is probably
> the best solution, but also the most complex to implement. (But then,
> you still have to deal with orphaned/unmaintained plugins anyway…)

My concern is that without actively driving updates we won't provide a 
significant security benefit.  Without updates the user will sooner or later be 
convinced to click on the wrong thing.  I'm not attached to a particular UX 
proposal here though.  

I'm also not sure we'd want to be super-aggressive in issuing this block.  For 
example, if no known security exploits are in circulation then maybe we 
shouldn't issue this block until a week or two after the update has gone out.  
The idea is that the average user wouldn't see this most of the time and just 
be updated silently.  

Another technique might be to not warn (at all or as aggressively) on popular / 
trusted sites, but that gets us on the slippery slope of being a trust broker 
and managing static lists.  Bleah.

>> • User is tired of always clicking to play a given plugin (e.g.
>>   YouTube, or their favorite Java game site):
>>   • A user has clicked on this four times in 30 days, so automatically
>>     enable this plugin on this site up to 30 days after last played or
>>     until user revokes this permission (about:permissions?)
> 
> As Jared noted, UX is rethinking this. It can seem a bit confusing /
> inconsistent if you don't know what it's actually doing.
> 
> It _might_ be worthwhile to still track, though, if we wanted to be able
> to later block a plugin only on sites where you haven't frequently used it.

I'm looking forward to UX feedback and will happily go with whatever the 
Firefox/UX team recommends.
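For reference, the click-count heuristic quoted above might look something 
like this (a simplified sketch; the names and the treatment of "last played" 
are my own assumptions, and the real behavior is pending UX feedback):

```python
from datetime import datetime, timedelta

WINDOW = timedelta(days=30)   # look-back window for counting clicks
THRESHOLD = 4                 # clicks needed to earn auto-enable
EXPIRY = timedelta(days=30)   # permission lapses this long after last play

def auto_enable(play_times, now):
    """Return True if the plugin should play automatically on this site.

    play_times: datetimes of past plugin activations on this site
    (simplification: click-to-play and automatic plays are not
    distinguished here).
    """
    if not play_times:
        return False
    recent = [t for t in play_times if now - t <= WINDOW]
    earned = len(recent) >= THRESHOLD
    still_fresh = now - max(play_times) <= EXPIRY
    return earned and still_fresh
```

A user revoking the permission (via about:permissions or similar) would 
override this entirely; that bookkeeping is omitted from the sketch.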

>> As part of this model, I would like to redefine what our current
>> blocklisting mechanisms mean to be:
>> 
>> Hard block: means plugin is vulnerable, but no update is available
>> Soft block: means plugin is vulnerable and an update is available
> 
> I'd suggest just abandoning those terms. Let's decide what we want to build, 
> and let the naming derive from that.

I agree the terms wouldn't mean much, but I'm thinking it would be nice to 
repurpose the current blocklist service without having to mess with it.  Maybe 
that's not worth it / not that big a deal, though.
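The proposed redefinition is simple enough to state as a tiny mapping from 
plugin state to block level (a sketch only; "hard"/"soft" here are just the 
labels being debated, not anything the blocklist service actually emits):

```python
def block_level(vulnerable, update_available):
    """Map plugin state to the proposed blocklist semantics:

    hard block = plugin is vulnerable, no update available
    soft block = plugin is vulnerable, an update is available
    """
    if not vulnerable:
        return None
    return "soft" if update_available else "hard"
```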

>> Yes, that means we no longer have a non-defeatable block.  But I'm
>> not sure we've ever actually used that.  It's also ineffective: if
>> said plugin is actually malicious then the system has already been
>> thoroughly compromised and blocking it does no good.
> 
> Hmm, seems like we should still retain the option. Especially if it's easy to 
> do as part of the other work we're already doing.
> 
> I could see it being useful for cases where an old version of a plugin is 
> being actively exploited in malware kits.
> 

I'm not sure we'd use it even in that case, but if we're willing to support 
that code path there's little risk in retaining it.

Once we have some UX feedback on the open items (persisting trust decisions, 
and maybe how to handle the "vulnerable plugin; go update" scenario) I'll 
update the feature page to reflect those decisions.  Thanks!
  Lucas.
_______________________________________________
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security