We have data on pre-signing add-ons that we consider malware, but we have no structural way of knowing (beyond incidental reports on bugzilla where the malware was uploaded) the contents of the XPIs in question, or whether they would have passed the validator. They never went through the validator, because they were distributed outside of AMO (front- or sideloaded); either way we would not have had the source code.

So really, nobody has data on what will happen in a post-signing world. There's an interesting question about how much the pre-signing system can predict what will happen here, but it's sadly not as clear-cut as you hope.

~ Gijs

On 28/11/2015 19:30, Kartikaya Gupta wrote:
So it seems to me that people are actually in general agreement about
what the validator can and cannot do, but have different evaluations
of the cost-benefit tradeoff.

On the one hand we have the camp (let's say camp A) that believes the
validator provides negligible actual benefit, because it is trivial to
bypass, but at the same time imposes a huge cost on add-on
developers. And on the other hand we have the camp ("camp B") that
believes the validator provides some non-negligible benefit, even
though it may significantly increase the cost to add-on developers.

From what I have been told by multiple people, Mozilla does have
actual data on the type and number of malicious add-ons in the wild,
and it cannot be published. I don't really like this since it goes
against openness and whatnot, but I can accept that there are
legitimate reasons for not publishing this data. So the question is -
do the people in camp A or the people in camp B have access to this
data? I would argue that whoever has access to the data is in a better
position to make the right call with respect to the cost-benefit
tradeoff, and everybody else should defer to them. If people in both
camps have access to the data, then clearly they have different
interpretations of the data and they should discuss it further.
Presumably they know who they are.

kats


On Sat, Nov 28, 2015 at 10:35 AM, Eric Rescorla <e...@rtfm.com> wrote:
On Sat, Nov 28, 2015 at 2:06 AM, Gijs Kruitbosch <gijskruitbo...@gmail.com>
wrote:

On 27/11/2015 23:46, dstill...@zotero.org wrote:

The issue here is that this new system -- specifically, an automated
scanner sending extensions to manual review -- has been defended by
Jorge saying, from March when I first brought this up until
yesterday on the hardening bug [1], that he believes the scanner can
"block the majority of malware".


Funny how you omit part of the quote you've listed elsewhere, namely:
"block the majority of malware, but it will never be perfect".

You assert the majority of malware will be 'smarter' than the validator
expects (possibly after initial rejection) and bypass it. Jorge asserts,
from years of experience, that malware authors are lazy and the validator
has already been helpful, in conjunction with manual review.


Did Jorge in fact assert that as a matter of fact or as a matter of
opinion?
Maybe I missed it.

This seems like an empirical question: how many pieces of obvious malware
(in the sense that once the functionality is found it's clearly malicious
code as opposed to a mistake, not in the sense that it's easy to find the
functionality) have been found by the review process? How many pieces of
obvious malware (in the sense above) have passed the review process or
otherwise been found in the wild?

-Ekr
_______________________________________________
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform
