I think that Jack said most of what I would. The incentives all point in the 
wrong direction. 

I suspect that Jon is one of a few people who have been (a) in the hiring 
position, and (b) truly cared about the security of the product, rather than 
marketing it as 'secure'. I also think Jon is relatively rare in that respect. 
I've personally turned down jobs where it's obvious that the client is not like 
Jon.

I don't see anyone (competent) pulling their punches and passing a seriously 
flawed product. But there are other kinds of compromise:

1. A private evaluation report (budgeted at, say, 200 hours) probabilistically 
identifies N serious vulnerabilities. We all know that another 200 hours could 
turn up N more. In fact, the code may be riddled with errors. The original N 
vulnerabilities are patched. What should the public report say? Technically the 
vulnerabilities are all 'fixed'.

2. Client wants to set unusual parameters for the evaluation: e.g., you won't 
get the device, we'll do your data collection for you. Of course you'll note 
this in your report. But what you say is /not/ what people will hear. Are you 
comfortable with this?

3. Client has an odd threat model. It's not something you agree with, but you 
think: could I just explain that this is the model the client proposed, point 
out its flaws, and then move forward with an analysis?

There are probably other examples. I like to think that I've come down on the 
right side of these, but I recognize that these are all pressure points, and 
money /does/ influence where you stand. (I've also seen it from the other side. 
Ugh.)

Matt

On Jun 18, 2012, at 1:20 PM, Jon Callas wrote:

> On Jun 18, 2012, at 5:26 AM, Matthew Green wrote:
> 
> > The fact that something occurs routinely doesn't actually make it a good 
> > idea. I've seen stuff in FIPS 140 evaluations that makes my skin crawl. 
> > 
> > This is CRI, so I'm fairly confident nobody is cutting corners. But that 
> > doesn't mean the practice is a good one. 
> 
> I don't understand.
> 
> A company makes a cryptographic widget that is inherently hard to test or 
> validate. They hire a respected outside firm to do a review. What's wrong 
> with that? I recommend that everyone do that. Un-reviewed crypto is a bane.
> 
> Is it the fact that they released their results that bothers you? Or perhaps 
> that there may have been problems that CRI found that got fixed?
> 
> These also all sound like good things to me.
> 
>       Jon

_______________________________________________
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography