>> Or until you find a bug in your automated prover. Or, worse,
>> discover that a vulnerability exists despite your proof, meaning
>> that you either missed a loophole in your spec or your prover has a
>> bug, and you don't have the slightest idea which.
> On that basis, can I presume that you be
>>> I would question you if you suggested to me that you always assume
>>> to _NOT_ include 'security' and only _DO_ include security if
>>> someone asks.
>> "Security" is not a single thing that is included or omitted.
> Again, in my experience that is not true. Programs that are labelled
> 'Secu
der Mouse wrote:
>>
Then either (a) there exist programs which never access out-of-bounds but which
the checker incorrectly flags as doing so, or (b) there exist programs for which
the checker never terminates (quite possibly both). (This is simply the Halting
Theorem rephrased.)
<<
It is of cour
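The dilemma in the quoted passage can be made concrete with a toy static checker. The sketch below (Python, with illustrative names; no real tool is implied) analyzes array indices as integer intervals. Because a sound checker must over-approximate, it flags a program that in fact never accesses out of bounds, exhibiting branch (a) of der Mouse's argument.

```python
# A toy interval-based bounds checker, illustrating branch (a): a
# sound checker over-approximates, so it can flag programs that never
# actually access out of bounds.  All names are illustrative.

from dataclasses import dataclass

@dataclass
class Interval:
    lo: int
    hi: int

def check_index(idx: Interval, length: int) -> str:
    """Conservative verdict on the access a[idx] for len(a) == length."""
    if idx.lo >= 0 and idx.hi < length:
        return "safe"
    if idx.hi < 0 or idx.lo >= length:
        return "definitely out of bounds"
    return "flagged: possibly out of bounds"

# Program under analysis: a[(i * i) % 10] with i in [-100, 100] and
# len(a) == 10.  Concretely every index lies in 0..9, so the program
# never accesses out of bounds.  But a checker whose transfer
# functions model '*' by interval multiplication and do not model '%'
# sees the index as [-10000, 10000] and must flag the access.
i = Interval(-100, 100)
products = [i.lo * i.lo, i.lo * i.hi, i.hi * i.hi]
idx_abstract = Interval(min(products), max(products))  # [-10000, 10000]
print(check_index(idx_abstract, length=10))  # flagged: possibly out of bounds
```

Making the abstraction precise enough to eliminate every such false positive pushes the checker toward simulating the program itself, which is where branch (b) -- non-termination -- comes in.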
On 4/12/05, der Mouse <[EMAIL PROTECTED]> wrote:
> >> The programmer is neither the application architect nor the system
> >> engineer.
> > In some cases he is. Either way, it doesn't matter. I'm not asking
> > the programmer to re-design the application, I'm asking them to just
> > program the design 'correctly' rather than 'with bugs'.
Pascal Meunier <[EMAIL PROTECTED]> writes:
>Do you think it is possible to enumerate all the ways all vulnerabilities
>can be created? Is the set of all possible exploitable programming mistakes
>bounded?
I believe that one can make a Turing machine halting argument to show
that this is impossible.
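The halting-style argument can be sketched concretely. Suppose a total procedure `is_vulnerable(src)` (hypothetical; no such analyzer exists) decided, for every program, whether any input triggers an exploitable bug. Then wrapping an arbitrary program P so that a bug is reachable exactly when P halts would decide the halting problem. The Python sketch below builds such a wrapper and shows that, for a halting P, the planted fault really does fire.

```python
import textwrap

def make_wrapper(p_source: str) -> str:
    """Wrap program text P so that the wrapper performs an
    out-of-bounds list write if and only if P halts."""
    body = textwrap.indent(textwrap.dedent(p_source), "    ")
    return ("def p():\n" + body + "\n"
            "p()\n"
            "buf = [0]\n"
            "buf[1] = 1   # the 'vulnerability': reachable only if p() returns\n")

# If is_vulnerable() existed, then
#     halts(P)  ==  is_vulnerable(make_wrapper(P))
# would decide halting -- a contradiction.  Concretely, the wrapper
# of a halting program does reach the faulty write:
wrapper = make_wrapper("x = 1 + 1")
try:
    exec(wrapper, {})
    reached_bug = False
except IndexError:
    reached_bug = True
print(reached_bug)  # True: P halted, so the out-of-bounds write executed
```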
> [B]uffer overflows can always be avoided, because if there is ANY
> input whatsoever that can produce a buffer overflow, the proofs will
> fail and the problem will be identified.
Then either (a) there exist programs which never access out-of-bounds
but which the checker incorrectly flags as doing so, or (b) there exist
programs for which the checker never terminates (quite possibly both).
(This is simply the Halting Theorem rephrased.)
Nash wrote:
** It would be extremely interesting to know how many exploits could
be expected after a reasonable period of execution time. It seems that
as execution time went up we'd be less likely to have an exploit just
"show up". My intuition could be completely wrong, though.
I would think
> -----Original Message-----
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED]]
> Behalf Of Nash
> Sent: 11 April 2005 21:38
> To: Pascal Meunier
> Cc: SC-L@securecoding.org
> Subject: Re: [SC-L] Theoretical question about vulnerabilities
>
>
> Pascal Meunier wrote:
[snip]
>
> > All we can ho
David Crocker wrote:
3. Cross-site scripting. This is a particular form of "HTML injection" and would
be caught by the proof process in a similar way to SQL injection, provided that
the specification included a notion of the generated HTML being well-formed. If
that was missing from the specificati
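Crocker's point about specifying well-formedness can be illustrated directly: if the specification requires that user data appear only as text content of well-formed HTML, then emitting raw input violates the spec, and escaping satisfies it. A minimal sketch using Python's standard library (`render_comment` is a hypothetical name, not from the thread):

```python
import html

def render_comment(user_input: str) -> str:
    # Escaping <, >, &, and quotes confines the input to a text node,
    # so it cannot alter the markup structure of the generated page.
    return "<p>" + html.escape(user_input, quote=True) + "</p>"

payload = "<script>alert('xss')</script>"
print(render_comment(payload))
# <p>&lt;script&gt;alert(&#x27;xss&#x27;)&lt;/script&gt;</p>
```

A proof process with well-formedness in the specification would reject the unescaped version for exactly the same structural reason it rejects string-concatenated SQL.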
At 4:21 PM -0400 4/11/05, Dave Paris wrote:
>Joel Kamentz wrote:
>> Re: bridges and stuff.
>>
>> I'm tempted to argue (though not with certainty) that the bridge
>> analogy is flawed in another way -- that of the environment. While
>> many programming languages have similarities
>> The programmer is neither the application architect nor the system
>> engineer.
> In some cases he is. Either way, it doesn't matter. I'm not asking
> the programmer to re-design the application, I'm asking them to just
> program the design 'correctly' rather than 'with bugs'
Except that some