On 5/22/07, Amit Klein <[EMAIL PROTECTED]> wrote:
> Fair enough. Still, I expect at least the websecurity mailing list to give credit where credit is due...
Hmm, good point. No argument, but as we see more of this character-set and encoding awareness, I wonder:

1. Where do you draw the line on what is "new"?

2. The CERT advisory is telling us that a lot of folks consider this "new". (Maybe you should link them back to Scambray and McClure's late-90s prediction that Unicode would be the death of IDS?)

3. There are two or more completely separate dialogues here, between network sec and app sec. A lot of folks deserve credit for related research on the network side, and probably even the VX side of the house. If you look at VX'er history, they dealt with many of the same issues independently of network and appsec, yet we don't credit any of them (probably because they largely wrote in Russian and Polish).

4. The reality is that we are going to see stuff like CERT advisories for things that are (or should be) pretty damn obvious, and redundant, as people start to understand charsets and encoding types more.

Let's say I found a web-based triple-decode shellcode canonicalization recently: is that a "new vuln"?

Canonicalization order: Unix shellcode --> hex URL --> HTML hexadecimal reference --> raw text

Should I publish a CERT advisory on this? I'm pretty sure their IDS isn't gonna catch it. In fact, I'm pretty sure no one's is. Who do I credit?

Not trying to escape responsibility by any means; I'm just having trouble getting my head around the depth of this hole.

Thanks,

--
Arian Evans
software security stuff

"Diplomacy is the art of saying 'Nice doggie' until you can find a rock." -- Will Rogers
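For what it's worth, the decode chain described above can be sketched in a few lines of Python. This is a hypothetical illustration, not the actual payload under discussion: the function names and the sample payload are made up, and the standard library's `html.unescape` and `urllib.parse.unquote` stand in for whatever decoding a real filter or server stack performs.

```python
import html
import urllib.parse

def triple_decode(s):
    # Layer 1: HTML hexadecimal character references (&#x25; -> '%')
    s = html.unescape(s)
    # Layer 2: percent (hex URL) decoding (%2F -> '/')
    s = urllib.parse.unquote(s)
    # Layer 3: the canonical (raw) text the server ultimately sees
    return s

# Hypothetical payload standing in for shellcode
payload = "/bin/sh"

# Encode in the reverse of the canonicalization order:
# raw text -> hex URL -> HTML hexadecimal references
url_encoded = urllib.parse.quote(payload, safe="")
wrapped = "".join("&#x{:x};".format(ord(c)) for c in url_encoded)

# An IDS matching on "/bin/sh" (or even on "%2F") sees neither
# in the wire form, yet the server canonicalizes it right back.
print(wrapped)
print(triple_decode(wrapped))
```

The point being: each layer is trivially decodable on its own, but a signature engine has to know to apply all of them, in the right order, before matching.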
_______________________________________________
Full-Disclosure - We believe in it.
Charter: http://lists.grok.org.uk/full-disclosure-charter.html
Hosted and sponsored by Secunia - http://secunia.com/