> On Oct 2, 2017, at 9:40 PM, Chris Lattner <clatt...@nondot.org> wrote:
> 
>> On Oct 2, 2017, at 1:13 AM, Félix Cloutier via swift-evolution 
>> <swift-evolution@swift.org> wrote:
>> 
>> If you tried hard enough, you could probably create a variable that looks 
>> like it's shadowing one from an outer scope while it actually isn't, and use 
>> the two to confuse readers. This could trick people into thinking that some 
>> dangerous/backdoor code is actually good and safe, especially in the 
>> open-source world where you can't always trust your contributors.
>> 
>> On one hand, other than the complexity of telling if two characters are 
>> lookalikes, I don't know why Αrray (GREEK CAPITAL LETTER ALPHA) and Array 
>> (LATIN CAPITAL LETTER A) should be considered different identifiers. On the 
>> other hand, I struggle to imagine the specifics of an exploit that uses 
>> that. You'd have to work pretty hard to assemble all the pieces of a 
>> backdoor in visually-similar variable names without arousing suspicion.
> 
> I don’t think this is something we have to try hard to avoid.  It is true 
> that some characters look similar, particularly in some fonts, but this isn’t 
> new:
> 
>   let a1 = 42
>   let al = 12
>   let b = al + a1 

There is a fundamental difference between similar characters and characters 
that are meant to be visually identical. People judge the quality of a font by 
its Unicode support, and that means that only "low-quality" fonts would render, 
say, LATIN CAPITAL LETTER T and GREEK CAPITAL LETTER TAU differently.
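For instance, here's a minimal sketch (the identifier names are mine, purely 
illustrative): the compiler already treats the two spellings as distinct 
identifiers, even though most fonts draw them with the same glyph:

let Τool = "GREEK CAPITAL LETTER TAU" // first letter is U+03A4
let Tool = "LATIN CAPITAL LETTER T"   // first letter is U+0054
print(Τool == Tool)                   // prints "false": two distinct constants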

> If there were real code that was maliciously shadowing to try to cause 
> confusion, then you have a more serious problem on your hands than someone 
> accidentally misunderstanding which one to use.

I'm not sure I understand. If the "more serious problem" you're talking about 
is that your popular project is a valuable target to subvert, then there is no 
question that being backdoored would be more serious than people not reading 
your code right. I don't see how it pushes the problem out of scope, though.

As a security guy, I take my role of thinking about how anything can be abused 
very seriously. Backdoored open-source projects turn up every now and then.

This code is backdoored. I challenge you to spot the bug:

func shellEscape(_ args: [String]) -> [String]?
func isWhitelisted(_ tool: String) -> Bool

func execute(externalTool: String, parameters: [String]) {
    if isWhitelisted(externalTool), let pаrameters = shellEscape(parameters) {
        print("Running tool \(pаrameters[0])")"
        system(parameters.joined(separator: " "))
    }
}

> All I’m saying is that we shouldn’t complicate the design to solve this 
> problem (IMO).  If it falls out of the solution somehow (e.g. just disallow 
> invisible characters) then that’s great of course!

How did you identify the bug in the snippet above? Is that approach practical 
enough that you would, for instance, recommend that the server group run the 
same test on every PR they receive going forward?
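
To make "that test" concrete, here's the kind of naive check I have in mind (a 
made-up sketch, not an existing tool): flag every source line that contains a 
character outside ASCII. It would catch the look-alike character hiding in the 
snippet above, but it also flags every legitimate non-ASCII identifier and 
string literal, so it's a blunt heuristic rather than a real defence:

func suspiciousLines(in source: String) -> [(number: Int, text: String)] {
    var hits: [(number: Int, text: String)] = []
    let lines = source.split(separator: "\n", omittingEmptySubsequences: false)
    // Report 1-based line numbers for anything that isn't pure ASCII.
    for (index, line) in lines.enumerated()
        where line.unicodeScalars.contains(where: { !$0.isASCII }) {
        hits.append((number: index + 1, text: String(line)))
    }
    return hits
}

Running it over the execute(externalTool:parameters:) snippet above flags the 
two lines that use the look-alike identifier, and nothing else.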

I think that it's hard to build something meaningful without making it look 
suspicious. It's already kind of fishy that my shellEscape function returns an 
Optional, and people will eventually figure out that the parameters are not, in 
fact, shell-escaped. Still, I feel that it should be recognized that security 
is more than buffer overflows and integer overflows, and if there ever is an 
underhanded Swift code contest, that'll be my entry.

Félix
