On Friday 24 Jan 2003 9:19 pm, Matthew Toseland wrote:
> > This could be worked around to some extent by providing a built in method
> > to download multiple files in parallel, but only get them back after they
> > have all downloaded, or failed.
> >
> > But then again, wouldn't it be just as harmful to repeatedly request
> > different files and time them one by one, and get timing information that
> > way, statistically analyzed if necessary? How are parallel requests worse
> > in this regard?
>
> Yeah, we might have to disable the clock too.
That would, unfortunately, have lots of knock-on effects, especially if you
went with JS as the scripting language (which is probably one of the better
choices for this application). The setTimeout() function could be abused to
find out timing information, but it would be very bad to disable it, because
it is just about the only way I can think of to create a separate thread in
JS. For doing things in parallel, there is very little choice. It is also the
only way to emulate the effect of a sleep function. Both things require some
unorthodox programming techniques, but at least they are doable.

You could, however, add a random amount of time (within reason) to the
setTimeout() timeout request, thus breaking most of the potential timing
attacks. You cannot really prevent all possibilities, because even something
as simple as a loop could be used to get timing information.

The simplest solution I can think of is to not disable the clock at all.
Instead, provide a downloading method that waits for some reasonably short,
non-deterministic amount of time before responding with the data. That seems
like a much more elegant solution, especially if malicious action is always
dependent on data transfers.

Regards,

Gordan

_______________________________________________
Tech mailing list
[EMAIL PROTECTED]
http://hawk.freenetproject.org/cgi-bin/mailman/listinfo/tech
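P.S. A minimal sketch of the two ideas above: jittering the setTimeout()
request, and a download method that delivers its data only after a short
random delay. The names (jitteredSetTimeout, respondWithRandomDelay) and the
50 ms jitter bound are purely illustrative, not any existing Freenet or
browser API.

```javascript
// Assumed "within reason" upper bound on the random delay, in milliseconds.
const MAX_JITTER_MS = 50;

function jitteredSetTimeout(callback, requestedDelayMs) {
  // Add a uniform random 0..MAX_JITTER_MS to the requested timeout, so a
  // script cannot use setTimeout() as a precise stopwatch for timing attacks,
  // while sleep/parallelism emulation via setTimeout() keeps working.
  const jitter = Math.floor(Math.random() * (MAX_JITTER_MS + 1));
  return setTimeout(callback, requestedDelayMs + jitter);
}

function respondWithRandomDelay(data, callback) {
  // The alternative suggested above: leave the clock alone, but have the
  // downloading method wait a short, non-deterministic time before handing
  // the data back, masking how long the transfer actually took.
  const delay = Math.floor(Math.random() * (MAX_JITTER_MS + 1));
  setTimeout(() => callback(data), delay);
}
```

The jitter only blurs a single measurement; a determined script could still
average many requests, which is why the randomized-response download method is
the more robust of the two.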
