"Ren� G. Eberhard" wrote:
> 
> ----- Original Message -----
> From: Gabriel Belingueres <[EMAIL PROTECTED]>
> To: <[EMAIL PROTECTED]>
> Sent: Monday, September 27, 1999 3:04 PM
> Subject: Re: Web Traffic Analysis
> 
> > > 2. Doing internet banking?
> > >    Can you gain relevant information using statistical analysis?
> > >    You may find out that someone did a wire transfer. But
> > >    (if the system is designed well) you don't know who.
> >
> > If the attacker has an account at the same bank, so that he/she can
> > retrieve at least once the HTML files protected with HTTPS, then he/she
> > could know what kind of transactions or queries the client is doing.
> > Who? Very probably the bank is using strong crypto (RC4 with a 128-bit
> > key, for example), and maybe the bank gives him/her a free
> > certificate for being a customer (I don't know if any bank in the world
> > has this service, but it would be nice anyway :).
> > As the SSL handshake Certificate message travels in the clear, the
> > attacker has a good chance of knowing who did the transactions. Even
> > worse, because the SSL handshake precedes the web transactions, the
> > attacker could then decide whether or not to mount an active attack
> > against that customer.
> 
> Bad implementation. That's not the problem.

You're right. It is a problem of SSL. We can't do anything about it.
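
To make the risk concrete: in SSLv3/TLS 1.0 the whole handshake runs in
the clear, so a passive eavesdropper who has captured the raw TCP stream
can pick the Certificate message out of it directly. A minimal sketch in
Python (the parsing is deliberately simplified: it assumes each handshake
message fits in one record and starts at a record boundary, and
"captured" stands for bytes sniffed off the wire):

import struct

def find_certificates(captured):
    """Scan a captured TLS byte stream for cleartext Certificate messages."""
    offset = 0
    while offset + 5 <= len(captured):
        rec_type, _major, _minor, rec_len = struct.unpack(
            "!BBBH", captured[offset:offset + 5])
        body = captured[offset + 5:offset + 5 + rec_len]
        if rec_type == 22 and body and body[0] == 11:
            # Handshake record (type 22) carrying a Certificate message
            # (handshake type 11): the certificate_list sits here,
            # unencrypted, for anyone on the wire to read.
            yield body[4:]
        offset += 5 + rec_len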

> > > Would it be possible to add a compression layer with different
> > > compression levels?
> >
> > Do you mean in SSL and TLS implementations, or in the protocol we are
> > discussing now?
> > Anyway, I don't know much about compression, but I think it must be a
> > "one-to-one" (lossless) method.
> 
> TLS has a compression layer, but no compression algorithms are defined so
> far. This specific "anti-traffic-analysis compression algorithm" could
> compress or pad messages. I think that's the best idea. Implementing an
> additional protocol (as you suggest) is too complicated and no one will
> implement it.

But the only thing that covers this is random padding (when compressed,
the length is actually smaller than that of the original TLSPlaintext).
Also, it is kind of dangerous: many IP layers are configured to transmit
packets of up to 1460 bytes. What happens if a TLSPlaintext record
compresses to less than 1400 bytes (the background of an image, for
example), and the next TLSCiphertext is bigger than 1460 bytes (the
foreground of the image)? It is an anomalous situation, because the
attacker expects the only packet with length < 1460 bytes to be the last
one, not an intermediate one! The attacker can discover information from
that.
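
To make the padding idea concrete, here is a minimal sketch in Python
(the 1460-byte bucket and the two-byte length prefix are illustrative
choices of mine, not anything defined by TLS): every record payload is
padded with random bytes up to a fixed bucket size before encryption, so
an eavesdropper only ever sees quantized lengths.

import os
import struct

PAD_BUCKET = 1460  # hypothetical bucket, matching a common MTU payload size

def pad_record(payload):
    """Prefix the real length, then pad with random bytes to a bucket edge."""
    framed = struct.pack("!H", len(payload)) + payload
    pad_len = (-len(framed)) % PAD_BUCKET
    return framed + os.urandom(pad_len)

def unpad_record(padded):
    """Recover the original payload using the length prefix."""
    (real_len,) = struct.unpack("!H", padded[:2])
    return padded[2:2 + real_len]

With something like this, an intermediate short packet no longer stands
out, at the obvious cost of wasted bandwidth.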

> The idea with the escape code is good but not practical. A few weeks ago
> I suggested adding additional data in the client hello to solve the
> "draft-ietf-tls-http-upgrade-02.txt" problem, without success.

I believe you had no success because modifying the SSL protocol for
this Internet draft questions the very nature of the intended protocol:
upgrading to TLS <WITHIN> HTTP/1.1.
I did not pay attention to this draft because it is not common practice.
If it is true that the "dual port" mechanism will be deprecated, I could
think of a way to use this new mechanism to provide this protocol, such
as adding an additional entry to the Upgrade: token. It could be
something like this:

GET http://www.xxx.com/index.html HTTP/1.1
Host: www.xxx.com
Upgrade: SOMETHING/1.0, TLS/1.0
Connection: Upgrade

> Your issue can be solved with the compression layer. The upgrading
> problem really needs the additional data.

As I say above, the only problem it solves is a "resource length"
problem. That is, it only changes the lengths of the resources retrieved
from the server. However, the problem of the <number of resources>
retrieved, that is, the number of TLS connections, still remains.
Of course, compression algorithms are always good for producing short
messages. But... I don't know if the compression algorithm is going to
compress anything after all, because many of the resources retrieved in
HTML pages are already compressed (GIF and JPG images, for example). An
empirical analysis of the compression ratios of web sites would be of
great help in deciding whether providing compression algorithms to the
Web will be useful. Do you know somebody who has done this?
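
If nobody has, the measurement itself is cheap. A rough sketch in Python,
using zlib as one plausible candidate algorithm (the file names are
placeholders; point it at HTML and image files saved from a real site):

import zlib

def compression_ratio(data):
    """Compressed size over original size; near 1.0 means no gain."""
    return len(zlib.compress(data, 9)) / float(len(data))

# Placeholder file names: substitute resources downloaded from a real site.
for name in ["index.html", "logo.gif", "photo.jpg"]:
    data = open(name, "rb").read()
    print("%s: ratio %.2f" % (name, compression_ratio(data)))

My guess is that the HTML compresses well and the images hardly at all,
but that is exactly what the numbers should confirm.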

Besides there being plenty of work to do yet, do you think that the IETF
will accept this as an Internet draft? I think so, because the
draft-ietf-tls-http-upgrade-02.txt protocol is not implemented yet
either, and the IETF accepted that. But I read that the IETF prefers
"proven" solutions. I don't have an implementation of this (yet). What do
you think?

Regards,
-- 
Gabriel Belingueres
______________________________________________________________________
OpenSSL Project                                 http://www.openssl.org
Development Mailing List                       [EMAIL PROTECTED]
Automated List Manager                           [EMAIL PROTECTED]
