> From: owner-openssl-us...@openssl.org On Behalf Of Ger Hobbelt
> Sent: Monday, 18 May, 2009 13:04

> Quite a bit has been covered in the answers so far, but 
> there's still some material left.

Apparently. Much that I agree with, or is redundant, snipped.

> Considering the 'guaranteed delivery' words in your text, 
> IIRC (it's been a while) there is only one (generic) 
> operating system out there which comes with a **guaranteed** 
> [message] delivery protocol included, and that would be VMS. 
> If my brain hasn't fissured beyond recovery, this is done 
> through a particular $QIO mode. I'll get to that in a second.
> 
See below.

> Today, usually you have IP network stacks, but keep in mind 
> that PPP and others are not IP, yet the dichotomy you see in 
> IP exists with all other network stacks as well:
> 
PPP is designed and normally used as transport for IP.
(Architecturally it could support another stack, but 
I've never heard of anyone doing so.) What PPP should 
replace is the link level -- Ethernet, DSL, etc. (But 
then there are convenience hacks like PPP-over-Ethernet.)

> a) there's UDP, which is 'best effort' _message_ based 
> transport. That means individual messages are important as 
> individuals to the protocol-using layer above it as UDP will 
> deliver messages, individually. They may be fragmented along 
> the way, they even might be duplicated and the order in which 
> they arrive is never guaranteed. <snip>

Almost. UDP simply uses IP transport, and _as delivered_ 
IP can lose, duplicate, or reorder datagrams. IP uses 
fragmentation 'along the way' but reassembles before delivery, 
so a UDP/IP application will never _see_ fragments. (It may 
see higher loss rates because loss of any fragment forces IP
to discard the whole datagram, after waiting a little while.)

> As such, UDP only uses a very simple header:
> to/from/data-length/data-checksum, nothing more.
> 
to and from being ports within each host; the host addresses 
themselves are already handled in the IP header/level.
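
To make the datagram semantics concrete, here's a rough Python sketch 
(localhost and port 50007 are arbitrary choices of mine, purely for 
illustration): each recvfrom() hands the application one whole datagram, 
never a fragment, and nothing retransmits a lost one.

    import socket

    recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    recv.bind(("127.0.0.1", 50007))        # the 'to' port lives in the UDP header

    send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    send.sendto(b"first datagram", ("127.0.0.1", 50007))
    send.sendto(b"second datagram", ("127.0.0.1", 50007))

    # Each recvfrom() returns exactly one datagram (IP has already reassembled
    # any fragments before delivery). On a real, lossy network either call
    # could block forever, because UDP itself never retransmits anything.
    print(recv.recvfrom(65535))
    print(recv.recvfrom(65535))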

> b) next, there's TCP, which is /not/ a message-based protocol 
> a la UDP but a stream protocol. That means that the TCP 
> network layer takes care of ensuring 'in-order arrival' 
> (aided by the larger header, which includes a sequence 
> field). [... with] retransmission and window[ing ...]
> TCP also offers 'best effort' delivery, which some would call 
> guaranteed delivery (which it isn't), in that it does 
> /guarantee/ you'll get your data in order, non-duplicated, at 
> the receiver's end.
> 
More precisely, it guarantees(*) the data the receiver gets 
is in-order without duplication or internal loss; the only 
failure that can happen is that any data after a given point 
is lost, with an error indication saying so. (However, 
as discussed, the _sender_ can't reliably determine that;
for the sender to know for sure, the receiver must return 
an application-level acknowledgement; in many cases, an 
application-level response implicitly acknowledges the 
request as well as providing whatever the response is.)

(* The guarantee against data modification is only a checksum 
at the TCP level, plus whatever the underlying transport links 
have. This is usually adequate against random hardware errors 
-- not absolutely always -- but is basically useless against 
a deliberate attacker like Mallory.)
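
Here's what I mean by an application-level acknowledgement, as a minimal 
Python sketch; the framing (4-byte length prefix, a single 'A' byte coming 
back) is invented for illustration, not any standard:

    import socket, struct

    def _recv_exact(sock: socket.socket, n: int) -> bytes:
        buf = b""
        while len(buf) < n:
            chunk = sock.recv(n - len(buf))
            if not chunk:
                raise ConnectionError("peer closed before sending everything")
            buf += chunk
        return buf

    def send_with_ack(sock: socket.socket, payload: bytes) -> None:
        # invented framing: 4-byte big-endian length prefix, then the payload
        sock.sendall(struct.pack("!I", len(payload)) + payload)
        if _recv_exact(sock, 1) != b"A":      # the application-level acknowledgement
            raise RuntimeError("receiver did not confirm delivery")

    def recv_and_ack(sock: socket.socket) -> bytes:
        (length,) = struct.unpack("!I", _recv_exact(sock, 4))
        payload = _recv_exact(sock, length)
        sock.sendall(b"A")                    # confirm only once the data is in hand
        return payload

Only that 'A' coming back tells the sender anything; TCP's own ACKs prove 
nothing to the application.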

> [...] telnet can, 
> philosophically speaking, be considered as messaging over a 
> TCP stream, but I'll leave it at that because SSL certainly 
> was /not/ designed for that type of usage and very special 
> things must be done at the SSL layer level to make it 
> 'usable' in that way. But I digress.)
> 
Huh? Of all the higher protocols commonly run over TCP, 
telnet is probably the MOST stream-oriented. The NVT is 
basically a character-stream full-duplex terminal like a 
good old model 33 Teletype. It has some 'clumps' of data 
in it, like the subnegotiation sequences of some options, 
but mostly it's streams. Except TN3270 mode, which is just 
a godawful hack better forgotten.

In its canonical (not sole) usage by a human at some device, 
telnet traffic is usually _bursty_, if that's what you meant.
But so are other interactive applications like some HTTP,
some SMTP, some FTP, etc. Also some things on UDP.

> Note also that TCP does however /not/ guarantee delivery of 
> everything you sent (which is 'guaranteed delivery' to the 
> letter) because any [persisting] network failure will result 
> in TCP connection loss and all you got at the receiver's is 
> all you'll ever get as far as TCP is
> concerned: party's over, so to speak. If you /want/ the 
> missing /remainder/ of that data stream, you'll need to 
> implement your own mechanism on top of TCP to accomplish 
> anything that would get close to 'guaranteed delivery'. E.g. 
> by trying to establish a new connection with the sender and, 
> through your application-level protocol, request the 
> remainder of the data from the sender once the connection is 
> established.
> 
Right.
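
A sketch of that reconnect-and-request-the-remainder idea, in Python; the 
"RESUME <offset>" request line and the retry-forever loop are purely my 
illustration of the principle:

    import socket

    def fetch_with_resume(host: str, port: int, out_path: str) -> None:
        received = 0
        while True:                              # a real client would back off / give up
            try:
                with socket.create_connection((host, port)) as s, \
                     open(out_path, "ab") as out:
                    # made-up application-level request: "send me the stream
                    # starting at the byte offset I already have"
                    s.sendall(b"RESUME %d\r\n" % received)
                    while True:
                        chunk = s.recv(65536)
                        if not chunk:            # clean close: sender says we have it all
                            return
                        out.write(chunk)
                        received += len(chunk)
            except OSError:
                continue                         # connection lost mid-stream; reconnect, resume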

> Which brings us to the pinnacle in network communications: 
> guaranteed delivery.
> 
> 
> c) the 'guaranteed delivery' I mentioned before: VMS offers 
> this as a message-based protocol, but you can easily convert 
> that into becoming a stream protocol by cutting the stream 
> into messages and transfer them. How does it work? Well, TCP 
> is connection-based, so once you lose the connection (for 
> whatever reason: network failure or
> otherwise) you're done.
> VMS actually queues your messages (data chunks if you want to 
> play stream over message protocol) and each message is 
> potentially held in store indefinitely, until a message from 
> the receiver comes back that, yes, we got this one. This 
> includes all the measures above: connection re-establishment 
> upon failure to deliver (i.e. retry mechanisms at both 
> message and node visibility levels), etc. It's been too long, 
> but I seem to recall it also had options where you actually 
> could request TCP-like behaviour: both in-order message 
> reception and non-duplication. But my brain may be playing 
> tricks on me there.
> 
VMS may be the only place this was _in the OS_, I'm not sure,
but the same functionality has been created by many others.

Aside: It has often been termed 'transactional' because it tries to 
provide for communications the same idealized behavior that (most) 
database systems try to do for data storage. I.e., the database 
idea is that whatever changes you write (insert, update, delete) 
are consistent with the input you used (read) and either fail 
entirely and detectably (so you can retry them or do something 
else as appropriate) or else are done completely and permanently 
(unless and until you intentionally change them again). This is 
denoted by the acronym ACID = Atomic, Consistent, Isolated, Durable. 
This term is somewhat unfortunate because in the comms area 
'transaction' and 'transactional' are also used for styles of 
application access and interface where you do input->output 
operations that are self-contained and independent of other ones, 
versus a 'conversational' or 'session' style where you do a series 
of things that are related and later ones depend on earlier ones.
(This is really a spectrum rather than just a yes/no attribute;
nearly all UIs have _some_ state, and nearly all operations carry 
_some_ information, at least optionally, that could be redundant.)

All solutions I've seen store in-doubt data and resolve it using 
a new connection, after possibly long delay, as you describe.
Since the protocol cannot distinguish between 'peer is currently 
unreachable' and 'peer is permanently destroyed', implementations 
usually have some way to manually force resolution if necessary, 
so storage doesn't remain 'forever'.
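
In miniature, that store-until-acknowledged pattern looks something like 
the following (Python, in-memory only; a real system persists the in-doubt 
messages to disk, and the names here are my own):

    import itertools

    class GuaranteedQueue:
        def __init__(self):
            self._ids = itertools.count(1)
            self._in_doubt = {}                   # message id -> payload awaiting ack

        def enqueue(self, payload: bytes) -> int:
            mid = next(self._ids)
            self._in_doubt[mid] = payload         # held until the peer confirms it
            return mid

        def pending(self):
            # everything to (re)transmit, e.g. after re-establishing a connection
            return sorted(self._in_doubt.items())

        def acknowledge(self, mid: int) -> None:
            self._in_doubt.pop(mid, None)         # peer said "yes, we got this one"

        def force_resolve(self, mid: int) -> None:
            # manual intervention for the 'peer permanently destroyed' case,
            # so storage doesn't stay occupied forever
            self._in_doubt.pop(mid, None)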

For many years the biggest thing in this area was IBM's LU6.2 
protocol, aka APPC (Advanced Program-to-Program Communication).
Or more precisely the 'syncpoint' option of LU6.2, which also 
did other (purportedly useful) things not relevant here. This 
was implemented by the widely-used CICS and IMS application 
platforms, and also by a huge variety of other systems and 
applications that needed or wanted to interface to those.
This was part of IBM's SNA (Systems Network Architecture) stack 
and didn't directly use TCP (or IP), but was easily enough 
tunnelled over or shimmed to Internet protocols once they 
became dominant. (SNA itself dates to the early days of ARPAnet, 
well before the Internet protocols were designed, when the main 
competition was the CCITT/ISO Open Systems Interconnection stack.)

IBM then developed a product, MQSeries, which originally basically 
just routed and executed APPC-synced transfers but was then 
expanded to other transports, particularly TCP/IP, and ported; 
it has been included with MSWindows since, I believe, XP.

ISO also had an OSI transactional protocol, at level 6 IIRC, 
but it sank into oblivion with the rest of OSI.

I've worked with several wire-transfer and securities-trading 
systems where an individual message/transaction can be for 
many millions, even billions, of dollars. All these systems 
implemented their own 'guaranteed' protocols with the same 
principles of sequence numbering, end-to-end acknowledgement, 
and some kind of liveness/recency check, but details vary.
Some of them save and can redeliver a whole day's traffic 
in the worst case, or at least used to in the days when online 
remoting was slower, more expensive, and less robust.
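
The receiver-side half of that typically reduces to something like the 
sketch below; the strict-sequence rule and the 30-second recency window are 
illustrative choices of mine, not taken from any of those systems:

    import time

    class StrictReceiver:
        def __init__(self, max_age_seconds: float = 30.0):
            self.expected_seq = 1
            self.max_age = max_age_seconds

        def accept(self, seq: int, sent_at: float) -> bool:
            if seq != self.expected_seq:                  # gap or duplicate: reject
                return False
            if time.time() - sent_at > self.max_age:      # stale: fails the recency check
                return False
            self.expected_seq += 1
            return True    # caller now returns the end-to-end acknowledgement for seq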

> What does SSL do to prevent and/or detect attacks? 

> First, replay. [snip: tamper-detection scenarios]

> a-c) assume you don't have the session key, which is 
> transmitted by the SSL protocol at connection setup (while 
> 'renegotiation' can replace the session key with a fresh one 
> along the way, which is a good thing to have when you are 
> transmitting lots of data over a single connection: [...]
> Since the session key is transmitted, we should be able to 
> get our grubby little hands on that one, right?
> 
> That's where it becomes really interesting, because that's 
> what public key crypto was created for: to prevent you from 
> grabbing this. It takes a lot of math, but the crux is that 
> at connection setup time, SSL uses public key cryptography 
> (RSA, DSA, ...) to transmit a few items that are really 
> really valuable. One of these is the (symmetric
> cipher) session key to be used by sender and receiver: that's 
> your key/cipher agreement right there.
> Public key crypto is designed to give you a bloody hard time 
> to decrypt publicly available [encrypted] data while a single 
> other individual (Mr. B) will be able to decrypt it anyhow.

Not quite. For RSA, the key data is indeed public-key encrypted 
by the client and can be, and is, decrypted only by the server.
(The data isn't actually the key itself, rather the premaster 
secret from which the keys are derived, but as far as security 
goes these are equivalent.) For DH, the whole key-agreement 
algorithm is different. Each peer effectively chooses a factor 
and sends an irreversible transform of it (y = g^x mod P)
to the other. Each peer then combines its own private value 
with the public value from the other to give the key data 
(again actually the premaster, from which the keys are derived).
An eavesdropper who sees both public values (g^a and g^b) is 
still unable to determine g^(ab), which is the critical thing.
DSA does not apply to key agreement, only authentication.
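
A toy numeric illustration of that DH exchange (Python; the prime and 
private values below are absurdly small, real parameters are hundreds of 
digits, so this only shows the algebra):

    # Each peer publishes g^x mod P but keeps x secret; both end up with g^(ab) mod P.
    P, g = 2147483647, 5            # toy prime and generator, nowhere near real sizes
    a, b = 123456, 654321           # private values chosen by the two peers

    A = pow(g, a, P)                # what one peer sends (an eavesdropper sees this)
    B = pow(g, b, P)                # what the other peer sends (seen as well)

    shared_1 = pow(B, a, P)         # peer 1: peer 2's public value + own private value
    shared_2 = pow(A, b, P)         # peer 2: peer 1's public value + own private value
    assert shared_1 == shared_2     # both hold g^(ab) mod P; A and B alone don't give it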

> [...] To ensure B will be able to see /he/
> (A) sent this, he'll also first 'sign' the valuables by 
> encrypting it using his own private key.

Not as such. For RSA and static-DH, the *server* doesn't sign 
anything; it implicitly proves (current) possession by 
completing key agreement correctly, confirmed by Finished.
For ephemeral-DH, he does sign his key factor. For anon-DH, 
of course, no authentication is done. 

Authentication of the client is optional; if done, again 
for static-DH it is implicitly proved by the key agreement, 
and for RSA and eph-DH he signs the running handshake hash, 
which includes among other things the (already-encrypted) 
RSA premaster OR his (unencrypted) eph-DH factor.

> d) You, as MITM, can try to 'mimic' A and/or B. Since the 
> public key crypto used is strong enough to hold you off, i.e. 
> allow each of them discover your attempts at impersonation / 
> data injection / data elimination / replay (there's also a 
> timer aboard to prevent you from recording the whole 
> conversation and replaying the conversation as-is at a later 
> time, by the way -- that's attack #d ;-) )
> 
The time value (in this context usually called a 'clock' not 
a 'timer') can't be relied on for this because clocks can also 
be subverted. Instead both ends provide random (varying and 
unpredictable) values which contribute to the key(s) and 
thus prevent replayed negotiation from Finish'ing.
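
To see why the randoms (rather than a clock) kill replay, here's a stand-in 
derivation in Python -- this HMAC is emphatically NOT the real TLS PRF, it 
just shows that the keys depend on both sides' fresh randoms:

    import hmac, hashlib, os

    def derive_keys(premaster: bytes, client_random: bytes, server_random: bytes) -> bytes:
        # stand-in for the real key derivation: the keys depend on BOTH randoms
        return hmac.new(premaster, client_random + server_random, hashlib.sha256).digest()

    premaster     = os.urandom(48)
    client_random = os.urandom(32)     # suppose Mallory recorded this handshake
    old_server    = os.urandom(32)     # the server random from the recorded run
    new_server    = os.urandom(32)     # a live server picks a fresh one

    # Replaying the old client messages against a live server yields different
    # keys, so the replayed Finished message cannot verify.
    assert derive_keys(premaster, client_random, old_server) != \
           derive_keys(premaster, client_random, new_server)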

> e) But you /can/ try to subvert the whole scheme in one way: 
[the certificate/chain/authority(s), but out of scope]

Right.

> Given the above, what can an attacker accomplish?
> 
> All tampering leads to connection disruption, [...]
> You can [try to] impersonate nodes by making the others out 
> there mistakenly trust your forged certificate instead. [...]
> Guaranteed delivery of data? The above should've made it 
> abundantly clear this is not to be had: [...]
> Hence within a single SSL connection there can only be 'best effort'
> delivery, which can (and does) only /guarantee/ 'in order' 
> and 'non-duplicated' data reception with 'no gaps' in the 
> data received /so far/.
> Upper layer protocol will need to be added on top of this if 
> you require anything resembling guaranteed delivery (or the 
> next approximation thereof: explicit reception 
> acknowledgement by the receiving end node).
> 
Right.

SSL does support carrying _crypto_ context from one connection 
to another (session reuse). But not data or application state;
that must be done by something else (higher).
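
With OpenSSL through Python's ssl module, that crypto-context reuse looks 
roughly like this (the host is just a placeholder of mine; note that with 
TLS 1.3 the session ticket may only arrive after some application data has 
flowed, hence the small exchange before saving the session):

    import socket, ssl

    HOST = "example.org"                         # placeholder server
    ctx = ssl.create_default_context()

    with ctx.wrap_socket(socket.create_connection((HOST, 443)),
                         server_hostname=HOST) as s1:
        # exchange a little data so a TLS 1.3 server has had a chance to send
        # its session ticket before we grab the session
        s1.sendall(b"HEAD / HTTP/1.0\r\nHost: " + HOST.encode() + b"\r\n\r\n")
        s1.recv(4096)
        saved = s1.session                       # crypto context only, no app state

    with ctx.wrap_socket(socket.create_connection((HOST, 443)),
                         server_hostname=HOST,
                         session=saved) as s2:
        print("handshake reused old session:", s2.session_reused)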


