On 10/10/14 10:24, Trevor Perrin wrote:
> On Mon, Oct 6, 2014 at 10:53 AM, Ximin Luo <[email protected]> wrote:
>>
>> Committing to delivering messages in causal order has been controversial for
>> those with resource constraints [2], because it involves holding back some
>> messages from display until their ancestors arrive. The argument is that this
>> breaks user expectations, and/or would not work if delivery is unreliable
>> (assuming a reliability algorithm would be too costly).
> 
> Here's Moxie's argument from [2]:
> 
> "A fundamental premise of asynchronous messaging is that any message
> can be delayed hours or days for some or all recipients, or that any
> message can be lost entirely at any point in time for some or all
> recipients."
> 
> You're wondering if that premise can be changed via a "reliability
> algorithm" that retransmits lost messages.  If you're considering
> transcript consistency for text messaging like TextSecure, I think we
> should stick to Moxie's premise.
> 

I'm not saying we should change these premises, but that we can deal with them 
in ways other than "display messages potentially out-of-order". As I see it, 
there are two options: one is to provide reliability; the other is to use a 
less strict notion of consistency ("single-message consistency", mentioned in 
the first post). The two models I have in mind:

1. "I believe all other recipients R also received message m from author s" 
("single-message consistency")
2. "I believe all other recipients R also received message m from author s, as 
well as all its ancestors" ("history/transcript consistency")
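
To make the two properties concrete, here is a minimal sketch in Python. It 
assumes each party records acknowledgements as (recipient, message_id) pairs; 
all names here are illustrative, not from any existing implementation:

```python
# Hedged sketch: assumes each party collects acks as (recipient, msg_id)
# pairs, and "recipients" excludes ourselves. Names are hypothetical.

def single_message_consistent(acks, recipients, msg_id):
    # (1): I believe every other recipient also received this one message.
    return all((r, msg_id) in acks for r in recipients)

def transcript_consistent(acks, recipients, msg_id, ancestors):
    # (2): as (1), but also for every ancestor of the message.
    # (2) implies (1) by construction.
    return all(single_message_consistent(acks, recipients, m)
               for m in {msg_id} | set(ancestors))
```

For example, if r2 has acked m1 but not its ancestor m0, then (1) holds for m1 
while (2) fails, until r2's ack for m0 arrives.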

(2) implies (1). OTOH, you can use (1) plus explicit ordering information to 
achieve (2). Then, you *can* display messages immediately on receive. I gave an 
example of this in my first post; but it is much more costly than achieving (2) 
directly, so it may not work in the presence of resource constraints, e.g. on 
message sizes in mobile push notification systems. The book-keeping data 
structures also become *much* more complex, and I don't even want to think 
about implementing more advanced behaviours, such as concurrent join, on top 
of that.

So, this is why I prefer buffering; it is far simpler. I also have several 
arguments for why it is not that big of a deal, UX-wise and cost-wise. For 
example, buffering can be invisible to the end-user: if they don't see this 
behaviour, then there is no "experience" degradation to complain about. This 
may seem a little facetious, but it is basically what TCP does.
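
A minimal sketch of the buffering idea (my own illustration, not a spec): each 
message carries references to its immediate ancestors, and the receiver holds 
it back from display until all of those ancestors have themselves been 
displayed:

```python
# Hypothetical sketch of causal-order buffering. Messages carry the ids of
# their immediate ancestors; a message is shown only once every ancestor
# has been shown. All names are illustrative.

class CausalBuffer:
    def __init__(self):
        self.displayed = set()  # ids of messages already shown to the user
        self.pending = {}       # id -> (parents, payload) awaiting ancestors

    def receive(self, msg_id, parents, payload):
        """Buffer one incoming message; return payloads now displayable."""
        self.pending[msg_id] = (set(parents), payload)
        return self._drain()

    def _drain(self):
        shown = []
        progress = True
        while progress:
            progress = False
            for mid, (parents, payload) in list(self.pending.items()):
                if parents <= self.displayed:  # all ancestors shown
                    self.displayed.add(mid)
                    shown.append(payload)
                    del self.pending[mid]
                    progress = True
        return shown
```

So a reply that arrives before its parent is held invisibly, and both appear 
in causal order once the parent arrives; the user only ever sees a consistent 
prefix, much as TCP only delivers a contiguous byte stream.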

> It would be good to have an algorithm that works for transports where
> there isn't a server seeing all messages and providing reliable
> retransmission (e.g. SMS, Google Cloud Messaging, email, Pond).  But
> even where a server is available for this, the reliability you're
> asking for is costly.
> 

I *have* an algorithm that works without a central server. I did not try to 
optimise for "fewest messages" or "smallest message sizes", though - because I 
don't know what I should be optimising for.

"Costly" depends on the constraints. Under certain situations, e.g. where it's 
easy to detect which other members are online, reliability is less costly. So 
it would be good to know what are the strictest and/or average constraints are.

> ---
> 
> One thought: As Andy mentioned, we don't have a taxonomy of attacks
> against transcript-consistency.  That makes it hard to have these
> discussions, because it's hard to know what all the attacks are, and
> how much it's worth to defend them.  So that might be a good way
> forward.
> 

Attacks are not directly relevant here; the problem I am talking about in this 
thread is not that an attacker can make me falsely believe I have reached 
(either single-message or transcript) consistency, but that unreliable 
transports can make me truly believe I have not reached consistency, and that 
(without behaving differently) *this is indistinguishable from an attack*. 
This would result in a bad user experience, because you would get spurious 
warnings about an "inconsistent transcript" a lot of the time.

However, behaving differently (a reliability algorithm, or buffering to 
achieve causal ordering) lets me gain knowledge of consistency with a greater 
chance of success, so we can be more sure that "not reaching consistency" is 
due to a malicious transport or sender. This results in a better user 
experience, because you get fewer spurious warnings.

X

-- 
GPG: 4096R/1318EFAC5FBBDBCE
git://github.com/infinity0/pubkeys.git


_______________________________________________
Messaging mailing list
[email protected]
https://moderncrypto.org/mailman/listinfo/messaging
