Good morning Christian and Conner,

The construction we came up with allows multiple rendezvous nodes, unlike the
HORNET construction, which inherently supports only a single rendezvous node.
Perhaps the extra flexibility comes with some security degradation?

Regards,
ZmnSCPxj

Sent with ProtonMail Secure Email.

‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐
On Wednesday, November 14, 2018 7:40 PM, Christian Decker 
<decker.christ...@gmail.com> wrote:

> Hi Conner,
>
> Thanks for the pointers, looking forward to reading up on the
> wrap resistance. I don't quite follow whether you're against the
> re-wrapping for spontaneous re-routing, or against the entire rendez-vous
> construction we came up with in Australia. If it's the latter, do
> you have an alternative construction that we might look at?
> HORNET requires the onion-in-onion initial Sphinx setup IIRC,
> which is pretty much what we came up with here (with the
> exception that we manage to hide the second onion in
> the first one's header instead of the payload).
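>
> Roughly, the idea can be pictured like this; a loose sketch only, where the
> field sizes, names, and helper are illustrative assumptions rather than the
> actual BOLT 4 layout:
>
>     # Loose sketch: the recipient's pre-built partial onion is spliced into
>     # the unused hop slots of the outer onion's fixed-size header, instead
>     # of being carried as a separate payload as in a HORNET-style setup.
>     # HOP_SIZE, NUM_HOPS and embed_partial_onion are illustrative only.
>     HOP_SIZE = 65
>     NUM_HOPS = 20
>
>     def embed_partial_onion(sender_hops: list, partial_onion: bytes) -> bytes:
>         used = b"".join(sender_hops)
>         assert len(used) + len(partial_onion) <= HOP_SIZE * NUM_HOPS
>         filler = bytes(HOP_SIZE * NUM_HOPS - len(used) - len(partial_onion))
>         return used + partial_onion + filler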
>
> Cheers,
> Christian
>
> On Tue, Nov 13, 2018 at 9:21 PM Conner Fromknecht 
> <conner@lightning.engineering> wrote:
>
>> Good morning all,
>>
>> Taking a step back: even if key switching can be done mathematically, it
>> seems dubious that we would want to introduce re-routing or rendezvous
>> routing in this manner. If the example provided _could_ be done, it would
>> directly violate the wrap-resistance property of the ideal onion routing
>> scheme defined in [1]. This property is proven for Sphinx in section 4.3
>> of [2]. Schemes like HORNET [3] support rendezvous routing and are formally
>> proven in this model. That seems like the obvious path forward, given that
>> we've already done a considerable amount of work towards implementing
>> HORNET via Sphinx.
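>>
>> (As a rough paraphrase of [1]: wrap-resistance says that, given an onion,
>> it should be infeasible to construct a *different* packet that an honest
>> node would process into exactly that onion, i.e. nobody can retroactively
>> add a layer around an existing onion. A hypothetical predicate, only to pin
>> down the shape of the property; `peel` stands in for a node's onion
>> processing and is not a real API:)
>>
>>     # Hypothetical illustration of what a wrap-resistance break would look
>>     # like; `peel(node_key, packet)` stands in for onion processing.
>>     def is_wrap_resistance_break(peel, node_key, forged_packet, target_onion):
>>         # An adversary "wraps" target_onion if it finds a distinct packet
>>         # whose processing at an honest node yields exactly target_onion.
>>         return (forged_packet != target_onion
>>                 and peel(node_key, forged_packet) == target_onion)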
>>
>> Cheers,
>> Conner
>>
>> [1] A Formal Treatment of Onion Routing:
>> https://www.iacr.org/cryptodb/archive/2005/CRYPTO/1091/1091.pdf
>> [2] Sphinx: https://cypherpunks.ca/~iang/pubs/Sphinx_Oakland09.pdf
>> [3] HORNET: https://arxiv.org/pdf/1507.05724.pdf
>> On Mon, Nov 12, 2018 at 8:47 PM ZmnSCPxj via Lightning-dev
>> <lightning-dev@lists.linuxfoundation.org> wrote:
>>>
>>> Good morning Christian,
>>>
>>> I am nowhere near a mathematician, and thus cannot countercheck your expertise
>>> here (and so cannot give a counterproposal).
>>>
>>> But I want to point out the below scenarios:
>>>
>>> 1.  C is the payer.  He is in contact with an unknown payee (who in reality
>>> is E).  E provides the onion-wrapped route D->E with the ephemeral key and
>>> other necessary data, as well as informing C that D is the rendez-vous
>>> point.  Then C creates a route from itself to D (via the channel C->D or
>>> via C->A->D).
>>>
>>> 2.  B is the payer.  He knows the entire route B->C->D->E and knows that the
>>> payee is E.  Unfortunately the C<->D channel is low-capacity, down, or
>>> otherwise unusable.  B has provided to C the onion-wrapped route D->E with
>>> the ephemeral key and other necessary data, as well as informing C that D
>>> is the next node.  Then C either pays via C->D or via C->A->D.
>>>
>>> Even if there is an off-by-one error in our thinking about rendez-vous
>>> nodes, could it not also be compensated for by an off-by-one in the
>>> link-level payment splitting via an intermediary rendez-vous node?
>>> In short, D is the one that switches keys instead of A.
>>>
>>> The operation of processing a hop would be (a rough code sketch follows the
>>> list):
>>>
>>> 1.  Unwrap the onion with the current ephemeral key.
>>> 2.  Dispatch based on the realm byte.
>>> 2.1.  If the realm byte is 0:
>>> 2.1.1.  Normal routing behavior: extract HMAC, etc.
>>> 2.2.  If the realm byte is 2 ("switch ephemeral keys"):
>>> 2.2.1.  Set the current ephemeral key to bytes 1 -> 32 of the packet.
>>> 2.2.2.  Shift the onion by one hop packet.
>>> 2.2.3.  Goto 1.
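>>>
>>> A rough sketch of that dispatch loop in code; purely illustrative, with the
>>> helpers (unwrap, shift_one_hop, handle_normal_hop) and the packet layout as
>>> assumptions rather than the actual BOLT 4 onion format:
>>>
>>>     # Illustrative only: realm-byte dispatch with a realm-2 "switch
>>>     # ephemeral keys" record, mirroring the numbered steps above.  The
>>>     # helpers are assumed and passed in so the sketch stays self-contained.
>>>     def process_hop(packet, ephemeral_key,
>>>                     unwrap, shift_one_hop, handle_normal_hop):
>>>         while True:
>>>             hop_data = unwrap(packet, ephemeral_key)  # 1. unwrap with current key
>>>             realm = hop_data[0]                       # 2. dispatch on realm byte
>>>             if realm == 0:                            # 2.1 normal hop
>>>                 return handle_normal_hop(hop_data, packet, ephemeral_key)
>>>             if realm == 2:                            # 2.2 key switch
>>>                 ephemeral_key = hop_data[1:33]        # 2.2.1 bytes 1..32 = new key
>>>                 packet = shift_one_hop(packet)        # 2.2.2 shift by one hop
>>>                 continue                              # 2.2.3 goto 1
>>>             raise ValueError("unknown realm byte")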
>>>
>>> Would that not work?
>>> (I am being naive here, as I am not a mathist and I did not understand half
>>> of what you wrote, sorry.)
>>>
>>> Then at C, we have the onion from D->E, and we also know the next ephemeral
>>> key to use (we can derive it, since we would pass it to D anyway).
>>> C right-shifts the onion by one hop, storing the next ephemeral key in the
>>> new hop slot it just allocated.
>>> Then it encrypts the onion using a new ephemeral key, which it will use to
>>> generate the D<-A<-C part of the onion.
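>>>
>>> As a very rough sketch of that re-wrapping step at C (every helper and the
>>> realm-2 record layout here are assumptions, not existing lnd/c-lightning
>>> functions):
>>>
>>>     # Illustrative only: C splices the received D->E partial onion behind a
>>>     # realm-2 "switch ephemeral keys" record, then builds the C->A->D
>>>     # prefix around it under a fresh session key.
>>>     def rewrap_at_c(partial_onion_d_to_e, next_ephemeral_key, new_session_key,
>>>                     route_c_a_d, right_shift_one_hop, build_onion_prefix):
>>>         # Right-shift by one hop and store the next ephemeral key in the
>>>         # freed-up slot, tagged with realm byte 2.
>>>         switch_record = bytes([2]) + next_ephemeral_key
>>>         shifted = right_shift_one_hop(partial_onion_d_to_e, switch_record)
>>>         # Encrypt and wrap the C->A->D hops around the result, so A and D
>>>         # process it like any other onion until D hits the key switch.
>>>         return build_onion_prefix(route_c_a_d, new_session_key, shifted)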
>>>
>>> Regards,
>>> ZmnSCPxj
>>>
>>>
>>> Sent with ProtonMail Secure Email.
>>>
>>> ‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐
>>> On Tuesday, November 13, 2018 11:45 AM, Christian Decker 
>>> <decker.christ...@gmail.com> wrote:
>>>
>>> > Great proposal ZmnSCPxj, but I think I need to raise a small issue with
>>> > it. While writing up the proposal for rendez-vous I came across a
>>> > problem with the mechanism I described during the spec meeting: the
>>> > padding at the rendez-vous point would usually be zero-padded and then
>>> > encrypted in one go with the shared secret that was generated from the
>>> > previous ephemeral key (i.e., the one before the switch). That ephemeral
>>> > key is not known to the recipient (barring additional rounds of
>>> > communication) so the recipient would be unable to compute the correct
>>> > MACs. There are a number of solutions to this, basically setting the
>>> > padding to something that the recipient could know when generating its
>>> > half onion.
>>> >
>>> > My current favorite goes like this:
>>> >
>>> > 1.  Rendez-vous RV receives an onion, performs ECDH like normal to get
>>> >     the shared secret, decrypts its payload, simultaneously encrypts
>>> >     the padding.
>>> >
>>> > 2.  It extracts its per-hop payload and shifts the entire packet over
>>> >     (shift its payload out and the newly generated padding in)
>>> >
>>> > 3.  It then notices that it should perform an ephemeral key switch, now
>>> >     deviating from the normal protocol (which would just be to generate
>>> >     the new ephemeral key, serialize and forward)
>>> >     3.1. It zero-fills the padding that it just added (so we are in a
>>> >     state that the recipient knew when generating its partial onion).
>>> >     3.2. It performs ECDH with the switched-in ephemeral key to get a new
>>> >     shared secret, which is then used to unwrap one additional layer of
>>> >     encryption and, most importantly, to encrypt the padding so the next
>>> >     hop doesn't see the zero-filled padding.
>>> >     3.3. Only then will it generate the new ephemeral key for the next
>>> >     hop, based on the switched-in ephemeral key and the newly generated
>>> >     shared secret, serialize the packet and forward it.
>>> >
>>> > This has the advantage of reusing all the existing machinery but
>>> > assembling it a bit differently, by adding a little detour when
>>> > generating the next onion. It involves one additional ECDH at the
>>> > rendez-vous, one ChaCha20 encryption and one scalar multiplication to
>>> > generate the next ephemeral keys. It does not need more space than the
>>> > single ephemeral key in the per-hop payload.
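>>> >
>>> > Putting the steps above together as a very rough sketch (the crypto
>>> > helpers here, ecdh, chacha20_stream, xor_bytes and blind_key, are assumed
>>> > stand-ins, not the actual Sphinx implementation):
>>> >
>>> >     # Illustrative only: rendez-vous processing as in steps 1-3 above.
>>> >     # All helpers are assumptions; error handling and HMACs are omitted.
>>> >     def process_at_rendezvous(routing_info, node_privkey, ephemeral_key,
>>> >                               switched_in_key, payload_len,
>>> >                               ecdh, chacha20_stream, xor_bytes, blind_key):
>>> >         # 1. Normal step: ECDH for the shared secret; one XOR pass decrypts
>>> >         #    our payload and simultaneously encrypts the appended padding.
>>> >         secret = ecdh(node_privkey, ephemeral_key)
>>> >         extended = routing_info + bytes(payload_len)
>>> >         decrypted = xor_bytes(extended, chacha20_stream(secret, len(extended)))
>>> >         # 2. Shift our per-hop payload out (the newly generated padding is
>>> >         #    now at the tail).
>>> >         payload, shifted = decrypted[:payload_len], decrypted[payload_len:]
>>> >         # 3.1 Zero-fill the padding we just added, restoring the state the
>>> >         #     recipient assumed when building its partial onion.
>>> >         shifted = shifted[:-payload_len] + bytes(payload_len)
>>> >         # 3.2 ECDH with the switched-in ephemeral key; unwrapping one extra
>>> >         #     layer also re-encrypts the zeroed padding.
>>> >         secret2 = ecdh(node_privkey, switched_in_key)
>>> >         shifted = xor_bytes(shifted, chacha20_stream(secret2, len(shifted)))
>>> >         # 3.3 Only now derive the next ephemeral key from the switched-in
>>> >         #     key and the new shared secret, then serialize and forward.
>>> >         next_ephemeral = blind_key(switched_in_key, secret2)
>>> >         return next_ephemeral, shifted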
>>> >
>>> > And now for the reason that I write this as a reply to your post: with
>>> > this scheme it is not possible for C to find an ephemeral key that would
>>> > end up identical to the one that D would require to decrypt the onion
>>> > correctly. This would not be an issue if D is informed about this split
>>> > and would basically accept whatever it gets, but that kind of defeats
>>> > the transparency that you were going for with your proposal.
>>> >
>>> > I'm open to other proposals, but I currently can't think of a way to
>>> > make sure that a) the recipient can deterministically generate the same
>>> > padding that RV will generate, and b) we hide the fact that RV was indeed
>>> > a rendez-vous point (e.g., by leaving the padding as a well-known
>>> > constant).
>>> >
>>> > Sorry for this problem; I had a mental off-by-one at the meeting that I
>>> > hadn't considered. The solution should still work, but it makes this kind
>>> > of thing a bit harder.
>>> >
>>> > Cheers,
>>> > Christian
>>> >
>>> > ZmnSCPxj via Lightning-dev <lightning-dev@lists.linuxfoundation.org>
>>> > writes:
>>> >
>>> > > Good morning list,
>>> > > As was discussed directly at the summit, we accept link-level payment
>>> > > splitting (scid is not binding), and provisionally accept rendez-vous
>>> > > routing.
>>> > > It strikes me that even if your node has only a single channel to the
>>> > > next node (as in c-lightning), it is still possible to perform
>>> > > link-level payment splitting/re-routing.
>>> > > For instance, consider the graph below:
>>> > >
>>> > >       E<---D--->C<---B
>>> > >            ^  /
>>> > >            | /
>>> > >            |/
>>> > >            A
>>> > >
>>> > >
>>> > > In the above, B requests a route from B->C->D->E.
>>> > > However, C cannot send to D, since the channel direction is saturated 
>>> > > in favor of D.
>>> > > Alternatively, C can route to D via A instead. It holds the (encrypted)
>>> > > route from D to E. It can take that sub-route and treat it as a partial
>>> > > route-to-payee under rendez-vous routing, as long as node A supports
>>> > > rendez-vous routing.
>>> > > This can allow re-routing or payment splitting over multiple hops.
>>> > > Even though C does not know the number of remaining hops between D and
>>> > > the destination, its only alternative is to fail the routing and earn
>>> > > nothing anyway. At least with this, there is a chance it can succeed in
>>> > > sending the payment to the final destination.
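>>> > >
>>> > > As a tiny sketch of the decision at C (the names here are hypothetical
>>> > > and not an existing implementation):
>>> > >
>>> > >     # Illustrative only: if the direct C->D hop cannot carry the HTLC,
>>> > >     # reuse the still-encrypted D->E sub-onion as a partial
>>> > >     # route-to-payee and send it out over the C->A->D path instead.
>>> > >     def forward_or_reroute(htlc, chan_c_d, partial_onion_d_to_e,
>>> > >                            can_forward, forward_direct, reroute_via):
>>> > >         if can_forward(chan_c_d, htlc):
>>> > >             return forward_direct(chan_c_d, htlc, partial_onion_d_to_e)
>>> > >         # Channel saturated toward D: re-route over C->A->D, treating
>>> > >         # the sub-onion as the partial route-to-payee.
>>> > >         return reroute_via(["A", "D"], htlc, partial_onion_d_to_e)
>>> > >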
>>> > > Regards,
>>> > > ZmnSCPxj
>>> > >
>>>
>>>
_______________________________________________
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
