Re: [Lightning-dev] Hold fees: 402 Payment Required for Lightning itself

2020-11-02 Thread Bastien TEINTURIER via Lightning-dev
Good morning Joost and Z,

So in your proposal, an htlc that is received by a routing node has the
> following properties:
> * htlc amount
> * forward up-front payment (anti-spam)
> * backward up-front payment (anti-hold)
> * grace period
> The routing node forwards this to the next hop with
> * lower htlc amount (to earn routing fees when the htlc settles)
> * lower forward up-front payment (to make sure that an attacker at the
> other end loses money when failing quickly)
> * higher backward up-front payment (to make sure that an attacker at the
> other end loses money when holding)
> * shorter grace period (so that there is time to fail back and not lose
> the backward up-front payment)


That's exactly it, this is a good summary.
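
To make these forwarding rules concrete, here is a minimal sketch (Python;
the routing fee and delta values are made up for illustration and are not
part of the proposal):

```python
# Hypothetical derivation of the outgoing HTLC parameters from the incoming
# ones, following the four rules summarized above.
def derive_outgoing_htlc(amount_msat, fwd_fee_msat, bwd_fee_msat, grace_sec,
                         routing_fee_msat=1000, fwd_delta_msat=1,
                         bwd_delta_msat=1, grace_delta_sec=10):
    return {
        # lower htlc amount: the difference is our routing fee on settlement
        "amount_msat": amount_msat - routing_fee_msat,
        # lower forward up-front payment: an attacker failing quickly
        # downstream loses the delta
        "fwd_fee_msat": fwd_fee_msat - fwd_delta_msat,
        # higher backward up-front payment: an attacker holding downstream
        # loses the delta
        "bwd_fee_msat": bwd_fee_msat + bwd_delta_msat,
        # shorter grace period: leaves us time to fail back upstream without
        # losing our own backward up-front payment
        "grace_sec": grace_sec - grace_delta_sec,
    }
```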

An issue with the bidirectional upfront/hold fees is related to trustless
> offchain-to-onchain swaps, like Boltz and Lightning Loop.
> As the claiming of the offchain side is dependent on claiming of the
> onchain side of the trustless swap mechanism, which is *definitely* slow,
> the swap service will in general be forced to pay up the hold fees.


Yes, that is a good observation.
But shouldn't the swap service take that into account in the fee it
collects to
perform the swap? That way it is in fact the user who pays for that fee.

Cheers,
Bastien

On Wed, Oct 28, 2020 at 02:13, ZmnSCPxj  wrote:

> Good morning Bastien, Joost, and all,
>
> An issue with the bidirectional upfront/hold fees is related to trustless
> offchain-to-onchain swaps, like Boltz and Lightning Loop.
>
> As the claiming of the offchain side is dependent on claiming of the
> onchain side of the trustless swap mechanism, which is *definitely* slow,
> the swap service will in general be forced to pay up the hold fees.
>
> It seems to me that the hold-fees mechanism cannot be ported over in the
> onchain side, so even if you set a "reasonable" grace period at the swap
> service of say 1 hour (and assuming forwarding nodes are OK with that
> humongous grace period!), the onchain side of the swap can delay the
> release of onchain.
>
> To mitigate against this, the swap service would need to issue a separate
> invoice to pay for the hold fee for the "real" swap payment.
> The Boltz protocol supports a separate mining-fee invoice (disabled on the
> Boltz production servers) that is issued after the invoice is "locked in"
> at the swap service, but I think that in view of the use of hold fee, a
> combined mining-fee+hold-fee invoice would have to be issued at the same
> time as the "real" swap invoice.
>
> Regards,
> ZmnSCPxj
>
>


Re: [Lightning-dev] Hold fees: 402 Payment Required for Lightning itself

2020-10-23 Thread Bastien TEINTURIER via Lightning-dev
Hey Joost and Z,

I brought up the question about the amounts because it could be that
> amounts high enough to thwart attacks are too high for honest users or
> certain uses.


I don't think this is a concern for this proposal, unless there's an attack
vector I missed.
The reason I claim that is that the backwards upfront payment can be made
somewhat big without any
negative impact on honest nodes. If you're an honest intermediate node,
only two cases are possible:

* your downstream peer settled the HTLC quickly (before the grace period
ends): in that case you
refund him his upfront fee, and you have time to settle the HTLC upstream
while still honoring
the grace period, so it will be refunded to you as well (unless you delay
the settlement upstream
for whatever reason, in which case you deserve to pay the hold_fee)
* your grace period has expired, so you can't get a refund upstream: if
that happens, the grace
period with your downstream node has also expired, so you're earning money
downstream and paying
money upstream, and you'll usually even take a small positive spread so
everything's good

The only node that can end up losing money on the backwards upfront
payment is the last node in
the route. But that node should always settle the HTLC quickly (or decide
to hodl it, but in that
case it's normal that it pays the hold_fee).
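
In code, the accounting for an honest intermediate node could look like the
following sketch (the single settle time and all names are simplifications
of mine, not part of the proposal):

```python
# Net result (in msat) for an honest intermediate node, covering the two
# cases above. fee_from_downstream: the backward up-front payment we hold
# from our downstream peer; fee_paid_upstream: the one we paid upstream.
def intermediate_outcome_msat(settle_sec, grace_downstream_sec,
                              fee_from_downstream, fee_paid_upstream):
    if settle_sec <= grace_downstream_sec:
        # case 1: downstream settled in time, so we refund them, and we
        # settle upstream within our own (longer) grace period, so our
        # own backward up-front payment is refunded too
        return 0
    # case 2: both grace periods have expired: we keep the downstream fee
    # and forfeit the upstream one, keeping a small positive spread
    return fee_from_downstream - fee_paid_upstream  # e.g. 7 - 6 = 1
```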

But what happens if the attacker is also on the other end of the
> uncontrolled spam payment? Not holding the payment, but still collecting
> the forward payments?


That's what I call short-lived `controlled spam`. In that case the attacker
pays the forward fee at
the beginning of the route but has it refunded at the end of the route. If
the attacker doesn't
want to lose any money, he has to release the HTLC before the grace period
ends (which is going to
be short-lived - at least compared to block times). This gives an
opportunity for legitimate payments
to use the HTLC slots (but it's a race between the attacker and the
legitimate users).

It's not ideal, because the attacker isn't penalized... The only way I think
we can penalize this
kind of attack is if the forward fee decrements at each hop, but in that
case it needs to be in the
onion (to avoid probing) and the delta needs to be high enough to actually
penalize the attacker.
Time to bikeshed some numbers!
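
As a starting point for that bikeshedding, here is the attacker's loss under
a decrementing forward fee (all numbers are made up):

```python
# If the forward fee decrements by `delta_msat` at each hop, an attacker
# controlling both ends of an n-hop route pays the full fee when sending
# but only collects the decremented fee at the other end.
def short_lived_spam_cost_msat(n_hops, delta_msat, n_htlcs):
    return n_htlcs * n_hops * delta_msat

# e.g. filling all 483 slots of a channel through 10 hops with a 5 msat
# per-hop decrement: short_lived_spam_cost_msat(10, 5, 483) -> 24150 msat
# per round of spam
```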

C can trivially grief D here, making it look like D is delaying, by
> delaying its own `commitment_signed` containing the *removal* of the HTLC.


You're right to dive into these, there may be something here.
But I think your example doesn't work, let me know if I'm mistaken.
D is the one who decides whether he'll be refunded or not, because D is the
first to send the
`commit_sig` that removes the HTLC. I think we would extend `commit_sig`
with a tlv field that
indicates "I refunded myself for HTLC N" to help C compute the same commit
tx and verify sigs.

I agree with you that the details of how we'll implement the grace period
may have griefing attacks
depending on how we do it, it's worth exploring further.

Cheers,
Bastien

On Fri, Oct 23, 2020 at 12:50, ZmnSCPxj  wrote:

> Good morning t-bast,
>
>
> > > And in this case C earns.
> >
> > > Can C delay the refund to D to after the grace period even if D
> settled the HTLC quickly?
> >
> > Yes C earns, but D has misbehaved. As a final recipient, D isn't
> dependent on anyone downstream.
> > An honest D should settle the HTLC before the `grace_period` ends. If D
> chooses to hold the HTLC
> > for a while, then it's fair that he pays C for this.
>
>
> Okay, now let us consider the case where the supposedly-delaying party is
> not the final destination.
>
> So, suppose D indicates to C that it should fail the HTLC.
> In this case, C cannot immediately propagate the `update_fail_htlc`
> upstream, since the latest commitment transaction for the C<->D channel
> still contains the HTLC.
>
> In addition, our state machine is hand-over-hand, i.e. there is a small
> window where there are two valid commitment transactions.
> What happens is we sign the next commitment transaction and *then* revoke
> the previous one.
>
> So I think C can only safely propagate its own upstream `update_fail_htlc`
> once it receives the `revoke_and_ack` from D.
>
> So the time measured for the grace period between C and D should be from C
> sending `update_add_htlc` to C receiving `revoke_and_ack` from D, in case
> the HTLC fails.
> This is the time period that D is allowed to consume, and if it exceeds
> the grace period, it is penalized.
>
> (In this situation, it is immaterial if D is the destination: C cannot
> know this fact.)
>
> So let us diagram this better:
>
>  C                            D
>  |-----update_add_htlc------>| ---
>  |-----commitment_signed---->|  ^
>  |<----commitment_signed-----|  |
>  |-----revoke_and_ack------->|  |
>  |                           |  | grace period
>  |<----update_fail_htlc------|  |
>  |<----commitment_signed-----|  |
Re: [Lightning-dev] Hold fees: 402 Payment Required for Lightning itself

2020-10-23 Thread Bastien TEINTURIER via Lightning-dev
Thanks for your answers,

My first instinct is that additional complications are worse in general.
> However, it looks like simpler solutions are truly not enough, so adding
> the complication may very well be necessary.


I agree with both these statements ;). I'd love to find a simpler solution,
but this is the simplest
I've been able to come up with for now that seems to work without adding
griefing vectors...

The succeeding text refers to HTLCs "settling".


As you noted, settling means getting the HTLC removed from the commitment
transaction.
It includes both fulfills and fails, otherwise the proposal indeed doesn't
penalize spam.

If we also require that the hold fee be funded from the main output, then
> we cannot use single-funded channels, except perhaps with `push_msat`.


I see what you mean, the first payment cannot require a hold fee since the
fundee doesn't have a
main output. I think it's ok, it's the same thing as the reserve not being
met initially.

But you're right that there are potentially other mechanisms to enforce the
fee (like your suggestion
of subtracting from the HTLC output); I chose the simplest for now, but we
can (and will) revisit
that choice if we think that the overall mechanisms work!

And in this case C earns.

Can C delay the refund to D to after the grace period even if D settled the
> HTLC quickly?


Yes C earns, but D has misbehaved. As a final recipient, D isn't dependent
on anyone downstream.
An honest D should settle the HTLC before the `grace_period` ends. If D
chooses to hold the HTLC
for a while, then it's fair that he pays C for this.

it is the fault of the peer for getting disconnected and having a delay in
> reconnecting, possibly forfeiting the hold fee because of that.


I think I agree with that, but we'll need to think about the pros and cons
when we get to details.

Is 1msat going to even deter anyone?

I am wondering though what the values for the fwd and bwd fees should be. I
> agree with ZmnSCPxj that 1 msat for the fwd is probably not going to be
> enough.


These values were only chosen for the sake of the example's simplicity. If
we agree the proposal works
to fight spam, we will do some calculations to figure out a good value for
this. But I think finding the
right base values will not be the hard part, so we'll focus on this once
we're convinced the proposal
is worth exploring in full detail.

It is interesting that the forward and backward payments are relatively
> independent of each other


To explain this further, I think it's important to highlight that the
forward fee is meant to fight
`uncontrolled spam` (where the recipient is an honest node) while the
backward fee is meant to fight
`controlled spam` (where the recipient also belongs to the attacker).

The reason it works is that `uncontrolled spam` requires the
attacker to send a large volume
of HTLCs, so a very small forward fee gets magnified. The backward fee will
be much bigger because
in `controlled spam`, the attacker doesn't need a large volume of HTLCs but
holds them for a long
time. What I think is nice is that this proposal has only a tiny cost for
honest senders (the
forward fee).
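
A back-of-the-envelope comparison of the two attack costs (the fee values
are placeholders, not proposed values):

```python
FWD_FEE_MSAT = 1    # tiny, paid on every HTLC: honest senders barely notice
HOLD_FEE_MSAT = 5   # bigger, only lost when holding past the grace period

# uncontrolled spam needs volume: the attacker-sender loses the forward fee
# once per HTLC (each relay recoups it from the next hop)
uncontrolled_cost_msat = 1_000_000 * FWD_FEE_MSAT  # 1M spam HTLCs

# controlled spam needs holding: the attacker's recipient forfeits its
# backward up-front payment once per held HTLC (one channel's 483 slots)
controlled_cost_msat = 483 * HOLD_FEE_MSAT  # per grace period elapsed
```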

What I'd really like to explore is whether there is a type of spam that I
missed or griefing attacks
that appear because of the mechanisms I introduce. TBH I think the
implementation details (amounts,
grace periods and their deltas, when to start counting, etc) are things
we'll be able to figure out
collectively later.

Thanks again for your time!
Bastien


On Fri, Oct 23, 2020 at 07:58, Joost Jager  wrote:

> Hi Bastien,
>
> We add a forward upfront payment of 1 msat (fixed) that is paid
>> unconditionally when offering an HTLC.
>> We add a backwards upfront payment of `hold_fees` that is paid when
>> receiving an HTLC, but refunded
>> if the HTLC is settled before the `hold_grace_period` ends (see footnotes
>> about this).
>>
>
> It is interesting that the forward and backward payments are relatively
> independent of each other. In particular the forward anti-spam payment
> could quite easily be implemented to help protect the network. As you said,
> just transfer that fixed fee for every `update_add_htlc` message from the
> offerer to the receiver.
>
> I am wondering though what the values for the fwd and bwd fees should be.
> I agree with ZmnSCPxj that 1 msat for the fwd is probably not going to be
> enough.
>
> Maybe a way to approach it is this: suppose routing nodes are able to make
> 5% per year on their committed capital. An aggressive routing node could be
> willing to spend up to that amount to take down a competitor.
>
> Suppose the network consists only of 1 BTC, 483-slot channels. What should
> the fwd and bwd fees be so that even an attacked routing node will still
> earn that 5% (not through forwarding fees, but through hold fees) in both
> the controlled and the uncontrolled spam scenario?
>
> - Joost
>

Re: [Lightning-dev] Hold fees: 402 Payment Required for Lightning itself

2020-10-22 Thread Bastien TEINTURIER via Lightning-dev
Good morning list,

Sorry in advance for the lengthy email, but I think it's worth detailing my
hybrid proposal
(bidirectional upfront payments), it feels to me like a workable solution
that builds on
previous proposals. You can safely ignore the details at the end of the
email and focus only on
the high-level mechanism at first.

Let's consider the following route: A -> B -> C -> D

We add a `hold_grace_period_delta` field to `channel_update` (in seconds).
We add two new fields in the tlv extension of `update_add_htlc`:

* `hold_grace_period` (seconds)
* `hold_fees` (msat)

We add an `outgoing_hold_grace_period` field in the onion per-hop payload.

When nodes receive an `update_add_htlc`, they verify that:

* `hold_fees` is not unreasonably large
* `hold_grace_period` is not unreasonably small or large
* `hold_grace_period` - `outgoing_hold_grace_period` >=
`hold_grace_period_delta`

Otherwise they immediately fail the HTLC instead of relaying it.
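
As a sketch of those receive-side checks (the bounds are local policy knobs
I invented; only the delta check is taken directly from the rules above):

```python
MAX_HOLD_FEES_MSAT = 100                 # "not unreasonably large"
MIN_GRACE_SEC, MAX_GRACE_SEC = 30, 600   # "not unreasonably small or large"

def accept_update_add_htlc(hold_fees_msat, hold_grace_period_sec,
                           outgoing_hold_grace_period_sec,
                           hold_grace_period_delta_sec):
    if hold_fees_msat > MAX_HOLD_FEES_MSAT:
        return False
    if not (MIN_GRACE_SEC <= hold_grace_period_sec <= MAX_GRACE_SEC):
        return False
    if (hold_grace_period_sec - outgoing_hold_grace_period_sec
            < hold_grace_period_delta_sec):
        return False  # not enough margin to fail back within our own grace
    return True  # relay the HTLC; on False, fail it immediately
```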

For the example we assume all nodes use `hold_grace_period_delta = 10`.

We add a forward upfront payment of 1 msat (fixed) that is paid
unconditionally when offering an HTLC.
We add a backwards upfront payment of `hold_fees` that is paid when
receiving an HTLC, but refunded
if the HTLC is settled before the `hold_grace_period` ends (see footnotes
about this).

* A sends an HTLC to B:
  * `hold_grace_period = 100 sec`
  * `hold_fees = 5 msat`
  * `outgoing_hold_grace_period = 90 sec`
  * forward upfront payment: 1 msat is deducted from A's main output and
    added to B's main output
  * backwards upfront payment: 5 msat are deducted from B's main output and
    added to A's main output
* B forwards the HTLC to C:
  * `hold_grace_period = 90 sec`
  * `hold_fees = 6 msat`
  * `outgoing_hold_grace_period = 80 sec`
  * forward upfront payment: 1 msat is deducted from B's main output and
    added to C's main output
  * backwards upfront payment: 6 msat are deducted from C's main output and
    added to B's main output
* C forwards the HTLC to D:
  * `hold_grace_period = 80 sec`
  * `hold_fees = 7 msat`
  * `outgoing_hold_grace_period = 70 sec`
  * forward upfront payment: 1 msat is deducted from C's main output and
    added to D's main output
  * backwards upfront payment: 7 msat are deducted from D's main output and
    added to C's main output

* Scenario 1: D settles the HTLC quickly:
  * all backwards upfront payments are refunded (returned to the respective
    main outputs)
  * only the forward upfront payments have been paid (to protect against
    `uncontrolled spam`)

* Scenario 2: D settles the HTLC after the grace period:
  * D's backwards upfront payment is not refunded
  * if C and B relay the settlement upstream quickly (before
    `hold_grace_period_delta`), their backwards upfront payments are refunded
  * all the forward upfront payments have been paid (to protect against
    `uncontrolled spam`)

* Scenario 3: C delays the HTLC:
  * D settles before its `grace_period`, so its backwards upfront payment is
    refunded by C
  * C delays before settling upstream: it can ensure B will not get refunded,
    but C will not get refunded either, so B gains the difference in
    backwards upfront payments (which protects against `controlled spam`)
  * all the forward upfront payments have been paid (to protect against
    `uncontrolled spam`)

* Scenario 4: the channel B <-> C closes:
  * D settles before its `grace_period`, so its backwards upfront payment is
    refunded by C
  * for whatever reason (malicious or not) the B <-> C channel closes
  * this ensures that C's backwards upfront payment is paid to B
  * if C publishes an HTLC-fulfill quickly, B may have his backwards upfront
    payment refunded by A
  * if B is forced to wait for his HTLC-timeout, his backwards upfront
    payment will not be refunded, but it's ok because B got C's backwards
    upfront payment
  * all the forward upfront payments have been paid (to protect against
    `uncontrolled spam`)
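
To sanity-check these scenarios, here is a toy ledger (a sketch: it only
tracks the upfront payments and ignores HTLC amounts and routing fees):

```python
FWD_FEE_MSAT = 1
HOLD_FEES_MSAT = {"A->B": 5, "B->C": 6, "C->D": 7}

def settle(refunded_hops):
    balances = {n: 0 for n in "ABCD"}
    for hop, hold_fee in HOLD_FEES_MSAT.items():
        offerer, receiver = hop.split("->")
        balances[offerer] -= FWD_FEE_MSAT   # forward upfront payment...
        balances[receiver] += FWD_FEE_MSAT  # ...always kept by the receiver
        if hop not in refunded_hops:
            # backward upfront payment kept by the offerer
            balances[receiver] -= hold_fee
            balances[offerer] += hold_fee
    return balances

print(settle({"A->B", "B->C", "C->D"}))  # scenario 1: A -1, D +1, B/C 0
print(settle({"C->D"}))  # scenario 3: C -6, B +1 (the spread), A +4, D +1
```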

If done naively, this mechanism may allow intermediate nodes to deanonymize
sender/recipient.
If the base `grace_period` and `hold_fees` are randomized, I believe this
attack vector disappears,
but it's worth exploring in more detail.

The most painful part of this proposal will be handling the `grace_period`:

* when do you start counting: when you send/receive `update_add_htlc`,
`commit_sig` or
`revoke_and_ack`?
* what happens if there is a disconnection (how do you account for the
delay of reconnecting)?
* what happens if the remote settles after the `grace_period`, but refunds
himself when sending his
`commit_sig` (making it look like from his point of view he settled before
the `grace_period`)?
I think in that case the behavior should be to give your peers some leeway
and let them get away
with it, but record it. If they're doing it too often, close channels and
ban them; stealing
upfront fees should never be worth losing channels.

I chose to make the backwards upfront payment fixed instead of scaling it
based on the time an HTLC
is left pending; it's slightly less penalizing for spammers, but is less
complex and 

Re: [Lightning-dev] Hold fees: 402 Payment Required for Lightning itself

2020-10-19 Thread Bastien TEINTURIER via Lightning-dev
Good morning list,

I've started summarizing proposals, attacks and threat models on github [1].
I'm hoping it will help readers get up-to-speed and avoid falling in the
same pitfalls we already
fell into with previous proposals.

I've kept it very high-level for now; we can add nitty-gritty technical
details as we slowly
converge towards acceptable solutions. I have probably missed subtleties
from previous proposals;
feel free to contribute to correct my mistakes. I have omitted, for example,
the details of Rusty's
previous proposal since he mentioned a new, better one that will be
described soon.

While doing this exercise, I couldn't find a reason why the `reverse
upfront payment` proposal
would be broken (notice that I described it using a flat amount after a
grace period, not an amount
based on the time HTLCs are held). Can someone point me to the most obvious
attacks on it?

It feels to me that its only issue is that it still allows spamming for
durations smaller than the
grace period; my gut feeling is that if we add a smaller forward direction
upfront payment to
complement it, it could be a working solution.

Pasting it here for completeness:

### Reverse upfront payment

This proposal builds on the previous one, but reverses the flow. Nodes pay
a fee for *receiving*
HTLCs instead of *sending* them.

```text
A -> B -> C -> D

B pays A to receive the HTLC.
Then C pays B to receive the forwarded HTLC.
Then D pays C to receive the forwarded HTLC.
```

There must be a grace period during which no fees are paid; otherwise the
`uncontrolled spam` attack
allows the attacker to force all nodes in the route to pay fees while he's
not paying anything.

The fee cannot be the same at each hop, otherwise it's free for the
attacker when he is at both
ends of the payment route.

This fee must increase as the HTLC travels downstream: this ensures that
nodes that hold HTLCs
longer are penalized more than nodes that fail them fast, and if a node has
to hold an HTLC for a
long time because it's stuck downstream, they will receive more fees than
what they have to pay.

The grace period cannot be the same at each hop either, otherwise the
attacker can force Bob to be
the only one to pay fees. Similarly to how we have `cltv_expiry_delta`,
nodes must have a
`grace_period_delta` and the `grace_period` must be bigger upstream than
downstream.
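
A minimal sketch of these two monotonicity rules together (the delta values
are placeholders):

```python
FEE_DELTA_MSAT = 1           # fee grows as the HTLC travels downstream
GRACE_PERIOD_DELTA_SEC = 10  # grace period shrinks, like cltv_expiry_delta

def next_hop_params(fee_msat, grace_period_sec):
    # each hop receives a bigger fee and grants a shorter grace period than
    # the hop before it
    return (fee_msat + FEE_DELTA_MSAT,
            grace_period_sec - GRACE_PERIOD_DELTA_SEC)
```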

Drawbacks:

* The attacker can still lock HTLCs for the duration of the `grace_period`
and repeat the attack
continuously

Open questions:

* Does the fee need to be based on the time the HTLC is held?
* What happens when a channel closes and HTLC-timeout has to be redeemed
on-chain?
* Can we implement this without exposing the route length to intermediate
nodes?

Cheers,
Bastien

[1] https://github.com/t-bast/lightning-docs/blob/master/spam-prevention.md

On Sun, Oct 18, 2020 at 09:25, Joost Jager  wrote:

> > We've looked at all kinds of trustless payment schemes to keep users
>>
>> > honest, but it appears that none of them is satisfactory. Maybe it is
>> even
>> > theoretically impossible to create a scheme that is trustless and has
>> all
>> > the properties that we're looking for. (A proof of that would also be
>>
>> > useful information to have.)
>>
>> I don't think anyone has drawn yet a formal proof of this, but roughly a
>> routing peer Bob, aiming to prevent resource abuse at HTLC relay is seeking
>> to answer the following question "Is this payment coming from Alice and
>> going to Caroll will compensate for my resources consumption ?". With the
>> current LN system, the compensation is conditional on payment settlement
>> success and both Alice and Caroll are distrusted yet discretionary on
>> failure/success. Thus the underscored question is undecidable for a routing
>> peer making relay decisions only on packet observation.
>>
>> One way to mitigate this, is to introduce statistical observation of
>> sender/receiver, namely a reputation system. It can be achieved through a
>> scoring system, web-of-trust, or whatever other solution with the same
>> properties.
>> But still it must be underscored that statistical observations are only
>> probabilistic and don't provide resource consumption security to Bob, the
>> routing peer, in a deterministic way. A well-scored peer may start to
>> suddenly misbehave.
>>
>> In that sense, the efficiency evaluation of a reputation-based solution
>> to deter DoS must be evaluated based on the loss of the reputation
>> bearer related to the potential damage which can be inflicted. It's just
>> reputation sounds harder to compute accurately than a pure payment-based
>> DoS protection system.
>>
>
> I can totally see the issues and complexity of a reputation-based system.
> With 'trustless payment scheme' I meant indeed a trustless pure
> payment-based DoS protection system and the question whether such a system
> can be proven to not exist. A sender would pay an up-front amount to cover
> the maximum cost, but with the 

Re: [Lightning-dev] [RFC] Simplified (but less optimal) HTLC Negotiation

2020-10-14 Thread Bastien TEINTURIER via Lightning-dev
To be honest the current protocol can be hard to grasp at first (mostly
because it's hard to reason
about two commit txs being constantly out of sync), but from an
implementation's point of view I'm
not sure your proposals are simpler.

One of the benefits of the current HTLC state machine is that once you
describe your state as a set
of local changes (proposed by you) plus a set of remote changes (proposed
by them), where each of
these is split between proposed, signed and acked updates, the flow is
straightforward to implement
and deterministic.
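
For illustration, that decomposition could be modeled as follows (a sketch
loosely inspired by how eclair structures commitments, not actual eclair
code):

```python
from dataclasses import dataclass, field

@dataclass
class Changes:
    proposed: list = field(default_factory=list)  # sent, not yet signed
    signed: list = field(default_factory=list)    # covered by a commit_sig
    acked: list = field(default_factory=list)     # revocation exchanged

@dataclass
class ChannelChanges:
    local: Changes = field(default_factory=Changes)   # proposed by us
    remote: Changes = field(default_factory=Changes)  # proposed by them
```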

The only tricky part (where we've seen recurring compatibility issues) is
what happens on
reconnections. But it seems to me that the only missing requirement in the
spec is on the order of
messages sent, and more specifically that if you are supposed to send a
`revoke_and_ack`, you must
send that first (or at least before sending any `commit_sig`). Adding test
scenarios in the spec
could help implementers get this right.

It's a bit tricky to get it right at first, but once you get it right you
don't need to touch that
code again and everything runs smoothly. We're pretty close to that state,
so why would we want to
start from scratch? Or am I missing something?

Cheers,
Bastien

On Tue, Oct 13, 2020 at 13:58, Christian Decker 
wrote:

> I wonder if we should just go the tried-and-tested leader-based
> mechanism:
>
>  1. The node with the lexicographically lower node_id is determined to
> be the leader.
>  2. The leader receives proposals for changes from itself and the peer
> and orders them into a logical sequence of changes
>  3. The leader applies the changes locally and streams them to the peer.
>  4. Either node can initiate a commitment by proposing a `flush` change.
>  5. Upon receiving a `flush` the nodes compute the commitment
> transaction and exchange signatures.
>
> This is similar to your proposal, but does away with turn changes (it's
> always the leader's turn), and therefore reduces the state we need to
> keep track of (and re-negotiate on reconnect).
>
> The downside is that we add a constant overhead to one side's
> operations, but since we pipeline changes, and are mostly synchronous
> during the signing of the commitment tx today anyway, this comes out to
> 1 RTT for each commitment.
>
> On the other hand a token-passing approach (which I think is what you
> propose) require a synchronous token handover whenever a the direction
> of the updates changes. This is assuming I didn't misunderstand the turn
> mechanics of your proposal :-)
>
> Cheers,
> Christian
>
> Rusty Russell  writes:
> > Hi all,
> >
> > Our HTLC state machine is optimal, but complex[1]; the Lightning
> > Labs team recently did some excellent work finding another place the spec
> > is insufficient[2].  Also, the suggestion for more dynamic changes makes
> it
> > more difficult, usually requiring forced quiescence.
> >
> > The following protocol returns to my earlier thoughts, with cost of
> > latency in some cases.
> >
> > 1. The protocol is half-duplex, with each side taking turns; opener
> first.
> > 2. It's still the same form, but it's always one-direction so both sides
> >stay in sync.
> > update+-> commitsig-> <-revocation <-commitsig revocation->
> > 3. A new message pair "turn_request" and "turn_reply" let you request
> >when it's not your turn.
> > 4. If you get an update in reply to your turn_request, you lost the race
> >and have to defer your own updates until after peer is finished.
> > 5. On reconnect, you send two flags: send-in-progress (if you have
> >sent the initial commitsig but not the final revocation) and
> >receive-in-progress (if you have received the initial commitsig
> >but not received the final revocation).  If either is set,
> >the sender (as indicated by the flags) retransmits the entire
> >sequence.
> >Otherwise, (arbitrarily) opener goes first again.
> >
> > Pros:
> > 1. Way simpler.  There is only ever one pair of commitment txs for any
> >given commitment index.
> > 2. Fee changes are now deterministic.  No worrying about the case where
> >the peer's changes are also in flight.
> > 3. Dynamic changes can probably happen more simply, since we always
> >negotiate both sides at once.
> >
> > Cons:
> > 1. If it's not your turn, it adds 1 RTT latency.
> >
> > Unchanged:
> > 1. Database accesses are unchanged; you need to commit when you send or
> >receive a commitsig.
> > 2. You can use the same state machine as before, but one day (when
> >this would be compulsory) you'll be able to significantly simplify;
> >you'll need to record the index at which HTLCs were changed
> >(added/removed) in case peer wants you to rexmit though.
> >
> > Cheers,
> > Rusty.
> >
> > [1] This is my fault; I was persuaded early on that optimality was more
> > important than simplicity in a classic nerd-snipe.
> > [2] 

Re: [Lightning-dev] Making (some) channel limits dynamic

2020-10-14 Thread Bastien TEINTURIER via Lightning-dev
Hey laolu,

I think this fits in nicely with the "parameter re-negotiation" portion of
> my
> loose Dynamic commitments proposal.


Yes, maybe it's better to not offer two mechanisms and wait for dynamic
commitments to offer that
flexibility.

Instead, you may
> want to only allow them to utilize say 10% of the available HTLC bandwidth,
> slowly increasing based on successful payments, and drastically
> (multiplicatively) decreasing when you encounter very long lived HTLCs, or
> an excessive number of failures.


Exactly, that's the kind of heuristic I had in mind. Peers need to slowly
build trust before you
give them access to more resources.
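
A sketch of such a heuristic, in the spirit of TCP's
additive-increase/multiplicative-decrease (all thresholds invented for
illustration):

```python
class PeerBandwidth:
    """Fraction of the negotiated HTLC bandwidth a peer may actually use."""

    def __init__(self):
        self.fraction = 0.10  # new peers start at 10%

    def on_payment_success(self):
        # slowly build trust: additive increase up to the negotiated limit
        self.fraction = min(1.0, self.fraction + 0.01)

    def on_long_lived_htlc_or_failures(self):
        # drastic (multiplicative) decrease on suspicious behavior
        self.fraction = max(0.01, self.fraction / 2)
```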

This is
> possible to some degree today (by using an implicit value lower than
> the negotiated values), but the implicit route doesn't give the other party
> any information


Agreed, it's easy to implement locally but it's not going to be very nice
to your peer, who has
no way of knowing why you're rejecting HTLCs and may end up closing the
channel because it sees
weird behavior. That's why we need to offer an explicit re-negotiation of
these parameters, let's
keep this use-case in mind when designing dynamic commitments!

Cheers,
Bastien

On Mon, Oct 12, 2020 at 20:59, Olaoluwa Osuntokun  wrote:

>
> > I suggest adding tlv records in `commitment_signed` to tell our channel
> > peer that we're changing the values of these fields.
>
> I think this fits in nicely with the "parameter re-negotiation" portion of
> my
> loose Dynamic commitments proposal. Note that in that paradigm, something
> like this would be a distinct message, and also only be allowed with a
> "clean commitment" (as otherwise what if I reduce the number of slots to a
> value that is lower than the number of active slots?). With this, both
> sides
> would be able to propose/accept/deny updates to the flow control parameters
> that can be used to either increase the security of a channel, or implement
> a sort of "slow start" protocol for any new peers that connect to you.
>
> Similar to congestion window expansion/contraction in TCP, when a new peer
> connects to you, you likely don't want to allow them to be able to consume
> all the newly allocated bandwidth in an outgoing direction. Instead, you
> may
> want to only allow them to utilize say 10% of the available HTLC bandwidth,
> slowly increasing based on successful payments, and drastically
> (multiplicatively) decreasing when you encounter very long lived HTLCs, or
> an excessive number of failures.
>
> A dynamic HTLC bandwidth allocation mechanism would serve to mitigate
> several classes of attacks (supplementing any mitigations by "channel
> acceptor" hooks), and also give forwarding nodes more _control_ of exactly
> how their allocated bandwidth is utilized by all connected peers.  This is
> possible to some degree today (by using an implicit value lower than
> the negotiated values), but the implicit route doesn't give the other party
> any information, and may end up in weird re-send loops (as the _why_ an
> HTLC was rejected wasn't communicated). Also if you end up in a half-sign
> state, since we don't have any sort of "unadd", then the channel may end up
> borked if the violating party keeps retransmitting the same update upon
> reconnection.
>
> > Are there other fields you think would need to become dynamic as well?
>
> One other value that IMO should be dynamic to protect against future
> unexpected events is the dust limit. "It Is Known", that this value
> "doesn't
> really change", but we should be able to upgrade _all_ channels on the fly
> if it does for w/e reason.
>
> -- Laolu
>


Re: [Lightning-dev] Why should funders always pay on-chain fees?

2020-10-14 Thread Bastien TEINTURIER via Lightning-dev
I totally agree with the simplicity argument; I wanted to raise this
because it's (IMO) an issue
today because of the way we deal with on-chain fees, but it's less
impactful once update_fee is
scoped to some min_relay_fee.

Let's put this aside for now then and we can revisit later if needed.

Thanks for the feedback everyone!
Bastien

On Mon, Oct 12, 2020 at 20:49, Olaoluwa Osuntokun  wrote:

> > It seems to me that the "funder pays all the commit tx fees" rule exists
> > solely for simplicity (which was totally reasonable).
>
> At this stage, I've learned that simplicity (when doing anything that
> involves multi-party on-chain fee negotiation/verification/enforcement)
> can really go a long way. Just think about all the edge cases w.r.t
> _allocating
> enough funds to pay for fees_ we've discovered over the past few years in
> the state machine. I fear adding a more elaborate fee splitting mechanism
> would only blow up the number of obscure edge cases that may lead to a
> channel temporarily or permanently being "borked".
>
> If we're going to add a "fairer" way of splitting fees, we'll really need
> to
> dig down pre-deployment to ensure that we've explored any resulting edge
> cases within our solution space, as we'll only be _adding_ complexity to
> fee
> splitting.
>
> IMO, anchor commitments in their "final form" (fixed fee rate on commitment
> transaction, only "emergency" use of update_fee) significantly simplifies
> things as it shifts from "funding pay fees", to "broadcaster/confirmer pays
> fees". However, as you note this doesn't fully distribute the worst-case
> cost of needing to go to chain with a "fully loaded" commitment
> transaction.
> Even with HTLCs, they could only be signed at 1 sat/byte from the funder's
> perspective, once again putting the burden on the broadcaster/confirmer to
> make up the difference.
>
> -- Laolu
>
>
> On Mon, Oct 5, 2020 at 6:13 AM Bastien TEINTURIER via Lightning-dev <
> lightning-dev@lists.linuxfoundation.org> wrote:
>
>> Good morning list,
>>
>> It seems to me that the "funder pays all the commit tx fees" rule exists
>> solely for simplicity
>> (which was totally reasonable). I haven't been able to find much
>> discussion about this decision
>> on the mailing list nor in the spec commits.
>>
>> At first glance, it's true that at the beginning of the channel lifetime,
>> the funder should be
>> responsible for the fee (it's his decision to open a channel after all).
>> But as time goes by and
>> both peers earn value from this channel, this rule becomes questionable.
>> We've discovered since
>> then that there is some risk associated with having pending HTLCs
>> (flood-and-loot type of attacks,
>> pinning, channel jamming, etc).
>>
>> I think that *in some cases*, fundees should be paying a portion of the
>> commit-tx on-chain fees,
>> otherwise we may end up with a web-of-trust network where channels would
>> only exist between peers
>> that trust each other, which is quite limiting (I'm hoping we can do
>> better).
>>
>> Routing nodes may be at risk when they *receive* HTLCs. All the attacks
>> that steal funds come from
>> the fact that a routing node has paid downstream but cannot claim the
>> upstream HTLCs (correct me
>> if that's incorrect). Thus I'd like nodes to pay for the on-chain fees of
>> the HTLCs they offer
>> while they're pending in the commit-tx, regardless of whether they're
>> funder or fundee.
>>
>> The simplest way to do this would be to deduct the HTLC cost (172 *
>> feerate) from the offerer's
>> main output (instead of the funder's main output, while keeping the base
>> commit tx weight paid
>> by the funder).
>>
>> A more extreme proposal would be to tie the *total* commit-tx fee to the
>> channel usage:
>>
>> * if there are no pending HTLCs, the funder pays all the fee
>> * if there are pending HTLCs, each node pays a proportion of the fee
>> proportional to the number of
>> HTLCs they offered. If Alice offered 1 HTLC and Bob offered 3 HTLCs, Bob
>> pays 75% of the
>> commit-tx fee and Alice pays 25%. When the HTLCs settle, the fee is
>> redistributed.
>>
>> This model uses the on-chain fee as collateral for usage of the channel.
>> If Alice wants to forward
>> HTLCs through this channel (because she has something to gain - routing
>> fees), she should be taking
>> on some of the associated risk, not Bob. Bob will be taking the same risk
>> downstream if he chooses
>> to forw

Re: [Lightning-dev] Making (some) channel limits dynamic

2020-10-12 Thread Bastien TEINTURIER via Lightning-dev
Good morning,

For instance, Tor is basically two-layer: there is a lower-level TCP/IP
> layer where packets are sent out to specific nodes on the network and this
> layer is completely open about where the packet should go, but there is a
> higher layer where onion routing between nodes is used.
> We could imitate this, with HTLC packets that openly show the next
> destination node, but once all parts reach the destination node, it decodes
> and turns out to be an onion to be sent to the next destination node, and
> the current destination node is just another forwarder.


That's an interesting comment, it may be worth exploring.
IIUC you're suggesting that payments may look like this:

* Alice wants to reach Dave by going through Bob and Carol
* An onion encodes the route Alice -> Bob -> Carol -> Dave
* When Bob receives that onion and discovers that Carol is the next node,
he finds a route to Carol
and sends it along that route, but it's not an onion, it's "clear-text"
routing
* When Carol receives that message, she unwraps the Alice -> Bob -> Carol
-> Dave onion to discover
that Dave is the next hop and applies the same steps as Bob

It looks a lot like Trampoline, but Trampoline does onion routing between
intermediate nodes.
Your proposal would replace that with a potentially more efficient but less
private routing scheme.
As long as the Trampoline route does use onion routing, it could make
sense...

For your proposal, how sure is the receiver that the input end of the
> trampoline node is "nearer" to the payer than itself?


Invoices to the rescue!
Since lightning payments are invoice-based, recipients would add to the
invoice a few nodes that
are close to them (or a partial route, which would probably be better for
privacy).

Thanks,
Bastien

On Sun, Oct 11, 2020 at 10:50, ZmnSCPxj  wrote:

> Good morning t-bast,
>
> > Hey Zman,
> >
> > > raising the minimum payment size is another headache
> >
> > It's true that it may (depending on the algorithm) lower the success
> rate of MPP-split.
> > But it's already a parameter that node operators can configure at will
> (at channel creation time),
> > so IMO it's a complexity we have to deal with anyway. Making it dynamic
> shouldn't have a high
> > impact on MPP algorithms (apart from failures while `channel_update`s
> are propagating).
>
> Right, it should not have much impact.
>
> For the most part, when considering the possibility of splicing in the
> future, we should consider that such parameters must be made changeable
> largely.
>
>
> >
> > To be fully honest, my (maybe unpopular) opinion about MPP is that it's
> not necessary on the
> > network's backbone, only at its edges. Once the network matures, I
> expect channels between
> > "serious" routing nodes to be way bigger than the size of individual
> payments. The only places
> > where there may be small or almost-empty channels are between end-users
> (wallets) and
> > routing nodes.
> > If something like Trampoline were to be implemented, MPP would only be
> needed to reach a
> > first routing node (short route), that routing node would aggregate the
> parts and forward as a
> > single HTLC to the next routing node. It would be split again once it
> reaches the other edge
> > of the network (for a short route as well). In a network like this, the
> MPP routes would only have
> > to be computed on a small subset of the network, which makes brute-force
> algorithms completely
> > reasonable and the success rate higher.
>
> This makes me wonder if we really need the onions-per-channel model we
> currently use.
>
> For instance, Tor is basically two-layer: there is a lower-level TCP/IP
> layer where packets are sent out to specific nodes on the network and this
> layer is completely open about where the packet should go, but there is a
> higher layer where onion routing between nodes is used.
>
> We could imitate this, with HTLC packets that openly show the next
> destination node, but once all parts reach the destination node, it decodes
> and turns out to be an onion to be sent to the next destination node, and
> the current destination node is just another forwarder.
>
> HTLC packets could be split arbitrarily, and later nodes could potentially
> merge with the lower CLTV used in subsequent hops.
>
> Or not, *shrug*.
> It has the bad problem of being more expensive on average than purely
> source-based routing, and probably having worse payment latency.
>
>
> For your proposal, how sure is the receiver that the input end of the
> trampoline node is "nearer" to the payer than itself?
>
> Regards,
> ZmnSCPxj
>


Re: [Lightning-dev] Making (some) channel limits dynamic

2020-10-09 Thread Bastien TEINTURIER via Lightning-dev
Hey Zman,

raising the minimum payment size is another headache
>

It's true that it may (depending on the algorithm) lower the success rate
of MPP-split.
But it's already a parameter that node operators can configure at will (at
channel creation time),
so IMO it's a complexity we have to deal with anyway. Making it dynamic
shouldn't have a high
impact on MPP algorithms (apart from failures while `channel_update`s are
propagating).

To be fully honest, my (maybe unpopular) opinion about MPP is that it's not
necessary on the
network's backbone, only at its edges. Once the network matures, I expect
channels between
"serious" routing nodes to be way bigger than the size of individual
payments. The only places
where there may be small or almost-empty channels are between end-users
(wallets) and
routing nodes.
If something like Trampoline were to be implemented, MPP would only be
needed to reach a
first routing node (short route), that routing node would aggregate the
parts and forward as a
single HTLC to the next routing node. It would be split again once it
reaches the other edge
of the network (for a short route as well). In a network like this, the MPP
routes would only have
to be computed on a small subset of the network, which makes brute-force
algorithms completely
reasonable and the success rate higher.

This is an interesting fork of the discussion, but I don't think it's a
good reason to prevent these
parameters from being updated on live channels; what do you think?

Bastien


On Thu, Oct 8, 2020 at 22:05, ZmnSCPxj  wrote:

> Good morning t-bast,
>
> > Please forget about channel jamming, upfront fees et al and simply
> consider the parameters I'm
> > mentioning. It feels to me that these are by nature dynamic channel
> parameters (some of them are
> > even present in `channel_update`, but no-one updates them yet because
> direct peers don't take the
> > update into account anyway). I'd like to raise `htlc_minimum_msat` on
> some big channels because
> > I'd like these channels to be used only for big-ish payments. Today I
> can't, I have to close that
> > channel and open a new one for such a trivial configuration update,
> which is sad.
>
> At the risk of once more derailing the conversation: from the MPP
> trenches, raising the minimum payment size is another headache.
> The general assumption with MPP is that smaller amounts are more likely to
> get through, but if anyone is making a significant bump up in
> `htlc_minimum_msat`, that assumption is upended and we have to reconsider
> if we may actually want to merge multiple failing splits into one, as well
> as considering asymmetric splits (in particular asymmetric presplits)
> because maybe the smaller splits will be unable to pass through the bigger
> channels but the bigger-side split *might*.
>
> On the other hand: one can consider that the use of big payments as an
> aggregation.
> For example: a forwarding node might support smaller `htlc_minimum_msat`,
> then after making multiple such forwards, find that a channel is now
> heavily balanced towards one side or another.
> It can then make a single large rebalance via one of the
> high-`htlc_minimum_msat` channels t-bast is running.
>
> Regards,
> ZmnSCPxj
>


Re: [Lightning-dev] Why should funders always pay on-chain fees?

2020-10-08 Thread Bastien TEINTURIER via Lightning-dev
Thanks (again) Antoine and Zman for your answers,

On the other hand, a quick skim of your proposal suggests that it still
> respects the "initiator pays" principle.
> Basically, the fundee only pays fees for HTLCs they initiated, which is
> not relevant to the above attack (since in the above attack, my node is a
> dead end, you will never send out an HTLC through my channel to rebalance).
> So it should still be acceptable.


I agree, my proposal would have the same result as today's behavior in that
case.
Unless your throw-away node waited for me to add an HTLC in its channel, in
which case I would pay a
part of the fee (since I'm adding that HTLC). That leans towards the first
of my two proposals,
where the funder always pays the "base" fee and htlc fees are split
depending on who proposed the HTLC.
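
Concretely, that first variant could be computed as follows (a sketch; the
172 weight figure for a pending HTLC comes from my earlier email, the rest
is illustrative):

```python
HTLC_WEIGHT = 172  # weight each pending HTLC adds to the commit tx

def commit_fee_split(base_weight, feerate_per_wu,
                     funder_htlcs, fundee_htlcs):
    htlc_fee = HTLC_WEIGHT * feerate_per_wu
    # funder pays the "base" commit-tx fee plus fees for the HTLCs it offered
    funder_fee = base_weight * feerate_per_wu + funder_htlcs * htlc_fee
    # fundee only pays fees for the HTLCs it offered
    fundee_fee = fundee_htlcs * htlc_fee
    return funder_fee, fundee_fee  # each deducted from that side's output
```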

The channel initiator shouldn't have to pay for channel-closing as it's
> somehow a liquidity allocation decision


I agree 100%. Especially since mutual closing should be preferred most of
the time.

That said, a channel closing might be triggered due to a security
> mechanism, like a HTLC to timeout onchain. Thus a malicious counterparty
> can easily loop a HTLC forwarding on an honest peer. Then not cancel it
on-time to force the honest counterparty to pay onchain fees to avoid an
> offered HTLC not being claimed back on time.


Yes, this is an issue, but the only way to fix it today is to never be the
funder and always be the fundee,
and I think that creates unhealthy, asymmetric incentives.

This is a scenario where the other node will only burn you once; if you
notice that behavior you'll
be forced to pay on-chain fees, but you'll ban this peer. And if he opened
the channel to you, he'll
still be paying the "base" fee. I don't think there's a silver bullet here
where you can completely
avoid being bitten by such malicious nodes, but you can reduce exposure and
ban them after the fact.

Another note on using a minimal relay fee: in a potential future where
on-chain fees are always
high and layer 1 is consistently busy, even that minimal relay fee will be
costly. You'll want your
peer to pay for the HTLCs it's responsible for to split the on-chain fee
more fairly. So I believe
moving (slightly) away from the "funder pays all" model is desirable (or at
least it's worth
exploring seriously in order to have a better reason to dismiss it than
"it's simpler").

Does that make sense?

Thanks,
Bastien

On Tue, Oct 6, 2020 at 18:30, Antoine Riard  wrote:

> Hello Bastien,
>
> I'm all in for a model where channel transactions are pre-signed with a
> reasonable minimal relay fee and the adjustment is done by the closer. The
> channel initiator shouldn't have to pay for channel-closing as it's somehow
> a liquidity allocation decision ("My balance could be better allocated
> elsewhere than in this channel").
>
> That said, a channel closing might be triggered due to a security
> mechanism, like a HTLC to timeout onchain. Thus a malicious counterparty
> can easily loop a HTLC forwarding on an honest peer. Then not cancel it
> on-time to force the honest counterparty to pay onchain fees to avoid an
> offered HTLC not being claimed back on time.
>
> AFAICT, this issue is not solved by anchor outputs. A way to disincentivize
> this kind of behavior from a malicious counterparty is an upfront payment
> where the upholding HTLC fee * HTLC block-buffer-before-onchain is higher
> than the cost of going onchain. It should cost more for the counterparty
> to withhold an HTLC than to pay onchain fees to close the channel.
>
> Or can you think about another mitigation for the issue raised above ?
>
> Antoine
>
> On Mon, Oct 5, 2020 at 09:13, Bastien TEINTURIER via Lightning-dev <
> lightning-dev@lists.linuxfoundation.org> wrote:
>
>> Good morning list,
>>
>> It seems to me that the "funder pays all the commit tx fees" rule exists
>> solely for simplicity
>> (which was totally reasonable). I haven't been able to find much
>> discussion about this decision
>> on the mailing list nor in the spec commits.
>>
>> At first glance, it's true that at the beginning of the channel lifetime,
>> the funder should be
>> responsible for the fee (it's his decision to open a channel after all).
>> But as time goes by and
>> both peers earn value from this channel, this rule becomes questionable.
>> We've discovered since
>> then that there is some risk associated with having pending HTLCs
>> (flood-and-loot type of attacks,
>> pinning, channel jamming, etc).
>>
>> I think that *in some cases*, fundees should be paying a portion of the
>> commit-tx on-chain fees,
>> otherwise we may end up with a web-of-trust network where channels wou

Re: [Lightning-dev] Making (some) channel limits dynamic

2020-10-08 Thread Bastien TEINTURIER via Lightning-dev
Good morning Antoine and Zman,

Thanks for your answers!

I was thinking dynamic policy adjustment would be covered by the dynamic
> commitment mechanism proposed by Laolu


I didn't mention this as I think we still have a long-ish way to go before
dynamic commitments
are spec-ed, implemented and deployed, and I think the parameters I'm
interested in don't require
that complexity to be updated.

Please forget about channel jamming, upfront fees et al and simply consider
the parameters I'm
mentioning. It feels to me that these are by nature dynamic channel
parameters (some of them are
even present in `channel_update`, but no-one updates them yet because
direct peers don't take the
update into account anyway). I'd like to raise `htlc_minimum_msat` on some
big channels because
I'd like these channels to be used only for big-ish payments. Today I
can't, I have to close that
channel and open a new one for such a trivial configuration update, which
is sad.

There is no need to stop the channel's operations while you're updating
these parameters, since
they can be updated unilaterally anyway. The only downside is that if you
make your policy stricter,
your peer may send you some HTLCs that you will immediately fail
afterwards; it's only a minor
inconvenience that won't trigger a channel closure.

I'd like to know if other implementations than eclair have specificities
that would make this
feature particularly hard to implement or undesirable.

Thanks,
Bastien

On Tue, Oct 6, 2020 at 18:43, ZmnSCPxj  wrote:

> Good morning Antoine, and Bastien,
>
>
> > Instead of relying on reputation, the other alternative is just to have
> an upfront payment system, where a relay node doesn't have to account for a
> HTLC issuer reputation to decide acceptance and can just forward a HTLC as
> long it paid enough. More, I think it's better to mitigate jamming with a
> fees-based system than a web-of-trust one, less burden on network newcomers.
>
> Let us consider some of the complications here.
>
> A newcomer wants to make an outgoing payment.
> Speculatively, it connects to some existing nodes based on some policy.
>
> Now, since forwarding is upfront, the newcomer fears that the node it
> connected to might not even bother forwarding the payment, and instead just
> fail it and claim the upfront fees.
>
> In particular: how would the newcomer offer upfront fees to a node it is
> not directly channeled with?
> In order to do that, we would have to offer the upfront fees for that
> node, to the node we *are* channeled with, so it can forward this as well.
>
> * We can give the upfront fee outright to the first hop, and trust that if
> it forwards, it will also forward the upfront fee for the next hop.
>   * The first hop would then prefer to just fail the HTLC then and there
> and steal all the upfront fees.
> * After all, the offerer is a newcomer, and might be the sybil of a
> hacker that is trying to tie up its liquidity.
>   The first hop would (1) avoid this risk and (2) earn more upfront
> fees because it does not forward those fees to later hops.
>   * This is arguably custodial and not your keys not your coins applies.
> Thus, it returns us back to tr\*st anyway.
> * We can require that the first hop prove *where* along the route errored.
>  If it provably failed at a later hop, then the first hop can claim more
> as upfront fees, since it will forward the upfront fees to the later hop as
> well.
>   * This has to be enforcable onchain in case the channel gets dropped
> onchain.
> Is there a proposal SCRIPT which can enforce this?
>   * If not enforcable onchain, then there may be onchain shenanigans
> possible and thus this solution might introduce an attack vector even as it
> fixes another.
> * On the other hand, sub-satoshi amounts are not enforcable onchain
> too, and nobody cares, so...
>
> On the other hand, a web-of-tr\*st might not be *that* bad.
>
> One can say that "tr\*st is risk", and consider that the size and age of a
> channel to a peer represents your tr\*st that that peer will behave
> correctly for fast and timely resolution of payments.
> And anyone can look at the blockchain and the network gossip to get an
> idea of who is generally considered tr\*stworthy, and since that
> information is backed by Bitcoins locked in channels, this is reasonably
> hard to fake.
>
> On the other hand, this risks centralization around existing, long-lived
> nodes.
> *Sigh*.
>
> Regards,
> ZmnSCPxj
>


Re: [Lightning-dev] Incremental Routing (Was: Making (some) channel limits dynamic)

2020-10-08 Thread Bastien TEINTURIER via Lightning-dev
If I remember correctly, it looks very similar to how I2P establishes
tunnels; it may be worth
diving into their documentation to fish for ideas.

However in their case the goal is to establish a long-lived tunnel, which
is why it's ok to have
a slow and costly protocol. It feels to me that for payments, this is a lot
of messages and delays;
I'm not sure this is feasible at a reasonable scale...

Cheers,
Bastien

On Wed, Oct 7, 2020 at 19:34, ZmnSCPxj via Lightning-dev <
lightning-dev@lists.linuxfoundation.org> wrote:

> Good morning Antoine, Bastien, and list,
>
> > > Instead of relying on reputation, the other alternative is just to
> have an upfront payment system, where a relay node doesn't have to account
> for a HTLC issuer reputation to decide acceptance and can just forward a
> HTLC as long it paid enough. More, I think it's better to mitigate jamming
> with a fees-based system than a web-of-trust one, less burden on network
> newcomers.
> >
> > Let us consider some of the complications here.
> >
> > A newcomer wants to make an outgoing payment.
> > Speculatively, it connects to some existing nodes based on some policy.
> >
> > Now, since forwarding is upfront, the newcomer fears that the node it
> connected to might not even bother forwarding the payment, and instead just
> fail it and claim the upfront fees.
> >
> > In particular: how would the newcomer offer upfront fees to a node it is
> not directly channeled with?
> > In order to do that, we would have to offer the upfront fees for that
> node, to the node we are channeled with, so it can forward this as well.
> >
> > -   We can give the upfront fee outright to the first hop, and trust
> that if it forwards, it will also forward the upfront fee for the next hop.
> > -   The first hop would then prefer to just fail the HTLC then and
> there and steal all the upfront fees.
> > -   After all, the offerer is a newcomer, and might be the
> sybil of a hacker that is trying to tie up its liquidity.
> > The first hop would (1) avoid this risk and (2) earn more
> upfront fees because it does not forward those fees to later hops.
> >
> > -   This is arguably custodial and not your keys not your coins
> applies.
> > Thus, it returns us back to tr\*st anyway.
> >
> > -   We can require that the first hop prove where along the route the
> error occurred.
> > If it provably failed at a later hop, then the first hop can claim
> more as upfront fees, since it will forward the upfront fees to the later
> hop as well.
> > -   This has to be enforceable onchain in case the channel gets
> dropped onchain.
> > Is there a proposal SCRIPT which can enforce this?
> >
> > -   If not enforceable onchain, then there may be onchain shenanigans
> possible and thus this solution might introduce an attack vector even as it
> fixes another.
> > -   On the other hand, sub-satoshi amounts are not enforceable
> onchain either, and nobody cares, so...
>
> One thing I have been thinking about, but have not proposed seriously yet,
> would be "incremental routing".
>
> Basically, the route of pending HTLCs also doubles as an encrypted
> bidirectional tunnel.
>
> Let me first describe how I imagine this "incremental routing" would look
> like.
>
> First, you offer an HTLC with a direct peer.
> The data with this HTLC includes a point, which the peer will ECDH with
> its own privkey, to form a shared secret.
> You can then send additional messages to that node, which it will decrypt
> using the shared secret as the symmetric encryption key.
> The node can also reply to those messages, by encrypting it with the same
> symmetric encryption key.
> Typically this will be via a stream cipher which is XORed with the real
> data.
>
> One of the messages you can send to that node (your direct peer) would be
> "please send out an HTLC to this peer of yours".
> Together with that message, you could also bump up the value of the HTLC,
> and possibly the CLTV delta, you have with that node.
> This bumping up is the forwarding fee and resolution time you have to give
> to that node in order to have it safely put an HTLC to the next hop.
>
> If there is a problem on the next hop, the node replies back, saying it
> cannot forward the HTLC further.
> Your node can then respond by giving an alternative next hop, which that
> node can reply back is also not available, etc. until you say "give up" and
> that node will just fail the HTLC.
>
> However, suppose the next hop is online and there is enough space in the
> channel.
> That node then establishes the HTLC with the next hop.
>
> At this point, you can then send a message to the direct peer which is
> nothing more than "send the rest of this message as the message to the next
> hop on the same HTLC, then wait for a reply and wrap it and reply to me".
> This is effectively onion-wrapping the message to the peer of your peer,
> and waiting for an onion-wrapped reply from the peer of your peer.
>
> You 

Re: [Lightning-dev] Why should funders always pay on-chain fees?

2020-10-05 Thread Bastien TEINTURIER via Lightning-dev
Hi darosior,

This is true, but we haven't yet solved how to estimate a good enough
`min_relay_fee` that works for end-to-end tx propagation across the network.

We've discussed this during the last two spec meetings, but it's still
unclear whether we'll be able to solve this before package-relay lands in
bitcoin, so I wanted to explore this as a potentially shorter-term solution.
But maybe it's not worth the effort and we should instead focus on anchors
and `min_relay_fee`; we'll see ;)

Bastien

Le lun. 5 oct. 2020 à 15:25, darosior  a écrit :

> Hi Bastien,
>
>
> I think that *in some cases*, fundees should be paying a portion of the
> commit-tx on-chain fees,
> otherwise we may end up with a web-of-trust network where channels would
> only exist between peers
> that trust each other, which is quite limiting (I'm hoping we can do
> better).
>
>
> Agreed.
> However, in an anchor outputs future, the funder only pays for the
> "backbone" fees of the channel, and the fees necessary to secure the
> confirmation of transactions are paid at the second stage by each interested
> party (*). It seems to me to be a reasonable middle-ground.
>
> (*) Credits to ZmnSCPxj for pointing this out to me on IRC.
>
> Darosior
>
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


[Lightning-dev] Why should funders always pay on-chain fees?

2020-10-05 Thread Bastien TEINTURIER via Lightning-dev
Good morning list,

It seems to me that the "funder pays all the commit tx fees" rule exists
solely for simplicity (which was totally reasonable). I haven't been able to
find much discussion about this decision on the mailing list or in the spec
commits.

At first glance, it's true that at the beginning of the channel lifetime,
the funder should be
responsible for the fee (it's his decision to open a channel after all).
But as time goes by and
both peers earn value from this channel, this rule becomes questionable.
We've discovered since
then that there is some risk associated with having pending HTLCs
(flood-and-loot type of attacks,
pinning, channel jamming, etc).

I think that *in some cases*, fundees should be paying a portion of the
commit-tx on-chain fees,
otherwise we may end up with a web-of-trust network where channels would
only exist between peers
that trust each other, which is quite limiting (I'm hoping we can do
better).

Routing nodes may be at risk when they *receive* HTLCs. All the attacks
that steal funds come from
the fact that a routing node has paid downstream but cannot claim the
upstream HTLCs (correct me
if that's incorrect). Thus I'd like nodes to pay for the on-chain fees of
the HTLCs they offer
while they're pending in the commit-tx, regardless of whether they're
funder or fundee.

The simplest way to do this would be to deduct the HTLC cost (172 *
feerate) from the offerer's main output (instead of the funder's main
output, while keeping the base commit tx weight paid by the funder).

A more extreme proposal would be to tie the *total* commit-tx fee to the
channel usage:

* if there are no pending HTLCs, the funder pays all the fee
* if there are pending HTLCs, each node pays a share of the fee
proportional to the number of HTLCs it offered. If Alice offered 1 HTLC and
Bob offered 3 HTLCs, Bob pays 75% of the commit-tx fee and Alice pays 25%.
When the HTLCs settle, the fee is redistributed.
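
To illustrate with the numbers above, here is a toy version of that split
(a sketch only: it assumes Bob is the funder and uses integer division; the
one-liner also shows the per-HTLC cost from the "simplest way" variant):

    # per pending HTLC, the offerer would pay 172 weight units at the
    # current feerate (sat per 1000 weight units)
    htlc_cost = lambda feerate_per_kw: 172 * feerate_per_kw // 1000

    def fee_split(total_fee: int, alice_htlcs: int, bob_htlcs: int):
        # no pending HTLCs: the funder (Bob here) pays everything
        pending = alice_htlcs + bob_htlcs
        if pending == 0:
            return 0, total_fee
        # otherwise each side pays proportionally to the HTLCs it offered
        alice_share = total_fee * alice_htlcs // pending
        return alice_share, total_fee - alice_share

    assert fee_split(1000, 1, 3) == (250, 750)  # Alice 25%, Bob 75%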

This model uses the on-chain fee as collateral for usage of the channel. If
Alice wants to forward
HTLCs through this channel (because she has something to gain - routing
fees), she should be taking
on some of the associated risk, not Bob. Bob will be taking the same risk
downstream if he chooses
to forward.

I believe it also forces the fundee to care about on-chain feerates, which
is a healthy incentive.
It may create a feedback loop between on-chain feerates and routing fees,
which I believe is also
a good long-term thing (but it's hard to predict as there may be negative
side-effects as well).

What do you all think? Is this a terrible idea? Is it okay-ish, but not
worth the additional
complexity? Is it an amazing idea worth a lightning Nobel? Please don't
take any of my claims for granted and challenge them: there may be negative
side-effects I'm completely missing; this is a fragile game of incentives...

Side-note: don't forget to take into account that the fees for HTLC
transactions (second-level txs)
are always paid by the party that broadcasts them (which makes sense). I
still think this is not
enough and can even be abused by fundees in some setups.

Thanks,
Bastien
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


[Lightning-dev] Making (some) channel limits dynamic

2020-10-05 Thread Bastien TEINTURIER via Lightning-dev
Good evening list,

Recent discussions around channel jamming [1] have highlighted again the
need to think twice when configuring your channel parameters. There are
currently parameters that are set once at channel creation that would
benefit a lot from being configurable throughout the lifetime of the
channel, to avoid closing channels when we just want to reconfigure them:

* max_htlc_value_in_flight_msat
* max_accepted_htlcs
* htlc_minimum_msat
* htlc_maximum_msat

Nodes can currently unilaterally update these by applying forwarding
heuristics, but it would be
better to tell our peer about the limits we want to put in place (otherwise
we're wasting a whole
cycle of add/commit/revoke/fail messages for no good reason).

I suggest adding tlv records in `commitment_signed` to tell our channel
peer that we're changing
the values of these fields.
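
As a strawman, such a record could look like this (the type number and the
single-byte type/length encoding are made up for illustration; real TLV
streams use BigSize encodings):

    import struct

    def dynamic_limit_tlv(tlv_type: int, new_value: int) -> bytes:
        # value is a u64, big-endian; type and length are one byte each here
        value = struct.pack(">Q", new_value)
        return bytes([tlv_type, len(value)]) + value

    # e.g. tell our peer the new max_htlc_value_in_flight_msat we enforce
    record = dynamic_limit_tlv(0x01, 5_000_000_000)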

Is someone opposed to that?
Are there other fields you think would need to become dynamic as well?
Do you think that needs a new message instead of using extensions of
`commitment_signed`?

Cheers,
Bastien

[1] https://twitter.com/joostjgr/status/1308414364911841281
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Dynamic Commitments: Upgrading Channels Without On-Chain Transactions

2020-07-21 Thread Bastien TEINTURIER via Lightning-dev
Thanks for sharing this, I think it's the right time to start experimenting
with that kind of feature (especially in light of Taproot and the
package-relay work / transaction-pinning issue).

we can start to experiment with flow-control like
> ideas such as limiting a new channel peer to only a handful of HTLC slots,
> which is then progressively increased based on "good behavior" (or the
> other
> way around as well)


Note that this is already possible today: a node can unilaterally decide its
internal rules for accepting channels/HTLCs. But it's true that it would be
nicer to communicate these rules to your peer to reduce inefficiencies
(e.g. proposing HTLCs that we know will be rejected).

* `open_channel` and `accept_channel` gain a new `channel_type` TLV field.
> * retroactively the OG commitment format is numbered as `channel_type=0`,
> `static_remote_key`, as `channel_type=1`, and anchors as
> `channel_type=2`


ACK! Internally eclair (and I believe lnd as well) has exactly that field in
its DB, with exactly those values.
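
For reference, that numbering as a small sketch (the names are illustrative
guesses, not the actual identifiers used by any implementation):

    from enum import IntEnum

    class ChannelType(IntEnum):
        COMMITMENT_V0 = 0      # the OG commitment format
        STATIC_REMOTE_KEY = 1
        ANCHOR_OUTPUTS = 2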

* an empty `commit_sig` message (one that covers no updates) is
> disallowed, unless the `commit_sig` has a `channel_type`, `c_n` that
> differs from the channel type of the prior commitment, `c_n-1`.


That sounds reasonable, as changing the `channel_type` is actually an update
(it results in changes in the commit tx and/or htlc txs).

An alternative to attaching the `channel_type` message to the `commit_sig`
> and having _that_ kick off the commitment upgrade, we could instead
> possibly
> add a _new_ update message (like `update_fee`) to make the process more
> explicit. In either case, we may want to restrict things a bit by only
> allowing the initiator to trigger a commitment format update.


I prefer that alternative. I think it's better to explicitly signal that we
want to pause the channel while we upgrade the commitment format (and stop
accepting HTLCs while we're updating, like we do once we've exchanged the
`shutdown` message). Otherwise the asynchronicity of the protocol is likely
to create months (years?) of tracking unwanted force-closes because of races
between `commit_sig`s with the new and old commitment format.

Updating the commitment format should be a rare enough operation that we can
afford to synchronize with a two-way `update_commitment_format` handshake,
then
temporarily freeze the channel.

The tricky part will be how we handle "dangling" operations that were sent
by
the remote peer *after* we sent our `update_commitment_format` but *before*
they received it. The simplest choice is probably to have the initiator just
ignore these messages, and the non-initiator enqueue these un-acked messages
and replay them after the commitment format update completes (or just drop
them
and cancel corresponding upstream HTLCs if needed).

Regarding initiating the commitment format update, how do you see this
happening? The funder activates a new feature on his node (e.g.
`option_anchor_outputs`), broadcasts it in `init` and `node_announcement`,
then waits until the remote also activates it in its `init` message, and
reacts to this by triggering the update process?

Thanks,
Bastien

Le mar. 21 juil. 2020 à 03:18, Olaoluwa Osuntokun  a
écrit :

> Hi y'all,
>
> In this post, I'd like to share an early version of an extension to the
> spec
> and channel state machine that would allow for on-the-fly commitment
> _format/type_ changes. Notably, this would allow for us to _upgrade_
> commitment types without any on-chain activity, executed in a
> de-synchronized and distributed manner. The core realization this proposal
> is based on is the fact that the funding output is the _only_ component of a
> channel that's actually set in stone (requires an on-chain transaction to
> modify).
>
>
> # Motivation
>
> (you can skip this section if you already know why something like this is
> important)
>
> First, some motivation. As y'all are likely aware, the current deployed
> commitment format has changed once so far: to introduce the
> `static_remote_key` variant which makes channels safer by sending the funds
> of the party that was force closed on to a plain pubkey w/o any extra
> tweaks
> or derivation. This makes channel recovery safer, as the party that may
> have
> lost data (or can't continue the channel), no longer needs to learn of a
> secret value sent to them by the other party to be able to claim their
> funds. However, as this new format was introduced sometime after the
> initial
> bootstrapping phase of the network, most channels in the wild today _are
> not_ using this safer format.  Transitioning _all_ the existing channels to
> this new format as is would require closing them _all_, generating tens of
> thousands of on-chain transactions (to close, then re-open), not to mention
> chain fees.
>
> With dynamic commitments, users will be able to upgrade their _existing_
> channels to new safer types, without any new on-chain transactions!
>
> Anchor output based 

Re: [Lightning-dev] [bitcoin-dev] RBF Pinning with Counterparties and Competing Interest

2020-06-22 Thread Bastien TEINTURIER via Lightning-dev
Hey ZmnSCPxj,

I agree that in theory this looks possible, but doing it in practice with
accurate control
of what parts of the network get what tx feels impractical to me (but maybe
I'm wrong!).

It feels to me that an attacker who was able to do this would break *any*
off-chain construction that relies on absolute timeouts, so I'm hoping this
is insanely hard to achieve without cooperation from a subset of miners.
Let me know if I'm too optimistic on this!
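
For readers following along, the boundary effect ZmnSCPxj describes below
falls out of the BIP 125 replacement checks; a simplified sketch (rules 3
and 4 only, rules 1, 2 and 5 omitted):

    def rbf_accepts(replacement_fee: int, replacement_vsize: int,
                    evicted_fees: int, min_relay_feerate: int) -> bool:
        # rule 3: must pay at least the absolute fees of what it evicts
        if replacement_fee < evicted_fees:
            return False
        # rule 4: must also pay for its own bandwidth at the min relay feerate
        return replacement_fee - evicted_fees >= replacement_vsize * min_relay_feerate

    # two txs with near-equal fees: neither can replace the other, so the
    # network partitions on whichever tx each node happened to see first
    assert not rbf_accepts(1010, 200, 1000, 1)  # would need >= 1200 sats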

Cheers,
Bastien

Le lun. 22 juin 2020 à 10:15, ZmnSCPxj  a écrit :

> Good morning Bastien,
>
> > Thanks for the detailed write-up on how it affects incentives and
> centralization,
> > these are good points. I need to spend more time thinking about them.
> >
> > > This is one reason I suggested using independent pay-to-preimage
> > > transactions[1]
> >
> > While this works as a technical solution, I think it has some incentives
> issues too.
> > In this attack, I believe the miners that hide the preimage tx in their
> mempool have
> > to be accomplices of the attacker, otherwise they would share that tx
> with some of
> > their peers, and some non-miner nodes would get that preimage tx and be
> able to
> > gossip them off-chain (and even relay them to other mempools).
>
> I believe this is technically possible with current mempool rules, without
> miners cooperating with the attacker.
>
> Basically, the attacker releases two transactions with near-equal fees, so
> that neither can RBF the other.
> It releases the preimage tx near miners, and the timelock tx near
> non-miners.
>
> Nodes at the boundaries between those that receive the preimage tx and the
> timelock tx will receive both.
> However, they will receive one or the other first.
> Which one they receive first will be what they keep, and they will reject
> the other (and *not* propagate the other), because the difference in fees
> is not enough to get past the RBF rules (which require not just a feerate
> increase, but also an increase in absolute fee, of at least the minimum
> relay feerate times transaction size).
>
> Because they reject the other tx, they do not propagate the other tx, so
> the boundary between the two txes is inviolate: neither can get past that
> boundary. This occurs even if everyone is running 100% unmodified Bitcoin
> Core code.
>
> I am not a mempool expert and my understanding may be incorrect.
>
> Regards,
> ZmnSCPxj
>
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] [bitcoin-dev] RBF Pinning with Counterparties and Competing Interest

2020-06-20 Thread Bastien TEINTURIER via Lightning-dev
Hello Dave and list,

Thanks for your quick answers!

The attacker would be broadcasting the latest
> state, so the honest counterparty would only need to send one blind
> child.
>

Exactly: if the attacker submits an outdated transaction he would be
shooting himself in the foot, as we could claim the revocation paths when
seeing the transaction in a block and get all the channel funds (since the
attacker's outputs will be CSV-locked).

The only way your Bitcoin peer will relay your blind child
> is if it already has the parent transaction.
>

That's an excellent point that I missed in the blind CPFP carve-out trick!
I think this makes the
blind CPFP carve-out quite hard in practice (even using getdata - thanks
for detailing that option)...

In the worst-case scenario, where most miners' mempools contain the
attacker's tx and the rest of the network's mempools contain the honest
participant's tx, I think there isn't much we can do.
We're simply missing information, so it looks like the only good solution
is to avoid being in that
situation by having a foot in miners' mempools. Do you think it's
unreasonable to expect at least
some LN nodes to also invest in running nodes in mining pools, ensuring
that they learn about
attackers' txs and can potentially share discovered preimages with the
network off-chain (by
gossiping preimages found in the mempool over LN)? I think that these
recent attacks show that
we need (at least some) off-chain nodes to be somewhat heavily invested in
on-chain operations
(layers can't be fully decoupled with the current security assumptions -
maybe Eltoo will help
change that in the future?).

Thank you for your time!
Bastien



Le ven. 19 juin 2020 à 22:53, David A. Harding  a écrit :

> On Fri, Jun 19, 2020 at 03:58:46PM -0400, David A. Harding via bitcoin-dev
> wrote:
> > I think you're assuming here that the attacker broadcast a particular
> > state.
>
> Whoops, I managed to confuse myself despite looking at Bastien's
> excellent explainer.  The attacker would be broadcasting the latest
> state, so the honest counterparty would only need to send one blind
> child.  However, the blind child will only be relayed by a Bitcoin peer
> if the peer also has the parent transaction (the latest state) and, if
> it has the parent transaction, you should be able to just getdata('tx',
> $txid) that transaction from the peer without CPFPing anything.  That
> will give you the preimage and so you can immediately resolve the HTLC
> with the upstream channel.
>
> Revising my conclusion from the previous post:
>
> I think the strongman argument for the attack would be that the attacker
> will be able to perform a targeted relay of the low-feerate
> preimage-containing transaction to just miners---everyone else on the
> network will receive the honest user's higher-feerate expired-timelock
> transaction.  Unless the honest user happens to have a connection to a
> miner's node, the user will neither be able to CPFP fee bump nor use
> getdata to retrieve the preimage.
>
> Sorry for the confusion.
>
> -Dave
>
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] [bitcoin-dev] RBF Pinning with Counterparties and Competing Interest

2020-06-19 Thread Bastien TEINTURIER via Lightning-dev
Good morning list,

Sorry for being (very) late to the party on that subject, but better late
than never.

A lot of ideas have been thrown at the problem and are scattered across
emails, IRC discussions,
and github issues. I've spent some time putting it all together in one
gist, hoping that it will
help stir the discussion forward as well as give newcomers all the
background they need to ramp up
on this issue and join the discussion, bringing new ideas to the table.

The gist is here, and I'd appreciate your feedback if I have wrongly
interpreted some of the ideas:
https://gist.github.com/t-bast/22320336e0816ca5578fdca4ad824d12

Readers of this list can probably directly skip to the "Future work"
section. I believe my
"alternative proposal" should loosely reflect Matt's proposal from the very
first mail of this
thread; note that I included anchors and new txs only in some places, as I
think they aren't
necessary everywhere.

My current state-of-mind (subject to change as I discover more potential
attacks) is that:

* The proposal to add more anchors and pre-signed txs adds non-negligible
complexity and hurts
small HTLCs, so it would be better if we didn't need it
* The blind CPFP carve-out trick is a one-shot, so you'll likely need to
pay a lot of fees for it to work, which still makes you lose money in case
an attacker targets you (but the money goes to miners, not to the attacker -
unless he is the miner). It's potentially hard to estimate what fee you
should put into that blind CPFP carve-out because you have no idea what the
current fee of the pinned success-transaction package is, so it's unclear
whether that solution will really work in practice
* If we take a step back, the only attack we need to protect against is an
attacker pinning a
preimage transaction while preventing us from learning that preimage for at
least `N` blocks
(see the gist for the complete explanation). Please correct me if that
claim is incorrect as it
will invalidate my conclusion! Thus if we have:
* a high enough `cltv_expiry_delta`
* [off-chain preimage broadcast](
https://github.com/lightningnetwork/lightning-rfc/issues/783)
(or David's proposal to do it by sending txs that can be redeemed via only
the preimage)
* LN hubs (or any party commercially investing in running a lightning node)
participating in
various mining pools to help discover preimages
* decent mitigations for eclipse attacks
* then the official anchor outputs proposal should be safe enough and is
much simpler?

Thank you for reading, I hope the work I put into this gist will be useful
for some of you.

Bastien

Le ven. 24 avr. 2020 à 00:47, Matt Corallo via bitcoin-dev <
bitcoin-...@lists.linuxfoundation.org> a écrit :

>
>
> On 4/23/20 8:46 AM, ZmnSCPxj wrote:
> >>> -   Miners, being economically rational, accept this proposal and
> include this in a block.
> >>>
> >>> The proposal by Matt is then:
> >>>
> >>> -   The hashlock branch should instead be:
> >>> -   B and C must agree, and show the preimage of some hash H (hashlock
> branch).
> >>> -   Then B and C agree that B provides a signature spending the
> hashlock branch, to a transaction with the outputs:
> >>> -   Normal payment to C.
> >>> -   Hook output to B, which B can use to CPFP this transaction.
> >>> -   Hook output to C, which C can use to CPFP this transaction.
> >>> -   B can still (somehow) not maintain a mempool, by:
> >>> -   B broadcasts its timelock transaction.
> >>> -   B tries to CPFP the above hashlock transaction.
> >>> -   If CPFP succeeds, it means the above hashlock transaction exists
> and B queries the peer for this transaction, extracting the preimage and
> claiming the A->B HTLC.
> >>
> >> Note that no query is required. The problem has been solved and the
> preimage-containing transaction should now confirm just fine.
> >
> > Ah, right, so it gets confirmed and the `blocksonly` B sees it in a
> block.
> >
> > Even if C hooks a tree of low-fee transactions on its hook output or
> normal payment, miners will still be willing to confirm this and the B hook
> CPFP transaction without, right?
>
> Correct, once it makes it into the mempool we can CPFP it and all the
> regular sub-package CPFP calculation will pick it
> and its descendants up. Of course this relies on it not spending any other
> unconfirmed inputs.
> ___
> bitcoin-dev mailing list
> bitcoin-...@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] [bitcoin-dev] RBF Pinning with Counterparties and Competing Interest

2020-04-22 Thread Bastien TEINTURIER via Lightning-dev
Hi Antoine and list,

Thanks for raising this. There's one step I'd like to understand further:

* Mallory can broadcast its Pinning Preimage Tx on offered HTLC #2 output
> on Alice's transaction,
> feerate is maliciously chosen to get in network mempools but never to
> confirm. Absolute fee must
> be higher than HTLC-timeout #2, a fact known to Mallory. There is no p2p
> race.
>

Can you detail how the "absolute fee" is computed here?
Doesn't that mean that if this had a higher fee than the htlc-timeout, and
the htlc-timeout fee was
chosen to confirm quickly (that's why we have an annoying `update_fee`),
the htlc-success will confirm
quickly (which makes the problem disappear)?
Because once the commit tx is confirmed, the "package" consists of only the
htlc-success, doesn't it?

I think the devil will be in the details here, so it's worth expanding on
the fee calculation imho.

Thanks!
Bastien

Le mer. 22 avr. 2020 à 10:01, Antoine Riard  a
écrit :

> Personally, I would have waited a bit before going public on this, like
> letting some implementations increase their CLTV deltas, but anyway, it's
> here now.
>
> Mempool-pinning attacks were already discussed on this list [0], but what
> we found is that you can _reverse_ the scenario: it's not the malicious
> party delaying confirmation of the honest party's transactions, but the
> malicious party deliberately sticking its own transactions in the mempool
> to avoid confirmation of the timeout, and therefore gaming the inter-link
> timelocks to provoke an unbalanced settlement for the victim ("you pay
> forward, but don't get paid backward").
>
> How practical these attacks are depends on how you can leverage mempool
> rules to pin your own transaction. What you're looking for is a
> _mempool-obstruction_ trick, i.e. a way to get the honest party's
> transaction bounced off because your transaction is already there.
>
> Beyond disabling RBF on your transaction (with the current protocol, not
> the anchor proposal), there are two likely candidates:
> * BIP 125 rule 3: "The replacement transaction pays an absolute fee of at
> least the sum paid by the original transactions."
> * BIP 125 rule 5: "The number of original transactions to be replaced and
> their descendant transactions which will be evicted from the mempool must
> not exceed a total of 100 transactions."
>
> Let's go through the whole scenario:
> * Mallory and Eve are colluding
> * Eve and Mallory are opening channels with Alice; Mallory does a bit of
> rebalancing to get full incoming capacity, like receiving funds on an
> onchain address through another Alice link
> * Eve sends HTLC #1 to Mallory through Alice, expiring at block 100
> * Eve sends a second HTLC #2 to Mallory through Alice, expiring at block
> 110 on the outgoing link (A<->M) and 120 on the incoming link (E<->A)
> * Before block 100, without cancellation from Mallory, Alice will
> force-close the channel and broadcast her local commitment and HTLC-timeout
> to get back HTLC #1
> * Alice can't broadcast the HTLC-timeout for HTLC #2 as it only expires at
> 110
> * Mallory can broadcast its Pinning Preimage Tx on offered HTLC #2 output
> on Alice's transaction,
> feerate is maliciously chosen to get in network mempools but never to
> confirm. Absolute fee must
> be higher than HTLC-timeout #2, a fact known to Mallory. There is no p2p
> race.
> * As Alice doesn't watch the mempool, she is never going to learn the
> preimage to redeem the incoming HTLC #2
> * At block 110, Alice is going to broadcast HTLC-timeout #2; its feerate
> may be higher, but as its absolute fee is lower, it's going to be rejected
> from network mempools as a replacement for the Pinning Preimage Tx (BIP 125
> rule 3)
> * At block 120, Eve closes the channel and claims HTLC #2 via HTLC-timeout
> * Mallory can RBF its Pinning Preimage Tx with a high-feerate one and get
> it confirmed
>
> The new anchor_output proposal, by disabling RBF, forces the attacker to
> bid on the absolute fee. There is now a risk of losing the fee if the
> Pinning Tx confirms. You may extend your "pinning lease" by ejecting your
> malicious tx, e.g. by conflicting or trimming one of its parents out of the
> mempool, and then reannouncing your preimage tx with a
> lower-feerate-but-still-high-fee before a new block and an honest
> HTLC-timeout rebroadcast.
>
> AFAICT, even with anchor_output deployed, and even assuming empty
> mempools, the success rate and economic rationality of attacks comes down
> to finding such a cheap, reliable "pinning lease extension" trick.
>
> I think any mempool-watching mitigation is at best a cat-and-mouse hack.
> Contrary to nodes advancing towards a global blockchain view thanks to PoW,
> network mempools don't have a convergence guarantee. This means that, in a
> distributed system like bitcoin, nodes don't see events in the same order:
> Alice may observe tx X, tx Y, tx Z and Bob may observe tx Z, tx X, tx Y.
> And the order of events affects whether a future event is going to be
> rejected or not, e.g. if tx Z disables RBF and tx X tries to
> replace Z, 

[Lightning-dev] A better encoding for lightning invoices

2020-04-01 Thread Bastien TEINTURIER via Lightning-dev
Good morning list,

In Bolt 11 we decided to use bech32 to encode lightning invoices.
While bech32 has some nice properties, it isn't well suited for invoices.
The main drawback of Bolt 11 invoices is their size: when you start adding
routing hints or rendezvous onions, invoices become huge and hard to share.

Empirical evidence shows that most lightning transactions are done over
Twitter
(73.41% of all lightning payments in 2019 were made via tweets).
Since Twitter only allows up to 280 characters per tweet, this has severely
impacted
the development of new features for lightning. Anything that made an
invoice bigger
ended up being unused as users were left without any option to share those
invoices.

After several months of research and experimentation at Acinq Research, we
have come
up with a highly efficient invoice encoding, optimized primarily for
Twitter. This has
caused further delays in the development of our iOS wallet, but we felt
this was a
much higher priority.

Our encoding uses an AI-optimized mapping from 11-bit words to Twitter
emojis.
Early results show that emoji invoices are less than half the size of
legacy invoices.

Reckless users are already using this in production:

https://twitter.com/realtbast/status/1245258812279398400?s=20
https://twitter.com/acinq_co/status/1245258815597096960

There is a spec PR available at [1], along with reference eclair code [2].
We plan to release this feature in the next update of our Phoenix wallet
[3].

We'd like feedback from this list on how to improve this further. We
believe the
same encoding could be used to compress the bitcoin blockchain. With more
training
data, we believe our AI-optimized mapping could allow bitcoin blocks to fit
in a
single tweet; we would then be able to use Twitter feeds to store the whole
blockchain.

Cheers,
Bastien

[1] https://github.com/lightningnetwork/lightning-rfc/pull/762
[2] https://github.com/ACINQ/eclair/tree/emoji-encoding
[3] https://phoenix.acinq.co/
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Blind paths revisited

2020-03-13 Thread Bastien TEINTURIER via Lightning-dev
Good morning ZmnSCPxj,

Ok I see what you mean. In that case it would be slightly different from
the current path blinding proposal.
The recipient would need to give the sender all the blinding points for
each hop in the blinded path.
Currently the recipient only gives one blinding point and then each node in
the blinded path is able to
compute the next blinding point and send it to the next node.
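
For context, a sketch of that chained derivation (roughly as in the
proposal; `point_mul` is a hypothetical stand-in for secp256k1 scalar
multiplication):

    from hashlib import sha256

    def next_blinding_point(E_i: bytes, ss_i: bytes) -> bytes:
        # Each node tweaks the current blinding point with a hash of
        # (blinding point || its shared secret) and forwards the result:
        # E(i+1) = SHA256(E(i) || ss(i)) * E(i).
        tweak = sha256(E_i + ss_i).digest()
        return point_mul(E_i, tweak)  # hypothetical secp256k1 scalar mult

With sender-provided secrets instead, the recipient would have to hand the
sender every blinding point up front, since the chain could no longer be
recomputed hop by hop from a single initial point.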

This may work, but I think it deserves a closer look. The security
assumption in multi-hop locks is that the sender can choose every secret
from a random distribution.
these secrets are provided by
the recipient, this may open up some attack vectors on the sender. Maybe
the sender can tweak each
recipient secret with a secret of its own, but one would need to write the
exact maths down to verify that
it works end-to-end.

I'm not saying it's impossible, I'm just saying that it's not trivial at
all and the devil is in the details ;)
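
To give a flavour of the kind of maths that would need checking, a toy
sketch of the linearity involved (numbers illustrative only, not the actual
construction):

    # secp256k1 group order: scalar secrets live in Z_n and combine linearly
    n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
    recipient_secrets = [11, 22, 33]  # blinding scalars chosen by the recipient
    sender_tweak = 77                 # extra scalar chosen by the sender
    payment_scalar = 123456789

    # the recipient can deduct its own scalars but cannot guess the sender's
    # tweak; proving this safe end-to-end is the non-trivial part
    locked = (payment_scalar + sum(recipient_secrets) + sender_tweak) % n
    recovered = (locked - sum(recipient_secrets)) % n
    assert recovered == (payment_scalar + sender_tweak) % n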

Cheers,
Bastien

Le ven. 13 mars 2020 à 01:42, ZmnSCPxj  a écrit :

> Good morning tbast, rusty, and list,
>
>
> > As for ZmnSCPxj's suggestion, I think there is the same kind of issue.
> > The secrets we establish with anonymous multi-hops locks are between the
> *sender*
> > and each of the hops. In the route blinding case, what we're adding are
> secrets
> > between the *recipient* and the hops, and we don't want the sender to be
> able to
> > influence those. It's a kind of reverse Sphinx. So I'm not sure yet the
> recipient
> > could safely contribute to those secrets, but maybe we'll find a nice
> trick in
> > the future!
>
> Not quite?
>
> The recipient knows the secrets from the first recipient-selected-hop to
> itself, and, if it knows the payment scalar, can subtract those secrets
> from the receiver scalar.
> Thus the sender only has to arrange to deliver the payment point to the
> first recipient-selected-hop, the rest of the recipient-selected-hops will
> add their blinding scalars (which come from the recipient), and the final
> recipient can linearly deduct those.
>
> Regards,
> ZmnSCPxj
>
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Blind paths revisited

2020-03-11 Thread Bastien TEINTURIER via Lightning-dev
Good morning list,

Thanks Rusty for following up on this, I'm glad it may be useful for offers!
I certainly want this as well for wallet users' privacy.

I have gathered my proposal in a better format than my previous gist here:
https://github.com/lightningnetwork/lightning-rfc/blob/route-blinding/proposals/route-blinding.md

You will note that I've been able to simplify the scheme a bit compared to
my
gist. It's now very clear that this is exactly the same kind of secret
derivation as what Sphinx does. I still have things I want to add to the
proposal, but at least the crypto part should be ready to review (and I
think
it does need more eyes on it).

Feel free to add comments directly on the branch commits; it may be easier
to review that way. Let me know if you think I should turn it into a draft
PR to facilitate discussions. I kept it vague on some specific parts on
purpose
(such as invoice fields, encrypted blob format); we will learn from early
prototype implementations and enrich the proposal as we go.

A few comments on your previous mails. I have removed the (ab)use of
`payment_secret`, but I think your comment on using the `blinding` to
replace
it would not work because that blinding is known by the next-to-last node
(which computes it and forwards it to the final node).
The goal of `payment_secret` is explicitly to prevent the next-to-last node
from discovering it and using it to probe. But I think that you didn't plan on
doing the blinding the same way I'm doing it, which may explain the
difference.

As for ZmnSCPxj's suggestion, I think there is the same kind of issue.
The secrets we establish with anonymous multi-hops locks are between the
*sender*
and each of the hops. In the route blinding case, what we're adding are
secrets
between the *recipient* and the hops, and we don't want the sender to be
able to
influence those. It's a kind of reverse Sphinx. So I'm not sure yet the
recipient
could safely contribute to those secrets, but maybe we'll find a nice trick
in
the future!

Cheers,
Bastien

Le mer. 11 mars 2020 à 00:22, Rusty Russell  a
écrit :

> ZmnSCPxj  writes:
> > Good morning Rusty, et al.,
> >
> >
> >> Note that this means no payment secret is necessary, since the incoming
> >> `blinding` serves the same purpose. If we wanted to, we could (ab)use
> >> payment_secret as the first 32-bytes to put in Carol's enc1 (i.e. it's
> >> the ECDH for Carol to decrypt enc1).
> >
> > I confess to not reading everything in detail, but it seems to me that,
> with payment point + scalar and path decorrelation, we need to establish a
> secret with each hop anyway (the blinding scalar for path decorrelation),
> so if you need a secret per hop, possibly this could be reused as well?
>
> Indeed, this could be used the same way, though for that secret it can
> simply be placed inside the onion rather than passed alongside.
>
> Cheers,
> Rusty.
> ___
> Lightning-dev mailing list
> Lightning-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
>
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Sphinx Rendezvous Update

2020-02-25 Thread Bastien TEINTURIER via Lightning-dev
>  - Generate the prefill stream and insert it in the correct place,
> before the HMAC. This reconstitutes the original routing packet
>  - Swap out the original onion with the reconstituted onion and forward.
>
> My writeup [1] is an early draft, but I wanted to get it out early to
> give the discussion a basis to work off. I'll revisit it a couple of
> times before opening a PR, but feel free to shout at me if I have
> forgotten to consider something :-)
>
> Cheers,
> Christian
>
> [1]
> https://github.com/lightningnetwork/lightning-rfc/blob/rendez-vous/proposals/0001-rendez-vous.md
> [2] https://gist.github.com/cdecker/ec06452bc470749d9f6d2de73651c5fd
>
> Bastien TEINTURIER via Lightning-dev
>  writes:
> > Good morning list,
> >
> > After exploring decoys [1], which is a cheap way of doing route blinding,
> > I'm turning back to exploring rendezvous.
> > The previous mails on the mailing list mentioned that there was a
> > technicality
> > to make the HMACs check out, but didn't provide a lot of details.
> > The issue is that the filler generation needs to take into account some
> hops
> > that will be added *later*, by the payer.
> >
> > However it is quite easy to work-around, with a few space trade-offs.
> > Let's consider a typical rendezvous setup, where Alice wants to be paid
> via
> > rendezvous Bob, and Carol wants to pay that invoice:
> >
> > Carol -> ... -> Bob -> ... -> Alice
> >
> > If Alice knows how many bytes Carol is going to use for her part of the
> > onion
> > payloads, Alice can easily take them into account when generating her
> > filler by
> > pre-pending the same amount of `0` bytes. It seems reasonable to impose a
> > fixed
> > number of onion bytes for each side of the rendezvous (650 each?) so
> Alice
> > would
> > know that amount.
> >
> > When Carol completes the onion with her part of the route, she simply
> needs
> > to
> > generate filler data for her part of the route following the normal
> Sphinx
> > protocol
> > and apply it to the onion she found in the invoice.
> >
> > But the tricky part is that she needs to give Bob a way of generating the
> > same
> > filler data to unapply it. Then all HMACs correctly check out.
> >
> > I see two ways of doing that:
> >
> > * Carol simply sends that filler (650 bytes), probably via a TLV in
> > `update_add_htlc`.
> > This means every intermediate hop needs to forward that, which is painful
> > and
> > potentially leaking too much data.
> > * Carol provides Bob with the rho keys used to generate her filler, and
> the
> > length
> > used by each hop. This leaks to Bob an upper bound on the number of hops
> > and the
> > number of bytes sent to each hop.
> >
> > Since shift-and-xor kind of crypto is hard to read as equations, but very
> > easy to
> > read as diagrams, I spent a bit of time doing beautiful ASCII art [2].
> > Don't hesitate
> > to have a look at it to find more details about how that works. You can
> > also print
> > that on t-shirts to look fancy at conferences. I also have some sample
> code
> > working
> > in eclair [3] for those who can read Scala without getting headaches.
> >
> > Are there other tricks we can use to reconcile both sides of the onion at
> > Bob's?
> > Maybe cdecker (or someone else) has an ace up his sleeve for me there? :)
> >
> > One important thing to note is that rendezvous on normal onions will be
> > costly to
> > integrate into invoices: it takes 1366 bytes to include one onion, and if
> > we want
> > to handle route failures or let the sender use multi-part, we will need
> to
> > have a
> > handful of pre-encrypted onions in the invoice (hence a few kB, which may
> > not be
> > practical for QR codes).
> >
> > But I did mention before that doing rendezvous on the trampoline onion
> > could have
> > better properties [4]. When doing that, having Carol transmit her filler
> > data only
> > to Bob, via the outer onion payload becomes practical and doesn't leak
> > information.
> > Multi-part would work with a single trampoline onion in the invoice (~500
> > bytes),
> > because nodes can do MPP between trampoline nodes thanks to the
> > onion-in-onion
> > construction. We simply need to decide the size of the trampoline onion
> to
> > allow
> > each side of the rendezvous to be able to insert a number of hops we're
> > comfortable
> with. You can find more details in the "Rendezvous on a trampoline"

[Lightning-dev] Sphinx Rendezvous Update

2020-02-24 Thread Bastien TEINTURIER via Lightning-dev
Good morning list,

After exploring decoys [1], which is a cheap way of doing route blinding,
I'm turning back to exploring rendezvous.
The previous mails on the mailing list mentioned that there was a
technicality
to make the HMACs check out, but didn't provide a lot of details.
The issue is that the filler generation needs to take into account some hops
that will be added *later*, by the payer.

However it is quite easy to work around, with a few space trade-offs.
Let's consider a typical rendezvous setup, where Alice wants to be paid via
rendezvous Bob, and Carol wants to pay that invoice:

Carol -> ... -> Bob -> ... -> Alice

If Alice knows how many bytes Carol is going to use for her part of the
onion
payloads, Alice can easily take them into account when generating her
filler by
pre-pending the same amount of `0` bytes. It seems reasonable to impose a
fixed
number of onion bytes for each side of the rendezvous (650 each?) so Alice
would
know that amount.
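
A toy sketch of that zero-prefix trick (the per-hop stream offsets of real
Sphinx are elided; `stream` is a placeholder PRG, not the actual rho-keyed
cipher):

    from hashlib import sha256

    def stream(rho_key: bytes, length: int) -> bytes:
        # placeholder PRG standing in for the rho-keyed cipher stream
        out, counter = b"", 0
        while len(out) < length:
            out += sha256(rho_key + counter.to_bytes(4, "big")).digest()
            counter += 1
        return out[:length]

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    SENDER_BYTES = 650                         # reserved for Carol's hops
    alice_part = stream(b"alice-filler", 650)  # stand-in for Alice's filler
    filler = bytes(SENDER_BYTES) + alice_part  # pre-pended zero bytes

    # XOR being linear, Carol's stream only touches her reserved prefix,
    # and Bob can regenerate it to unapply it so all HMACs check out:
    carol_stream = stream(b"carol-rho", SENDER_BYTES) + bytes(650)
    onion_region = xor(filler, carol_stream)
    assert xor(onion_region, carol_stream) == filler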

When Carol completes the onion with her part of the route, she simply needs
to
generate filler data for her part of the route following the normal Sphinx
protocol
and apply it to the onion she found in the invoice.

But the tricky part is that she needs to give Bob a way of generating the
same
filler data to unapply it. Then all HMACs correctly check out.

I see two ways of doing that:

* Carol simply sends that filler (650 bytes), probably via a TLV in
`update_add_htlc`.
This means every intermediate hop needs to forward that, which is painful
and potentially leaks too much data.
* Carol provides Bob with the rho keys used to generate her filler, and the
length
used by each hop. This leaks to Bob an upper bound on the number of hops
and the
number of bytes sent to each hop.

Since shift-and-xor kind of crypto is hard to read as equations, but very
easy to
read as diagrams, I spent a bit of time doing beautiful ASCII art [2].
Don't hesitate
to have a look at it to find more details about how that works. You can
also print
that on t-shirts to look fancy at conferences. I also have some sample code
working
in eclair [3] for those who can read Scala without getting headaches.

Are there other tricks we can use to reconcile both sides of the onion at
Bob's?
Maybe cdecker (or someone else) has an ace up his sleeve for me there? :)

One important thing to note is that rendezvous on normal onions will be
costly to
integrate into invoices: it takes 1366 bytes to include one onion, and if
we want
to handle route failures or let the sender use multi-part, we will need to
have a
handful of pre-encrypted onions in the invoice (hence a few kB, which may
not be
practical for QR codes).
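
Back-of-the-envelope, assuming for illustration 4 pre-encrypted onions and
the binary capacity of the largest QR code at low error correction:

    ONION_SIZE = 1366      # bytes per pre-encrypted onion
    QR_MAX_BYTES = 2953    # version 40 QR code, byte mode, L error correction
    onions = 4             # hypothetical number needed for retries / MPP
    payload = onions * ONION_SIZE
    assert payload == 5464 and payload > QR_MAX_BYTES  # doesn't fit in one QR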

But I did mention before that doing rendezvous on the trampoline onion
could have
better properties [4]. When doing that, having Carol transmit her filler
data only
to Bob, via the outer onion payload becomes practical and doesn't leak
information.
Multi-part would work with a single trampoline onion in the invoice (~500
bytes),
because nodes can do MPP between trampoline nodes thanks to the
onion-in-onion
construction. We simply need to decide the size of the trampoline onion to
allow
each side of the rendezvous to be able to insert a number of hops we're
comfortable
with. You can find more details in the "Rendezvous on a trampoline" section
of [2].

I'm really interested in other approaches to making rendezvous work with
the HMACs
correctly checking out. If people on this list have drafts, intuitions or
random
thoughts about possible constructions, please share them, I'd be happy to
dive into
them to explore alternatives to the one I found, hoping we can make this
work and
provide this feature to our users in the near future.

A small side-note on Hornet. Hornet does offer many features that I believe
we will
want in Lightning in the future. It may seem that doing a custom rendezvous
scheme
is a waste of time since we'll ditch it once/if we implement Hornet. While
that is
true in the long run, I believe that if we're able to find a rendezvous
scheme that
isn't too much work to implement, it makes sense to have something
available soon-ish.
Hornet will likely be a longer-term effort that we won't get as soon as
we'd like
(especially since it will probably require a network-wide update). But who
knows, maybe we will see that we are trying to create many features that
are already
built into Hornet
(rendezvous, directed message support, etc) and will decide to implement
Hornet sooner
than expected?

Cheers,
Bastien

[1]
https://lists.linuxfoundation.org/pipermail/lightning-dev/2020-January/002435.html
[2] https://gist.github.com/t-bast/ab42a7f52eb2e73105557957c8359601
[3] https://github.com/ACINQ/eclair/tree/sphinx-rendezvous
[4]
https://lists.linuxfoundation.org/pipermail/lightning-dev/2019-October/002237.html
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org