Re: [Lightning-dev] [RFC] Simplified (but less optimal) HTLC Negotiation

2021-04-26 Thread Rusty Russell
Matt Corallo  writes:
>> On Apr 24, 2021, at 01:56, Rusty Russell  wrote:
>> 
>> Matt Corallo  writes:
>>> Somehow I missed this thread, but I did note in a previous meeting - these 
>>> issues are great fodder for fuzzing. We’ve had a fuzzer which aggressively 
>>> tests for precisely these types of message-non-delivery-and-resending 
>>> production desync bugs for several years. When it initially landed it 
>>> forced several rewrites of parts of the state machine, but quickly 
>>> exhausted the bug fruit (though catches other classes of bugs occasionally 
>>> as well). The state machine here is really not that big - while I agree 
>>> simplifying it where possible is nice, ripping things out to replace them 
>>> with fresh code (which would need similar testing) is probably not the most 
>>> obvious decrease in complexity.
>> 
>> It's historically had more bugs than anything else in the protocol.  We
>> literally found another one in feerate negotiation since the last
>> c-lightning release :(
>> 
>> I'd rather not have bugs than try to catch them all.
>
> I promise it’s much less work than it sounds like, and avoids having to debug
> these things based on logs, which is a huge pain :). Definitely less work
> than a new state machine :).

But the entire point of this proposal is that it's a subset of the
existing state machine?

>> You could propose a splice (or update to anchors, or whatever) any time
>> when it's your turn, as long as you haven't proposed any other updates.
>> That's simple!
>
> I presume you’d need to take it a few steps further - if the last
> message received required a response CS/RAA, you must still wait until
> things have settled down. I guess it also depends on the exact
> semantics of a “turn based” message protocol - if you received some
> updates and a signature, are you allowed to add more updates after you
> send your CS/RAA (then you have a good chunk of today’s complexity),
> or do you have to wait until they send you back their last RAA (in
> which case presumably they aren’t allowed to include anything else as
> then they’d be able to monopolize update windows). In the first case
> you still have the same issues of today, in the second less so, but
> you’re doing a similar “ok, just pause updates and wait for things to
> settle”, I think.

Yes, as the original proposal stated: you propose changes, send
commitment_signed, receive revoke_and_ack and commitment_signed, then
send revoke_and_ack.  Then both sides are in sync, and the other side
has a turn.

The only "twist" is that if it's your turn and you receive an update,
you can either reply with a "yield" message, or ignore it.
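The turn-taking flow described above can be sketched in a few lines. This is an illustrative sketch only: the class, enum, and method names are invented for exposition and are not spec messages (other than the `yield` idea, which is in the proposal):

```python
# Sketch of the simplified turn-based update flow: propose changes, send
# commitment_signed, receive revoke_and_ack + commitment_signed, send
# revoke_and_ack; then the turn flips.  Names are illustrative, not spec.

from enum import Enum, auto

class Turn(Enum):
    OURS = auto()
    THEIRS = auto()

class Channel:
    def __init__(self, our_turn: bool):
        self.turn = Turn.OURS if our_turn else Turn.THEIRS
        self.pending_updates = []

    def propose_updates(self, updates):
        """Propose changes and send commitment_signed; only on our turn."""
        if self.turn is not Turn.OURS:
            raise RuntimeError("not our turn")
        self.pending_updates = list(updates)
        # ... send the updates, then commitment_signed ...

    def on_revoke_and_ack_and_commit(self):
        """Peer replied with revoke_and_ack + commitment_signed; we send our
        revoke_and_ack.  Both sides are now in sync and the turn flips."""
        self.pending_updates = []
        self.turn = Turn.THEIRS

    def on_update_received_during_our_turn(self, yield_turn: bool):
        """The 'twist': on receiving an update during our turn, we may
        either send a yield message (giving up the turn) or ignore it."""
        if yield_turn:
            self.turn = Turn.THEIRS
```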

>> Instead, *both* sides have to send a splice message to synchronize, and
>> they can only do so once all in-flight changes have cleared. You have
>> to resolve simultaneous splice attempts (we use "highest feerate"
>> tiebreak by node_id), and keep track of this stage while you clear
>> in-flight changes.
>
> Isn’t that pretty similar? Discard one splice proposal deterministically (ok 
> that’s new) and the loser has to store their proposal in a holding cell for 
> later (which they have to do in turn-based anyway). Logic to check if there’s 
> unsettled things in RAA handling is pretty similar to turn-based, and logic 
> to reject other messages is the same as shutdown handling today.

Nope, with the simplified protocol you can `update_splice` at any time
instead of your normal update, since both sides are already in sync.
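For the simultaneous-proposal case in the current (non-turn-based) draft, the "highest feerate, tiebreak by node_id" rule could look roughly like this. Field names and the direction of the node_id comparison are assumptions for illustration; the only requirement is that both peers discard the same proposal:

```python
# Sketch of the deterministic splice tiebreak described above: keep the
# proposal with the higher funding_feerate_perkw, breaking ties by
# node_id so both peers agree on which proposal to discard.

def winning_splice(a, b):
    """a, b: (funding_feerate_perkw, node_id) tuples for the two
    simultaneous splice proposals; returns the one that survives."""
    fee_a, id_a = a
    fee_b, id_b = b
    if fee_a != fee_b:
        return a if fee_a > fee_b else b
    # Same feerate: any fixed node_id ordering works, as long as both
    # sides apply the same one.  (Direction chosen arbitrarily here.)
    return a if id_a > id_b else b
```

The loser's proposal then goes into a holding cell to be re-proposed once the winning splice completes.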

>> Here's the subset of requirements from the draft which relate to this:
>> 
>> The sender:
>> - MUST NOT send another splice message while a splice is being negotiated.
>> - MUST NOT send a splice message after sending uncommitted changes.
>> - MUST NOT send other channel updates until splice negotiation has completed.
>> 
>> The receiver:
>> - MUST respond with a `splice` message of its own if it has not already.
>> - MUST NOT reply with `splice` until all commitment updates are resolved by 
>> both peers.
>
> Probably use “committed” not “resolved”. “Resolved” sounds like “no pending 
> HTLCs left”.

Yes, and in fact this protocol was flawed and had to be revised, as it
did not actually mean both sides were committed in the case of
simultaneous splice proposals :(
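A sender-side guard for the first two requirements quoted above might be sketched like this (exception and parameter names are illustrative, not from the draft):

```python
# Sketch of the sender-side checks for the draft splice requirements:
# no second splice while one is being negotiated, and no splice after
# sending uncommitted changes.

class SpliceError(Exception):
    pass

def check_can_send_splice(splice_in_progress: bool,
                          uncommitted_changes: bool) -> None:
    """Raise if sending a splice message now would violate the draft."""
    if splice_in_progress:
        raise SpliceError("splice already being negotiated")
    if uncommitted_changes:
        raise SpliceError("uncommitted changes pending")
```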

>> - MUST use the higher of the two `funding_feerate_perkw` as the feerate for
>>  the splice.
>
> If we like turn based, why not just deterministically throw out one splice? :)

Because while I am going to implement turn-based, I'm not sure if anyone
else is.  I guess we'll see?

Cheers,
Rusty.
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Increase channel-jamming capital requirements by not counting dust HTLCs

2021-04-26 Thread Matt Corallo

> I looked into this more closely, and as far as I understand it, the spec
> already states that you should not count dust HTLCs:

Oops! We do the same thing, we will fix that.

On 4/26/21 11:03, Eugene Siegel wrote:
> I would have to think more about the issue where it's not possible to
> lower the feerate though.  That seems like a spec issue?

There is no current in-protocol way to change the dust limit, so the issue
doesn't apply here.


Re: [Lightning-dev] Increase channel-jamming capital requirements by not counting dust HTLCs

2021-04-26 Thread Eugene Siegel
Lnd counts dust + trimmed HTLCs towards max_accepted_htlcs.  We definitely
shouldn't be counting dust towards that amount.  I would have to think more
about the issue where it's not possible to lower the feerate though.  That
seems like a spec issue?


Re: [Lightning-dev] Increase channel-jamming capital requirements by not counting dust HTLCs

2021-04-26 Thread Bastien TEINTURIER
I looked into this more closely, and as far as I understand it, the spec
already states that you should not count dust HTLCs:

    if result would be offering more than the remote's `max_accepted_htlcs`
    HTLCs, in the remote commitment transaction:
      - MUST NOT add an HTLC.

Note that it clearly says "in the remote commitment transaction", which
means you don't count HTLCs that are dust or trimmed.

That matches eclair's behavior: we don't count dust HTLCs towards that
limit. Is lnd including them in that count? What about other
implementations? If that's the case, that can simply be fixed in lnd
without any spec change IMHO.

Note that this also excludes trimmed HTLCs from the count, which means that
nodes that set `max_accepted_htlcs` to 483 may be exposed to the issue I
described earlier (impossible to lower the feerate because the HTLC count
would become greater than the limit).

Bastien

On Sat, Apr 24, 2021 at 10:01, Bastien TEINTURIER wrote:

> You're right, I was thinking about trimmed HTLCs (which can re-appear in
> the commit tx if you lower the feerate via update_fee).
>
> Dust HTLCs will never appear in the commit tx regardless of subsequent
> update_fees, so Eugene's suggestion could make sense!
>
> On Sat, Apr 24, 2021 at 06:02, Matt Corallo wrote:
>
>> The update_fee message does not, as far as I recall, change the dust
>> limit for outputs in a channel (though I’ve suggested making such a change).
>>
>> On Apr 23, 2021, at 12:24, Bastien TEINTURIER  wrote:
>>
>> 
>> Hi Eugene,
>>
>> The reason dust HTLCs count for the 483 HTLC limit is because of
>> `update_fee`. If you don't count them and exceed the 483 HTLC limit,
>> you can't lower the fee anymore because some HTLCs that were previously
>> dust won't be dust anymore, and you may end up with more than 483 HTLC
>> outputs in your commitment, which opens the door to other kinds of
>> attacks.
>>
>> This is the first issue that comes to mind, but there may be other
>> drawbacks if we dig into this enough with an attacker's mindset.
>>
>> Bastien
>>
>> On Fri, Apr 23, 2021 at 17:58, Eugene Siegel wrote:
>>
>>> I propose a simple mitigation to increase the capital requirement of
>>> channel-jamming attacks. This would prevent an unsophisticated attacker
>>> with low capital from jamming a target channel.  It seems to me that this
>>> is a *free* mitigation without any downsides (besides code-writing), so I'd
>>> like to hear other opinions.
>>>
>>> In a commitment transaction, we trim dust HTLC outputs.  I believe that
>>> the reason for the 483 HTLC limit each side has in the spec is to prevent
>>> commitment tx's from growing unreasonably large, and to ensure they are
>>> still valid tx's that can be included in a block.  If we don't include dust
>>> HTLCs in this calculation, since they are not on the commitment tx, we
>>> still allow 483 (x2) non-dust HTLCs to be included on the commitment tx.
>>> There could be a configurable limit on the number of outstanding dust
>>> HTLCs, but the point is that it doesn't affect the non-dust throughput of
>>> the channel.  This raises the capital requirement of channel-jamming so
>>> that each HTLC must be non-dust, rather than spamming 1 sat payments.
>>>
>>> Interested in others' thoughts.
>>>
>>> Eugene (Crypt-iQ)