On 4/27/21 01:04, Rusty Russell wrote:
Matt Corallo writes:
On Apr 24, 2021, at 01:56, Rusty Russell wrote:
Matt Corallo writes:
I promise it’s much less work than it sounds like, and avoids having to debug
these things based on logs, which is a huge pain :). Definitely less work than
OK, draft is up:
https://github.com/lightningnetwork/lightning-rfc/pull/867
I have to actually implement it now (though the real win comes from
making it compulsory, but that's a fair way away).
Notably, I added the requirement that update_fee messages be on their
own. This means there'
Matt Corallo writes:
> Somehow I missed this thread, but I did note in a previous meeting - these
> issues are great fodder for fuzzing. We’ve had a fuzzer which aggressively
> tests for precisely these types of message-non-delivery-and-resending
> production desync bugs for several years. When
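The fuzzer described above drops and redelivers protocol messages to shake out desync bugs. As a toy illustration (this is our sketch, not the actual LDK fuzzer; all names are hypothetical), a harness can let the fuzzer-supplied bytes decide per message whether the "network" delivers it, then retransmit everything unacknowledged on "reconnect" and assert both sides converge:

```python
# Toy sketch of fuzzing message non-delivery and resending (hypothetical,
# not the real fuzzer): fuzzer input decides which messages get dropped;
# after a simulated reconnect, retransmission must restore a consistent state.

class Peer:
    def __init__(self):
        self.state = 0      # toy channel state: a running sum of updates
        self.outbox = []    # messages sent but not yet acknowledged

    def send(self, delta):
        self.outbox.append(delta)
        return delta

    def receive(self, delta):
        self.state += delta

def run_case(data: bytes) -> None:
    a, b = Peer(), Peer()
    for byte in data:
        msg = a.send(byte % 5)
        if byte & 0x80:            # fuzzer-chosen bit: deliver or drop
            b.receive(msg)
            a.outbox.remove(msg)
    # "Reconnect": retransmit every unacknowledged message exactly once.
    for msg in list(a.outbox):
        b.receive(msg)
        a.outbox.remove(msg)
    # Desync check: every update must have been applied exactly once.
    assert b.state == sum(d % 5 for d in data), "desync detected"
```

A real harness would drive two actual node implementations and compare their commitment states, but the shape is the same: deterministic replay from fuzzer input, with an invariant checked after reconnection.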
> On Apr 20, 2021, at 17:19, Rusty Russell wrote:
>
> After consideration, I prefer alternation. It fits better with the
> existing implementations, and it is more efficient than reflection for
> optimized implementations.
>
> In particular, you have a rule that says you can send updates and
>
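The alternation idea above can be sketched minimally (our illustration only, not the draft spec's wire protocol; names are hypothetical): only the side holding the turn may queue updates, and it must explicitly yield the turn before the other side can act.

```python
# Minimal turn-taking sketch (hypothetical, not the spec): updates are only
# accepted from the side currently holding the turn.

class Channel:
    def __init__(self, first_mover: str):
        self.turn = first_mover   # "A" or "B"
        self.updates = []         # ordered log of (sender, update)

    def add_update(self, who: str, update):
        if who != self.turn:
            raise PermissionError(f"not {who}'s turn")
        self.updates.append((who, update))

    def yield_turn(self, who: str):
        if who != self.turn:
            raise PermissionError(f"not {who}'s turn")
        self.turn = "B" if who == "A" else "A"
```

The appeal is that at any moment exactly one side may change the channel state, so both commitment transactions evolve through the same known sequence; the cost is the quiet time while the turn passes from one end to the other.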
Rusty Russell writes:
>> This is in stark contrast to the leader-based approach, where both
>> parties can just keep queuing updates without silent times to
>> transferring the token from one end to the other.
>
> You've swayed me, but it needs new wire msgs to indicate "these are
> your proposals
> And you don't get the benefit of the turn-taking approach, which is that
> you can have a known state for fee changes. Even if you change it to
> have opener always the leader, it still has to handle the case where
> incoming changes are not allowed under the new fee regime (and similar
> issu
Bastien TEINTURIER writes:
> It's a bit tricky to get it right at first, but once you get it right you
> don't need to touch that code again and everything runs smoothly. We're
> pretty close to that state, so why would we want to start from scratch?
> Or am I missing something?
Well, if you've
To be honest the current protocol can be hard to grasp at first (mostly
because it's hard to reason about two commit txs being constantly out of
sync), but from an implementation's point of view I'm not sure your
proposals are simpler.
One of the benefits of the current HTLC state machine is that
I wonder if we should just go the tried-and-tested leader-based
mechanism:
1. The node with the lexicographically lower node_id is determined to
be the leader.
2. The leader receives proposals for changes from itself and the peer
and orders them into a logical sequence of changes
3. The
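The first two steps above can be sketched as follows (a hypothetical illustration, not the concrete proposal from the thread; names are ours). Since node_ids are compressed public keys, comparing their equal-length lowercase hex encodings lexicographically gives the same order as comparing the raw bytes:

```python
# Sketch of leader election and proposal ordering (hypothetical names):
# step 1 picks the lexicographically lower node_id as leader; step 2 has
# the leader merge proposals from both sides into one logical sequence.

def choose_leader(node_id_a: str, node_id_b: str) -> str:
    """Return the leader: the lexicographically lower node_id (hex string)."""
    return min(node_id_a, node_id_b)

class LeaderLog:
    """The leader's single ordered log of channel updates."""
    def __init__(self):
        self.sequence = []  # (index, origin, update)

    def propose(self, origin: str, update: dict) -> int:
        # Each proposal, whether from the leader or the peer, is simply
        # assigned the next slot; there is no concurrent-ordering ambiguity.
        index = len(self.sequence)
        self.sequence.append((index, origin, update))
        return index
```

The point of this shape is that conflicts cannot arise: both sides apply updates in the leader's sequence order, at the cost of a round trip for the non-leader's proposals.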
Hi all,
Our HTLC state machine is optimal, but complex[1]; the Lightning
Labs team recently did some excellent work finding another place the spec
is insufficient[2]. Also, the suggestion for more dynamic changes makes it
more difficult, usually requiring forced quiescence.
The following