Re: [Lightning-dev] Mailing List Future

2023-12-04 Thread Matt Corallo
Ah, there was also some concern over archiving of a Discourse instance, which we'd hoped would be trivial through "mailing list mode", but apparently that may not be the case. Further thoughts on that are welcome.


Matt

On 12/4/23 12:32 PM, Matt Corallo wrote:
On the call the vote was split between "host our own ML, or maybe use groups.io" and "some discourse 
instance, probably delvingbitcoin.org", with a weak majority for the second.


With that in mind, we're gonna give delvingbitcoin.org a try, at least for two weeks, and then we 
can come back to this and discuss!


Some folks indicated they'd like to interact with it via the supposed "mailing list mode", but sadly AJ indicated it may not be super reliable (and it seems to be disabled on delvingbitcoin.org). I've asked AJ to enable it so people can try it out, but we'll see if it works. Sadly, if this mode isn't enabled, it appears you cannot post via email.


See also the discussion of using a category vs. tags at 
https://delvingbitcoin.org/t/can-we-get-a-lightning-protocol-design-category-wg/242/3


Because this is new, we'll have another discussion in two weeks about whether people are happy with this as a solution, whether we should host our own Discourse instance (not sure why we would), or whether we should go back to the mailing list plan. Those who have a view can also respond to this email and I'll raise it in the meeting.


Matt

On 11/26/23 8:51 AM, Matt Corallo wrote:
During the last meeting it came up that the mailing list here will likely shut down somewhere 
around the end of the year. We listed basically the following options for future discussion forums:


* Google Groups as a mailing list host. One question was whether it's friendly to subscribing without a Gmail account, which may be limiting for some.
* GitHub Discussions on the lightning org. One question is whether the moderation tools there are sufficient.
* Someone (probably me) hosts a Mailman instance and we use another mailing list. I dug into this a bit and am happy to do it, on the one condition that the ML remains fully moderated, though that doesn't seem like a substantial burden today. One question is whether spam-foldering will be an issue, but with full moderation I'm pretty confident this will be tolerable, at least for those with email hosted anywhere but Microsoft lol.
* A discourse instance (either we host one or we use delvingbitcoin, which AJ hosts and has 
previously offered to set up a lightning section on).


There was some loose discussion, but I'm not sure there's going to be a clear conclusion. Thus, I think we should simply vote at the next meeting after a time-boxed minute or two of discussion. If anyone has any thoughts or would like to have their voice heard, they can join the meeting in a week and a day, or respond here and I'll do my best to repeat the views expressed on the call.


Matt
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Lightning Address in a Bolt 12 world

2023-11-20 Thread Matt Corallo




On 11/20/23 6:53 AM, Andy Schroder wrote:


- I would omit suggesting to use DoH from the spec. DoH seems a bit centralized to me and that's 
up to the client to decide what to do. DNS itself is a hierarchically distributed system, so 
there is redundancy built into it (which has its flaw at the root nameserver / ICANN level) and 
it seems to me like DoH is taking much of that distributed design away. It seems like if you are 
concerned about your ISP snooping your traffic, you should use a tunnel so that your traffic is obfuscated; that way things are done at the IP level and not way up at the HTTPS level. Are you resorting to DoH because many ISPs block DNSSEC record traffic through their networks? Either way, how you query DNS seems like it should be left up to the client and not really part of the spec.


It is, but it's worth mentioning in large part because almost certainly ~all implementations will use it. While I agree that it'd be very nice to not use it, in order to do so clients would need to (a) actually be able to query TXT records, which isn't supported in standard operating system libraries, so it would probably mean DoH to 127.0.0.53 or so, and (b) trust the resolver's DNSSEC validation, which means having some confidence it's local, and not a coffee shop's/etc.


Given the level of trust you have to have here in the DNS resolution, it's almost certainly best to cross-validate across at least a couple of DoH services, unless you are validating the DNSSEC chain yourself (which I'd really strongly prefer as the best solution here, but I'm unaware of any open source code to do so).


delv, part of bind9, does recursive DNSSEC validation locally: 
https://manpages.ubuntu.com/manpages/jammy/en/man1/delv.1.html


Sadly this doesn't really solve the issue. Lightning nodes need to be able to get a DNSSEC tree in a 
cross-platform way (which "just call delv" is not) ideally without relying on sending UDP directly 
at all. What this really means is that we'll eventually want to use the RFC 9102 CHAIN serialization 
format and put that in the node_announcement, but to do that we need some kind of (cross-platform 
library) client software which can take a serialized CHAIN and validate it. I'm unaware of any such 
software, though in theory it shouldn't be that hard to write.


Matt
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Lightning Address in a Bolt 12 world

2023-11-20 Thread Matt Corallo




On 11/17/23 8:28 AM, Andy Schroder wrote:

# Comments


## General

- I agree that options 3 and 1 should be used. However, you say "clients (mobile wallets) would first make a DNS request corresponding to option 3, and if that fails, they would fallback to option 1. Domain owners would implement only one of those two options, depending on their DNS capabilities." It seems to me, though, that if we query for a specific user at the domain and it exists, we use Option 3, but if it doesn't, then we fall back to Option 1. So they can actually implement both, depending on the user.


Yea, option 1 could reasonably take precedence, however the tradeoff in that case would be revealing 
*who* you're paying, not just which service you're paying through, to any (honest but curious) DoH 
resolver.


- I would omit suggesting to use DoH from the spec. DoH seems a bit centralized to me and that's up 
to the client to decide what to do. DNS itself is a hierarchically distributed system, so there is 
redundancy built into it (which has its flaw at the root nameserver / ICANN level) and it seems to 
me like DoH is taking much of that distributed design away. It seems like if you are concerned about 
your ISP snooping your traffic, you should use a tunnel so that your traffic is obfuscated; that way things are done at the IP level and not way up at the HTTPS level. Are you resorting to DoH because many ISPs block DNSSEC record traffic through their networks? Either way, how you query DNS seems like it should be left up to the client and not really part of the spec.


It is, but it's worth mentioning in large part because almost certainly ~all implementations will use it. While I agree that it'd be very nice to not use it, in order to do so clients would need to (a) actually be able to query TXT records, which isn't supported in standard operating system libraries, so it would probably mean DoH to 127.0.0.53 or so, and (b) trust the resolver's DNSSEC validation, which means having some confidence it's local, and not a coffee shop's/etc.


Given the level of trust you have to have here in the DNS resolution, it's almost certainly best to cross-validate across at least a couple of DoH services, unless you are validating the DNSSEC chain yourself (which I'd really strongly prefer as the best solution here, but I'm unaware of any open source code to do so).
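
For illustration, a minimal sketch of that cross-validation using the public Cloudflare and Google DoH JSON endpoints; the record name is a made-up example, and this only guards against a single dishonest resolver, it is not a substitute for validating the DNSSEC chain:

import json
import urllib.request

# Public resolvers that speak the DoH JSON API (application/dns-json).
DOH_ENDPOINTS = [
    "https://cloudflare-dns.com/dns-query",
    "https://dns.google/resolve",
]

def fetch_txt(endpoint, name):
    """Fetch the TXT record set for `name` from one DoH resolver."""
    req = urllib.request.Request(
        f"{endpoint}?name={name}&type=TXT",
        headers={"accept": "application/dns-json"},
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        answers = json.load(resp).get("Answer", [])
    # TXT data comes back quoted; 16 is the TXT record type code.
    return {a["data"].strip('"') for a in answers if a.get("type") == 16}

def cross_validated_txt(name):
    """Only accept TXT data that every queried resolver agrees on."""
    results = [fetch_txt(ep, name) for ep in DOH_ENDPOINTS]
    agreed = set.intersection(*results)
    if not agreed:
        raise ValueError(f"resolvers disagree (or no record) for {name}")
    return agreed

# Hypothetical record name, for illustration only:
print(cross_validated_txt("bob._lnaddress.domain.com"))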


- Is there a minimum path/offer expiry? Wondering if those might be way lower than the DNS record 
expiry? Seems like we want the expiry of the DNS record to be less than the path expiry because 
there will be some latency in propagating a record with a new blinded path or offer through an 
organization's redundant nameservers. Also, when you create the offer with an expiry and add it to 
the DNS record, that expiry is part of the offer data itself and relative to when it was *created*, 
but your local computer will have an expiry that starts at the time you *fetched* that DNS record.


While offers can expire arbitrarily, I anticipate users of this system will fetch long-lived offers, 
eg ones that expire in a year or two.



- Will we hit any DNS record length limits with the blinded path or offer that 
need to be considered?


We certainly shouldn't. You can put a full PGP key in the DNS:

$ dig 4f31fa50e5bd5ff45684e560fc24aeee527a43739ab611c49c51098a._openpgpkey.mattcorallo.com type61



## Option 1

I think you should also add an option for a type that allows different users to have different blinded paths. From a scalability perspective, one may not want to serve all users on the same node. Also, the user may use their own lightning node instead of the domain operator's.


| hostname   | record type | value   | TTL |
||-|-|-|
| bob._lnaddress.domain.com. | TXT | path: | path expiry |


The statement

"Note that Alice cannot verify that the offer she receives is really from Bob: she has to TOFU 
(trust on first use). But that's something we fundamentally cannot fix if the only information Alice 
has is `bob@domain.com`. However, Alice obtains a signed statement from Bob's LSP that attests that `bob@domain.com` is associated with the Bolt12 offer she receives. If she later discovers that this
was invalid, she can publish that proof to show the world that Bob's LSP is malicious"


- I think this should be revised to not use "LSP". We don't necessarily know if it is an LSP or a self-hosted domain and node. It could be an LSP, but maybe not.


- I think we should say that we cannot verify the offer *if* Bob does not self-host and uses an LSP. If Bob self-hosts, we know it's from Bob if DNSSEC validates and the root nameservers and the TLD nameservers are honest.


- I think there should be a QR code format that accompanies this so that phone apps can easily 
validate the path (or for option 3 below the offer).



## Option 2


- Seems to be a bad idea

Re: [Lightning-dev] Lightning Address in a Bolt 12 world

2023-11-20 Thread Matt Corallo



On 11/17/23 9:54 AM, Tony Giorgio via Lightning-dev wrote:

Bastien,

Maybe I'm misunderstanding option 1 or perhaps it's not clear. Are you saying with that option, all 
it takes is a single DNS entry for "serviceprovider.com" to service unlimited users? The 
interchanging between "bob" and "domain owner" is a bit confusing in your gist. I think it would be 
beneficial to make it clear what actions a service provider needs to do on behalf of their users vs 
what a user w/ a domain needs to do.


That is true for option 3. I think in practice the design that makes the most sense is both option 1 
and option 3. This allows service providers who host many users and are lazy or have some super 
restrictive DNS setup to add a single TXT record and call it a day. For those willing to run a copy 
of BIND on a server somewhere, they can avoid the extra round-trip and put a few million records in 
a zonefile no problem.
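
For a rough sense of what that looks like in practice, a sketch of generating such a zonefile; the `<user>._lnaddress` naming and putting the raw offer in the TXT payload are illustrative assumptions, not the actual record format:

def write_lnaddress_zone(path, origin, offers):
    """Write one TXT record per user into a BIND-style zonefile fragment."""
    with open(path, "w") as f:
        f.write(f"$ORIGIN {origin}.\n$TTL 3600\n")
        for user, offer in offers.items():
            # A single TXT character-string maxes out at 255 bytes, so split
            # long offers into multiple quoted chunks.
            chunks = [offer[i:i + 255] for i in range(0, len(offer), 255)]
            quoted = " ".join(f'"{c}"' for c in chunks)
            f.write(f"{user}._lnaddress IN TXT {quoted}\n")

# A few million users is only a few hundred MB of zonefile text.
write_lnaddress_zone("db.domain.com", "domain.com",
                     {"bob": "lno1...", "alice": "lno1..."})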


Another nice thing about the dual approach (or just option 1) is that a user using any standard 
non-custodial (or even custodial) lightning wallet that supports BOLT12 can just take their offer, 
put it in their own domain name as a single TXT record, and now they have a nice address on their 
own domain without trusting their LSP at all.


I like the idea that a semi-technical user with any domain can do this without setting up a web 
server, but I will say from my own personal dev experience, I don't know a single dev that has ever 
programmatically set up thousands to millions of DNS entries in real time. If the goal is to get 
devs to migrate from LNURL to LNDNS and to migrate from bolt11 to bolt12, I'm afraid the hurdle is 
going to get even bigger. So if I'm mistaken and option 1 just has a one-time DNS entry that services all users, then please let me know. I like the proposal otherwise.


I have (bitcoinheaders.net), and it's really incredibly easy - almost easier than setting up a web server to handle lnurl. That said, I do agree with you, less because it's hard, more because many enterprises are scared to run their own DNS infra, in part because when it fails you're totally hosed. For them, having option 3 support in senders would ensure it's just as easy as LNURL, at least as long as their node software supports the relevant configuration.



Thank you,

Tony


On 11/17/23 3:08 AM, Bastien TEINTURIER wrote:

Hi Tony,

> For completeness, would you be willing to demonstrate what it might
> look like if it were bolt12 in the normal LNURL way?

Not sure that would provide "completeness", but I guess it would work
quite similarly, but instead of putting data in DNS records, that data
would be directly on files on the service provider's web server and
fetched over HTTPS, thus revealing the user's IP address and who they
want to pay.

> At scale, that would be much more difficult for LNURL service
> providers to implement for their potentially thousands to millions
> of users.

Why would that be the case? I was told handling a few million entries in
a zonefile isn't a challenge at all. And it is only necessary if the
service provider absolutely wants to only support option 3. With option
1, the service provider has a single DNS record to create. If the
service provider doesn't need to hide its node_id, the blinded path can
be empty, which guarantees that the record never expires (unless they
want to change their node_id).

On the client-side, this is very simple as well: clients should use DoH,
so they simply make HTTPS requests (no need for deep integration in the
DNS stack). Clients should first try option 3, and if that query doesn't
return a result, they fallback to option 1. This only needs to happen
once in a while, after that they can save the offer in their contact
list and reuse it until it expires, at which point they make the queries
again.

Cheers,
Bastien
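
A minimal sketch of that client-side flow (record names are placeholders, and `resolve_txt` stands in for whatever DoH lookup the wallet actually uses):

import time
from typing import Dict, Optional, Tuple

def resolve_txt(name: str) -> Optional[Tuple[str, int]]:
    """Placeholder for the wallet's DoH TXT lookup; returns (record, ttl_seconds)
    or None if the name doesn't resolve. Assumed to exist elsewhere."""
    raise NotImplementedError

_offer_cache: Dict[str, Tuple[str, float]] = {}

def lookup_offer(user: str, domain: str) -> str:
    """Resolve user@domain: user-specific record first, then the domain-wide
    record, caching whatever we find until its TTL expires."""
    key = f"{user}@{domain}"
    cached = _offer_cache.get(key)
    if cached and cached[1] > time.time():
        return cached[0]
    # Hypothetical record names, for illustration only.
    for name in (f"{user}._lnaddress.{domain}", f"_lnaddress.{domain}"):
        result = resolve_txt(name)
        if result is not None:
            record, ttl = result
            _offer_cache[key] = (record, time.time() + ttl)
            return record
    raise LookupError(f"no lightning address record found for {key}")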

On Thu, Nov 16, 2023 at 18:52, Tony Giorgio wrote:

Bastien,

For completeness, would you be willing to demonstrate what it might look 
like if it were
bolt12 in the normal LNURL way? The concern is mostly what you brought up 
with relying on DNS
entries instead of a typical web server. At scale, that would be much more 
difficult for LNURL
service providers to implement for their potentially thousands to millions 
of users.

Something like Oblivious HTTP could be promising to remove the knowledge of 
IP for some of the
larger LNURL service providers.

Tony

On 11/16/23 7:51 AM, Bastien TEINTURIER wrote:

Good morning list,

Most of you already know and love lightning addresses [1].
I wanted to revisit that protocol, to see how we could improve it and
fix its privacy drawbacks, while preserving the nice UX improvements
that it brings.

I have prepared a gist with three different designs that achieve those
goals [2]. I'm attaching the contents of that gist below. I'd like to
turn it into a bLIP once I collect enough feedback from the community.

I don't

Re: [Lightning-dev] [bitcoin-dev] Full Disclosure: CVE-2023-40231 / CVE-2023-40232 / CVE-2023-40233 / CVE-2023-40234 "All your mempool are belong to us"

2023-10-23 Thread Matt Corallo




On 10/20/23 7:43 PM, Peter Todd wrote:

On Fri, Oct 20, 2023 at 09:55:12PM -0400, Matt Corallo wrote:

Quite the contrary. Schnorr signatures are 64 bytes, so in situations like
lightning where the transaction form is deterministically derived, signing 100
extra transactions requires just 6400 extra bytes. Even a very slow 100KB/s
connection can transfer that in 64ms; latency will still dominate.


Lightning today isn't all that much data, but multiply it by 100 and we
start racking up data enough that people may start to have to store a really
material amount of data for larger nodes and dealing with that starts to be
a much bigger pain than when we're talking about a GiB or twenty.


We are talking about storing ephemeral data here, HTLC transactions and
possibly commitment transactions. Since lightning uses disclosed secrets to
invalidate old state, you do not need to keep every signature from your
counterparty indefinitely.


Mmm, fair point, yes.


RBF has a minimum incremental relay fee of 1sat/vByte by default. So if you use
those 100 pre-signed transaction variants to do nothing more than sign every
possible minimum incremental relay, you've covered a range of 1sat/vByte to
100sat/vByte. I believe that is sufficient to get mined for any block in
Bitcoin's entire modern history.

CPFP meanwhile requires two transactions, and thus extra bytes. Other than edge
cases with very large transactions in low-fee environments, there's no
circumstance where CPFP beats RBF.
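
As a quick sanity check of the arithmetic in the quoted argument (64-byte Schnorr signatures, Bitcoin Core's default 1 sat/vB incremental relay fee), an illustrative sketch:

SCHNORR_SIG_BYTES = 64
MIN_INCREMENTAL_RELAY = 1  # sat/vB, Bitcoin Core's default incrementalrelayfee

def presigned_variant_cost(num_variants):
    """Extra signature bytes to store/transfer, and the top of the feerate
    range covered when each variant bumps by the minimum increment."""
    extra_bytes = num_variants * SCHNORR_SIG_BYTES
    max_feerate = num_variants * MIN_INCREMENTAL_RELAY
    return extra_bytes, max_feerate

print(presigned_variant_cost(100))  # (6400, 100): 6400 bytes, covering 1..100 sat/vB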


What I was referring to is that if we have the SIGHASH_SINGLE|ANYONECANPAY
we can combine many HTLC claims into one transaction, vs pre-signing means
we're stuck with a ton of individual transactions.


Since SIGHASH_SINGLE requires one output per input, the savings you get by
combining multiple SIGHASH_SINGLE transactions together aren't very
significant. Just 18 bytes for nVersion, nLockTime, and the txin and txout size
fields. The HTLC-timeout transaction is 166.5 vBytes, so that's a savings of
just 11%


Yep, it's not a lot, but for a thing that's inherently super chain-spammy, it's still quite nice.


Of course, if you _do_ need to fee bump and add an additional input, that input
takes up space, and you'll probably need a change output. At which point you
again would probably have been better off with a pre-signed transaction.

You are also assuming there's lots of HTLC's in flight that need to be spent.
That's very often not the case.


In general, yes, in force-close cases often there's been some failure which is repeated in several 
HTLCs :).


More generally, I think we're getting lost here - this isn't really a material change for lightning's trust model - it's already the case that a peer that is willing to put a lot of work in can probably steal your money, and there's now just one more way they can do that. We really don't need to rush to "fix lightning" here; we can do it right and fix it at the ecosystem level. It shouldn't be the case that a policy restriction results in both screwing up an L2 network *and* miners getting paid less. That's a policy bug.


Matt
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] [bitcoin-dev] Full Disclosure: CVE-2023-40231 / CVE-2023-40232 / CVE-2023-40233 / CVE-2023-40234 "All your mempool are belong to us"

2023-10-22 Thread Matt Corallo




On 10/20/23 8:15 PM, Peter Todd wrote:

On Fri, Oct 20, 2023 at 05:05:48PM -0400, Matt Corallo wrote:

Sadly this only is really viable for pre-anchor channels. With anchor
channels the attack can be performed by either side of the closure, as the
HTLCs are now, at max, only signed SIGHASH_SINGLE|ANYONECANPAY, allowing you
to add more inputs and perform this attack even as the broadcaster.

I don't think it's really viable to walk that change back to fix this, as it
also fixed plenty of other issues with channel usability and important
edge-cases.


What are anchor outputs used for other than increasing fees?

Because if we've pre-signed the full fee range, there is simply no need for
anchor outputs. Under any circumstance we can broadcast a transaction with a
sufficiently high fee to get mined.



Indeed, that is what anchor outputs are for. Removing the pre-set feerate solved a number of issues with edge-cases and helped address the fee-inflation attack. Now, just using pre-signed transactions doesn't have to re-introduce those issues - as long as the broadcaster gets to pick which of the possible transactions they broadcast, it's just another transaction of theirs.


Still, I'm generally really dubious of the multiple pre-signed transaction thing: (a) it would mean more fee overhead (not the end of the world for a force-closure, but it sucks to have all these individual transactions rolling around and be unable to batch), but more importantly (b) it's a bunch of overhead to keep track of a ton of variants across a sufficiently granular set of feerates for it to not result in substantially overspending on fees.


Like I mentioned in the previous mail, this is really a policy bug - we're talking about a transaction pattern that might well happen where miners aren't getting the optimal value in transaction fees (potentially by a good bit). This needs to be fixed at the policy/Bitcoin Core layer, not in the lightning world (as much as it's pretty resource-intensive to fix in the policy domain, I think).


Matt
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] [bitcoin-dev] Full Disclosure: CVE-2023-40231 / CVE-2023-40232 / CVE-2023-40233 / CVE-2023-40234 "All your mempool are belong to us"

2023-10-22 Thread Matt Corallo




On 10/20/23 9:25 PM, Peter Todd wrote:

On Fri, Oct 20, 2023 at 09:03:49PM -0400, Matt Corallo wrote:

What are anchor outputs used for other than increasing fees?

Because if we've pre-signed the full fee range, there is simply no need for
anchor outputs. Under any circumstance we can broadcast a transaction with a
sufficiently high fee to get mined.



Indeed, that is what anchor outputs are for. Removing the pre-set feerate
solved a number of issues with edge-cases and helped address the
fee-inflation attack. Now, just using pre-signed transactions doesn't have
to re-introduce those issues - as long as the broadcaster gets to pick which
of the possible transactions they broadcast, it's just another transaction of
theirs.

Still, I'm generally really dubious of the multiple pre-signed transaction
thing, (a) it would mean more fee overhead (not the end of the world for a
force-closure, but it sucks to have all these individual transactions
rolling around and be unable to batch), but more importantly (b) it's a bunch
of overhead to keep track of a ton of variants across a sufficiently
granular set of feerates for it to not result in substantially overspending
on fees.


Quite the contrary. Schnorr signatures are 64 bytes, so in situations like
lightning where the transaction form is deterministically derived, signing 100
extra transactions requires just 6400 extra bytes. Even a very slow 100KB/s
connection can transfer that in 64ms; latency will still dominate.


Lightning today isn't all that much data, but multiply it by 100 and we start racking up data enough 
that people may start to have to store a really material amount of data for larger nodes and dealing 
with that starts to be a much bigger pain than when we're talking about a GiB or twenty.



RBF has a minimum incremental relay fee of 1sat/vByte by default. So if you use
those 100 pre-signed transaction variants to do nothing more than sign every
possible minimum incremental relay, you've covered a range of 1sat/vByte to
100sat/vByte. I believe that is sufficient to get mined for any block in
Bitcoin's entire modern history.

CPFP meanwhile requires two transactions, and thus extra bytes. Other than edge
cases with very large transactions in low-fee environments, there's no
circumstance where CPFP beats RBF.


What I was referring to is that if we have the SIGHASH_SINGLE|ANYONECANPAY we can combine many HTLC 
claims into one transaction, vs pre-signing means we're stuck with a ton of individual transactions.


Matt
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] [bitcoin-dev] Full Disclosure: CVE-2023-40231 / CVE-2023-40232 / CVE-2023-40233 / CVE-2023-40234 "All your mempool are belong to us"

2023-10-22 Thread Matt Corallo
Sadly this only is really viable for pre-anchor channels. With anchor channels the attack can be 
performed by either side of the closure, as the HTLCs are now, at max, only signed 
SIGHASH_SINGLE|ANYONECANPAY, allowing you to add more inputs and perform this attack even as the 
broadcaster.


I don't think it's really viable to walk that change back to fix this, as it also fixed plenty of other issues with channel usability and important edge-cases.


I'll highlight that fixing this issue on the lightning end isn't really the right approach generally - we're talking about a case where a lightning counterparty engineered a transaction broadcast ordering such that miners are *not* including the optimal set of transactions for fee revenue. Given such a scenario exists (and it's not unrealistic to think someone might wish to engineer such a situation), the fix ultimately needs to lie with Bitcoin Core (or other parts of the mining stack).


Now, fixing this in the Bitcoin Core stack is no trivial deal - the reason is that to keep enough history to fix it, Bitcoin Core would need unbounded memory. However, it's not hard to imagine a simple external piece of software which monitors the mempool for transactions which were replaced out but which may be able to re-enter the mempool later with other replacements, and stores them on disk. From there, this software could optimize the revenue of block template selection, while also accidentally fixing this issue.


Matt

On 10/20/23 2:35 PM, Matt Morehouse via bitcoin-dev wrote:

I think if we apply this presigned fee multiplier idea to HTLC spends,
we can prevent replacement cycles from happening.

We could modify HTLC scripts so that *both* parties can only spend the
HTLC via presigned second-stage transactions, and we can always sign
those with SIGHASH_ALL.  This will prevent the attacker from adding
inputs to their presigned transaction, so (AFAICT) a replacement
cycling attack becomes impossible.

The tradeoff is more bookkeeping and less fee granularity when
claiming HTLCs on chain.

On Fri, Oct 20, 2023 at 11:04 AM Peter Todd via bitcoin-dev wrote:


On Fri, Oct 20, 2023 at 10:31:03AM +, Peter Todd via bitcoin-dev wrote:

As I have suggested before, the correct way to do pre-signed transactions is to
pre-sign enough *different* transactions to cover all reasonable needs for
bumping fees. Even if you just increase the fee by 2x each time, pre-signing 10
different replacement transactions covers a fee range of 1024x. And you
obviously can improve on this by increasing the multiplier towards the end of
the range.


To be clear, when I say "increasing the multiplier", I mean, starting with a
smaller multiplier at the beginning of the range, and ending with a bigger one.

Eg feebumping with fee increases pre-signed for something like:

 1.1
 1.2
 1.4
 1.8
 2.6
 4.2
 7.4

etc.

That would use most of the range for smaller bumps, as a %, with larger % bumps
reserved for the end where our strategy is changing to something more
"scorched-earth"

And of course, applying this idea properly to commitment transactions will mean
that the replacements may have HTLCs removed, when their value drops below the
fees necessary to get those outputs mined.

Note too that we can sign simultaneous variants of transactions that deduct the
fees from different parties' outputs. Eg Alice can give Bob the ability to
broadcast higher and higher fee txs, taking the fees from Bob's output(s), and
Bob can give Alice the same ability, taking the fees from Alice's output(s). I
haven't thought through how this would work with musig. But you can certainly
do that with plain old OP_CheckMultisig.

--
https://petertodd.org 'peter'[:-1]@petertodd.org
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] [bitcoin-dev] Full Disclosure: CVE-2023-40231 / CVE-2023-40232 / CVE-2023-40233 / CVE-2023-40234 "All your mempool are belong to us"

2023-10-20 Thread Matt Corallo
That certainly helps, yes, and I think many nodes do something akin to this already, but I'm not 
sure we can say that the problem has been fixed if the victim has to spend way more than the 
prevailing mempool fees (and potentially burn a large % of their HTLC value) :).


Matt

On 10/19/23 12:23 PM, Matt Morehouse wrote:

On Wed, Oct 18, 2023 at 12:34 AM Matt Corallo via bitcoin-dev wrote:


There appears to be some confusion about this issue and the mitigations. To be clear, the deployed mitigations are not expected to fix this issue; it's arguable whether they provide anything more than a PR statement.

There are two discussed mitigations here - mempool scanning and transaction 
re-signing/re-broadcasting.

Mempool scanning relies on regularly checking the mempool of a local node to see if we can catch the replacement cycle mid-cycle. It only works if we see the first transaction before the second transaction replaces it.

Today, a large majority of lightning nodes run on machines with a Bitcoin node 
on the same IP
address, making it very clear what the "local node" of the lightning node is. 
An attacker can
trivially use this information to connect to said local node and do the 
replacement quickly,
preventing the victim from seeing the replacement.

More generally, however, similar discoverability is true for mining pools. An 
attacker performing
this attack is likely to do the replacement attack on a miner's node directly, 
potentially reducing
the reach of the intermediate transaction to only miners, such that the victim 
can never discover it
at all.

The second mitigation is similarly pathetic. Re-signing and re-broadcasting the victim's transaction in an attempt to get it to miners even if it's been removed may work, if the attacker is super lazy and didn't finish writing their attack system. If the attacker is connected to a large majority of hashrate (which has historically been fairly doable), they can simply do their replacement in a cycle aggressively and arbitrarily reduce the probability that the victim's transaction gets confirmed.


What if the honest node aggressively fee-bumps and retransmits the
HTLC-timeout as the CLTV delta deadline approaches, as suggested by
Ziggie?  Say, within 10 blocks of the deadline, the honest node starts
increasing the fee by 1/10th the HTLC value for each non-confirmation.
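
A toy sketch of that schedule (the 10-block window and 1/10th-of-value step are the illustrative numbers above, not anyone's shipped defaults):

def scorched_earth_fee(htlc_value_sat, base_fee_sat, blocks_to_deadline, failed_bumps):
    """Fee for the next HTLC-timeout rebroadcast: the normal fee until the CLTV
    deadline is near, then add 1/10th of the HTLC value per non-confirmation,
    capped at the HTLC value itself."""
    if blocks_to_deadline > 10:
        return base_fee_sat
    bumped = base_fee_sat + failed_bumps * (htlc_value_sat // 10)
    return min(bumped, htlc_value_sat)

# 1,000,000 sat HTLC, 2,000 sat base fee, 5 blocks left, 3 bumps so far:
print(scorched_earth_fee(1_000_000, 2_000, 5, 3))  # 302000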

This "scorched earth" approach may cost the honest node considerable
fees, but it will cost the attacker even more, since each attacker
replacement needs to burn at least as much as the HTLC-timeout fees,
and the attacker will need to do a replacement every time the honest
node fee bumps.

I think this fee-bumping policy will provide sufficient defense even
if the attacker is replacement-cycling directly in miners' mempools
and the victim has no visibility into the attack.



Now, the above is all true in a spherical cow kinda world, and the P2P network has plenty of slow nodes and strange behavior. It's possible that these mitigations might, by some stroke of luck, happen to catch such an attack and prevent it, because something took longer than the attacker intended or whatever. But, that's a far cry from any kind of material "fix" for the issue.

Ultimately the only fix for this issue will be when miners keep a history of 
transactions they've
seen and try them again later, when they may be able to re-enter the mempool after an attack like this.

Matt

On 10/16/23 12:57 PM, Antoine Riard wrote:

(cross-posting: the mempool issues identified expose lightning channels to loss-of-funds risks; other multi-party bitcoin apps might be affected)
multi-party bitcoin apps might be affected)

Hi,

End of last year (December 2022), amid technical discussions on eltoo payment 
channels and
incentives compatibility of the mempool anti-DoS rules, a new transaction-relay 
jamming attack
affecting lightning channels was discovered.

After careful analysis, it turns out this attack is practical and immediately 
exposed lightning
routing hops carrying HTLC traffic to loss of funds security risks, both legacy 
and anchor output
channels. A potential exploitation plausibly happening even without network 
mempools congestion.

Mitigations have been designed, implemented and deployed by all major lightning 
implementations
during the last months.

Please find attached the release numbers, where the mitigations should be 
present:
- LDK: v0.0.118 - CVE-2023-40231
- Eclair: v0.9.0 - CVE-2023-40232
- LND: v.0.17.0-beta - CVE-2023-40233
- Core-Lightning: v.23.08.01 - CVE-2023-40234

While replacement cycling attacks have been neither observed nor reported in the wild during the last ~10 months, nor experimented with in real-world conditions on Bitcoin mainnet, a functional test is available exercising the affected lightning channel against the Bitcoin Core mempool (26.0 release cycle).

It is understood that a simple replacement cycling attack does not demand 
privileged capabilities

Re: [Lightning-dev] Full Disclosure: CVE-2023-40231 / CVE-2023-40232 / CVE-2023-40233 / CVE-2023-40234 "All your mempool are belong to us"

2023-10-18 Thread Matt Corallo
to all the network mempools, the honest lightning node is able to catch in flight the unconfirmed HTLC-preimage, before its subsequent mempool replacement. The preimage can be
extracted from the second-stage HTLC-preimage and used to fetch the off-chain inbound HTLC with a 
cooperative message or go on-chain with it to claim the accepted HTLC output.


Implemented and deployed by Eclair and LND.
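
The core of that mitigation is scanning the witness data of unconfirmed transactions for 32-byte items that SHA256-hash to a watched payment hash; a minimal sketch (fetching mempool transactions, e.g. over bitcoind RPC or ZMQ, is left out):

import hashlib

def find_preimages(witness_items, watched_payment_hashes):
    """Return any 32-byte witness items whose SHA256 matches a watched payment hash."""
    return {
        item for item in witness_items
        if len(item) == 32 and hashlib.sha256(item).digest() in watched_payment_hashes
    }

# Usage sketch: for every transaction seen in the mempool, flatten its input
# witnesses into witness_items and check them against the hashes of HTLCs we
# have offered; any hit can be used to claim the inbound HTLC.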

CLTV Expiry Delta: With every jammed block comes an absolute fee cost paid by the attacker, a risk 
of the HTLC-preimage being detected or discovered by the honest lightning node, or the HTLC-timeout 
to slip in a winning block template. Bumping the default CLTV delta hardens the odds of success of a 
simple replacement cycling attack.


Default setting: Eclair 144, Core-Lightning 34, LND 80 and LDK 72.

## Affected Bitcoin Protocols and Applications

From my understanding, the following list of Bitcoin protocols and applications could be affected by new denial-of-service vectors under some level of network mempool congestion. Neither tests nor advanced review of specifications (when available) have been conducted for each of them:

- on-chain DLCs
- coinjoins
- payjoins
- wallets with time-sensitive paths
- peerswap and submarine swaps
- batch payouts
- transaction "accelerators"

Inviting their developers, maintainers and operators to investigate how replacement cycling attacks 
might disrupt their in-mempool chain of transactions, or fee-bumping flows at the shortest delay. 
Simple flows and non-multi-party transactions should not be affected to the best of my understanding.


## Open Problems: Package Malleability

Pinning attacks have been known for years as a practical vector to compromise the safety of lightning channel funds, under different scenarios (cf. current BIP331's motivation section). Mitigations at the mempool level have been designed, discussed and are under implementation by the community (ancestor package relay + nversion=3 policy). Ideally, they should constrain a pinning attacker to always attach a high-feerate package (commitment + CPFP) to replace the honest package, or allow an honest lightning node to overbid a malicious pinning package and get its time-sensitive transaction optimistically included in the chain.


Replacement cycling attacks seem to offer a new way to neutralize the design goals of package relay and its companion nversion=3 policy, where an attacker package-RBFs an honest package out of the mempool to subsequently double-spend its own high-fee child with a transaction unrelated to the channel. As the remaining commitment transaction is pre-signed with a minimal relay fee, it can be evicted out of the mempool.


A functional test exercising a simple replacement cycling of a lightning channel commitment 
transaction on top of the nversion=3 code branch is available:
https://github.com/ariard/bitcoin/commits/2023-10-test-mempool-2


## Discovery

In 2018, the issue of static fees for pre-signed lightning transactions is made more widely known, 
the carve-out exemption in mempool rules to mitigate in-mempool package limits pinning and the 
anchor output pattern are proposed.


In 2019, bitcoin core 0.19 is released with carve-out support. Continued discussion of the anchor 
output pattern as a dynamic fee-bumping method.


In 2020, draft of anchor output submitted to the bolts. Initial finding of economic pinning against 
lightning commitment and second-stage HTLC transactions. Subsequent discussions of a 
preimage-overlay network or package-relay as mitigations. Public call made to inquire more on potential other transaction-relay jamming attacks affecting lightning.


In 2021, initial work in bitcoin core 22.0 of package acceptance. Continued discussion of the 
pinning attacks and shortcomings of current mempool rules during community-wide online workshops. 
Later the year, in light of all issues for bitcoin second-layers, a proposal is made about killing 
the mempool.


In 2022, a BIP is proposed for package relay and a new v3 policy design is proposed for review and implementation. mempoolfullrbf is supported in Bitcoin Core 24.0 and conceptual questions about alignment of mempool rules w.r.t. miner incentives are investigated.


Along this year 2022, the eltoo lightning channel design is discussed, implemented and reviewed. In
this context and after discussions on mempool anti-DoS rules, I discovered this new replacement 
cycling attack was affecting deployed lightning channels and immediately reported the finding to 
some bitcoin core developers and lightning maintainers.


## Timeline

- 2022-12-16: Report of the finding to Suhas Daftuar, Anthony Towns, Greg 
Sanders and Gloria Zhao
- 2022-12-16: Report to LN maintainers: Rusty Russell, Bastien Teinturier, Matt Corallo and Olaoluwa 
Osuntokun

- 2022-12-23: Sharing to Eugene Siegel (LND)
- 2022-12-24: Sharing to James O'

Re: [Lightning-dev] Disclosure: Fake channel DoS vector

2023-08-27 Thread Matt Corallo



On 8/26/23 5:03 AM, Antoine Riard wrote:

Hi Matt,

 > While you were aware of these fixes at the time, I'd appreciate it if you, 
someone who hasn't spent
 > much time contributing to LDK over the past two or three years, stop trying 
to speak on behalf of
 > the LDK project.

While this statement is blatantly false and disregards all the review


You've definitely done some review of some subset of the code, mostly the anchors code which was added not too long ago, but please don't pretend you've reviewed a large volume of the pull requests in LDK. As far as I understand, you have several other projects you focus heavily on, which is great, but that's not being a major LDK contributor.


and robustness hardening 
landed during the last two or three years


In 2022 and 2023 you:
 * landed a PR removing yourself from the security-reporting list (#2323, no idea why you're trying 
to speak for the project when you removed yourself!)

 * fixed one bug in the anchors aggregation stuff before it was released 
(#1841, thanks!)
 * made some constants public (#1839)
 * increased a constant (#1532)
 * added a trivial double-check of user code (#1531)

You've also, to my knowledge, never joined the public bi-weekly LDK development calls, don't join 
the lightning spec meeting, and don't engage in the public discord discussions where development 
decisions are made.


This implies you absolutely don't have a deep understanding of all the things happening in the 
project, which makes you poorly suited to speak on behalf of the project. I'm not trying to pass 
judgement on whether you've contributed (you have! thanks for your contributions!), but only 
suggesting that if you don't contribute regularly enough to have a good understanding of everything 
going on, speaking on behalf of the project isn't appropriate.


I would appreciate it from you in the conduct of your 
maintenance janitorial role to have more regard for the LDK users funds security rather than a "move 
fast and break things" attitude.


While I know you feel like lightning at large isn't a protocol which takes security seriously, I think you're pretty far off base here. Getting lightning right is *hard*; as you well know, there are many, many, many ways it can go wrong. And we, like every other lightning software project, take that seriously, while also trying to ship features to make lightning broadly useful and usable (two things that it's historically not really been...because it's hard for many reasons beyond just security issues).


If you followed LDK (and other lightning) development more closely, I think you'd have a greater 
appreciation for these things :).


As such, and with in mind all open-source ethical rules, I'll keep speaking on the behalf of the LDK 
project when I see fit, whether you're pleased or not.


I'm really unsure what you mean here by "open-source ethical rules" - is it your opinion that you should speak for a project you don't really follow closely just because you think the people who do work on it a lot aren't doing a good enough job? That seems incredibly strange to me.


Matt
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Disclosure: Fake channel DoS vector

2023-08-25 Thread Matt Corallo
While you were aware of these fixes at the time, I'd appreciate it if you, someone who hasn't spent 
much time contributing to LDK over the past two or three years, stop trying to speak on behalf of 
the LDK project.


Thanks,
Matt

On 8/24/23 4:33 PM, Antoine Riard wrote:

Hi Matt,

Thanks for the great work here.

Confirming the v0.0.114 release number for the LDK "fake" lightning channels 
mitigations.

From the LDK side, the vulnerability didn't come as novel knowledge; we thought of potential DoS issues in peer state machine handling (e.g. [0]) a long time ago, and our long-term design philosophy has always been to privilege watchtowers and process separation as a defense-in-depth mitigation (e.g. see our glossary about "monitor replicas" [1]). All those hardening architectures are not yet implemented as part of the "vanilla" LDK (we're a library, after all), though this is consistent with the answer we gave privately during your disclosure.


About the lessons, a few remarks.

"Use watchtowers": note there is difference between watchtower only encompassing revoked state 
punishment and "monitor replica' encompassing second-stage HTLC. For that type of DoS issues, you 
wish to have the second deployed.


"Multiple process": note your Lightning node multi-process should be free of "deadlock" and other 
processing contaminating bugs, i.e the chain monitoring and reaction logic should maintain 
liveliness even if the off-chain state machine coordinator (let's say `ChannelManager`) got DoS / 
crashed, the chain monitoring should be able to detect revoked states and react finally.


"More security auditing needed": I can only share the opinion that security and robustness has not 
been the top priorities of the lightning implementations, for a very long time, I think 
implementations have been more focus on spec features parity to maintain a usage market share, 
rather thinking about the long-term network sustainability and safety of end-user funds. For the 
more senior Lightning devs, those issues won’t come as strong surprise, I think some things like 
package relay rank higher on folks priorities to mitigate publicly known and more serious security 
issues [2].


Looking forward to more security auditing of the Lightning Network.

Cheers,
Antoine

[0] https://github.com/lightningdevkit/rust-lightning/issues/383 
 and 
https://github.com/lightningdevkit/rust-lightning/issues/59 

[1] https://github.com/lightningdevkit/rust-lightning/blob/main/GLOSSARY.md#monitor-replicas 


[2] https://github.com/bitcoin/bips/pull/1382 


On Thu, Aug 24, 2023 at 01:36, Matt Morehouse wrote:


Hi list,

Last year a DoS vector was discovered to affect all node
implementations, where an attacker could open large numbers of fake
channels to the victim and never publish them on-chain, causing the
victim to consume excessive resources.

Defenses against the DoS have been shipped in the following releases:
- CLN 23.02 [1]
- eclair 0.9.0 [2]
- LDK 0.0.114 [3]
- LND 0.16.0 [4]

If you are running node software older than this, your funds may be at
risk!  Update to at least the above versions to help protect your
node.

More details about the DoS vector are available at
https://morehouse.github.io/lightning/fake-channel-dos.

- Matt

[1] https://github.com/ElementsProject/lightning/releases/tag/v23.02

[2] https://github.com/ACINQ/eclair/releases/tag/v0.9.0

[3] https://github.com/lightningdevkit/rust-lightning/releases/tag/v0.0.114

[4] https://github.com/lightningnetwork/lnd/releases/tag/v0.16.0-beta

___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] LN Summit 2024 Organization

2023-08-20 Thread Matt Corallo
Sadly, the African continent doesn't only come with additional travel time and visa effort. It also generally carries additional medical risk/vaccination schedules depending on your employer/insurer/country of origin.


I don't think I'll be attending any developer meetings on the continent, though if I'm attending a 
large conference and meeting lots of folks using Bitcoin and developing on top of it who I can't 
meet elsewhere, I'd weigh the costs differently.


Matt

On 8/20/23 6:40 PM, Antoine Riard wrote:

Hi Matt,

I fully understand the concerns about the additional requirements for travel, about a location that isn't well-served by direct international flights, and about being conservative with people's time. Though, a few observations: I still think doing a summit on the African continent will take less accumulated travel time than past destinations like Tokyo (CoreDev 2018) or Adelaide (LN Summit 2018).


On the additional requirements, here are the ones for French / US / UK passport holders:
- https://en.wikipedia.org/wiki/Visa_requirements_for_French_citizens
- https://en.wikipedia.org/wiki/Visa_requirements_for_United_States_citizens
- https://en.wikipedia.org/wiki/Visa_requirements_for_British_citizens


For visa-free countries in the intersection of the three, you have Morocco, Senegal, Cameroon and South Africa; I've already visited a few from this list. Note that a country with a strong local Bitcoin community like Algeria is excluded from the intersecting list.


Indeed the main motivation to schedule well-ahead is to give a buffer time to manage the complexity 
of additional operational requirements.


On the latest concern: about the Zurich 2020 organization, none of the members of the organizing committee were living in that city, iirc. However, a few of the members were Swiss themselves, which indeed is quite nice for the organization. I'm already in touch with local Bitcoiners with a track record for a few countries.


If you're a Lightning developer and you wish to take part in next year's summit organization, feel free to reach out. I'll start a summit organization Signal group and throw Matt inside, as he's quite experienced with open-source events and has opinions on a lot of things.


Best,
Antoine

On Mon, Aug 21, 2023 at 01:16, Matt Corallo wrote:


While more lightning developers attending conference(s) in a more diverse 
set of countries,
including on the african continent, sounds like a great idea, the usual LN
summit is an invite-only
developer meeting. Hosting it in a country with additional requirements for 
travel and which isn't
as well-served by direct international flights doesn't carry any benefit 
and only has additional
costs on peoples' time.

I'm also admittedly a little dubious of any summit or conference organized 
by someone who does not
live in the city in which it is being hosted.

Matt

On 8/18/23 4:02 PM, Antoine Riard wrote:
 > Hi lightning devs,
 >
 > Follow up on next year LN Summit organization:
 > https://lists.linuxfoundation.org/pipermail/lightning-dev/2023-June/003994.html
 >
 > After browsing the travel advisories, social situations of a good number 
of geographical
areas in
 > Africa, and chatting with the Built with Bitcoin folks on the state of 
the local Bitcoin
community
 > country by country, I would like to propose Ghana as a place of location 
for next year's June
2024
 > LN Summit.
 >
 > As announced in my previous mail, I was thinking to survey privately the 
usual Lightning Summit
 > attendees on the choice of location to collect a first wave of feedback. 
After really digging
into
 > the travel advisories country by country, it turns out if we're looking 
for European / US-like
 > standards of travel for a group of people of 30/40 attendees, we start 
to be more operationally
 > constrained.
 >
 > Ghana has already hosted last year's Afro Bitcoin Conference and they're 
doing an edition
again in
 > December of this year [0]. I've never been to Ghana so I'm currently 
planning to attend this
year's
 > 2023 conference to get myself familiar with the g

Re: [Lightning-dev] LN Summit 2024 Organization

2023-08-20 Thread Matt Corallo
While more lightning developers attending conference(s) in a more diverse set of countries, 
including on the African continent, sounds like a great idea, the usual LN summit is an invite-only
developer meeting. Hosting it in a country with additional requirements for travel and which isn't 
as well-served by direct international flights doesn't carry any benefit and only has additional 
costs on peoples' time.


I'm also admittedly a little dubious of any summit or conference organized by someone who does not 
live in the city in which it is being hosted.


Matt

On 8/18/23 4:02 PM, Antoine Riard wrote:

Hi lightning devs,

Follow up on next year LN Summit organization:
https://lists.linuxfoundation.org/pipermail/lightning-dev/2023-June/003994.html 



After browsing the travel advisories and social situations of a good number of geographical areas in 
Africa, and chatting with the Built with Bitcoin folks about the state of the local Bitcoin community 
country by country, I would like to propose Ghana as the location for next year's June 2024 
LN Summit.


As announced in my previous mail, I was planning to privately survey the usual Lightning Summit 
attendees on the choice of location to collect a first wave of feedback. After really digging into 
the travel advisories country by country, it turns out that if we're looking for European / US-like 
standards of travel for a group of 30-40 attendees, we become more operationally 
constrained.


Ghana has already hosted last year's Afro Bitcoin Conference and they're doing another edition in 
December of this year [0]. I've never been to Ghana, so I'm currently planning to attend this year's 
2023 conference to familiarize myself with the ground and that way ensure smooth preparation for 
next year's June LN Summit. From my Zurich 2020 experience, it's good to organize Bitcoin technical 
events in a country you're at least somewhat familiar with.


As a backup plan, I think we could consider countries like Morocco or Algeria, which, given the 
current composition of the organization committee, would be straightforward thanks to the 
French-speaking communities, or South Africa, which is itself beautiful and where Bitcoin events are 
already being held [1], though the latter is very far away in terms of international travel logistics.


Note for Ghana: from a quick look, it sounds like a travel visa will be required for all Schengen, US 
and Commonwealth passport holders. ECOWAS passport holders appear to be exempt.


In terms of financial resources, Zurich 2020's hard logistical organization cost was around $10k. I'd 
be glad to cover the LN Summit 2024 hard logistical cost out of my own pocket.


For clarity, I'm speaking about the LN Summit, which is an invitation-only event reserved for 
Lightning developers and researchers based on technical proof-of-work, whose previous editions took 
place in Adelaide 2018, Berlin 2019 (a one-evening event on the spot), Zurich 2021 (covid edition), 
Oakland 2022 and NYC 2023.


This is _not_ to be confused with the Lightning Conference, which has traditionally been organized by 
Fulmo, and whose latest _official_ edition was Berlin 2019 iirc.


As suggested by nully0x, it could be interesting to organize a co-event with Qala Africa. I'm already 
in touch with a few folks there through FOSS work and I'll reach out to them out of band, though I'll 
take personal accountability for the LN Summit _only_, not any other satellite events around it.


Overall, I think it's wise for the first protocol dev event (CoreDev included [2]) beyond the US / 
Europe / Australia / Japan geographical boundaries to plan well ahead and start small.


Cheers,
Antoine

[0] https://www.afrobitcoin.org 
[1] https://adoptingbitcoin.org/capetown-2024/ 

[2] https://coredev.tech/pastevents.html 

On Fri, Jun 23, 2023 at 10:39, Antoine Riard wrote:


Hi lightning devs,

I'm proposing to organize next year's LN Summit in Africa, with a rough date somewhere in 
June 2024.

There are a lot of reasons to hold a summit there. Africa is a beautiful 
continent with a rich cultural and historical past, a lot of fragmentation in the financial 
systems of its 56 states that could be bridged by a compatible payment protocol, an explosive 
demography with a lot of energy to get things done, more and more Lightning developers coming 
from the continent, and formidable prospects for growing "full-stack" local Lightning economies.

Usually, we don't announce the organization of CoreDev or the LN Summit on open 
communication channels, as the goal is to preserve the serenity of the engineering conversation 
(and we would like to avoid being trolled by BSV fans or tabloid-style journalism). Fo

Re: [Lightning-dev] Multipath Keysend

2023-07-29 Thread Matt Corallo
IMO we should revisit this post-PTLCs, but in the meantime, folks should allow classic keysend 
to be MPP. IIRC CLN already does, LDK added support for it not too long ago, and it'd be nice if 
others did too. Just like with normal lightning MPP payments - if the recipient decides to claim 
less than we tell them we want to send them, that's up to them, and if someone gets a donation and 
decides they don't want all of it but only a part, so be it. The sender has sent the payment as far 
as they're concerned - it's been claimed!


Matt

On 7/27/23 10:13 AM, ZmnSCPxj via Lightning-dev wrote:

Good morning list,

I would like to share a simple scheme for creating a `keysend` protocol that 
allows for multipath payments.

In `keysend`, the preimage is embedded as TLV 5482373484 with length 32.

In the multipath case, we want the receiver to only be able to claim the 
payment once all parts have arrived at the receiver.

For example, suppose we want to split the `keysend` into 2 parts.
Let us select a true preimage `p` at random.
Then, we generate the payment hash `h = SHA256(p)`.

Then, we generate a new 256-bit scalar, `a`.
For one part, we send `a` for TLV 5482373484, and for the second part, we send 
`a ^ p`, where `^` is XOR.
All parts use the same payment hash `h`.

The receiver, on receiving either part, will find that the supposed preimage 
does not match the actual HTLC payment hashes.
Instead of failing, it holds the payment, using the usual basic multipath 
payment rules.

When the receiver receives another part, it will XOR together the supposed 
preimages.
In the above case, it would get `a` and `a ^ p`, which when XORed together 
result in `a ^ a ^ p` or `p`, which is now the correct preimage, and the 
receiver can now claim the entire complete funds.

The same technique would work with any number of parts --- if we split into `n` 
parts, we generate `n - 1` additional random scalars and use them for the first 
`n - 1` parts, then XOR all of them with the scalar-to-be-split for the `n`th 
part.
This scheme also works for dynamic splitting, i.e. if you are splitting a part 
that was already split off from a part that was already split off from a part 
etc.
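
As a concrete illustration, here is a minimal sketch of the splitting and reconstruction logic in 
Python (purely illustrative; the function names are mine and not part of any implementation):

    import os
    from hashlib import sha256

    def split_preimage(preimage: bytes, n: int) -> list:
        # The first n-1 shares are random 256-bit scalars; the last share is
        # the XOR of all of them with the true preimage, so XORing all n
        # shares together recovers the preimage.
        assert len(preimage) == 32 and n >= 1
        shares = [os.urandom(32) for _ in range(n - 1)]
        last = preimage
        for s in shares:
            last = bytes(x ^ y for x, y in zip(last, s))
        return shares + [last]

    def combine_shares(shares: list) -> bytes:
        # Receiver side: XOR every received share (TLV 5482373484 values) together.
        acc = bytes(32)
        for s in shares:
            acc = bytes(x ^ y for x, y in zip(acc, s))
        return acc

    # Sender picks a random preimage, computes the payment hash, and sends
    # each share in TLV 5482373484 of a separate HTLC part, all using hash h.
    p = os.urandom(32)
    h = sha256(p).digest()
    parts = split_preimage(p, 3)
    assert combine_shares(parts) == p
    assert sha256(combine_shares(parts)).digest() == h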

A sender can detect if the receiver does not support multipath `keysend` if a 
part reaches the receiver and it errors with 
`incorrect_or_unknown_payment_details`.
If the receiver is aware of multipath `keysend`, it would hold onto the 
incoming HTLCs until MPP timeout, and instead error with `mpp_timeout`.
Thus, support for this on the receiver side does not need to be specially 
announced via a new feature bit --- an MPP-capable sender can simply try to 
split, and if it gets an `incorrect_or_unknown_payment_details`, knows that the 
receiver does not support multipath `keysend`.
The same feature bit 55 can be reused.

Regards,
ZmnSCPxj
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev

___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Proposal: Bundled payments

2023-06-20 Thread Matt Corallo



On 6/20/23 1:45 AM, Thomas Voegtlin wrote:
 - snip -


We have not implemented BOLT-12 yet in Electrum. Would you care to
describe whether bundled payments already would work with the current
specification, or whether they would require changes to BOLT-12?


They will not work as specified, but because BOLT-12 offers are reusable you will be able to scan a 
single offer and send two payments, as long as the sender implements this.



We
are going to implement BOLT-12 support in Electrum in the coming
months, and I would be happy to help here.


Great to hear that!


I believe that it will take years *after it is merged*, until BOLT-12
actually becomes the dominant payment method on Lightning.


Absolutely true.


OTOH, if
this feature was adopted in BOLT-11, I think it could be deployed much
faster.


I'm not sure why? Indeed, BOLT-12 has some time to go in terms of getting off the ground floor, as 
it were, but a new BOLT-11 extension has to start from 0 - with zero implementations today and, 
worse, having to convince people to actually care enough to implement it.


At least with BOLT-12 and a client that is "swap-aware", you don't have to scan two QR codes or do any 
complicated dance.



The goal of my proposal is to level the field of competition between
Lightning service providers, by allowing reverse submarine swap
payments to come from any wallet (of course, a dedicated client will
still be needed to verify the redeem script and the invoice, and to
sweep the funds, as discussed above)


I admit I still kinda struggle to see the value here - the user has to have a "swap-aware" client 
doing the swap setup and enforcement, but then can have any lightning wallet pay it. So it only 
really makes sense if the user has an on-chain wallet they want to use which is distinct from their 
lightning wallet (and their lightning wallet doesn't support splice-out, which presumably most will 
in a year or so). This seems like quite a narrow set of requirements to me.


Matt
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Proposal: Bundled payments

2023-06-14 Thread Matt Corallo

I think the ship has probably sailed on getting any kind of new interoperable 
change into BOLT-11.

We already can't get amount-less BOLT-11 invoices broadly supported; rolling out yet another new 
incompatible version of BOLT-11 and expecting the entire ecosystem to support it doesn't seem all 
that likely to succeed.


If we're working towards specifying some "standard" way of doing swaps, (a) I'd be curious to 
understand why the need isn't obviated by splice-out, and (b) why it shouldn't be built on OMs so 
you can do it more privately.


Matt

On 6/13/23 1:10 AM, Thomas Voegtlin wrote:

Good morning list,

I would like to propose an extension to BOLT-11, where an invoice can contain two bundled payments, 
with distinct preimages and amounts.


The use case is for services that require the prepayment of a mining fee in order for a 
non-custodial exchange to take place:

  - Submarine swaps
  - JIT channels

In both cases, the service provider receives an HTLC for which they do not have the preimage, have to 
send funds on-chain (to the channel or submarine swap funding address), and wait for the client to 
reveal the preimage when they claim the payment. Because there is no guarantee that the client will 
actually claim the payment, the service providers need to ask for prepayment of mining fees.


In the case of submarine swaps, services that use dedicated client software, such as Loop by 
Lightning Labs, can ask for a prepayment, because their software can handle it (this is called "no 
show penalty" on the Loop website). However, competitors who do not require a dedicated wallet, such 
as the Boltz exchange, cannot do that. Their website shows an invoice to the user, whose wallet is 
agnostic about the swap, and it would be impractical for them to show two invoices to be paid 
simultaneously. This creates a situation where Boltz is vulnerable to DoS attacks, where the 
attacker forces them to pay on-chain fees.


In the case of JIT channels, providers who want to protect themselves against this mining fee attack 
need to ask for the preimage of the main payment before they open the channel. I believe this is what 
Phoenix does (although their pay-to-open service is not open-source, so I cannot really check). The 
issue is that a service that asks for the preimage first becomes custodial. From a legal 
perspective, it does not matter whether they open the channel immediately after receiving the 
preimage; the ordering of events makes their service custodial. In Europe, such a service will fall 
within the European MICA regulation. Competitors who refuse to offer custodial services, such as 
Electrum, are excluded from that game.


In order to solve that, it would be beneficial to bundle the prepayment and the main payment in the 
same BOLT-11 invoice.


The semantics of bundled payments is as follows:
  - 1. the BOLT-11 invoice contains two preimages and two amounts: prepayment 
and main payment.
  - 2. the receiver should wait until all the HTLCs of both payments have arrived before they 
fulfill the HTLCs of the pre-payment. If the main payment does not arrive, they should fail the 
pre-payment with an MPP timeout.
  - 3. once the HTLCs of both payments have arrived, the receiver fulfills the HTLCs of the 
prepayment, and they broadcast their on-chain transaction. Note that the main payment can still fail 
if the sender never reveals the preimage of the main payment.
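
A rough receiver-side sketch of these rules in Python (illustrative only; the class and method names 
are mine, and "fulfill"/"hold" stand in for whatever a real node's HTLC interceptor exposes):

    from dataclasses import dataclass, field

    @dataclass
    class BundledInvoiceState:
        # Amounts (msat) expected for the two bundled payments.
        prepay_amount_msat: int
        main_amount_msat: int
        prepay_htlcs_msat: list = field(default_factory=list)
        main_htlcs_msat: list = field(default_factory=list)

        def on_htlc(self, is_prepay: bool, amount_msat: int) -> str:
            if is_prepay:
                self.prepay_htlcs_msat.append(amount_msat)
            else:
                self.main_htlcs_msat.append(amount_msat)
            # Rules 2/3: only fulfill the prepayment (and broadcast the
            # on-chain transaction) once *both* HTLC sets are complete.
            if (sum(self.prepay_htlcs_msat) >= self.prepay_amount_msat
                    and sum(self.main_htlcs_msat) >= self.main_amount_msat):
                return "fulfill_prepayment_and_broadcast"
            # Otherwise keep holding; fail with mpp_timeout if the timeout fires.
            return "hold"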


Of course, nothing in my proposal prevents the service provider from stealing the pre-payment, but 
that is already the case today.


I believe this proposal would level the field in terms of competition between lightning service 
providers. Currently, you need to use a dedicated client in order to use Loop, and competitors who 
do not have an established user base running a dedicated client are exposed to the mining fee 
attack. I also believe that ACINQ would benefit from this, because it would make it possible for 
them to make their pay-to-open service fully non-custodial. My understanding is that in its current 
form, the 'pay-to-open' service used by Phoenix will fall into the scope of the European MICA 
regulation, which they should consider a serious issue.


Finally, I believe that such a change should be implemented in BOLT-11, and not using BOLT-12 or 
onion messages. Indeed, my proposal does not require the exchange of new messages. Some of the 
initial feedback I received was that this is a use case for BOLT-12 or OM, but I think that this is 
making things unnecessarily complicated. We should not add new messages when things can be done in a 
non-interactive way.


Cheers,
ThomasV
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev

___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightnin

[Lightning-dev] Today’s Spam

2023-06-04 Thread Matt Corallo
It seems moderation needs to go back on to tamp down spam for a minute. See 
y’all Monday.

Matt
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] A new Bitcoin implementation integrated with Core Lightning

2023-05-08 Thread Matt Corallo
Hi Michael,

While I don't think forks of Core with an intent to drive consensus rule changes (or lack thereof) benefit the bitcoin system as the Bitcoin Core project stands today, if you want to build a nice full node wallet with lightning based on a fork of Core, there was code written to do this some years ago.

https://github.com/bitcoin/bitcoin/pull/18179

It never went anywhere as lightning (and especially LDK!) were far from ready to be a first class feature in bitcoin core at the time (and I'd argue still today), but as a separate project it could be interesting, at least if maintenance burden were kept to a sustainable level.

Matt

On Jan 14, 2023, at 13:03, Michael Folkson via Lightning-dev wrote:

I tweeted this [0] back in November 2022.

"With the btcd bugs and the analysis paralysis on a RBF policy option in Core increasingly thinking @BitcoinKnots and consensus compatible forks of Core are the future. Gonna chalk that one up to another thing @LukeDashjr was right about all along."

A new bare bones Knots style Bitcoin implementation (in C++/C) integrated with Core Lightning was a long term idea I had (and presumably many others have had) but the dysfunction on the Bitcoin Core project this week (if anything it has been getting worse over time, not better) has made me start to take the idea more seriously. It is clear to me that the current way the Bitcoin Core project is being managed is not how I would like an open source project to be managed. Very little discussion is public anymore and decisions seem to be increasingly made behind closed doors or in private IRC channels (to the extent that decisions are made at all). Core Lightning seems to have the opposite problem. It is managed effectively in the open (admittedly with fewer contributors) but doesn't have the eyeballs or the usage that Bitcoin Core does.

Regardless, selfishly I at some point would like a bare bones Bitcoin and Lightning implementation integrated in one codebase. The Bitcoin Core codebase has collected a lot of cruft over time and the ultra conservatism that is needed when treating (potential) consensus code seems to permeate into parts of the codebase that no one is using, definitely isn't consensus code and should probably just be removed.

The libbitcoinkernel project was (is?) an attempt to extract the consensus engine out of Core but it seems like it won't achieve that, as consensus is just too slippery a concept, and Knots style consensus compatible codebase forks of Bitcoin Core seem to still be the model. To what extent you can safely chop off this cruft and effectively maintain this less crufty fork of Bitcoin Core also isn't clear to me yet.

Then there is the question of whether it makes sense to mix C and C++ code, which people have different views on. C++ is obviously a superset of C, but assuming this merging of Bitcoin Core and Core Lightning is/was the optimal final destination it surely would have been better if Core Lightning was written in the same language (i.e. with classes) as Bitcoin Core.

I'm just floating the idea to (hopefully) hear from people who are much more familiar with the entirety of the Bitcoin Core and Core Lightning codebases. It would be an ambitious long term project but it would be nice to focus on some ambitious project(s) (even if just conceptually) for a while given (thankfully) there seems to be a lull in soft fork activation chaos.

Thanks
Michael

[0]: https://twitter.com/michaelfolkson/status/1589220155006910464?s=20&t=GbPm7w5BqS7rS3kiVFTNcw


--
Michael Folkson
Email: michaelfolkson at protonmail.com
Keybase: michaelfolkson
PGP: 43ED C999 9F85 1D40 EAF4 9835 92D6 0159 214C FEE3






Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Highly Available Lightning Channels

2023-02-15 Thread Matt Corallo



On 2/14/23 11:36 PM, Joost Jager wrote:

But how do you decide to set it without a credit relationship? Do I measure 
my channel and set the

bit because the channel is "usually" (at what threshold?) saturating in the 
inbound direction? What
happens if this changes for an hour and I get unlucky? Did I just screw 
myself?


As a node setting the flag, you'll have to make sure you open new channels, rebalance or swap-in in 
time to maintain outbound liquidity. That's part of the game of running an HA channel.


Define "in time" in a way that results in senders not punishing you for not meeting your "HA 
guarantees" due to a large flow. I don't buy that this results in anything other than pressure to 
add credit.



 > How can you be sure about this? This isn't publicly visible data.

Sure it is! https://river.com/learn/files/river-lightning-report.pdf



Some operators publish data, but are the experiences of one of the most well connected (custodial) 
nodes representative for the network as a whole when evaluating payment success rates? In the end 
you can't know what's happening on the lightning network.


Right, that was my above point about fetching scoring data - there are three relevant "buckets" of 
nodes, I think - (a) large nodes sending lots of payments, like the above, (b) "client nodes" that 
just connect to an LSP or two, (c) nodes that route some but don't send a lot of payments (but do 
send *some* payments), and may have lots or not very many channels.


(a) I think we're getting there, and we don't need to add anything extra for this use-case beyond 
the network maturing and improving our scoring algorithms.
(b) I think is trivially solved by downloading the data from a node in category (a), presumably the 
LSP(s) in question (see other branch of this thread)
(c) is trickier, but I think the same solution of just fetching semi-trusted data here more than 
suffices. For most routing nodes that don't send a lot of payments we're talking about a very small 
amount of payments, so trusting a third party for scoring data seems reasonable.


Once we do that, everyone gets a similar experience as the River report :).

Matt
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Highly Available Lightning Channels

2023-02-14 Thread Matt Corallo



On 2/14/23 1:42 PM, Antoine Riard wrote:

Hi Joost,

 > I think movement in this direction is important to guarantee competitiveness with centralised 
payment systems and their (at least theoretical) ability to
 > process a payment in the blink of an eye. A lightning wallet trying multiple paths to find one 
that works doesn't help with this.


Or there is the direction to build forward-error-correction code on top of MPP, 
like in traditional
networking [1]. The rough idea, you send more payment shards than the requested 
sum, and then
you reveal the payment secrets to the receiver after an onion interactivity 
round to finalize payment.


Ah, thank you for bringing this up! I'd thought about it and then forgot to 
mention it in this thread.

I think this is very important to highlight as we talk about "building a reliable lightning network 
out of unreliable nodes" - this is an *incredibly* powerful feature for this.


While it's much less capital-efficient, the ability to over-commit upfront and then only allow the 
recipient to claim a portion of the total committed funds would substantially reduce the impact of 
failed HTLCs on payment latency. Of course the extra round-trip to request the "unlock keys" for the 
correct set of HTLCs adds a chunk to total latency, so senders will have to be careful about 
deciding when to do this or not.


Still, now that we have onion messages, we should do (well, try) this! It's not super complicated to 
implement (like everything it seems, the obvious implementation forgoes proof-of-payment, and like 
everything the obvious solution is PTLCs, I think). It's not clear to me how we get good data from 
trials, though; we'd need a sufficient set of the network to support this that we could actually 
test it, which is hard to get for a test.


Maybe someone (anyone?) wants to do some experiments doing simulations using real probing success 
rates to figure out how successful this would be and propose a concrete sender strategy that would 
improve success rates.
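
As a starting point, here is a toy calculation (mine, purely illustrative) of how over-committing 
shards changes the chance that enough of them reach the recipient, assuming each shard succeeds 
independently with the same probability:

    from math import comb

    def success_probability(n_shards: int, k_needed: int, p_shard: float) -> float:
        # Probability that at least k_needed of n_shards independent shards
        # succeed, each with probability p_shard. Independence is a
        # simplification; real paths share hops and liquidity.
        return sum(comb(n_shards, i) * p_shard**i * (1 - p_shard)**(n_shards - i)
                   for i in range(k_needed, n_shards + 1))

    # Needing 4 shards to cover the amount, with 80% per-shard success:
    print(success_probability(4, 4, 0.8))  # ~0.41 with no over-commit
    print(success_probability(6, 4, 0.8))  # ~0.90 when over-committing 6 shards

Real numbers would of course have to come from the kind of probing-based simulations suggested above.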


Matt
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Highly Available Lightning Channels

2023-02-14 Thread Matt Corallo



On 2/14/23 2:34 AM, Joost Jager wrote:

Hi Matt,

If nodes start aggressively preferring routes through nodes that reliably 
route payments (which
I believe lnd already does, in effect, to some large extent), they should 
do so by measurement,
not signaling.


The signaling is intended as a way to make measurement more efficient. If a node signals that a 
particular channel is HA and it fails, no other measurements on that same node need to be taken by 
the sender. They can skip the node altogether for a longer period of time.


But as a lightning node I don't actually care if a node is binary good/bad. I care about what 
success rate a node has. If you make the decision binary, suddenly in order for a node to be "good" 
I *have* to establish a credit relationship with my peers (i.e. support 0conf splicing). I think 
that is a very, very bad thing to do to the lightning network.


If someone wants to establish such a relationship with their peers, so be it, but as developers we 
should strongly avoid adding features which push node operators in that direction, and part of that 
is writing good routing scoring so that we aren't boxing ourselves into some binary good/bad idea of 
a node but rather estimating liquidity.


Honestly this just strikes me as developers being too lazy to do things right. If we do things 
carefully and we are seeing issues then we can consider breaking lightning, but until we give it a 
good shot, let's not!



In practice, many channels on the network are “high availability” today, 
but only in one
direction (I.e. they aren’t regularly spliced/rebalanced and are regularly 
unbalanced). A node
strongly preferring a high payment success rate *should* prefer such a 
channel, but in your
scheme would not.


This shouldn't be a problem, because the HA signaling is also directional. Each end can decide 
independently on whether to add the flag for a particular channel.


But how do you decide to set it without a credit relationship? Do I measure my channel and set the 
bit because the channel is "usually" (at what threshold?) saturating in the inbound direction? What 
happens if this changes for an hour and I get unlucky? Did I just screw myself?



This ignores the myriad of “at what threshold do you signal HA” issues, 
which likely make such a
signal DOA, anyway.


I think this is a product of sender preference for HA channels and the severity of the penalty if an 
HA channel fails. Given this, routing nodes will need to decide whether they can offer a service 
level that increases their routing revenue overall if they would signal HA. It is indeed dynamic, 
but I think the market is able to work it out.


I'm afraid this is going to immediately fall into a cargo cult of "set the bit" vs "don't set the 
bit" and we'll never get useful data out of it. But you may be right.



Finally, I’m very dismayed at this direction in thinking on how ln should 
work - nodes should be
measuring the network and routing over paths that it thinks are reliable 
for what it wants,
*robustly over an unreliable network*. We should absolutely not be 
expecting the lightning
network to be built out of high reliability nodes, that creates strong 
centralization pressure.
To truly meet a “high availability” threshold, realistically, you’d need to 
be able to JIT 0conf
splice-in, which would drive lightning to actually being a credit network.


Different people can have different opinions about how ln should work, that is fine. I see a 
trade-off between the reliability of the network and the barrier of entry, and I don't think the 
optimum is on one of the ends of the scale.


My point wasn't that lightning should be unreliable, but rather a reliable network built on 
unreliable hops. I'm very confident we can accomplish that without falling back to forcing nodes to 
establish credit to meet "reliability requirements".



With reasonable volume, lightning today is very reliable and relatively 
fast, with few retries
required. I don’t think we need to change anything to fix it. :)


How can you be sure about this? This isn't publicly visible data.


Sure it is! https://river.com/learn/files/river-lightning-report.pdf

I'm also quite confident we can do substantially better than this.

Matt
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Highly Available Lightning Channels

2023-02-14 Thread Matt Corallo



On 2/13/23 7:05 PM, ZmnSCPxj wrote:

Good morning all,


First of all let's see what types of reputation system exist (and yes,
this is my very informal categorization):

- First hand experience
- Inferred experience
- Hearsay

The first two are likely the setup we all are comfortable with: we ourselves
experienced something, and make some decisions based on that
experience. This is probably what we're all doing at the moment: we
attempt a payment, it fails, we back off for a bit from that channel
being used again. This requires either being able to witness the issue
directly (local peer) or infer from unforgeable error messages (the
failing node returns an error, and it can't point the finger at someone
else). Notice that this also includes some transitive constructions,
such as the backpressure mechanism we were discussing for ariard's
credentials proposal.

Ideally we'd only rely on the first two to make decisions, but here's
exactly the issue we ran into with Bittorrent: repeat interactions are
too rare. In addition, our local knowledge gets out of date the longer
we wait, and a previously failing channel may now be good again, and
vice-versa. For us to have sufficient knowledge to make good decisions
we need to repeatedly interact with the same nodes in the network, and
since end-users will be very unlikely to do that, we might end up in a
situation were we instinctively fall back to the hearsay method, either
by sharing our local reputation with peers and then somehow combine that
with our own view. To the best of my knowledge such a system has never
been built successfully, and all attempts have ended in a system that
was either way too simple or is gameable by rational players.



In lightning we have a trivial solution to this - your wallet vendor/LSP is already extracting a fee 
from you for every HTLC routed through it, it has you captive and can set the fee (largely) 
arbitrarily (up to you paying on-chain fees to switch LSPs). They can happily tell you their view of 
the network ~live and you should generally accept it. It's by no means perfect, and there are plenty 
of games they could play with, e.g., your privacy, but it's pretty damned good.

If we care a ton about the risks here, we could have a few altruistic nodes 
that release similar
info and users can median-filter the data in one way or another to reduce risk.
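
A rough sketch of what that median-filtering could look like on the client side (illustrative only; 
the per-channel score format is an assumption, not any existing API):

    from statistics import median

    def combine_scores(per_source_scores: list) -> dict:
        # Combine per-channel success-probability estimates from several
        # semi-trusted sources by taking the per-channel median, so a single
        # rogue source can't skew the result much.
        channels = set().union(*per_source_scores)
        return {
            chan: median(src.get(chan, 0.5)  # assume 0.5 when a source has no data
                         for src in per_source_scores)
            for chan in channels
        }

    # Three sources (say, an LSP plus two altruistic nodes) scoring two channels:
    sources = [
        {"chan_a": 0.95, "chan_b": 0.40},
        {"chan_a": 0.90, "chan_b": 0.35},
        {"chan_a": 0.10, "chan_b": 0.99},  # outlier / possibly dishonest source
    ]
    print(combine_scores(sources))  # chan_a -> 0.9, chan_b -> 0.4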

I just do not buy that this is a difficult problem for the "end user" part of the network. For 
larger nodes it's (obviously, and trivially) not a problem either, which leaves the "middle nodes" 
stranded without good data and without an LSP they want to use for data. I believe that isn't a 
large enough cohort to change the whole network around for, and them asking a few altruistic (let's 
say, developer?) nodes for scoring data seems more than sufficient.


But this is all ultimately hearsay.

LSPs can be bought out, and developers can go rogue.
Neither should be trusted if at all possible.


You're missing the point - if your LSP wants to "go rogue" here, at worst they can charge you more 
fees. They could also do this by...charging you more fees. I'm not really sure what your concern is.



Which is why I think forwardable peerswaps fixes this: it *creates* paths that 
allow payment routing, without requiring pervasive monitoring (which is 
horrible because eventually the network will be large enough that you will 
never encounter the same node twice if you're a plebeian, and if you're an 
aristocrat, you have every incentive to lie to the plebeians to solidify your 
power) of the network.


No, this is much, much worse for the network. In order to do this "live" (i.e. without failing a 
payment) you have to establish trust relationships across the network (i.e. make giving your peers 
credit a requirement to be considered a "robust node" and, thus, receive fee revenue).


Doing splicing/peerswap as a better way to rebalance is, of course, awesome, but it doesn't solve 
the issue of "what do I do if I'm just too low on capacity *right now* to clear this HTLC".


Matt
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Highly Available Lightning Channels

2023-02-13 Thread Matt Corallo
ot free.

And after all this rambling, let's get back to the topic at hand: I
don't think enshrining the differences of availability in the protocol,
thus creating two classes of nodes, is a desirable
feature. Communicating up-front that I intend to be reliable does
nothing, and penalizing after the fact isn't worth much due to the
repeat interactions issue. It'd be even worse if now we had to rely on a
third party to aggregate and track the reliability, in order to get
enough repeat interactions to build a good model of their liquidity,
since we're now back in the hearsay world, and the third party can feed
us wrong information to maximize their profits.

Regards,
Christian


Matt Corallo  writes:

Hi Joost,

I’m not sure I agree that lightning is “capital efficient” (or even close to 
it), but more generally I don’t see why this needs a signal.

If nodes start aggressively preferring routes through nodes that reliably route 
payments (which I believe lnd already does, in effect, to some large extent), 
they should do so by measurement, not signaling.

In practice, many channels on the network are “high availability” today, but 
only in one direction (I.e. they aren’t regularly spliced/rebalanced and are 
regularly unbalanced). A node strongly preferring a high payment success rate 
*should* prefer such a channel, but in your scheme would not.

This ignores the myriad of “at what threshold do you signal HA” issues, which 
likely make such a signal DOA, anyway.

Finally, I’m very dismayed at this direction in thinking on how ln should work 
- nodes should be measuring the network and routing over paths that it thinks 
are reliable for what it wants, *robustly over an unreliable network*. We 
should absolutely not be expecting the lightning network to be built out of 
high reliability nodes, that creates strong centralization pressure. To truly 
meet a “high availability” threshold, realistically, you’d need to be able to 
JIT 0conf splice-in, which would drive lightning to actually being a credit 
network.

With reasonable volume, lightning today is very reliable and relatively fast, 
with few retries required. I don’t think we need to change anything to fix it. 
:)

Matt


On Feb 13, 2023, at 06:46, Joost Jager  wrote:


Hi,

For a long time I've held the expectation that eventually payers on the 
lightning network will become very strict about node performance. That they 
will require a routing node to operate flawlessly or else apply a hefty penalty 
such as completely avoiding the node for an extended period of time - multiple 
weeks. The consequence of this is that routing nodes would need to manage their 
liquidity meticulously because every failure potentially has a large impact on 
future routing revenue.

I think movement in this direction is important to guarantee competitiveness 
with centralised payment systems and their (at least theoretical) ability to 
process a payment in the blink of an eye. A lightning wallet trying multiple 
paths to find one that works doesn't help with this.

A common argument against strict penalisation is that it would lead to less 
efficient use of capital. Routing nodes would need to maintain pools of 
liquidity to guarantee successes all the time. My opinion on this is that 
lightning is already enormously capital efficient at scale and that it is worth 
sacrificing a slight part of that efficiency to also achieve the lowest 
possible latency.

This brings me to the actual subject of this post. Assuming strict penalisation 
is good, it may still not be ideal to flip the switch from one day to the 
other. Routing nodes may not offer the required level of service yet, causing 
senders to end up with no nodes to choose from.

One option is to gradually increase the strength of the penalties, so that 
routing nodes are given time to adapt to the new standards. This does require 
everyone to move along and leaves no space for cheap routing nodes with less 
leeway in terms of liquidity.

Therefore I am proposing another way to go about it: extend the 
`channel_update` field `channel_flags` with a new bit that the sender can use 
to signal `highly_available`.

It's then up to payers to decide how to interpret this flag. One way could be 
to prefer `highly_available` channels during pathfinding. But if the routing 
node then returns a failure, a much stronger than normal penalty will be 
applied. For routing nodes this creates an opportunity to attract more traffic 
by marking some channels as `highly_available`, but it also comes with the 
responsibility to deliver.
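
To make the sender-side interpretation concrete, here is a rough sketch (mine, purely illustrative; 
the flag bit, constants and function names are assumptions, not part of any spec or implementation) 
of preferring flagged channels and penalizing them more heavily on failure:

    from dataclasses import dataclass

    HIGHLY_AVAILABLE = 1 << 2  # hypothetical new bit in channel_flags

    @dataclass
    class ChannelScore:
        base_penalty_secs: int = 300            # normal back-off after a failure
        ha_penalty_secs: int = 14 * 24 * 3600   # much stronger penalty for HA channels
        banned_until: float = 0.0

    def pathfinding_weight(fee_msat: int, flags: int) -> float:
        # Prefer channels signalling highly_available by discounting their cost.
        weight = float(fee_msat)
        if flags & HIGHLY_AVAILABLE:
            weight *= 0.5  # arbitrary preference factor
        return weight

    def on_forwarding_failure(score: ChannelScore, flags: int, now: float) -> None:
        # Apply the (much stronger) penalty when a flagged channel fails to forward.
        penalty = score.ha_penalty_secs if flags & HIGHLY_AVAILABLE else score.base_penalty_secs
        score.banned_until = max(score.banned_until, now + penalty)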

Without shadow channels, it is impossible to guarantee liquidity up to the 
channel capacity. It might make sense for senders to only assume high 
availability for amounts up to `htlc_maximum_msat`.

A variation on this scheme that requires no extension of `channel_update` is to 
signal availability implicitly through routing fees. So the more expensive a 
channel is, 

Re: [Lightning-dev] Highly Available Lightning Channels

2023-02-13 Thread Matt Corallo
Hi Joost,

I’m not sure I agree that lightning is “capital efficient” (or even close to 
it), but more generally I don’t see why this needs a signal.

If nodes start aggressively preferring routes through nodes that reliably route 
payments (which I believe lnd already does, in effect, to some large extent), 
they should do so by measurement, not signaling.

In practice, many channels on the network are “high availability” today, but 
only in one direction (I.e. they aren’t regularly spliced/rebalanced and are 
regularly unbalanced). A node strongly preferring a high payment success rate 
*should* prefer such a channel, but in your scheme would not.

This ignores the myriad of “at what threshold do you signal HA” issues, which 
likely make such a signal DOA, anyway.

Finally, I’m very dismayed at this direction in thinking on how ln should work 
- nodes should be measuring the network and routing over paths that it thinks 
are reliable for what it wants, *robustly over an unreliable network*. We 
should absolutely not be expecting the lightning network to be built out of 
high reliability nodes, that creates strong centralization pressure. To truly 
meet a “high availability” threshold, realistically, you’d need to be able to 
JIT 0conf splice-in, which would drive lightning to actually being a credit 
network.

With reasonable volume, lightning today is very reliable and relatively fast, 
with few retries required. I don’t think we need to change anything to fix it. 
:)

Matt

> On Feb 13, 2023, at 06:46, Joost Jager  wrote:
> 
> 
> Hi,
> 
> For a long time I've held the expectation that eventually payers on the 
> lightning network will become very strict about node performance. That they 
> will require a routing node to operate flawlessly or else apply a hefty 
> penalty such as completely avoiding the node for an extended period of time - 
> multiple weeks. The consequence of this is that routing nodes would need to 
> manage their liquidity meticulously because every failure potentially has a 
> large impact on future routing revenue.
> 
> I think movement in this direction is important to guarantee competitiveness 
> with centralised payment systems and their (at least theoretical) ability to 
> process a payment in the blink of an eye. A lightning wallet trying multiple 
> paths to find one that works doesn't help with this.
> 
> A common argument against strict penalisation is that it would lead to less 
> efficient use of capital. Routing nodes would need to maintain pools of 
> liquidity to guarantee successes all the time. My opinion on this is that 
> lightning is already enormously capital efficient at scale and that it is 
> worth sacrificing a slight part of that efficiency to also achieve the lowest 
> possible latency.
> 
> This brings me to the actual subject of this post. Assuming strict 
> penalisation is good, it may still not be ideal to flip the switch from one 
> day to the other. Routing nodes may not offer the required level of service 
> yet, causing senders to end up with no nodes to choose from.
> 
> One option is to gradually increase the strength of the penalties, so that 
> routing nodes are given time to adapt to the new standards. This does require 
> everyone to move along and leaves no space for cheap routing nodes with less 
> leeway in terms of liquidity.
> 
> Therefore I am proposing another way to go about it: extend the 
> `channel_update` field `channel_flags` with a new bit that the sender can use 
> to signal `highly_available`. 
> 
> It's then up to payers to decide how to interpret this flag. One way could be 
> to prefer `highly_available` channels during pathfinding. But if the routing 
> node then returns a failure, a much stronger than normal penalty will be 
> applied. For routing nodes this creates an opportunity to attract more 
> traffic by marking some channels as `highly_available`, but it also comes 
> with the responsibility to deliver.
> 
> Without shadow channels, it is impossible to guarantee liquidity up to the 
> channel capacity. It might make sense for senders to only assume high 
> availability for amounts up to `htlc_maximum_msat`.
> 
> A variation on this scheme that requires no extension of `channel_update` is 
> to signal availability implicitly through routing fees. So the more expensive 
> a channel is, the stronger the penalty that is applied on failure will be. It 
> seems less ideal though, because it could disincentivize cheap but reliable 
> channels on high traffic links.
> 
> The effort required to implement some form of a `highly_available` flag seem 
> limited and it may help to get payment success rates up. Interested to hear 
> your thoughts.
> 
> Joost
> ___
> Lightning-dev mailing list
> Lightning-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
___
Lightning-dev mailing list
Lightning-d

Re: [Lightning-dev] Unjamming lightning (new research paper)

2022-12-03 Thread Matt Corallo



On 11/15/22 12:09 PM, Clara Shikhelman wrote:
Matt – I don't know that I agree with "... upfront payments kinda kill the lightning UX ...". I 
think that upfront fees are almost essential, even outside the context of jamming. This also helps 
with probing, general spam, and other aspects. Furthermore, I think that the UX is very explainable,


Indeed, it may be explainable, but it's still somewhat painful, I think. I do wonder if we can enable 
probing via a non-HTLC message and do immediate pre-send probing to avoid paying upfront fees on 
paths that will fail.


and in general nodes shouldn't be motivated to send a lot of failed payments, and should adopt 
better routing strategies.


I don't think so - today there are at least three different routing goals to maximize - (a) privacy, 
(b) fees, (c) success rate. For "live" payments, you probably want to lean towards optimizing for 
success rate, and many nodes do today by default. But that isn't the full story - many nodes do 
background rebalancing and they prefer to take paths which optimize for fees, trying many paths they 
think are likely to fail to see if they can rebalance with lower fees. I don't think we should tell 
those users/use-cases that they aren't allowed to do that or they're doing something "wrong" - I 
think choosing to optimize for fees (or, in the future, privacy) is an important thing to allow, and 
ideally make as reliable as possible, without charging extra for it.


Matt
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Unjamming lightning (new research paper)

2022-11-14 Thread Matt Corallo
Thanks for committing all the time today. I'm much happier with (binary, not-so-)local reputation than I was in the past, at least compared to other reputation systems.

I believe you've stated a few times that local reputation by itself is far from sufficient and that's why we need upfront payments. Intuitively this makes sense to me but I'm sure you have more data to back that up. It's come up a few times that upfront payments kinda kill the lightning UX if they're at all nontrivial. Just restating here to make sure we're on the same page.

One thing I think we need to research more is how this compares to Rusty's old proof-of-channel-closure idea. Philosophically it's quite nice to punish the one holding an HTLC over punishing the sender, especially in a world where people are getting options in USD<->BTC through HTLCs or sending to mobile nodes that sit on payments. Practically I'm not sure if there's a strong need to worry about a large hub sitting on HTLCs to jam instead of a sender being involved, but having zero mitigation against it also seems wrong.

As always I'm a bit dubious of doing something, especially something that requires a network-wide upgrade, until we're confident it's the right direction. Even binary HTLC reputation flags, I think, carry a very large privacy cost so it's strongly worth exploring every alternative before we commit.

Matt

On Nov 10, 2022, at 10:35, Clara Shikhelman wrote:

Hi all,

We are planning a call to discuss this proposal further. It will be on Monday the 14th, at 7 pm UTC here: https://meet.jit.si/UnjammingLN

Please let me know if this conflicts with any other Bitcoin event.

Hope to see you all there!

On Thu, Nov 3, 2022 at 1:25 PM Clara Shikhelman wrote:

Hi list,

We would like to share with you our recent research on jamming in Lightning. We propose a combination of unconditional (~ upfront) fees and local reputation to fight jamming. We believe this can be a basis for an efficient and practical solution that can be implemented in the foreseeable future.

The full paper is available [1].

We classify jams into quick (resolve in seconds, mimicking honest payments) and slow (remain in-flight for hours or days). Fees disincentivize an attack where quick jams are constantly resolved and sent again. Reputation, in turn, allows nodes to deprioritize peers who consistently forward slow jams.

We believe that our proposal is practical and efficient. In particular, we have shown that the additional (unconditional) fees can be relatively low (as low as 2% of the total fee) to fully compensate jamming victims for the lost routing revenue. Moreover, the total unconditional fee paid for all failed attempts stays low even if the failure rate is reasonably high. This means that the UX burden of paying for failed attempts is also low. A straightforward PoC implementation [2] demonstrates one approach to implementing the fee-related aspect of our proposal.

Further sections provide more details on our approach and results.

# Jamming

As a reminder, jamming is a DoS attack where a malicious sender initiates payments (jams) but delays finalizing them, blocking channels along the route until the jams are resolved. Jamming may target liquidity or payment slots.

We distinguish between quick and slow jamming. Quick jamming implies that jams are failed and re-sent every few seconds, making them hardly distinguishable from honest failing payments. In slow jamming, jams remain in-flight for hours.

# Unconditional fees

We propose unconditional fees to discourage quick jamming. Currently, jams are free because routing nodes don't charge for failed payment attempts. With unconditional fees, however, jamming is no longer free.

Our simulations indicate that unconditional fees don't have to be too high. Under certain assumptions about the honest payment flow, a fee increase by just 2% (paid upfront) fully compensates a routing node under attack. Our simulator is open-source [3]. A PoC implementation demonstrates one approach to implementing unconditional fees and only requires minor changes [2].

We have also considered the UX implications of paying for failed attempts. We have concluded that this should not be a deal-breaker, as the total unconditional fee paid stays low even if the failure rate is reasonably high (even as high as 50%). Privacy and incentives are also discussed in the paper.

# Reputation

Fees are not very effective in preventing slow jamming: this type of attack requires only a few jams, therefore, fees would have to be too high to be effective. Instead, we address slow jamming using local reputation.

As per our proposal, nodes keep track of their peers' past behavior. A routing node considers its peer "good" if it only forwards honest payments that resolve quickly and bring sufficient fee revenue. A peer that forwards jams, in contrast, loses reputation. Payments endorsed by a high-reputation peer are forwarded on the best efforts basis, while other ("high-risk") payments can only use a

Re: [Lightning-dev] Dynamic Commitments Part 2: Taprooty Edition

2022-11-01 Thread Matt Corallo
Right, I kinda assume we'll figure that out when we get PTLCs? There's also the channel-upgrade work 
that has been around for a while. Once we do that we can probably define a new type for PTLCs in 
"normal" channels.


Adding two types adds a bunch of complexity right now, and it would be effectively dead code that 
we're not likely to get right unless we have an immediate use/test for it. It is a good point, 
though, and something implementors should likely keep in mind when writing code.


Matt

On 10/28/22 12:35 AM, Johan Torås Halseth wrote:

Hi, Matt.

You're correct, I made the suggestion mainly because it would open up for PTLCs (or other future 
features) in today's channels.


Having the existing network close and reopen new channels would really slow the adoption of new 
channel features I reckon.


And I don't think it adds much complexity compared to the adapter approach.

- Johan

On Thu, Oct 27, 2022 at 4:54 PM Matt Corallo <lf-li...@mattcorallo.com> wrote:


I’m not sure I understand this - is there much reason to want taproot 
commitment outputs? I mean
they’re cool, and witnesses are a bit smaller, which is nice I guess, but 
they’re not providing
materially new features, AFAIU. Taproot funding, on the other hand, 
provides a Bitcoin-wide
privacy improvement as well as the potential future ability of channel 
participants to use multisig
for their own channel funds transparently.

Sure, if we’re doing taproot funding outputs we should probably just do it 
for the commitment
outputs as well, because why not (and it’s a prereq for PTLCs). But trying 
to split them up
seems like added complexity “just because”? I suppose it tees us up for 
eventual PTLC support in
todays channels, but we can also consider that separately when we get to 
that point, IMO.

Am I missing some important utility of taproot commitment transaction 
outputs?

Matt


On Oct 27, 2022, at 02:17, Johan Torås Halseth <joha...@gmail.com> wrote:


Hi, Laolu.

I think it could be worth considering dividing the taprootyness of a 
channel into two:
1) taproot funding output
2) taproot commitment outputs

That way we could upgrade existing channels only on the commitment level, 
not needing to close
or re-anchor the channels using an adapter in order to get many of the 
taproot benefits.

New channels would use taproot multisig (musig2) for the funding output.

This seems to be less disruptive to the existing network, and we could get 
features enabled by
taproot to larger parts of the network quicker. And to me this seems to 
carry less complexity
(and closing fees) than an adapter.

One caveat is that this wouldn't work (I think) for Eltoo channels, as the 
funding output
would not be plain multisig anymore.

- Johan

On Sat, Mar 26, 2022 at 1:27 AM Antoine Riard <antoine.ri...@gmail.com> wrote:

Hi Laolu,

Thanks for the proposal, quick feedback.

> It *is* still the case that _ultimately_ the two transactions to 
close the
> old segwit v0 funding output, and re-open the channel with a new 
segwit v1
> funding output are unavoidable. However this adapter commitment lets 
peers
> _defer_ these two transactions until closing time.

I think there is one downside coming with adapter commitment, which is 
the uncertainty of
the fee overhead at the closing time. Instead of closing your segwit v0 
channel _now_ with
known fees, when your commitment is empty of time-sensitive HTLCs, 
you're taking the risk
of closing during fees spikes, due a move triggered by your 
counterparty, when you might
have HTLCs at stake.

It might be more economically rational for a LN node operator to pay 
the upgrade cost now
if they wish  to benefit from the taproot upgrade early, especially if 
long-term we expect
block fees to increase, or wait when there is a "normal" cooperative 
closing.

So it's unclear to me what the economic gain of adapter commitments ?

> In the remainder of this mail, I'll describe an alternative
> approach that would allow upgrading nearly all channel/commitment 
related
> values (dust limit, max in flight, etc), which is inspired by the way 
the
> Raft consensus protocol handles configuration/member changes.

Long-term, I think we'll likely need a consensus protocol anyway for 
multi-party
constructions (channel factories/payment pools). AFAIU this proposal 
doesn't aim to roll
out a full-fledged consensus protocol *now* though it could be wise to 
ensure what we're
building slowly moves in this direction. Less critical code to maintain 
across bitcoin
codebases/toolchains.


Re: [Lightning-dev] Dynamic Commitments Part 2: Taprooty Edition

2022-10-27 Thread Matt Corallo via Lightning-dev
I'm not sure I understand this - is there much reason to want taproot commitment outputs? I mean they're cool, and witnesses are a bit smaller, which is nice I guess, but they're not providing materially new features, AFAIU. Taproot funding, on the other hand, provides a Bitcoin-wide privacy improvement as well as the potential future ability of channel participants to use multisig for their own channel funds transparently.

Sure, if we're doing taproot funding outputs we should probably just do it for the commitment outputs as well, because why not (and it's a prereq for PTLCs). But trying to split them up seems like added complexity "just because"? I suppose it tees us up for eventual PTLC support in today's channels, but we can also consider that separately when we get to that point, IMO.

Am I missing some important utility of taproot commitment transaction outputs?

Matt

On Oct 27, 2022, at 02:17, Johan Torås Halseth wrote:

Hi, Laolu.

I think it could be worth considering dividing the taprootyness of a channel into two:
1) taproot funding output
2) taproot commitment outputs

That way we could upgrade existing channels only on the commitment level, not needing to close or re-anchor the channels using an adapter in order to get many of the taproot benefits.

New channels would use taproot multisig (musig2) for the funding output.

This seems to be less disruptive to the existing network, and we could get features enabled by taproot to larger parts of the network quicker. And to me this seems to carry less complexity (and closing fees) than an adapter.

One caveat is that this wouldn't work (I think) for Eltoo channels, as the funding output would not be plain multisig anymore.

- Johan

On Sat, Mar 26, 2022 at 1:27 AM Antoine Riard wrote:

Hi Laolu,

Thanks for the proposal, quick feedback.

> It *is* still the case that _ultimately_ the two transactions to close the
> old segwit v0 funding output, and re-open the channel with a new segwit v1
> funding output are unavoidable. However this adapter commitment lets peers
> _defer_ these two transactions until closing time.

I think there is one downside coming with adapter commitment, which is the uncertainty of the fee overhead at closing time. Instead of closing your segwit v0 channel _now_ with known fees, when your commitment is empty of time-sensitive HTLCs, you're taking the risk of closing during fee spikes, due to a move triggered by your counterparty, when you might have HTLCs at stake.

It might be more economically rational for a LN node operator to pay the upgrade cost now if they wish to benefit from the taproot upgrade early, especially if long-term we expect block fees to increase, or to wait until there is a "normal" cooperative closing.

So it's unclear to me what the economic gain of adapter commitments is?

> In the remainder of this mail, I'll describe an alternative
> approach that would allow upgrading nearly all channel/commitment related
> values (dust limit, max in flight, etc), which is inspired by the way the
> Raft consensus protocol handles configuration/member changes.

Long-term, I think we'll likely need a consensus protocol anyway for multi-party constructions (channel factories/payment pools). AFAIU this proposal doesn't aim to roll out a full-fledged consensus protocol *now*, though it could be wise to ensure what we're building slowly moves in this direction. Less critical code to maintain across bitcoin codebases/toolchains.

> The role of the signature it to prevent "spoofing" by one of the parties
> (authenticate the param change), and also it serves to convince a party that
> they actually sent a prior commitment propose update during the
> retransmission phase.

What's the purpose of data origin authentication if we assume only two parties running over Noise_XK? I think it's already a security property we have. Though if we think we're going to reuse these dynamic upgrades for N counterparties communicating through a coordinator, yes I think it's useful.

> In the past, when ideas like this were brought up, some were concerned that
> it wouldn't really be possible to do this type of updates while existing
> HTLCs were in flight (hence some of the ideas to clear out the commitment
> beforehand).

The dynamic upgrade might serve in an emergency context where we don't have the leisure to wait for the settlement of the pending HTLCs. The timing of those might be beyond the coordination of the link counterparties. Thus, we have to allow upgrades of non-empty commitments (and if there are undesirable interferences between new commitment types and HTLCs/PTLCs present, deal with them case-by-case).

Antoine

On Thu, Mar 24, 2022 at 18:53, Olaoluwa Osuntokun wrote:

Hi y'all,

## Dynamic Commitments Retrospective

Two years-ish ago I made a mailing list post on some ideas re dynamic commitments [1], and how the concept can be used to allow us to upgrade channel types on the fly, and also remove pesky hard coded limits like the 483 HTLC in-flight limit that's pr

Re: [Lightning-dev] `htlc_maximum_msat` as a valve for flow control on the Lightning Network

2022-09-23 Thread Matt Corallo

Two questions -
a) How much gossip overhead do you expect this type of protocol to generate/is there a useful 
outcome for this type of update even if you limit gossip updates to once/twice/thrice per day?
b) What are the privacy implications of the naive "update on drained channel", and have you done any 
analysis of the value of this type of gossip update at different levels of privacy?


Thanks,
Matt

On 9/22/22 2:40 AM, René Pickhardt via Lightning-dev wrote:

Good morning fellow Lightning Developers,

I am pleased to share my most recent research results [1] with you. They may (if at all) only have a 
small impact on protocol development / specification but are actually mainly of concern to node 
operators and LSPs. I still thought they may be relevant for the list.


While trying to estimate the expected liquidity distribution in depleted channels due to drain via 
Markov Models I realized that we can exploit the `htlc_maxium_msat` setting to act as a control 
valve and regulate the "pressure" coming from the drain and mitigate the depletion of channels. Such 
ideas are btw not novel at all and heavily used in fluid networks [2]. Thus it seems very natural 
that we do the same on the Lightning Network.


In the article we show within a theoretic model how expected payment failure rates per channel may 
drop significantly by up to an order of magnitude if channels set up proper asymmetric 
`htlc_maximum_msat` pairs.


We furthermore provide in our iPython notebook [3] two experimental algorithmic ideas with which 
node operators can find decent `htlc_maximum_msat` values in a greedy fashion. One of the algorithms 
does not even require knowing the drain or payment size distribution or building the Markov model, 
but just looks at the liquidity distribution in the channel at the last x routing attempts and 
adjusts the `htlc_maximum_msat` value if the distribution is too far away from a uniform distribution.
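
For illustration, here is a minimal Python sketch of that second idea - not the notebook's [3] 
actual code - assuming we keep the observed local liquidity (in msat) at the last x routing 
attempts; the thresholds and halving/doubling steps are made-up values:

    # Toy valve: nudge htlc_maximum_msat down when the channel looks drained,
    # and back up when the liquidity distribution looks roughly uniform again.
    # The 0.25 / 0.40 thresholds and the step sizes are assumptions.
    def adjust_htlc_maximum_msat(current_max_msat, capacity_msat, recent_local_liquidity_msat):
        mean_fraction = sum(recent_local_liquidity_msat) / (len(recent_local_liquidity_msat) * capacity_msat)
        if mean_fraction < 0.25:
            # drained towards empty on our side: throttle the valve on this half-channel
            return max(current_max_msat // 2, 1_000)
        if mean_fraction > 0.40:
            # close enough to uniform again: open the valve, up to the channel capacity
            return min(current_max_msat * 2, capacity_msat)
        return current_max_msat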


Looking forward to your thoughts and feedback.

with kind regards Rene


[1]: 
https://blog.bitmex.com/the-power-of-htlc_maximum_msat-as-a-control-valve-for-better-flow-control-improved-reliability-and-lower-expected-payment-failure-rates-on-the-lightning-network/ 

[2]: https://en.wikipedia.org/wiki/Control_valve 

[3]: 
https://github.com/lnresearch/Flow-Control-on-Lightning-Network-Channels-with-Drain-via-Control-Valves/blob/main/htlc_maximum_msat%20as%20a%20valve%20for%20flow%20control%20on%20the%20Lightnig%20network.ipynb 


--
https://ln.rene-pickhardt.de 



Re: [Lightning-dev] Onion messages rate-limiting

2022-07-10 Thread Matt Corallo




On 7/10/22 4:43 AM, Joost Jager wrote:
It can also be considered a bad thing that DoS ability is not based on a number of messages. It 
means that for the one time cost of channel open/close, the attacker can generate spam forever if 
they stay right below the rate limit.


I don't see why this is a problem? This seems to assume some kind of per-message cost that nodes 
have to bear, but there is simply no such thing. Indeed, if message spam causes denial of service to 
other network participants, this would be an issue, but an attacker generating spam from one 
specific location within the network should not cause that, given some form of backpressure within 
the network.


Suppose the attacker has enough channels to hit the rate limit on an important connection some hops 
away from themselves. They can then sustain that attack indefinitely, assuming that they stay below 
the rate limit on the routes towards the target connection. What will the response be in that case? 
Will node operators work together to try to trace back to the source and take down the attacker? 
That requires operators to know each other.


No it doesn't, backpressure works totally fine and automatically applies pressure backwards until 
nodes, in an automated fashion, are appropriately ratelimiting the source of the traffic.


Maybe this is a difference between the lightning network and the internet that is relevant for this 
discussion: routers on the internet know each other and have physical links between them, whereas in 
lightning ties can be much looser.


No? The internet does not work by ISPs calling each other up on the phone to apply backpressure 
manually whenever someone sends a lot of traffic? If anything lightning ties between nodes are much, 
much stronger than ISPs on the internet - you generally are at least loosely trusting your peer with 
your money, not just your customer's customer's bits.


Matt


Re: [Lightning-dev] Onion messages rate-limiting

2022-07-03 Thread Matt Corallo



On 7/1/22 8:48 PM, Olaoluwa Osuntokun wrote:

Hi Val,

 > Another huge win of backpressure is that it only needs to happen in DoS
 > situations, meaning it doesn’t have to impact users in the normal case.

I agree, I think the same would apply to prepayments as well (0 or 1 msat in
calm times). My main concern with relying _only_ on backpressure rate
limiting is that we'd end up w/ your first scenario more often than not,
which means routine (and more important to the network) things like fetching
invoices becomes unreliable.


You're still thinking about this in a costing world, but this really is a networking problem, not a 
costing one.



I'm not saying we should 100% compare onion messages to Tor, but that we might
be able to learn from what works and what isn't working for them. The systems
aren't identical, but have some similarities.


To DoS here you have to have *very* asymmetric attack power - regular ol' invoice requests are 
trivial amounts of bandwidth, like, really, really trivial. Like, 1000x less bandwidth than an 
average ol' home node on a DOCSIS high-latency line with 20Mbps up has available. Closer to 
1,000,000x less if we're talking about "real metal".


More importantly, Tor's current attack actually *isn't* a simple DoS attack. The attack there isn't 
relevant to onion messages at all, you're just throwing up roadblocks with nonsense here.




On the topic of parameters across the network: could we end up in a scenario
where someone is doing like streaming payments for a live stream (or w/e),
ends up fetching a ton of invoices (actual traffic leading to payments), but
then ends up being erroneously rate limited by their peers? Assuming they
have 1 or 2 channels that have now all been clamped down, is waiting N
minutes (or w/e) their only option? If so then this might lead to their
livestream (data being transmitted elsewhere) being shut off. Oops, they just
missed the greatest World Cup goal in history!  You had to be there, you had to
be there, you had to *be* there...


You're basically making a "you had to have more inbound capacity" argument, which, sure, yes, you 
do. Even better, though, onion messages are *cheap*, like absurdly cheap, so if you have enough 
inbound capacity you're almost certain to have enough inbound *network* capacity to handle some 
invoice requests, hell, they're a millionth the cost of the HTLCs you're about to receive 
anyway...this argument is just nonsense.




Another question on my mind is: if this works really well for rate limiting of
onion messages, then why can't we use it for HTLCs as well?


We do? 400-some-odd HTLCs in flight at once is a *really* tight rate limit, even! Order of 
magnitudes tighter than onion message rate limits need to be :)


Matt


Re: [Lightning-dev] Onion messages rate-limiting

2022-07-03 Thread Matt Corallo




On 7/1/22 9:09 PM, Olaoluwa Osuntokun wrote:

Hi Matt,

 > Ultimately, paying suffers from the standard PoW-for-spam issue - you
 > cannot assign a reasonable cost that an attacker cares about without
 > impacting the system's usability due to said cost.

 > Applying this statement to a related area


I mean, I think it's only mostly-related, 'cause HTLCs are pretty different in 
cost, but.


would you also agree that proposals
to introduce pre-payments for HTLCs to mitigate jamming attacks is similarly
a dead end?


I dunno if it's a "dead end", but, indeed, I'm definitely no fan whatsoever of the naive proposals. I 
certainly remain open to being shown I'm wrong.



Personally, this has been my opinion for some time now. Which
is why I advocate for the forwarding pass approach (gracefully degrade to
stratified topology), which in theory would allow the major flows of the
network to continue in the face of disruption.


I'm starting to come around to allowing a "pay per HTLC-locked-time" fee, with Rusty's proposal 
around allowing someone to force-close a channel to "blame" a hop for not failing back after fees 
stop coming in. It's really nifty in theory and doesn't have all the classic issues that 
up-front-fees have, but it puts a very, very, very high premium on high uptime, which may be 
catastrophic, dunno.


Matt


Re: [Lightning-dev] Onion messages rate-limiting

2022-06-30 Thread Matt Corallo
One further note, I don’t think it makes sense to specify exactly what the 
rate-limiting behavior is here - if a node wants to do something other than the 
general “keep track of last forwarded message source and rate limit them” logic 
they should be free to, there’s no reason that needs to be normative (and there 
may be some reason to think it’s vulnerable to a node deliberately causing one 
inbound edge to be limited even though they’re spamming a different one).

> On Jun 29, 2022, at 04:28, Bastien TEINTURIER  wrote:
> 
> 
> During the recent Oakland Dev Summit, some lightning engineers got together 
> to discuss DoS
> protection for onion messages. Rusty proposed a very simple rate-limiting 
> scheme that
> statistically propagates back to the correct sender, which we describe in 
> details below.
> You can also read this in gist format if that works better for you [1].
> Nodes apply per-peer rate limits on _incoming_ onion messages that should be 
> relayed (e.g.
> N/seconds with some burst tolerance). It is recommended to allow more onion 
> messages from
> peers with whom you have channels, for example 10/seconds when you have a 
> channel and 1/second
> when you don't.
> 
> When relaying an onion message, nodes keep track of where it came from (by 
> using the `node_id` of
> the peer who sent that message). Nodes only need the last such `node_id` per 
> outgoing connection,
> which ensures the memory footprint is very small. Also, this data doesn't 
> need to be persisted.
> 
> Let's walk through an example to illustrate this mechanism:
> 
> * Bob receives an onion message from Alice that should be relayed to Carol
> * After relaying that message, Bob stores Alice's `node_id` in its 
> per-connection state with Carol
> * Bob receives an onion message from Eve that should be relayed to Carol
> * After relaying that message, Bob replaces Alice's `node_id` with Eve's 
> `node_id` in its
> per-connection state with Carol
> * Bob receives an onion message from Alice that should be relayed to Dave
> * After relaying that message, Bob stores Alice's `node_id` in its 
> per-connection state with Dave
> * ...
> 
> We introduce a new message that will be sent when dropping an incoming onion 
> message because it
> reached rate limits:
> 
> 1. type: 515 (`onion_message_drop`)
> 2. data:
>* [`rate_limited`:`u8`]
>* [`shared_secret_hash`:`32*byte`]
> 
> Whenever an incoming onion message reaches the rate limit, the receiver sends 
> `onion_message_drop`
> to the sender. The sender looks at its per-connection state to find where the 
> message was coming
> from and relays `onion_message_drop` to the last sender, halving their rate 
> limits with that peer.
> 
> If the sender doesn't overflow the rate limit again, the receiver should 
> double the rate limit
> after 30 seconds, until it reaches the default rate limit again.
> 
> The flow will look like:
> 
> Alice  Bob  Carol
>   | | |
>   |  onion_message  | |
>   |>| |
>   | |  onion_message  |
>   | |>|
>   | |onion_message_drop   |
>   | |<|
>   |onion_message_drop   | |
>   |<| |
> 
> The `shared_secret_hash` field contains a BIP 340 tagged hash of the Sphinx 
> shared secret of the
> rate limiting peer (in the example above, Carol):
> 
> * `shared_secret_hash = SHA256(SHA256("onion_message_drop") || 
> SHA256("onion_message_drop") || sphinx_shared_secret)`
> 
> This value is known by the node that created the onion message: if 
> `onion_message_drop` propagates
> all the way back to them, it lets them know which part of the route is 
> congested, allowing them
> to retry through a different path.
> 
> Whenever there is some latency between nodes and many onion messages, 
> `onion_message_drop` may
> be relayed to the incorrect incoming peer (since we only store the `node_id` 
> of the _last_ incoming
> peer in our outgoing connection state). The following example highlights this:
> 
>  Eve   Bob  Carol
>   |  onion_message  | |
>   |>|  onion_message  |
>   |  onion_message  |>|
>   |>|  onion_message  |
>   |  onion_message  |>|
>   |>|  onion_message  |
> |>|
> Alice   |onion_message_drop   |
>   |  onion_message  |+|
>   |>|  onion_message ||
>   | |---
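
For readers who prefer code, a minimal Python sketch of the behaviour described above - the per-peer 
limit, halving when an `onion_message_drop` comes back, doubling again after 30 seconds of good 
behaviour, and the tagged hash - where the concrete rates and the class/function names are 
illustrative, not part of the proposal:

    import hashlib, time

    def shared_secret_hash(sphinx_shared_secret: bytes) -> bytes:
        # BIP 340 style tagged hash, exactly as in the formula quoted above.
        tag = hashlib.sha256(b"onion_message_drop").digest()
        return hashlib.sha256(tag + tag + sphinx_shared_secret).digest()

    class IncomingPeerLimit:
        """Rate limit applied to one incoming peer's relayed onion messages."""

        def __init__(self, default_rate=10.0):
            self.default_rate = default_rate    # e.g. 10/s for peers we have a channel with
            self.rate = default_rate            # current (possibly halved) limit
            self.tokens = default_rate
            self.last_refill = time.monotonic()
            self.last_overflow = 0.0

        def allow(self) -> bool:
            now = time.monotonic()
            self.tokens = min(self.rate, self.tokens + (now - self.last_refill) * self.rate)
            self.last_refill = now
            # recover: double the limit after 30s without an overflow, up to the default
            if self.rate < self.default_rate and now - self.last_overflow > 30:
                self.rate = min(self.default_rate, self.rate * 2)
            if self.tokens >= 1:
                self.tokens -= 1
                return True
            self.last_overflow = now    # caller would send onion_message_drop to this peer
            return False

        def on_drop_relayed_back(self):
            # a downstream peer relayed onion_message_drop back towards this
            # incoming peer: halve the rate limit we grant them
            self.rate = max(self.rate / 2, 0.1)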

Re: [Lightning-dev] LN Summit 2022 Notes & Summary/Commentary

2022-06-30 Thread Matt Corallo


> On Jun 28, 2022, at 19:11, Peter Todd  wrote:
> 
> Idle question: would it be worthwhile to allow people to opt-in to their
> payments happening more slowly for privacy? At the very least it'd be fine if
> payments done by automation for rebalancing, etc. happened slowly.

Yea, actually, I think that’d be a really cool idea. Obviously you don’t want 
to hold onto an HTLC for much longer than you have to or you’re DoS’ing 
yourself using up channel capacity, but most channels spend the vast majority 
of their time with zero HTLCs, so waiting a second instead of 100ms to batch 
seems totally reasonable.


Re: [Lightning-dev] Onion messages rate-limiting

2022-06-29 Thread Matt Corallo
Thanks Bastien for writing this up! This is a pretty trivial and straightforward way to rate-limit 
onion messages in a way that allows legitimate users to continue using the system in spite of some 
bad actors trying (and failing, due to being rate-limited) to DoS the network.


I do think any spec for this shouldn't make any recommendations about willingness to relay onion 
messages for anonymous no-channel third parties, if anything deliberately staying mum on it and 
allowing nodes to adapt policy (and probably rate-limit no-channel third-parties before they rate 
limit any peer they have a channel with). Ultimately, we have to assume that nodes will attempt to 
send onion messages by routing through the existing channel graph, so there's little reason to worry 
too much about ensuring ability to relay for anonymous parties.


Better yet, as Val points out, requiring a channel to relay onion messages puts a very real, 
nontrivial (in a world of msats) cost on getting an onion messaging channel. Further, with 
backpressure the ability to DoS onion message links isn't denominated in the number of messages, but 
instead in the number of channels you are able to create, making the backpressure system equivalent 
to today's HTLC DoS considerations, whereas explicit payment allows an attacker to pay much less to 
break the system.


As for the proposal to charge for onion messages, I'm still not at all sure where its coming from. 
It seems to flow from a classic "have a hammer (a system to make micropayments for things), better 
turn this problem into a nail (by making users pay for it)" approach, but it doesn't actually solve 
the problem at hand.


Even if you charge for onion messages, users may legitimately want to send a bunch of payments in 
bulk, and trivially overflow a home or Tor nodes' bandwidth. The only response to that, whether its 
a DoS attack or a legitimate user, is to rate-limit, and to rate-limit in a way that tells the user 
sending the messages to back off! Sure, you could do that by failing onion messages with an error 
that updates the fee you charge, but you're ultimately doing a poor-man's (or, I suppose, 
rich-man's) version of what Bastien proposes, not adding some fundamental difference.


Ultimately, paying suffers from the standard PoW-for-spam issue - you cannot assign a reasonable 
cost that an attacker cares about without impacting the system's usability due to said cost. Indeed, 
making it expensive enough to mount a months-long DDoS without impacting legitimate users would be 
pretty easy - at 1msat per relay of a 1366 byte onion message you can only saturate an average home 
user's 30Mbps connection for 30 minutes before you rack up a dollar in costs - but if your concern 
is whether someone can reasonably trivially take out the network for minutes at a time to make it 
have perceptibly high failure rates, no reasonable cost scheme will work. Quite the opposite - the 
only reasonable way to respond to a spike in traffic while maintaining QoS is to rate-limit by 
inbound edge!
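
For concreteness, the back-of-the-envelope arithmetic behind that dollar figure (the ~USD 20,000/BTC 
rate used at the end is an assumption, not something stated above):

    link_bytes_per_s = 30e6 / 8           # 30 Mbps uplink
    msgs_per_s = link_bytes_per_s / 1366  # 1366-byte onion messages
    msgs_30_min = msgs_per_s * 30 * 60    # ~4.9 million messages
    cost_sat = msgs_30_min * 1 / 1000     # 1 msat per relay -> ~4,941 sat
    cost_usd = cost_sat / 100_000_000 * 20_000   # assumed BTC price -> ~USD 0.99
    print(round(msgs_30_min), round(cost_sat), round(cost_usd, 2))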


Ultimately, what we have here is a networking problem, that has to be solved with networking 
solutions, not a costing problem, which can be solved with payment. I can only assume that the 
desire to add a cost to onion messages ultimately stems from a desire to ensure every possible 
avenue for value extraction is given to routing nodes, but I think that desire is misplaced in this 
case - the cost of bandwidth is diminutive compared to other costs of routing node operation, 
especially when you consider sensible rate-limits as proposed in Bastien's email.


Indeed, if anyone were proposing rate-limits which would allow anything close to enough bandwidth 
usage to cause "lightning is turning into Tor and has Tor's problems" to be a legitimate concern I'd 
totally agree we should charge for its use. But no one is, nor has anyone ever seriously, to my 
knowledge, proposed such a thing. If lightning messages get deployed and start eating up even single 
Mbps's on a consistent basis on nodes, we can totally revisit this, it's not like we are shutting the 
door to any possible costing system if it becomes necessary, but rate-limiting has to happen either 
way, so we should start there and see if we need costing, not jump to costing on day one, hampering 
utility.


Matt

On 6/29/22 8:22 PM, Olaoluwa Osuntokun wrote:

Hi t-bast,

Happy to see this finally written up! With this, we have two classes of
proposals for rate limiting onion messaging:

   1. Back propagation based rate limiting as described here.

   2. Allowing nodes to express a per-message cost for their forwarding
   services, which is described here [1].

I still need to digest everything proposed here, but personally I'm more
optimistic about the 2nd category than the 1st.

One issue I see w/ the first category is that a single party can flood the
network and cause nodes to trigger their rate limits, which then affects the
usability of the 

Re: [Lightning-dev] LN Summit 2022 Notes & Summary/Commentary

2022-06-28 Thread Matt Corallo




On 6/28/22 9:05 AM, Christian Decker wrote:

It is worth mentioning here that the LN protocol is generally not very
latency sensitive, and from my experience can easily handle very slow
signers (3-5 seconds delay) without causing too many issues, aside from
slower forwards in case we are talking about a routing node. I'd expect
routing node signers to be well below the 1 second mark, even when
implementing more complex signer logic, including MuSig2 or nested
FROST.


In general, and especially for "edge nodes", yes, but if forwarding nodes start taking a full second 
to forward a payment, we probably need to start aggressively avoiding any such nodes - while I'd 
love for all forwarding nodes to take 30 seconds to forward to improve privacy, users ideally expect 
payments to complete in 100ms, with multiple payment retries in between.


This obviously probably isn't ever going to happen in lightning, but getting 95th percentile 
payments down to one second is probably a good goal, something that requires never having to retry 
payments and also having forwarding nodes not take more than, say, 150ms.
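
A quick sanity check on that budget, with a path length that is purely an assumption on my part:

    # Assumed path of 5 forwarding hops (not a number from the text above):
    print(5 * 150)  # 750 ms of pure forwarding delay, before endpoint work or any retry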


Of course I don't think we should ever introduce a timeout on the peer level - if your peer went 
away for a second and isn't responding quickly to channel updates it doesn't merit closing a 
channel, but it's something we will eventually want to handle in route selection if it becomes more 
of an issue going forward.


Matt


Re: [Lightning-dev] Gossip Propagation, Anti-spam, and Set Reconciliation

2022-05-26 Thread Matt Corallo




On 5/26/22 8:59 PM, Alex Myers wrote:

Ah, this is an additional proposal on top, and requires a gossip "hard fork", 
which means your new
protocol would only work for taproot channels, and any old/unupgraded channels 
will have to be
propagated via the old mechanism. I'd kinda prefer to be able to rip out the 
old gossip sync code
sooner than a few years from now :(.


I viewed it as a soft fork, where if you want to use set reconciliation, 
anything added to the set would be subject to a constricted ruleset - in this 
case the gossip must be accompanied by a blockheight tlv (or otherwise 
reference a blockheight) and it must not replace a message in the current 100 
block range.

It doesn't necessarily have to reference blockheight, but that would simplify 
many edge cases.  The key is merely that a node is responsible for limiting 
its own gossip to a predefined interval, and it must be easily verifiable for 
any other nodes building and reconciling sketches.  Given that we have access 
to a timechain, this just made the most sense.


Ah, good point, you can just add it as a TLV. It still implies that "old-gossip" can't go away for a 
long time - ~everyone has to upgrade, so we'll have two parallel systems. Worse, people are relying 
on the old behavior and some nodes may avoid upgrading to avoid the new rate-limits :(.



If some nodes have 600000 and others have 600099 (because you broke the
ratelimiting recommendation, and propagated both approx the same
time), then the network will split, sure.



Right, so what do you do in that case, though? AFAIU, in your proposed sync 
mechanism if a node does
this once, you're stuck with all of your gossip reconciliations with every peer 
"wasting" one
difference "slot" for a day or however long it takes before the peer does a 
sane update. In my
proposed alternative it only appears once and then you move on (or maybe once 
more on startup, but
we can maybe be willing to take on some extra cost there?).


This case may not be all that difficult. Easiest answer is you offer a spam 
proof to your peer.  Send both messages, signed by the offending node as proof 
they violated the set reconciliation rate limit, then remove both from your 
sketch. You may want to keep the evidence in your data store, at least until 
it's superseded by the next valid update, but there's no reason it must occupy 
a slot of the sketch.  Meanwhile, feel free to use the message as you wish, 
just keep both out of the sketch. It's not perfect, but the sketch capacity is 
not compromised and the second incidence of spam should not propagate at all. 
(It may be possible to keep one, but this is the simplest answer.)


Right, well if we're gonna start adding "spam-proofs" we shouldn't be complaining about the 
complexity of tracking the changed-set :p.


Worse, unlike tracking the changed-set as proposed, this protocol is a ton of unused code to handle 
an edge case we should only rarely hit...in other words code that will almost certainly be buggy, 
untested, and fail if people start hitting it. In general, I'm not a huge fan of protocols with any 
more usually-unused code than is strictly necessary.


This also doesn't capture things like channel_update extensions - BOLTs today say a recipient "MAY 
choose NOT to for messages longer than the minimum expected length" - so now we'd need to remove 
that (and I guess have a fixed "maximum length" for channel updates that everyone agrees 
to)...basically we have to have exact consensus on valid channel updates across nodes.



Heh, I'm surprised you'd complain about this - IIUC your existing gossip 
storage system keeps this
as a side-effect so it'd be a single integer for y'all :p. In any case, if it 
makes the protocol a
chunk more efficient I don't see why its a big deal to keep track of the set of 
which invoices have
changed recently, you could even make it super efficient by just saying 
"anything more recent than
timestamp X except a few exceptions that we got with some lag against the update 
timestamp".


The benefit of a single global sketch is less overhead in adding additional 
gossip peers, though looking at the numbers, sketch decoding time seems to be 
the more significant driving factor than rebuilding sketches (when they're 
incremental.) I also like maximizing the utility of the sketch by adding the 
full gossip store if possible.


Note that the alternative here does not prevent you from having a single global sketch. You can keep 
a rolling global sketch that you send to all your peers at once, it would just be a bit of a 
bandwidth burst when they all request a few channel updates/announcements from you.


More generally, I'm somewhat surprised to hear a performance concern here - I can't imagine we'd be 
including any more entries in such a sketch than Bitcoin Core does transactions to relay to peers, 
and this is exactly the design direction they went in (because of basically the same concerns).



I still think getting th

Re: [Lightning-dev] Gossip Propagation, Anti-spam, and Set Reconciliation

2022-05-26 Thread Matt Corallo




On 5/26/22 1:25 PM, Rusty Russell wrote:

Matt Corallo  writes:

I agree there should be *some* rough consensus, but rate-limits are a 
locally-enforced thing, not a
global one. There will always be races and updates you reject that your peers 
dont, no matter the
rate-limit, and while I agree we should have guidelines, we can't "just make them 
the same" - it
both doesn't solve the problem and means we can't change them in the future.


Sure it does!  It severely limits the set divergence to race conditions
(down to block height divergence, in practice).


Huh? There's always some line you draw, if an update happens right on the line 
(which they almost
certainly often will because people want to update, and they'll update every X 
hours to whatever the
rate limit is), then ~half the network will accept the update and half won't. I 
don't see how you
solve this problem.


The update contains a block number.  Let's say we allow an update every
100 blocks.  This must be <= current block height (and presumably, newer
than height - 2016).

If you send an update number 600000, and then 600100, it will propagate.
600099 will not.
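
To make the quoted rule concrete, a small Python sketch (the 2016-block lower bound comes from the 
parenthetical above; everything else is illustrative):

    def accept_update(update_height, last_accepted_height, current_height, interval=100):
        # the update's block number must be <= current height and reasonably recent
        if update_height > current_height or update_height < current_height - 2016:
            return False
        # and at least `interval` blocks newer than the last update we accepted
        return update_height >= last_accepted_height + interval

    # With the last accepted update at 600000 and the tip at 600200:
    print(accept_update(600100, 600000, 600200))  # True  - propagates
    print(accept_update(600099, 600000, 600200))  # False - does not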


Ah, this is an additional proposal on top, and requires a gossip "hard fork", which means your new 
protocol would only work for taproot channels, and any old/unupgraded channels will have to be 
propagated via the old mechanism. I'd kinda prefer to be able to rip out the old gossip sync code 
sooner than a few years from now :(.



If some nodes have 600000 and others have 600099 (because you broke the
ratelimiting recommendation, *and* propagated both approx the same
time), then the network will split, sure.


Right, so what do you do in that case, though? AFAIU, in your proposed sync mechanism if a node does 
this once, you're stuck with all of your gossip reconciliations with every peer "wasting" one 
difference "slot" for a day or however long it takes before the peer does a sane update. In my 
proposed alternative it only appears once and then you move on (or maybe once more on startup, but 
we can maybe be willing to take on some extra cost there?).



Maybe.  What's a "non-update" based sketch?  Some huge percentage of
gossip is channel_update, so it's kind of the thing we want?


Oops, maybe we're on *very* different pages, here - I mean doing sketches based on 
"the things that
I received since the last sync, ie all the gossip updates from the last hour" 
vs doing sketches
based on "the things I have, ie my full gossip store".


But that requires state.  Full store requires none, keeping it
super-simple


Heh, I'm surprised you'd complain about this - IIUC your existing gossip storage system keeps this 
as a side-effect so it'd be a single integer for y'all :p. In any case, if it makes the protocol a 
chunk more efficient I don't see why its a big deal to keep track of the set of which invoices have 
changed recently, you could even make it super efficient by just saying "anything more recent than 
timestamp X *except* a few exceptions that we got with some lag against the update timestamp".


Better, the state is global, not per-peer, and a small fraction of the total state of the gossip 
store anyway, so it's not like it's introducing some new giant or non-constant-factor blowup.
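
A minimal sketch of that bookkeeping (names and structure are illustrative only): a single global 
watermark plus a small exception set, rather than any per-peer state:

    class RecentGossip:
        """Track 'everything newer than timestamp X, plus a few late arrivals'."""

        def __init__(self):
            self.watermark = 0       # timestamp X
            self.exceptions = set()  # (scid, direction) pairs received with lag

        def record(self, scid, direction, update_timestamp):
            # an update whose own timestamp predates the watermark still needs
            # to be offered in the next sync, so remember it explicitly
            if update_timestamp < self.watermark:
                self.exceptions.add((scid, direction))

        def advance(self, new_watermark):
            self.watermark = new_watermark
            self.exceptions.clear()

        def recent_ids(self, store):
            # store: iterable of (scid, direction, update_timestamp)
            return [(s, d) for (s, d, ts) in store
                    if ts >= self.watermark or (s, d) in self.exceptions]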


Matt


Re: [Lightning-dev] Gossip Propagation, Anti-spam, and Set Reconciliation

2022-05-26 Thread Matt Corallo

Oops, sorry, I don't really monitor the dev lists but once every few months so 
this fell off my plate :/

On 4/28/22 6:11 PM, Rusty Russell wrote:

Matt Corallo  writes:
OK, let's step back.  Unlike Bitcoin, we can use a single sketch for
*all* peers.  This is because we *can* encode enough information that
you can get useful info from the 64 bit id, and because it's expensive
to create them so you can't spam.


Yep, makes sense.


The more boutique per-peer handling we need, the further it gets from
this ideal.


The second potential thing I think you might have meant here I don't see as an 
issue at all? You can
simply...let the sketch include one channel update that you ignored? See above 
discussion of similar
rate-limits.


No, you need to get all the ignored ones somehow?  There's so much cruft
in the sketch you can't decode it.  Now you need to remember the ones
you ratelimited, and try to match other's ratelimiting.


Right, you'd end up downloading the thing you rate-limited, but only once (possibly per-peer). If 
you use the total-sync approach you'd download it on every sync, vs a "only updates" approach you'd 
do it once.



I agree there should be *some* rough consensus, but rate-limits are a 
locally-enforced thing, not a
global one. There will always be races and updates you reject that your peers 
dont, no matter the
rate-limit, and while I agree we should have guidelines, we can't "just make them 
the same" - it
both doesn't solve the problem and means we can't change them in the future.


Sure it does!  It severely limits the set divergence to race conditions
(down to block height divergence, in practice).


Huh? There's always some line you draw, if an update happens right on the line (which they almost 
certainly often will because people want to update, and they'll update every X hours to whatever the 
rate limit is), then ~half the network will accept the update and half won't. I don't see how you 
solve this problem.

Maybe.  What's a "non-update" based sketch?  Some huge percentage of
gossip is channel_update, so it's kind of the thing we want?


Oops, maybe we're on *very* different pages, here - I mean doing sketches based on "the things that 
I received since the last sync, ie all the gossip updates from the last hour" vs doing sketches 
based on "the things I have, ie my full gossip store".


Matt


Re: [Lightning-dev] #PickhardtPayments implemented in lnd-manageJ

2022-05-16 Thread Matt Corallo
It's probably worth somewhat disentangling the concept of switching to a minimum-cost flow routing 
algorithm from the concept of "scoring based on channel value and estimated available liquidity".


These are two largely-unrelated concepts that are being mashed into one in this description - the 
first concept needs zero-base-fee to be exact, though it's not clear to me that a heuristics-based 
approach won't give equivalent results in practice, given the noise in success rate compared to 
theory here.


The second concept is something that LDK (and I believe CLN and maybe even eclair now) do already, 
though lnd does not last I checked. For payments where MPP does not add much to success rate (i.e. 
payments where the amount is relatively "low" compared to available network liquidity) dijkstra's 
with a liquidity/channel-size based scoring will give you the exact same result.


For cases where you're sending an amount which is "high" compared to available network liquidity, 
taking a minimum-cost-flow algorithm becomes important, as you point out. Of course you're always 
going to suffer really slow payments and many retries in this case anyway.
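
For reference, the uniform-liquidity scoring both approaches build on can be written down in a few 
lines; this is my paraphrase of the standard formula from that line of work, not code from 
lnd-manageJ:

    from math import log

    def success_probability(amount_msat, capacity_msat):
        # assuming liquidity is uniformly distributed on [0, capacity]
        if amount_msat > capacity_msat:
            return 0.0
        return (capacity_msat + 1 - amount_msat) / (capacity_msat + 1)

    def uncertainty_cost(amount_msat, capacity_msat):
        # negative log-probability, so costs add up along a path / across shards
        p = success_probability(amount_msat, capacity_msat)
        return float("inf") if p == 0 else -log(p)

    print(uncertainty_cost(1_000_000, 100_000_000))   # small payment, big channel: ~0.01
    print(uncertainty_cost(90_000_000, 100_000_000))  # most of the capacity: ~2.3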


Matt

On 5/15/22 1:01 PM, Carsten Otto via Lightning-dev wrote:

Dear all,

the most recent version of lnd-manageJ [1] now includes basic, but usable,
support for #PickhardtPayments. I kindly invite you to check out the code, give
it a try, and use this work for upcoming experiments.

Teaser with video: https://twitter.com/c_otto83/status/1525879972786749453

The problem, heavily summarized:

- Sending payments in the LN often fails, especially with larger amounts.
- Splitting a large payment into smaller shards (using MPP) is helpful,
   as in general the smaller shards don't fail as often as a single large
   payment would. However, the success probability also drops
   exponentially with the number of channels included [2].
- Finding routes through the LN is tricky, as the channels' liquidity is
   uncertain at the time of computing the routes and a simple "trial and
   error" approach might take too long.
- There are several implementations using various heuristics, taking
   aspects like fees, previous failures, HTLC deltas, channel age, ...
   into account. Comparing these approaches is very hard.

The gist of #PickhardtPayments:

- The issue of finding MPP routes in the LN corresponds to the
   well-known minimum-cost flow problem in computer science (graph
   theory) with lots of related research, results, algorithms, ...
- As shown in the paper [3] the results are optimal, no matter which
   "cost" function is used to reason about routing costs (fees) and/or
   reliability. However, depending on the characteristics of the
   function, actually finding optimal results can be extremely hard
   (NP-complete in some cases). By imposing the zero base fee limitation
   and assuming a uniform distribution of funds, fast implementations
   (polynomial time with sub-second runtimes) can be used.
- Assuming (!) a uniform distribution of funds in each channel and zero
   base fee only, #PickhardtPayments offers an approach that is optimal,
   i.e. proven perfect and computationally feasible.
- The most basic version only considers uncertainty costs for
   reliability, but it is possible (and implemented in lnd-manageJ) to
   also consider routing costs (fee rates) and optimize for both features
   to come up with reliable and cheap-ish MPPs.
- The implementation of #PickhardtPayments in lnd-manageJ needs to
   ignore non-zero base fee channels to avoid extremely slow
   (NP-complete) computations. Furthermore, certain aspects are
   approximated [4]. As such, optimality claims are limited to the zero
   base fee subset of the LN, and real-world experiments might be tricky
   to interpret. However, as also shown in the testnet videos [5][6],
   first results are very promising!

The real strength of #PickhardtPayments:

- Liquidity information, for example obtained by previous failures, is
   taken into account. For each attempt, the relevant bits of information
   are obtained and will be used to guide the following attempts.
- As the underlying algorithm is proven to be optimal, we do not need to
   rely on heuristics. Instead, the algorithm happily finds routes that
   may be very long (but very probable/cheap, for whatever reason), have
   a surprising number of shards, or rather odd amounts.
- The underlying algorithm also deals with shared segments, i.e.
   individual channels that are used for more than one shard of the MPP,
   without oversaturating it.

The code in lnd-manageJ:

- Highly experimental, but it's a start!
- lnd + gRPC middleware + Java/Spring + PostgreSQL is a bit more complex
   than necessary.
- Only works with lnd.
- Doesn't really work with lnd until issue #5746 [7] is fixed. I'd be
   very happy for someone to have a look at my proposal (PR #6543 [8])!
- The code doesn't handle all corner cases, especially the
   less-than-usual failure co

Re: [Lightning-dev] Gossip Propagation, Anti-spam, and Set Reconciliation

2022-04-27 Thread Matt Corallo




On 4/26/22 11:53 PM, Rusty Russell wrote:

Matt Corallo  writes:

This same problem will occur if *anyone* does ratelimiting, unless
*everyone* does.  And with minisketch, there's a good reason to do so.


None of this seems like a good argument for *not* taking the "send updates 
since the last sync in
the minisketch" approach to reduce the damage inconsistent policies
cause, though?


You can't do this, with minisketch.  You end up having to keep all the
ratelimited differences you're ignoring *per peer*, and then cancelling
them out of the minisketch on every receive or send.


Hmm? I'm a bit confused, let me attempt to restate to make sure we're on the same page. What I 
*think* you said here is: "If you have a node which is rejecting a large percentage of *channels'* 
updates (on a per-channel, not per-update basis), and it tries to sync, you'll end up having to keep 
some huge set of 'I don't want any more updates for that channel' on a per-peer basis"? Or maybe you 
might have said "When you rate-limit, you have to tell your peer that you rate-limited a channel 
update and that it shouldn't add that update to its next sketch"?


Either way, I don't think its all that interesting an issue. The first case is definitely an issue, 
but is an issue in both a new-data-only sketch and all-data sketch world, and is not completely 
solved with identical rate-limits in any case. It can be largely addressed by sane software defaults 
and roughly-similar rate-limits, though, and because it's a per-channel, not per-update issue I'm 
much less concerned.


The second potential thing I think you might have meant here I don't see as an issue at all? You can 
simply...let the sketch include one channel update that you ignored? See above discussion of similar 
rate-limits.



So you end up doing what LND and core-lightning do, which is "pick 3
peers to gossip with" and tell everyone else to shut up.

Yet the point of minisketch is robustness; you can (at cost of 1 message
per minute) keep in sync with an arbitrary number of peers.

So, we might as well define a preferred ratelimit, so nodes know that
spamming past a certain point is not going to propagate.  At the moment,
LND has no effective ratelimit at all, so it's a race to the bottom.


I agree there should be *some* rough consensus, but rate-limits are a locally-enforced thing, not a 
global one. There will always be races and updates you reject that your peers dont, no matter the 
rate-limit, and while I agree we should have guidelines, we can't "just make them the same" - it 
both doesn't solve the problem and means we can't change them in the future.


Ultimately, an updates-based sync is more robust in such a case - if there's some race and your peer 
accepts something you don't it may mean one more entry in the sketch one time, but it won't hang 
around forever.



We need that limit eventually, this just makes it more of a priority.


I'm not really
sure in a world where you do "update-based-sketch" gossip sync you're any worse 
off than today even
with different rate-limit policies, though I obviously agree there are 
substantial issues with the
massively inconsistent rate-limit policies we see today.


You can't really do it, since rate-limited junk overwhelms the sketch
really fast :(


How is this any better in a non-update-based-sketch? The only way to address it is to have a bigger 
sketch, which you can do no matter the thing you're building the sketch over.


Maybe lets schedule a call to get on the same page, throwing text at each other will likely not move 
very quickly.


Matt


Re: [Lightning-dev] Gossip Propagation, Anti-spam, and Set Reconciliation

2022-04-24 Thread Matt Corallo



On 4/22/22 6:40 PM, Rusty Russell wrote:

Matt Corallo  writes:

Allowing only 1 a day, ended up with 18% of channels hitting the spam
limit.  We cannot fit that many channel differences inside a set!

Perhaps Alex should post his more detailed results, but it's pretty
clear that we can't stay in sync with this many differences :(


Right, the fact that most nodes don't do any limiting at all and y'all have a 
*very* aggressive (by
comparison) limit is going to be an issue in any context.


I'm unable to find the post years ago where I proposed this limit
and nobody had major objections.  I just volunteered to go first :)


I'm not trying to argue the number is good or bad, only that being several orders of magnitude away 
from everything else is going to lead to rejections.



We could set some guidelines and improve
things, but luckily regular-update-sync bypasses some of these issues anyway - 
if we sync once per
block and your limit is once per block, getting 1000 updates per block for some 
channel doesn't
result in multiple failures in the sync. Sure, multiple peers sending different 
updates for that
channel can still cause some failures, but its still much better.


Nodes will want to aggressively spam as much as they can, so I think we
need a widely-agreed limit.  I don't really care what it is, but
somewhere between one per 1 and one per 1000 blocks makes sense?


I don't really disagree, but my point is that we should strive for the sync system to not need to 
care about this number as much as possible. Because views of the rate limits are a local view, not a 
global view, you'll always end up with things on the edge getting rejected during sync, and, worse, 
when we eventually want to change the limit, we'd be hosed.




But we might end up with a gossip2 if we want to enable taproot, and use
blockheight as timestamps, in which case we could probably just support
that one operation (and maybe a direct query op).


Like eclair, we don’t bother to rate limit and don’t see any issues with it, 
though we will skip relaying outbound updates if we’re saturating outbound 
connections.


Yeah, we did as a trial, and in some cases it's become limiting.  In
particular, people restarting their LND nodes once a day resulting in 2
updates per day (which, in 0.11.0, we now allow).


What do you mean "its become limiting"? As in you hit some reasonably-low 
CPU/disk/bandwidth limit
in doing this? We have a pretty aggressive bandwidth limit for this kinda stuff 
(well, indirect
bandwidth limit) and it very rarely hits in my experience (unless the peer is 
very overloaded and
not responding to pings, which is a somewhat separate thing...)


By rejecting more than 1 per day, some LND nodes had 50% of their
channels left disabled :(

This same problem will occur if *anyone* does ratelimiting, unless
*everyone* does.  And with minisketch, there's a good reason to do so.


None of this seems like a good argument for *not* taking the "send updates since the last sync in 
the minisketch" approach to reduce the damage inconsistent policies cause, though? I'm not really 
sure in a world where you do "update-based-sketch" gossip sync you're any worse off than today even 
with different rate-limit policies, though I obviously agree there are substantial issues with the 
massively inconsistent rate-limit policies we see today.


Matt


Re: [Lightning-dev] Gossip Propagation, Anti-spam, and Set Reconciliation

2022-04-22 Thread Matt Corallo



On 4/21/22 7:20 PM, Rusty Russell wrote:

Matt Corallo  writes:

Sure, if you’re rejecting a large % of channel updates in total
you’re gonna end up hitting degenerate cases, but we can consider
tuning the sync frequency if that becomes an issue.


Let's be clear: it's a problem.

Allowing only 1 a day, ended up with 18% of channels hitting the spam
limit.  We cannot fit that many channel differences inside a set!

Perhaps Alex should post his more detailed results, but it's pretty
clear that we can't stay in sync with this many differences :(


Right, the fact that most nodes don't do any limiting at all and y'all have a *very* aggressive (by 
comparison) limit is going to be an issue in any context. We could set some guidelines and improve 
things, but luckily regular-update-sync bypasses some of these issues anyway - if we sync once per 
block and your limit is once per block, getting 1000 updates per block for some channel doesn't 
result in multiple failures in the sync. Sure, multiple peers sending different updates for that 
channel can still cause some failures, but its still much better.



gossip queries  is broken in at least five ways.


Naah, it's perfect if you simply want to ask "give me updates since XXX"
to get you close enough on reconnect to start using set reconciliation.
This might allow us to remove some of the other features?


Sure, but that's *just* the "gossip_timestamp_filter" message, there's several other messages and a 
whole query system that we can throw away if we just want that message :)



But we might end up with a gossip2 if we want to enable taproot, and use
blockheight as timestamps, in which case we could probably just support
that one operation (and maybe a direct query op).


Like eclair, we don’t bother to rate limit and don’t see any issues with it, 
though we will skip relaying outbound updates if we’re saturating outbound 
connections.


Yeah, we did as a trial, and in some cases it's become limiting.  In
particular, people restarting their LND nodes once a day resulting in 2
updates per day (which, in 0.11.0, we now allow).


What do you mean "its become limiting"? As in you hit some reasonably-low CPU/disk/bandwidth limit 
in doing this? We have a pretty aggressive bandwidth limit for this kinda stuff (well, indirect 
bandwidth limit) and it very rarely hits in my experience (unless the peer is very overloaded and 
not responding to pings, which is a somewhat separate thing...)


Matt


Re: [Lightning-dev] Gossip Propagation, Anti-spam, and Set Reconciliation

2022-04-22 Thread Matt Corallo



On 4/22/22 9:15 AM, Alex Myers wrote:

Hi Matt,

Appreciate your responses.  Hope you'll bear with me as I'm a bit new to this.

Instead of trying to make sure everyone’s gossip acceptance matches 
exactly, which as you point
it seems like a quagmire, why not (a) do a sync on startup and (b) do syncs 
of the *new* things.

I'm not opposed to this technique, and maybe it ends up as a better solution.  The rationale for not 
going full Erlay approach was that it's far less overhead to maintain a single sketch than to 
maintain a per-peer sketch and associated state for every gossip peer.  In this way there's very 
little cost to adding additional gossip peers, which further encourages propagation and convergence 
of the gossip network.


I'm not sure what you mean by per-node state here - I'd think you can implement it with a simple 
"list of updates that happened since time X" data, instead of having to maintain per-peer state.


IIUC Erlay's design was concerned for privacy of originating nodes.  Lightning gossip is public by 
nature, so I'm not sure we should constrain ourselves to the same design route without trying the 
alternative first.


Part of the design of Erlay, especially the insight of syncing updates instead of full mempools, was 
actually this precise issue - Bitcoin Core nodes differ in policy for a number of reasons 
(especially across updates), and thus syncing the full mempool will result in degenerate cases of 
trying over and over and over again to sync stuff your peer is rejecting. At least if I recall 
correctly.



if we're gonna add a minisketch-based sync anyway, please lets also use it 
for initial sync
after restart

This was out of the scope of what I had in mind, but I will give this some thought. I could see how 
a block_height reference coupled with set reconciliation could provide some better options here. 
This may not be all that difficult to shoe-horn in.


Regardless of single sketch or per-peer set reconciliation, it should be easier to implement with 
tighter rules on rate-limiting. (Keep in mind, the node's graph can presumably be updated 
independently of the gossip it rebroadcasts if desired.) As a thought experiment, if we consider a 
CLN-LDK set reconciliation, and that each node is gossiping with 5 other peers in an evenly spaced 
frequency, we would currently see 42.8 commonly accepted channel_updates over an average 60s window 
along with 11 more updates which LDK accepts and CLN rejects (spam.)[1] Assuming the other 5 peers 
have shared 5/6ths of this gossip before the CLN/LDK set reconciliation, we're left with CLN seeing 
7 updates to reconcile, while LDK sees 18.  Already we've lost 60% efficiency due to lack of a 
common rate-limit heuristic.


I do not believe that we will ever form a strong agreement on exactly what the rate-limits should 
be. And even if we do, we still have the issue of upgrades, where a simple change to the rate-limits 
causes sync to suddenly blow up and hit degenerate cases all over the place. Unless we can make the 
sync system relatively robust against slightly different policies, I think we're kinda screwed.


Worse, what happens if someone sends updates at exactly the limit of the rate-limiters? Presumably 
people will do this because "that's what the limit is and I want to send updates as often as I can 
because...". Now you'll still have similar issues, I believe.


I understand gossip traffic is manageable now, but I'm not sure it will be that long before it 
becomes an issue. Furthermore, any particular set reconciliation technique would benefit from a 
simple common rate-limit heuristic, not to mention originating nodes, who may not currently realize 
their channel updates are being rejected by a portion of the network due to differing criteria 
across implementations.


Yes, I agree there is definitely a concern with differing criteria resulting in nodes not realizing 
their gossip is not propagating. I agree guidelines would be nice, but guidelines don't solve the 
issue for sync, sadly, I think. Luckily lightning does provide a mechanism to bypass the rejection - 
send an update back with an HTLC failure. If you're trying to route an HTLC and a node has new 
parameters for you, it'll helpfully let you know when you try to use the old parameters.


Matt


Re: [Lightning-dev] Gossip Propagation, Anti-spam, and Set Reconciliation

2022-04-21 Thread Matt Corallo



On 4/21/22 1:31 PM, Alex Myers wrote:

Hello Bastien,

Thank you for your feedback. I hope you don't mind I let it percolate for a 
while.

Eclair doesn't do any rate-limiting. We wanted to "feel the pain" before 
adding
anything, and to be honest we haven't really felt it yet.

I understand the “feel the pain first” approach, but attempting set reconciliation has forced me to 
confront the issue a bit early.


My thoughts on sync were that set-reconciliation would only be used once a node had fully synced 
gossip through traditional means (initial_routing_sync / gossip_queries.) There should be many 
levers to pull in order to help maintain sync after this. I'm going to have to experiment with them 
a bit before I can claim they are sufficient, but I'm optimistic.


Please, no. initial_routing_sync was removed from most implementations (it sucks) and gossip queries 
is broken in at least five ways. Maybe we can recover it by adding yet more extensions, but if we're 
gonna add a minisketch-based sync anyway, please let's also use it for initial sync after restart 
(unless you have no channels at all, in which case let's maybe revive initial_routing_sync...)


Matt


Re: [Lightning-dev] Gossip Propagation, Anti-spam, and Set Reconciliation

2022-04-21 Thread Matt Corallo
Instead of trying to make sure everyone’s gossip acceptance matches exactly, 
which as you point it seems like a quagmire, why not (a) do a sync on startup 
and (b) do syncs of the *new* things. This way you aren’t stuck staring at the 
same channels every time you do a sync. Sure, if you’re rejecting a large % of 
channel updates in total you’re gonna end up hitting degenerate cases, but we 
can consider tuning the sync frequency if that becomes an issue.

Like eclair, we don’t bother to rate limit and don’t see any issues with it, 
though we will skip relaying outbound updates if we’re saturating outbound 
connections.

> On Apr 14, 2022, at 17:06, Alex Myers  wrote:
> 
> 
> Hello lightning developers,
> 
> I’ve been investigating set reconciliation as a means to reduce bandwidth and 
> redundancy of gossip message propagation. This builds on some earlier work 
> from Rusty using the minisketch library [1]. The idea is that each node will 
> build a sketch representing its own gossip set. Alice’s node will encode and 
> transmit this sketch to Bob’s node, where it will be merged with his own 
> sketch, and the differences produced. These differences should ideally be 
> exactly the latest missing gossip of both nodes. Due to size constraints, the 
> set differences will necessarily be encoded, but Bob’s node will be able to 
> identify which gossip Alice is missing, and may then transmit exactly those 
> messages.
> 
> This process is relatively straightforward, with the caveat that the sets 
> must otherwise match very closely (each sketch has a maximum capacity for 
> differences.) The difficulty here is that each node and lightning 
> implementation may have its own rules for gossip acceptance and propagation. 
> Depending on their gossip partners, not all gossip may propagate to the 
> entire network.
> 
> Core-lightning implements rate limiting for incoming channel updates and node 
> announcements. The default rate limit is 1 per day, with a burst of 4. I 
> analyzed my node’s gossip over a 14 day period, and found that, of all 
> publicly broadcasting half-channels, 18% of them fell afoul of our 
> spam-limiting rules at least once. [2]
> 
> Picking several offending channel ids, and digging further, the majority of 
> these appear to be flapping due to Tor or otherwise intermittent connections. 
> Well connected nodes may be more susceptible to this due to more frequent 
> routing attempts, and failures resulting in a returned channel update (which 
> otherwise might not have been broadcast.) A slight relaxation of the rate 
> limit resolves the majority of these cases.
> 
> A smaller subset of channels broadcast frequent channel updates with minor 
> adjustments to htlc_maximum_msat and fee_proportional_millionths parameters. 
> These nodes appear to be power users, with many channels and large balances. 
> I assume this is automated channel management at work.
> 
> Core-Lightning has updated rate-limiting in the upcoming release to achieve a 
> higher acceptance of incoming gossip, however, it seems that a broader 
> discussion of rate limits may now be worthwhile. A few immediate ideas:
> - A common listing of current default rate limits across lightning network 
> implementations.
> - Internal checks of RPC input to limit or warn of network propagation issues 
> if certain rates are exceeded.
> - A commonly adopted rate-limit standard.
> 
> My aim is a set reconciliation gossip type, which will use a common, simple 
> heuristic to accept or reject a gossip message. (Think one channel update per 
> block, or perhaps one per block_height << 5.) See my github for my current 
> draft. [3] This solution allows tighter consensus, yet suffers from the same 
> problem as original anti-spam measures – it remains somewhat arbitrary. I 
> would like to start a conversation regarding gossip propagation, 
> channel_update and node_announcement usage, and perhaps even bandwidth goals 
> for syncing gossip in the future (how about a million channels?) This would 
> aid in the development of gossip set reconciliation, but could also benefit 
> current node connection and routing reliability more generally.
> 
> Thanks,
> Alex
> 
> [1] https://github.com/sipa/minisketch
> [2] 
> https://github.com/endothermicdev/lnspammityspam/blob/main/sampleoutput.txt
> [3] 
> https://github.com/endothermicdev/lightning-rfc/blob/gossip-minisketch/07-routing-gossip.md#set-reconciliation
> 
> ___
> Lightning-dev mailing list
> Lightning-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] A Mobile Lightning User Goes to Pay a Mobile Lightning User...

2021-12-27 Thread Matt Corallo




On 12/2/21 21:59, Rusty Russell wrote:

Matt Corallo  writes:
In bolt12, we have the additional problem for the tipping case: each
invoice contains an amount, so you can't preprint amountless invoices.
(This plugs a hole in bolt11 for this case, where you get a receipt but
no amount!).

However, I think the best case is a generic authorization mechanism:

1. The offer contains a fallback node.
2. Fallback either returns you an invoice signed by the node you expect, *or*
one signed by itself and an authorization from the node you expect.
3. The authorization might be only for a particular offer, or amount, or
have an expiry.  *handwave*.

This lets the user choose the trust model they want.  The fallback node
may also provide an onion message notification service when the real
node comes back, to avoid polling.


Missed this mail, but, right, good point about amounts. Indeed, having cross-signing by the fallback 
node seems like a good idea. For the tipping use-case, allowing a BOLT-12 response with no amount 
included under the signature seems fine (with a signed amount from the fallback node).


Matt
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Route reliability<->fee trade-off control parameter

2021-11-17 Thread Matt Corallo
Yep, this is roughly the direction we've gone in LDK - an abstract interface that gives you some 
information about a channel and you return "I'm willing to pay up to X msats to avoid routing over 
that channel as specified".


It's obviously not perfect in the sense that it won't generate the absolute optimal route given the 
parameters, but it can do pretty well (after some additional fixes we'd like to land) and at least 
optimizes for something the user controls.
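
Roughly, the shape of that interface is the following (names hypothetical, not LDK's actual API): 
the router asks how many msats you'd pay to avoid a given channel and adds that penalty to the real 
fee when comparing candidate paths.

    class ChannelScorer:
        def penalty_msat(self, short_channel_id, amount_msat, channel_info):
            raise NotImplementedError  # user-supplied model goes here

    def path_cost_msat(path, amount_msat, scorer):
        cost = 0
        for hop in path:
            cost += hop.fee_msat(amount_msat)
            cost += scorer.penalty_msat(hop.short_channel_id, amount_msat, hop.info)
        return cost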


Sadly, of course, all of this requires a good model for failure probability, something we don't yet 
have, so we rely on some naive guesses in our default code, and let the user plug in a more advanced 
model if they prefer. Long-term we'll probably add more intelligence, as others (or at least 
c-lightning) have done.
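
As one purely illustrative way to turn a failure-probability estimate into such a penalty, take the 
50%/80%/1.8x statement Joost gives below: if you score routes by fee / p^k, that single statement 
pins down k = ln(1.8)/ln(0.8/0.5) ≈ 1.25, and the same formula then generalizes to arbitrary A% and B%.

    import math

    def fit_exponent(p_low, p_high, fee_multiplier):
        return math.log(fee_multiplier) / math.log(p_high / p_low)

    k = fit_exponent(0.5, 0.8, 1.8)  # ~1.25

    def route_score(fee_msat, success_prob):
        return fee_msat / (success_prob ** k)  # lower is better

    # For the routes below (A: 10 sat at 50%, B: 20 sat at 80%), A scores ~23.8 and
    # B ~26.4, so A wins; at exactly 18 sat (1.8x of 10 sat) B would tie with A.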


Matt

On 11/15/21 14:44, Joost Jager wrote:

One direction that I explored is to start with a statement by the user in this 
form:

"If there is a route with a success probability of 50%, then I am willing to pay up to 1.8x the 
routing fee for an alternative route that has a 80% success probability"


I like this because it isn't an abstract weight or factor. It is actually clear 
what this means.

What I didn't yet succeed in is to find a model where I can plug in 50%, 80% and 1.8x and 
generalize it to arbitrary inputs A% and B%. But it seems to me that there must be some 
probabilistic equation / law / rule / theorem / ... that can support this.


Joost.

On Mon, Nov 15, 2021 at 4:25 PM Joost Jager wrote:


In Lightning pathfinding the two main variables to optimize for are routing 
fee and reliability.
Routing fee is concrete. It is the sat amount that is paid when a payment 
succeeds. Reliability
is a property of a route that can be expressed as a probability. The 
probability that a route
will be successful.

During pathfinding, route options are compared against each other. So for 
example:

Route A: fee 10 sat, success probability 50%
Route B: fee 20 sat, success probability 80%

Which one is the better route? That depends on user preference. A patient 
user will probably go
for route A in the hope of saving on fees whereas for a time-sensitive 
payment route B looks better.

It would be great to offer this trade-off to the user in a simple way. 
Preferably a single [0,
1] value that controls the selection process. At 0, the route is only 
optimized for fees and
probabilities are ignored completely. At 1, the route is only optimized for 
reliability and fees
are ignored completely.

But how to choose between the routes A and B for a value somewhere in 
between 0 and 1? For
example 0.5 - perfect balance between reliability and fee. But what does 
that mean exactly?

Anyone got an idea on how to approach this best? I am looking for a simple 
formula to decide
between routes, preferably with a reasonably sound probability-theoretical 
basis (whatever that
means).

Joost


___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] A Mobile Lightning User Goes to Pay a Mobile Lightning User...

2021-10-20 Thread Matt Corallo


> On Oct 19, 2021, at 04:51, Bastien TEINTURIER  wrote:
> 
> I like this proposal, it's a net improvement compared to hodling HTLCs
> at the recipient's LSP. With onion messages, we do have all the tools we
> need to build this. I don't think we can do much better than that anyway
> if we want to keep payments fully non-custodial. This will be combined
> with notifications to try to get the recipient to go online asap.

Thanks! 

> One thing to note is that the senders also need to come online while
> the payment isn't settled, otherwise there is a risk they'll lose their
> channels. If the sender's LSP receives the preimage but the sender does
> not come online, the sender's LSP will have to force-close to claim the
> HTLC on-chain when it gets close to the timeout.

Yep. I was imagining a huge CLTV on that hop (and maybe some way of having a 
first-hop-set CLTV at hops after that, I don’t recall if it’s allowed, but it 
should be for this). That way at least the sender has a week/month to go online 
and clear the HTLC, subject to the usual LSP liquidity requirements of course.

> Definitely not a show-stopper, just an implementation detail to keep in
> mind.
> 
> Bastien
> 
>> On Thu, Oct 14, 2021 at 02:20, ZmnSCPxj via Lightning-dev wrote:
>> Good morning Matt,
>> 
>> > On 10/13/21 02:58, ZmnSCPxj wrote:
>> >
>> > > Good morning Matt,
>> > >
>> > > >  The Obvious (tm) solution here is PTLCs - just have the sender 
>> > > > always add some random nonce * G to
>> > > >  the PTLC they're paying and send the recipient a random nonce in 
>> > > > the onion. I'd generally suggest we
>> > > >  just go ahead and do this for every PTLC payment, cause why not? 
>> > > > Now the sender and the lnurl
>> > > >  endpoint have to collude to steal the funds, but, like, the 
>> > > > sender could always just give the lnurl
>> > > >  endpoint the money. I'd love suggestions for fixing this short of 
>> > > > PTLCs, but its not immediately
>> > > >  obvious to me that this is possible.
>> > > >
>> > >
>> > > Use two hashes in an HTLC instead of one, where the second hash is from 
>> > > a preimage the sender generates, and which the sender sends (encrypted 
>> > > via onion) to the receiver.
>> > > You might want to do this anyway in HTLC-land, consider that we have a 
>> > > `payment_secret` in invoices, the second hash could replace that, and 
>> > > provide similar protection to what `payment_secret` provides (i.e. 
>> > > resistance against forwarding nodes probing; the information in both 
>> > > cases is private to the ultimate sender and ultimate receiver).
>> >
>> > Yes, you could create a construction which does this, sure, but I'm not 
>> > sure how you'd do this
>> > without informing every hop along the path that this is going on, and 
>> > adapting each hop to handle
>> > this as well. I suppose I should have been more clear with the 
>> > requirements, or can you clarify
>> > somewhat what your proposed construction is?
>> 
>> Just that: two hashes instead of one.
>> Make *every* HTLC on LN use two hashes, even for current "online RPi user 
>> pays online RPi user" --- just use the `payment_secret` for the preimage of 
>> the second hash, the sender needs to send it anyway.
>> 
>> >
>> > If you're gonna adapt every node in the path, you might as well just use 
>> > PTLC.
>> 
>> Correct, we should just do PTLCs now.
>> (Basically, my proposal was just a strawman to say "we should just do PTLCs 
>> now")
>> 
>> 
>> Regards,
>> ZmnSCPxj
>> ___
>> Lightning-dev mailing list
>> Lightning-dev@lists.linuxfoundation.org
>> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] A Mobile Lightning User Goes to Pay a Mobile Lightning User...

2021-10-13 Thread Matt Corallo




On 10/13/21 02:58, ZmnSCPxj wrote:

Good morning Matt,



 The Obvious (tm) solution here is PTLCs - just have the sender always add 
some random nonce * G to
 the PTLC they're paying and send the recipient a random nonce in the 
onion. I'd generally suggest we
 just go ahead and do this for every PTLC payment, cause why not? Now the 
sender and the lnurl
 endpoint have to collude to steal the funds, but, like, the sender could 
always just give the lnurl
 endpoint the money. I'd love suggestions for fixing this short of PTLCs, 
but its not immediately
 obvious to me that this is possible.


Use two hashes in an HTLC instead of one, where the second hash is from a 
preimage the sender generates, and which the sender sends (encrypted via onion) 
to the receiver.
You might want to do this anyway in HTLC-land, consider that we have a 
`payment_secret` in invoices, the second hash could replace that, and provide 
similar protection to what `payment_secret` provides (i.e. resistance against 
forwarding nodes probing; the information in both cases is private to the 
ultimate sender and ultimate receiver).


Yes, you could create a construction which does this, sure, but I'm not sure how you'd do this 
without informing every hop along the path that this is going on, and adapting each hop to handle 
this as well. I suppose I should have been more clear with the requirements, or can you clarify 
somewhat what your proposed construction is?


If you're gonna adapt every node in the path, you might as well just use PTLC.

Matt
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] A Mobile Lightning User Goes to Pay a Mobile Lightning User...

2021-10-12 Thread Matt Corallo



On 10/12/21 22:08, Andrés G. Aragoneses wrote:

Hello Matt, can you clarify what you mean with this particular paragraph?:

But for some reason those pesky users keep wanting to use lightning for 
tips, or at least accept
payment on their phones without keeping them unlocked with the lightning 
app open on the foreground
24/7.


So the use case here is more narrow? You mean that the recipient is a mobile user that has his phone 
locked?

Just so I understand better what the problem is.


Yes, but not just locked, just "doesn't have the lightning app open and in the foreground when a 
payment comes in".  See this paragraph:



     Several lightning apps do this today, and it's somewhat of a stop-gap but does help. On platforms
where the app gets some meager CPU time in response to a notification, this can even fully solve the
problem by claiming the HTLC in response to the notification pushed out-of-band. Sadly, the refrain
I've heard repeatedly is, these days, on both Android and especially iOS, you can't even rely on a
microsecond of CPU time in response to a notification. The OS fully expects your app to run code
only when it's on and in the foreground; unless you're a VoIP app, you're screwed. Relying on the user
to open the app immediately when they receive a notification is...fine, I guess, absent a better
idea it seems like the best we've got today, but I'm not sure you'd find a UX designer who would
*suggest* this :).

___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


[Lightning-dev] A Mobile Lightning User Goes to Pay a Mobile Lightning User...

2021-10-12 Thread Matt Corallo
I'm sure most of y'all are familiar with this problem by now - a lightning user on a phone trying to 
pay another lightning user on a phone requires some amount of coordination to ensure the sender and 
recipient are, roughly, online at the same time.


Invoices provide this somewhat today by requiring the recipient provide some live-ish data to the 
sender with their phone in their hand.


But for some reason those pesky users keep wanting to use lightning for tips, or at least accept 
payment on their phones without keeping them unlocked with the lightning app open on the foreground 
24/7.


There's a few things live today which make progress towards this goal, but don't quite get there 
(and mostly aren't trying to solve this problem, but are worth mentioning):


 * just have the recipient use a custodial product/run a full lightning node at 
home on an RPi.
   Obviously this has some pretty substantial drawbacks, I'm not sure I even need to list them, but 
the "just require the recipient use a custodial service" is what Twitter ended up shipping with for 
lightning tipping, and we should all probably feel ashamed that they felt the need to do that.


 * Blockstream Greenlight.
   This doesn't change the online-requirements model - with the keys on your phone/elsewhere you still have 
to have your phone online with the same requirements as running a full lightning node. It just means 
fewer resources on that device.


 * use keysend/AMP/whatever.
   This is great for tips, but only half the story. Sender goes to send a keysend payment, gets to 
one hop before the recipient, and then a day later the recipient comes online to find the payment 
long-since timed out and failed backwards. Or you could use a long CLTV on the payment to make sure 
the recipient has time to claim it, which is basically a DoS on the lightning network's capacity, 
one that may eventually be fixed, breaking your payments, and which is just generally antisocial. 
Still, my understanding is some folks do this today cause it's the only option for a mobile device.


 * lnurl
   ...is a great way to get an invoice, presumably from a trusted LSP for the recipient, trusting 
them to not give the same invoice twice, but doesn't help the recipient receive the payment, they 
still need to be online, unless...


 * have a fully-trusted LSP that accepts payments and forwards them later
   this is also fine, where it's practical, I guess, but I'd hope we can do better. Worse, as far as 
I understand the places where this is practical are becoming fewer and fewer as the regulatory 
uncertainty clears and everyone realizes the regulatory overhead of this is...well you might as well 
start applying for that banking charter now.


 * have an untrusted LSP that sends you a notification to open the app when a 
payment is received
   Several lightning apps do this today, and it's somewhat of a stop-gap but does help. On platforms 
where the app gets some meager CPU time in response to a notification, this can even fully solve the 
problem by claiming the HTLC in response to the notification pushed out-of-band. Sadly, the refrain 
I've heard repeatedly is, these days, on both Android and especially iOS, you can't even rely on a 
microsecond of CPU time in response to a notification. The OS fully expects your app to run code 
only when it's on and in the foreground; unless you're a VoIP app, you're screwed. Relying on the user 
to open the app immediately when they receive a notification is...fine, I guess, absent a better 
idea it seems like the best we've got today, but I'm not sure you'd find a UX designer who would 
*suggest* this :).



But what would it take to do better? What follows is a simple straw-man, but something that's 
borderline practical today and may at least generate a few ideas. It comes in two variants


If we accept the lnurl trust model of "a third-party I can give a list of pre-signed invoices, which 
I trust to never provide an invoice twice, but otherwise is untrusted", then we could do something 
like this:


Step 1. Tipper gets an invoice from the lnurl endpoint they wish to pay, which contains some 
"recipient is behind an LSP and rarely online, act accordingly" flag.


Step 2. Tipper sender sends a HTLC with a long CLTV timeout to their own LSP with instructions 
saying "when you get an onion message telling you nonce B, forward this HTLC, until then, just sit 
on it". The LSP accepts this HTLC but does not forward it and is generally okay with the long CLTV 
delta because it would otherwise just be the users' balance anyway - if they want to encumber their 
own funds forever, no harm done.

  Note that if tipper is online regularly they can skip this step and move on.

Step 3. The Tipper sends an onion message to recipient's LSP saying "hey, when recipient is online 
again, use the included reply path to send nonce B to my LSP".


- sender can now safely go offline -

Step 4. When the Recipient comes online, their LSP sends the reply

Re: [Lightning-dev] Removing lnd's source code from the Lightning specs repository

2021-10-12 Thread Matt Corallo



On 10/12/21 12:57, Olaoluwa Osuntokun wrote:

Hi Fabrice,

 > I believe that was a mistake: a few days ago, Arcane Research published a
 > fairly detailed report on the state of the Lightning Network:
 > https://twitter.com/ArcaneResearch/status/1445442967582302213. They
 > obviously did some real work there, and seem to imply that their report
 > was vetted by Open Node and Lightning Labs.

Appreciate the hard work from Arcane on putting together this report. That
said, our role wasn't to review the entire report, but instead to provide
feedback on questions they had. Had we reviewed the section in question, we
would have spotted those errors and told the authors to fix them. Mistakes
happen, and we're glad it got corrected.

Also note that lnd has _never_ referred to itself as the "reference"
implementation.  A few years ago some other implementations adopted that
title themselves, but have since adopted softer language.

 > So I'm proposing that lnd's source code be removed from
 > https://github.com/lightningnetwork/ (and moved to
 > https://github.com/lightninglabs for example, with the rest of their
 > Lightning tools, but it's up to Lightning Labs).

I think it's worth briefly revisiting a bit of history here w.r.t the github
org in question. In the beginning, the lightningnetwork github org was
created by Joseph, and the lightningnetwork/paper repo was added, the
manuscript that kicked off this entire thing. Later lightningnetwork/lnd was
created where we started to work on an initial implementation (before the
BOLTs in their current form existed), and we were added as owners.
Eventually we (devs of current impls) all met up in Milan and decided to
converge on a single specification, thus we added the BOLTs to the same
repo, despite it being used for lnd and knowingly so.

We purposefully made a _new_ lightninglabs github org as we wanted to keep
lnd, the implementation distinct from any of our future commercial
products/services. To this day, we've architected all our paid products to
be built _on top_ of lnd, rather than within it. As a result, users always
opt into these services.

As it seems the primary grievance here is collocating an implementation of
Lightning along with the _specification_ of the protocol, and given that the
spec was added last, how about we move the spec to an independent repo owned
by the community? I currently have github.com/lightning, and would be happy
to donate it to the community, or we could create a new org like
"lightning-specs" or something similar. We could then move the spec (the
BOLTs and also potentially the bLIPs since some devs want it to be within
its own repo) there, and have it be the home for any other
community-backed/owned projects.  I think the creation of a new github
organization would also be a good opportunity to further formalize the set
of stakeholders and the general process related to the evolution of
Lightning the protocol.

Thoughts?


No super strong opinion on where things end up, but roughly agree they should be separate. In other 
words, this proposal sounds good to me, want to set it up?


Matt

___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Removing lnd's source code from the Lightning specs repository

2021-10-11 Thread Matt Corallo



On 10/11/21 05:29, Bryan Bishop wrote:
On Mon, Oct 11, 2021 at 12:25 AM Andrés G. Aragoneses wrote:


Completely agree with this. How to move this forward? Set up a vote? What 
would be the reasoning
for not moving it?


One consideration is broken links, which can be solved by a soft note in a 
README somewhere.

- Bryan
https://twitter.com/kanzure 


I believe the Github "move repository" feature makes all old links auto-redirects, so I'd hope this 
wouldn't happen. This information is at least a few years old, however.


Matt
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Do we really want users to solve an NP-hard problem when they wish to find a cheap way of paying each other on the Lightning Network?

2021-09-01 Thread Matt Corallo


> On Sep 1, 2021, at 00:07, ZmnSCPxj  wrote:
> 
> Good morning Matt and all,
> 
>> Please be careful accepting the faulty premise that the proposed algorithm 
>> is “optimal”. It is optimal under a specific heuristic used to approximate 
>> what the user wants. In reality, there are a ton of different things to 
>> balance, from CLTV to fees to estimated failure probability calculated from 
>> node online percentages, at-open liquidity, and even fees.
> 
> It may be possible to translate all these "things to balance" to a single 
> unit, the millisatoshi.

Indeed, in practice this is what we all do today. My point is less that you 
cannot create a single unit out of all the various things you consider and more 
that doing so involves some heuristics on the part of the application 
developer. There is no “correct” or “optimal” answer to how to do this, only 
various designs different folks have. How you balance competing costs may lead 
to different score units (eg instead of msat, probability of success) and 
that’s fine, neither is provably better than the other.

Matt
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Do we really want users to solve an NP-hard problem when they wish to find a cheap way of paying each other on the Lightning Network?

2021-08-31 Thread Matt Corallo
Please be careful accepting the faulty premise that the proposed algorithm is 
“optimal”. It is optimal under a specific heuristic used to approximate what 
the user wants. In reality, there are a ton of different things to balance, 
from CLTV to feed to estimated failure probability calculated from node online 
percentages at-open liquidity, and even fees. There is no such thing as 
“optimal”, only heuristics for how to balance these things into a score that 
you can feed into a routing algorithm.

> Do we really want users to solve an NP-hard problem when they wish to find a 
> cheap way of paying each other on the Lightning Network?

I find this framing sufficiently insulting to the serious discussion people 
have had on this topic that I’m not really sure where to go from here aside 
from ignoring it.

> On Aug 31, 2021, at 03:35, Orfeas Stefanos Thyfronitis Litos 
>  wrote:
> 
> Hi list,
> 
> On 8/31/21 5:01 AM, Anthony Towns wrote:
>>> "Do we really want users to solve an NP-hard problem when
>>> they wish to find a cheap way of paying each other on the Lightning 
>>> Network?" 
>> FWIW, my answer to this is "sure, if that's the way it turns out".
>> 
>> Another program which solves an NP-hard problem every time it runs is
>> "apt-get install".
>> [I]f it fails too often,
>> you re-analyse what's going on manually and add a new heuristic to cope
>> with it.
> I've been following the conversation with interest and I acknowledge this is 
> a thorny issue.
> 
> I am a bit worried with a path which relies on constantly finding new 
> heuristics to approximate a solution to an NP-hard problem:
> * It allows too much room for nonconstructive disagreement between LN 
> developers in the future.
>  - In a worst case scenario, all implementations end up using different, 
> incompatible heuristics because each group of developers thinks that they 
> have the best one, leading to a suboptimal performance for everyone. 
> Heuristics are less of an exact science after all.
> * It makes the job of node operators less predictable, since it would depend 
> more on the decisions of said developers with no guarantee of convergence to 
> a single solution.
>  - Node operators may perceive this as loss of decentralization to the 
> developers.
> 
> Such an approach is much more suitable to debian, since they have full 
> control and a complete view over their "network" of packages, as opposed to 
> LN, which is decentralized, nodes come and go at will and they can be private 
> (even from developers!).
> 
> Best,
> Orfeas
> The University of Edinburgh is a charitable body, registered in Scotland, 
> with registration number SC005336. Is e buidheann carthannais a th’ ann an 
> Oilthigh Dhùn Èideann, clàraichte an Alba, àireamh clàraidh SC005336.
> ___
> Lightning-dev mailing list
> Lightning-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] #zerobasefee

2021-08-25 Thread Matt Corallo




On 8/25/21 05:50, Anthony Towns wrote:

On Tue, Aug 24, 2021 at 08:50:42PM -0700, Matt Corallo wrote:

I feel like we're having two very, very different conversations here. On one
hand, you're arguing that the base fee is of marginal use, and that maybe we
can figure out how to average it out such that we can avoid needing it.


I'm not sure about the "we" in that sentence


You and I, and it seems I was very much right :)


I'm saying node operators
shouldn't bother with it, not that lightning software devs shouldn't offer
it as a config option or take it into account when choosing routes. The
only software change that /might/ make sense is changing defaults from
1sat to 0msat, but it seems a bit early for that too, to me.


I think I largely agree, its too early to decide these things and node operators can consider these 
issues for themselves.



(I'm assuming comments like "We'll most definitely support #zerobasefee"
[0] just means "you can set it to zero if you like" which seems like a
weird thing to have to say explicitly...)

[0] https://twitter.com/Snyke/status/1418109408438063104


I don't believe so at all, we were definitely having a different conversation from both sides here. 
The #zerobasefee movement grew out of, and focuses on, switching to #zerobasefee in order to allow 
people to start using routing algorithms in production which ignore all nodes which do *not* have 
zero base fee and requiring that to be a routing node. Rusty even made a comment to that effect 
recently on a Twitter Spaces, saying that its probably something that could be considered sooner or 
later, though I admit it was an off-the-cuff remark so maybe he has a slightly different view when 
pressed.


My objection, and it seems like you agree, is that it is much, much too early to start making a 
strong assumption of the only fee being a proportional one in deployed routing algorithms.



Also, even if we can maybe do away with the base fee, that still
doesn't mean we should start relying on the lack of any
not-completely-linear-in-HTLC-value fees in our routing algorithms,


I mean, experimental/research routing algorithms should totally rely
on that if they feel like it? I just don't see any evidence that
anyone's thinking of moving that out of research and into production
until there's feedback from operators and a lot more results from the
research in general...


Maybe, maybe not - my only points on Twitter, and here, have been focused on how more research needs 
to happen on proposed routing algorithms and how we can adapt the ideas to other algorithms. A large 
part of the impetus for the #zerobasefee movement has been to reduce base fees to allow for a 
migration to these experimental algorithms, which, to me, is entirely premature.



No, that's not the topic at hand, at all?


Well, then we were having two different conversations :p


I think I'm arguing for these things:

  a) "everyone" should drop their base fee msat from the default,
 probably to 0 because that's an easy fixed point that you don't need
 to think about again as the price of btc changes, but anything at
 or below 10msat would be much more reasonable than 1000msat.

  b) if people are concerned about wasting resources forwarding super
 small payments for correspondingly super small fees, they should
 raise min_htlc_amount from 0 (or 1000) to compensate, instead of
 raising their base fee.


:shrug: dunno. some people pay on-chain fees to route tiny payments to Muun wallets and seem fine 
with it.



  c) software should dynamically increase min_htlc_amount as the
 number of available htlc slots decreases, as a DoS mitigation measure.
 (presuming this is a temporary increase, probably this wouldn't
 be gossiped, and possibly not even communicated to the channel
 counterparty -- just a way of immediately rejecting htlcs? I think
 if something along these lines were implemented, (b) would almost
 never be necessary)


This sounds like a cool idea. We shipped something highly related that almost accomplishes this by 
accident in LDK [1], focusing on exposure to small-value HTLCs and limiting that to ensure we don't 
send all our money to miners.
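
A minimal sketch of the kind of dynamic limit (c) describes - constants entirely arbitrary, just to 
show the shape of locally raising the effective minimum as slots run out, without gossiping anything:

    def effective_min_htlc_msat(configured_min_msat, free_slots, total_slots):
        if free_slots * 2 >= total_slots:
            return configured_min_msat  # plenty of slots left, use the configured floor
        scarcity = (total_slots - free_slots) / max(free_slots, 1)
        return int(configured_min_msat * (1 + scarcity ** 2))  # grows sharply near zero free slots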


More dynamic limits in lightning sounds like the right direction to me! Also dynamic fees, also 
dynamic



  d) the default base fee should be changed to 0, 1, or 10msat instead
 of 1000msat

  e) trivially: (I don't think anyone's saying otherwise)
  - deploying new algorithms in production should only be done with
a lot of care


There is *so* much debate around this point in the lightning world these days. This is just another 
flavor of it.


Matt

[1] https://github.com/rust-bitcoin/rust-lightning/pull/1009
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] #zerobasefee

2021-08-24 Thread Matt Corallo
I feel like we're having two very, very different conversations here. On one hand, you're arguing that the base fee is 
of marginal use, and that maybe we can figure out how to average it out such that we can avoid needing it. On the other 
hand, I'm arguing that, yes, maybe you can, but ideally you wouldn't have to, because it's still pretty nice to capture 
those costs sometimes. Also, even if we can maybe do away with the base fee, that still doesn't mean we should start 
relying on the lack of any not-completely-linear-in-HTLC-value fees in our routing algorithms, as maybe we'll want to do 
upfront payments or some other kind of anti-DoS payment in the future to solve the gaping, glaring, giant DoS hole that 
is HTLCs taking forever to time out.


I'm not even sure that you're trying to argue, here, that we should start making key assumptions about the only fee 
being a proportional one in our routing algorithms, but that is what the topic at hand is, so I can't help but assume 
you are?


If you disagree with the above characterization I'm happy to go line-by-line tit-for-tat, but usually those kinds of 
tirades aren't exactly useful and end up being more about semantics than the thrust of the argument.


Matt

On 8/20/21 21:46, Anthony Towns wrote:

On Mon, Aug 16, 2021 at 12:48:36AM -0400, Matt Corallo wrote:

The base+proportional fees paid only on success roughly match the *value*
of forwarding an HTLC, they don't match the costs particularly well
at all.

Sure, indeed, there's some additional costs which are not covered by failed
HTLCs, [...]
Dropping base fee makes the whole situation a good chunk *worse*.


Can you justify that quantitatively?

Like, pick a realistic scenario, where you can make things profitable
with some particular base_fee, prop_fee, min_htlc_amount combination,
but can't reasonably pick another similarly profitable outcome with
base_fee=0?  (You probably need to have a bimodal payment distribution
with a micropayment peak and a regular payment peak, I guess, or perhaps
have particularly inelastic demand and highly competitive supply?)

>

And all those costs can be captured equally well (or badly) by just
setting a proportional fee and a minimum payment value. I don't know why
you keep ignoring that point.

I didn't ignore this, I just disagree, and I'm not entirely sure why you're 
ignoring the points I made to that effect :).


I don't think I've seen you explicitly disagree with that previously,
nor explain why you disagree with it? (If I've missed that, a reference
appreciated; explicit re-explanation also appreciated)


In all seriousness, I'm entirely unsure why you think proportional is just
as good?


In principle, because fee structures already aren't a good match, and
a simple approximation is better than a complicated approximation.
Specifically, because you can set
  
  min_htlc_msat=old_base_fee_msat * 1e6 / prop_fee_millionths


which still ensures every HTLC you forward offers a minimum fee of
old_base_fee_msat, and your fees still increase as the value transferred
goes up, which in the current lightning environment seems like it's just
as good an approximation as if you'd actually used "old_base_fee_msat".

For example, maybe you pay $40/month for your node, which is about 40msat
per second [0], and you really can only do one HTLC per second on average
[1]. Then instead of a base_fee of 40msat, pick your proportional rate,
say 0.03%, and calculate your min_htlc amount as above, ie 133sat. So if
someone sends 5c/133sat through you, they'll pay 40msat, and for every
~3 additional sats, they'll pay you an additional 1msat. Your costs are
covered, and provided your fee rate is competitive and there's traffic
on the network, you'll make your desired profit.

If your section of the lightning network is being used mainly for
microtransactions, and you're not competitive/profitable when limiting
yourself to >5c transactions, you could increase your proportional fee
and lower your min_htlc amount, eg to 1% and 4sat so that you'll get
your 40msat from a 4sat/0.16c HTLC, and increase at a rate of 10msat/sat
after that.

That at least matches the choices you're probably actually making as a
node operator: "I'm trying to be cheap at 0.03% and focus on relatively
large transfers" vs "I'm focussing on microtransactions by reducing the
minimum amount I'll support and being a bit expensive". I don't think
anyone's setting a base fee by calculating per-tx costs (and if they
were, per the footnote, I'm not convinced it'd even justify 1msat let
alone 1sat per tx).

OTOH, if you want to model an arbitrary concave fee function (because
you have some scheme that optimises fee income by discriminating against
smaller payments), you could do that by ha

Re: [Lightning-dev] #zerobasefee

2021-08-15 Thread Matt Corallo

Dropped a number of replies to which the reply would otherwise be "see above".

On 8/16/21 00:00, Anthony Towns wrote:

On Sun, Aug 15, 2021 at 10:21:52PM -0400, Matt Corallo wrote:

On 8/15/21 22:02, Anthony Towns wrote:

In
one particular class of applicable routing algorithms you could use for
lightning routing having a base fee makes the algorithm intractably slow,

I don't think of that as the problem, but rather as the base fee having
a multiplicative effect as you split payments.

Yes, matching the real-world costs of forwarding an HTLC.


Actually, no, not at all.

The base+proportional fees paid only on success roughly match the *value*
of forwarding an HTLC, they don't match the costs particularly well
at all.

Why not? Because the costs are incurred on failed HTLCs as well, and
also depend on the time a HTLC lasts, and also vary heavily depending
on how many other simultaneous HTLCs there are.


Sure, indeed, there's some additional costs which are not covered by failed HTLCs, nor incorporate the time the HTLC 
slot was used. But that wasn't my argument - my argument was that base + proportional is a much, much closer match for 
the costs of a node barring clever-er solutions around HTLC-slot-time-used. Dropping base fee makes the whole situation 
a good chunk *worse*.



Yes. You have to pay the cost of a node. If we're really worried about this,
we should be talking about upfront fees and/or refunds on HTLC fulfillment,
not removing the fees entirely.


(I don't believe either of those are the right approach, but based on
previous discussions, I don't think anyone's going to realise I'm right
until I implement it and prove it, so *shrug*)


I think I agree, but I think they may currently be better than any *other* 
proposal, not that they're particularly good.


The cost to nodes is largely [...]


The cost to nodes is almost entirely the opportunity cost of not being
able to accept other txs that would come in afterwards and would pay
higher fees.

And all those costs can be captured equally well (or badly) by just
setting a proportional fee and a minimum payment value. I don't know why
you keep ignoring that point.


I didn't ignore this, I just disagree, and I'm not entirely sure why you're 
ignoring the points I made to that effect :).

In all seriousness, I'm entirely unsure why you think proportional is just as good? As you note, the cost for nodes is a 
function of the opportunity cost of the capital, and opportunity cost of the HTLC slots. Let's say as a routing node I 
decide that the opportunity cost of one of my HTLC slots is generally 1 sat per second, and the average HTLC is 
fulfilled in one second. Why is it that a proportional fee captures this "equally well"?!


Yes, you could amortize it, but that doesn't make it "equally" good, and there are semi-serious proposals to start 
ignoring nodes that *don't* set their fees to some particular structure in routing decisions. Sure, nodes can do what 
they want, but its kinda absurd to suggest that this is a perfectly fine thing to do absent a somewhat compelling 
reason. This goes doubly because deploying such things significantly will mean we cannot do future protocol changes 
which may better capture the time-value of node resources!



Additionally, I don't think HTLC slot usage needs to be kept as a
limitation after we switch to eltoo;

The HTLC slot limit is to keep transactions broadcastable. I don't see why
this would change, you still get an output for each HTLC on the latest
commitment in eltoo, AFAIU.


eltoo gives us the ability to have channel factories


That doesn't solve the issue at all - you still have a ton of transactions and transaction outputs and spends thereof to 
put on the chain in the case of a closure with pending HTLCs. In fact, most nodes today enforce a lower limit than the 
400-some-odd HTLCs that represent the transaction standardness limit, because 100KB transactions are stupid impractical.



(By "any time soon" I mean, I could see software defaults changing if
over 50% of the network deliberately switched to zero base fees and found
it worked fine; and I could see deprecating non-zero fees if that ended
up with 90% of the network on zero base fees, no good reasons for node
operators wanting to stick with running non-zero base fees, and the
experimental algos that relied on zero base fees being significantly
easier to maintain or faster/better)


What is your definition of "works fine" here? In today's nearly-entirely-altruistic-routing-node network, we could 
probably entirely drop the routing fees and things would "work fine". That doesn't make it a good idea for the long-term 
health of the network.


My suggestion is quite simple - that the software vendors wishing to rely on these types of algorithms *first* do the 
legwork to s

Re: [Lightning-dev] #zerobasefee

2021-08-15 Thread Matt Corallo




On 8/15/21 22:02, Anthony Towns wrote:

In
one particular class of applicable routing algorithms you could use for
lightning routing having a base fee makes the algorithm intractably slow,


I don't think of that as the problem, but rather as the base fee having
a multiplicative effect as you split payments.


Yes, matching the real-world costs of forwarding an HTLC.


If every channel has the same (base,proportional) fee pair, and send a
payment along a single path, you're paying n*(base+k*proportional). If
you split the payment, and send half of it one way, and half the other
way, you're paying n*(2*base+k*proportional). If you split the payment
four ways, you're paying n*(4*base+k*proportional). Where's the value
to the network in penalising payment splitting?


Yes. You have to pay the cost of a node. If we're really worried about this, we should be talking about upfront fees 
and/or refunds on HTLC fulfillment, not removing the fees entirely.



Being denominated in sats, the base fee also changes in value as the
bitcoin price changes -- c-lightning dropped the base fee to 1sat (from
546 sat!) in Jan 2018, but the value of 1sat has increased about 4x
since then, and it seems unlikely the fixed costs of a successful HTLC
payment have likewise increased 4x.  Proportional fees deal with this
factor automatically, of course.


This isn't a protocol issue, implementations can automate this without issue.


There's real cost to distorting the fee structures on the network away from
the costs of node operators,


That's precisely what the base fee is already doing. Yes, we need some
other way of charging fees to prevent using up too many slots or having
transactions not fail in a timely manner, but the base fee does not
do that.


Huh? For values much smaller than a node's liquidity, the cost for nodes is (mostly) a function of HTLCs, not the value. 
The cost to nodes is largely (a) the forever-storage that exists, roughly, per HTLC ever on a channel, (b) the HTLC 
slots which are highly limited for technical reasons per channel, (c) the disk/cpu/network/etc operations per HTLC on a 
channel, (d) the liquidity required per node. I'd argue (c) is basically zero in any realistic context, (a) is pretty 
low, but could be non-zero in some cases, so you really just have (b) and (d). For many HTLCs forwarded today, the 
liquidity on a channel isn't much, so I'd argue for many HTLCs forwarded today per-payment costs mirror the cost to a 
node much, much, much, much better than some proportional fees?


I'm really not sure where you're coming from here.


Imagine we find some great way to address HTLC slot flooding/DoS attacks (or
just chose to do it in a not-great way) by charging for HTLC slot usage, now
we can't fix a critical DoS issue because the routing algorithms we deployed
can't handle the new costing.


I don't think that's true. The two things we don't charge for that can
be abused by probing spam are HTLC slot usage and channel balance usage;
both are problems only in proportion to the amount of time they're held
open, and the latter is also only a problem proportional to the value
being reserved. [0]

Additionally, I don't think HTLC slot usage needs to be kept as a
limitation after we switch to eltoo;


The HTLC slot limit is to keep transactions broadcastable. I don't see why this would change, you still get an output 
for each HTLC on the latest commitment in eltoo, AFAIU.



and in the meantime, I think it can
be better managed via adjusting the min_htlc_amount -- at least for the
scenario where problems are being caused by legitimate payment attempts,
which is also the only place base fee can help.


Sure, we could also shift towards upfront fees or similar solutions, though, and that was my point - if we start 
dropping absolute fee amounts now in order to make some given routing algorithm work, we box ourselves in here, and 
quite needlessly given no one has (yet) done the legwork to show that we even *need* to box ourselves in.



[0] (Well, ln-penalty's requirement to permanently store HTLC information
  in order to apply the penalty is in some sense a constant
  cost, however the impact is also proportional to value, and for
  sufficiently low value HTLCs can be ignored entirely if the HTLC
  isn't included in the channel commitment)


Instead, we should investigate how we can
apply the ideas here with the more complicated fee structures we have.


Fee structures should be *simple* not complicated.

I mean, it's kind of great that we started off complicated -- if it
turns out base fee isn't necessary, it's easy to just set it to zero;
if we didn't have it, but needed it, it would be much more annoying to
add it in later.


Fee structures should also match reality, and allow node operators sufficient flexibility to capture their costs. I 
think we have a design that does so quite well - it's pretty simple, there's only two knobs, but the two knobs capture 
exactly the two br

Re: [Lightning-dev] #zerobasefee

2021-08-15 Thread Matt Corallo

Thanks, AJ, for kicking off the thread.

I'm frankly still very confused why we're having these conversations now. In one particular class of applicable routing 
algorithms you could use for lightning routing having a base fee makes the algorithm intractably slow, but:


a) to my knowledge, no one has (yet) done any follow-on work to investigate pulling many of the same heuristics Rene et 
al use into a Dijkstra's/A* algorithm with multiple passes or generating multiple routes in the same pass to see whether 
you can emulate the results in a faster algorithm without the drawbacks here,


b) to my knowledge, no one has (yet) done any follow-on work to investigate mapping the base fee to other, more 
flow-based-routing-compatible numbers, eg you could convert the base fee to a minimum fee by increasing the "effective" 
proportional fees. From what others have commented, this may largely "solve" the issue.


c) to my knowledge, no one has (yet) done any follow-on work to analyze where the proposed algorithm may be most optimal 
in the HTLC-value<->channel liquidity ratio ranges. We may find that the proposed algorithm only provides materially 
better routing when the HTLC value approaches X% of common network channel liquidity, allowing us to only use it for 
large-value payments where we can almost ignore the base fees entirely.


There's real cost to distorting the fee structures on the network away from the costs of node operators, especially as 
we move towards requiring and using Real (tm) amounts of capital on routing nodes. If we're relying purely on hobbyists 
forever who are operating out of goodwill, we should just remove all fees. If we think Lightning is going to involve 
capital with real opportunity cost, matching fees to the costs is important, or at least important enough that we 
shouldn't throw it away after one (pretty great) paper and limited further analysis.


Imagine we find some great way to address HTLC slot flooding/DoS attacks (or just chose to do it in a not-great way) by 
charging for HTLC slot usage, now we can't fix a critical DoS issue because the routing algorithms we deployed can't 
handle the new costing. Instead, we should investigate how we can apply the ideas here with the more complicated fee 
structures we have.


Color me an optimist, but I'm quite confident with sufficient elbow grease and heuristics we can get 95% of the way 
there. We can and should revisit these conversations if such exploration is done and we find that its not possible, but 
until then this all feels incredibly premature.


Matt

On 8/14/21 21:00, Anthony Towns wrote:

Hey *,

There's been discussions on twitter and elsewhere advocating for
setting the BOLT#7 fee_base_msat value [0] to zero. I'm just writing
this to summarise my understanding in a place that's able to easily be
referenced later.

Setting the base fee to zero has a couple of benefits:

  - it means you only have one value to optimise when trying to collect
the most fees, and one-dimensional optimisation problems are
obviously easier to write code for than two-dimensional optimisation
problems

  - when finding a route, if all the fees on all the channels are
proportional only, you'll never have to worry about paying more fees
just as a result of splitting a payment; that makes routing easier
(see [1])

So what's the cost? The cost is that there's no longer a fixed minimum
fee -- so if you try sending a 1sat payment you'll pay 0.1% of the fee
to send a 1000sat payment, and there may be fixed costs that you have
in routing payments that you'd like to be compensated for (eg, the
computational work to update channel state, the bandwith to forward the
tx, or the opportunity cost for not being able to accept another htlc if
you've hit your max htlcs per channel limit).

But there's no need to explicitly separate those costs the way we do
now; instead of charging 1sat base fee and 0.02% proportional fee,
you can instead just set the 0.02% proportional fee and have a minimum
payment size of 5000 sats (htlc_minimum_msat=5e6, ~$2), since 0.02%
of that is 1sat. Nobody will be asking you to route without offering a
fee of at least 1sat, but all the optimisation steps are easier.

You could go a step further, and have the node side accept smaller
payments despite the htlc minimum setting: eg, accept a 3000 sat payment
provided it pays the same fee that a 5000 sat payment would have. That is,
treat the setting as minimum_fee=1sat, rather than minimum_amount=5000sat;
so the advertised value is just calculated from the real settings,
and that nodes that want to send very small values despite having to
pay high rates can just invert the calculation.

I think something like this approach also makes sense when your channel
becomes overloaded; eg if you have x HTLC slots available, and y channel
capacity available, setting a minimum payment size of something like
y/2/x**2 allows you to accept small payments (good for the network)

Re: [Lightning-dev] [bitcoin-dev] Removing the Dust Limit

2021-08-08 Thread Matt Corallo
If it weren't for the implications of changing standardness here, I think we should consider increasing the dust limit 
instead.


The size of the UTXO set is a fundamental scalability constraint of the system. In fact, with proposals like 
assume-utxo/background history sync it is arguably *the* fundamental scalability constraint of the system. Today's dust 
limit is incredibly low - it's based on a feerate of only 3 sat/vByte in order for claiming the UTXO to have *any* value, 
not just having enough value to be worth bothering. As feerates have gone up over time, and as we expect them to go up 
further, we should be considering drastically increasing the 3 sat/vByte basis to something more like 20 sat/vB.
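
For a rough sense of scale (sizes approximate, using the usual output-size-plus-future-spend accounting):

    def dust_sats(output_plus_spend_vbytes, sat_per_vb):
        return output_plus_spend_vbytes * sat_per_vb

    dust_sats(182, 3)   # ~546 sats, today's P2PKH dust limit
    dust_sats(98, 3)    # ~294 sats, today's P2WPKH dust limit
    dust_sats(182, 20)  # ~3640 sats with a 20 sat/vB basis
    dust_sats(98, 20)   # ~1960 sats with a 20 sat/vB basis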


Matt

On 8/8/21 14:52, Jeremy via bitcoin-dev wrote:

We should remove the dust limit from Bitcoin. Five reasons:

1) it's not our business what outputs people want to create


It is precisely our business - the costs are borne by us, not the creator. If someone wants to create outputs which don't 
make sense to spend, they can do so using OP_RETURN, since they won't spend it anyway.



2) dust outputs can be used in various authentication/delegation smart contracts


So can low-value-but-enough-to-be-worth-spending-when-you're-done-with-them 
outputs.

3) dust sized htlcs in lightning 
(https://bitcoin.stackexchange.com/questions/46730/can-you-send-amounts-that-would-typically-be-considered-dust-through-the-light 
) 
force channels to operate in a semi-trusted mode which has implications (AFAIU) for the regulatory classification of 
channels in various jurisdictions; agnostic treatment of fund transfers would simplify this (like getting a 0.01 cent 
dividend check in the mail)


This is unrelated to the consensus dust limit. This is related to the practical question about the value of claiming an 
output. Again, the appropriate way to solve this instead of including spendable dust outputs would be an OP_RETURN 
output (though I believe this particular problem is actually better solved elsewhere in the lightning protocol).



4) thinly divisible colored coin protocols might make use of sats as value 
markers for transactions.


These schemes can and should use values which make them economical to spend. The whole *point* of the dust limit is to 
encourage people to use values which make sense economically to "clean up" after they're done with them. If people want 
to use outputs which they will not spend/"clean up" later, they should be using OP_RETURN.



5) should we ever do confidential transactions we can't prevent it without 
compromising privacy / allowed transfers


This is the reason the dust limit is not a *consensus* limit. If and when CT were to happen we can and would relax the 
standardness rules around the dust limit to allow for CT.




The main reasons I'm aware of not to allow dust creation are:

1) dust is spam
2) dust fingerprinting attacks


3) The significant costs to every miner and full node operator.
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Turbo channels spec?

2021-07-01 Thread Matt Corallo

Thanks!

On 6/29/21 01:34, Rusty Russell wrote:

Hi all!

 John Carvalo recently pointed out that not every implementation
accepts zero-conf channels, but they are useful.  Roasbeef also recently
noted that they're not spec'd.

How do you all do it?  Here's a strawman proposal:

1. Assign a new feature bit "I accept zeroconf channels".
2. If both negotiate this, you can send update_add_htlc (etc) *before*
funding_locked without the peer getting upset.


Does it make sense to negotiate this per-direction in the channel init message(s)? There's a pretty different threat 
model between someone spending a dual-funded or push_msat balance vs someone spending a classic channel-funding balance.



3. Nodes are advised *not* to forward HTLCs from an unconfirmed channel
unless they have explicit reason to trust that node (they can still
send *out* that channel, because that's not their problem!).

It's a pretty simple change, TBH (this zeroconf feature would also
create a new set of channel_types, altering that PR).

I can draft something this week?

Thanks!
Rusty.
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Dropping Tor v2 onion services from node_announcement

2021-06-10 Thread Matt Corallo
There isn’t a lot to do at the spec level to deprecate them - nodes should 
start ignoring the addresses bug nodes always need to know the length/parse v2 
Onion addresses forever as well as store and forward them. We could amend the 
spec to say that nodes shouldn’t produce them but it won’t change the 
receive/process end much.
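
Roughly, every receiver is stuck carrying something like the following forever
(descriptor sizes per BOLT 7, from memory - treat as illustrative only):

    # Address descriptor types and payload lengths (address + 2-byte port).
    ADDR_LEN = {1: 6, 2: 18, 3: 12, 4: 37}   # IPv4, IPv6, Tor v2, Tor v3

    def parse_node_addresses(buf):
        addrs, i = [], 0
        while i < len(buf):
            t = buf[i]; i += 1
            if t not in ADDR_LEN:
                break                         # unknown type: length unknown, stop parsing
            payload = buf[i:i + ADDR_LEN[t]]; i += ADDR_LEN[t]
            if t == 3:
                continue                      # "ignore" Tor v2 - but we still had to parse it
            addrs.append((t, payload))
        return addrs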

> On Jun 1, 2021, at 18:19, darosior via Lightning-dev 
>  wrote:
> 
> Hi all,
> 
> It's been almost 9 months since Tor v2 hidden services have been deprecated.
> The Tor project will drop v2 support in about a month in the latest release. 
> It will then be entirely dropped from all supported releases by October. 
> More at https://blog.torproject.org/v2-deprecation-timeline .
> 
> Bitcoin Core defaults to v3 since 0.21.0 
> (https://bitcoincore.org/en/releases/0.21.0/) and is planning to drop the v2 
> support for 0.22 (https://github.com/bitcoin/bitcoin/pull/22050), which means 
> that v2 onions will gradually stop being gossiped on the Bitcoin network.
> 
> I think we should do the same for the Lightning network, and the timeline is 
> rather tight. Also, the configuration is user-facing (as opposed to Bitcoin 
> Core, which generates ephemeral services) which i expect to make the 
> transition trickier.
> C-lightning is deprecating the configuration of Tor v2 services starting next 
> release, according to our deprecation policy we should be able to entirely 
> drop its support 3 releases after this one, which should be not so far from 
> the October deadline.
> 
> Opinions? What is the state of other implementations with regard to Tor v2 
> support?
> 
> Antoine
> ___
> Lightning-dev mailing list
> Lightning-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] [RFC] Simplified (but less optimal) HTLC Negotiation

2021-04-27 Thread Matt Corallo




On 4/27/21 17:32, Rusty Russell wrote:

OK, draft is up:

 https://github.com/lightningnetwork/lightning-rfc/pull/867

I have to actually implement it now (though the real win comes from
making it compulsory, but that's a fair way away).

Notably, I added the requirement that update_fee messages be on their
own.  This means there's no debate on the state of the channel when
this is being applied.


I do have to admit *that* part I like :).

If we don't do turns for splicing, I wonder if we can take the rules around splicing pausing other HTLC updates, make 
them generic for future use, and then also use them for update_fee in a simpler-to-make-compulsory change :).


Matt
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] [RFC] Simplified (but less optimal) HTLC Negotiation

2021-04-27 Thread Matt Corallo



On 4/27/21 01:04, Rusty Russell wrote:

Matt Corallo  writes:

On Apr 24, 2021, at 01:56, Rusty Russell  wrote:

Matt Corallo  writes:

I promise it’s much less work than it sounds like, and avoids having to debug 
these things based on logs, which is a huge pain :). Definitely less work than 
a new state machine:).


But the entire point of this proposal is that it's a subset of the
existing state machine?


Compared to today, it's a good chunk of additional state machine logic to enforce when a message can or can not be sent, 
and additional logic for when we can (or can not) flush any pending changes buffer(s).




The only "twist" is that if it's your turn and you receive an update,
you can either reply with a "yield" message, or ignore it.


How do you handle the "no changes to make" case - do you send yields back and forth every N ms all day long or is there 
some protocol by which you resolve it when both parties try to claim turn at once?




Isn’t that pretty similar? Discard one splice proposal deterministically (ok 
that’s new) and the loser has to store their proposal in a holding cell for 
later (which they have to do in turn-based anyway). Logic to check if there’s 
unsettled things in RAA handling is pretty similar to turn-based, and logic to 
reject other messages is the same as shutdown handling today.


Nope, with the simplified protocol you can `update_splice` at any time
instead of your normal update, since both sides are already in sync.


Hmm, I'm somewhat failing to understand why it's that different - you can only update_splice if it's your turn, which is 
about exactly the same amount of additional logic to check turn conditions as just flag "want to do splice". Either way 
you have the same pending splice buffer.
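
To make the comparison concrete, a rough sketch (hypothetical names, heavily
simplified - the point being that both designs end up with the same buffer and
only differ in the gate):

    TURN_BASED = True  # flip to compare the two designs

    def send(msg_type, payload):
        pass  # stand-in for the wire layer

    class Channel:
        def __init__(self):
            self.pending_splice = None    # the "pending splice" buffer, in both designs
            self.my_turn = False          # only meaningful in the turn-based design
            self.unsettled_updates = 0    # in-flight changes not yet revoked by both sides

        def queue_splice(self, splice):
            self.pending_splice = splice
            self.maybe_send_splice()

        def maybe_send_splice(self):
            if self.pending_splice is None:
                return
            # turn-based: "is it my turn (and have I sent nothing else this turn)?"
            # non-turn:   "have all in-flight changes cleared on both sides?"
            ready = self.my_turn if TURN_BASED else self.unsettled_updates == 0
            if ready:
                send("update_splice" if TURN_BASED else "splice", self.pending_splice)
                self.pending_splice = None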



- MUST use the higher of the two `funding_feerate_perkw` as the feerate for
  the splice.


If we like turn based, why not just deterministically throw out one splice? :)


Because while I am going to implement turn-based, I'm not sure if anyone
else is.  I guess we'll see?


My point was more that it's similar in logic - if you throw out the splice deterministically and just keep it in some 
"pending splice" buffer on the sending side, you've just done basically what you'd do to implement turns, while keeping 
the non-turn splice protocol a bit easier :).


Matt
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Increase channel-jamming capital requirements by not counting dust HTLCs

2021-04-26 Thread Matt Corallo

> I looked into this more closely, and as far as I understand it, the spec
> already states that you should not count dust HTLCs:

Oops! We do the same thing, we will fix that.

On 4/26/21 11:03, Eugene Siegel wrote:
I would have to think more about the issue where it's not possible to lower the feerate though.  That seems 
like a spec issue?


There is no current in-protocol way to change the dust limit, so the issue 
doesn't apply here.
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] [RFC] Simplified (but less optimal) HTLC Negotiation

2021-04-24 Thread Matt Corallo


> On Apr 24, 2021, at 01:56, Rusty Russell  wrote:
> 
> Matt Corallo  writes:
>> Somehow I missed this thread, but I did note in a previous meeting - these 
>> issues are great fodder for fuzzing. We’ve had a fuzzer which aggressively 
>> tests for precisely these types of message-non-delivery-and-resending 
>> production desync bugs for several years. When it initially landed it forced 
>> several rewrites of parts of the state machine, but quickly exhausted the 
>> bug fruit (though catches other classes of bugs occasionally as well). The 
>> state machine here is really not that big - while I agree simplifying it 
>> where possible is nice, ripping things out to replace them with fresh code 
>> (which would need similar testing) is probably not the most obvious decrease 
>> in complexity.
> 
> It's historically had more bugs than anything else in the protocol.  We
> literally found another one in feerate negotiation since the last
> c-lightning release :(
> 
> I'd rather not have bugs than try to catch them all.

I promise it’s much less work than it sounds like, and avoids having to debug 
these things based on logs, which is a huge pain :). Definitely less work than 
a new state machine:).

> You could propose a splice (or update to anchors, or whatever) any time
> when it's your turn, as long as you haven't proposed any other updates.
> That's simple!

I presume you’d need to take it a few steps further - if the last message 
received required a response CS/RAA, you must still wait until things have 
settled down. I guess it also depends on the exact semantics of a “turn based” 
message protocol - if you received some updates and a signature, are you 
allowed to add more updates after you send your CS/RAA (then you have a good 
chunk of today’s complexity), or do you have to wait until they send you back 
their last RAA (in which case presumably they aren’t allowed to include 
anything else as then they’d be able to monopolize update windows). In the 
first case you still have the same issues of today, in the second less so, but 
you’re doing a similar “ok, just pause updates and wait for things to settle “, 
I think.

> Instead, *both* sides have to send a splice message to synchronize, and
> they can only do so once all in-flight changes have cleared. You have
> to resolve simultaneous splice attempts (we use "highest feerate"
> tiebreak by node_id), and keep track of this stage while you clear
> in-flight changes.

Isn’t that pretty similar? Discard one splice proposal deterministically (ok 
that’s new) and the loser has to store their proposal in a holding cell for 
later (which they have to do in turn-based anyway). Logic to check if there’s 
unsettled things in RAA handling is pretty similar to turn-based, and logic to 
reject other messages is the same as shutdown handling today.

> Here's the subset of requirements from the draft which relate to this:
> 
> The sender:
> - MUST NOT send another splice message while a splice is being negotiated.
> - MUST NOT send a splice message after sending uncommitted changes.
> - MUST NOT send other channel updates until splice negotiation has completed.
> 
> The receiver:
> - MUST respond with a `splice` message of its own if it has not already.
> - MUST NOT reply with `splice` until all commitment updates are resolved by 
> both peers.

Probably use “committed” not “resolved”. “Resolved” sounds like “no pending 
HTLCs left”.

> - MUST use the higher of the two `funding_feerate_perkw` as the feerate for
>  the splice.

If we like turn based, why not just deterministically throw out one splice? :)

> - MUST NOT send other channel updates until splice negotiation has completed.
> 
> Similar requirements exist for other major channel changes.
> 
> Cheers,
> Rusty.
> 
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] [RFC] Simplified (but less optimal) HTLC Negotiation

2021-04-23 Thread Matt Corallo


> On Apr 20, 2021, at 17:19, Rusty Russell  wrote:
> 
> After consideration, I prefer alternation.  It fits better with the
> existing implementations, and it is more optimal than reflection for
> optimized implementations.
> 
> In particular, you have a rule that says you can send updates and
> commitment_signed when it's not your turn, and the leader either
> responds with a "giving way" message, or ignores your changes and sends
> its own.
> 
> A simple implementation *never* sends a commitment_signed until it
> receives "giving way" so it doesn't have to deal with orphaned
> commitments.  A more complex implementation sends opportunistically and
> then has to remember that it's committed if it loses the race.  Such an
> implementation is only slower than the current system if that race
> happens.

Somehow I missed this thread, but I did note in a previous meeting - these 
issues are great fodder for fuzzing. We’ve had a fuzzer which aggressively 
tests for precisely these types of message-non-delivery-and-resending 
production desync bugs for several years. When it initially landed it forced 
several rewrites of parts of the state machine, but quickly exhausted the bug 
fruit (though catches other classes of bugs occasionally as well). The state 
machine here is really not that big - while I agree simplifying it where 
possible is nice, ripping things out to replace them with fresh code (which 
would need similar testing) is probably not the most obvious decrease in 
complexity.
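
To give a flavor of what that fuzzer looks like (structure only - the node
interface here is hypothetical, the real harness drives two full channel state
machines and a chain stub):

    import random

    # Hypothetical interface: send_payment()/handle_message() return lists of wire
    # messages to queue for the peer, reconnect() returns retransmissions.
    def fuzz_one_case(seed, alice, bob):
        rng = random.Random(seed)
        inbox = {id(alice): [], id(bob): []}
        for _ in range(rng.randrange(5, 50)):
            roll = rng.random()
            if roll < 0.4:    # originate a payment from a random side
                src, dst = (alice, bob) if rng.random() < 0.5 else (bob, alice)
                inbox[id(dst)].extend(src.send_payment(rng.randrange(1, 10_000)))
            elif roll < 0.9:  # deliver one queued message; responses go to the peer
                dst = alice if rng.random() < 0.5 else bob
                peer = bob if dst is alice else alice
                if inbox[id(dst)]:
                    inbox[id(peer)].extend(dst.handle_message(inbox[id(dst)].pop(0)))
            else:             # disconnect: everything in flight is silently dropped
                inbox[id(alice)].clear(); inbox[id(bob)].clear()
                inbox[id(alice)].extend(bob.reconnect())
                inbox[id(bob)].extend(alice.reconnect())
        # drain what's left; afterwards both views of the channel must match
        while inbox[id(alice)] or inbox[id(bob)]:
            for dst, peer in ((alice, bob), (bob, alice)):
                if inbox[id(dst)]:
                    inbox[id(peer)].extend(dst.handle_message(inbox[id(dst)].pop(0)))
        assert alice.channel_state() == bob.channel_state()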

> I've been revisiting this because it makes things like splicing easier:
> the current draft requires stopping changes while splicing is being
> negotiated, which is not entirely trivial.  With the simplified method,
> you don't have to wait at all.

Hmm, what’s nontrivial about this? How much more complicated is this than 
having an alternation to updates and pausing HTLC updates for a cycle or two 
while splicing is negotiated (I assume it would still need a similar 
requirement, as otherwise you have the same complexity)? We already have a 
similar update-stopping process for shutdown, though of course it doesn’t 
include restarting.
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Increase channel-jamming capital requirements by not counting dust HTLCs

2021-04-23 Thread Matt Corallo
The update_fee message does not, as far as I recall, change the dust limit for 
outputs in a channel (though I’ve suggested making such a change).
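
For reference, the "dust HTLC" test being discussed here is the BOLT 3 trimming
rule, roughly as follows (weights are the pre-anchors numbers, from memory):

    HTLC_TIMEOUT_WEIGHT = 663   # offered HTLCs
    HTLC_SUCCESS_WEIGHT = 703   # received HTLCs

    def is_trimmed(amount_msat, offered, feerate_per_kw, dust_limit_sat):
        weight = HTLC_TIMEOUT_WEIGHT if offered else HTLC_SUCCESS_WEIGHT
        htlc_tx_fee_sat = feerate_per_kw * weight // 1000
        return amount_msat // 1000 < dust_limit_sat + htlc_tx_fee_sat

Note that lowering feerate_per_kw shrinks htlc_tx_fee_sat, which is exactly how
previously-trimmed HTLCs suddenly need real outputs, as Bastien describes below.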

> On Apr 23, 2021, at 12:24, Bastien TEINTURIER  wrote:
> 
> 
> Hi Eugene,
> 
> The reason dust HTLCs count for the 483 HTLC limit is because of `update_fee`.
> If you don't count them and exceed the 483 HTLC limit, you can't lower the 
> fee anymore
> because some HTLCs that were previously dust won't be dust anymore and you 
> may end
> up with more than 483 HTLC outputs in your commitment, which opens the door 
> to other
> kinds of attacks.
> 
> This is the first issue that comes to mind, but there may be other drawbacks 
> if we dig into
> this enough with an attacker's mindset.
> 
> Bastien
> 
>> Le ven. 23 avr. 2021 à 17:58, Eugene Siegel  a écrit :
>> I propose a simple mitigation to increase the capital requirement of 
>> channel-jamming attacks. This would prevent an unsophisticated attacker with 
>> low capital from jamming a target channel.  It seems to me that this is a 
>> *free* mitigation without any downsides (besides code-writing), so I'd like 
>> to hear other opinions.
>> 
>> In a commitment transaction, we trim dust HTLC outputs.  I believe that the 
>> reason for the 483 HTLC limit each side has in the spec is to prevent 
>> commitment tx's from growing unreasonably large, and to ensure they are 
>> still valid tx's that can be included in a block.  If we don't include dust 
>> HTLCs in this calculation, since they are not on the commitment tx, we still 
>> allow 483 (x2) non-dust HTLCs to be included on the commitment tx.  There 
>> could be a configurable limit on the number of outstanding dust HTLCs, but 
>> the point is that it doesn't affect the non-dust throughput of the channel.  
>> This raises the capital requirement of channel-jamming so that each HTLC 
>> must be non-dust, rather than spamming 1 sat payments.
>> 
>> Interested in others' thoughts.
>> 
>> Eugene (Crypt-iQ)
>> ___
>> Lightning-dev mailing list
>> Lightning-dev@lists.linuxfoundation.org
>> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
> ___
> Lightning-dev mailing list
> Lightning-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Proposal for skip channel confirmation.

2020-08-24 Thread Matt Corallo

A few notes.

Given gossip messages will be rejected by many nodes if no such on-chain transaction exists, I don't think you can 
"re-broadcast" gossip messages at that time, instead I believe you simply need to not gossip until the funding 
transaction has some confirmations. Still, this shouldn't prevent receiving payments, as invoices carrying a last-hop 
hint should be able to indicate any short_channel_id value and have it be accepted.


It may make sense to reuse some "private short channel ID negotiation" feature for the temporary 0-conf short channel id 
value.


One thing this protocol doesn't capture is unidirectional 0-conf - maybe the channel initiator is happy to receive 
payments (since it's their funds which opened the channel, this is reasonable), but the channel initie-ee (?) isn't 
(which, again, is reasonable). This leaves only the push_msat value pay-able, and only once, but is a perfectly 
reasonable trust model and I believe some wallets use this today.
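
Something like the following captures that asymmetry (field names are mine, this
is not a proposed spec change):

    # Per-direction trust model for receiving over a not-yet-confirmed channel.
    def safe_to_receive_on_unconfirmed_channel(we_funded_the_channel, we_trust_remote_funder):
        # The funder receiving over its own unconfirmed channel only risks funds
        # that were already theirs; the fundee receiving is trusting that the
        # funding tx (and any push_msat balance) will actually confirm.
        return we_funded_the_channel or we_trust_remote_funder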


Matt

On 8/24/20 4:16 AM, Roei Erez wrote:

Hello everyone,

I would like to discuss the ability to skip a channel funding
transaction confirmation, making the channel fully operational before
its on-chain confirmation (aka a zero-conf channel).
Till confirmation, this channel requires trust between its two parties
and in the case of a remote initiator, it puts the received funds of
the local party at risk.
Nevertheless, there are cases where it makes sense to support this
behavior. For example, in cases both parties decide to trust each
other. Or, in cases where trust between the parties already exists
(buying a pre-loaded channel from a service like Bitrefill).

The motivation is gained from the "Immediate on-boarding" use case:
* Bob is connected to a routing node and issues an invoice with a
routing hint that points to a fake channel between Bob and that node.
* When Alice pays Bob's invoice, the routing node intercepts the HTLC
and holds it.
* Then, the routing node does the following:
   * Opens a channel to Bob where Bob has a choice of skipping funding
  confirmation (channel is open and active).
   * Pays Bob the original Alices' invoice (potentially, minus a service fee)

 From Bob's perspective it is his choice whether to agree to the
payment via this channel (and by that increase the trust) or disagree
and wait for confirmation.
Another practical way for Bob is to skip confirmation and "hold" the
payment by not providing the pre-image.

Right now different implementations support zero-conf channels in
different ways. These implementations have defined their own way on
how to agree on a short_channel_id (fake one) before the transaction
is confirmed.

The following suggests some changes to the funding flow to support that:
   1. accept_channel message - in case the fundee wants to skip
   confirmation he sends minimum_depth=0
   2. funding_signed message - no change.
   3. funding_locked message - if fundee has sent minimum_depth=0, then
   both parties send funding_locked while the channel_id is derived using a
   convention agreed on both. One proposal for such convention is to take it
   from the temporary_channel_id provided in previous open_channel
   message as follows:
 * Use the first 8 bytes to initialize an unsigned integer,
call it shortID
 * Apply this transformation:  shortID / 2^6 + 100,000
 * The above transformation ensures the block height falls in the
range of 100,000 - 2^18+100,000. The rationale is that the id will
never point to a valid mined transaction and the first 100,000 
blocks
are left for testing in other chains.
 * Assuming the temporary_channel_id is some random number, it is
   not likely that the derived short_channel_id will conflict with other
   channels in both peers but both peers should validate that before
   sending funding_locked.
   4. When the channel is confirmed gossip messages such as
   channel_update are re-broadcasted and refers to the confirmed
   channel_id (such as the case with re-org).

I created a PR in LND that implements these changes
https://github.com/lightningnetwork/lnd/pull/4424

Cheers,
Roei
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] [bitcoin-dev] RBF Pinning with Counterparties and Competing Interest

2020-06-24 Thread Matt Corallo
Given transaction relay delays and a network topology that is rather 
transparent if you look closely enough, I think this is very real and very 
practical (double-digit % success rate, at least, with some trial and error 
probably 50+). That said, we all also probably know most of the people who know 
enough to go from zero to doing this practically next week. As for motivated 
folks who have lots of time to read code and dig, this seems like something 
worth fixing in the medium term.

Your observation is what’s largely led me to conclude there isn’t a lot we can 
do here without a lot of creativity and fundamental rethinking of our approach. 
One thing I keep harping on is maybe saving the blind-CPFP approach with a) 
eltoo, and b) some kind of magic transaction relay metadata that allows you to 
specify “this spends at least one output on any transaction that spends output 
X” so that nodes can always apply it properly. But maybe that’s a pipedream of 
complexity. I know Antoine has other thoughts.
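
For those not steeped in mempool policy, the replacement checks ZmnSCPxj walks
through below boil down to roughly the following (a sketch of the BIP 125 rules,
not Bitcoin Core's actual code):

    INCREMENTAL_RELAY_FEERATE = 1  # sat/vB, roughly Core's default incremental relay fee

    def replacement_allowed(new_fee, new_vsize, new_feerate, evicted_fees, conflict_feerates):
        # must pay more in absolute fees than everything it evicts, plus enough
        # extra to pay for its own relay bandwidth...
        if new_fee < sum(evicted_fees) + INCREMENTAL_RELAY_FEERATE * new_vsize:
            return False
        # ...and must beat what it directly conflicts with on feerate as well
        if any(new_feerate <= fr for fr in conflict_feerates):
            return False
        return True

Two conflicting transactions with near-equal fees therefore cannot replace each
other; whichever one a node saw first "wins" locally, which is what lets an
attacker partition mempools along a preimage-tx/timeout-tx boundary as described
below.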

Matt

> On Jun 22, 2020, at 04:04, Bastien TEINTURIER via bitcoin-dev 
>  wrote:
> 
> 
> Hey ZmnSCPxj,
> 
> I agree that in theory this looks possible, but doing it in practice with 
> accurate control
> of what parts of the network get what tx feels impractical to me (but maybe 
> I'm wrong!).
> 
> It feels to me that an attacker who would be able to do this would break 
> *any* off-chain
> construction that relies on absolute timeouts, so I'm hoping this is insanely 
> hard to
> achieve without cooperation from a miners subset. Let me know if I'm too 
> optimistic on
> this!
> 
> Cheers,
> Bastien
> 
>> Le lun. 22 juin 2020 à 10:15, ZmnSCPxj  a écrit :
>> Good morning Bastien,
>> 
>> > Thanks for the detailed write-up on how it affects incentives and 
>> > centralization,
>> > these are good points. I need to spend more time thinking about them.
>> >
>> > > This is one reason I suggested using independent pay-to-preimage
>> > > transactions[1]
>> >
>> > While this works as a technical solution, I think it has some incentives 
>> > issues too.
>> > In this attack, I believe the miners that hide the preimage tx in their 
>> > mempool have
>> > to be accomplice with the attacker, otherwise they would share that tx 
>> > with some of
>> > their peers, and some non-miner nodes would get that preimage tx and be 
>> > able to
>> > gossip them off-chain (and even relay them to other mempools).
>> 
>> I believe this is technically possible with current mempool rules, without 
>> miners cooperating with the attacker.
>> 
>> Basically, the attacker releases two transactions with near-equal fees, so 
>> that neither can RBF the other.
>> It releases the preimage tx near miners, and the timelock tx near non-miners.
>> 
>> Nodes at the boundaries between those that receive the preimage tx and the 
>> timelock tx will receive both.
>> However, they will receive one or the other first.
>> Which one they receive first will be what they keep, and they will reject 
>> the other (and *not* propagate the other), because the difference in fees is 
>> not enough to get past the RBF rules (which requires not just a feerate 
>> increase, but also an increase in absolute fee, of at least the minimum 
>> relay feerate times transaction size).
>> 
>> Because they reject the other tx, they do not propagate the other tx, so the 
>> boundary between the two txes is inviolate, neither can get past that 
>> boundary, this occurs even if everyone is running 100% unmodified Bitcoin 
>> Core code.
>> 
>> I am not a mempool expert and my understanding may be incorrect.
>> 
>> Regards,
>> ZmnSCPxj
> ___
> bitcoin-dev mailing list
> bitcoin-...@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] [bitcoin-dev] RBF Pinning with Counterparties and Competing Interest

2020-04-23 Thread Matt Corallo via Lightning-dev



On 4/23/20 8:46 AM, ZmnSCPxj wrote:
>>> -   Miners, being economically rational, accept this proposal and include 
>>> this in a block.
>>>
>>> The proposal by Matt is then:
>>>
>>> -   The hashlock branch should instead be:
>>> -   B and C must agree, and show the preimage of some hash H (hashlock 
>>> branch).
>>> -   Then B and C agree that B provides a signature spending the hashlock 
>>> branch, to a transaction with the outputs:
>>> -   Normal payment to C.
>>> -   Hook output to B, which B can use to CPFP this transaction.
>>> -   Hook output to C, which C can use to CPFP this transaction.
>>> -   B can still (somehow) not maintain a mempool, by:
>>> -   B broadcasts its timelock transaction.
>>> -   B tries to CPFP the above hashlock transaction.
>>> -   If CPFP succeeds, it means the above hashlock transaction exists and B 
>>> queries the peer for this transaction, extracting the preimage and claiming 
>>> the A->B HTLC.
>>
>> Note that no query is required. The problem has been solved and the 
>> preimage-containing transaction should now confirm just fine.
> 
> Ah, right, so it gets confirmed and the `blocksonly` B sees it in a block.
> 
> Even if C hooks a tree of low-fee transactions on its hook output or normal 
> payment, miners will still be willing to confirm this and the B hook CPFP 
> transaction without, right?

Correct, once it makes it into the mempool we can CPFP it and all the regular 
sub-package CPFP calculation will pick it
and its descendants up. Of course this relies on it not spending any other 
unconfirmed inputs.
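
(For reference, the "sub-package" selection here is just ancestor-feerate scoring,
roughly the following - the tx/mempool interface is hypothetical, and real block
building also deduplicates shared ancestors:)

    def ancestor_score(tx, mempool):
        fee, vsize = tx.fee, tx.vsize
        for ancestor in mempool.unconfirmed_ancestors(tx):
            fee += ancestor.fee
            vsize += ancestor.vsize
        # a high-fee CPFP child drags its low-fee parent up to this effective rate
        return fee / vsize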
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] [bitcoin-dev] RBF Pinning with Counterparties and Competing Interest

2020-04-22 Thread Matt Corallo via Lightning-dev
Great summary, a few notes inline.

> On Apr 22, 2020, at 21:50, ZmnSCPxj  wrote:
> 
> Good morning lists et al,
> 
> Let me try to summarize things a little:
> 
> * Suppose we have a forwarding payment A->B->C.
> * Suppose B does not want to maintain a mempool and is running in 
> `blocksonly` mode to reduce operational costs.

Quick point of clarification, due to the mempool lacking a consensus system 
(that’s the whole point, after all :p), there are several reasons to that just 
running a full node/having a mempool isn’t sufficient.

> * C triggers B somehow dropping the B<->C channel, such as by sending an 
> `error` message, which will usually cause the other side to drop the channel 
> onchain using its commitment transaction.
> * The dropped B<->C channel has an HTLC (that was set up during the A->B->C 
> forwarding).
> * The HTLC, being used in a Poon-Dryja channel, actually has the following 
> contract text:
> * The fund may be claimed by either of these clauses:
> * C can claim, if C shows the preimage of some hash H (hashlock branch).
> * B and C must agree, and claim after time L (timelock branch).
> * B holds a signature from C that can claim the timelock branch of the HTLC, 
> for a transaction that spends to an output with an `OP_CHECKSEQUENCEVERIFY`.
> * The signature is `SIGHASH_ALL`, so the transaction has a fixed feerate.
> * C can "pin" the HTLC output by spending using the hashlock branch, and 
> creating a large fee, low fee-rate (tree of) transactions.

Another: this is the simplest example. There are also games around the package 
size limits if I recall correctly.

> * As it is a low fee-rate, miners have no incentive to put this in a block, 
> especially if unrelated higher-fee-rate transactions exist that would earn 
> them more money.
> * Even in a full RBF universe, because of the anti-DoS mempool rules, B 
> cannot evict this pinned transaction by just bidding up the feerate.
> * A replacing transaction cannot evict alternatives unless its absolute fee 
> is greater than the absolute fee of the alternative.
> * The pinning transaction has a high fee, but is blockspace-wasteful, so it 
> is:
>   * Undesirable to mine (low feerate).
>   * Difficult to evict (high fee).
> * Thus, B is unable to get its timelock-branch transaction in the mempools of 
> miners.
> * C waits until the A->B HTLC times out, then:
> * C directly contacts miners with an out-of-band proposal to replace its 
> transaction with an alternative that is much smaller and has a low fee, but 
> much better feerate.

Or they can just wait. For example in today’s mempool it would not be strange 
for a transaction at 1 sat/vbyte to wait a day but eventually confirm.

> * Miners, being economically rational, accept this proposal and include this 
> in a block.
> 
> The proposal by Matt is then:
> 
> * The hashlock branch should instead be:
> * B and C must agree, and show the preimage of some hash H (hashlock branch).
> * Then B and C agree that B provides a signature spending the hashlock 
> branch, to a transaction with the outputs:
> * Normal payment to C.
> * Hook output to B, which B can use to CPFP this transaction.
> * Hook output to C, which C can use to CPFP this transaction.
> * B can still (somehow) not maintain a mempool, by:
> * B broadcasts its timelock transaction.
> * B tries to CPFP the above hashlock transaction.
> * If CPFP succeeds, it means the above hashlock transaction exists and B 
> queries the peer for this transaction, extracting the preimage and claiming 
> the A->B HTLC.

Note that no query is required. The problem has been solved and the 
preimage-containing transaction should now confirm just fine.

> Is that a fair summary?

Yep!

> --
> 
> Naively, and remembering I am completely ignorant of the exact details of the 
> mempool rules, it seems to me quite strange that we are allowing an 
> undesirable transaction (tree) into the mempool:
> 
> * Undesirable to mine (low fee-rate).
> * Difficult to evict (high fee).

As noted, such transactions today are profit in 10 hours. Just because they’re 
big doesn’t mean they don’t pay.

> Miners are not interested in low fee-rate transactions, as long as higher 
> fee-rate transactions exist.
> And being difficult to evict means miners cannot get alternatives that are 
> more lucrative for them.
> 
> The reason (as I understand it) eviction is purposely made difficult here is 
> to prevent certain DoS attacks on Bitcoin nodes, specifically:
> 
> 1. Attacker sends a low fee-rate tx as a "root" transaction.
> 2  Attacker sends thousands of low fee-rate tx that build off the above root.

I believe the limit is 25, though the point stands, mostly from a total-size 
perspective.

> 3. Attacker sends a slightly higher fee-rate alternative to the root, 
> evicting the above tree of txes.
> 4. Attacker sends thousands of low fee-rate tx that build off the latest root.
> 5. GOTO 3.
> 
> However, it seems to me, naively, that "an ounce of prevention is

Re: [Lightning-dev] [bitcoin-dev] RBF Pinning with Counterparties and Competing Interest

2020-04-22 Thread Matt Corallo via Lightning-dev


On 4/22/20 7:27 PM, Olaoluwa Osuntokun wrote:
> 
>> Indeed, that is what I’m suggesting
> 
> Gotcha, if this is indeed what you're suggesting (all HTLC spends are now
> 2-of-2 multi-sig), then I think the modifications to the state machine I
> sketched out in an earlier email are required. An exact construction which
> achieves the requirements of "you can't broadcast until you have a secret
> which I can obtain from the htlc sig for your commitment transaction, and my
> secret is revealed with another swap", appears to be an open problem, atm.

Hmm, indeed, it does seem to require a change to the state machine, but I don't 
think a very interesting one. Because B
providing A an HTLC signature spending a commitment transaction B will 
broadcast does not allow A to actually broadcast
said HTLC transaction, B can be rather liberal with it. Indeed, however, it 
would require that B provide such a
signature before A can send the commitment_signed that exists today.

> Even if they're restricted in this fashion (must be a 1-in-1 out,
> sighashall, fees are pre agreed upon), they can still spend that with a CPFP
> (while still unconfirmed in the mempool) and create another heavy tree,
> which puts us right back at the same bidding war scenario?

Right, you'd have to use anchor outputs just like we do on the commitment 
transaction :).

>> There are a bunch of ways of doing pinning - just opting into RBF isn’t
>> even close to enough.
> 
> Mhmm, there're other ways of doing pinning. But with anchors as is defined
> in that spec PR, they're forced to spend with an RBF-replaceable
> transaction, which means the party wishing to time things out can enter into
> a bidding war. If the party trying to impeded things participates in this
> progressive absolute fee increase, it's likely that the war terminates
> with _one_ of them getting into the block, which seems to resolve
> everything?

No? Even if we assume there are no tricks that you can play with, eg, the 
package limits during eviction, which I'd be 
surprised about, the "absolute fee/feerate" thing still screws you. The 
attacker here gets to hold something at the
bottom of the mempool and the poor honest party is going to have to pay an 
absurd (likely more than the HTLC value) fee
just to get it unstuck, whereas the attacker never would have had to pay said 
fee.

> -- Laolung
> 
> 
> On Wed, Apr 22, 2020 at 4:20 PM Matt Corallo  <mailto:lf-li...@mattcorallo.com>> wrote:
> 
> 
> 
>> On Apr 22, 2020, at 16:13, Olaoluwa Osuntokun > <mailto:laol...@gmail.com>> wrote:
>>
>> > Hmm, maybe the proposal wasn't clear. The idea isn't to add signatures 
>> to
>> > braodcasted transactions, but instead to CPFP a maybe-broadcasted
>> > transaction by sending a transaction which spends it and seeing if it 
>> is
>> > accepted
>>
>> Sorry I still don't follow. By "we clearly need to go the other 
>> direction -
>> all HTLC output spends need to be pre-signed.", you don't mean that the 
>> HTLC
>> spends of the non-broadcaster also need to be an off-chain 2-of-2 
>> multi-sig
>> covenant? If the other party isn't restricted w.r.t _how_ they can spend 
>> the
>> output (non-rbf'd, ect), then I don't see how that addresses anything.
> 
> Indeed, that is what I’m suggesting. Anchor output and all. One thing we 
> could think about is only turning it on
> over a certain threshold, and having a separate 
> “only-kinda-enforceable-on-chain-HTLC-in-flight” limit.
> 
>> Also see my mail elsewhere in the thread that the other party is actually
>> forced to spend their HTLC output using an RBF-replaceable transaction. 
>> With
>> that, I think we're all good here? In the end both sides have the 
>> ability to
>> raise the fee rate of their spending transactions with the highest 
>> winning.
>> As long as one of them confirms within the CLTV-delta, then everyone is
>> made whole.
> 
> It does seem like my cached recollection of RBF opt-in was incorrect but 
> please re-read the intro email. There are a
> bunch of ways of doing pinning - just opting into RBF isn’t even close to 
> enough.
> 
>> [1]: https://github.com/bitcoin/bitcoin/pull/18191
>>
>>
>> On Wed, Apr 22, 2020 at 9:50 AM Matt Corallo > <mailto:lf-li...@mattcorallo.com>> wrote:
>>
>> A few replies inline.
>>
>> On 4/22/20 12:13 AM, Olaoluwa Osuntokun wrote:
>> > Hi Matt,
>> >
>&

Re: [Lightning-dev] [bitcoin-dev] RBF Pinning with Counterparties and Competing Interest

2020-04-22 Thread Matt Corallo via Lightning-dev


> On Apr 22, 2020, at 16:13, Olaoluwa Osuntokun  wrote:
> 
> > Hmm, maybe the proposal wasn't clear. The idea isn't to add signatures to
> > braodcasted transactions, but instead to CPFP a maybe-broadcasted
> > transaction by sending a transaction which spends it and seeing if it is
> > accepted
> 
> Sorry I still don't follow. By "we clearly need to go the other direction -
> all HTLC output spends need to be pre-signed.", you don't mean that the HTLC
> spends of the non-broadcaster also need to be an off-chain 2-of-2 multi-sig
> covenant? If the other party isn't restricted w.r.t _how_ they can spend the
> output (non-rbf'd, ect), then I don't see how that addresses anything.

Indeed, that is what I’m suggesting. Anchor output and all. One thing we could 
think about is only turning it on over a certain threshold, and having a 
separate “only-kinda-enforceable-on-chain-HTLC-in-flight” limit.

> Also see my mail elsewhere in the thread that the other party is actually
> forced to spend their HTLC output using an RBF-replaceable transaction. With
> that, I think we're all good here? In the end both sides have the ability to
> raise the fee rate of their spending transactions with the highest winning.
> As long as one of them confirms within the CLTV-delta, then everyone is
> made whole.

It does seem like my cached recollection of RBF opt-in was incorrect but please 
re-read the intro email. There are a bunch of ways of doing pinning - just 
opting into RBF isn’t even close to enough.

> [1]: https://github.com/bitcoin/bitcoin/pull/18191
> 
> 
>> On Wed, Apr 22, 2020 at 9:50 AM Matt Corallo  
>> wrote:
>> A few replies inline.
>> 
>> On 4/22/20 12:13 AM, Olaoluwa Osuntokun wrote:
>> > Hi Matt,
>> > 
>> > 
>> >> While this is somewhat unintuitive, there are any number of good anti-DoS
>> >> reasons for this, eg:
>> > 
>> > None of these really strikes me as "good" reasons for this limitation, 
>> > which
>> > is at the root of this issue, and will also plague any more complex Bitcoin
>> > contracts which rely on nested trees of transaction to confirm (CTV, 
>> > Duplex,
>> > channel factories, etc). Regarding the various (seemingly arbitrary) 
>> > package
>> > limits it's likely the case that any issues w.r.t computational complexity
>> > that may arise when trying to calculate evictions can be ameliorated with
>> > better choice of internal data structures.
>> > 
>> > In the end, the simplest heuristic (accept the higher fee rate package) 
>> > side
>> > steps all these issues and is also the most economically rational from a
>> > miner's perspective. Why would one prefer a higher absolute fee package
>> > (which could be very large) over another package with a higher total _fee
>> > rate_?
>> 
>> This seems like a somewhat unnecessary drive-by insult of a project you 
>> don't contribute to, but feel free to start with
>> a concrete suggestion here :).
>> 
>> >> You'll note that B would be just fine if they had a way to safely monitor 
>> >> the
>> >> global mempool, and while this seems like a prudent mitigation for
>> >> lightning implementations to deploy today, it is itself a quagmire of
>> >> complexity
>> > 
>> > Is it really all that complex? Assuming we're talking about just watching
>> > for a certain script template (the HTLC script) in the mempool to be able to
>> > pull a pre-image as soon as possible. Early versions of lnd used the 
>> > mempool
>> > for commitment broadcast detection (which turned out to be a bad idea so we
>> > removed it), but at a glance I don't see why watching the mempool is so
>> > complex.
>> 
>> Because watching your own mempool is not guaranteed to work, and during 
>> upgrade cycles that include changes to the
>> policy rules an attacker could exploit your upgraded/non-upgraded status to 
>> perform the same attack.
>> 
>> >> Further, this is a really obnoxious assumption to hoist onto lightning
>> >> nodes - having an active full node with an in-sync mempool is a lot more
>> >> CPU, bandwidth, and complexity than most lightning users were expecting to
>> >> face.
>> > 
>> > This would only be a requirement for Lightning nodes that seek to be a part
>> > of the public routing network with a desire to _forward_ HTLCs. This isn't
>> > doesn't affect laptops or mobil

Re: [Lightning-dev] RBF Pinning with Counterparties and Competing Interest

2020-04-22 Thread Matt Corallo via Lightning-dev
Hmm, that's an interesting suggestion, it definitely raises the bar for attack 
execution rather significantly. Because lightning (and other second-layer 
systems) already relies heavily on uncensored access to blockchain data, its 
reasonable to extend the "if you don't have enough blocks, aggressively query 
various sources to find new blocks, or, really just do it always" solution to 
"also send relevant transactions while we're at it".

Sadly, unlike for block data, there is no consensus mechanism for nodes to 
ensure the transactions in their mempools are the same as others. Thus, if you 
focus on sending the pinning transaction to miner nodes directly (which isn't 
trivial, but also not nearly as hard as it sounds), you could still pull off 
the attack. However, to do it now, you'd need to
wait for your counterparty to broadcast the corresponding timeout transaction 
(once it is confirmable, and can thus get into mempools), turning the whole 
thing into a mempool-acceptance race. Luckily there isn’t much cost to 
*trying*, though it’s less likely you’ll succeed.

There are also practical design issues - if you’re claiming multiple HTLC 
outputs in a single transaction the node would need to provide reject messages 
for each input which is conflicted, something which we’d need to think hard 
about the DoS implications of.

In any case, while it’s definitely better than nothing, it’s unclear if it’s 
really the kind of thing I’d want to rely on for my own funds.
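
For concreteness, the flow Dave sketches below would look something like this
(message names hypothetical - BIP 61 itself is gone from Core, so this would be a
new p2p extension):

    from hashlib import sha256

    # Hypothetical peer interface: send_tx() returns the reject-with-conflict reply,
    # getdata_tx() fetches a transaction by txid.
    def try_claim_via_conflict(peer, our_timeout_tx, payment_hash):
        reply = peer.send_tx(our_timeout_tx)
        if reply.code != "conflicts-with":
            return None
        pinned = peer.getdata_tx(reply.conflicting_txid)
        for witness_stack in pinned.input_witnesses():
            for item in witness_stack:
                if sha256(item).digest() == payment_hash:
                    return item  # the preimage - claim the upstream HTLC with it
        return None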

Matt


> On 4/22/20 2:24 PM, David A. Harding wrote:
>> On Mon, Apr 20, 2020 at 10:43:14PM -0400, Matt Corallo via Lightning-dev 
>> wrote:
>> A lightning counterparty (C, who received the HTLC from B, who
>> received it from A) today could, if B broadcasts the commitment
>> transaction, spend an HTLC using the preimage with a low-fee,
>> RBF-disabled transaction.  After a few blocks, A could claim the HTLC
>> from B via the timeout mechanism, and then after a few days, C could
>> get the HTLC-claiming transaction mined via some out-of-band agreement
>> with a small miner. This leaves B short the HTLC value.
> 
> IIUC, the main problem is honest Bob will broadcast a transaction
> without realizing it conflicts with a pinned transaction that's already
> in most node's mempools.  If Bob knew about the pinned transaction and
> could get a copy of it, he'd be fine.
> 
> In that case, would it be worth re-implementing something like a BIP61
> reject message but with an extension that returns the txids of any
> conflicts?  For example, when Bob connects to a bunch of Bitcoin nodes
> and sends his conflicting transaction, the nodes would reply with
> something like "rejected: code 123: conflicts with txid 0123...cdef".
> Bob could then reply with a a getdata('tx', '0123...cdef') to get the
> pinned transaction, parse out its preimage, and resolve the HTLC.
> 
> This approach isn't perfect (if it even makes sense at all---I could be
> misunderstanding the problem) because one of the problems that caused
> BIP61 to be disabled in Bitcoin Core was its unreliability, but I think
> if Bob had at least one honest peer that had the pinned transaction in
> its mempool and which implemented reject-with-conflicting-txid, Bob
> might be ok.
> 
> -Dave

___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] RBF Pinning with Counterparties and Competing Interest

2020-04-22 Thread Matt Corallo via Lightning-dev



On 4/22/20 12:12 AM, ZmnSCPxj wrote:
> Good morning Matt, and list,
> 
> 
> 
>> RBF Pinning HTLC Transactions (aka "Oh, wait, I can steal funds, how, 
>> now?")
>> =
>>
>> You'll note that in the discussion of RBF pinning we were pretty broad, 
>> and that that discussion seems to in fact cover
>> our HTLC outputs, at least when spent via (3) or (4). It does, and in 
>> fact this is a pretty severe issue in today's
>> lightning protocol [2]. A lightning counterparty (C, who received the 
>> HTLC from B, who received it from A) today could,
>> if B broadcasts the commitment transaction, spend an HTLC using the 
>> preimage with a low-fee, RBF-disabled transaction.
>> After a few blocks, A could claim the HTLC from B via the timeout 
>> mechanism, and then after a few days, C could get the
>> HTLC-claiming transaction mined via some out-of-band agreement with a 
>> small miner. This leaves B short the HTLC value.
> 
> My (cached) understanding is that, since RBF is signalled using `nSequence`, 
> any `OP_CHECKSEQUENCEVERIFY` also automatically imposes the requirement "must 
> be RBF-enabled", including `<0> OP_CHECKSEQUENCEVERIFY`.
> Adding that clause (2 bytes in witness if my math is correct) to the hashlock 
> branch may be sufficient to prevent C from making an RBF-disabled transaction.

Hmm, indeed, though note that (IIRC) you can break this by adding children or 
parents which are *not* RBF-enabled and
then the package may lose the ability to be RBF'd.

> But then you mention out-of-band agreements with miners, which basically 
> means the transaction might not be in the mempool at all, in which case the 
> vulnerability is not really about RBF or relay, but sheer economics.

No. The whole point of this attack is that you keep a transaction in the 
mempool but unconfirmed via RBF pinning, which
prevents an *alternative* transaction from being confirmed. You then have 
plenty of time to go get it confirmed later.

> The payment is A->B->C, and the HTLC A->B must have a larger timeout (L + 1) 
> than the HTLC B->C (L), in abstract non-block units.
> The vulnerability you are describing means that the current time must now be 
> L + 1 or greater ("A could claim the HTLC from B via the timeout mechanism", 
> meaning the A->B HTLC has timed out already).
> 
> If so, then the B->C transaction has already timed out in the past and can be 
> claimed in two ways, either via B timeout branch or C hashlock branch.
> This sets up a game where B and C bid to miners to get their version of 
> reality committed onchain.
> (We can neglect out-of-band agreements here; miners have the incentive to 
> publicly leak such agreements so that other potential bidders can offer even 
> higher fees for their versions of that transaction.)

Right, I think I didn't explain clearly enough. The point is that, here, B 
tries to broadcast the timeout transaction
but cannot because there is an in-mempool conflict.

> Before L+1, C has no incentive to bid, since placing any bid at all will leak 
> the preimage, which B can then turn around and use to spend from A, and A and 
> C cannot steal from B.
> 
> Thus, B should ensure that *before* L+1, the HTLC-Timeout has been committed 
> onchain, which outright prevents this bidding war from even starting.
> 
> The issue then is that B is using a pre-signed HTLC-timeout, which is needed 
> since it is its commitment tx that was broadcast.
> This prevents B from RBF-ing the HTLC-Timeout transaction.
> 
> So what is needed is to allow B to add fees to HTLC-Timeout:
> 
> * We can add an RBF carve-out output to HTLC-Timeout, at the cost of more 
> blockspace.
> * With `SIGHASH_NOINPUT` we can make the C-side signature 
> `SIGHASH_NOINPUT|SIGHASH_SINGLE` and allow B to re-sign the B-side signature 
> for a higher-fee version of HTLC-Timeout (assuming my cached understanding of 
> `SIGHASH_NOINPUT` still holds).

This does not solve the issue because you can add as much fee as you want, as 
long as the transaction is RBF-pinned,
there is not much you can do in an automated fashion.

> With this, B can exponentially increase the fee as L+1 approaches.
> If B can get HTLC-Timeout confirmed before L+1, then C cannot steal the HTLC 
> value at all, since the UTXO it could steal from has already been spent.
> 
> In particular, it does not seem to me that it is necessary to change the 
> hashlock-branch transaction of C at all, since this mechanism is enough to 
> sidestep the issue (as I understand it).
> But it does point to a need to make HTLC-Timeout (and possibly symmetrically, 
> HTLC-Success) also fee-bumpable.
> 
> Note as well that this does not require a mempool: B can run in `blocksonly` 
> mode and as each block comes in from L to L+1, if HTLC-Timeout is not 
> confirmed, feebump HTLC-Timeout.
> In particular, HTLC-Timeout comes into play only if B broadcast its own 
> commitment transaction, and B *should* be aware that it did

Re: [Lightning-dev] [bitcoin-dev] RBF Pinning with Counterparties and Competing Interest

2020-04-22 Thread Matt Corallo via Lightning-dev
A few replies inline.

On 4/22/20 12:13 AM, Olaoluwa Osuntokun wrote:
> Hi Matt,
> 
> 
>> While this is somewhat unintuitive, there are any number of good anti-DoS
>> reasons for this, eg:
> 
> None of these really strikes me as "good" reasons for this limitation, which
> is at the root of this issue, and will also plague any more complex Bitcoin
> contracts which rely on nested trees of transaction to confirm (CTV, Duplex,
> channel factories, etc). Regarding the various (seemingly arbitrary) package
> limits it's likely the case that any issues w.r.t computational complexity
> that may arise when trying to calculate evictions can be ameliorated with
> better choice of internal data structures.
> 
> In the end, the simplest heuristic (accept the higher fee rate package) side
> steps all these issues and is also the most economically rational from a
> miner's perspective. Why would one prefer a higher absolute fee package
> (which could be very large) over another package with a higher total _fee
> rate_?

This seems like a somewhat unnecessary drive-by insult of a project you don't 
contribute to, but feel free to start with
a concrete suggestion here :).

>> You'll note that B would be just fine if they had a way to safely monitor the
>> global mempool, and while this seems like a prudent mitigation for
>> lightning implementations to deploy today, it is itself a quagmire of
>> complexity
> 
> Is it really all that complex? Assuming we're talking about just watching
> for a certain script template (the HTLC script) in the mempool to be able to
> pull a pre-image as soon as possible. Early versions of lnd used the mempool
> for commitment broadcast detection (which turned out to be a bad idea so we
> removed it), but at a glance I don't see why watching the mempool is so
> complex.

Because watching your own mempool is not guaranteed to work, and during upgrade 
cycles that include changes to the
policy rules an attacker could exploit your upgraded/non-upgraded status to 
perform the same attack.

>> Further, this is a really obnoxious assumption to hoist onto lightning
>> nodes - having an active full node with an in-sync mempool is a lot more
>> CPU, bandwidth, and complexity than most lightning users were expecting to
>> face.
> 
> This would only be a requirement for Lightning nodes that seek to be a part
> of the public routing network with a desire to _forward_ HTLCs. This
> doesn't affect laptops or mobile phones which likely mostly have private
> channels and don't participate in HTLC forwarding. I think it's pretty
> reasonable to expect a "proper" routing node on the network to be backed by
> a full-node. The bandwidth concern is valid, but we'd need concrete numbers
> that compare the bandwidth overhead of mempool awareness (assuming the
> latest and greatest mempool syncing) compared with the overhead of the
> channel update gossip and gossip queries overhead which LN nodes face today
> as is to see how much worse off they really would be.

If mempool-watching were practical, maybe, though there are a number of folks 
who are talking about designing
partially-offline local lightning hubs which would be rendered impractical.

> As detailed a bit below, if nodes watch the mempool, then this class of
> attack assuming the anchor output format as described in the open
> lightning-rfc PR is mitigated. At a glance, watching the mempool seems like
> a far less involved process compared to modifying the state machine as its
> defined today. By watching the mempool and implementing the changes in
> #lightning-rfc/688, then this issue can be mitigated _today_. lnd 0.10
> doesn't yet watch the mempool (but does include anchors [1]), but unless I'm
> missing something it should be pretty straightforward to add which more or
> less
> resolves this issue altogether.
> 
>> not fixing this issue seems to render the whole exercise somewhat useless
> 
> Depends on if one considers watching the mempool a fix. But even with that a
> base version of anchors still resolves a number of issues including:
> eliminating the commitment fee guessing game, allowing users to pay less on
> force close, being able to coalesce 2nd level HTLC transactions with the
> same CLTV expiry, and actually being able to reliably enforce multi-hop HTLC
> resolution.
> 
>> Instead of making the HTLC output spending more free-form with
>> SIGHASH_ANYONECAN_PAY|SIGHASH_SINGLE, we clearly need to go the other
>> direction - all HTLC output spends need to be pre-signed.
> 
> I'm not sure this is actually immediately workable (need to think about it
> more). To see why, remember that the commit_sig message includes HTLC
> signatures for the _remote_ party's commitment transaction, so they can
> spend the HTLCs if they broadcast their version of the commitment (force
> close). If we don't somehow also _gain_ signatures (our new HTLC signatures)
> allowing us to spend HTLCs on _their_ version of the commitment, then if
> they broadcas

[Lightning-dev] RBF Pinning with Counterparties and Competing Interest

2020-04-20 Thread Matt Corallo via Lightning-dev
[Hi bitcoin-dev, in lightning-land we recently discovered some quite 
frustrating issues which I thought may merit
broader discussion]

While reviewing the new anchor outputs spec [1] last week, I discovered it 
introduced a rather nasty ability for a user
to use RBF Pinning to steal in-flight HTLCs which are being enforced on-chain. 
Sadly, Antoine pointed out that this is
an issue in today's light as well, though see [2] for qualifications. After 
some back-and-forth with a few other
lightning folks, it seems clear that there is no easy+sane fix (and the 
practicality of exploitation today seems
incredibly low), so soliciting ideas publicly may be the best step forward.

I've included lots of background for those who aren't super comfortable with 
lightning's current design, but if you
already know it well, you can skip at least background 1 & 2.

Background - Lightning's Transactions (you can skip this)
=

As many of you likely know, lightning today does all its update mechanics 
through:
 a) a 2-of-2 multisig output, locking in the channel,
 b) a "commitment transaction", which spends that output: i) back to its 
owners, ii) to "HTLC outputs",
 c) HTLC transactions which spend the relevant commitment transaction HTLC 
outputs.

This somewhat awkward third layer of transactions is required to allow HTLC 
timeouts to be significantly lower than the
time window during which a counterparty may be punished for broadcasting a 
revoked state. That is to say, you want to
"lock-in" the resolution of an HTLC output (ie by providing the hash lock 
preimage on-chain) by a fixed block height
(likely a few hours from the HTLC creation), but the punishment mechanism needs 
to occur based on a sequence height
(possibly a day or more after transaction broadcast).

As Bitcoin has no covenants, this must occur using pre-signed transactions - 
namely "HTLC-Success" and "HTLC-Timeout"
transactions, which finalize the resolution of an HTLC, but have a 
sequence-lock for some time during which the funds
may be taken if they had previously been revoked. To avoid needless delays, if 
the counterparty which did *not*
broadcast the commitment transaction wishes to claim the HTLC value, they may 
do so immediately (as there is no reason
to punish the non-broadcaster for having *not* broadcasted a revoked state). 
Thus, we have four possible HTLC 
resolutions depending on the combination of which side broadcast the commitment 
transaction and which side offered the HTLC (ie who can claim it vs who can 
claim it after time-out):

 1) pre-signed HTLC-Success transaction, providing the preimage in the witness 
and sent to an output which is sequence-
locked for some time to provide the non-broadcasting side the opportunity 
to take the funds,
 2) pre-signed HTLC-Timeout transaction, time-locked to N, providing no 
preimage, but with a similar sequence lock and
output as above,
 3) non-pre-signed HTLC claim, providing the preimage in the witness and 
unencumbered by the broadcaster's signature,
 4) non-pre-signed HTLC timeout, OP_CLTV to N, and similarly unencumbered.
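
In code form, the mapping is roughly (just a restatement of the list above, not 
spec text):

    from enum import Enum

    class HtlcResolution(Enum):
        PRESIGNED_SUCCESS = 1  # broadcaster claims with preimage, output sequence-locked
        PRESIGNED_TIMEOUT = 2  # broadcaster times out at height N, output sequence-locked
        DIRECT_SUCCESS = 3     # non-broadcaster claims with preimage, unencumbered
        DIRECT_TIMEOUT = 4     # non-broadcaster times out (OP_CLTV to N), unencumbered

    def resolution(we_broadcast_commitment, we_have_preimage):
        # Which of the four spend paths applies to us for a given HTLC.
        if we_broadcast_commitment:
            return (HtlcResolution.PRESIGNED_SUCCESS if we_have_preimage
                    else HtlcResolution.PRESIGNED_TIMEOUT)
        return (HtlcResolution.DIRECT_SUCCESS if we_have_preimage
                else HtlcResolution.DIRECT_TIMEOUT)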

Background 2 - RBF Pinning (you can skip this)
==============================================

Bitcoin Core's general policy on RBF transactions is that if a counterparty 
(either to the transaction, eg in lightning,
or not, eg a P2P node which sees the transaction early) can modify a 
transaction, especially if they can add an input or
output, they can prevent it from confirming in a world where there exists a 
mempool (ie in a world where Bitcoin works).
While this is somewhat unintuitive, there are any number of good anti-DoS 
reasons for this, eg:
 * (ok, this is a bad reason, but) a child transaction could be marked 
'non-RBF', which would mean allowing the parent
   to be RBF'd would violate the assumptions those who look at the RBF opt-in 
marking make,
 * a parent may be very large, but low feerate - this requires the RBF attempt 
to "pay for its own relay" and include a
   large absolute fee just to get into the mempool,
 * one of the various package size limits is at its maximum, and depending on 
the structure of the package the
   computational complexity of calculating evictions may be more than we want 
to do for a given transaction.
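
To illustrate the "pay for its own relay" point, the replacement check looks 
roughly like this (my own sketch in the spirit of BIP125 rules 3 and 4, not 
Bitcoin Core's exact logic):

    def replacement_acceptable(new_fee, new_vsize, old_fees_total,
                               incremental_relay_feerate=1):
        # The replacement must pay at least what the transactions it evicts paid...
        if new_fee < old_fees_total:
            return False
        # ...plus enough extra to cover relaying its own bytes, so a huge
        # low-feerate parent forces any replacement to carry a large absolute fee.
        return new_fee - old_fees_total >= new_vsize * incremental_relay_feerate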

Background 3 - "The RBF Carve-Out" (you can skip this)
======================================================

In today's lightning, we have a negotiation of what we expect the future 
feerate to be when one party goes to close the
channel. All the pre-signed transactions above are constructed with this 
fee-rate in mind, and, given they are all
pre-signed, adding additional fee to them is not generally an option. This is 
obviously a very maddening prediction
game, especially when the security consequences for negotiating a value which 
is wrong may allow your counterparty to
broadcast and time out HTLCs which you otherwise have the preimage for. To 
remove this quirk, we came up with an idea a
year or two back now called "anchor outputs" (aka th

Re: [Lightning-dev] Anchor Outputs Spec & Implementation Progress

2020-04-16 Thread Matt Corallo via Lightning-dev
Knee-jerk gut reaction replies inline :)

Matt

On 3/30/20 3:00 PM, Olaoluwa Osuntokun wrote:

-snip-

> In response to the first concern: it is indeed the case that these new
> commitments are more expensive, but they're only _slightly_ so. The new
> default commitment weight is as if there're two HTLCs at all times on the
> commitment transaction. Adding in the extra anchor cost (660 satoshis) is a
> false equivalence as both parties are able to recover these funds if they
> chose. It's also the case that force cases in the ideal case are only due to
> nodes needing to go on-chain to sweep HTLCs, so the extra bytes may be
> dwarfed by several HTLCs, particularly in a post MPP/AMP world. The extra
> cost may seem large (relatively) when looking at a 1 sat/byte commitment
> transaction. However, fees today in the system are on the rise, and if one
> is actually in a situation where they need to resolve HTLCs on chain,
> they'll likely require a fee rate higher than 1 sat/byte to have their
> commitment confirm in a timely manner.

Indeed, a few hundred sats isn't likely worth arguing about :)

> On the topic of UTXO bloat, IMO re-purposing the to_remote output as an
> anchor is arguably _worse_, as only a single party in the channel is able to
> spend that output in order to remove its impact on the UTXO set. On the
> other hand, using two anchors (with their special scripts) allows _anyone_
> to sweep these outputs several blocks after the commitment transaction has
> confirmed. In order to cover the case where the remote party has no balance,
> but a single incoming HTLC, the channel initiator must either create a new
> anchor output for this special case (creating a new type of ad-hoc reserve),
> or always create a to_remote output for the other party (donating the 330
> satoshis).

This seems like a straw-man. Going from 1 anyone-spendable + 1 such UTXO to 2 
anyone-spendable UTXOs + 1 such UTXO does not seem like a
decrease to me. If you really want to nitpick, I have zero issue with having 
the to_remote+anchor revert to an anyone-can-spend form if it
is of dust value. This may be a bit complicated for others, but with our new 
onchain stuff, it would be rather trivial for us to deal with.

> The first option reduces down to having two anchors once again,
> while the second option creates an output which is likely uneconomical to
> sweep in isolation (compared to anchors which can be swept globally in the
> system taking advantage of the input aggregation savings).
> 
> The final factor to consider is if we wish to properly re-introduce a CSV
> delay to the to_remote party in an attempt to remedy some game theoretical
> issues w.r.t forcing one party to close early without a cost to the
> instigator. In the past we made some headway in this direction, but then
> reverted our changes as we discovered some previously unknown gaming 
> vectors even with a symmetrical delay. If we keep two anchors as-is, then we
> leave this thread open to a comprehensive solution, as the dual anchor
> format is fully decoupled from the rest of the commitment.

If we want to deploy an upgrade, the commitment transaction format will change. 
Let's not get excited about reducing the diff in such a 
change for its own sake - the vast majority of the effort is the upgrade, not 
the diff of having an extra anchor.

___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Using libp2p as a communication protocol for Lightning

2020-02-17 Thread Matt Corallo
Because writing connection logic and peer management is really not that 
complicated compared to HTLC state machines and the rest of lightning. For 
crypto, lightning does use the noise framework, though the resulting code is so 
simple (in a good way) that it's super easy to just write it yourself instead of 
fighting with a dependency.

Lastly, for self-respecting cryptocurrency developers, not-carefully-audited 
dependencies are security vulnerabilities that will expose your users’ funds. 
By pulling simple connection logic into a lightning implementation, it’s easier 
to test/fuzz/etc with the rest of a project.

Matt

> On Feb 17, 2020, at 06:12, Alexandr Burdiyan  wrote:
> 
> 
> Hi everyone!
> 
> Since I recently started digging into all-things-peer-to-peer, I found that 
> there’s a lot of fragmentation between many different projects that seemingly 
> have a lot of things in common, like networking, encoding standards, and etc. 
> I suppose there’re lots of historical reasons for that. 
> 
> More concretely for Lightning, I wonder why it couldn’t use some existing 
> open source technologies and standards, like libp2p [1] for communication, or 
> various multiformats [2] standards for addresses, hashes and encodings?
> 
> I do think that building and evolving common toolkits and standards for 
> decentralized system like libp2p, or multiformats, or IPLD [3] could be 
> something very useful for the whole community. Currently, it feels like 
> everyone wants to go so fast, so there’s no time for coordination and 
> consensus to build these kinds of specs. That is understandable. But I wonder 
> if Lightning community ever looked at projects like libp2p and multiformats, 
> or maybe is considering to implement them in lightning. Or maybe there was a 
> decision of not using them for some reason that I might be missing.
> 
> [1]: https://libp2p.io
> [2]: https://multiformats.io
> [3]: https://ipld.io
> 
> Thanks!
> 
> Alexandr Burdiyan
> ___
> Lightning-dev mailing list
> Lightning-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Not revealing the channel capacity during opening of channel in lightning network

2020-01-27 Thread Matt Corallo
Right, but there are approaches that are not as susceptible - an
obvious, albeit somewhat naive, approach would be to define a fixed and
proportional max fee, and pick a random (with some privacy properties eg
biasing towards old or good-reputation nodes, routing across nodes
hosted on different ISPs/Tor/across continents, etc) route that pays no
more than those fees unless no such route is available. You could
imagine hard-coding such fees to "fees that are generally available on
the network as observed in the real world".
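
Concretely, something like the following sketch (the route fields and thresholds 
are made up, just to illustrate the fixed-plus-proportional cap and the random 
pick):

    import random

    def pick_route(candidate_routes, amount_msat,
                   max_base_fee_msat=1000, max_fee_ppm=5000):
        # Cap: a fixed component plus a proportional component of the amount sent.
        fee_cap = max_base_fee_msat + amount_msat * max_fee_ppm // 1_000_000
        affordable = [r for r in candidate_routes if r.total_fee_msat <= fee_cap]
        if affordable:
            # Random choice (optionally weighted by node age/reputation/diversity)
            # rather than always the cheapest, so undercutting fees doesn't buy traffic.
            return random.choice(affordable)
        # Only if nothing fits under the cap do we fall back to the cheapest route.
        return min(candidate_routes, key=lambda r: r.total_fee_msat)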

Matt

On 1/27/20 3:55 PM, ZmnSCPxj wrote:
> Good morning Matt,
> 
> Thread-hijack...
> 
>> route-hijacks (aka someone providing routing for a lower fee
>> and users happily taking it)
> 
> I observe that this is something of a catch-22.
> 
> If users *notice* lower fees and go to lower-fee channels, then 
> route-hijacking is possible and surveillors can pay (via sacrificed fees) for 
> better surveillance.
> If users *ignore* lower fees, then forwarding nodes will jack up their prices 
> to 21 million Bitcoin `fee_base`, because users are going to go through their 
> nodes with equal probability as lower-priced nodes.
> 
> In many ways, traits that make one a good forwarding node (large number of 
> channels, cheap fees, central location, high uptime, low latency...) also 
> makes one a good surveillance node, sigh.
> Fortunately this second-layer Lightning network remains censorship-resistant 
> (censorship leads to loss of profit from fees, same as on the blockchain 
> layer, and censors can be evicted by jacking up your willingness to pay fees 
> (including onchain fees to move your channels away from the censoring node 
> and towards the node you want to pay to, which again is an eviction of the 
> censor), just as effectively as on the blockchain layer).
> 
> Regards,
> ZmnSCPxj
> 
>> in the routing DB today. The issue of
>> anti-DoS is somewhat different - we do have reasonable protection from
>> someone OOM'ing every node on the network by opening a billion channels.
>> Anti-DoS could reasonably be accomplished with simple equivalent proofs,
>> though of course they would be somewhat more expensive to create/validate.
>>
>> Matt
>>
>> On 1/27/20 3:19 PM, ZmnSCPxj wrote:
>>
>>> Good morning Subhra,
>>>
>>>> So introducing proof of knowledge of fund locked instead of revealing the 
>>>> amount of fund locked by counterparties will introduce added complexity 
>>>> while routing but how effective is this going to be against handling 
>>>> attacks like hijacking of routes and channel exhaustion ?
>>>
>>> The added complexity is spam-prevention, as mentioned, and not routing in 
>>> particular.
>>> Pathfinding algorithms can just use the lower limit of the rangeproof to 
>>> filter out channels too small to pass a particular payment through, 
>>> C-Lightning (and probably other implementations) already does this, using 
>>> the known channel capacity as the limit (knowledge of the exact channel 
>>> capacity is a rangeproof whose lower limit equals the upper limit, yes?).
>>> Now, since the proofs involved are likely to be larger than just a simple 
>>> 64-bit integer that indicates the location of the funding transaction on 
>>> the blockchain (24-bit blockheight, 24-bit transaction index within block, 
>>> 16-bit output index), the spam-prevention might end up requiring more data 
>>> than the spam it stops, so ---
>>> (Though if Matt has some ideas here I would be greatly interested --- we do 
>>> have to change the encodings of short-channel-ids at some point, if only to 
>>> support channel factories)
>>> Regards,
>>> ZmnSCPxj
>>>
>>>> On Mon, Jan 27, 2020, 20:12 Matt Corallo lf-li...@mattcorallo.com wrote:
>>>>
>>>>> Note that there's no real reason lightning nodes have to have
>>>>> confidence in that - if a node routes your payment to the next hop, how
>>>>> they do it doesn't really matter. Allowing things like non-lightning
>>>>> "channels" (eg just a contractual agreement to settle up later between
>>>>> two mutually-trusting parties) would actually be quite compelling.
>>>>> The reason lightning nodes today require proof-of-funds-locked is
>>>>> largely for DoS resistance, effectively rate-limiting flooding the
>>>>> global routing table with garbage, but such rate-limiting could be
>>>>> accomplished (albeit with a ton more complexity) via other means.
>>>>> Matt

Re: [Lightning-dev] Not revealing the channel capacity during opening of channel in lightning network

2020-01-27 Thread Matt Corallo
Note the distinction - there is almost nothing done today to prevent
spam and route-hijacks (aka someone providing routing for a lower fee
and users happily taking it) in the routing DB today. The issue of
anti-DoS is somewhat different - we do have reasonable protection from
someone OOM'ing every node on the network by opening a billion channels.
Anti-DoS could reasonably be accomplished with simple equivalent proofs,
though of course they would be somewhat more expensive to create/validate.
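
(For reference, today's proof is just the compact short_channel_id pointer 
ZmnSCPxj describes below, which packs roughly like this - my own illustration:)

    def pack_short_channel_id(block_height, tx_index, output_index):
        # 3 bytes block height | 3 bytes transaction index | 2 bytes output index
        assert block_height < (1 << 24) and tx_index < (1 << 24) and output_index < (1 << 16)
        return (block_height << 40) | (tx_index << 16) | output_index

    def unpack_short_channel_id(scid):
        return scid >> 40, (scid >> 16) & 0xFFFFFF, scid & 0xFFFF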

Matt

On 1/27/20 3:19 PM, ZmnSCPxj wrote:
> Good morning Subhra,
> 
>> So introducing proof of knowledge of fund locked instead of revealing the 
>> amount of fund locked by counterparties will introduce added complexity 
>> while routing but how effective is this going to be against handling attacks 
>> like hijacking of routes and channel exhaustion ?
> 
> The added complexity is spam-prevention, as mentioned, and not routing in 
> particular.
> Pathfinding algorithms can just use the lower limit of the rangeproof to 
> filter out channels too small to pass a particular payment through, 
> C-Lightning (and probably other implementations) already does this, using the 
> known channel capacity as the limit (knowledge of the exact channel capacity 
> is a rangeproof whose lower limit equals the upper limit, yes?).
> 
> Now, since the proofs involved are likely to be larger than just a simple 
> 64-bit integer that indicates the location of the funding transaction on the 
> blockchain (24-bit blockheight, 24-bit transaction index within block, 16-bit 
> output index), the spam-prevention might end up requiring *more* data than 
> the spam it stops, so ---
> (Though if Matt has some ideas here I would be greatly interested --- we do 
> have to change the encodings of short-channel-ids at some point, if only to 
> support channel factories)
> 
> Regards,
> ZmnSCPxj
> 
>>
>> On Mon, Jan 27, 2020, 20:12 Matt Corallo  wrote:
>>
>>> Note that there's no real reason lightning nodes *have* to have
>>> confidence in that - if a node routes your payment to the next hop, how
>>> they do it doesn't really matter. Allowing things like non-lightning
>>> "channels" (eg just a contractual agreement to settle up later between
>>> two mutually-trusting parties) would actually be quite compelling.
>>>
>>> The reason lightning nodes *today* require proof-of-funds-locked is
>>> largely for DoS resistance, effectively rate-limiting flooding the
>>> global routing table with garbage, but such rate-limiting could be
>>> accomplished (albeit with a ton more complexity) via other means.
>>>
>>> Matt
>>>
>>> On 1/27/20 7:50 AM, Ugam Kamat wrote:
>>>> Hey Subhra – In order to have faith that the channel announced by the
>>>> nodes is actually locked on the Bitcoin mainchain we need to have the
>>>> outpoint (`txid` and `vout`) of the funding transaction. If we do not
>>>> verify that the funding transaction has been confirmed, nodes can cheat
>>>> us that a particular transaction is confirmed when it is not the case.
>>>> As a result we require that nodes announce this information along with
>>>> the public keys and the signatures of the public keys that was used to
>>>> lock the funding transaction.
>>>>
>>>>  
>>>>
>>>> This information is broadcasted in the `channel_announcement` message in
>>>> the `short_channel_id` field which includes the block number,
>>>> transaction number and vout. Since Bitcoin does not allow confidential
>>>> transactions, we can query the blockchain and find out the channel
>>>> capacity even when the amounts are never explicitly mentioned.
>>>>
>>>>  
>>>>
>>>>  
>>>>
>>>> Ugam
>>>>
>>>>  
>>>>
>>>> *From:* Lightning-dev 
>>>> *On Behalf Of *Subhra Mazumdar
>>>> *Sent:* Monday, January 27, 2020 12:45 PM
>>>> *To:* lightning-dev@lists.linuxfoundation.org
>>>> *Subject:* [Lightning-dev] Not revealing the channel capacity during
>>>> opening of channel in lightning network
>>>>
>>>>  
>>>>
>>>> Dear All,
>>>>
>>>>  What can be the potential problem if a channel is opened
>>>> whereby the channel capacity is not revealed publicly but just a range
>>>> proof of the attribute (capacity >0 and capacity < value) is provided ?
>>>> Will it pose a problem during routing of transaction ? What are the pros
>>>> and cons ?
>>>>
>>>> I think that revealing channel capacity make the channels susceptible to
>>>> channel exhaustion attack or a particular node might be targeted for
>>>> node isolation attack ?
>>>>
>>>>
>>>> --
>>>>
>>>> Yours sincerely,
>>>> Subhra Mazumdar.
>>>>
>>>>
>>>> ___
>>>> Lightning-dev mailing list
>>>> Lightning-dev@lists.linuxfoundation.org
>>>> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
>>>>
> 
> 
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Not revealing the channel capacity during opening of channel in lightning network

2020-01-27 Thread Matt Corallo
Why require a locked funding output? Just require proof-of-UTXO - it's only for
anti-DoS; again, there is no reason to require a standard lightning
channel on-chain for this.

In general proving 2-of-2 multisig UTXO ownership doesn't do much to
prevent route hijacking to begin with, so it shouldn't be much different.

Matt

On 1/27/20 3:04 PM, Subhra Mazumdar wrote:
> So introducing proof of knowledge of fund locked instead of revealing
> the amount of fund locked by counterparties will introduce added
> complexity while routing but how effective is this going to be against
> handling attacks like hijacking of routes and channel exhaustion ?
> 
> On Mon, Jan 27, 2020, 20:12 Matt Corallo  <mailto:lf-li...@mattcorallo.com>> wrote:
> 
> Note that there's no real reason lightning nodes *have* to have
> confidence in that - if a node routes your payment to the next hop, how
> they do it doesn't really matter. Allowing things like non-lightning
> "channels" (eg just a contractual agreement to settle up later between
> two mutually-trusting parties) would actually be quite compelling.
> 
> The reason lightning nodes *today* require proof-of-funds-locked is
> largely for DoS resistance, effectively rate-limiting flooding the
> global routing table with garbage, but such rate-limiting could be
> accomplished (albeit with a ton more complexity) via other means.
> 
> Matt
> 
> On 1/27/20 7:50 AM, Ugam Kamat wrote:
> > Hey Subhra – In order to have faith that the channel announced by the
> > nodes is actually locked on the Bitcoin mainchain we need to have the
> > outpoint (`txid` and `vout`) of the funding transaction. If we do not
> > verify that the funding transaction has been confirmed, nodes can
> cheat
> > us that a particular transaction is confirmed when it is not the case.
> > As a result we require that nodes announce this information along with
> > the public keys and the signatures of the public keys that was used to
> > lock the funding transaction.
> >
> >  
> >
> > This information is broadcasted in the `channel_announcement`
> message in
> > the `short_channel_id` field which includes the block number,
> > transaction number and vout. Since Bitcoin does not allow confidential
> > transactions, we can query the blockchain and find out the channel
> > capacity even when the amounts are never explicitly mentioned.
> >
> >  
> >
> >  
> >
> > Ugam
> >
> >  
> >
> > *From:* Lightning-dev
>  <mailto:lightning-dev-boun...@lists.linuxfoundation.org>>
> > *On Behalf Of *Subhra Mazumdar
> > *Sent:* Monday, January 27, 2020 12:45 PM
> > *To:* lightning-dev@lists.linuxfoundation.org
> <mailto:lightning-dev@lists.linuxfoundation.org>
> > *Subject:* [Lightning-dev] Not revealing the channel capacity during
> > opening of channel in lightning network
> >
> >  
> >
> > Dear All,
> >
> >  What can be the potential problem if a channel is opened
> > whereby the channel capacity is not revealed publicly but just a range
> > proof of the attribute (capacity >0 and capacity < value) is
> provided ?
> > Will it pose a problem during routing of transaction ? What are
> the pros
> > and cons ?
> >
> > I think that revealing channel capacity make the channels
> susceptible to
> > channel exhaustion attack or a particular node might be targeted for
> > node isolation attack ?
> >
> >
> > --
> >
> > Yours sincerely,
> > Subhra Mazumdar.
> >
> >
> > ___
> > Lightning-dev mailing list
> > Lightning-dev@lists.linuxfoundation.org
> <mailto:Lightning-dev@lists.linuxfoundation.org>
> > https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
> >
> 
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Not revealing the channel capacity during opening of channel in lightning network

2020-01-27 Thread Matt Corallo
Note that there's no real reason lightning nodes *have* to have
confidence in that - if a node routes your payment to the next hop, how
they do it doesn't really matter. Allowing things like non-lightning
"channels" (eg just a contractual agreement to settle up later between
two mutually-trusting parties) would actually be quite compelling.

The reason lightning nodes *today* require proof-of-funds-locked is
largely for DoS resistance, effectively rate-limiting flooding the
global routing table with garbage, but such rate-limiting could be
accomplished (albeit with a ton more complexity) via other means.

Matt

On 1/27/20 7:50 AM, Ugam Kamat wrote:
> Hey Subhra – In order to have faith that the channel announced by the
> nodes is actually locked on the Bitcoin mainchain we need to have the
> outpoint (`txid` and `vout`) of the funding transaction. If we do not
> verify that the funding transaction has been confirmed, nodes can cheat
> us that a particular transaction is confirmed when it is not the case.
> As a result we require that nodes announce this information along with
> the public keys and the signatures of the public keys that was used to
> lock the funding transaction.
> 
>  
> 
> This information is broadcasted in the `channel_announcement` message in
> the `short_channel_id` field which includes the block number,
> transaction number and vout. Since Bitcoin does not allow confidential
> transactions, we can query the blockchain and find out the channel
> capacity even when the amounts are never explicitly mentioned.
> 
>  
> 
>  
> 
> Ugam
> 
>  
> 
> *From:* Lightning-dev 
> *On Behalf Of *Subhra Mazumdar
> *Sent:* Monday, January 27, 2020 12:45 PM
> *To:* lightning-dev@lists.linuxfoundation.org
> *Subject:* [Lightning-dev] Not revealing the channel capacity during
> opening of channel in lightning network
> 
>  
> 
> Dear All,
> 
>  What can be the potential problem if a channel is opened
> whereby the channel capacity is not revealed publicly but just a range
> proof of the attribute (capacity >0 and capacity < value) is provided ?
> Will it pose a problem during routing of transaction ? What are the pros
> and cons ?
> 
> I think that revealing channel capacity make the channels susceptible to
> channel exhaustion attack or a particular node might be targeted for
> node isolation attack ?
> 
> 
> -- 
> 
> Yours sincerely,
> Subhra Mazumdar.
> 
> 
> ___
> Lightning-dev mailing list
> Lightning-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
> 
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Data Lightning Atomic Swap (DLAS-down, DLAS-up)

2020-01-20 Thread Matt Corallo
Indeed. To go further, anything where the file can be proven correct under a 
zkSNARK (ie you can write a relatively simple program to do so) does not bear 
such an issue.

> On Jan 21, 2020, at 02:38, Andrés G. Aragoneses  wrote:
> 
> 
> Hey ZmnSCPxj,
> 
>> On Tue, 21 Jan 2020 at 08:47, ZmnSCPxj via Lightning-dev 
>>  wrote:
>> Good morning Subhra,
>> 
>> Refer to this protocol instead of DLAS: 
>> https://lists.linuxfoundation.org/pipermail/lightning-dev/2019-June/002035.html
>> In this protocol, an *encrypted* form of the *entire file* is sent.
>> Consequently, a *single* payment is made, where the payment preimage is the 
>> decryption key.
>> Knowing an additional zk proof is necessary to show that the file is indeed 
>> encrypted using the decryption key that is the preimage of the given hash 
>> (the linked thread has details I believe).
>> 
>> Relevantly, there is no need to consider blocks of a file when using the 
>> linked protocol instead of DLAS.
>> Of course, a Zk-proof of some property of the entire file, that can be 
>> understood by an end-user, may not be possible.
>> Likely, you might want to prove of a video file that a thumbnail of the 
>> video file is extracted from a frame of the video, and show that thumbnail 
>> to the end-user.
>> Looking at the *rest* of the frames of the video (after you have paid for 
>> its decryption) may very reveal them to be frames of a video of Rick Astley 
>> singing "Never Gonna Give You Up".
> 
> I wanted to ask a simple question with regards to the NeverGonnaGiveYouUp 
> problem.
> Let's say you use this technology for a specific use case subset of what has 
> been proposed: the payer wants to exchange bitcoin (via LN) in exchange for 
> some data. The data, in this case, was known by the payer at some point in 
> the past, so the payer encrypted it with their own key and gave it to 
> someone (the eventual payee) for backup purposes; after that, the payer keeps 
> a hash of the data and deletes the data from their end. At some point in the 
> future, when the payer wants to retrieve the data, the payee can only supply 
> bytes whose hash matches the hash that the payer kept, therefore the 
> NeverGonnaGiveYouUp problem can't happen here, am I right?
> 
> Thanks
> 
>  
>> !
>> 
>> Regards,
>> ZmnSCPxj
>> 
>> > So is it sufficient to give a zk proof of the entire file and not of the 
>> > individual blocks which are transferred at each iteration? Also does it 
>> > make sense that you make partial payment per block instead of waiting for 
>> > the total file to arrive. It might be the case that the zk proof of the 
>> > total file is correct but then sender might cheat while sending individual 
>> > block.
>> >
>> > On Tue, Jan 21, 2020, 00:26 Matt Corallo  wrote:
>> >
>> > > Don’t and data in lighting payments unless you have to. It’s super DoS-y 
>> > > and rude to your peers. If you’re just transferring a file, you can use 
>> > > ZKCP to send an encrypted copy of the file with the encryption key being 
>> > > the payment_preimage, making the whole thing one big atomic action.
>> > >
>> > > > On Jan 20, 2020, at 13:33, Subhra Mazumdar 
>> > > >  wrote:
>> > >
>> > > > 
>> > > > Sounds good. But how do I provide a correctness for the entire asset 
>> > > > to be transferred when I am already partitioning into several units 
>> > > > (say chunks of file ? ) So as an when the block of file is received 
>> > > > then we have to give a ZK proof "block x is part of File F". Is it how 
>> > > > this should work ?
>> > > >
>> > > > On Mon, Jan 20, 2020 at 11:59 PM Matt Corallo 
>> > > >  wrote:
>> > > >
>> > > > > Zk proofs are incredibly fast these days for small-ish programs. 
>> > > > > They’re much too slow for a consensus system where every party needs 
>> > > > > to download and validate them, but for relatively simple programs a 
>> > > > > two-party system using them is very doable.
>> > > > >
>> > > > > > On Jan 20, 2020, at 13:23, Subhra Mazumdar 
>> > > > > >  wrote:
>> > > > >
>> > > > > > 
>> > > > > > But isn't it that the use of ZK proof will render the system slow 
>> > > > > > and hence defy the very purpose of lightning network 

Re: [Lightning-dev] Data Lightning Atomic Swap (DLAS-down, DLAS-up)

2020-01-20 Thread Matt Corallo
Don’t send data in lightning payments unless you have to. It’s super DoS-y and 
rude to your peers. If you’re just transferring a file, you can use ZKCP to 
send an encrypted copy of the file with the encryption key being the 
payment_preimage, making the whole thing one big atomic action.
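
Roughly, the shape of such a swap looks like the following toy sketch 
(illustrative only - a real ZKCP additionally needs the zero-knowledge proof that 
the ciphertext decrypts, under the preimage, to the promised file):

    import hashlib, os

    def xor_stream(key, data):
        # Toy stream cipher: keystream block i is SHA256(key || i). Illustrative only.
        out = bytearray()
        for i in range(0, len(data), 32):
            block = hashlib.sha256(key + i.to_bytes(8, "big")).digest()
            chunk = data[i:i + 32]
            out.extend(b ^ block[j] for j, b in enumerate(chunk))
        return bytes(out)

    def seller_prepare(file_bytes):
        key = os.urandom(32)                         # the eventual payment_preimage
        ciphertext = xor_stream(key, file_bytes)     # sent to the buyer up front
        payment_hash = hashlib.sha256(key).digest()  # buyer pays an HTLC locked to this
        return ciphertext, payment_hash, key

    def buyer_finish(ciphertext, revealed_preimage, payment_hash):
        # Completing the payment revealed the preimage, which is the decryption key.
        assert hashlib.sha256(revealed_preimage).digest() == payment_hash
        return xor_stream(revealed_preimage, ciphertext)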

> On Jan 20, 2020, at 13:33, Subhra Mazumdar  
> wrote:
> 
> 
> Sounds good. But how do I provide a correctness for the entire asset to be 
> transferred when I am already partitioning into several units (say chunks of 
> file ? ) So as an when the block of file is received then we have to give a 
> ZK proof "block x is part of File F". Is it how this should work ?
> 
>> On Mon, Jan 20, 2020 at 11:59 PM Matt Corallo  
>> wrote:
>> Zk proofs are incredibly fast these days for small-ish programs. They’re 
>> much too slow for a consensus system where every party needs to download and 
>> validate them, but for relatively simple programs a two-party system using 
>> them is very doable.
>> 
>>>> On Jan 20, 2020, at 13:23, Subhra Mazumdar  
>>>> wrote:
>>>> 
>>> 
>>> But isn't it that the use of ZK proof will render the system slow and hence 
>>> defy the very purpose of lightning network which intends to make things 
>>> scalable as well as faster transaction ?
>>> 
>>>> On Mon, Jan 20, 2020 at 11:48 PM Matt Corallo  
>>>> wrote:
>>>> That paper discusses it, but I don't think there was ever a paper proper
>>>> on ZKCP. There are various discussions of it, though, if you google.
>>>> Sadly this is common in this space - lots of great ideas where no one
>>>> ever bothered to write academic-style papers about them (hence why
>>>> academic papers around Bitcoin tend to miss nearly all relevant context,
>>>> sadly).
>>>> 
>>>> Matt
>>>> 
>>>> On 1/20/20 6:10 PM, Subhra Mazumdar wrote:
>>>> > Are you referring to the paper Zero knowledge contingent payment
>>>> > revisited ? I will look into the construction. Thanks for the
>>>> > information! :)
>>>> > 
>>>> > On Mon, Jan 20, 2020, 23:31 Matt Corallo >>> > <mailto:lf-li...@mattcorallo.com>> wrote:
>>>> > 
>>>> > On 11/9/19 4:31 AM, Takaya Imai wrote:
>>>> > > [What I do not describe]
>>>> > > * A way to detect that data is correct or not, namely zero 
>>>> > knowledge
>>>> > > proof process.
>>>> > 
>>>> > Have you come across Zero Knowledge Contingent Payments? Originally 
>>>> > it
>>>> > was designed for on-chain applications but it slots neatly into
>>>> > lightning as it only requires a method to lock funds to a hash 
>>>> > preimage.
>>>> > 
>>>> > Matt
>>>> > ___
>>>> > Lightning-dev mailing list
>>>> > Lightning-dev@lists.linuxfoundation.org
>>>> > <mailto:Lightning-dev@lists.linuxfoundation.org>
>>>> > https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
>>>> > 
>>> 
>>> 
>>> -- 
>>> Yours sincerely,
>>> Subhra Mazumdar.
>>> 
> 
> 
> -- 
> Yours sincerely,
> Subhra Mazumdar.
> 
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Data Lightning Atomic Swap (DLAS-down, DLAS-up)

2020-01-20 Thread Matt Corallo
Zk proofs are incredibly fast these days for small-ish programs. They’re much 
too slow for a consensus system where every party needs to download and 
validate them, but for relatively simple programs a two-party system using them 
is very doable.

> On Jan 20, 2020, at 13:23, Subhra Mazumdar  
> wrote:
> 
> 
> But isn't it that the use of ZK proof will render the system slow and hence 
> defy the very purpose of lightning network which intends to make things 
> scalable as well as faster transaction ?
> 
>> On Mon, Jan 20, 2020 at 11:48 PM Matt Corallo  
>> wrote:
>> That paper discusses it, but I don't think there was ever a paper proper
>> on ZKCP. There are various discussions of it, though, if you google.
>> Sadly this is common in this space - lots of great ideas where no one
>> ever bothered to write academic-style papers about them (hence why
>> academic papers around Bitcoin tend to miss nearly all relevant context,
>> sadly).
>> 
>> Matt
>> 
>> On 1/20/20 6:10 PM, Subhra Mazumdar wrote:
>> > Are you referring to the paper Zero knowledge contingent payment
>> > revisited ? I will look into the construction. Thanks for the
>> > information! :)
>> > 
>> > On Mon, Jan 20, 2020, 23:31 Matt Corallo > > <mailto:lf-li...@mattcorallo.com>> wrote:
>> > 
>> > On 11/9/19 4:31 AM, Takaya Imai wrote:
>> > > [What I do not describe]
>> > > * A way to detect that data is correct or not, namely zero knowledge
>> > > proof process.
>> > 
>> > Have you come across Zero Knowledge Contingent Payments? Originally it
>> > was designed for on-chain applications but it slots neatly into
>> > lightning as it only requires a method to lock funds to a hash 
>> > preimage.
>> > 
>> > Matt
>> > ___
>> > Lightning-dev mailing list
>> > Lightning-dev@lists.linuxfoundation.org
>> > <mailto:Lightning-dev@lists.linuxfoundation.org>
>> > https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
>> > 
> 
> 
> -- 
> Yours sincerely,
> Subhra Mazumdar.
> 
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Data Lightning Atomic Swap (DLAS-down, DLAS-up)

2020-01-20 Thread Matt Corallo
That paper discusses it, but I don't think there was ever a paper proper
on ZKCP. There are various discussions of it, though, if you google.
Sadly this is common in this space - lots of great ideas where no one
ever bothered to write academic-style papers about them (hence why
academic papers around Bitcoin tend to miss nearly all relevant context,
sadly).

Matt

On 1/20/20 6:10 PM, Subhra Mazumdar wrote:
> Are you referring to the paper Zero knowledge contingent payment
> revisited ? I will look into the construction. Thanks for the
> information! :)
> 
> On Mon, Jan 20, 2020, 23:31 Matt Corallo  <mailto:lf-li...@mattcorallo.com>> wrote:
> 
> On 11/9/19 4:31 AM, Takaya Imai wrote:
> > [What I do not describe]
> > * A way to detect that data is correct or not, namely zero knowledge
> > proof process.
> 
> Have you come across Zero Knowledge Contingent Payments? Originally it
> was designed for on-chain applications but it slots neatly into
> lightning as it only requires a method to lock funds to a hash preimage.
> 
> Matt
> ___
> Lightning-dev mailing list
> Lightning-dev@lists.linuxfoundation.org
> <mailto:Lightning-dev@lists.linuxfoundation.org>
> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
> 
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Data Lightning Atomic Swap (DLAS-down, DLAS-up)

2020-01-20 Thread Matt Corallo
On 11/9/19 4:31 AM, Takaya Imai wrote:
> [What I do not describe]
> * A way to detect that data is correct or not, namely zero knowledge
> proof process.

Have you come across Zero Knowledge Contingent Payments? Originally it
was designed for on-chain applications but it slots neatly into
lightning as it only requires a method to lock funds to a hash preimage.

Matt
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Time-Dilation Attacks on Offchain Protocols

2019-12-16 Thread Matt Corallo
Right, I kinda agree here in the sense that there are things that very
significantly help mitigate the issue, but a) I'm not aware of any
clients implementing them (and the equivalent mitigations in Bitcoin Core
are targeted at a different class of issue, and are not sufficient
here), and b) it's somewhat unclear what the "emergency action" would be.
Even if you implement detection, figuring out how to do a fallback is
nontrivial, especially if you are concerned with user privacy.
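
For what it's worth, the detection half is easy enough to sketch (illustrative 
only, thresholds picked arbitrarily); it's the reaction that's hard:

    import time

    def tip_looks_stale(best_header_ntime, currently_syncing, max_lag_secs=6 * 60 * 60):
        # If our best header's timestamp is hours behind wall-clock time and we are
        # *not* actively catching up (IBD / block download), something is wrong.
        return (time.time() - best_header_ntime > max_lag_secs) and not currently_syncing

    # What to actually *do* on True - add peers over other transports, stop accepting
    # new HTLCs, alert the operator - is the nontrivial fallback discussed above.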

Matt

On 12/16/19 9:10 AM, David A. Harding wrote:
> On Mon, Dec 16, 2019 at 02:17:31AM -0500, Antoine Riard wrote:
>> If well executed, attacks described stay stealth until it's too late
>> to react.  
> 
> I suspect the attacks you described are pretty easy to detect (assuming
> block relay is significantly delayed) by simply comparing the time of
> the latest block header to real time.  If the difference is too large,
> then an emergency action should be taken.[1]
> 
> You mention IBD as confounding this strategy, but I don't think that's
> the case.  Compare the normal case to the pathological case:
> 
> - Normal: when Alice is requesting blocks from honest nodes because
>   she's far behind, those nodes are telling her their current block
>   height and are promptly serving any blocks she requests.
> 
> - Pathological: when Alice is requesting blocks from a eclipse attacker,
>   those dishonest nodes are telling her she's already at the chain tip
>   even though the latest block they serve her has a header timestamp
>   that's hours or days old.  (Alternatively, they're reporting the
>   latest block height but refusing to serve her blocks/headers past a
>   certain point in the past.)
> 
> It seems pretty easy to me to detect the difference between the normal
> case (Alice's chaintip is old but she's still successfully downloading
> blocks) and the pathological case (Alice's chaintip is old and she's
> unable to obtain more recent blocks).
> 
> A possibly optimal attack strategy would be to combine
> commitment/penalty transaction censorship with plausible block delays.
> To a point, transaction censorship just looks a failure to pay a
> sufficient feerate---so a node will probably fee bump a
> commitment/penalty transaction a few times before it starts to panic.
> Also to a point, a delay of up to several hours[2] just looks like
> regular stochastic block production.  By using both deceits in the same
> attack to the maximum possible degree without triggering an alarm, an
> attacker can maximize their chance of stealing funds.
> 
> -Dave
> 
> [1] There is an interesting case where a large miner or cartel of miners
> could deliberately trigger a false positive of block delay
> protection by manipulating Median Time Past (MTP) to allow them to
> set their header nTime fields to values from hours or days ago.  I
> don't believe the fix for the time warp proposed in the consensus
> cleanup soft fork fixes this, since it only directly affects the
> first block in a new difficulty period (preventing difficulty gaming
> but not header time gaming).  This problem is partly mitigated by
> miners keeping MTP far in the past being unable to claim fees from
> recent time locked transactions (see BIP113), though I'm not sure
> how many transactions are actually using nLockTime-as-a-time (LN and
> anti-fee sniping use it as a height).
> 
> [2] If we suddenly lose half of Bitcoin's hashrate so that the average
> time between blocks is 20 minutes, there's a once-in-a-century
> chance of a block taking more than 310 minutes to produce:
> 
> 1 / (exp(-310/20) * 144 * 365)
> 
> 
> ___
> Lightning-dev mailing list
> Lightning-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
> 
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Lightning in a Taproot future

2019-12-15 Thread Matt Corallo
Given the timeline for soft forks to activate on Bitcoin, I don't know
why we'd be super conservative about using new features of the Bitcoin
consensus rules. I think obviously we'd want to rush as fast as we can
into adding real cross-hop privacy to lightning payments, given both the
number of awkward edge cases it introduces into the protocol, and the
massive privacy leak.

Changing the underlying commitment transaction structure *could* come
later, as it is, indeed, lower priority (and then the rule would be
"taproot outputs are always lightning" for some time anyway), but, if
you're changing things kinda why not.

Of course once you get privacy across routing hops, the importance of
privacy-preserving routing algorithm work and fixing some other privacy
holes is probably more important, so I could be dissuaded.

Matt

On 12/15/19 3:43 PM, ZmnSCPxj via Lightning-dev wrote:
> Good morning list,
> 
> I would like to initiate some discussion on how Lightning could be updated to 
> take advantage of the upcoming taproot update to the base layer.
> 
> For now, I am assuming the continued use of the existing Poon-Dryja update 
> mechanism.
> Decker-Russell-Osuntokun requires `SIGHASH_NOINPUT`/`SIGHASH_ANYPREVOUT`, and 
> its details seem less settled for now than taproot details.
> 
> * [ ] We could update the current Poon-Dryja mechanism to use Schnorr 
> signatures.
> * [ ] Or we could just keep the current Poon-Dryja mechanism with SegWit v0 
> only, and only update to Schnorr / SegWit v1 when we can implement 
> Decker-Russell-Osuntokun.
>   * This brings up the question as to whether we will allow Poon-Dryja to 
> host pointlocked-timelocked contracts (i.e. the Scriptless Script replacement 
> of HTLCs that uses payment points+scalars).
> * [ ] We could constrain Poon-Dryja channels to only HTLCs.
>   * This might be simpler for implementations: implementations could have 
> a completely new module that implements Decker-Russell-Osuntokun with HTLCs 
> and PTLCs, and not touch existing modules for Poon-Dryja with HTLCs only.
>   * We could "retire" Poon-Dryja constructions at some point and only add 
> new features to Decker-Russell-Osuntokun channels.
> * [ ] We could allow hosting PTLCs as well on Poon-Dryja channels, as 
> nothing in the base layer prevents a transaction from providing both SegWit 
> v0 and SegWit v1 outputs anyway.
> 
> Poon-Dryja with Schnorr
> ---
> 
> If we decide to update the current Poon-Dryja mechanism to use Schnorr, there 
> are some wrinkles:
> 
> * [ ] We could just drop-in replace the signing algorithm with Schnorr.
>   * We would define a NUMS point to be used as Taproot internal key, and use 
> a single tapscript that is simply the current script template.
>   * This is arguably not really taking advantage of the Schnorr and Taproot 
> features, however.
> * [ ] Or we could use some sort of MuSig between the two channel participants.
> 
> The latter option is probably what we want to use, as it potentially allows a 
> channel close to be indistinguishable from an ordinary SegWit v1 spend of a 
> UTXO.
> Even for channels that have been published via gossip, it moves the onus of 
> storing historical data about historically-published channels from base layer 
> fullnodes to nodes that want to surveill the network.
> 
> ### Digression: 2-of-2 is Obviously Lightning
> 
> Existing 2-of-2 outputs have a very high probability of being Lightning 
> Network channels.
> Thus, someone who wishes to surveill the Lightning Network can simply query 
> any archive fullnode for historical 2-of-2 outputs and have a good chance 
> that those are Lightning Network channels.
> 
> Consider the adage: Never go to sea with two chronometers; take one or three.
> https://en.wikipedia.org/wiki/Dual_modular_redundancy
> This implies that ordinary HODL usage of transaction outputs will either use 
> 1-of-1, or 2-of-3.
> 
> Offchain techniques, on the other hand, are only safe (trustless) if they are 
> n-of-n, and are only usefully multi-participant if n > 1.
> https://zmnscpxj.github.io/offchain/safety.html
> Thus any n-of-n is likely to be an offchain protocol, with the most likely 
> being the most widespread offchain protocol, Lightning Network.
> 
> Thus, the hyperbole "2-of-2 is Obviously Lightning".
> 
> However, we can "hide" 2-of-2 in a 2-of-3, which can be done by generating a 
> third point from a NUMS point plus a random tweak generated by both 
> participants.
> Better yet, MuSig allows hiding any n-of-n among 1-of-1; we expect 1-of-1 use 
> to dominate in the foreseeable future, thus MuSig usage maximizes our 
> anonymity set.
> 
> ### End Digression
> 
> A potential issue with MuSig is the increased number of communication rounds 
> needed to generate signatures.
> 
> In the current Poon-Dryja setup we have, in order to completely sign the 
> commitment transaction held by one participant, we only require sending a 
> completed signature from the

Re: [Lightning-dev] Time-Dilation Attacks on Offchain Protocols

2019-12-09 Thread Matt Corallo
Nice writeup!

In further in-person discussions today, it was noted that the key
last-resort fallback Bitcoin Core has to weed out peers in case of an
Eclipse Attack does not protect from this style of attack. While it is
only of limited concern for most Bitcoin Core users, it very much may
expose lightning nodes to theft.

More importantly, while Bitcoin Core has some protections against
generic sybil-based Eclipse attacks, they are far from sufficient in a
model like Lightning where much more is at stake from simple no-hashrate
Eclipse attacks.

The proposed countermeasure here of "raising alarms" in case the
best-block nTime field is too far behind is compelling in a
SPV-assumption world, though it is far from sufficient if the time delay
is small (eg for lightning HTLC delays or if the to_self_delay is
gratuitously insecure, eg under 144).

Sadly, however, "raising alarms" is not sufficient to protect users, as
countermeasures likely need to be taken automatically for any kind of
compelling security.

I'd encourage Lightning node authors and operators to ensure they have
multiple, redundant, trusted methods of receiving Bitcoin block data in
a timely fashion. To shill my own bags, of late I've been thinking about
such systems and expose services to fetch header/BIP157 filter data over
DNS [1, 2], header data over arbitrary radio interfaces [3] and full
block data over HTTP [4]. Finally, there is also a parallel P2P
implementation which attempts to make different tradeoffs from the
existing Bitcoin Core P2P implementation and is much more aggressive
about seeking additional connections if there is evidence of headers
which are better than our current chain at [5], relying on some of the
other mechanisms to seek header data.

Note that the Bitcoin Core-based work here also includes what I hope can
be used as an arbitrary easy-to-write-additional-block-fetch-methods
framework.

[1] https://github.com/bitcoin/bitcoin/pull/16834
[2] https://twitter.com/TheBlueMatt/status/1200585905163112449
[3]
https://github.com/TheBlueMatt/bitcoin/commit/2019-10-rusty-all-da-stuff%5E1
you should be able to pick up headers in the 915 MHz band over LoRa with
this patch in the New York City area already :)
[4] https://github.com/bitcoin/bitcoin/pull/16762
[5] https://github.com/bitcoin/bitcoin/pull/17376
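
For intuition on how quickly the dilation in Antoine's example below accumulates, 
the arithmetic is simply (my own restatement, matching his figures):

    def height_lead(n_network_blocks, delay_min=2.0, avg_interval_min=10.0):
        # The victim receives a block every (avg_interval + delay) minutes while the
        # network produces one every avg_interval minutes, so the attacker's lead is:
        return n_network_blocks * delay_min / (avg_interval_min + delay_min)

    # 6 blocks -> lead of 1; 144 -> 24; 1008 -> 168, i.e. over a day after a week.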

Matt

On 12/9/19 6:04 PM, Antoine Riard wrote:
> Time-Dilation Attacks on Offchain Protocols
> ===
> 
> Lightning works on reversing the double-spend problem to a private
> state between parties instead of being a public issue verified by every
> network peer. The security model is based on revocation of previous
> states and in case of broadcast of any of them, being aware of it to
> generate justice transactions to claim misbehaving peer onchain outputs
> before contest period expiration. This period is driven by the blockchain
> which is here the system clock.
> 
> An Eclipse attack's end-goal is to monopolize a victim's incoming and
> outgoing connections, in this way isolating a node from the rest of its
> peers in the network. A successful Eclipse attack lets the attacker
> filter the victim's view of the blockchain, i.e he controls transactions
> and blocks announcements [0].
> 
> Every LN node must be tied to a bitcoin full-node or light-client to
> verify independently channels opening/closing, HTLCs expiration and
> previous/latest state broadcast. To operate securely, the view of the
> blockchain must be up-to-date with the one shared with the rest of the
> network. By considering Eclipse attacks on the base layer, this assumption
> can be broken.
> 
> First scenario : Targeting the CSV security delay
> --
> 
> Alice and Mallory are LN peers with a channel opened at state N. They
> use a CSV of 144 blocks as a security parameter for contestation period.
> Mallory is able to identify Alice full-node and start to eclipse it.
> When done, it keeps announcing blocks to Alice node but delaying them by
> 2min. Given a variance of 10min, after 6 blocks, Mallory will have a
> height ahead of Alice of 1, after 24 blocks, a lead of 4, after 144,
> a lead of 24, after 1008, a lead of 168.
> 
> After maintaining the eclipse for a week, Mallory will have more than a
> day of height advance on Alice view of blockchain. The difference being
> superior at the CSV timelock, Mallory can broadcast a previous
> commitment transaction at state N - 10 with a balance far more favorable
> than the legit one at height H, the synchronized height with the rest
> of the network.
> 
> At revoked commitment transaction broadcast, Alice is slow down at
> height H - 168. At H+144, Mallory can unlock its funds out of the
> commitment transaction outputs and by so closing the contestation period
> while Alice is still stuck at H-24. When Alice learn about revoked
> broadcast at H, it's already too late. Mallory may have stopped the
> Eclipse attack after H+144,

Re: [Lightning-dev] [PATCH] First draft of option_simplfied_commitment

2019-11-19 Thread Matt Corallo
Regarding your list,
A. Indeed, unlikely to happen.
B. Maybe, but we’re talking a 10x increase so that suddenly you’re paying some 
extra pennies. In the scale of likelihood, and in the scale of what fees will 
be anyway, this doesn’t matter.
C. You still seem to have missed the point that they need to be economical 
*eventually*, at the mempool’s lowest. That is hugely different from “fees 
increasing”. I really don’t think the history here supports your position.

At a high level, you seem to be completely discounting the cost of complexity 
in the protocol. Lightning already has way too many values you have to 
negotiate with your counterparty, and the state machine is already complicated 
enough that three groups of talented developers failed to check key parameters 
during state transitions!

If at all possible, the answer should be “remove more crap from the state 
machine”, not “well, we’re like 95% sure this is fine, let’s just heap on the 
complexity”. Not only does this fly in the face of any reasonable definition of 
“good engineering practice”, but the cost to change it later is relatively low!

We’re rewriting the state machine and transaction format now, let’s learn from 
the past few years, not pretend everything is perfect.

Matt

> On Nov 19, 2019, at 08:47, Joost Jager  wrote:
> 
> 
> Thanks for this explanation (and Matt's) of the dust limit. For me it 
> definitely adds to a better understanding of the matter.
> 
>> In short, I don't expect dust limits to rise unless the BTC/fiat price
>> drops so far that UTXO-filling attacks become much more affordable than
>> they are with today's limits.  Dust limits may instead decrease (or be
>> removed), but I don't think that's a problem for anchor outputs.
> 
> If the BTC/fiat price rises and this leads to lowering the dust limit, it 
> could be beneficial to lower the anchor size too. In the current proposal, 
> the channel initiator pays for both anchors. They basically give away a 
> little bit of money to the non-initiator via the non-initiator anchor output. 
> If those anchors have become expensive because btc is expensive, it would be 
> nice to choose a lower value (as far as permitted by the dust limit).
>  
>> That said, I think it'd be a nice thing for LN implementations to strive
>> to create anchor outputs that are economical to spend and that may
>> require using a negotiable output amount to compensate for rises in
>> feerates making small-value outputs less economical, especially if
>> you're using different anchor outputs for each channel party.
> 
> On the one hand, we'd want them to be economical to not create dust. But on 
> the other hand because it is free money too, we also want them to be as small 
> as possible (as mentioned above). I would think that an individual running a 
> node is more concerned with their balance than the quality of the utxo set.
> 
> So far, the following factors/events were mentioned that could lead to 
> unhappiness about a hard-coded anchor value (hopefully this is complete and 
> correct now):
> 
> A. Dust limit rises: need bigger anchors to get commitment transactions 
> accepted (arguably unlikely to happen).
> B. Btc price goes up, dust limit goes down: may want smaller anchors to 
> reduce amount (in fiat terms) of free money given to the non-initiator
> C. Fee rates go up: need bigger anchors to make them economical to spend and 
> prevent them from filling up the utxo set.
> 
> Introducing a new parameter in the channel opening sequence that sets the 
> anchor size would keep all options open. I would be comfortable with doing 
> that and knowing we won't need changes if any of the three scenarios above 
> play out.
>  
> Joost
> ___
> Lightning-dev mailing list
> Lightning-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] [PATCH] First draft of option_simplfied_commitment

2019-11-15 Thread Matt Corallo
Regarding the dust relay limit, there may be a little bit of a
misunderstanding as to a few important details. The purpose of it (much
like the dust output values in the anchor outputs) is to discourage
outputs which are not ever economically spendable, not short-term
uneconomically spendable.

This value is, thus, *not* connected to the mempool's min relay fee
(except for the purposes of calculating the constant, which may be part
of the disconnect here). The min relay fee represents a short-term DoS
limit, and, thus, can float wildly (though, since 2017, and even in
general, we largely have not seen it go up much in absolute value at all).

Further, and, critically, there are a number of issues with *any* policy
change that makes several bits of the P2P network less efficient, and,
thus, they are generally avoided where possible. These include compact
block relay, feerate estimation, relay-DoS-resistance, etc.

While none of this is to say that the dust limit will *never* change, I
really don't think it's unreasonable to hard-code it - there's no
pressure *to* change it, and if there's an additional reason not to (ie
that deployed software relies on that value, as software beyond lightning
already does), then it almost certainly won't be.

Matt
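
For reference, the "constant" mentioned above can be re-derived from Bitcoin
Core's dust-threshold policy. A rough sketch (Python, assuming the default
dustRelayFee of 3 sat/vbyte stays where it is):

    # An output is dust if its value is below (output size + estimated size of
    # the input needed to spend it) * dustRelayFee, per Bitcoin Core's policy code.
    def dust_threshold(output_bytes: int, spend_vbytes: int,
                       dust_relay_sat_per_vb: int = 3) -> int:
        return (output_bytes + spend_vbytes) * dust_relay_sat_per_vb

    print(dust_threshold(31, 67))    # P2WPKH: 294 sat
    print(dust_threshold(34, 148))   # P2PKH:  546 sat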

On 11/14/19 9:56 AM, Joost Jager wrote:
>> So then because unilateral close is the only way to resolve atm, it is
>> correct also in theory that there will never be a commitment tx where the
>> non-initiator pays fees? But the point is clear, channels can get stuck.
> 
>Yeah.  Generally, it doesn't happen because we insist on a reasonable
>balance in the channel, but it's theoretically possible.
> 
> 
> Ok, summarizing just for clarity:
> 
> - there will never be a commitment tx where the non-initiator pays fees
> - generally a unilateral close doesn't happen because we insist on a
> reasonable balance in the channel
>  
> 
>>>> If we hard-code a constant, we won't be able to adapt to changes of
>>>> `dustRelayFee` in the bitcoin network. And we'd also need to deal with a
>>>> peer picking a value higher than that constant for its regular funding flow
>>>> dust limit parameter.
>>> Note that we can't adapt to dustRelayFee *today*, since we can't change
>>> it after funding (another thing we probably need to fix).
>> You can't for an existing channel, but at least for a new channel you can
>> pick a different value. Which wouldn't be possible if we'd put a fixed
>> (anchor) amount in the spec.
> 
>That's not really much consolation though for the existing network.
> 
>Still Matt assures me that the relay dust limit is not going to change,
>so I think we're best off cutting down our test matrix by choosing a
>value and putting it directly into the spec.
> 
>By my calculations, at minfee it will cost you ~94 satoshis to spend.
>Dust limit is 294 for Segwit outputs (basically assuming 3x minfee).
> 
>So I'm actually happy to say "anchor outputs are 294 satoshi".  These
>are simply spendable, and still only $3 each if BTC is $1M.  Lower is
>better (as long as we stick with funder-pays), as long as they do
>eventually get spent.
> 
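
(Checking that arithmetic: 294 sat * $1,000,000 per BTC / 100,000,000 sat per
BTC comes out to about $2.94, i.e. roughly the "$3 each" quoted above.)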
> 
> Looking at https://github.com/bitcoin/bitcoin/commit/9022aa3, is
> `dustRelayFee` really never going to change? It's even a (hidden) command-line
> parameter that can be set easily.
> 
> If the fee market rose and stayed high for an extended period of
> time, why wouldn't people use this flag to raise the dust relay fee? If
> we then have our hard-coded 294 sat anchors, no force-close transactions
> can be broadcast anymore. It would be risky to open new channels at that
> point, because they can only be coop closed.
>  
> Maybe Lightning is relevant enough by that time to keep people from
> touching `dustRelayFee`, but what if not? The fix at that point would be
> to introduce a new commitment format, which given our process takes a
> long time.
> 
> I'd think that having at least an option to adapt to `dustRelayFee`
> changes for new channels makes Lightning more robust. The two options
> that I know of are:
> 
> - Reuse `dust_limit_satoshis` on the `open_channel`/`accept_channel`
> messages as the anchor size. This ignores that an anchor does not need
> to be net positive after sweeping (because its purpose is to get the
> commit tx confirmed), while we generally do want htlcs to be net
> positive. It may, however, not be such a big deal in practice. Suppose
> we'd just set this to 294 sat to get the desired anchor output value
> (and make it a soft requirement for channel acceptance). The worst that
> can happen is that there is a force close with one or more pending htlcs
> that aren't economical to sweep. Which can happen anyway because this is
> a channel open parameter and it is impossible to know what is economical
> for the lifetime of the channel. Instead of burning to fees, the htlc
> output will sit there waiting for fees to go down.
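
A minimal sketch of that first option (Python; names are illustrative only and
not from any implementation, just the "use the negotiated dust_limit_satoshis
as the anchor value, soft-required to be 294 sat" idea described above):

    EXPECTED_ANCHOR_SAT = 294   # soft requirement suggested for channel acceptance

    def accept_channel_params(remote_dust_limit_sat: int) -> bool:
        # Soft-require the peer's dust limit, and therefore the anchor value,
        # to be the expected amount: small, but still relayable.
        return remote_dust_limit_sat == EXPECTED_ANCHOR_SAT

    def anchor_value_sat(publisher_dust_limit_sat: int) -> int:
        # The published commitment uses the publisher's negotiated dust limit
        # as the value of both anchor outputs.
        return publisher_dust_limit_sat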

Re: [Lightning-dev] [PATCH] First draft of option_simplfied_commitment

2019-10-30 Thread Matt Corallo
(resend from the right src)

>> On Oct 30, 2019, at 06:04, Joost Jager  wrote:
>> > For the anchor outputs we consider:
>> >
>> > * Output type: normal P2WKH. At one point, an additional spending path was
>> > proposed that was unconditional except for a 10 block csv lock. The
>> > intention of this was to prevent utxo set pollution by allowing anyone to
>> > clean up. This however also opens up the possibility for an attacker to
>> > 'use up' the cpfp carve-out after those 10 blocks. If the user A is offline
>> > for that period of time, a malicious peer B may already have broadcasted
>> > the commitment tx and pinned down user A's anchor output with a low fee
>> > child. That way, the commitment tx could still remain unconfirmed while an
>> > important htlc expires.
>> 
>> Agreed, this doesn't really work.  We actually needed a bitcoin rule
>> that allowed a single anyone-can-spend output.  Seems like we didn't get
>> that.
> 
> With the mempool acceptance carve-out in bitcoind 0.19, we indeed won't be 
> able to safely produce a single OP_TRUE output for anyone to spend. An 
> attacker could attach low fee child transactions, reach the limits and block 
> further fee bumping.

Quick correction. This is only partially true. You can still RBF the
sub-package; the only issue I see immediately is that you have to pay for the
otherwise-free relay of everything the attacker relayed.

Why not stick with the original design from Adelaide, with a spending path with
a 1 CSV that is anyone-can-spend (or that is revealed by spending another
output)?
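
As a toy model of that shape of output (key path at any time, anyone-can-spend
once the CSV delay has passed), in the style of the OP_DEPTH script that comes
up below; everything here is illustrative, not a concrete proposal:

    def anchor_spend_allowed(witness_items: int, sig_valid: bool,
                             confirmations: int, csv_delay: int = 1) -> bool:
        if witness_items > 0:
            # A non-empty witness selects the keyed branch: only the anchor
            # owner's signature spends immediately.
            return sig_valid
        # An empty witness selects the delayed branch: after csv_delay blocks
        # anyone can sweep, which keeps stray anchors out of the UTXO set.
        return confirmations >= csv_delay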

>> > * For the keys to use for `to_remote_anchor` and `to_local_anchor`, we'd
>> > like to introduce new addresses that both parties communicate in the
>> > `open_channel` and `accept_channel` messages. We don't want to reuse the
>> > main commitment output addresses, because those may (at some point) be cold
>> > storage addresses and the cpfp is likely to happen from a hot wallet.
>> 
>> This is horribly spammy.  At the moment we see ~ one unilateral close
>> every 3 blocks.  Hopefully that will reduce, but there'll always be
>> some.
> 
> It seems there isn't another way to do the anchor outputs given the mempool 
> limitations that exist? Each party needs to have their own anchor, protected 
> by a key. Otherwise it would open up these attack scenarios where an attacker 
> blocks the commitment tx confirmation until htlcs time out. Even with the 
> script OP_DEPTH OP_IF <pubkey> OP_CHECKSIG OP_ELSE 10 OP_CSV OP_ENDIF, the
> "anyones" don't know the pubkey and still can't sweep after 10 blocks.
> 
>> > * Within each version of the commitment transaction, both anchors always
>> > have equal values and are paid for by the initiator.
>> 
>> Who pays if they can't afford it?  What if they push_msat everything to
>> the other side?
> 
> Similar to how it currently works. There should never be a commitment 
> transaction in which the initiator cannot pay the fee. With anchor outputs 
> there should never be a commitment tx in which the initiator cannot pay the 
> fee and the anchors. Also currently you cannot push everything to the other 
> side with push_msat. The initiator still needs to have enough balance to pay 
> for the on-chain costs (miner fee and anchors).
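
A minimal sketch of that invariant (illustrative names, not from any
implementation):

    def initiator_can_afford(initiator_balance_sat: int, commit_fee_sat: int,
                             anchor_sat: int) -> bool:
        # The initiator pays the commitment fee plus both (equal-valued) anchors.
        return initiator_balance_sat >= commit_fee_sat + 2 * anchor_sat

No commitment would be proposed or signed when this check fails.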
> 
>> > The value of the
>> > anchors is the dust limit that was negotiated in the `open_channel` or
>> > `accept_channel` message of the party that publishes the transaction.
>> 
>> Now initiator has to care about the other side's dust limit, which is
>> bad.  And as accepter I now want this much higher, since I get those
>> funds instantly.  I don't think we gain anything by making this
>> configurable at all; just pick a number and be done.
>> 
>> Somewhere between 1000 and 10,000 sat is a good idea.
> 
> Yes, it is free money. Therefore we need to validate the dust limit in the 
> funding flow. Check whether it is reasonable. That should also be done in the 
> current implementation. Otherwise your peer can set a really high dust limit 
> that lets your htlc disappear on-chain (although that is only free money for 
> the miner).
> 
> If we hard-code a constant, we won't be able to adapt to changes of 
> `dustRelayFee` in the bitcoin network. And we'd also need to deal with a peer 
> picking a value higher than that constant for its regular funding flow dust 
> limit parameter.
>  
>> There are several design constraints in the original watchtowers:
>> 
>> 1. A watchtower shouldn't be able to guess the channel history.
>> 2. ... even if it sees a unilateral tx.
>> 3. ... even if it sees a revoked unilateral tx it has a penalty for.
>> 4. A watchtower shouldn't be able to tell if it's being used for both
>>parties in the same channel.
>> 
>> If you don't rotate keys, a watchtower can brute-force the HTLCs for all
>> previous transactions it was told about, and previous channel balances.
>> 
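
A toy illustration of why static (never-rotated) keys let a watchtower
brute-force old balances (schematic only; candidate_id stands in for building
and hashing a real commitment transaction):

    import hashlib

    def candidate_id(static_fields: bytes, to_local_sat: int) -> bytes:
        # Stand-in for constructing the full commitment tx from the known,
        # never-rotated keys plus a guessed balance split, then hashing it.
        return hashlib.sha256(static_fields + to_local_sat.to_bytes(8, "little")).digest()

    def brute_force_balance(capacity_sat: int, txid_hint: bytes, static_fields: bytes):
        # Channel capacities are small enough that enumerating every split is cheap.
        for to_local in range(capacity_sat + 1):
            if candidate_id(static_fields, to_local) == txid_hint:
                return to_local
        return None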
>> We removed key rotation on the to-remote output because you can simply
>> not tell t
