Re: [Lightning-dev] [bitcoin-dev] Scaling Lightning With Simple Covenants

2023-09-11 Thread Rusty Russell
Hi!

I've read the start of the paper on my vacation, and am still
digesting it.  But even so far, it presents some delightful
possibilities.

As with some other proposals, it's worth noting that the cost of
enforcement is dramatically increased.  It's no longer one or two txs,
it's 10+.  If the "dedicated user" contributes some part of the expected
fee, the capital efficiency is reduced (and we're back to "how much is
enough?").

But worst case (dramatic dedicated user failure) it's only a 2x penalty
on number of onchain txs, which seems acceptable if the network is
sufficiently mature that these failure events are rare.

Note also that the (surprisingly common!) "user goes away" case where
the casual user fails to roll over only returns funds to the dedicated
user; relying on legal and normal custody policies in this case may be
preferable to an eternal burden on the UTXO set with the current
approach!

Thank you!
Rusty.

jlspc via bitcoin-dev  writes:
> TL;DR
> =====
> * The key challenge in scaling Lightning in a trust-free manner is the 
> creation of Lightning channels for casual users.
>   - It appears that signature-based factories are inherently limited to 
> creating at most tens or hundreds of Lightning channels per UTXO.
>   - In contrast, simple covenants (including those enabled by CTV [1] or APO 
> [2]) would allow a single UTXO to create Lightning channels for millions of 
> casual users.
> * The resulting covenant-based protocols also:
>   - support resizing channels off-chain,
>   - use the same capital to simultaneously provide in-bound liquidity to 
> casual users and route unrelated payments for other users,
>   - charge casual users tunable penalties for attempting to put an old state 
> on-chain, and
>   - allow casual users to monitor the blockchain for just a few minutes every 
> few months without employing a watchtower service.
> * As a result, adding CTV and/or APO to Bitcoin's consensus rules would go a 
> long way toward making Lightning a widely-used means of payment.
>
> Overview
> ========
>
> Many proposed changes to the Bitcoin consensus rules, including 
> CheckTemplateVerify (CTV) [1] and AnyPrevOut (APO) [2], would support 
> covenants.
> While covenants have been shown to improve Bitcoin in a number of ways, 
> scalability of Lightning is not typically listed as one of them.
> This post argues that any change (including CTV and/or APO) that enables even 
> simple covenants greatly improves Lightning's scalability, while meeting the 
> usability requirements of casual users.
> A more complete description, including figures, is given in a paper [3].
>
> The Scalability Problem
> =======================
>
> If Bitcoin and Lightning are to become widely-used, they will have to be 
> adopted by casual users who want to send and receive bitcoin, but who do not 
> want to go to any effort in order to provide the infrastructure for making 
> payments.
> Instead, it's reasonable to expect that the Lightning infrastructure will be 
> provided by dedicated users who are far less numerous than the casual users.
> In fact, there are likely to be tens-of-thousands to millions of casual users 
> per dedicated user.
> This difference in numbers implies that the key challenge in scaling Bitcoin 
> and Lightning is providing bitcoin and Lightning to casual users.
> As a result, the rest of this post will focus on this challenge.
>
> Known Lightning protocols allow casual users to perform Lightning payments 
> without:
>  * maintaining high-availability,
>  * performing actions at specific times in the future, or
>  * having to trust a third-party (such as a watchtower service) [5][6].
>
> In addition, they support tunable penalties for casual users who attempt to 
> put an old channel state on-chain (for example, due to a crash that causes a 
> loss of state).
> As a result, these protocols meet casual users' needs and could become 
> widely-used for payments if they were sufficiently scalable.
>
> The Lightning Network lets users send and receive bitcoin off-chain in a 
> trust-free manner [4].
> Furthermore, there are Lightning protocols that allow Lightning channels to 
> be resized off-chain [7].
> Therefore, making Lightning payments and resizing Lightning channels are 
> highly scalable operations.
>
> However, providing Lightning channels to casual users is not scalable.
> In particular, no known protocol that uses the current Bitcoin consensus 
> rules allows a large number (e.g., tens-of-thousands to millions) of 
> Lightning channels, each co-owned by a casual user, to be created from a 
> single on-chain unspent transaction output (UTXO).
> As a result, being able to create (and close) casual users' Lightning 
> channels remains the key bottleneck in scaling Lightning.
>
> Casual Users And Signatures
> ===========================
>
> Unfortunately, there are good reasons to believe this bottleneck is 
> unavoidable given the current Bitcoin consensus rules.
> The 

Re: [Lightning-dev] Remotely control your lightning node from your favorite HSM

2023-09-11 Thread Rusty Russell
Runes today are often bound to the BOLT8 nodeid, giving both (otherwise
you need to protect your rune from being read).

I like this model *but* it requires two-way comms for setup (the HSM
tells the node its id, the node gives the HSM the rune).

Fortunately, it's trivial to support runes as an extension, and set what
the default is if they don't present one.  Allows nice experimentation.

Further discussion on the gist...

Cheers!
Rusty.

Bastien TEINTURIER  writes:
> Hey Christian,
>
> You're right, if we create runes inside the HSM then we end up with the
> same security model.
> It then boils down to whether we'd rather implement Bolt 8 or rune
> management inside an HSM!
> I'd prefer Bolt 8, as I think it has more universality (and is simpler),
> but it could be worth experimenting with both approaches.
>
> It will also be interesting to see how we actually configure rights (access
> control) on the lightning node side.
> That really deserves some implementation work to flesh out that kind of
> details.
>
> Cheers,
> Bastien
>
> On Fri, Sep 8, 2023 at 16:51, Christian Decker wrote:
>
>> Very interesting proposal, though as Will points out we could implement
>> the same using runes: have the rune be managed by the hardware wallet, and
>> commit the rune used to authenticate the RPC call commit to the call's
>> payload. That way a potentially compromised client cannot authenticate
>> arbitrary calls, since the hardware wallet is required to associate a rune
>> with it, giving it a chance for review.
>>
>> This is similar to how authentication of RPC calls works in greenlight,
>> where the node host is not trusted, and we need to pass the authenticated
>> commands forward to the signer for verification before processing any
>> signature request from the node. We chose to authenticate the payload
>> rather than the transport (which is what partonnere does) because it
>> removes the need for a direct connection, and adds flexibility to how we
>> can deliver the commands. Functionally they are very similar however.
>>
>> Cheers,
>> Christian
>>
>> On Thu, Sep 7, 2023, 15:06 Bastien TEINTURIER  wrote:
>>
>>> Hi William,
>>>
>>> > What is wrong with runes/macaroons for validating and authenticating
>>> > commands?
>>>
>>> Runes/macaroons don't provide any protection if the machine you are
>>> issuing the RPCs from is compromised. The attacker can change the
>>> parameters of your RPC call and your lightning node will still gladly
>>> execute it.
>>>
>>> > I can't imagine validating every RPC request with a hardware
>>> > device and trusted display, unless you have some specific use case in
>>> > mind.
>>>
>>> I think that this is because you have the wrong idea of which RPCs
>>> this is supposed to protect. This is useful for the RPCs that actually
>>> involve paying something (channel open, channel close, pay invoice).
>>> This isn't useful for "read" RPCs (listing channels).
>>>
>>> Making an on-chain operation or paying an invoice is something that is
>>> infrequent enough for the vast majority of nodes that it makes sense
>>> to validate it manually. Also, this is fully configurable: you can
>>> choose which RPCs you want to protect that way and which RPCs you want
>>> to keep open.
>>>
>>> Thanks,
>>> Bastien
>>>
>>> On Wed, Sep 6, 2023 at 17:42, William Casarin wrote:
>>> >
>>> > On Wed, Sep 06, 2023 at 03:32:50AM +0200, Bastien TEINTURIER wrote:
>>> > >Hey Zman,
>>> > >
>>> > >I saw the announcement about the commando plugin, and it was actually
>>> > >one of the reasons I wanted to write up what I had in mind, because
>>> > >while commando also uses a lightning connection to send commands to a
>>> > >lightning node, it was missing what in my opinion is the most important
>>> > >part: having all of Bolt 8 handled by the HSM and validating commands
>>> > >using a trusted display.
>>> >
>>> > What is wrong with runes/macaroons for validating and authenticating
>>> > commands? I can't imagine validating every RPC request with a hardware
>>> > device and trusted display, unless you have some specific use case in
>>> > mind.
>>> >
>>> > Will


[Lightning-dev] option_simple_close for "unfailable" closing

2023-07-18 Thread Rusty Russell
https://github.com/lightning/bolts/pull/1096

This is a "can't fail!" close protocol, as discussed at the NY Summit, and on 
@Roasbeef's wishlist. It's about as simple as I could make it: the only 
complexity comes from allowing each side to indicate whether they want to omit 
their own output.

It's "taproot ready"(TM) in the sense that shutdown is always sent to trigger 
it, so that can contain the nonces without any persistence requirement.

I split it into three commits for cleanliness:

1.  Introduce the new protocol
2.  Remove the requirement that shutdown not be sent multiple times (which was 
already nonsensical)
3.  Remove the older protocols

The first commit is inline here, for easy review:

diff --git a/02-peer-protocol.md b/02-peer-protocol.md
index f43c686..ce859f2 100644
--- a/02-peer-protocol.md
+++ b/02-peer-protocol.md
@@ -16,6 +16,7 @@ operation, and closing.
 * [Channel Close](#channel-close)
   * [Closing Initiation: `shutdown`](#closing-initiation-shutdown)
   * [Closing Negotiation: `closing_signed`](#closing-negotiation-closing_signed)
+  * [Closing Negotiation: `closing_complete` and `closing_sig`](#closing-negotiation-closing_complete-and-closing_sig)
 * [Normal Operation](#normal-operation)
   * [Forwarding HTLCs](#forwarding-htlcs)
   * [`cltv_expiry_delta` Selection](#cltv_expiry_delta-selection)
@@ -584,6 +585,17 @@ Closing happens in two stages:
 |       |<-(?)-- closing_signed  Fn----|       |
 +-------+                              +-------+
 
++-------+                              +-------+
+|       |--(1)-----  shutdown  ------->|       |
+|       |<-(2)-----  shutdown  --------|       |
+|       |                              |       |
+|       |                              |       |
+|   A   |              ...             |   B   |
+|       |                              |       |
+|       |--(3)-- closing_complete Fee->|       |
+|       |<-(4)-- closing_sig  ---------|       |
++-------+                              +-------+
+
 ### Closing Initiation: `shutdown`
 
 Either node (or both) can send a `shutdown` message to initiate closing,
@@ -776,6 +788,83 @@ satoshis, which is possible if `dust_limit_satoshis` is below 546 satoshis).
 No funds are at risk when that happens, but the channel must be force-closed as
 the closing transaction will likely never reach miners.
 
+### Closing Negotiation: `closing_complete` and `closing_sig`
+
+Once shutdown is complete, the channel is empty of HTLCs, there are no
+commitments for which a revocation is owed, and all updates are included on
+both commitments, the final current commitment transactions will have no HTLCs.
+
+Each peer says what fee it will pay, and the other side simply signs that
+transaction.  The only complexity comes from allowing each side to omit its
+own output should it be uneconomic.
+
+This process will be repeated every time a `shutdown` message is received,
+which allows re-negotiation.
+
+1. type: 40 (`closing_complete`)
+2. data:
+   * [`channel_id`:`channel_id`]
+   * [`u64`:`fee_satoshis`]
+   * [`u8`: `has_closer_output`]
+   * [`signature`:`signature_with_closee_output`]
+   * [`signature`:`signature_without_closee_output`]
+
+1. type: 41 (`closing_sig`)
+2. data:
+   * [`channel_id`:`channel_id`]
+   * [`u8`: `closee_output`]
+   * [`signature`:`signature`]
+
+#### Requirements
+
+Note: the details and requirements for the transaction being signed are in
+[BOLT 3](03-transactions.md#closing-transaction).
+
+Both nodes:
+  - After a `shutdown` has been received, AND no HTLCs remain in either
+    commitment transaction:
+    - SHOULD send a `closing_complete` message.
+
+The sender of `closing_complete` (aka. "the closer"):
+  - MUST set `fee_satoshis` to a fee less than or equal to its outstanding
+    balance, rounded down to whole satoshis.
+  - SHOULD set `has_closer_output` to 0 if it considers its own remaining
+    balance to be uneconomic.
+  - Otherwise MUST set `has_closer_output` to 1.
+  - If it sets `has_closer_output` to 1:
+    - MUST set `signature_with_closee_output` to a valid signature of a
+      transaction with both closer and closee outputs.
+    - MUST set `signature_without_closee_output` to a valid signature of a
+      transaction with only a closer output.
+  - Otherwise (`has_closer_output` is 0):
+    - MUST set `signature_with_closee_output` to a valid signature of a
+      transaction with only the closee output.
+    - MUST set `signature_without_closee_output` to a valid signature of a
+      transaction with only the null output.
+
+The receiver of `closing_complete` (aka. "the closee"):
+  - if either `signature_with_closee_output` or
+    `signature_without_closee_output` is not valid for the closing transactions
+    specified in [BOLT #3](03-transactions.md#closing-transaction) OR
+    non-compliant with the LOW-S standard rule
+    [LOWS](https://github.com/bitcoin/bitcoin/pull/6769):
+    - SHOULD send 
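
As a rough illustration of the closer's rules quoted above, here is a minimal
sketch (Python; `sign_closing_tx` is a hypothetical stand-in for BOLT 3
closing-transaction construction and signing):

    def closer_signatures(own_output_economic, sign_closing_tx):
        # The closer always sends two signatures, so the closee can choose
        # whether or not to keep its own output without another round trip.
        if own_output_economic:
            has_closer_output = 1
            sig_with = sign_closing_tx(outputs=("closer", "closee"))
            sig_without = sign_closing_tx(outputs=("closer",))
        else:
            has_closer_output = 0
            sig_with = sign_closing_tx(outputs=("closee",))
            sig_without = sign_closing_tx(outputs=("null",))  # "null output" case
        return has_closer_output, sig_with, sig_without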

Re: [Lightning-dev] Potential vulnerability in Lightning backends: BOLT-11 "payment hash" does not commit to payment!

2023-07-10 Thread Rusty Russell
Um, this is a super-weird setup AFAICT.

You generate an invoice, but you don't remember anything about it?  Um,
OK, stateless invoices are a thing, but there's a metadata field for
that?

Someone pays an invoice, you don't check the place it's going (which,
y'know, signed it), you check some other field?

I've never used lnbits and I'm clearly missing something...

Thanks!
Rusty.

callebtc via Lightning-dev writes:

> Dear list,
>
> earlier last month, our team at LNbits discovered a rather interesting 
> exploit which would enable an attacker to create balances out of thin
> air by abusing a quirk in how invoices are handled internally. We've patched 
> this in LNbits version 0.10.5 and urge everyone to update ASAP if you haven't
> done so already. I want to describe the attack here, since my gut feeling is 
> that carrying out the same exploit is possible in other Lightning 
> applications. If you're working on custodial wallets, payment processors, 
> account management software, etc. you probably want to read this.
>
> In short, the attacker was able to insert a bolt-11 payment hash of payment A 
> into a different payment, creating a malicious invoice B that can trick the 
> backend into believing that B == A.
>
> Here is how it goes:
>
> - Attacker creates invoice A of amount 1000 sat in LNbits
> - Attacker creates invoice B' of amount 1 sat on her own node
> - Attacker deserializes B', inserts payment_hash(A) into payment_hash(B), 
> re-signs the invoice, and serializes it again, producing malicious invoice B
> - Attacker creates a new account in LNbits and pays B
>
> - LNbits backend uses payment_hash(B) to check whether this is an internal 
> payment or a payment via LN
> - Backend finds A in its database since we implicitly assume that 
> payment_hash(A) commits to A
>
> ** This is the critical part! Payment hashes do *NOT* commit to any payment 
> details (like amount) but only to the preimage! ** 
>
> - Backend settles payment internally by crediting A and debiting B
> - Attacker has "created" 999 sats
>
> Mitigation:
>
> The mitigation is quite simple. Backends should either use self-generated 
> unique "checking id's" for looking up internal payments or use additional 
> checks to make sure that the invoice details have not been messed around with 
> (e.g., asserting amount(A) == amount(B)).
>
> Lessons:
>
> I think there are two lessons here. First, it's good to realize the level of 
> sophistication of LN-savvy attackers. This attack clearly involves a 
> fundamental understanding of bolt-11 and requires custom tooling to produce 
> the malicious invoice. 
>
> The second lesson is more valuable: The "payment hash" of an invoice is not a 
> "payment" hash but merely a "preimage" hash – and nothing else. Naming this 
> field as such increases the chance of developers implicitly assuming that the 
> hash commits to payment details like amount, pubkey, etc. I will from now on 
> call this simply the "preimage hash" and invite you to do so too.
>
> Best 
>
> Calle
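
To make the quoted mitigation concrete, a minimal sketch of the safer
internal-settlement check (hypothetical backend helpers, not actual LNbits
code):

    def settle_internally(db, invoice):
        """Settle internally only if the stored invoice matches on more
        than the payment hash, which commits solely to the preimage."""
        stored = db.lookup_by_payment_hash(invoice.payment_hash)
        if stored is None:
            return False                       # not internal: pay over LN
        if stored.amount_msat != invoice.amount_msat:
            raise ValueError("amount mismatch: possible payment-hash reuse")
        db.credit(stored.receiver_account, stored.amount_msat)
        db.debit(invoice.payer_account, stored.amount_msat)
        return True

Using a self-generated unique "checking id" as the lookup key, as the post
suggests, avoids the problem entirely.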


[Lightning-dev] BOLT cleanup: removing features, assuming others

2023-07-03 Thread Rusty Russell
https://github.com/lightning/bolts/pull/1092

 02-peer-protocol.md  |   74 ++---
 03-transactions.md   |  389 ---
 05-onchain.md|   17 --
 07-routing-gossip.md |   52 +-
 09-features.md   |   30 +--
 5 files changed, 48 insertions(+), 514 deletions(-)

Simplicity FTW.  This PR proposes to start ignoring
(and thus, simply assuming) various features, based on recent releases
and scanning node announcements:

Removed:

* initial_routing_sync (only had an effect if gossip_queries not supported)
* option_anchor_outputs (only supported by older experimental-only CLN builds)

I looked at all node_announcements on my node. There are 449 nodes apparently 
running a 4-year-old LND version (features hex 2200), which have 3+ year old 
channels. @Roasbeef points out that they already will have their 
channel_updates ignored due to lack of htlc_maximum_msat (which is now required
by LND and CLN, at least).

Features you can now assume (you should probably still set them for now, but
you can stop checking them; a decoding sketch follows the list):

* var_onion_optin (all but 6 nodes)
* gossip_queries (all but 11 nodes)
* option_data_loss_protect (all but 11 nodes)
* option_static_remotekey (all but 16 nodes)
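
For reference, a sketch of checking such bits against a node_announcement
`features` field (BOLT 9 bit numbering; the field is big-endian, with bit 0
in the last byte):

    def has_feature(features_hex: str, bit: int) -> bool:
        data = bytes.fromhex(features_hex)
        byte_index = len(data) - 1 - (bit // 8)
        return byte_index >= 0 and bool(data[byte_index] & (1 << (bit % 8)))

    # The old LND nodes above (features hex 2200) set only var_onion_optin
    # (bit 9) and option_static_remotekey (bit 13):
    assert has_feature("2200", 9) and has_feature("2200", 13)
    assert not has_feature("2200", 7)   # no gossip_queries (bits 6/7)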

Cheers!
Rusty.


[Lightning-dev] [MODERATION] List Moderation Enabled

2023-03-26 Thread Rusty Russell
Hi all,

All posts will be reviewed by a moderator, and I will not be
approving posts which are not directly concerned with Lightning
development.[1]

I'll review this decision sometime next week.[2]

Sorry for the inevitable latency!
Rusty.

[1] Except this one.
[2] Yeah, and I guess that one too?


Re: [Lightning-dev] Fat Errors

2022-10-25 Thread Rusty Russell
Joost Jager  writes:
> Hi list,
>
> I wanted to get back to a long-standing issue in Lightning: gaps in error
> attribution. I've posted about this before back in 2019 [1].

Hi Joost!

Thanks for writing this up fully.  Core lightning also doesn't
penalize properly, because of the attribution problem: solving this lets
us penalize a channel, at least.

I want to implement this too, to make sure I understand it
correctly, but having read it twice it seems reasonable.

How about 16 hops?  It's the closest power of 2 to the legacy hop
limit, and makes this 4.5k for payloads and hmacs.

There is, however, a completely different possibility if we want
to use a pre-pay scheme, which I think I've described previously.  You
send N sats and a secp point; every chained secret returned earns the
forwarder 1 sat[1].  The answers of course are placed in each layer of
the onion.  You know how far the onion got based on how much money you
got back on failure[2], though the error message may be corrupted.

Cheers,
Rusty.
[1] Simplest is truncate the point to a new secret key.  Each node would
apply a tweak for decorrelation ofc.
[2] The best scheme is that you don't get paid unless the next node
decrypts, actually, but that needs more thought.


Re: [Lightning-dev] Achieving Zero Downtime Splicing in Practice via Chain Signals

2022-06-29 Thread Rusty Russell
Olaoluwa Osuntokun  writes:
> Hi Rusty,
>
> Thanks for the feedback!
>
>> This is over-design: if you fail to get reliable gossip, your routing will
>> suffer anyway.  Nothing new here.
>
> Idk, it's pretty simple: you're already watching for closes, so if a close
> looks a certain way, it's a splice. When you see that, you can even take
> note of the _new_ channel size (funds added/removed) and update your
> pathfinding/blindedpaths/hophints accordingly.

Why spam the chain?

> If this is an over-designed solution, that I'd categorize _only_ waiting N
> blocks as wishful thinking, given we have effectively no guarantees w.r.t
> how long it'll take a message to propagate.

Sure, it's a simplification of "wait 6 blocks plus 30 minutes".

> If by routing you mean a sender, then imo still no: you don't necessarily
> need _all_ gossip, just the latest policies of the nodes you route most
> frequently to. On top of that, since you can get the latest policy each time
> you incur a routing failure, as you make payments, you'll get the latest
> policies of the nodes you care about over time. Also consider that you might
> fail to get "reliable" gossip, simply just due to your peer neighborhood
> aggressively rate limiting gossip (they only allow 1 update a day for a
> node, you updated your fee, oops, no splice msg for you).

There's no ratelimiting on new channel announcements?

> So it appears you don't agree that the "wait N blocks before you close your
> channels" isn't a fool proof solution? Why 12 blocks, why not 15? Or 144?

Because it's simple.

> From my PoV, the whole point of even signalling that a splice is ongoing,
> is for the sender's/receivers: they can continue to send/recv payments over
> the channel while the splice is in process. It isn't that a node isn't
> getting any gossip, it's that if the node fails to obtain the gossip message
> within the N block period of time, then the channel has effectively closed
> from their PoV, and it may be an hour+ until it's seen as a usable (new)
> channel again.

Sure.  If you want to not forget channels at all on close, that works too.

> If there isn't a 100% reliable way to signal that a splice is in progress,
> then this disincentives its usage, as routers can lose out on potential fee
> revenue, and sends/receivers may grow to favor only very long lived
> channels. IMO _only_ having a gossip message simply isn't enough: there're
> no real guarantees w.r.t _when_ all relevant parties will get your gossip
> message. So why not give them a 100% reliable on chain signal that:
> something is in progress here, stay tuned for the gossip message, whenever
> you receive that.

That's not 100% reliable at all.  How long do you want to wait for the new
gossip?

Just treat every close as signalling "stay tuned for the gossip
message".  That's reliable.  And simple.

Cheers,
Rusty.


Re: [Lightning-dev] Achieving Zero Downtime Splicing in Practice via Chain Signals

2022-06-28 Thread Rusty Russell
Hi Roasbeef,

This is over-design: if you fail to get reliable gossip, your routing
will suffer anyway.  Nothing new here.

And if you *know* you're missing gossip, you can simply delay onchain
closures for longer: since nodes should respect the old channel ids for
a while anyway.

Matt's proposal to simply defer treating onchain closes is elegant and
minimal.  We could go further and relax requirements to detect onchain
closes at all, and optionally add a perm close message.

Cheers,
Rusty.

Olaoluwa Osuntokun  writes:
> Hi y'all,
>
> This mail was inspired by this [1] spec PR from Lisa. At a high level, it
> proposes the nodes add a delay between the time they see a channel closed on
> chain, to when they remove it from their local channel graph. The motive
> here is to give the gossip message that indicates a splice is in process,
> "enough" time to propagate through the network. If a node can see this
> message before/during the splicing operation, then they'll be able relate
> the old and the new channels, meaning it's usable again by senders/receiver
> _before_ the entire chain of transactions confirms on chain.
>
> IMO, this sort of arbitrary delay (expressed in blocks) won't actually
> address the issue in practice. The proposal suffers from the following
> issues:
>
>   1. 12 blocks is chosen arbitrarily. If for w/e reason an announcement
>   takes longer than 2 hours to reach the "economic majority" of
>   senders/receivers, then the channel won't be able to mask the splicing
>   downtime.
>
>   2. Gossip propagation delay and offline peers. These days most nodes
>   throttle gossip pretty aggressively. As a result, a pair of nodes doing
>   several in-flight splices (inputs become double spent or something, so
>   they need to try a bunch) might end up being rate limited within the
>   network, causing the splice update msg to be lost or delayed significantly
>   (IIRC CLN resets these values after 24 hours). On top of that, if a peer
>   is offline for too long (think mobile senders), then they may miss the
>   update all together as most nodes don't do a full historical
>   _channel_update_ dump anymore.
>
> In order to resolve these issues, I think instead we need to rely on the
> primary splicing signal being sourced from the chain itself. In other words,
> if I see a channel close, and a closing transaction "looks" a certain way,
> then I know it's a splice. This would be used in concert w/ any new gossip
> messages, as the chain signal is a 100% foolproof way of letting an aware
> peer know that a splice is actually happening (not a normal close). A chain
> signal doesn't suffer from any of the gossip/time related issues above, as
> the signal is revealed at the same time a peer learns of a channel
> close/splice.
>
> Assuming, we agree that a chain signal has some sort of role in the ultimate
> plans for splicing, we'd need to decide on exactly _what_ such a signal
> looks like. Off the top, a few options are:
>
>   1. Stuff something in the annex. Works in theory, but not in practice, as
>   bitcoind (being the dominant full node implementation on the p2p network,
>   as well as what all the miners use) treats annexes as non-standard. Also
>   the annex itself might have some fundamental issues that get in the way of
>   its use all together [2].
>
>   2. Re-use the anchors for this purpose. Anchor are nice as they allow for
>   1st/2nd/3rd party CPFP. As a splice might have several inputs and outputs,
>   both sides will want to make sure it gets confirmed in a timely manner.
>   Ofc, RBF can be used here, but that requires both sides to be online to
>   make adjustments. Pre-signing can work too, but the effectiveness
>   (minimizing chain cost while expediting confirmation) would be dependent
>   on the fee step size.
>
>   In this case, we'd use a different multi-sig output (both sides can rotate
>   keys if they want to), and then roll the anchors into this splicing
>   transaction. Given that all nodes on the network know what the anchor size
>   is (assuming feature bit understanding), they're able to realize that it's
>   actually a splice, and they don't need to remove it from the channel graph
>   (yet).
>
>   3. Related to the above: just re-use the same multi-sig output. If nodes
>   don't care all that much about rotating these keys, then they can just use
>   the same output. This is trivially recognizable by nodes, as they already
>   know the funding keys used, as they're in the channel_announcement.
>
>   4. OP_RETURN (yeh, I had to list it). Self explanatory, push some bytes in
>   an OP_RETURN and use that as the marker.
>
>   5. Fiddle w/ the locktime+sequence somehow to make it identifiable to
>   verifiers. This might run into some unintended interactions if the inputs
>   provided have either relative or absolute lock times. There might also be
>   some interaction w/ the main constructing for eltoo (uses the locktime).
>
> Of all the options, I think #2 makes the 

Re: [Lightning-dev] Gossip Propagation, Anti-spam, and Set Reconciliation

2022-05-26 Thread Rusty Russell
Matt Corallo  writes:
>>> I agree there should be *some* rough consensus, but rate-limits are a 
>>> locally-enforced thing, not a
>>> global one. There will always be races and updates you reject that your 
>>> peers dont, no matter the
>>> rate-limit, and while I agree we should have guidelines, we can't "just 
>>> make them the same" - it
>>> both doesn't solve the problem and means we can't change them in the future.
>> 
>> Sure it does!  It severely limits the set divergence to race conditions
>> (down to block height divergence, in practice).
>
> Huh? There's always some line you draw, if an update happens right on the 
> line (which they almost 
> certainly often will because people want to update, and they'll update every 
> X hours to whatever the 
> rate limit is), then ~half the network will accept the update and half won't. 
> I don't see how you 
> solve this problem.

The update contains a block number.  Let's say we allow an update every
100 blocks.  This must be <= current block height (and presumably, newer
than height - 2016).

If you send an update number 600000, and then 600100, it will propagate.
600099 will not.

If some nodes have 600000 and others have 600099 (because you broke the
ratelimiting recommendation, *and* propagated both approx the same
time), then the network will split, sure.

We could be fascist and penalize nodes which do this, but that's
overkill unless it actually happens a lot.

Nodes which want to keep a potential update "up their sleeve" will
backdate updates by 101 blocks (everyone should do this, in fact).

As I said, this has a problem with block height differences, but that's
explicitly included in the messages so you can ignore and wait if you
want.  Again, may not be a problem in practice.
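
In code form, that acceptance rule looks something like this (a sketch,
assuming the 100-block limit and the 2016-block horizon above):

    def accept_update(update_height, last_height, current_height):
        if update_height > current_height:
            return False     # from the future: ignore, or wait for the block
        if update_height <= current_height - 2016:
            return False     # too old
        if last_height is not None and update_height < last_height + 100:
            return False     # rate-limited
        return True

    assert accept_update(600100, 600000, 600150)
    assert not accept_update(600099, 600000, 600150)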

>> Maybe.  What's a "non-update" based sketch?  Some huge percentage of
>> gossip is channel_update, so it's kind of the thing we want?
>
> Oops, maybe we're on *very* different pages, here - I mean doing sketches 
> based on "the things that 
> I received since the last sync, ie all the gossip updates from the last hour" 
> vs doing sketches 
> based on "the things I have, ie my full gossip store".

But that requires state.  Full store requires none, keeping it
super-simple.

Though Alex has an idea for an "include even the expired entries" then
"regenerate every N blocks" which avoids the problem that each change is
two deltas (one remove, one add), at cost of some complexity.

Cheers,
Rusty.


Re: [Lightning-dev] Gossip Propagation, Anti-spam, and Set Reconciliation

2022-04-28 Thread Rusty Russell
Matt Corallo  writes:
> On 4/26/22 11:53 PM, Rusty Russell wrote:
>> Matt Corallo  writes:
>>>> This same problem will occur if *anyone* does ratelimiting, unless
>>>> *everyone* does.  And with minisketch, there's a good reason to do so.
>>>
>>> None of this seems like a good argument for *not* taking the "send updates 
>>> since the last sync in
>>> the minisketch" approach to reduce the damage inconsistent policies
>>> cause, though?
>> 
>> You can't do this, with minisketch.  You end up having to keep all the
>> ratelimited differences you're ignoring *per peer*, and then cancelling
>> them out of the minisketch on every receive or send.
>
> Hmm? I'm a bit confused, let me attempt to restate to make sure we're on the 
> same page. What I 
> *think* you said here is: "If you have a node which is rejecting a large 
> percentage *channel*'s 
> updates (on a per-channel, not per-update basis), and it tries to sync, 
> you'll end up having to keep 
> some huge set of 'I dont want any more updates for that channel' on a 
> per-peer basis"? Or maybe you 
> might have said "When you rate-limit, you have to tell your peer that you 
> rate-limited a channel 
> update and that it shouldn't add that update to its next sketch"?

OK, let's step back.  Unlike Bitcoin, we can use a single sketch for
*all* peers.  This is because we *can* encode enough information that
you can get useful info from the 64 bit id, and because it's expensive
to create them so you can't spam.

The more boutique per-peer handling we need, the further it gets from
this ideal.
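
For illustration only (the real encoding isn't settled), the point is that
each element is derived purely from data every node agrees on, so one sketch
serves all peers:

    import hashlib

    def sketch_element(short_channel_id: int, update_height: int) -> int:
        # 64-bit set element for a channel_update, derived only from the
        # channel id and the update's rate-limited block height: nodes
        # following the same rate limit build identical sets, statelessly.
        h = hashlib.sha256(short_channel_id.to_bytes(8, "big")
                           + update_height.to_bytes(4, "big")).digest()
        return int.from_bytes(h[:8], "big")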

> The second potential thing I think you might have meant here I don't see as 
> an issue at all? You can 
> simply...let the sketch include one channel update that you ignored? See 
> above discussion of similar 
> rate-limits.

No, you need to get all the ignored ones somehow?  There's so much cruft
in the sketch you can't decode it.  Now you need to remember the ones
you ratelimited, and try to match other's ratelimiting.

>> So you end up doing that LND and core-lightning do, which is "pick 3
>> peers to gossip with" and tell everyone else to shut up.
>> 
>> Yet the point of minisketch is robustness; you can (at cost of 1 message
>> per minute) keep in sync with an arbitrary number of peers.
>> 
>> So, we might as well define a preferred ratelimit, so nodes know that
>> spamming past a certain point is not going to propagate.  At the moment,
>> LND has no effective ratelimit at all, so it's a race to the bottom.
>
> I agree there should be *some* rough consensus, but rate-limits are a 
> locally-enforced thing, not a 
> global one. There will always be races and updates you reject that your peers 
> dont, no matter the 
> rate-limit, and while I agree we should have guidelines, we can't "just make 
> them the same" - it 
> both doesn't solve the problem and means we can't change them in the future.

Sure it does!  It severely limits the set divergence to race conditions
(down to block height divergence, in practice).

> Ultimately, a updates-based sync is more robust in such a case - if there's 
> some race and your peer 
> accepts something you don't it may mean one more entry in the sketch one 
> time, but it won't hang 
> around forever.
>
>> We need that limit eventually, this just makes it more of a priority.
>> 
>>> I'm not really
>>> sure in a world where you do "update-based-sketch" gossip sync you're any 
>>> worse off than today even
>>> with different rate-limit policies, though I obviously agree there are 
>>> substantial issues with the
>>> massively inconsistent rate-limit policies we see today.
>> 
>> You can't really do it, since rate-limited junk overwhelms the sketch
>> really fast :(
>
> How is this any better in a non-update-based-sketch? The only way to address 
> it is to have a bigger 
> sketch, which you can do no matter the thing you're building the sketch over.
>
> Maybe lets schedule a call to get on the same page, throwing text at each 
> other will likely not move 
> very quickly.

Maybe.  What's a "non-update" based sketch?  Some huge percentage of
gossip is channel_update, so it's kind of the thing we want?

Cheers,
Rusty.


[Lightning-dev] [RELEASE] core-lightning v0.11.0: Simon's Carefully Chosen Release Name

2022-04-26 Thread Rusty Russell
We're pleased to announce the 0.11.0 release of c-lightning, named on
behalf of @SimonVrouwe.

This release is the first under the rebranded "Core Lightning" name.

(Note: binaries are labelled v0.11.0.1 due to a minor bugfix required
for reproducible builds).

Highlights for Users


* We now finally support multiple live channels to the same peer!
* If we detect rust support, we'll build the cln-grpc plugin for full
  native GRPC support.
* We advertize an external IP address if it's reported by two or more
  peers (disable-ip-discovery or always-use-proxy disables).
* You can specify two databases with --wallet, and we'll write to both
  at once (sqlite3 only).
* pay supports BOLT 11 payment metadata: we'll send it if it's in the invoice.
* New setchannel command (deprecates setchannelfee) allows setting max
  and min HTLC amounts. Try lightning-cli setchannel all 0 for #zerobasefee.
* pay can be forced to exclude channels or nodes with the exclude parameter.
* Significant speedup in start times for old nodes with many historical HTLCs.

Highlights for the Network
==

* We send the remote node's IP address in the init message, so they can
  tell what it is. (lightning/bolts#917)
* We are more aggressive in sending our own gossip to peers, to help 
propagation.
* Default port is set by network, so regtest and testnet defaults are
  different. (lightning/bolts#968)
* We never generate legacy onions: it's always TLV.
  We still forward legacy onions for now.
* We flush sockets before closing, so errors are more likely to reach the peer.
* Experimental support for announcing DNS addresses in node_announcement
  (lightning/bolts#911)

Highlights for Developers
=

* pay has a maxfee parameter, which sets a simple LND-style upper fee
  (vs using maxfeepercent and exemptfee)
* You can create invoices with only a description hash, using
  deschashonly.   We still store the full description, so use restraint!
* pay has deprecated paying solely by description hash: you should
  provide the full description, too.
* delinvoice has a new desconly parameter to simply trim the
  descriptions, but leave the rest intact.
* We have a rust crate cln-rpc to easily interact with our JSON-RPC.
* msggen tool allows easy generation of language bindings for our JSON
  RPC interface.

More details can be found in the CHANGELOG.md.

Thanks to everyone for their contributions and bug reports; please keep
them coming.

Since 0.10.2, we've had 712 commits from 37 different authors over 170 days.

A special thanks goes to the 18 first time contributors (a new record!):

Aaron Dewes
Tim W
manreo
Gregory Sanders
zero fee routing
Stephen Webel
Michael Dance
Marnix
lightning-developer
kiwiidb
Jules Comte
JohnOnGit
GoofyAF
Denis Ahrens
Clay Shoaf
benthecarman
azuchi
Anand Suresh

Cheers,
Christian, Rusty, Lisa.


Re: [Lightning-dev] Gossip Propagation, Anti-spam, and Set Reconciliation

2022-04-22 Thread Rusty Russell
Matt Corallo  writes:
>> Allowing only 1 a day, ended up with 18% of channels hitting the spam
>> limit.  We cannot fit that many channel differences inside a set!
>> 
>> Perhaps Alex should post his more detailed results, but it's pretty
>> clear that we can't stay in sync with this many differences :(
>
> Right, the fact that most nodes don't do any limiting at all and y'all have a 
> *very* aggressive (by 
> comparison) limit is going to be an issue in any context.

I'm unable to find the post years ago where I proposed this limit
and nobody had major objections.  I just volunteered to go first :)

> We could set some guidelines and improve 
> things, but luckily regular-update-sync bypasses some of these issues anyway 
> - if we sync once per 
> block and your limit is once per block, getting 1000 updates per block for 
> some channel doesn't 
> result in multiple failures in the sync. Sure, multiple peers sending 
> different updates for that 
> channel can still cause some failures, but its still much better.

Nodes will want to aggressively spam as much as they can, so I think we
need a widely-agreed limit.  I don't really care what it is, but
somewhere between once per block and once per 1000 blocks makes sense?

Normally I'd suggest a burst, but that's bad for consensus: better to
say "just create your update N-6 blocks behind so you can always create a
new one 6 blocks behind".

>>> gossip queries  is broken in at least five ways.
>> 
>> Naah, it's perfect if you simply want to ask "give me updates since XXX"
>> to get you close enough on reconnect to start using set reconciliation.
>> This might allow us to remove some of the other features?
>
> Sure, but that's *just* the "gossip_timestamp_filter" message, there's 
> several other messages and a 
> whole query system that we can throw away if we just want that message :)

I agree.  Removing features would be nice :)

>> But we might end up with a gossip2 if we want to enable taproot, and use
>> blockheight as timestamps, in which case we could probably just support
>> that one operation (and maybe a direct query op).
>> 
>>> Like eclair, we don’t bother to rate limit and don’t see any issues with 
>>> it, though we will skip relaying outbound updates if we’re saturating 
>>> outbound connections.
>> 
>> Yeah, we did as a trial, and in some cases it's become limiting.  In
>> particular, people restarting their LND nodes once a day resulting in 2
>> updates per day (which, in 0.11.0, we now allow).
>
> What do you mean "its become limiting"? As in you hit some reasonably-low 
> CPU/disk/bandwidth limit 
> in doing this? We have a pretty aggressive bandwidth limit for this kinda 
> stuff (well, indirect 
> bandwidth limit) and it very rarely hits in my experience (unless the peer is 
> very overloaded and 
> not responding to pings, which is a somewhat separate thing...)

By rejecting more than 1 update per day, some LND nodes had 50% of their
channels left disabled :(

This same problem will occur if *anyone* does ratelimiting, unless
*everyone* does.  And with minisketch, there's a good reason to do so.

Cheers,
Rusty.


Re: [Lightning-dev] Gossip Propagation, Anti-spam, and Set Reconciliation

2022-04-21 Thread Rusty Russell
Matt Corallo  writes:
> Sure, if you’re rejecting a large % of channel updates in total
> you’re gonna end up hitting degenerate cases, but we can consider
> tuning the sync frequency if that becomes an issue.

Let's be clear: it's a problem.

Allowing only 1 a day ended up with 18% of channels hitting the spam
limit.  We cannot fit that many channel differences inside a set!

Perhaps Alex should post his more detailed results, but it's pretty
clear that we can't stay in sync with this many differences :(

> gossip queries  is broken in at least five ways.

Naah, it's perfect if you simply want to ask "give me updates since XXX"
to get you close enough on reconnect to start using set reconciliation.
This might allow us to remove some of the other features?

But we might end up with a gossip2 if we want to enable taproot, and use
blockheight as timestamps, in which case we could probably just support
that one operation (and maybe a direct query op).

> Like eclair, we don’t bother to rate limit and don’t see any issues with it, 
> though we will skip relaying outbound updates if we’re saturating outbound 
> connections.

Yeah, we did as a trial, and in some cases it's become limiting.  In
particular, people restarting their LND nodes once a day resulting in 2
updates per day (which, in 0.11.0, we now allow).

Cheers,
Rusty.


Re: [Lightning-dev] [RFC] Lightning gossip alternative

2022-03-23 Thread Rusty Russell
annels which aren't actually
> anchored in the chain, which I *think* is a cool thing to have? On the other
> hand, the graph at the _edge_ level would be far more dynamic than it is
> today (Bob can advertise an entirely distinct topology from one day to
> another). Would need to think about the implications here for path finding
> algorithms and nodes that want to maintain an update to date view of the
> network...
>
>> - `channel_id_and_claimant` is a 31-bit per-node channel_id which can be
>> used in onion_messages, and a one bit stolen for the `claim` flag.
>
> Where would this `channel_id` be derived from? FWIW, using this value in the
> onion means we get a form of pubkey based routing [3] depending on how these
> are derived.

Yeah, we could drop this altogether if we wanted; its just a unique
identifier for that specific node to refer to a peer.  It saves space in
the onion, that is all.

>> This simplifies gossip, requiring only two messages instead of three,
>> and reducing the UTXO validation requirements to per-node instead of
>> per-channel.
>
> I'm not sure this would actually _simplify_ gossip in practice, given that
> we'd be moving to a channel graph that isn't entirely based in the reality
> of what's routable, and would be far more dynamic than it is today.
>
>> We can use a convention that a channel_update_v2 with no `capacity` is a
>> permanent close.
>
> On the neutrino side, we tried to do something where if we see both channels
> be disabled, then we'd mark the channel as closed. But in practice if you're
> not syncing _every_ channel update every transmitted, then you'll end up
> actually missing them.

Yeah, we may also want the 2-week / 2000-block refresh here as well?

>> It also allows "leasing" of UTXOs: you could pay someone to sign their
>> UTXO for your node_announcement, with some level of trust.
>
> I'm not sure this is entirely a *good* thing, as the graph becomes more
> decoupled with the _actual_ on-chain relationships...

Which is the aim (though not sure how many ppl would do it in
practice?).

Cheers,
Rusty.

> -- Laolu
>
> [1]:
> https://lists.linuxfoundation.org/pipermail/lightning-dev/2022-March/003526.html
> [2]: https://github.com/lightning/bolts/pull/814
>
>
> On Tue, Feb 15, 2022 at 12:46 AM Rusty Russell 
> wrote:
>
>> Hi all,
>>
>> I've floated this idea before, but this is a more concrete
>> proposal for a "v2" gossip protocol.
>>
>> It assumes x-only pubkeys (aka point32) and BIP-340 signatures, and uses
>> modern TLVs for all optional or extensible fields.  Timestamps are
>> replaced with block heights.
>>
>> 1. Nodes send out weekly node_announcement_v2, proving they own some
>>UTXOs.
>> 2. This entitles them to broadcast channels, using channel_update_v2; a
>>channel_update_v2 from both peers means the channel exists.
>> 3. This uses UTXOs for anti-spam, but doesn't tie them to channels
>>directly.
>> 4. Future ZKP proofs could be added.
>>
>> 1. type: 271 (`node_announcement_v2`)
>> 2. data:
>> * [`bip340sig`:`signature`]
>> * [`point32`:`node_id`]
>> * [`u32`:`blockheight`]
>> * [`node_announcement_v2_tlvs`:`tlvs`]
>>
>> 1. `tlv_stream`: `node_announcement_v2_tlvs`
>> 2. types:
>> 1. type: 2 (`features`)
>> 2. data:
>> * [`...*byte`:`featurebits`]
>> 1. type: 3 (`chain_hash`)
>> 2. data:
>> * [`chain_hash`:`chain`]
>> 1. type: 4 (`taproot_utxo_proofs`)
>> 2. data:
>> * [`...*taproot_utxo_proof`:`proofs`]
>> 1. type: 6 (`legacy_utxo_proofs`)
>> 2. data:
>> * [`...*legacy_utxo_proof`:`proofs`]
>> 1. type: 127 (`ipv4_addresses`)
>> 2. data:
>> * [`...*ipv4`:`addresses`]
>> 1. type: 129 (`ipv6_addresses`)
>> 2. data:
>> * [`...*ipv6`:`addresses`]
>> 1. type: 131 (`torv3_addresses`)
>> 2. data:
>> * [`...*torv3`:`addr`]
>> # Maybe alias, color, etc?
>>
>> Taproot proofs are a signature of the v1 output over the `node_id`,
>> `utxo` and `blockheight` with prefix "lightingtaproot_utxo_proofsig"
>> (using the tagged signatures as per BOLT12). (FIXME: what about
>> tapscripts?).
>>
>> Legacy proofs are two signatures, similar to the existing
>> channel_announcement.
>>
>> 1. subtype: `taproot_utxo_proof`
>> 2. data:
>> * [`short_channel_id`:`utxo`]
>> * [`signature`:`sig`]
>>
>> 1. subtype: `legacy_utxo_proof`
>> 2. data:
>> * [`short

Re: [Lightning-dev] A Proposal for Adding Bandwidth Metered Payment to Onion Messages

2022-02-23 Thread Rusty Russell
Olaoluwa Osuntokun  writes:
> Hi y'all,
>
> (TL;DR: a way to nodes to get paid to forward onion messages by adding an
> upfront session creation phase that uses AMP tender a messaging session to a
> receiver, with nodes being paid upfront for purchase of forwarding
> bandwidth, and a session identifier being transmitted alongside onion
> messages to identify paid sessions)

AMP seems to be a Lightning Labs proprietary extension.  You mean
keysend, which at least has a draft spec?

> Onion messaging has been proposed as a way to do things like fetch invoices
> directly from a potential receiver _directly_ over the existing LN. The
> current proposal (packaged under the BOLT 12 umbrella) uses a new message
> (`onion_message`) that inherits the design of the existing Sphinx-based
> onion blob included in htlc_add messages as a way to propagate arbitrary
> messages across the network. Blinded paths which are effectively an unrolled
> Sphinx SURB (single use reply block), are used to support reply messages in
> a more private manner. Compared to SURBs, blinded paths are more flexible as
> they don't lock in things like fees or CLTV values.
>
> A direct outcome of widespread adoption of the proposal is that the scope of
> LN is expanded beyond "just" a decentralized p2p payment system, with the

Sure, let's keep encouraging people to use HTLCs for free to send data?
I can certainly implement that if you prefer!

>  1. As there's no explicit session creation/acceptance, a node can be
>  spammed with unsolicited messages with no way to deny unwanted messages nor
>  explicitly allow messages from certain senders.
>
>  2. Nodes that forward these messages (up to 32 KB per message) receive no
>  compensation for the network bandwidth their expend, effectively shuffling
>  around messages for free.
>
>  3. Rate limiting isn't concretely addressed, which may result in
>  heterogeneous rate limiting policies enforced around the network, which can
>  degrade the developer/user experience (why are my packets being randomly
>  dropped?).

Sure, this is a fun one!  I can post separately on ratelimiting; I
suggest naively limiting to 10/sec for peers with channels, and 1/sec
for peers without for now.

(In practice, spamming with HTLCs is infinitely more satisfying...)

> In this email I propose a way to address the issues mentioned above by
> adding explicit onion messaging session creation as well as a way for nodes
> to be (optionally) paid for any onion messages they forward. In short, an
> explicit session creation phase is introduced, with the receiver being able
> to accept/deny the session. If the session is accepted, then all nodes that
> comprise the session route are compensated for allotting a certain amount of
> bandwidth to the session (which is ephemeral by nature).

It's an interesting layer on top (esp if you want to stream movies), but
I never proposed this because it seems to add a source-identifying
session id, which is a huge privacy step backwards.

You really *do not want* to use this for independent transmissions.

I flirted with using blinded tokens, but it gets complex fast; ideas
welcome!

> ## Node Announcement TLV Extension
>
> In order to allow nodes to signal that they want to be paid to forward onion
> messages and also specify their pricing, we add two new TLV to the node_ann
> message:
>
>   * type: 1 (`sats_per_byte`)
>* data:
>   * [`uint64`:`forwarding_rate`]
>   * type: 2 (`sats_per_block`)
>* data:
>   * [`uint64`:`per_block_rate`]

You mean:

   * type: 1 (`sats_per_byte`)
   * data:
   * [`tu64`:`forwarding_rate`]
   * type: 3 (`sats_per_block`)
   * data:
   * [`tu64`:`per_block_rate`]

1. Don't use an even TLV field.
2. Might as well use truncated u64.
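
For reference, a truncated u64 is just the big-endian encoding with leading
zero bytes dropped (per BOLT 1; sketch):

    def encode_tu64(value: int) -> bytes:
        return value.to_bytes(8, "big").lstrip(b"\x00")

    assert encode_tu64(0) == b""
    assert encode_tu64(1) == b"\x01"
    assert encode_tu64(500_000) == b"\x07\xa1\x20"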

> The `sats_per_byte` field allows nodes to price their bandwidth, ensuring
> that they get paid for each chunk of allocated bandwidth. As sessions have a
> fixed time frame and nodes need to store additional data within that time
> frame, the `sats_per_block` allows nodes to price this cost, as they'll hold
> onto the session identifier information until the specified block height
> (detailed below).
>
> As onion messages will _typically_ be fixed-sized we may want to use coarser
> metering here instead of bytes, possibly paying for 1.3KB or 32 KB chunks
> instead.

I think it's a premature optimization?  Make standard duration 2016
blocks; then they can request multiples if they want?  Reduces
node_announcement size.
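
Under those fields, the upfront price of a session at a single hop would
presumably be something like (a sketch; the proposal doesn't pin the formula
down):

    def session_cost_sats(bandwidth_bytes, duration_blocks,
                          sats_per_byte, sats_per_block):
        # Pay for the bandwidth purchased plus the blocks the forwarder
        # must remember the session identifier for.
        return (bandwidth_bytes * sats_per_byte
                + duration_blocks * sats_per_block)

    # e.g. ten 32 KB messages over one standard 2016-block duration:
    cost = session_cost_sats(10 * 32_768, 2016, sats_per_byte=1, sats_per_block=1)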

> With the above nodes are able to express that they're willing to forward
> messages for sats, and how much they charge per byte as well as per block.
> Next we add a new TLV in the _existing_ HTLC onion blob that allows a
> sending node to tender paid onion message session creation. A sketch of this
> type would look something like:
>
>   * type: 14 (`onion_session_id`)
> * data:
>   * [`32*byte`:`session_id`]

I'd be tempted to use 16 bytes?  Collisions here are not really a 

Re: [Lightning-dev] [RFC] Lightning gossip alternative

2022-02-20 Thread Rusty Russell
ZmnSCPxj  writes:
> Good morning rusty,
>
> If we are going to switch to a new gossip version, should we prepare now for 
> published channels that are backed by channel factories?

This is already true with the new proposal: channels don't have to be
"real".  It's possible to raise the required ratio later, so less UTXO
proof is required (since channels are prioritized by their id, you can
choose which ones older nodes will see.

Cheers,
Rusty.


Re: [Lightning-dev] [RFC] Lightning gossip alternative

2022-02-15 Thread Rusty Russell
Joost Jager  writes:
> Hi Rusty,
>
> Nice to see the proposal in more concrete terms. Few questions:
>
> - The total proved utxo value (not counting any utxos which are spent)
>>   is multiplied by 10 to give the "announcable_channel_capacity" for that
>> node.
>>
>
> Could this work as a dynamic value too, similar to the minimum relay fee on
> L1?

I made the number up, so I'm not very attached to it.  How would we
choose to scale it though?  If nodes don't use the same value it makes
for wasted gossip (and minisketch-based gossip much harder).

>> 1. `tlv_stream`: `channel_update_v2_tlvs`
>>
> 2. types:
>> 1. type: 4 (`capacity`)
>> 2. data:
>> * [`tu64`:`satoshis`]
>>
>
> What does capacity mean exactly outside the context of a real channel? Will
> this be reduced to that maximum htlc amount that the nodes want to route,
> to save as much of the announceable budget as possible?

Yes, it's the old htlc_maximum_msat, but expressed in satoshis because
msat seems too fine-grained?

> It is also the question of whether 10 x 10k channels should weigh as much
> on the budget as a 1 x 100k channel. A spammer may be able to do more harm
> with multiple smaller channels because there is more for the sender's
> pathfinding algorithms to explore. Maybe it doesn't matter as long as there
> is some mechanism to discourage spam.

Yes, I suggested a minimum cost to avoid 100k 1sat channels, I don't
know if we should be more sophisticated.

>> 1. type: 5 (`cost`)
>> 2. data:
>>* [`u16`:`cltv_expiry_delta`]
>>* [`u32`:`fee_proportional_millionths`]
>>* [`tu32`:`fee_base_msat`]
>> 1. type: 6 (`min_msat`)
>> 2. data:
>> * [`tu64`:`min_htlc_sats`]
>>
>> - `channel_id_and_claimant` is a 31-bit per-node channel_id which can be
>>   used in onion_messages, and a one bit stolen for the `claim` flag.
>
> If you'd increase the budget multiplier from 10 to 20, couldn't this be
> simplified to always applying the cost to both nodes?

Yes!  I forgot that capacity doesn't have to be symmetrical; if I open a
giant channel with you, and you don't want to expose more UTXOs, you
just set your capacity to some lower value.

>> - A channel is not considered to exist until both peers have sent a
>>   channel_update_v2, at least one of which must set the `claim` flag.
>> - If a node sets `claim`, the capacity of the channel is subtracted from
>>   the remaining announcable_channel_capacity for that node (minimum
>>   10,000 sats).
>
> Same question about magic value and whether it can be dynamic.

Yes, 10k sat might be a giant amount one day?  We can change it naively
with a feature bit ("I use v2 capacity calcs"), or in some more
fine-grained way, but what do we base it on?

Cheers!
Rusty.


[Lightning-dev] [RFC] Lightning gossip alternative

2022-02-15 Thread Rusty Russell
Hi all,

I've floated this idea before, but this is a more concrete
proposal for a "v2" gossip protocol.

It assumes x-only pubkeys (aka point32) and BIP-340 signatures, and uses
modern TLVs for all optional or extensible fields.  Timestamps are
replaced with block heights.

1. Nodes send out weekly node_announcement_v2, proving they own some
   UTXOs.
2. This entitles them to broadcast channels, using channel_update_v2; a
   channel_update_v2 from both peers means the channel exists.
3. This uses UTXOs for anti-spam, but doesn't tie them to channels
   directly.
4. Future ZKP proofs could be added.

1. type: 271 (`node_announcement_v2`)
2. data:
* [`bip340sig`:`signature`]
* [`point32`:`node_id`]
* [`u32`:`blockheight`]
* [`node_announcement_v2_tlvs`:`tlvs`]

1. `tlv_stream`: `node_announcement_v2_tlvs`
2. types:
1. type: 2 (`features`)
2. data:
* [`...*byte`:`featurebits`]
1. type: 3 (`chain_hash`)
2. data:
* [`chain_hash`:`chain`]
1. type: 4 (`taproot_utxo_proofs`)
2. data:
* [`...*taproot_utxo_proof`:`proofs`]
1. type: 6 (`legacy_utxo_proofs`)
2. data:
* [`...*legacy_utxo_proof`:`proofs`]
1. type: 127 (`ipv4_addresses`)
2. data:
* [`...*ipv4`:`addresses`]
1. type: 129 (`ipv6_addresses`)
2. data:
* [`...*ipv6`:`addresses`]
1. type: 131 (`torv3_addresses`)
2. data:
* [`...*torv3`:`addr`]
# Maybe alias, color, etc?

Taproot proofs are a signature of the v1 output over the `node_id`,
`utxo` and `blockheight` with prefix "lightningtaproot_utxo_proofsig"
(using tagged signatures as per BOLT 12).  (FIXME: what about
tapscripts?)

Legacy proofs are two signatures, similar to the existing
channel_announcement.

1. subtype: `taproot_utxo_proof`
2. data:
* [`short_channel_id`:`utxo`]
* [`signature`:`sig`]

1. subtype: `legacy_utxo_proof`
2. data:
* [`short_channel_id`:`utxo`]
* [`point`:`bitcoin_key_1`]
* [`point`:`bitcoin_key_2`]
* [`signature`:`sig_1`]
* [`signature`:`sig_2`]
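
A sketch of building the digest the v1 output key would sign, BIP-340
tagged-hash style (the tag string is from above; the exact field
encodings and order are my assumptions, not spec'd):

    import hashlib

    TAG = b"lightningtaproot_utxo_proofsig"

    def tagged_hash(tag: bytes, msg: bytes) -> bytes:
        # BIP-340 tagged hash: SHA256(SHA256(tag) || SHA256(tag) || msg)
        th = hashlib.sha256(tag).digest()
        return hashlib.sha256(th + th + msg).digest()

    def proof_digest(node_id: bytes, utxo_scid: bytes, blockheight: int) -> bytes:
        # node_id: 32-byte x-only key; utxo_scid: 8-byte short_channel_id;
        # blockheight: u32 big-endian (encoding assumed)
        assert len(node_id) == 32 and len(utxo_scid) == 8
        msg = node_id + utxo_scid + blockheight.to_bytes(4, "big")
        return tagged_hash(TAG, msg)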

- node_announcement_v2 are discarded after a week (1000 blocks).
- If two node_announcement_v2 claim the same UTXO, use the first seen,
  discard any others.
- Nodes do not need to monitor existence of UTXOs after initial check (since
  they will automatically prune them after a week).
- The total proved utxo value (not counting any utxos which are spent)
  is multiplied by 10 to give the "announcable_channel_capacity" for that node.
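
A minimal sketch of that budget computation, including the first-seen
deduplication rule (the data structures are mine; `utxo_values` stands in
for a UTXO-set lookup):

    def announcable_channel_capacity(node_id: bytes, proved_utxos: list,
                                     utxo_values: dict, first_seen: dict) -> int:
        # proved_utxos: short_channel_ids from a valid node_announcement_v2
        # utxo_values: scid -> value in sats, for currently unspent outputs
        # first_seen: scid -> node_id of the first announcement claiming it
        total = 0
        for scid in proved_utxos:
            if scid not in utxo_values:
                continue                  # spent/unknown: contributes nothing
            if first_seen.setdefault(scid, node_id) != node_id:
                continue                  # claimed first by someone else
            total += utxo_values[scid]
        return total * 10                 # the (made-up) 10x multiplier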

1. type: 273 (`channel_update_v2`)
2. data:
* [`bip340sig`:`signature`]
* [`point32`:`local_node_id`]
* [`point32`:`remote_node_id`]
* [`u32`:`blockheight`]
* [`u32`:`channel_id_and_claimant`]
* [`channel_update_v2_tlvs`:`tlvs`]

1. `tlv_stream`: `channel_update_v2_tlvs`
2. types:
1. type: 2 (`features`)
2. data:
* [`...*byte`:`featurebits`]
1. type: 3 (`chain_hash`)
2. data:
* [`chain_hash`:`chain`]
1. type: 4 (`capacity`)
2. data:
* [`tu64`:`satoshis`]
1. type: 5 (`cost`)
2. data:
   * [`u16`:`cltv_expiry_delta`]
   * [`u32`:`fee_proportional_millionths`]
   * [`tu32`:`fee_base_msat`]
1. type: 6 (`min_msat`)
2. data:
* [`tu64`:`min_htlc_sats`]

- `channel_id_and_claimant` is a 31-bit per-node channel_id which can be
  used in onion_messages, and a one bit stolen for the `claim` flag.
- A channel is not considered to exist until both peers have sent a
  channel_update_v2, at least one of which must set the `claim` flag.
- If a node sets `claim`, the capacity of the channel is subtracted from
  the remaining announcable_channel_capacity for that node (minimum
  10,000 sats).
- If there is insufficient total `announcable_channel_capacity` for a
  node, it is used by the lower `channel_id`s first.
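
Putting the last three rules together, a sketch (which bit carries the
claim flag is my assumption; the text above doesn't pin it down):

    MIN_COST_SATS = 10_000                # the per-claim floor from above

    def pack_channel_id_and_claimant(channel_id: int, claim: bool) -> int:
        # 31-bit per-node channel_id plus the stolen claim bit
        assert 0 <= channel_id < (1 << 31)
        return (int(claim) << 31) | channel_id

    def claimed_within_budget(budget_sats: int, claims: list) -> list:
        # claims: (channel_id, capacity_sats) pairs this node set `claim` on;
        # lower channel_ids get first call on the budget.
        kept = []
        for cid, capacity in sorted(claims):
            cost = max(capacity, MIN_COST_SATS)
            if cost <= budget_sats:
                budget_sats -= cost
                kept.append(cid)
        return kept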

Implications


This simplifies gossip, requiring only two messages instead of three,
and reducing the UTXO validation requirements to per-node instead of
per-channel.  We can use a convention that a channel_update_v2 with no
`capacity` is a permanent close.

We might want to add a taproot_utxo_delegated_proof where UTXOs can
sign over to another key, so cold storage only needs to sign once, and
the other key can sign weekly.

It also allows "leasing" of UTXOs: you could pay someone to sign their
UTXO for your node_announcement, with some level of trust.  They could
spend the UTXO, which gives you a slow degradation as new nodes don't
accept your channels but existing nodes don't check until it's due for a
refresh.  Or they could sign one UTXO for multiple node_announcements,
which is why preference is given to the first-seen.  But it's a weekly
business, so there's an incentive for them not to.

Between nodes there's the question of "who claims this new channel?",
which I didn't address here.  Having the opener claim it is logical, but
leaks information (though you could definitely pay the peer to claim it).
With dual-funding, it's more 

Re: [Lightning-dev] A Mobile Lightning User Goes to Pay a Mobile Lightning User...

2021-12-02 Thread Rusty Russell
Matt Corallo  writes:
> The Obvious (tm) solution here is PTLCs - just have the sender always add 
> some random nonce * G to 
> the PTLC they're paying and send the recipient a random nonce in the onion. 
> I'd generally suggest we 
> just go ahead and do this for every PTLC payment, cause why not?

AFAICT we need to do this for decorrelation: each hop has a unique tweak
(probably derived from the onion shared secret).

In bolt12, we have the additional problem for the tipping case: each
invoice contains an amount, so you can't preprint amountless invoices.
(This plugs a hole in bolt11 for this case, where you get a receipt but
no amount!).

However, I think the best case is a generic authorization mechanism:

1. The offer contains a fallback node.
2. Fallback either returns you an invoice signed by the node you expect, *or*
   one signed by itself and an authorization from the node you expect.
3. The authorization might be only for a particular offer, or amount, or
   have an expiry.  *handwave*.

This lets the user choose the trust model they want.  The fallback node
may also provide an onion message notification service when the real
node comes back, to avoid polling.

Cheers,
Rusty.


Re: [Lightning-dev] Turbo channels spec?

2021-08-11 Thread Rusty Russell
Sorry this took so long.

https://github.com/lightningnetwork/lightning-rfc/pull/895

This changed quite a bit, based on discussion here and more coherent
thinking.

1. You can simply send funding_locked early, no feature needed.
2. It's a bit useless unless you are the (sole) funder or you trust the
   other side.  Without that, you can neither accept payments nor route
   them; in theory if they used push_msat you could send payments out,
   but that seems a niche case.
3. We do want to know the short_channel_id they're going to use for the
   channel, so we can add it to routehints for incoming payments.

Adding the scid is nice anyway, for chainsplit scenarios.

Here is the new text, a little formatted:

1. `tlv_stream`: `funding_locked_tlvs`
2. types:
1. type: 1 (`short_channel_id`)
2. data:
* [`short_channel_id`:`short_channel_id`]
 
 Requirements

The sender:
...
 - SHOULD set `short_channel_id`

 - if it is the sole contributor to the funding transaction, or
   has reason to trust the peer:

- MAY send `funding_locked` before the funding transaction
  has reached `minimum_depth`
- MAY set `short_channel_id` to a fake value, if it will
  route payments to that `short_channel_id`.
  - otherwise:
- MUST wait until the funding transaction has reached
  `minimum_depth` before sending this message.

  - SHOULD re-transmit `funding_locked` if the
`short_channel_id` for this channel has changed.
...
The receiver:
  - SHOULD ignore the `funding_locked` if it knows the
`short_channel_id` of the channel and it differs from the
value in `funding_locked`.

...

Nodes which have funded the channel, or trust their peers to have done
so, can simply start using the channel instantly by sending
`funding_locked`.  This raises the problem of how to use this new
channel in route hints, since it does not yet have a block number.
For this reason, a convincing fake number can be used; when the real
funding transaction is finally mined, it can re-send `funding_locked`
with the real value.
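
For reference, the standard BOLT 7 short_channel_id packing is what makes
a convincing fake cheap to mint (helpers are a sketch):

    def encode_scid(block: int, txindex: int, outindex: int) -> int:
        # 3 bytes block height | 3 bytes tx index | 2 bytes output index
        assert block < (1 << 24) and txindex < (1 << 24) and outindex < (1 << 16)
        return (block << 40) | (txindex << 16) | outindex

    def decode_scid(scid: int) -> tuple:
        return (scid >> 40, (scid >> 16) & 0xFFFFFF, scid & 0xFFFF)

    # A "convincing fake" just uses a plausible recent block height and
    # arbitrary indices; the real value replaces it once the funding
    # transaction confirms.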



Re: [Lightning-dev] Error Codes for LN

2021-07-06 Thread Rusty Russell
Carla Kirk-Cohen  writes:
> Hi all,

Hi Carla,

I apologize for not responding to this earlier, but it was
raised again in the recent spec meeting
(https://lightningd.github.io/meetings/ln_spec_meeting/2021/ln_spec_meeting.2021-07-05-20.06.log.html).

I love the idea of more specific error codes, BTW!

Feedback interleaved:

> Since we shouldn’t have non-ascii values in the error string itself,
> this can most easily be achieved by adding TLV fields after the
> data field. In terms of supporting nodes that have not upgraded,
> we could either include the error code in the data field to cover
> our bases, or introduce a feature bit so that we know whether
> to backfill the data field. This gives upgraded nodes an improved
> quality of life, while leaving older nodes unaffected.

Older nodes should definitely ignore extra fields; it's in the spec and
we've relied on this to extend messages in the past, so this part is
easy.

Technically, all defined types are now assumed to have an optional TLV
appended, since f068dd0d (Bolt 1: Specify that extensions to existing
messages must use TLV (#754)).

> While we can’t enumerate every possible error, there are quite
> a few cases in the spec where we can introduce explicit error
> codes. For the sake of the skim-readers, I’ve left that list at
> the end of the email.
>
> Taking the example of our node receiving an invalid signature for
> a htlc, a new error would look like this:

I think this is both too much, and not enough.

Too much:
- Many of these errors are "your implementation is broken", which is
  really not something actionable by the recipient.
- A lot of work to fill in all these error cases, which (because
  they're usually impossible) will be untested and broken.

Not enough:
- Look at the proposal for channel_types, where you would object to the
  channel_type if you don't like it.  This would be grouped under
  "Funding params unacceptable", which is actually 99% of errors at this
  point and does not say what the problem is with specificity.

I took a different approach with onion messages[1], where you (optionally)
specify the field number, and even an optional suggested value:

1. type: 1 (`erroneous_field`)
2. data:
* [`tu64`:`tlv_fieldnum`]
1. type: 3 (`suggested_value`)
2. data:
* [`...*byte`:`value`]
1. type: 5 (`error`)
2. data:
* [`...*utf8`:`msg`]

In our case, we need to refer to which message (if any) caused the
error, and we have non-tlv fields, so it can't simply use the tlv field
number.

Here's my straw proposal:

1. `tlv_stream`: `error_tlvs`
2. types:
1. type: 1 (`erroneous_message`)
2. data:
* [`...*byte`:`message`]
1. type: 3 (`erroneous_fieldnum`)
2. data:
* [`tu64`:`fieldnum`]
1. type: 5 (`suggested_value`)
2. data:
* [`...*byte`:`value`]

erroneous_message is the message we're complaining about (including
2-byte type), which may be truncated (but must be at least 2 bytes).

fieldnum is either the 0-based field number (for fixed fields), or the
number of fixed fields + the tlv type (for tlv fields).

suggested_value is the optional value if we have an idea of what we
expected / prefer.
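
A worked example of the numbering (the fixed-field layout here is
illustrative; check the BOLT for the real one):

    # Suppose a message has six fixed fields: channel_id (0), id (1),
    # amount_msat (2), payment_hash (3), cltv_expiry (4),
    # onion_routing_packet (5).
    NUM_FIXED = 6

    def fieldnum_fixed(index: int) -> int:
        return index                  # 0-based position of a fixed field

    def fieldnum_tlv(tlv_type: int) -> int:
        return NUM_FIXED + tlv_type   # fixed-field count plus the TLV type

    assert fieldnum_fixed(3) == 3     # complaining about payment_hash
    assert fieldnum_tlv(1) == 7       # complaining about TLV type 1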

> This new kind of error provides us with an error code that tells us
> exactly what has gone wrong, and metadata pointing to the htlc
> with an invalid sig. This information can be logged, or stored in a
> more permanent error store to help diagnose issues in the future.
>
> Right now, the spec is pretty strict on error handling [13], indicating
> that senders/recipients of errors `MUST` fail the channel referenced
> in the error.

> This isn’t very practical, and I believe that the majority
> of the impls don’t abide by this instruction.

This was inevitable eventually, but c-lightning deliberately treated
errors as fatal for a long time so people would *notice* and *report*
these issues.

To be fair, *LND* didn't treat them as fatal.  So naturally your
engineers didn't *think* of them as a big deal (and testing didn't show
it up), and it would send errors in cases where it clearly didn't want
to close the channel (e.g. peer too slow to respond!).

Hence this PR, which makes these less fatal, and adds warning support:

https://github.com/lightningnetwork/lightning-rfc/pull/834

(We should similarly add this TLV to warnings?)

> Candidates for error codes:

The vast majority of these are "contact your developer, peer says we did
something illegal".  Which is always nice to have more information
about, but not critical.

The exceptions are:

> Funding Process:
> * Funding process timeout [2]
> * Fees out of range [3]
> * Funding tx spent [11]
> * Funding params unacceptable (eg, channel too small)
...
> Channel State Machine:
> * HTLC timeout [4]
...
> Fee Updates
> * Update fee too low/high [9]
...

And BTW this one is misguided:

> * Feature bit required

If Alice says a feature bit is even (compulsory), and 

Re: [Lightning-dev] Turbo channels spec?

2021-07-06 Thread Rusty Russell
ZmnSCPxj  writes:
> Mostly nitpick on terminology below, but I think text substantially like the 
> above should exist in some kind of "rationale" section in the BOLT, so ---
>
> In light of dual-funding we should avoid "funder" and "fundee" in favor of 
> "initiator" and "acceptor".

Yes, Lisa has a patch for this in her spec PR :)

> So what matters for the above rationale is the "sender" of an HTLC and the 
> "receiver" of an HTLC, not really who is acceptor or initiator.
>
> * Risks for HTLC sender is that the channel never confirms, but it probably 
> ignores the risk because it can close onchain (annoying, and fee-heavy, but 
> not loss of funds caused by peer).
> * Risks for HTLC receiver is that the channel never confirms, so HTLC must 
> not be routed out to others or resolved locally if the receiver already knows 
> the preimage, UNLESS the HTLC receiver has some *other* reason to trust the 
> peer.

This misses an important case: even with the dual-funding protocol,
single-sided funding is more common.

So:
  - if your peer hasn't contributed funds:
- You are in control, channel is safe (modulo your own conf issues)
  - if the peer has contributed funds:
- You can send, since cancellation just gives you a free refund (if
  you contributed anything at all).
- You should not route an incoming HTLCs (unless you trust peer)

Cheers,
Rusty.


Re: [Lightning-dev] Turbo channels spec?

2021-07-04 Thread Rusty Russell
Matt Corallo  writes:
> Thanks!
>
> On 6/29/21 01:34, Rusty Russell wrote:
>> Hi all!
>> 
>>  John Carvalo recently pointed out that not every implementation
>> accepts zero-conf channels, but they are useful.  Roasbeef also recently
>> noted that they're not spec'd.
>> 
>> How do you all do it?  Here's a strawman proposal:
>> 
>> 1. Assign a new feature bit "I accept zeroconf channels".
>> 2. If both negotiate this, you can send update_add_htlc (etc) *before*
>> funding_locked without the peer getting upset.
>
> Does it make sense to negotiate this per-direction in the channel init 
> message(s)? There's a pretty different threat 
> model between someone spending a dual-funded or push_msat balance vs someone 
> spending a classic channel-funding balance.

channel_types fixes this :)

Until then, I'd say keep it simple.  I would think that c-lightning will
implement the "don't route from non-locked-in channels" and always
advertize this option.  That means we're always offering zero-conf
channels, but that seems harmless:

- Risks for funder is that channel never confirms, but it probably ignores
  the risk because it can close onchain (annoying, and fee-heavy, but not
  loss of funds caused by peer).

- Risks for fundee (or DF channels where peer contributes any funds) is
  that funder doublespends, so HTLCs must not be routed out to others
  (unless you have other reason to trust peer).

Cheers,
Rusty.


Re: [Lightning-dev] Turbo channels spec?

2021-06-29 Thread Rusty Russell
Bastien TEINTURIER  writes:
> Hi Rusty,
>
> On the eclair side, we instead send `funding_locked` as soon as we
> see the funding tx in the mempool.
>
> But I think your proposal would work as well.

This would be backward compatible, I think.  Eclair would send
`funding_locked`, which is perfectly legal, but a normal peer would
still wait for confirms before also sending `funding_locked`; it's
just that option_zeroconf_channels would mean it doesn't have to
wait for that before sending HTLCs?

> We may want to defer sending `announcement_signatures` until
> after the funding tx has been confirmed? What `min_depth` should
> we use here? Should we keep a non-zero value in `accept_channel`
> or should it be zero?

You can't send it before you know the channel_id, so it has to be at
least 1.  Spec says:

  - MUST NOT send `announcement_signatures` messages until `funding_locked`
  has been sent and received AND the funding transaction has at least six 
confirmations.

So still compliant there?

Cheers,
Rusty.


[Lightning-dev] Turbo channels spec?

2021-06-28 Thread Rusty Russell
Hi all!

John Carvalo recently pointed out that not every implementation
accepts zero-conf channels, but they are useful.  Roasbeef also recently
noted that they're not spec'd.

How do you all do it?  Here's a strawman proposal:

1. Assign a new feature bit "I accept zeroconf channels".
2. If both negotiate this, you can send update_add_htlc (etc) *before*
   funding_locked without the peer getting upset.
3. Nodes are advised *not* to forward HTLCs from an unconfirmed channel
   unless they have explicit reason to trust that node (they can still
   send *out* that channel, because that's not their problem!).

It's a pretty simple change, TBH (this zeroconf feature would also
create a new set of channel_types, altering that PR).

I can draft something this week?

Thanks!
Rusty.


[Lightning-dev] Upgrade on reestablish.

2021-05-06 Thread Rusty Russell
I wanted this for my simplified update commitment draft, so here it is.
Would be nice to upgrade those old channels to static remotekey and
anchors (yeah, it's still on my TODO) so we could drop the old stuff out
of implementations and finally out of the spec!

https://github.com/lightningnetwork/lightning-rfc/pull/868

Inline copy below!

Cheers,
Rusty.

### Upgrading Channels

Upgrading channels (e.g. enabling `option_static_remotekey` for a
channel where it was not negotiated originally) is possible at
reconnection time if both implementations support it.

Both peers indicate what upgrades are available, and if they both
offer an upgrade either peer wants, then the upgrade is performed
following any reestablish retransmissions and corresponding
commitments which bring the channel into a symmetrical state with no
updates outstanding.

Once both peers indicate things are quiescent by sending
`update_upgrade`, the channel features are considered upgraded and a
normal `commitment_signed` cycle occurs with the new upgrade in place.

In case of disconnection it's possible that one peer will consider the
channel upgraded and the other not.  For this reason (and potentially
better diagnostics in future), they indicate what the current channel
features are on reconnect: the "more upgraded" one applies immediately
in this case.

Channel features are currently defined as:
  - `option_static_remotekey`
  - `option_anchor_outputs` (requires `option_static_remotekey`)

1. type: 40 (`update_upgrade`)
2. data:
   * [`channel_id`:`channel_id`]
   * [`...*byte`:`features`]

 Requirements

A node sending `channel_reestablish`:
  - if it sets `channel_features`:
- MUST set the channel features which currently apply to the channel.
  - if it sets `upgrades_available`:
- MUST set `channel_features`
- MUST set it to a set of channel features not in `channel_features`.
  - if it sets `upgrades_wanted`:
- MUST set it to a single channel feature NOT in `channel_features`, plus 
any required features which are also not in `channel_features`.
- MUST NOT set any bits not in `upgrades_available`.

A node receiving `channel_reestablish`:
  - if `channel_features` has more bits set than the sent `channel_features`:
- if the additional bits are not in the sent `upgrades_available`:
  - MUST fail the upgrade
- otherwise:
  - MUST consider the received `channel_features` as the current features 
of the channel.
  - otherwise, if `channel_features` has fewer bits set than the sent 
`channel_features`:
- if the missing bits are not in the sent `upgrades_available`:
  - MUST fail the upgrade
- otherwise:
  - MUST consider the sent `channel_features` as the current features of 
the channel.
  - if either peer sets a bit in `upgrades_wanted` which is also in both peers' 
`upgrades_available`:
- if `channel_features` modified by `upgrades_wanted` does not have 
required features:
  - MUST fail the upgrade.
- MUST send `update_upgrade` with the new `channel_features` after any 
retransmissions required by `channel_reestablish` and as soon as there are no 
outstanding updates on either commitment transaction.

A node receiving `update_upgrade`:
  - if the `features` is not the same as the one it sent (or will send):
- MUST fail the upgrade

When a node has both sent and received `update_upgrade`:
  - MUST consider the channel features to be those sent in `update_upgrade`.
  - if it has a lower SEC1-encoded node_id than its peer:
- MUST send `commitment_signed` (using the new channel features).

 Rationale

It is generally simpler to have both sides synchronized when upgrades
occur: by indicating that an upgrade is desired and available, both
sides know to perform the upgrade as soon as this is the case.  In
practice most upgrades happen by restarting software which implies a
reconnect cycle anyway.

The modification of bits is actually quite tricky: a channel which has
`option_static_remotekey` needs only set `option_anchor_outputs` in
`upgrades_wanted`, but one with neither would set both.

A node which only offered `option_anchor_outputs` as an upgrade would
only set that in `upgrades_available`, to avoid indicating that an
upgrade only to `option_static_remotekey` was available.

There's weasel wording around how `channel_features` combines with
`upgrades_wanted` ("modified by") since future channel features may
turn off existing features they conflict with.  This will be defined
by them.

Finally, the `update_upgrade` features field is technically redundant,
but a useful sanity check and diagnostic that both sides are now
entering the same state.  It also allows us to continue to enforce the
rule that commitment_signed must include an update.
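
The final tie-break is just a byte comparison of the SEC1-compressed
node_ids; a sketch:

    def we_send_commitment_signed(our_node_id: bytes, peer_node_id: bytes) -> bool:
        # After both sides have sent and received update_upgrade, the node
        # with the lexicographically lower 33-byte SEC1 encoding sends the
        # first commitment_signed.
        assert len(our_node_id) == 33 and len(peer_node_id) == 33
        return our_node_id < peer_node_id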


Re: [Lightning-dev] [RFC] Simplified (but less optimal) HTLC Negotiation

2021-05-04 Thread Rusty Russell
Matt Corallo  writes:
> On 4/27/21 01:04, Rusty Russell wrote:
>> Matt Corallo  writes:
>>>> On Apr 24, 2021, at 01:56, Rusty Russell  wrote:
>>>>
>>>> Matt Corallo  writes:
>>> I promise it’s much less work than it sounds like, and avoids having to 
>>> debug these things based on logs, which is a huge pain :). Definitely less 
>>> work than a new state machine:).
>> 
>> But the entire point of this proposal is that it's a subset of the
>> existing state machine?
>
> Compared to today, its a good chunk of additional state machine logic to 
> enforce when a message can or can not be sent, 
> and additional logic for when we can (or can not) flush any pending
> changes buffer(s)

Kind of.  I mean, we can add an "update_noop" message which simply
requests your turn and has no other effects.

>> The only "twist" is that if it's your turn and you receive an update,
>> you can either reply with a "yield" message, or ignore it.
>
> How do you handle the "no changes to make" case - do you send yields back and 
> forth ever Nms all day long or is there 
> some protocol by which you resolve it when both parties try to claim turn at 
> once?

You don't do anything?

If you want to send an update:
1. If it is your turn, send it.
2. If it is not your turn, send it and wait for either a `yield`, or a
   different update.  In the former case, it's now your turn, in the
   latter case it's not and your update was ignored.

If you receive an update when it's your turn:
1. If you've sent an update already, ignore it.
2. Otherwise, send `yield`.
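
As a sketch, those rules as a tiny state machine (the framing is mine,
not normative; `send` stands in for the wire transport):

    class Turn:
        def __init__(self, our_turn: bool):
            self.our_turn = our_turn
            self.sent_update = False      # sent an update this turn?

        def send_update(self, send):
            # Rules 1/2: send regardless; if it's not our turn, we then
            # wait for either a yield (turn becomes ours) or a peer update
            # (ours was ignored and must be re-proposed later).
            send("update")
            if self.our_turn:
                self.sent_update = True

        def on_peer_update(self, send):
            if not self.our_turn:
                return                    # peer is taking its turn normally
            if self.sent_update:
                return                    # rule 1: ignore, their update lost
            send("yield")                 # rule 2: give the turn away
            self.our_turn = False

        def on_yield(self):
            self.our_turn = True          # our out-of-turn update stood
            self.sent_update = True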

>>> Isn’t that pretty similar? Discard one splice proposal deterministically 
>>> (ok that’s new) and the loser has to store their proposal in a holding cell 
>>> for later (which they have to do in turn-based anyway). Logic to check if 
>>> there’s unsettled things in RAA handling is pretty similar to turn-based, 
>>> and logic to reject other messages is the same as shutdown handling today.
>> 
>> Nope, with the simplified protocol you can `update_splice` at any time
>> instead of your normal update, since both sides are already in sync.
>
> Hmm, I'm somewhat failing to understand why its that different - you can only 
> update_splice if its your turn, which is 
> about exactly the same amount of additional logic to check turn conditions as 
> just flag "want to do splice". Either way 
> you have the same pending splice buffer.

No, for turn-taking, this case is exactly like any other update.

For non-turn taking, we need an explicit quiescence protocol, and to
handle simultaneous splicing.

>>>> - MUST use the higher of the two `funding_feerate_perkw` as the feerate for
>>>>   the splice.
>>>
>>> If we like turn based, why not just deterministically throw out one splice? :)
>> 
>> Because while I am going to implement turn-based, I'm not sure if anyone
>> else is.  I guess we'll see?
>
> My point was more that it's similar in logic - if you throw out the splice
> deterministically and just keep it in some "pending splice" buffer on the
> sending side, you've just done basically what you'd do to implement turns,
> while keeping the non-turn splice protocol a bit easier :).

No, you really haven't.  Right now you can have Alice propose a splice
while Bob proposes at the same time, so we have a tiebreak protocol.
And you can have Alice propose a splice while Bob proposes a different
update which needs to be completely resolved before the splice can
continue.

Whereas in turn taking, when someone proposes a splice, that's what
you're doing, as soon as it is received.  And when someone wants to
propose a splice, they can do it as soon as it's their turn.  If it's
not their turn and the other side proposes a splice, they can jump onto
that (happy days, since the splice proposer pays for 1 input 1 output
and the core of the tx!).

Cheers,
Rusty.


Re: [Lightning-dev] [RFC] Simplified (but less optimal) HTLC Negotiation

2021-05-04 Thread Rusty Russell
Matt Corallo  writes:
> On 4/27/21 17:32, Rusty Russell wrote:
>> OK, draft is up:
>> 
>>  https://github.com/lightningnetwork/lightning-rfc/pull/867
>> 
>> I have to actually implement it now (though the real win comes from
>> making it compulsory, but that's a fair way away).
>> 
>> Notably, I added the requirement that update_fee messages be on their
>> own.  This means there's no debate on the state of the channel when
>> this is being applied.
>
> I do have to admit *that* part I like :).
>
> If we don't do turns for splicing, I wonder if we can take the rules around 
> splicing pausing other HTLC updates, make 
> them generic for future use, and then also use them for update_fee in a 
> simpler-to-make-compulsory change :).

Yes, it is similar to the close requirement, except that one requires
all HTLCs to be absent.

Cheers,
Rusty.


Re: [Lightning-dev] Recovery of Lightning channels without backups

2021-04-27 Thread Rusty Russell
Lloyd Fournier  writes:
> Hey Rusty,
>
> Thoughts on each point below.
>
> On Fri, 23 Apr 2021 at 14:29, Rusty Russell  wrote:
>
>> OK, I'm now leaning *against* this method.
>>
>> 1. It removes the ability to update a channel without access to the node's
>>secret key.  At the moment the node secret key is only needed for
>>gossip and to DH to set up a new peer connection.  c-lightning does
>>not use this for now (we keep the per-channel keys in the HSM too),
>>but it would be a perfectly acceptable tradeoff not to do this.
>>
>
> Don't you also need the node secret key for onion routing? i.e. every time
> you update your channel to forward a payment.

You need to ECDH with the node_id privkey, yes (as you do to establish
peer comms).  But in the c-lightning model, that's part of central
routing, not the subdaemon which deals with a single channel.  You can
still shutdown a channel without knowing the node's private key.

> I am not familiar with lightning HSM designs and security goals but to me
> it doesn't sound like much of a cost to keep the key on the HSM and to
> include doing channel updates as well seeing as it's already doing so much
> work. If it is desirable to have different keys for DH and channel updates
> then a simple solution is to have two static public keys -- one for each
> task.

The main concern is that access to one channel's keys doesn't give you
any access to the other channels' keys.  I don't think there's a way
around that in any "I can derive another node's keys" model.

> From my perspective it is worth making the necessary sacrifices to include
> this feature. For me and many people, lost data without backups is the
> biggest risk to my funds in lightning. Certainly much more threatening than
> whether certain keys are on a HSM or not. Anecdotally I've heard stories
> like "I put my lnd on autopilot and then lost my disk died -- all my funds
> are gone!?" more than once.

Fair, but more reliable backups solve this better IMHO.  (Roasbeef told
us that Electrum uses OP_RETURN to tag opens, which also works).

>> 2. It doesn't get rid of temporary_channel_id, since we don't know
>>    the generation_number until both sides have sent it.  We have a
>>    workaround for this already in dual-funding anyway.
>>
>
> Why did you decide to send this rather than just look up in your own
> database what "generation" should be? I think that it's easy to make sure
> that you and the other node are on the same page about this number without
> communicating it. If someone is opening a channel with data that appears to
> be invalid because they are using the wrong generation then sending an
> error back indicating what you are up to should be sufficient to recover?

If you ever lose that information, you can never open a channel again?
Or you simply believe them and retry if they offer a higher generation?

>> 3. Because we need a generation counter, it's not quite as easily
>>scannable as you'd hope (the "gap" problem).
>>
>
> This doesn't seem to be a big issue. You are trying to recover your funds
> after all so you can afford to scan over very large gaps i.e. leave the
> node on for days. I mean my Bitcoin wallet manages to handle this so why
> wouldn't it work here?

Well, bitcoin core famously didn't do this at all (had a key pool) and
people lost funds.  Deterministic key generation is better, but it's
still making gross assumptions, usually undocumented, on how many keys
you can hand out before you *have* to use one.

It's sometimes shocking how unpolished Bitcoin infrastructure is.  But
it's because of stuff like this that so many exchanges offer fixed
deposit addresses :(

> I wonder if it is even necessary to bump the
> generation until a funding tx is confirmed -- I can't think of a good
> reason why you would want to open two channels to the same node at the same
> time (why not put all your funds into the same funding).

Well, I'd agree with you of course, but other implementations do allow
it.  If you don't allow it, you don't need a temporary_channel_id at
all.

But that still only prevents gaps if you scan the TXO set, not the UTXO
set.  And it doesn't help with unannounced peers or peers which are no
longer in the public graph.  You want backups :)

>> I think the "encrypted blob served by peers", even in a very naive way,
>> offers another way to do this, though it requires the assumption that at
>> least one peer is honest.
>
> I see encrypted backups as complementary. With this scheme you can at least
> find a peer that you've had a channel with. From the encrypted backup you
> left with them you can then find others and check against them.

I see encrypted backups as a more-likely-to-b

Re: [Lightning-dev] [RFC] Simplified (but less optimal) HTLC Negotiation

2021-04-27 Thread Rusty Russell
OK, draft is up:

https://github.com/lightningnetwork/lightning-rfc/pull/867

I have to actually implement it now (though the real win comes from
making it compulsory, but that's a fair way away).

Notably, I added the requirement that update_fee messages be on their
own.  This means there's no debate on the state of the channel when
this is being applied.

Cheers,
Rusty.

Rusty Russell  writes:

> Matt Corallo  writes:
>>> On Apr 24, 2021, at 01:56, Rusty Russell  wrote:
>>> 
>>> Matt Corallo  writes:
>>>> Somehow I missed this thread, but I did note in a previous meeting - these 
>>>> issues are great fodder for fuzzing. We’ve had a fuzzer which aggressively 
>>>> tests for precisely these types of message-non-delivery-and-resending 
>>>> production desync bugs for several years. When it initially landed it 
>>>> forced several rewrites of parts of the state machine, but quickly 
>>>> exhausted the bug fruit (though catches other classes of bugs occasionally 
>>>> as well). The state machine here is really not that big - while I agree 
>>>> simplifying it where possible is nice, ripping things out to replace them 
>>>> with fresh code (which would need similar testing) is probably not the 
>>>> most obvious decrease in complexity.
>>> 
>>> It's historically had more bugs than anything else in the protocol.  We
>>> literally found another one in feerate negotiation since the last
>>> c-lightning release :(
>>> 
>>> I'd rather not have bugs than try to catch them all.
>>
>> I promise it’s much less work than it sounds like, and avoids having to 
>> debug these things based on logs, which is a huge pain :). Definitely less 
>> work than a new state machine:).
>
> But the entire point of this proposal is that it's a subset of the
> existing state machine?
>
>>> You could propose a splice (or update to anchors, or whatever) any time
>>> when it's your turn, as long as you haven't proposed any other updates.
>>> That's simple!
>>
>> I presume you’d need to take it a few steps further - if the last
>> message received required a response CS/RAA, you must still wait until
>> things have settled down. I guess it also depends on the exact
>> semantics of a “turn based” message protocol - if you received some
>> updates and a signature, are you allowed to add more updates after you
>> send your CS/RAA (then you have a good chunk of today’s complexity),
>> or do you have to wait until they send you back their last RAA (in
>> which case presumably they aren’t allowed to include anything else as
>> then they’d be able to monopolize update windows). In the first case
>> you still have the same issues of today, in the second less so, but
>> you’re doing a similar “ok, just pause updates and wait for things to
>> settle “, I think.
>
> Yes, as the original proposal stated: you propose changes, send
> commitment_signed, receive revoke_and_ack and commitment_signed, then
> send revoke_and_ack.  Then both sides are in sync, and the other side
> has a turn.
>
> The only "twist" is that if it's your turn and you receive an update,
> you can either reply with a "yield" message, or ignore it.
>
>>> Instead, *both* sides have to send a splice message to synchronize, and
>>> they can only do so once all in-flight changes have cleared. You have
>>> to resolve simultaneous splice attempts (we use "highest feerate"
>>> tiebreak by node_id), and keep track of this stage while you clear
>>> in-flight changes.
>>
>> Isn’t that pretty similar? Discard one splice proposal deterministically (ok 
>> that’s new) and the loser has to store their proposal in a holding cell for 
>> later (which they have to do in turn-based anyway). Logic to check if 
>> there’s unsettled things in RAA handling is pretty similar to turn-based, 
>> and logic to reject other messages is the same as shutdown handling today.
>
> Nope, with the simplified protocol you can `update_splice` at any time
> instead of your normal update, since both sides are already in sync.
>
>>> Here's the subset of requirements from the draft which relate to this:
>>> 
>>> The sender:
>>> - MUST NOT send another splice message while a splice is being negotiated.
>>> - MUST NOT send a splice message after sending uncommitted changes.
>>> - MUST NOT send other channel updates until splice negotiation has 
>>> completed.
>>> 
>>> The receiver:
>>> - MUST respond with a `splice` message of its own if it has not already.

Re: [Lightning-dev] [RFC] Simplified (but less optimal) HTLC Negotiation

2021-04-26 Thread Rusty Russell
Matt Corallo  writes:
>> On Apr 24, 2021, at 01:56, Rusty Russell  wrote:
>> 
>> Matt Corallo  writes:
>>> Somehow I missed this thread, but I did note in a previous meeting - these 
>>> issues are great fodder for fuzzing. We’ve had a fuzzer which aggressively 
>>> tests for precisely these types of message-non-delivery-and-resending 
>>> production desync bugs for several years. When it initially landed it 
>>> forced several rewrites of parts of the state machine, but quickly 
>>> exhausted the bug fruit (though catches other classes of bugs occasionally 
>>> as well). The state machine here is really not that big - while I agree 
>>> simplifying it where possible is nice, ripping things out to replace them 
>>> with fresh code (which would need similar testing) is probably not the most 
>>> obvious decrease in complexity.
>> 
>> It's historically had more bugs than anything else in the protocol.  We
>> literally found another one in feerate negotiation since the last
>> c-lightning release :(
>> 
>> I'd rather not have bugs than try to catch them all.
>
> I promise it’s much less work than it sounds like, and avoids having to debug 
> these things based on logs, which is a huge pain :). Definitely less work 
> than a new state machine:).

But the entire point of this proposal is that it's a subset of the
existing state machine?

>> You could propose a splice (or update to anchors, or whatever) any time
>> when it's your turn, as long as you haven't proposed any other updates.
>> That's simple!
>
> I presume you’d need to take it a few steps further - if the last
> message received required a response CS/RAA, you must still wait until
> things have settled down. I guess it also depends on the exact
> semantics of a “turn based” message protocol - if you received some
> updates and a signature, are you allowed to add more updates after you
> send your CS/RAA (then you have a good chunk of today’s complexity),
> or do you have to wait until they send you back their last RAA (in
> which case presumably they aren’t allowed to include anything else as
> then they’d be able to monopolize update windows). In the first case
> you still have the same issues of today, in the second less so, but
> you’re doing a similar “ok, just pause updates and wait for things to
> settle “, I think.

Yes, as the original proposal stated: you propose changes, send
commitment_signed, receive revoke_and_ack and commitment_signed, then
send revoke_and_ack.  Then both sides are in sync, and the other side
has a turn.

The only "twist" is that if it's your turn and you receive an update,
you can either reply with a "yield" message, or ignore it.

>> Instead, *both* sides have to send a splice message to synchronize, and
>> they can only do so once all in-flight changes have cleared. You have
>> to resolve simultaneous splice attempts (we use "highest feerate"
>> tiebreak by node_id), and keep track of this stage while you clear
>> in-flight changes.
>
> Isn’t that pretty similar? Discard one splice proposal deterministically (ok 
> that’s new) and the loser has to store their proposal in a holding cell for 
> later (which they have to do in turn-based anyway). Logic to check if there’s 
> unsettled things in RAA handling is pretty similar to turn-based, and logic 
> to reject other messages is the same as shutdown handling today.

Nope, with the simplified protocol you can `update_splice` at any time
instead of your normal update, since both sides are already in sync.

>> Here's the subset of requirements from the draft which relate to this:
>> 
>> The sender:
>> - MUST NOT send another splice message while a splice is being negotiated.
>> - MUST NOT send a splice message after sending uncommitted changes.
>> - MUST NOT send other channel updates until splice negotiation has completed.
>> 
>> The receiver:
>> - MUST respond with a `splice` message of its own if it has not already.
>> - MUST NOT reply with `splice` until all commitment updates are resolved by 
>> both peers.
>
> Probably use “committed” not “resolved”. “Resolved” sounds like “no pending 
> HTLCs left”.

Yes, and in fact this protocol was flawed and had to be revised, as it
did not actually mean both sides were committed in the case of
simultaneous splice proposals :(

>> - MUST use the higher of the two `funding_feerate_perkw` as the feerate for
>>  the splice.
>
> If we like turn based, why not just deterministically throw out one splice? :)

Because while I am going to implement turn-based, I'm not sure if anyone
else is.  I guess we'll see?

Cheers,
Rusty.


Re: [Lightning-dev] Making unannounced channels harder to probe

2021-04-23 Thread Rusty Russell
Joost Jager  writes:
>>
>> But Joost pointed out that you need to know the node_id of the next node
>> though: this isn't quite true, since if the node_id is wrong the spec
>> says you should send an `update_fail_malformed_htlc` with failure code
>> invalid_onion_hmac, which node N turns into its own failure message.
>> Perhaps it should convert it to `unknown_next_peer` instead?  This isn't
>> a common error on the modern network; I think our onion implementations
>> have been rock solid.
>>
>
> Isn't this what I am suggesting here?
> https://twitter.com/joostjgr/status/1385150318959341569

Oops, I didn't read the second part of your tweet properly.

Sorry: this was right there indeed.

Thanks!
Rusty.


Re: [Lightning-dev] [RFC] Simplified (but less optimal) HTLC Negotiation

2021-04-23 Thread Rusty Russell
Matt Corallo  writes:
> Somehow I missed this thread, but I did note in a previous meeting - these 
> issues are great fodder for fuzzing. We’ve had a fuzzer which aggressively 
> tests for precisely these types of message-non-delivery-and-resending 
> production desync bugs for several years. When it initially landed it forced 
> several rewrites of parts of the state machine, but quickly exhausted the bug 
> fruit (though catches other classes of bugs occasionally as well). The state 
> machine here is really not that big - while I agree simplifying it where 
> possible is nice, ripping things out to replace them with fresh code (which 
> would need similar testing) is probably not the most obvious decrease in 
> complexity.

It's historically had more bugs than anything else in the protocol.  We
literally found another one in feerate negotiation since the last
c-lightning release :(

I'd rather not have bugs than try to catch them all.

>> I've been revisiting this because it makes things like splicing easier:
>> the current draft requires stopping changes while splicing is being
>> negotiated, which is not entirely trivial.  With the simplified method,
>> you don't have to wait at all.
>
> Hmm, what’s nontrivial about this? How much more complicated is this than 
> having an alternation to updates and pausing HTLC updates for a cycle or two 
> while splicing is negotiated (I assume it would still need a similar 
> requirement, as otherwise you have the same complexity)? We already have a 
> similar update-stopping process for shutdown, though of course it doesn’t 
> include restarting.

You could propose a splice (or update to anchors, or whatever) any time
when it's your turn, as long as you haven't proposed any other updates.
That's simple!

Instead, *both* sides have to send a splice message to synchronize, and
they can only do so once all in-flight changes have cleared.  You have
to resolve simultaneous splice attempts (we use "highest feerate"
tiebreak by node_id), and keep track of this stage while you clear
in-flight changes.
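
For contrast, resolving simultaneous proposals in the non-turn draft
looks roughly like this (I'm assuming lower node_id wins an exact feerate
tie; the draft just says "tiebreak by node_id"):

    def splice_initiator(a_feerate: int, a_node_id: bytes,
                         b_feerate: int, b_node_id: bytes) -> bytes:
        # Higher funding_feerate_perkw wins; node_id breaks exact ties.
        if a_feerate != b_feerate:
            return a_node_id if a_feerate > b_feerate else b_node_id
        return min(a_node_id, b_node_id)  # assumed tie direction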

Here's the subset of requirements from the draft which relate to this:

The sender:
- MUST NOT send another splice message while a splice is being negotiated.
- MUST NOT send a splice message after sending uncommitted changes.
- MUST NOT send other channel updates until splice negotiation has completed.

The receiver:
- MUST respond with a `splice` message of its own if it has not already.
- MUST NOT reply with `splice` until all commitment updates are resolved by 
both peers.
- MUST use the higher of the two `funding_feerate_perkw` as the feerate for
  the splice.
- MUST NOT send other channel updates until splice negotiation has completed.

Similar requirements exist for other major channel changes.

Cheers,
Rusty.



[Lightning-dev] Making unannounced channels harder to probe

2021-04-23 Thread Rusty Russell
Hi all,

You can currently probe for a channel id attached to node N by
sending an HTLC, and seeing whether the error reply comes from N or
the next hop.  The real answer is to get back to blinded paths, BTW.

But Joost pointed out that you need to know the node_id of the next node
though: this isn't quite true, since if the node_id is wrong the spec
says you should send an `update_fail_malformed_htlc` with failure code
invalid_onion_hmac, which node N turns into its own failure message.
Perhaps it should convert it to `unknown_next_peer` instead?  This isn't
a common error on the modern network; I think our onion implementations
have been rock solid.

This doesn't help if you've revealed your node id in other ways
ofc. i.e. you offer me an invoice, now I probe the rest of the network
to find all unannounced channels you have.  For that, implementations
*could* choose to return `update_fail_malformed_htlc`
failure_code=invalid_onion_hmac as above on anything which comes through
an unannounced channel but is not a successful payment (or part thereof,
i.e. correct payment_hash for outstanding invoice with correct
payment_secret field?).
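
A sketch of that policy (all names hypothetical):

    def unannounced_channel_failure(came_via_unannounced: bool,
                                    is_valid_payment: bool):
        # Anything arriving over an unannounced channel that is not a
        # correct payment gets the same reply a wrong node_id would
        # produce, so a prober learns nothing either way.
        if came_via_unannounced and not is_valid_payment:
            return ("update_fail_malformed_htlc", "invalid_onion_hmac")
        return None   # fail (or settle) normally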

Cheers,
Rusty.
PS. https://twitter.com/cycryptr/status/1384355046381473792 contains 
exploration.


Re: [Lightning-dev] Recovery of Lightning channels without backups

2021-04-22 Thread Rusty Russell
OK, I'm now leaning *against* this method.

1. It removes the ability to update a channel without access to the node's
   secret key.  At the moment the node secret key is only needed for
   gossip and to DH to set up a new peer connection.  c-lightning does
   not use this for now (we keep the per-channel keys in the HSM too),
   but it would be a perfectly acceptable tradeoff not to do this.
2. It doesn't get rid of temporary_channel_id, since we don't know
   the generation_number until both sides have sent it.  We have a
   workaround for this already in dual-funding anyway.
3. Because we need a generation counter, it's not quite as easily
   scannable as you'd hope (the "gap" problem).

I think the "encrypted blob served by peers", even in a very naive way,
offers another way to do this, though it requires the assumption that at
least one peer is honest.

Damn, because it was so clever!

Thoughts?
Rusty.  

Rusty Russell  writes:
> Lloyd Fournier  writes:
>> Hi Rusty,
>>
>> On Tue, 20 Apr 2021 at 10:55, Rusty Russell  wrote:
>>
>>> Lloyd Fournier  writes:
>>> > On Wed, Dec 9, 2020 at 4:26 PM Rusty Russell 
>>> wrote:
>>> >
>>> >>
>>> >> Say r1=SHA256(ss || counter || 0), r2 = SHA256(ss || counter || 1)?
>>> >>
>>> >> Nice work.  This would be a definite recovery win.  We should add this
>>> >> to the DF spec, because Lisa was almost finished implementing it, so it's
>>> >> clearly due for a change!
>>> >>
>>> >
>>> > Yes that's certainly a fine way to do it.
>>> > I was also thinking you could eliminate all "basepoints" (not just
>>> funding
>>> > pubkey) using something like this. i.e. just use the node pubkey as the
>>> > "basepoint" for everything and randomize it using the shared secret for
>>> > each purpose.
>>>
>>> OK, I tried to spec this out, to implement it.  One issue is that you
>>> now can't sign the commitment_tx (or htlc_tx) without knowing the node's
>>> secret key (or, equivalently, knowing the tweaked key and being able to
>>> use the derivation scheme to untweak it).
>>>
>>
>> Using node secret key to sign the commitment_tx seems like something you
>> will have to accept to introduce this feature. For the idea to work it has
>> to be some public key that is known by others and gossiped through the
>> network. Of course you could extend the information that is gossiped about
>> a node to include a "commit_tx_point" but the nodeid seems the more natural
>> choice.
>
> Duh, yes, of course you need the funding_key secret to sign the
> commitment tx.
>
> But you really don't want to access the `remote_pubkey` (which in a
> modern option_static_remotekey world is simply the payment_basepoint).
> It's generally considered good practice *not* to have this accessible to
> your lightning node at all.
>
>>> c-lightning currently does a round-trip to the signing daemon for this
>>> already, but it'd be nice to avoid requiring it.
>>>
>>> So I somewhat reluctantly added `commit_basepoint` from which the others
>>> are derived: an implementation can use some hardened derivation from its
>>> privkey (e.g. SHA256(node_privkey || ss || counter)) to create
>>> this in a deterministic but still private manner.
>>>
>>> Or we could just leave all the other points in and just replace
>>> funding_pubkey.
>>>
>>
>> Another approach is to do things in "soft-fork" like manner.
>> Each node that wants to offer this feature sets their funding_pubkey to a
>> specified DH tweak of the nodeid. Nodes that want backup-free channel
>> recovery can just refuse to carry on the funding protocol if the
>> funding_pubkey is not set the way it wanted.
>
> Yeah, you can totally do this in an opt-in manner, except it doesn't
> work unless your peer does it too.  Since we expect everyone to want to
> do this, it's clearer to force everyone to calculate this and not have
> redundant and confusing fields in the message.
>
>> From my purist crypto point of view having only one public key is nice but
>> I'm not sure how it impacts things architecturally and other protocols like
>> watchtowers.
>
> They can operate exactly like the existing scheme, AFAICT.
>
> Here's the spec diff (based on dual-funding, since it's easier to simply
> hard change).  Please check my EC math! :)
>
> diff --git a/02-peer-protocol.md b/02-peer-protocol.md
> index fbc56c8..1114068 100644
> --- a/02-peer-protocol.md
> +++ b/02-peer-protocol.md

Re: [Lightning-dev] [RFC] Simplified (but less optimal) HTLC Negotiation

2021-04-20 Thread Rusty Russell
Christian Decker  writes:
> Rusty Russell  writes:
>>> This is in stark contrast to the leader-based approach, where both
>>> parties can just keep queuing updates without silent times to
>>> transferring the token from one end to the other.
>>
>> You've swayed me, but it needs new wire msgs to indicate "these are
>> your proposals I'm reflecting to you".
>>
>> OTOH they don't need to carry data, so we can probably just have:
>>
>> update_htlcs_ack:
>>* [`channel_id`:`channel_id`]
>>* [`u16`:`num_added`]
>>* [`num_added*u64`:`added`]
>>* [`u16`:`num_removed`]
>>* [`num_removed*u64`:`removed`]
>>
>> update_fee can stay the same.
>>
>> Thoughts?
>
> So this would pretty much be a batch-ack, sent after a whole series of
> changes were proposed to the leader, and referenced by their `htlc_id`,
> correct? This is one optimization step further than what I was thinking,
> but it can work. My proposal would have been to either reflect the whole
> message (nodes need to remember proposals they've sent anyway in case of
> disconnects, so matching incoming changes with the pending ones should
> not be too hard), or send back individual acks, containing the hash of
> the message if we want to save on bytes transferred. Alternatively we
> could also use reference the change by its htlc_id.

[ Following up on an old thread ]

After consideration, I prefer alternation.  It fits better with the
existing implementations, and it is more optimal than reflection for
optimized implementations.

In particular, you have a rule that says you can send updates and
commitment_signed when it's not your turn, and the leader either
responds with a "giving way" message, or ignores your changes and sends
its own.

A simple implementation *never* sends a commitment_signed until it
receives "giving way" so it doesn't have to deal with orphaned
commitments.  A more complex implementation sends opportunistically and
then has to remember that it's committed if it loses the race.  Such an
implementation is only slower than the current system if that race
happens.

I've been revisiting this because it makes things like splicing easier:
the current draft requires stopping changes while splicing is being
negotiated, which is not entirely trivial.  With the simplified method,
you don't have to wait at all.

Cheers,
Rusty.


[Lightning-dev] Splicing draft

2021-04-20 Thread Rusty Russell
https://github.com/lightningnetwork/lightning-rfc/pull/863

I haven't even *started* to implement this, so I could well have missed
something.  Let's see.

Cheers!
Rusty.


Re: [Lightning-dev] Recovery of Lightning channels without backups

2021-04-19 Thread Rusty Russell
Lloyd Fournier  writes:
> Hi Rusty,
>
> On Tue, 20 Apr 2021 at 10:55, Rusty Russell  wrote:
>
>> Lloyd Fournier  writes:
>> > On Wed, Dec 9, 2020 at 4:26 PM Rusty Russell 
>> wrote:
>> >
>> >>
>> >> Say r1=SHA256(ss || counter || 0), r2 = SHA256(ss || counter || 1)?
>> >>
>> >> Nice work.  This would be a definite recovery win.  We should add this
>> >> to the DF spec, because Lisa was almost finished implementing it, so it's
>> >> clearly due for a change!
>> >>
>> >
>> > Yes that's certainly a fine way to do it.
>> > I was also thinking you could eliminate all "basepoints" (not just
>> funding
>> > pubkey) using something like this. i.e. just use the node pubkey as the
>> > "basepoint" for everything and randomize it using the shared secret for
>> > each purpose.
>>
>> OK, I tried to spec this out, to implement it.  One issue is that you
>> now can't sign the commitment_tx (or htlc_tx) without knowing the node's
>> secret key (or, equivalently, knowing the tweaked key and being able to
>> use the derivation scheme to untweak it).
>>
>
> Using node secret key to sign the commitment_tx seems like something you
> will have to accept to introduce this feature. For the idea to work it has
> to be some public key that is known by others and gossiped through the
> network. Of course you could extend the information that is gossiped about
> a node to include a "commit_tx_point" but the nodeid seems the more natural
> choice.

Duh, yes, of course you need the funding_key secret to sign the
commitment tx.

But you really don't want to access the `remote_pubkey` (which in a
modern option_static_remotekey world is simply the payment_basepoint).
It's generally considered good practice *not* to have this accessible to
your lightning node at all.

>> c-lightning currently does a round-trip to the signing daemon for this
>> already, but it'd be nice to avoid requiring it.
>>
>> So I somewhat reluctantly added `commit_basepoint` from which the others
>> are derived: an implementation can use some hardened derivation from its
>> privkey (e.g. SHA256(node_privkey || ss || counter)) to create
>> this in a deterministic but still private manner.
>>
>> Or we could just leave all the other points in and just replace
>> funding_pubkey.
>>
>
> Another approach is to do things in "soft-fork" like manner.
> Each node that wants to offer this feature sets their funding_pubkey to a
> specified DH tweak of the nodeid. Nodes that want backup-free channel
> recovery can just refuse to carry on the funding protocol if the
> funding_pubkey is not set the way it wanted.

Yeah, you can totally do this in an opt-in manner, except it doesn't
work unless your peer does it too.  Since we expect everyone to want to
do this, it's clearer to force everyone to calculate this and not have
redundant and confusing fields in the message.

> From my purist crypto point of view having only one public key is nice but
> I'm not sure how it impacts things architecturally and other protocols like
> watchtowers.

They can operate exactly like the existing scheme, AFAICT.

Here's the spec diff (based on dual-funding, since it's easier to simply
hard change).  Please check my EC math! :)

diff --git a/02-peer-protocol.md b/02-peer-protocol.md
index fbc56c8..1114068 100644
--- a/02-peer-protocol.md
+++ b/02-peer-protocol.md
@@ -867,11 +867,9 @@ This message initiates the v2 channel establishment workflow.
* [`u16`:`to_self_delay`]
* [`u16`:`max_accepted_htlcs`]
* [`u32`:`locktime`]
-   * [`point`:`funding_pubkey`]
+   * [`u64`:`generation`]
* [`point`:`revocation_basepoint`]
* [`point`:`payment_basepoint`]
-   * [`point`:`delayed_payment_basepoint`]
-   * [`point`:`htlc_basepoint`]
* [`point`:`first_per_commitment_point`]
* [`byte`:`channel_flags`]
* [`opening_tlvs`:`tlvs`]
@@ -895,13 +893,16 @@ If nodes have negotiated `option_dual_fund`:
 
 The sending node:
   - MUST set `funding_feerate_perkw` to the feerate for this transaction
-  - MUST ensure `temporary_channel_id` is unique from any
-other channel ID with the same peer.
+  - MUST set `generation` to a number greater than any previous
+`generation` it has sent to this receiving node which has reached
+`commitment_signed`.
+  - SHOULD set `generation` to the lowest number which meets this requirement.
 
 The receiving node:
   - MAY fail the negotiation if:
 - the `locktime` is unacceptable
 - the `funding_feerate_per_kw` is unacceptable
+- the `generation` exceeds expectation by more than the maximum it would scan for recovery.
 
  Rationale
 `channel_id` for the 

Re: [Lightning-dev] Recovery of Lightning channels without backups

2021-04-19 Thread Rusty Russell
Lloyd Fournier  writes:
> On Wed, Dec 9, 2020 at 4:26 PM Rusty Russell  wrote:
>
>>
>> Say r1=SHA256(ss || counter || 0), r2 = SHA256(ss || counter || 1)?
>>
>> Nice work.  This would be a definite recovery win.  We should add this
>> to the DF spec, because Lisa was almost finished implementing it, so it's
>> clearly due for a change!
>>
>
> Yes that's certainly a fine way to do it.
> I was also thinking you could eliminate all "basepoints" (not just funding
> pubkey) using something like this. i.e. just use the node pubkey as the
> "basepoint" for everything and randomize it using the shared secret for
> each purpose.

OK, I tried to spec this out, to implement it.  One issue is that you
now can't sign the commitment_tx (or htlc_tx) without knowing the node's
secret key (or, equivalently, knowing the tweaked key and being able to
use the derivation scheme to untweak it).

c-lightning currently does a round-trip to the signing daemon for this
already, but it'd be nice to avoid requiring it.

So I somewhat reluctantly added `commit_basepoint` from which the others
are derived: an implementation can use some hardened derivation from its
privkey (e.g. SHA256(node_privkey || ss || counter)) to create
this in a deterministic but still private manner.
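
For illustration, a sketch of these derivations in Python (hashlib only;
the 8-byte big-endian serialization of `counter` is my assumption):

import hashlib

N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141  # secp256k1 order

def h(*parts):
    return int.from_bytes(hashlib.sha256(b"".join(parts)).digest(), "big")

def channel_nonces(ss, counter):
    c = counter.to_bytes(8, "big")
    r1 = h(ss, c, b"\x00") % N   # r1 = SHA256(ss || counter || 0)
    r2 = h(ss, c, b"\x01") % N   # r2 = SHA256(ss || counter || 1)
    return r1, r2

def commit_basepoint_secret(node_privkey, ss, counter):
    # hardened derivation: SHA256(node_privkey || ss || counter)
    return h(node_privkey, ss, counter.to_bytes(8, "big")) % N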

Or we could just leave all the other points in and just replace
funding_pubkey.

Cheers,
Rusty.


[Lightning-dev] Replacement of invoices to handle stuck payments.

2021-04-16 Thread Rusty Russell
Hi all,


https://github.com/lightningnetwork/lightning-rfc/pull/798/commits/fc8aab72ccdd616301dc200fc124824efe4fbb58

I've just added a simple addition to the proposed BOLT 12 offers spec,
where invoice requests can ask to obsolete old invoices.  This allows a
simple workaround in the case where a payment is stuck: the vendor
commits to a new invoice which obsoletes the old one, using the
already-existing invoice_request message.

If the vendor cheats and accepts both old and new payments, you
can prove they lied.  Or they can return an error which indicates
they've already received the payment and it's simply the return which is
stuck.

Either way, it's now simple to implement, and gives wallets another
option for handling these cases.

Cheers!
Rusty.


[Lightning-dev] c-lightning release v0.10.0: Neutralizing Fee Therapy

2021-03-31 Thread Rusty Russell
We're pleased to announce the 0.10.0 release of c-lightning, named by @jsarenik.

https://github.com/ElementsProject/lightning/releases/tag/v0.10.0

This is a major release, consolidating a number of features, fixes and
experimental extensions.

Highlights for Users

* pay has been refined and much improved across various less-common scenarios.
* listpeers shows the current feerate and unilateral close fee.
* listforwards can now filter by channel status, and in or out channel.
* fundpsbt and utxopsbt have a new excess_as_change parameter if you
  don't want to add it yourself.
* connect returns the address we actually connected to (and direction
  tells you if they actually connected to us instead).
* fundchannel_complete takes a PSBT, removing a common cause of tragic
  opening failures: txprepare and withdraw now provide a PSBT for convenience 
too.
* In regtest mode, we don't care that bitcoind doesn't give any fee
  estimates; we just use the minimum.

Highlights for the Network

* We now send warning messages if an error condition is possibly
  recoverable, rather than closing the channel and sending an error.
* We now implement sync_complete for gossip_range queries as per latest
  spec, with backwards compatibility for older nodes.
* `experimental-dual-fund` config option enables the draft dual funding
  option for compatible nodes, which includes RBF upgrades for opening
  transactions.

Highlights for Developers

* All hooks are now registerable by multiple plugins at once.
* `experimental-shutdown-wrong-funding` allows remote nodes to close
  incorrectly opened channels using the new wrong_funding option to
  close.

More details can be found in the changelog.

Thanks to everyone for their contributions and bug reports; please keep them 
coming.

Since 0.9.3, we've had 339 commits from 14 different authors over 69 days.

A special thanks goes to the 3 first time contributors:

Matthias Debernardini
Luke Childs
Alexey Zagarin

Cheers,
Rusty, Lisa, Christian, ZmnSCPxj


Re: [Lightning-dev] PoDLEs revisited

2021-01-28 Thread Rusty Russell
Lloyd Fournier  writes:
> I think immediate broadcast of signaling TX is a bad idea even if it's done
> over lightning since it leaks that the UTXO associated with the signaling
> TX is creating a channel (even if the channel was meant to be private).
> You could argue that the signaling TX need not be associated with a UTXO
> but I find this awkward.
> Lazily broadcast, signaling txs are a good way to protect against
> sequential attacks but are weak against parallel attacks. Unfortunately I
> think protection of the former means very little without the latter.

Agreed.  Let's PoDLE!

> Let H0 and H1 be 32-byte output hash functions.
>
> 1. In any of the `tx_add_input` messages the initiator may attach a PoDLE
> which contains the public key for an input as well as a P2 (the public key
> projected onto a different generator).
> 2. Upon receiving the PoDLE, the peer checks its validity and creates a
> "claim" message claiming the UTXO which contains.
> i) H0(P2)
> ii) A MAC (e.g. Poly1305) produced with the H1(P2) as the key and
> claimer_node_id as the message -- required so conflicting claim messages
> can only be produced by someone who actually knows P2.
> iii) The claimer_node_id and a BIP340 signature under it over the rest
> of the message data -- required to stop spam: only accept and re-broadcast
> these messages from nodes who have real channels.
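
A rough sketch of assembling those claim fields (my own illustration:
SHA256 with distinct tags stands in for H0/H1, HMAC-SHA256 stands in for
Poly1305, and the BIP340 signature step is omitted):

import hashlib, hmac

def H0(b):
    return hashlib.sha256(b"H0" + b).digest()

def H1(b):
    return hashlib.sha256(b"H1" + b).digest()

def make_claim(P2, claimer_node_id):
    return {
        "p2_commitment": H0(P2),                     # i) H0(P2)
        "mac": hmac.new(H1(P2), claimer_node_id,     # ii) MAC keyed by H1(P2)
                        hashlib.sha256).digest(),
        "claimer_node_id": claimer_node_id,          # iii) plus BIP340 sig (omitted)
    }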

Note: we can avoid leaking claimer_node_id (Hey, look at me, someone's
opening a channel with me now!).  The purpose of claimer_node_id is to
restrict spam (you need to have broadcasted a valid open channel to be
recognized by the network as a valid node_id), but we can make this
direct: require a utxo + the script needed to spend it, and any key in
that script will serve in place of claimer_node_id (for v1
segwit, the output itself may serve as key).

Since we're headed to an anchor (or Eltoo) world where nodes have to
keep a few UTXOs around for emergencies anyway, this may have better
privacy.  At worst, they use a key from an existing, public channel
UTXO, which is no worse than using their node_id.

> Now I'd like to make the strongest possible argument against it in favor of
> just doing nothing (for now) at the protocol level about this problem.
>
> Consider the following propositions:
> 1. The public nodes that will offer dual funding and are susceptible to
> this attack will be the kind that have a lot of churn i.e. they dual fund a
> channel, when that closes they use the remaining funds to fund another
> channel.
> 2. Chainalysis already works very well at identifying the UTXOs these kinds
> of nodes. If the change output of a funding or the closing output are
> reused in another public channel it is easy to identify which node was
> funding what with the techniques in [1,2].

Less true after taproot though?

> 3. It is therefore rather redundant to do this type of active UTXO probing
> since all you need to do is wait and be patient. Churning public nodes will
> eventually use their UTXO to do a dual or single funding. Then by
> cross-layer de-anonymization techniques you will be able to determine that
> they owned that UTXO without ever interacting with the node.
> 4. These techniques can even be applied to private channels at least while
> they are identifiable on the blockchain (in [2] using chainalysis they can
> identify one node involved in a private channel 79% of the time).
> 5. There is of course some extra advantage in doing this attack but looking
> at the effectiveness of techniques in [1,2] and my intuition about how
> churning nodes are most susceptible to these techniques I *guess* it
> wouldn't be much. If this is the case then chainalysis companies may not be
> able to justify doing active attacks when passive attacks work almost as
> well.
> 6. It may be more effective to deal with UTXO probing outside of the
> protocol. For example, a group of dual-funders could maintain a shared UTXO
> blacklist and use chainalysis on it to not only ban single UTXOs but entire
> clusters of outputs. i.e. do chainalysis on the chainalyzers! There are
> some efforts to create open tools to do Chainalysis [3] that could be
> leveraged against this attack. This might be much more effective than
> PoDLEs as just spending the output somewhere else would not be enough to
> use it again in the attack.
> 7. The above PoDLE proposal actually creates a new extra bit of data that
> can be used for chainalysis -- when you broadcast the claim message you are
> saying you're going to make a dual funded channel sometime soon. So
> Chainalysis can look in the next block for something that looks like a dual
> funding and know you participated. This could be quite valuable for them
> and I would hesitate to give it to them in the anticipation of them doing
> an attack they may never actually do.
> 8. If all of the above points are not enough to prevent this attack from
> being widespread and the above PoDLE proposal is still the best idea I
> 

Re: [Lightning-dev] PoDLEs revisited

2021-01-07 Thread Rusty Russell
Lloyd Fournier  writes:
> This achieves all properties except for (4 - distinguishable on-chain)
> which is why it was dismissed.

It also seems to require 2 txs per channel open?  (Interestingly I
missed that post previously, thanks for the pointer!)

> I think it is possible to extend the idea to achieve (4) and therefore
> obtain all desired properties.
> Simply put peers can just use the SINGLE|ANYONECANPAY signature as back ups
> in case of abort. Here's how it could work in my mind:
>
> 1. Initiator requests dual-funding and provides a TX_temp spending their
> input set to a main output and a change output (does not sign it yet). They
> also provide a sighash SIGNLE|ANYONECANPAY signature on the main output
> spending into TX_backup-fund and a signature on the first commitment
> transaction spending from TX_backup-fund (exactly as in [6]).
> 2. Peer responds with commitment TX signature for TX_backup-fund.
> 3. Initiator responds with the signatures for TX_temp.
> *Peer now has a fully functional transaction chain with which to open the
> channel -- now they can attempt to upgrade to a SIGHASH_ALL opening*.
> 4. Peer (if possible) checks there are no existing transactions in the
> chain or mempool spending from the taker's inputs. If not it responds with
> its inputs, change and commitment tx signature for a SIGHASH_ALL TX_fund.
> 5. Initiator responds with commitment TX signature and TX_fund input
> signatures.
> 6. Peer broadcasts TX_fund.
> *If at any point after step 3 Initiator does not respond for ~2 seconds
> they broadcast TX_temp and TX_backup-funding*

2 seconds is not sufficient; as an Australian (or Tor user) you should
know this :)

But otherwise, it's kinda nice (bar breaking the interactive construction).

> We have (4) because the SINGLE|ANYONECANPAY signature only appears on-chain
> in case of abort (i.e. TX_backup-funding makes it on-chain).
> It appears to be pretty close to the ideal solution in terms of privacy and
> security.
> If the malicious initiator learns an output they will always have to spend
> one of their inputs otherwise they will quickly get hit by the TX_temp +
> TX_backup-funding.
> Note that it is possible the node is just slow in which case even if step
> TX_backup-funding makes it in both parties should just carry on with the
> channel.
>
> The downsides are that it involves six rounds of communication and cannot
> use the "interactive tx building" protocol developed for the original
> proposal
>
> # Signaling Transactions
>
> Finally I present a simple but unintuitive protocol that achieves roughly
> the same properties as the PoDLE protocol but without lightning gossip
> messages.
>
> Whenever the initiator adds an input in the interactive tx building they
> provide signatures on a "signaling" transaction spending that input (and
> any inputs they have added so far).
> The signaling transactions will typically spend the funds back to the
> initiator's wallet.
> Before revealing any of their inputs, the peer checks that none of the
> inputs added by the initiator are in their mempool/chain.
> If the initiator aborts the protocol after learning one of the peer's
> inputs the peer broadcasts one of the signaling transactions.
>
> Like the PoDLE proposal this doesn't achieve (3) since a malicious peer
> could broadcast the signaling transaction making the honest initiator pay a
> transaction fee before using the input in another session.
> To mitigate this a bit, the transactions could be RBF and have a 1
> sat-per-byte feerate to give the initiator a decent amount of time to use
> their input productively before the tx confirms (and paying a low fee if it
> ever does confirm).
>
> The advantages of signaling transactions over PoDLE is that it doesn't
> involve any wonky crypto or new gossip messages.
> The advantage of the PoDLE proposal over this is that a malicious peer can
> only blacklist the UTXO (not necessarily force you to spend it).

We only need a single UTXO for this, which is even better.

So the initiator sends a "good faith" signed tx, which spends one of its
UTXOs, to the accepter.  1 sat-per-byte is probably too low, but the
accepter can provide a feerate for it[1].  Opener aborts if that
"good-faith" feerate is too high.  It's implied that this is the first
added input, too.

If the accepter screws the opener by broadcasting it, the opener can
still open a channel with someone else before it's confirmed: they just
can't use *that* utxo if they want another node to DF.  Or simply take
the loss, since the feerate is presumably minimal, and use CPFP.

Cheers,
Rusty.

[1] The latest c-lightning implementation of the spec[2] already has the
accepter indicating min, max and preferred feerates (and then the
opener selects within that range).  This would simply add another
feerate field, suggest implementing as ceiling(min / 2, 1).
[2] Which Lisa promises she'll publish RSN, so we can add your derived
points proposal to it.

Re: [Lightning-dev] Minor tweaks to blinded path proposal

2020-11-21 Thread Rusty Russell
Bastien TEINTURIER  writes:
> Hey Rusty,
>
> Good questions.
>
> I think we could use additive tweaks, and they are indeed faster so it can
> be worth doing.
> We would replace `B(i) = HMAC256("blinded_node_id", ss(i)) * P(i)` by `B(i)
> = HMAC256("blinded_node_id", ss(i)) * G + P(i)`.
> Intuitively since the private key of the tweak comes from a hash function,
> it should offer the same security.
> But there may be dragons lurking there, I don't know how to properly
> evaluate whether it's as secure (whereas the multiplicative
> version is really just Sphinx, so we know it should be secure).

I agree.  I'll ask a real crypto person to review it, though.
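
For reference, the two constructions at the scalar level (a toy sketch of
mine; the corresponding public keys are these scalars times G, and the
point operations themselves are omitted):

import hashlib, hmac

N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def tweak(ss_i):
    t = hmac.new(b"blinded_node_id", ss_i, hashlib.sha256).digest()
    return int.from_bytes(t, "big") % N

def blinded_secret_mult(p_i, ss_i):
    return (tweak(ss_i) * p_i) % N   # pubkey: HMAC256(...) * P(i)

def blinded_secret_add(p_i, ss_i):
    return (tweak(ss_i) + p_i) % N   # pubkey: HMAC256(...) * G + P(i)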

> If we're able to use additive tweaks, we can probably indeed use x-only
> pubkeys.
> Even though we're not storing these on-chain, so the 1 byte saved isn't
> worth much.
> I'd say that if it's trivial to use them, let's do it, otherwise it's not
> worth any additional effort.

I'll try and report back; I think it's trivial (I converted offers, and
indeed it was trivial except needing a way to look up an x-only node_id,
which simply required two lookups).

Cheers,
Rusty.


[Lightning-dev] Minor tweaks to blinded path proposal

2020-11-17 Thread Rusty Russell


See:

https://github.com/lightningnetwork/lightning-rfc/blob/route-blinding/proposals/route-blinding.md

1. Can we use additive tweaks instead of multiplicative?
   They're slightly faster, and supported by the x-only secp API.
2. Can we use x-only pubkeys?  It's generally trivial, and a byte
   shorter.  I'm using them in offers to great effect.

Thanks!
Rusty.


Re: [Lightning-dev] [RFC] Simplified (but less optimal) HTLC Negotiation

2020-10-20 Thread Rusty Russell
Christian Decker  writes:
>> And you don't get the benefit of the turn-taking approach, which is that
>> you can have a known state for fee changes.  Even if you change it to
>> have opener always the leader, it still has to handle the case where
>> incoming changes are not allowed under the new fee regime (and similar
>> issues for other dynamic updates).
>
> Good point, I hadn't considered that a change from one side might become
> invalid due to a change from the other side. I think however this can only
> affect changes that result in other changes no longer being applicable,
> e.g., changing the number of HTLCs you'll allow on a channel making the
> HTLC we just added and whose update_add is still in flight invalid.

To make dynamic changes in the current system, you need to make them the
same way we make feechanges: first remote, then local (once they ack).

This means you have to handle the cases where this causes the commit
tx to not meet the new restrictions.  It's all possible, it's just
messy.

> I don't think fee changes are impacted here, since the non-leader only
> applies the change to its commitment once it gets back its own change.
> The leader will have inserted your update_add into its stream after the
> fee update, and so you'll first apply the fee update, and then use the
> correct fee to add the HTLC to your commitment, resulting in the same
> state.

Sure, but we still have the (existing) problem where you propose a fee
change you can no longer afford, because the other side is also adding
things.

They can just refuse to reflect the fee in that case, though.

> The remaining edgecases where changes can become invalid if they are in
> flight, can be addressed by bouncing the change through the non-leader,
> telling him that "hey, I'd like to propose this change, if you're good
> with it send it back to me and I'll add it to my stream". This can be
> seen as draining the queue of in-flight changes, however the non-leader
> may pipeline its own changes after it and take the updated parameters
> into consideration. Think of it as a two-phase commit, alerting the peer
> with a proposal, before committing it by adding it to the stream. It
> adds latency (about 1/2RTT over the token-passing approach since we can
> emulate it with the token-passing approach) but these synchronization
> points are rare and not on the critical path when forwarding payments.

You can create a protocol to reject changes, but now we're more complex
than the simply-alternate-leader approach.

> With the leader-based approach, we add 1RTT latency to the updates from
> one side, but the other never has to wait for the token, resulting in
> 1/2RTT per direction as well, since messages are well-balanced.

Good point.

>> Yes, but it alternates because that's optimal for a non-busy channel
>> (since it's usually "Alice adds htlc, Bob completes the htlc").
>
> What's bothering me more about the turn-based approach is that while the
> token is in flight, neither endpoint can make any progress, since the
> one relinquishing the token promised not to say anything and the other
> one hasn't gotten the token yet. This might result in rather a lot of
> dead-air if both sides have a constant stream of changes to add. So we'd
> likely have to add a timeout to defer giving up the token, to counter
> dead-air, further adding delay to the changes from the other end, and
> adding yet another parameter.

I originally allowed optimistically sending commitment_signed.  But it
means there can be more than one commitment tx for any given height (you
have to assume they received the sig and might broadcast it), which
seemed to complicate things.  OTOH this is only true if you choose to do
this.

> This is in stark contrast to the leader-based approach, where both
> parties can just keep queuing updates without silent times to
> transferring the token from one end to the other.

You've swayed me, but it needs new wire msgs to indicate "these are your
proposals I'm reflecting to you".

OTOH they don't need to carry data, so we can probably just have:

update_htlcs_ack:
   * [`channel_id`:`channel_id`]
   * [`u16`:`num_added`]
   * [`num_added*u64`:`added`]
   * [`u16`:`num_removed`]
   * [`num_removed*u64`:`removed`]

update_fee can stay the same.
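
A quick sketch of how that layout would serialize (big-endian throughout;
the message type prefix and any TLV framing are omitted, and this is my
own illustration, not spec text):

import struct

def encode_update_htlcs_ack(channel_id, added, removed):
    assert len(channel_id) == 32
    msg = channel_id
    msg += struct.pack(">H", len(added))                       # num_added
    msg += b"".join(struct.pack(">Q", i) for i in added)       # added ids
    msg += struct.pack(">H", len(removed))                     # num_removed
    msg += b"".join(struct.pack(">Q", i) for i in removed)     # removed ids
    return msg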

Thoughts?
Rusty.


Re: [Lightning-dev] Hold fees: 402 Payment Required for Lightning itself

2020-10-14 Thread Rusty Russell
Joost Jager  writes:
>>
>> > A crucial thing is that these hold fees don't need to be symmetric. A new
>> > node for example that opens a channel to a well-known, established
>> routing
>> > node will be forced to pay a hold fee, but won't see any traffic coming
>> in
>> > anymore if it announces a hold fee itself. Nodes will need to build a
>> > reputation before they're able to command hold fees. Similarly, routing
>> > nodes that have a strong relation may decide to not charge hold fees to
>> > each other at all.
>>
>> I can still establish channels to various low-reputation nodes, and then
>> use them to grief a high-reputation node.  Not only do I get to jam up
>> the high-reputation channels, as a bonus I get the low-reputation nodes
>> to pay for it!
>
> So you're saying:
>
> ATTACKER --(no hold fee)--> LOW-REP --(hold fee)--> HIGH-REP
>
> If I were LOW-REP, I'd still charge an unknown node a hold fee. I would
> only waive the hold fee for high-reputation nodes. In that case, the
> attacker is still paying for the attack. I may be forced to take a small
> loss on the difference, but at least the larger part of the pain is felt by
> the attacker. The assumption is that this is sufficient enough to deter the
> attacker from even trying.

No, because HIGH-REP == ATTACKER and LOW-REP pays.

> I guess your concern is with trying to become a routing node? If nobody
> knows you, you'll be forced to pay hold fees but can't attract traffic if
> you charge hold fees yourself. That indeed means that you'll need to be
> selective with whom you accept htlcs from. Put limits in place to control
> the expenditure. Successful forwards will earn a routing fee which could
> compensate for the loss in hold fees too.

"Be selectinve with whom you accept HTLCs from"... it always comes back
to incentives to de-anonymize the network :(

> I think this mechanism can create interesting dynamics on the network and
> eventually reach an equilibrium that is still healthy in terms of
> decentralization and privacy.

I suspect that if you try to create a set of actual rules for nodes
using actual numbers, I think you'll find you enter a complexity spiral
as you try to play whack-a-mole on all the different ways you can
exploit it.

(This is what happened every time I tried to design a peer-penalty
system).

Cheers,
Rusty.


Re: [Lightning-dev] [RFC] Simplified (but less optimal) HTLC Negotiation

2020-10-14 Thread Rusty Russell
Christian Decker  writes:
> I wonder if we should just go the tried-and-tested leader-based
> mechanism:
>
>  1. The node with the lexicographically lower node_id is determined to
> be the leader.
>  2. The leader receives proposals for changes from itself and the peer
> and orders them into a logical sequence of changes
>  3. The leader applies the changes locally and streams them to the peer.
>  4. Either node can initiate a commitment by proposing a `flush` change.
>  5. Upon receiving a `flush` the nodes compute the commitment
> transaction and exchange signatures.
>
> This is similar to your proposal, but does away with turn changes (it's
> always the leader's turn), and therefore reduces the state we need to
> keep track of (and re-negotiate on reconnect).

But now you need to be able to propose two kinds of things, which is
actually harder to implement: update-from-you and update-from-me.  This
is a deeper protocol change.

And you don't get the benefit of the turn-taking approach, which is that
you can have a known state for fee changes.  Even if you change it to
have opener always the leader, it still has to handle the case where
incoming changes are not allowed under the new fee regime (and similar
issues for other dynamic updates).

> The downside is that we add a constant overhead to one side's
> operations, but since we pipeline changes, and are mostly synchronous
> during the signing of the commitment tx today anyway, this comes out to
> 1 RTT for each commitment.

Yeah, it adds 1RTT to every hop on the network, vs my proposal which
adds just over 1/2 RTT on average.

> On the other hand a token-passing approach (which I think is what you
> propose) require a synchronous token handover whenever a the direction
> of the updates changes. This is assuming I didn't misunderstand the turn
> mechanics of your proposal :-)

Yes, but it alternates because that's optimal for a non-busy channel
(since it's usually "Alice adds htlc, Bob completes the htlc").

Cheers,
Rusty.


Re: [Lightning-dev] [RFC] Simplified (but less optimal) HTLC Negotiation

2020-10-14 Thread Rusty Russell
Bastien TEINTURIER  writes:
> It's a bit tricky to get it right at first, but once you get it right you
> don't need to touch that
> code again and everything runs smoothly. We're pretty close to that state,
> so why would we want to
> start from scratch? Or am I missing something?

Well, if you've implemented a state-based approach then this is simply a
subset of that so it's simple to implement (I believe, I haven't done it
yet!).

But with a synchronous approach like this, we can do dynamic protocol
updates at any time without having a special "stop and drain" step.

For example, you can decrease the amount of HTLCs you accept, without
worrying about the case where there are HTLCs being added right now.  This
solves a similar outstanding problem with update_fee.

Cheers,
Rusty.


Re: [Lightning-dev] Hold fees: 402 Payment Required for Lightning itself

2020-10-12 Thread Rusty Russell
Joost Jager  writes:
> This hold fee could be: lock_time * (fee_base + fee_rate * htlc_value).
> fee_base is in there to compensate for the usage of an htlc slot, which is
> a scarce resource too.
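
To make that formula concrete, a toy evaluation with made-up parameter
values (units assumed to be msat and minutes):

# hold_fee = lock_time * (fee_base + fee_rate * htlc_value)
lock_time_minutes = 60
fee_base = 1             # msat per minute, for the htlc slot
fee_rate = 0.000001      # per msat of htlc value, per minute
htlc_value = 1_000_000   # msat (1000 sat)

hold_fee = lock_time_minutes * (fee_base + fee_rate * htlc_value)
print(hold_fee)          # 120.0 msat for holding 1000 sat for an hour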

...
> 
> In both cases the sender needs to trust its peer to not steal the payment
> and/or artificially delay the forwarding to inflate the hold fee. I think
> that is acceptable given that there is a trust relation between peers
> already anyway.
>
> A crucial thing is that these hold fees don't need to be symmetric. A new
> node for example that opens a channel to a well-known, established routing
> node will be forced to pay a hold fee, but won't see any traffic coming in
> anymore if it announces a hold fee itself. Nodes will need to build a
> reputation before they're able to command hold fees. Similarly, routing
> nodes that have a strong relation may decide to not charge hold fees to
> each other at all.

I can still establish channels to various low-reputation nodes, and then
use them to grief a high-reputation node.  Not only do I get to jam up
the high-reputation channels, as a bonus I get the low-reputation nodes
to pay for it!

Operators of high reputation nodes can even make this profitable; doubly
so, since they eliminate the chance of any of those low-reputation nodes
ever getting to be high reputation (and thus competing)?

AFAICT any scheme which penalizes the direct peer creates a bias against
forwarding unknown payments, thus is deanonymizing.

> I'd also like to encourage everyone to prioritize this spam/jam issue and
> dedicate more time to solving it. Obviously there is a lot more to do in
> Lightning, but I am not sure if we can afford to wait for the real
> adversaries to show up on this one.

Agreed.  It's a classic "it's not actually on fire *right now*" problem,
so it does keep getting pushed back.

Cheers,
Rusty.


[Lightning-dev] [RFC] Simplified (but less optimal) HTLC Negotiation

2020-10-12 Thread Rusty Russell
Hi all,

Our HTLC state machine is optimal, but complex[1]; the Lightning
Labs team recently did some excellent work finding another place the spec
is insufficient[2].  Also, the suggestion for more dynamic changes makes it
more difficult, usually requiring forced quiescence.

The following protocol returns to my earlier thoughts, with cost of
latency in some cases.

1. The protocol is half-duplex, with each side taking turns; opener first.
2. It's still the same form, but it's always one-direction so both sides
   stay in sync.
       update+->
       commitsig->
       <-revocation
       <-commitsig
       revocation->
3. A new message pair "turn_request" and "turn_reply" let you request
   when it's not your turn.
4. If you get an update in reply to your turn_request, you lost the race
   and have to defer your own updates until after peer is finished.
5. On reconnect, you send two flags: send-in-progress (if you have
   sent the initial commitsig but not the final revocation) and
   receive-in-progress (if you have received the initial commitsig
   but not received the final revocation).  If either is set,
   the sender (as indicated by the flags) retransmits the entire
   sequence.
   Otherwise, (arbitrarily) opener goes first again.
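
A minimal sketch of the turn rules in steps 1-4 (my own pseudocode;
reconnect handling from step 5 is omitted):

class Turn:
    def __init__(self, we_are_opener):
        self.our_turn = we_are_opener     # rule 1: opener goes first
        self.deferred = []

    def send(self, update):
        if self.our_turn:
            return [update]
        self.deferred.append(update)
        return ["turn_request"]           # rule 3: ask when it's not your turn

    def on_peer(self, msg):
        if msg == "turn_reply":           # we got the turn
            self.our_turn = True
            out, self.deferred = self.deferred, []
            return out
        # rule 4: an update instead of turn_reply means we lost the race;
        # keep our own updates deferred until the peer is finished
        return []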

Pros:
1. Way simpler.  There is only ever one pair of commitment txs for any
   given commitment index.
2. Fee changes are now deterministic.  No worrying about the case where
   the peer's changes are also in flight.
3. Dynamic changes can probably happen more simply, since we always
   negotiate both sides at once.

Cons:
1. If it's not your turn, it adds 1 RTT latency.

Unchanged:
1. Database accesses are unchanged; you need to commit when you send or
   receive a commitsig.
2. You can use the same state machine as before, but one day (when
   this would be compulsory) you'll be able to significantly simplify;
   you'll need to record the index at which HTLCs were changed
   (added/removed) in case peer wants you to rexmit though.

Cheers,
Rusty.

[1] This is my fault; I was persuaded early on that optimality was more
important than simplicity in a classic nerd-snipe.
[2] https://github.com/lightningnetwork/lightning-rfc/issues/794


[Lightning-dev] c-lightning release v0.9.1: The Antiguan BTC Maximalist Society

2020-09-15 Thread Rusty Russell
We're pleased to announce the 0.9.1 release of c-lightning, named by Jon
Griffiths.

https://github.com/ElementsProject/lightning/releases/tag/v0.9.1

This is a significant release with major bugfixes to multi-part payments
and various notable speedups and improvements across the board.

*Did you know*: c-lightning deprecates features with 6 months warning, and
you can set allow-deprecated-apis=false to test?

Highlights for Users

* The sending of multi-part payments has seen a lot of work, covering
  more corner cases and generally becoming much more robust.

* New official plugins create commands multiwithdraw and
  multifundchannel to easily produce a single transaction which does
  more than one thing; these use the PSBT plumbing created for v0.9.0.
  
* We produce far less log spam when log-level is set to debug, so if
  you've avoided setting that before, I recommend trying now.
   
* Startup checks that bitcoind is the correct version, and relays
  transactions

* Builtin plugins are now nominated as important, and you can nominate
  others as important too. The daemon will stop if these fail.

* You can now build a postgres-only installation, without sqlite3.

Highlights for the Network

* Our invoices now supply more than one routehint if we think you'll
  need to use multi-part-payments.

* We prune channels which are not updated in both directions every 2
  weeks.

* Our default CLTV expiry has increased to 34 blocks, or 18 if we're the
  final node, as per updated specification recommendations
  (https://github.com/lightningnetwork/lightning-rfc/pull/785)

Highlights for Developers

* PSBT APIs fleshed out with utxopsbt and locktime arguments.

* Plugins can easily mark commands and options deprecated.

* The new channel_state_changed notification lets plugins easily track
  channel behavior.

More details can be found at

https://github.com/ElementsProject/lightning/blob/v0.9.1/CHANGELOG.md

Thanks to everyone for their contributions and bug reports; please keep
them coming.

Since 0.9.0, we've had 391 commits from 15 different authors.
A special thanks goes to the 3 first time contributors:

Matt Whitlock @whitslack
Sergi Delgado Segura @sr-gi
Moller40 @Moller40

Cheers,
Christian, Rusty, ZmnSCPxj, and Lisa


Re: [Lightning-dev] Dynamic Commitments: Upgrading Channels Without On-Chain Transactions

2020-08-22 Thread Rusty Russell
Olaoluwa Osuntokun  writes:
> After getting some feedback from the Lightning Labs squad, we're thinking
> that it may be better to make the initial switch over double-opt-in, similar
> to the current `shutdown` message flow. So with this variant, we'd add two
> new messages: `commit_switch` and `commit_switch_reply` (placeholder
> names). We may want to retain the "initiator" only etiquette for simplicity,
> but if we want to allow both sides to initiate then we'll need to handle
> collisions (with a randomized back off possibly).

(Sorry for long delay catching up with backlog).

Yeah, modelling on the shutdown flow makes more sense to me, due to
simplicity.

I think we will end up with a linear progression of channel types (type
n+1 is always preferred over type n).  This has the benefit of
*reducing* the test matrix at some point by dropping older formats.

(You can't drop older format completely without an onchain event, of
course, since you need to be able to penalize ancient commit txs.
Though perhaps you just pregen penalty txs for those cases and behave like
a watchtower, maybe even not bothering about HTLCs?)

I think inventing a new commitment type numbering scheme is unnecessary:
just use existing feature bits and define what upgrades are permissable.

I send `commit_switch` with features, you send `commit_switch` with
features, we do feature matching to determine new features for channel.
You can easily figure out the intersection: if one requires a feature
the other doesn't offer, it's a noop (upgrade failure).  Similarly, if
the combination offers no new features, it's a noop.
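
Sketched in Python (my own rendering, treating offered and required
features as explicit sets rather than odd/even bit pairs):

def match_upgrade(ours_offered, ours_required,
                  theirs_offered, theirs_required, current):
    if ours_required - theirs_offered or theirs_required - ours_offered:
        return None          # one side requires what the other doesn't offer
    agreed = ours_offered & theirs_offered
    if not (agreed - current):
        return None          # combination offers nothing new: also a noop
    return current | agreed  # new feature set for the channel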

You can't add HTLCs after you've sent `commit_switch`.  You can add
again (under the new rules) once:
1. You've both sent and received `commit_switch`.
2. You have no outstanding HTLCs (in either direction).

This means we don't have to worry about the case where we both propose
upgrades at once, it Just Works.  It's also Just Works to always send on
reconnect, and simply echo your current features if you receive an
unexpected `commit_switch`.

---
I'd like to Upgrade The World to anchor_outputs, so maybe cleanest would
be:

1. Release supports anchor_outputs (odd).
2. Release supports upgrading to anchor_outputs.
3. Release requires anchor_outputs (even), unilaterally closes channels
   w/o (ideally very few!).

We need a feature bit for upgrades, since we don't want to stop the flow
if they don't respond to commit_switch (i.e. it should be even).

Anyone working on this right now?

Cheers,
Rusty.


Re: [Lightning-dev] [bitcoin-dev] RBF Pinning with Counterparties and Competing Interest

2020-04-28 Thread Rusty Russell
"David A. Harding via bitcoin-dev"  
writes:
> To avoid the excessive wasting of bandwidth.  Bitcoin Core's defaults
> require each replacement pay a feerate of 10 nBTC/vbyte over an existing
> transaction or package, and the defaults also allow transactions or
> packages up to 100,000 vbytes in size (~400,000 bytes).  So, without
> enforcement of BIP125 rule 3, an attacker starting at the minimum
> default relay fee also of 10 nBTC/vbyte could do the following:
>
> - Create a ~400,000 bytes tx with feerate of 10 nBTC/vbyte (1 mBTC total
>   fee)
>
> - Replace that transaction with 400,000 new bytes at a feerate of 20
>   nBTC/vbyte (2 mBTC total fee)
>
> - Perform 998 additional replacements, each increasing the feerate by 10
>   nBTC/vbyte and the total fee by 1 mBTC, using a total of 400 megabytes
>   (including the original transaction and first replacement) to
>   ultimately produce a transaction with a feerate of 10,000 nBTC/vbyte
>   (1 BTC total fee)
>
> - Perform one final replacement of the latest 400,000 byte transaction
>   with a ~200-byte (~150 vbyte) 1-in, 1-out P2WPKH transaction that pays
>   a feerate of 10,010 nBTC/vbyte (1.5 mBTC total fee)
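
Those numbers check out; a quick arithmetic sanity check:

tx_vbytes = 100_000                           # ~400,000 raw bytes
replacements = 999                            # steps of 10 nBTC/vbyte each
bandwidth = 400_000 * (replacements + 1)      # ~400 MB relayed
final_feerate = 10 * (replacements + 1)       # 10,000 nBTC/vbyte
total_fee_nbtc = final_feerate * tx_vbytes    # 1e9 nBTC = 1 BTC
print(bandwidth, final_feerate, total_fee_nbtc / 1e9)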

To be fair, if the feerate you want is 100x the minimum permitted, you
can always use 100x as much bandwidth as necessary without extra cost.
If everyone (or some major tx producers) were to do that, it would suck.

To fix this properly, you really need to aggressively delay processing
(thus propagation) of transactions which aren't likely to be in the next
(few?) blocks.  This is a more miner incentive compatible scheme.

However, I realize this is a complete rewrite of bitcoind's logic, and
I'm not volunteering to do it!

Cheers,
Rusty.


Re: [Lightning-dev] Proof-of-closure as griefing attack mitigation

2020-04-05 Thread Rusty Russell
Hi ZmnSCPxj,

This is a rework of the old unwrap-the-onion proposal, with
some important bits missing.

> Secondly, C needs to prove that the channel it is willing to close involves 
> the payment attempt, and is not some other channel closure that it is 
> attempting to use to fulfill its own soft timeout.
> Since the unilateral close transaction *is* the proof-of-closure, B (and A) 
> can inspect the transaction outputs and see (with some additional data from 
> C) that one of the outputs is to an HTLC that matches the payment hash.
>
> Thus, B (and A) can believe that the proof-of-closure proves that whoever is 
> presenting it is free of wrongdoing, as whoever is actually causing the delay 
> has been punished (by someone being willing to close a channel with the 
> culprit), and that the proof-of-closure commits to this particular payment 
> attempt and no other (because it commits to a particular payment hash).

As you note below, the payment might be considered dust, or an
unresponsive peer has not yet acked the HTLC.

My previous proposal was to limit the damage somewhat by requiring that
C offer a signed list of some limited number of HTLCs it is claiming
were caught, alongside the closure proof (you can merkle this, but
that's a detail).  That closure claim gets socialized, and if there are
multiple different claim lists for the tx then C is a bad actor and we
no longer respect its closure proof.

You also missed how the timeout would work, which is important.  How
long does node N wait for a proof?  In my construction, it's 30 seconds,
plus another 30 seconds for each decryption of the onion it
receives.

Otherwise, you can't know how long you've got to provide this closure
proof, or how long to wait for it.

In addition, for closure proofs to work, nodes need to agree on what is
a valid, standard, high-enough-fee commitment transaction.

Cheers,
Rusty.


Re: [Lightning-dev] Blind paths revisited

2020-03-10 Thread Rusty Russell
ZmnSCPxj  writes:
> Good morning Rusty, et al.,
>
>
>> Note that this means no payment secret is necessary, since the incoming
>> `blinding` serves the same purpose. If we wanted to, we could (ab)use
>> payment_secret as the first 32-bytes to put in Carol's enc1 (i.e. it's
>> the ECDH for Carol to decrypt enc1).
>
> I confess to not reading everything in detail, but it seems to me that, with 
> payment point + scalar and path decorrelation, we need to establish a secret 
> with each hop anyway (the blinding scalar for path decorrelation), so if you 
> need a secret per hop, possibly this could be reused as well?

Indeed, this could be used the same way, though for that secret it can
simply be placed inside the onion rather than passed alongside.

Cheers,
Rusty.


[Lightning-dev] Blind paths revisited

2020-03-09 Thread Rusty Russell
I recently hit a dead-end on rendezvous routing; the single-use
requirement is a showstopper for offers (which are supposed to be static
and reusable).

Fortunately, t-bast has a scheme for blinded paths (see
https://gist.github.com/t-bast/9972bfe9523bb18395bdedb8dc691faf ) so
I've been examining that more closely.  I think it can be simplified
to use more standard primitives.

The problem: Alice wants to present Mallory with a path (Carol, Bob,
Alice) for which he can create an onion, which is obscured in some way,
but can be unobscured by the various nodes.  Mallory should be forced to
use the entire path.

Alice can give Mallory two ECDH blobs to place inside the per-hop
payload to establish shared secrets with Bob and Carol.  But crucially,
Bob needs the secret *before* he can unwrap the onion, so the ECDH
blob for the next peer needs to be sent alongside the onion itself.

t-bast proposed using the secret to XOR the scid, but Christian
suggested it's more powerful to encrypt a general payload.

What does this leave us with?

1. A new invoice letter `b`.  Encodes
   1 or more pubkey/feebase/feeprop/cltvdelta/features/encblob.

2. An additional (tlv of course) field to update_add_htlc, `blinding`.

3. New `tlv_payload` field `encblob` (varlen).

4. ECDH on incoming `blinding` to get a shared secret which tells
   this node how to tweak its nodeid to decrypt onion, and also how to
   decrypt `encblob`.  This gives a tlv, which presumably contains
   `short_channel_id` as well as `blinding`.

5. Use `blinding` for the next update_add_htlc.

6. If you get an error from downstream and you sent `blinding`, turn it
   into your own error for maximum obfuscation.  Perhaps a new
   "blinded_path_error"?  Obviously does not include a channel_update :)

So, if you get an invoice `b`, with path starting at (known) Carol:

Carol/1/1/9/""/enc1
  Bob'/1/1/9/""/enc2
[Optional: decoy hops...]

Payer constructs the onion to get to Carol as normal, then:

Carol: No `blinding` in incoming HTLC, but once it decrypts the
   onion, she sees `encblob` (value enc1).  Uses first 32
   bytes of `enc1` as `blinding`: do ECDH to get SS1, uses
   SS1 to decrypt rest to get next scid and `blinding`, send
   `blinding` with update_add_htlc to Bob.

Bob: Gets `blinding` from update_add_htlc. ECDH -> SS2.  Tweak
   own key with SS2 to decode onion.  Use SS2 to decrypt
   `enc2` to get next scid and blinding.  Send `blinding`
   with update_add_htlc to Alice.

Alice: Gets `blinding` from update_add_htlc. ECDH -> SS3.  Tweak
   own key with SS3 to decode onion.  If it put in decoy
   hops, this onion won't be terminal, but otherwise it can
   be treated as terminal iff `blinding` is the expected value
   for this invoice.
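
A hop-side sketch of that flow (my own pseudocode: ecdh() and the
keystream are stand-ins for the real EC Diffie-Hellman and encblob
cipher, and the tlv parsing is elided):

import hashlib

def ecdh(node_privkey, blinding):
    # stand-in for real ECDH between our node key and the `blinding` point
    return hashlib.sha256(node_privkey + blinding).digest()

def keystream(key, n):
    # stand-in for the real encblob stream cipher
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:n]

def process_hop(node_privkey, blinding, encblob):
    ss = ecdh(node_privkey, blinding)
    onion_tweak = hashlib.sha256(ss).digest()   # tweak our nodeid for the onion
    plaintext = bytes(a ^ b for a, b in zip(encblob, keystream(ss, len(encblob))))
    # plaintext is a tlv holding short_channel_id and the next `blinding`,
    # which we send alongside the peeled onion in update_add_htlc
    return onion_tweak, plaintext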

Note that this means no payment secret is necessary, since the incoming
`blinding` serves the same purpose.  If we wanted to, we could (ab)use
payment_secret as the first 32-bytes to put in Carol's enc1 (i.e. it's
the ECDH for Carol to decrypt enc1).

Thoughts?
Rusty.


Re: [Lightning-dev] A proposal for up-front payments.

2020-02-29 Thread Rusty Russell
Anthony Towns  writes:
> Aside from those philosophical complaints, seems to me the simplest
> attack would be:
>
>   * route 1000s of HTLCs from your node A1 to your node A2 via different,
> long paths, using up the total channel capacity of your A1/A2 nodes,
> with long timeouts
>   * have A2 offer up a transaction claiming that was the channel
> close to A3; make it a real thing if necessary, but it's probably
> fake-able
>   * then leave the HTLCs open until they time out, using up capacity
> from all the nodes in your 1000s of routes. For every satoshi of
> yours that's tied up, you should be able to tie up 10-20sat of other
> people's funds
>
> That increases the cost of the attack by one on-chain transaction per
> timeout period, and limits the attack surface by how many transactions
> you can get started/completed within whatever the grace period is, but
> it doesn't seem a lot better than what we have today, unless onchain
> fees go up a lot.

Interestingly, I think your "reverse commitment signing" proposal would
mean the close tx will have the HTLC within it, so this attack is not
possible?  (Modulo handling dust HTLCs, which won't show up in the
commitment tx).

Previously I suggested the node simply send a (signed) list of up to N
additional HTLCs (this reduces batching to N, so make it at least 16).
This is gossiped, and if you get conflicting versions, the channel break
is considered invalid, so (as always) the previous channel has to break.

>> >   A->B: here's a HTLC, locked in
>> >   B->C: HTLC proposal
>> >   C->B: sure: updated commitment with HTLC locked in
>> >   B->C: great, corresponding updated commitment, plus revocation
>> >   C->B: revocation
>> Interesting; this adds a trip, but not in latency (since C can still
>> count on the HTLC being locked in at step 3).
>> I don't see how it helps B though?  It still ends up paying A, and C
>> doesn't pay anything?
>
> The updated commitment has C paying B onchain; if B doesn't receive that
> by the time the grace period's about over, B can cancel the HTLC with A,
> and then there's statemachine complexity for B to cancel it with C if
> C comes alive again a little later.

I thought C paid per unit time, so it wouldn't pay up-front?  You're
suggesting it pays the max in the commitment, and then it gets some back
in the normal case of things going right?

>> It forces a liveness check of C, but TBH I dread rewriting the state
>> machine for this when we can just ping like we do now.
>
> I'd be surprised if making musig work doesn't require a dread rewrite
> of the state machine as well, and then there's PTLCs and eltoo...

Hmm, PTLCs and eltoo don't.  Musig requires some mods to the protocol,
but the state machine changes are trivial.

>> >> There's an old proposal to fast-fail HTLCs: Bob sends a new message "I
>> >> would fail this HTLC once it's committed, here's the error" 
>> > Yeah, you could do "B->C: proposal, C->B: no way!" instead of "sure" to
>> > fast fail the above too. 
>> > And I think something like that's necessary (at least with my view of how
>> > this "keep the HTLC open" payment would work), otherwise B could send C a
>> > "1 microsecond grace period, rate of 3e11 msat/minute, HTLC for 100 sat,
>> > timeout of 2016 blocks" and if C couldn't reject it immediately would
>> > owe B 50c per millisecond it took to cancel.
>> Well, surely grace period (and penalty rate) are either fixed in the
>> protocol or negotiated up-front, not per-HTLC.
>
> I think the "keep open rate" should depend on how many nodes have
> already been in the route (the more hops it's gone through, the more
> funds/channels you're tying up by holding onto the HTLC, so the more
> you should pay), while the grace period should depend on how many nodes
> there are still to go in the route (it needs to be higher to let each of
> those nodes deduct their delta from it). So I think you *should* expect
> those to change per HTLC than you're forwarding, as those factors will
> be different for different HTLCs.

In theory, but I could lie about both, and it's very undesirable to
communicate these things anyway.  I think it might make it worse, not
better, than having a fixed (per-msat?) rate.

Cheers,
Rusty.


Re: [Lightning-dev] Direct Message draft

2020-02-23 Thread Rusty Russell
Christian Decker  writes:
> Rusty Russell  writes:
>
>> I like it!  The lack of "reply" function eliminates all storage
>> requirements for the intermediaries.  Unfortunately it's not currently
>> possible to fit the reply onion inside the existing onion, but I know
>> Christian has a rabbit in his hat for this?
>
> I think circular payment really means an onion that is
>
>> A -> ... -> B -> ... -> A
>
> and not a reply onion inside of a forward onion.
>
> The problem with the circular path is that the "recipient" cannot add
> any reply without invalidating the HMACs on the return leg of the
> onion. The onion is fully predetermined by the sender, any malleability
> introduced in order to allow the recipient to reply poses a threat to
> the integrity of the onion routing, e.g., it opens us up to probing by
> fiddling with parts of the onion until the attacker identifies the
> location the recipient is supposed to put his reply into.
>
> As Rusty mentioned I have a construction of the onion routing packet
> that allows us to compress it in such a way that it fits inside of the
> payload itself.

I think this has the same problem though, that there's no way Alice can
send Bob an onion to use with an arbitrary message?

> Another advantage is that the end-to-end payload is not covered by the
> HMACs in the header, meaning that the recipient can construct a reply
> without having to modify (and invalidate) the routing onion. I guess
> this is going back to the roots of the Sphinx paper :-)

Good point, and it's trivial.  The paper suggests the payload be "final
key" followed by the desired data, providing a simple validation scheme.

We could potentially generalize the HTLC messages like this, but it's
unnecessary at this point.

Thanks,
Rusty.


Re: [Lightning-dev] A proposal for up-front payments.

2020-02-23 Thread Rusty Russell
Anthony Towns  writes:
> On Fri, Feb 21, 2020 at 12:35:20PM +1030, Rusty Russell wrote:
>> And if there is a grace period, I can just gum up the network with lots
>> of slow-but-not-slow-enough HTLCs.
>
> Well, it reduces the "gum up the network for  blocks" to "gum
> up the network for  seconds", which seems like a pretty
> big win. I think if you had 20 hops each with a 1 minute grace period,
> and each channel had a max_accepted_htlcs of 30, you'd need 25 HTLCs per
> second to block 1000 channels (so 2.7% of the 36k channels 1ml reports),
> so at the very least, successfully performing this attack would be
> demonstrating lightning's solved bitcoin's transactions-per-second
> limitation?
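
(That arithmetic checks out; a quick sketch using exactly the numbers
quoted, nothing else assumed:)

    hops, grace_secs = 20, 60                  # 20 hops, 1-minute grace each
    slots_needed = 1000 * 30                   # 1000 channels * max_accepted_htlcs
    slot_secs_per_htlc = hops * grace_secs     # one HTLC holds 20 slots for ~60s each
    print(slots_needed / slot_secs_per_htlc)   # 25.0 HTLCs per second
    print(1000 / 36000)                        # ~2.7% of the network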

But the comparison here is not with the current state, but with the
"best previous proposal we have", which is:

1. Charge an up-front fee for accepting any HTLC.
2. Will hang-up after grace period unless you either prove a channel
   close, or gain another grace period by decrypting onion.

(There is an obvious extension to this, where you pay another HTLC
first which covers the (larger) up-front fee for the "I know the next
HTLC is going to take a long time" case).

That proposal is simpler, and covers this case quite nicely.

> at which point the best B can do is unilaterally close the B/C channel
> with their pre-HTLC commitment, but they still have to wait for that to
> confirm before they can safely cancel the HTLC with A, and that will
> likely take more than whatever the grace period is, so B will be losing
> money on holding fees.
>
> Whereas:
>
>   A->B: here's a HTLC, locked in
>
>   B->C: HTLC proposal
>   C->B: sure: updated commitment with HTLC locked in
>   B->C: great, corresponding updated commitment, plus revocation
>   C->B: revocation
>
> means that if C goes silent before B receives a new commitment, B can
> cancel the HTLC with A with no risk (B can publish the old commitment
> still even if the new arrives later, and C can only publish the pre-HTLC
> commitment), and if C goes silent after B receives the new commitment, B
> can drop the new commitment to the blockchain and pay A's fees out of it.

Interesting; this adds a trip, but not in latency (since C can still
count on the HTLC being locked in at step 3).

I don't see how it helps B though?  It still ends up paying A, and C
doesn't pay anything?

It forces a liveness check of C, but TBH I dread rewriting the state
machine for this when we can just ping like we do now.

>> There's an old proposal to fast-fail HTLCs: Bob sends a new message "I
>> would fail this HTLC once it's committed, here's the error" 
>
> Yeah, you could do "B->C: proposal, C->B: no way!" instead of "sure" to
> fast fail the above too. 
>
> And I think something like that's necessary (at least with my view of how
> this "keep the HTLC open" payment would work), otherwise B could send C a
> "1 microsecond grace period, rate of 3e11 msat/minute, HTLC for 100 sat,
> timeout of 2016 blocks" and if C couldn't reject it immediately would
> owe B 50c per millisecond it took to cancel.

Well, surely grace period (and penalty rate) are either fixed in the
protocol or negotiated up-front, not per-HTLC.

Cheers,
Rusty.


Re: [Lightning-dev] A proposal for up-front payments.

2020-02-20 Thread Rusty Russell
Anthony Towns  writes:
> On Tue, Feb 18, 2020 at 10:23:29AM +0100, Joost Jager wrote:
>> A different way of mitigating this is to reverse the direction in which the
>> bond is paid. So instead of paying to offer an htlc, nodes need to pay to
>> receive an htlc. This sounds counterintuitive, but for the described jamming
>> attack there is also an attacker node at the end of the route. The attacker
>> still pays.
>
> I think this makes a lot of sense. I think the way it would end up working
> is that the further the route extends, the greater the payments are, so:
>
>   A -> B   : B sends A 1msat per minute
>   A -> B -> C : C sends B 2msat per minute, B forwards 1msat/min to A
>   A -> B -> C -> D : D sends C 3 msat, etc
>   A -> B -> C -> D -> E : E sends D 4 msat, etc
>
> so each node is receiving +1 msat/minute, except for the last one, who's
> paying n msat/minute, where n is the number of hops to have gotten up to
> the last one. There's the obvious privacy issue there, with fairly
> obvious ways to fudge around it, I think.
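
A sketch of those flows, to make the net effect explicit (a toy model
of the quoted scheme, with node 0 as the sender):

    def net_flows(hops: int):
        # Node i pays i msat/min back to node i-1, for i = hops .. 1.
        net = [0] * (hops + 1)
        for i in range(hops, 0, -1):
            net[i] -= i
            net[i - 1] += i
        return net

    # A->B->C->D->E: every node nets +1 msat/min except E, who pays 4.
    print(net_flows(4))   # [1, 1, 1, 1, -4]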

Yes, it needs to scale with distance to work at all.  However, it has
the same problems as other upfront schemes: how does E know to send
4msat per minute?

> I think it might make sense for the payments to have a grace period --
> ie, "if you keep this payment open longer than 20 seconds, you have to
> start paying me x msat/minute, but if it fulfills or cancels before
> then, it's all good".

But whatever the grace period, I can just rely on knowing that B is in
Australia (with a 1 second HTLC commit time) to make that node bleed
satoshis.  I can send A->B->C, and have C fail the htlc after 19
seconds for free.  But B has to send 1msat to A.  B can't blame A or C,
since this attack could come from further away, too.

This attack always seems possible.  Are you supposed to pay immediately
to fail an HTLC?  That makes for a trivial attack, so I guess not.

And if there is a grace period, I can just gum up the network with lots
of slow-but-not-slow-enough HTLCs.

> Maybe this also implies a different protocol for HTLC forwarding,
> something like:
>
>   1. A sends the HTLC onion packet to B
>   2. B decrypts it, makes sure it makes sense
>   3. B sends a half-signed updated channel state back to A
>   4. A accepts it, and forwards the other half-signed channel update to B
>
> so that at any point before (4) Alice can say "this is taking too long,
> I'll start losing money" and safely abort the HTLC she was forwarding to
> Bob to avoid paying fees; while only after (4) can she start the time on
> expecting Bob to start paying fees that she'll forward back. That means
> 1.5 round-trips before Bob can really forward the HTLC on to Carol;
> but maybe it's parallelisable, so Bob/Carol could start at (1) as soon
> as Alice/Bob has finished (2).

We added a ping-before-commit[1] to avoid the case where B has disconnected
and we don't know yet; we have to assume an HTLC is stuck once we send
commitment_signed.  This would be a formalization of that, but I don't
think it's any better?

There's an old proposal to fast-fail HTLCs: Bob sends a new message "I
would fail this HTLC once it's committed, here's the error" and if Alice
gets it before she sends the commitment_signed, she sends a new
"unadd_htlc" message first.  This theoretically allows Bob to do the
same: optimistically forward it, and unadd it if Alice doesn't commit
with it in time.[2]

Cheers,
Rusty.

[1] Technically, if we haven't seen any traffic from the peer in the
last 30 seconds, we send a ping and wait.

[2] This seems like a speedup, but it only is if someone fails the HTLC.
We still need to send the commitment_signed back and forth (w/
revoke and ack) before committing to it in the next hop.


Re: [Lightning-dev] Direct Message draft

2020-02-20 Thread Rusty Russell
René Pickhardt  writes:
> Hey Rusty,
>
> I was very delighted to read your proposal. But I don't see how you prevent
> message spam. If I understand you correctly you suggest that I can
> communicate to any node along a path of peer connections (not necessarily
> backed by payment channels but kind of only known through channel
> announcements of gossip) via onions and these onions which are sent within
> a new gossip message are not bound to any fees or payments.

It doesn't handle spam, but OTOH it's far cheaper than the HTLC system
(which also doesn't handle spam).  I'd be delighted to add an up-front
1msat payment, as soon as we can figure out how to do that.

The non-persistent storage costs for remembering how to forward the
reply are the 200 bytes in my implementation (one pointer to the source
peer, two SHA256s, and the shared secret).

> Let's assume I just missed some spam prevention mechanism or that we can
> fix them. Do I understand the impact of your suggestion correctly that I
> could use this protocol to
>
> 1.) create a fee free rebalancing protocol? Because I could also attach a
> new lightning message inside the onions that would allow nodes without
> direct peer connection to set up a circular rebalancing path.
> 2.) have the ability to communicate with nodes further away than just my
> peers - for example to exchange information for pathfinding and / or
> autopilots?

Indeed.  I haven't prevented it, precisely because we *can't*.  This
proposal merely gives a more efficient method than encoding via HTLCs.

Cheers,
Rusty.


Re: [Lightning-dev] Direct Message draft

2020-02-20 Thread Rusty Russell
ZmnSCPxj  writes:
> Good morning Rusty,
>
> A concern I have brought up in the past is that if this is more than just a 
> single request-response, i.e. if this is a conversation where Alice sends to 
> Bob, Bob sends back to Alice, Alice sends back to Bob, and so on, then if the 
> same path is used each time Alice sends to Bob, the timing from Bob response 
> to Alice to the next Alice sends to Bob can help an intermediate node guess 
> how far away Alice is from itself.
> Obviously the timing from Alice sending to Bob and Bob replying gives a hint 
> as well as to the distance the intermediate node is to Bob already.
>
> It may be good to at least recommend that direct messages use different paths 
> if they are part of a larger conversation between the two parties.

This already applies to HTLCs, no?

> Would it not be better to create a circular path?
> By this I mean, Alice constructs an onion that overall creates a path from 
> herself to Bob and back, ensuring different nodes on the forward and return 
> directions.
> The onion hop at Bob reveals that Bob is the chosen conversation partner, and 
> Bob forwards its reply via the onion return path (that Alice prepared herself 
> to get back to her via another path).

I like it!  The lack of "reply" function eliminates all storage
requirements for the intermediaries.  Unfortunately it's not currently
possible to fit the reply onion inside the existing onion, but I know
Christian has a rabbit in his hat for this?

> After Alice receives the first message from Bob the circular "circuit" is 
> established and they can continue to communicate using the same circuit: 
> timing attacks are now "impossible" since Alice and Bob can be anywhere along 
> the circle, even if two of the nodes in the circuit are surveillors 
> cooperating with each other, the timing information they get is the distance 
> between the surveillor nodes.
>
> Of course, if a node in the circular path drops the circuit is disrupted, so 
> any higher-level protocols on top of that should probably be willing to 
> resume the conversation on another circular circuit.

My immediate purpose for this is for "offers" which cause an invoice
request, followed by an invoice reply.  This will probably be reused
once for the payment itself.  2 uses is not sufficient to justify
setting up a circuit, AFAICT.

> I believe I even tied this to an HTLC in an attempt to provide a
> spam-limit as well:
> https://lists.linuxfoundation.org/pipermail/lightning-dev/2019-November/002294.html

This part was deeply unclear.  Eventually we will have to charge
up-front for forwarding HTLCs (say 5% of existing fee, plus 1msat), and
then we could use the same scheme with lesser amounts (say, 1msat) for
forwarding messages.

But I have been unable to come up with an upfront scheme which doesn't
leak information badly.

The best I can do is some hashcash scheme, combined with the ability to
buy a single-use token to weaken it.  Under load, a node would raise
their hashcash difficulty, and you could either find another route,
grind your onion more to meet it, or send a payment for a token from
that node which would let your HTLC through: the preimage could even be
the XOR of some secret you send with the HTLC, and a shachain key which
gives you 1000 tokens, and you can use them in order, etc.

(Really want to use some kind of Chaumian token here, but it's probably
overkill).
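
As a sketch of the hashcash half of that idea (purely illustrative: the
proof format, the difficulty knob, and hashing the raw onion are all
invented here):

    import hashlib, itertools

    def grind(onion: bytes, difficulty_bits: int) -> int:
        # Find a nonce so SHA256(onion || nonce) has difficulty_bits
        # leading zero bits; expected work doubles per extra bit.
        target = 1 << (256 - difficulty_bits)
        for nonce in itertools.count():
            h = hashlib.sha256(onion + nonce.to_bytes(8, 'big')).digest()
            if int.from_bytes(h, 'big') < target:
                return nonce

    def verify(onion: bytes, nonce: int, difficulty_bits: int) -> bool:
        h = hashlib.sha256(onion + nonce.to_bytes(8, 'big')).digest()
        return int.from_bytes(h, 'big') < (1 << (256 - difficulty_bits))

    nonce = grind(b'example onion bytes', 16)   # ~65k hashes on average
    assert verify(b'example onion bytes', nonce, 16)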

> Finally: what does this improve over, say, requiring that all
> Lightning nodes have a Tor .onion address and just doing direct
> messaging over Tor?

That would be far better!  But that's not happening: lnurl over https is
happening.  Using lightning to tunnel messages is a strict improvement
over that, at least.

Cheers!
Rusty.


Re: [Lightning-dev] DRAFT: interactive tx construction protocol

2020-02-20 Thread Rusty Russell
lisa neigut  writes:
>> With PoDLE this would not be possible I think, as you would not be able
>> to open the PoDLE commitment with the other node as the target (if we go
>> with the modified PoDLE which also commits to which node an opening is for,
>> to prevent the pouncing venus flytrap attack).
>
> Good question. It should be possible to do multi-channel open even with the
> PoDLE signature committing to a node_id.
>
> - An initiator can use the same utxo (h2) as their proof for multiple
> peers; the signatures passed to each peer will have to commit to that
> specific peer's node_id, however.
> - The revised PoDLE signature commitment requires every initiator to
> include at least one of their own inputs in the tx. Attempting to initiate
> an additional open etc using someone else's utxo's won't work (this is the
> pouncing venus flytrap attack which we're preventing). The initiator
> including at least one input is expected behavior, at least in the open
> case, since the opener has to cover the fees for the funding output.
> - Ideally, a node would remove the PoDLE TLV data from any 'forwarded'
> `tx_add_inputs` that isn't the input they're proving for, to prevent
> leaking information about which inputs belong to other parties. I say
> ideally here because even if you fail to do this, the peer can iterate
> through all the provided commitment proofs until one of them
> matches/verifies with the upfront provided PoDLE.

Yes, you need to PoDLE your own contribution I think, which means you
need one UTXO per contributor in the N-way open whom you want to
contribute a UTXO.

Consider Alice trying to create a single-tx to open a channel with both
Bob and Carol, and wants *both* of them to also contribute.

Alice sends her own UTXO1 with proof to Bob, he shares his UTXO back.
Alice sends her own UTXO2 with proof to Carol, she shares a UTXO back.
Alice sets the lower bit on the serial_id from Bob and sends to Carol,
and sets the lower bit on the serial_id from Carol and sends to Bob.  She
similarly reflects everything from Carol to Bob and vice-versa, and
sends both of them the two "channel opening" outputs.

Now all parties have the same tx; unless Bob and Carol chose the same
serial_ids (spec says random, but Bob and Carol don't get along).  But
this is trivially identifiable, and you give up on mutual opening.
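
A tiny sketch of the reflection step (assuming, per the draft, that
relaying a peer's entry means flipping the bottom bit of its
serial_id):

    def relay(serial_id: int) -> int:
        # Flip the bottom bit when forwarding a peer's input/output
        # onward as part of your own contribution.
        return serial_id ^ 1

    bob_serial, carol_serial = 0x4A6E90D2, 0x91B37F58   # chosen at random
    to_carol = relay(bob_serial)     # Bob's UTXO, as Carol sees it
    to_bob = relay(carol_serial)     # Carol's UTXO, as Bob sees it
    # A "collision" means literal equality, so it can only arise from a
    # peer clashing with itself (or Bob and Carol refusing to cooperate).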

Cheers,
Rusty.


[Lightning-dev] Direct Message draft

2020-02-20 Thread Rusty Russell
Hi all!

It seems that messaging over lightning is A Thing, and I want to
use it for the offers protocol anyway.  So I've come up with the
simplest proposal I can, and even implemented it.

Importantly, it's unreliable.  Our implementation doesn't
remember across restarts, limits us to 1000 total remembered forwards
with random drop, and the protocol doesn't (yet?) include a method for
errors.

This is much friendlier on nodes than using an HTLC (which
requires 2 round trips, signature calculations and db commits), so is an
obvious candidate for much more than just invoice requests.

The WIP patch is small enough I've pasted it below, but it's
also at https://github.com/lightningnetwork/lightning-rfc/pull/748

diff --git a/01-messaging.md b/01-messaging.md
index 40d1909..faa5b18 100644
--- a/01-messaging.md
+++ b/01-messaging.md
@@ -56,7 +56,7 @@ The messages are grouped logically into five groups, ordered 
by the most signifi
   - Setup & Control (types `0`-`31`): messages related to connection setup, 
control, supported features, and error reporting (described below)
   - Channel (types `32`-`127`): messages used to setup and tear down 
micropayment channels (described in [BOLT #2](02-peer-protocol.md))
   - Commitment (types `128`-`255`): messages related to updating the current 
commitment transaction, which includes adding, revoking, and settling HTLCs as 
well as updating fees and exchanging signatures (described in [BOLT 
#2](02-peer-protocol.md))
-  - Routing (types `256`-`511`): messages containing node and channel 
announcements, as well as any active route exploration (described in [BOLT 
#7](07-routing-gossip.md))
+  - Routing (types `256`-`511`): messages containing node and channel 
announcements, as well as any active route exploration or forwarding (described 
in [BOLT #7](07-routing-gossip.md))
   - Custom (types `32768`-`65535`): experimental and application-specific 
messages
 
 The size of the message is required by the transport layer to fit into a 
2-byte unsigned int; therefore, the maximum possible size is 65535 bytes.
diff --git a/04-onion-routing.md b/04-onion-routing.md
index 8d0f343..84eff9a 100644
--- a/04-onion-routing.md
+++ b/04-onion-routing.md
@@ -51,6 +51,7 @@ A node:
 * [Legacy HopData Payload Format](#legacy-hop_data-payload-format)
 * [TLV Payload Format](#tlv_payload-format)
 * [Basic Multi-Part Payments](#basic-multi-part-payments)
+* [Directed Messages](#directed-messages)
   * [Accepting and Forwarding a Payment](#accepting-and-forwarding-a-payment)
 * [Payload for the Last Node](#payload-for-the-last-node)
 * [Non-strict Forwarding](#non-strict-forwarding)
@@ -62,6 +63,7 @@ A node:
   * [Returning Errors](#returning-errors)
 * [Failure Messages](#failure-messages)
 * [Receiving Failure Codes](#receiving-failure-codes)
+  * [Directed Message Replies](#directed-message-replies)
   * [Test Vector](#test-vector)
 * [Returning Errors](#returning-errors)
   * [References](#references)
@@ -366,6 +368,13 @@ otherwise meets the amount criterion (eg. some other 
failure, or
 invoice timeout), however if it were to fulfill only some of them,
 intermediary nodes could simply claim the remaining ones.
 
+### Directed Messages
+
+Directed messages have an onion with an alternate `hop_payload`
+format.  If this node is not the intended recipient, the payload is
+simply a 33-byte pubkey indicating the next recipient.  Otherwise, the
+payload is the message for this node.
+
 # Accepting and Forwarding a Payment
 
 Once a node has decoded the payload it either accepts the payment locally, or 
forwards it to the peer indicated as the next hop in the payload.
@@ -1142,6 +1151,11 @@ The _origin node_:
   - MAY use the data specified in the various failure types for debugging
   purposes.
 
+## Directed Message Replies
+
+Directed message replies are encoded the same way as failure messages,
+except the contents is a directed message for the originator.
+
 # Test Vector
 
 ## Returning Errors
diff --git a/07-routing-gossip.md b/07-routing-gossip.md
index ec1a8f0..4c2b836 100644
--- a/07-routing-gossip.md
+++ b/07-routing-gossip.md
@@ -1,4 +1,4 @@
-# BOLT #7: P2P Node and Channel Discovery
+# BOLT #7: P2P Node and Channel Discovery and Directed Messages
 
 This specification describes simple node discovery, channel discovery, and 
channel update mechanisms that do not rely on a third-party to disseminate the 
information.
 
@@ -31,6 +31,7 @@ multiple `node_announcement` messages, in order to update the 
node information.
   * [HTLC Fees](#htlc-fees)
   * [Pruning the Network View](#pruning-the-network-view)
   * [Recommendations for Routing](#recommendations-for-routing)
+  * [Directed Messages](#directed-messages)
   * [References](#references)
 
 ## Definition of `short_channel_id`
@@ -1103,6 +1104,37 @@ A->D's `update_add_htlc` message would be:
 And D->C's `update_add_htlc` would again be the same as B->C's direct 

[Lightning-dev] [RELEASE] c-lightning 0.8.1: "Channel to the Moon"

2020-02-17 Thread Rusty Russell
We're pleased to announce 0.8.1, named by @vasild (who was a new
committer as of last release!)

https://github.com/ElementsProject/lightning/releases/tag/v0.8.1

*Highlights for Users*

- We now support gifting msat to the peer when opening a channel, via
  push_msat, providing a brand new way to lose money!

- Invoice routehints can be overridden using exposeprivatechannels: try
  setting to [] to eliminate all of them to fit your invoice in twitter
  messages!

- Preliminary support for plugins hooks which can replace the default
  bitcoin-cli with other blockchain querying methods (API may change in
  future releases though!).

*Highlights for the Network*

- Plugins can set additional feature bits, for more experimentation.

- Wallet withdraw transactions now set nLocktime, making them blend in
  more with other wallets.

- Prevent a case where grossly unbalanced channels could become unusable.

More details can be found in

https://github.com/ElementsProject/lightning/blob/v0.8.1/CHANGELOG.md

Thanks to everyone for their contributions and bug reports; please keep
them coming!

Since 0.8.0 we've had 257 commits from 17 different authors, with 5
first-time contributors:

Niklas Claesson @NickeZ
@minidisk1147
Ken Sedgwick @ksedgwic
Zoe Faltibà @jwilkins
Glen Cooper @jwilkins

Cheers,
Rusty, Lisa, Christian, and ZmnSCPxj.


Re: [Lightning-dev] DRAFT: interactive tx construction protocol

2020-02-12 Thread Rusty Russell
ZmnSCPxj via Lightning-dev  writes:
> Good morning niftynei,
>
>
>> Rusty had some suggestions about how to improve the protocol messages for 
>> this, namely adding a serial_id to the inputs and outputs, which can then be 
>> reused for deletions. 
>>
>> The serial id can then also be used as the ordering heuristic for 
>> transaction inputs during construction (replacing current usage of BIP69). 
>> Inputs can be shared amongst peers by flipping the bottom bit of the 
>> serial_id before relaying them to another peer (as your own).
>
> What happens if the initiator deliberately provides serial IDs 0x1, 0x3,  
> while the acceptor naively provides serial IDs from `/dev/urandom`?

This is a feature, and one you might need to use if you have some
SIGHASH_SINGLE or other weirdness for one input.

> Then the balance of probability is that the initiator inputs and outputs are 
> sorted before the acceptor.
> Now, this is probably not an issue, since the initiator and acceptor both 
> know which inputs and outputs are theirs and not theirs, so they could just 
> reveal this information to anyone, so an actor providing such lousy serial 
> IDs is just hurting its own privacy relative to blockchain analysts, so 
> probably will not happen in practice.
>
> My initial reaction was to propose adding a secret-sharing round where the 
> resulting key is XORed to each serial ID before sorting by the XORed serial 
> ID, but this might be too overweight, and again the initiator is only hurting 
> its own privacy, and the two participants already know whose money is whose 
> anyway

>> > - nLocktime is always set to 0x
>>
>> - If a blockheight to be used as nLocktime is communicated in the initiation 
>> step, it is set to blockheight-6; otherwise set to zero.
>
> I am unsure what is the purpose of this minus 6.
>
> If you fear blockheight disagreements, this is probably a good time to 
> introduce block headers.
> So for example if the acceptor thinks the initiator blockheight is too high, 
> it could challenge the initiator to give block headers from its known 
> blockheight to the initiator blockheight.
> If the acceptor thinks the initiator blockheight is too low, it could provide 
> block headers itself as proof.
> This could be limited so that gross differences in blockheight are outright 
> rejected by the acceptor (it could `error` the temporary channel ID rather 
> than accept it).

Yes, I would just have the initiator specify nLocktime directly, just
like feerate.  If you don't like it, don't contribute to the tx
construction.

> This is SPV, but neither side is actually making or accepting a payment 
> *yet*, just synchronizing clocks, so maybe not as bad as normal SPV.
>
> Massive chain reorgs cannot reduce blockheight, only increase it (else
> the reorg attempt fails in the first place)

This is not quite true, due to difficulty adjustments.  It's true in
practice, however, and not relevant since you'd just have to wait one
more block.

>> - Serial ids should be chosen at random
>> - For multiparty constructions, the initiator MUST flip the bottom bit of 
>> any received inputs before relaying them to a peer.
>>
>> - Collisions of serial ids between peers is a protocol error
>
> I suppose we should define collision to mean "equal in all bits except the 
> lowest bit".

No, literally equal.  i.e. you can only make this error by clashing with
yourself.

Cheers,
Rusty.


Re: [Lightning-dev] Decoy node_ids and short_channel_ids

2020-02-12 Thread Rusty Russell
Bastien TEINTURIER  writes:
> Hi Rusty,
>
> Thanks for the answer, and good luck with c-lightning 0.8.1-rc1 ;)

... Now -rc2.  I actually had a RL use for lightning (OMG!), and sure
enough found a bug.

> I've been thinking more about improving my scheme to not require any sender
> change, but I don't think that's possible at the moment. As with all
> Lightning
> tricks though, once we have Schnorr then it's really easy to do.
> Alice simply needs to use `s * d_a` as her "preimage" (and the payment point
> becomes the P_I Bob needs). That may depend on the exact multi-hop locks
> construction we end up using though, so I'm not 100% sure about that yet.

I was starting to think this whole thing was of marginal benefit: note
that solving "private channels need a temp scid" is far simpler[1].

But since your scheme extends to rendezvous, it's much more tempting!

We would use this for normal private channels as well as private routes
aka new rendezvous.  Even better, this would be a replacement for
current route hints (which lack the ability to specify feature bits, which
we would add here, and are also grossly inefficient if you just want to
use it for Routeboost[2]).

Propose we take `z` as the bolt11 letter, because even the French
don't pronounce it in "rendez-vous"!

Then use TLV inside:[3]

* `z` (2): `data_length` variable. One or more entries containing extra
  routing information; there may be more than one `z` field.  Each entry
  looks like:
   * `tlv_len` (8 bits)
   * `rendezvous_tlv` (tlv_len bytes)

1. tlvs: `rendezvous_tlv`
2. types:
    1. type: 1 (`pubkey`)
    2. data:
        * [`point`:`nodeid`]
    1. type: 2 (`short_channel_id`)
    2. data:
        * [`short_channel_id`:`short_channel_id`]
    1. type: 3 (`fee_base_msat`)
    2. data:
        * [`tu32`:`fee_base_msat`]
    1. type: 4 (`fee_proportional_millionths`)
    2. data:
        * [`tu32`:`fee_proportional_millionths`]
    1. type: 5 (`cltv_expiry_delta`)
    2. data:
        * [`tu16`:`cltv_expiry_delta`]
    1. type: 6 (`features`)
    2. data:
        * [`...*byte`:`features`]

That probably adds 6 bytes per entry, but worth it I think.
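
A sketch of packing one such entry (simplified: single-byte lengths,
which suffices at these sizes, and `tu` standing in for the truncated
uint encoding):

    def tu(n: int) -> bytes:
        # Truncated unsigned int: big-endian with leading zeros dropped.
        return n.to_bytes(8, 'big').lstrip(b'\x00')

    def tlv(t: int, value: bytes) -> bytes:
        return bytes([t, len(value)]) + value

    rendezvous_tlv = b''.join([
        tlv(1, bytes(33)),                 # pubkey (placeholder nodeid)
        tlv(2, (42).to_bytes(8, 'big')),   # short_channel_id
        tlv(3, tu(1000)),                  # fee_base_msat
        tlv(4, tu(100)),                   # fee_proportional_millionths
        tlv(5, tu(144)),                   # cltv_expiry_delta
    ])
    entry = bytes([len(rendezvous_tlv)]) + rendezvous_tlv   # tlv_len prefix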

Cheers,
Rusty.

[1] Add a new field to 'funding_locked': "private_scid".  If both sides
support 'option_private_scid' (?) then the "real" scid is no longer
valid for routing, and we use the private scid.

[2] It's enough to give the scid(s) in this case indicating where you
have incoming capacity.

[3] I'm really starting to dislike my bolt11 format.  We should probably
start afresh with a TLV-based one, where signature covers the hash
of each entry (so they can be easily externalized!), but that's a
big, unrelated task.


Re: [Lightning-dev] Decoy node_ids and short_channel_ids

2020-02-09 Thread Rusty Russell
Bastien TEINTURIER  writes:
>> But Mallory can do the same attack, I think.  Just include the P_I from
>> the wrong invoice for Bob.
>
> Good catch, that's true, thanks for keeping me honest there! In that case
> my proposal
> would need the same mitigation as yours, Bob will need to include the
> `scid` he received
> in `update_add_htlc` (this is in fact not that hard once we allow TLV
> extensions on every
> message).

Yes, I've added this to the PR.  Which gives a new validation path, I
think:

## Figuring out what nodeid to use to decode onion

1. Look up scid from HTLC; if it didn't include one, use default.
2. Look up payment_hash; if no invoice is found, use default.
3. If invoice specified this scid, get nodeid and use that.
4. ... and refuse to forward the HTLC (it must terminate here).
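
In pseudo-Python, that path looks something like this (the names are
mine, not from any implementation):

    def key_for_onion(htlc, invoices, node_key):
        # 1. scid from update_add_htlc, if the peer included one.
        scid = getattr(htlc, 'scid', None)
        # 2. invoice lookup by payment_hash.
        inv = invoices.get(htlc.payment_hash)
        # 3 & 4. Only if this invoice named this scid do we switch to its
        # temporary nodeid, and then the HTLC must terminate here.
        if scid is not None and inv is not None and scid in inv.scids:
            return inv.temp_nodeid_key, False   # (decode key, may_forward)
        return node_key, True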

My plan is to add an argument to `invoice` which is an array of one or
more scids: we get a temporary scid for each peer and use them in the
routehints.  We also assign a random temporary nodeid to that invoice.

The above algo is designed to ensure we behave like any other node which
has no idea about this nodeid if Mallory:

1. tries to use a temporary node id on a normal channel to us.
2. tries to pay another invoice using this temporary node id.
3. tries to probe our outgoing channels using this routing hint
   (I think we should probably ban forwarding to private channels,
   too, for similar reasons).

---

Note that with any self-assigned SCID scheme, Alice has to respond to
unknown scids in update_add_htlc with some BADONION code (which makes
*Bob* give Carol an error response, since Alice can't without revealing
her identity).

With Bob-assigned SCIDs, Alice simply needs to make him unallocate
it before forgetting the invoice, so she will never see old
invoices.

(All these schemes give limited privacy, of course: Bob knows who Alice
is, and fingerprinting and liveness attacks are always possible).

>> I'm extremely nervous about custodial lightning services restricting
>> what they will pay to.  This is not theoretical: they will come under
>> immense KYC pressure in the near future, which means they cannot pay
>> arbitrary invoices.
>
> That's a very good point, thanks for raising this. However I believe that
> there are (and will be) enough
> non-custodial wallets to let motivated users pay whatever they want. Users
> can even run their own
> node to pay such invoices if needed.

Not if ln_strike (no, the other one!) is the future.

> If you are using a custodial wallet and KYC pressure kicks in, then
> regardless of that feature law may
> require users to completely reveal who they are paying, so even normal
> payments wouldn't protect
> them, don't you think? Regulation could for example disallow paying via
> unannounced channels entirely
> (or require you to show the funding tx associated to your unannounced
> channel).

Actually, as long as the same method is required for both normal private
channels (which will all use non-tx-based short_channel_ids in the near
future I hope!), I don't really mind.  I expect such payments to become
significant, and as long as paying to a temporary id and paying to a
private channel looks identical, it's too draconian to ban.  A business
would probably meet any KYC requirements by simply asking the user
(perhaps over a certain amount, etc).

(I've put my implementation on hold for a moment while I'm supposed to
be releasing 0.8.1-rc1 RSN!)

Cheers,
Rusty.


Re: [Lightning-dev] Decoy node_ids and short_channel_ids

2020-02-04 Thread Rusty Russell
Bastien TEINTURIER  writes:
> Hey again,
>
> Otherwise Mallory gets two invoices, and wants to know if they're
>> actually the same node.  Inv1 has nodeid N1, routehint Bob->C1, Inv2 has
>> nodeid N2, routehint Bob->C2.
>
> I think this attack is interesting. AFAICT my proposal defends against this
> because of the way
> `payment_secret` and `decoy_key` are both used to derive the `decoy_scid`
> (but don't trust me, do
> verify that I'm not missing something).
>
> If Mallory doesn't use both the right `decoy_node_id` and `payment_secret`
> to compute `P_I`, Bob
> will not decode that to a valid real `scid` and will return an
> `unknown_next_peer` which is good
> for privacy.

But Mallory can do the same attack, I think.  Just include the P_I from
the wrong invoice for Bob.

> It seems to me that
> https://github.com/lightningnetwork/lightning-rfc/pull/681 cannot defend
> against this attack. If both invoices are currently valid, Bob will forward
> an HTLC that uses N1
> with C2 (because Bob has no way of knowing N1 from the onion, for privacy
> reasons).
> The only way I'd see to avoid is would be that Alice needs to share her
> `decoy_node_id`s with
> Bob (and the mapping to a `decoy_scid`) which means more state to
> manage...but maybe I'm just
> missing a better mitigation?

No, Bob can include the scid he used in the update_add_htlc message, so
Alice can check.

I'm extremely nervous about custodial lightning services restricting
what they will pay to.  This is not theoretical: they will come under
immense KYC pressure in the near future, which means they cannot pay
arbitrary invoices.

Thus my preference for a system which doesn't add any requirements on
the payer.

Cheers,
Rusty.


Re: [Lightning-dev] Decoy node_ids and short_channel_ids

2020-02-03 Thread Rusty Russell
Rusty Russell  writes:
> Bastien TEINTURIER  writes:
>> That's of course a solution as well. Even with that though, if Alice opens
>> multiple channels to each of her Bobs,
>> she should use Tor and a different node_id each time for better privacy.
>
> There are two uses for this feature (both of which I started implementing):
>
> 1. Simply always use a temporary id when you have a private channel, to
>obscure your onchain footprint.  This is a nobrainer.
>
> 2. For an extra layer of transience, apply a new temporary id and new
>nodeid on every invoice *which applies only for that invoice*.
>
> But implementing the latter securely is fraught!
>
> Firstly, you need to brute-force the onion against your N keys.  Secondly,
> if you use a temporary key but *don't* end up using the HTLC to
> pay an invoice matching that key, you *MUST* pretend you couldn't
> decrypt the onion!  This applies to all code paths between the two,
> including parsing the TLV, etc: they must ALL return
> WIRE_INVALID_ONION_HMAC.
>
> Otherwise, Mallory can get an invoice, then send malformed payments to
> Alice using the transient key in the invoice and see if she decrypts it.

Actually, that was too hasty.  You can use the payment_hash as a
fastpath:

1. Look up invoice using payment_hash.

2. If there is an invoice, and it has a temporary id associated with it,
   try using that to decrypt the onion.  If that works, and the onion is
   on the final hop, and the TLV decodes, and the payment_secret is
   correct, you can go back and use this temporary key to decrypt the onion.
   Otherwise, go back and use the normal node key.

That's still quite a bit of tricky code though...

Cheers,
Rusty.


Re: [Lightning-dev] Decoy node_ids and short_channel_ids

2020-02-03 Thread Rusty Russell
Bastien TEINTURIER  writes:
> That's of course a solution as well. Even with that though, if Alice opens
> multiple channels to each of her Bobs,
> she should use Tor and a different node_id each time for better privacy.

There are two uses for this feature (both of which I started implementing):

1. Simply always use a temporary id when you have a private channel, to
   obscure your onchain footprint.  This is a nobrainer.

2. For an extra layer of transience, apply a new temporary id and new
   nodeid on every invoice *which applies only for that invoice*.

But implementing the latter securely is fraught!

Firstly, you need to brute-force the onion against your N keys.  Secondly,
if you use a temporary key but *don't* end up using the HTLC to
pay an invoice matching that key, you *MUST* pretend you couldn't
decrypt the onion!  This applies to all code paths between the two,
including parsing the TLV, etc: they must ALL return
WIRE_INVALID_ONION_HMAC.

Otherwise, Mallory can get an invoice, then send malformed payments to
Alice using the transient key in the invoice and see if she decrypts it.

And then I realized that Alice can't do this properly without Bob
telling her what the scid he used to route was.

Otherwise Mallory gets two invoices, and wants to know if they're
actually the same node.  Inv1 has nodeid N1, routehint Bob->C1, Inv2 has
nodeid N2, routehint Bob->C2.

Now Mallory uses Bob->C2 to pay to N1 for Inv1.  If it works, he knows
it's the same node issuing both invoices.

So, update_add_htlc needs a new scid field.

At this point, I think we should just add a new channel_flag, which if
you set it (and feature flag is offered) you get assigned random SCID
from the peer in funding_locked.  This overrides your
funding-transaction-based SCID.

That gets the first case for new channels, without adding much
complexity at all.[1]

Thoughts?
Rusty.

[1] If we want to cover existing channels, we need a new "give me a
replacement scid" msg and reply.  But it can be idempotent (you
only ever get one replacement).


Re: [Lightning-dev] Decoy node_ids and short_channel_ids

2020-02-02 Thread Rusty Russell
Bastien TEINTURIER  writes:
> We can easily get rid of (1.) by leveraging the `payment_secret`. The
> improved scheme is:
>
> * Alice draws a random `decoy_key`
> * Alice computes the corresponding `decoy_node_id = decoy_key * G`
> * Alice draws a random `payment_secret`
> * Alice computes `decoy_short_channel_id = H(payment_secret * decoy_key *
> bob_node_id) xor short_channel_id`
> * Alice uses the `decoy_key` to sign the invoice
> * Carol recovers `decoy_node_id` from the invoice signature
> * Carol includes `P_I = payment_secret * decoy_node_id` in the onion
> payload for Bob
> * Bob can compute `short_channel_id = H(bob_private_key * P_I) xor
> decoy_short_channel_id`
>
> But I don't see how to get rid of (2.). If anyone has a clever idea on how
> to do that, I'd love to hear it!
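
For what it's worth, the group arithmetic above does work out; a toy
sketch over secp256k1 (assumptions: the Python `ecdsa` package, H taken
as SHA256 of the point's x-coordinate truncated to 64 bits, and scalars
multiplied mod the group order):

    import hashlib
    from ecdsa import SECP256k1
    from ecdsa.util import randrange

    G, N = SECP256k1.generator, SECP256k1.order

    def h64(point) -> int:
        # Hash a point down to 64 bits, to XOR against a short_channel_id.
        d = hashlib.sha256(point.x().to_bytes(32, 'big')).digest()
        return int.from_bytes(d[:8], 'big')

    b = randrange(N); B = G * b   # Bob's node key and node_id
    d = randrange(N); D = G * d   # Alice's decoy_key and decoy_node_id
    s = randrange(N)              # payment_secret
    real_scid = 0x0001020304050607

    decoy_scid = h64(B * (s * d % N)) ^ real_scid   # goes in the route hint
    P_I = D * s                   # Carol puts this in Bob's onion payload

    # Bob: b*P_I = s*d*(b*G) = s*d*B, so he recovers the real scid.
    assert h64(P_I * b) ^ decoy_scid == real_scid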

I really don't want a special marker on Carol; she needs to just pay
like normal.  Not just because it's simple, but because it means that
Carol can use a custodial wallet without having to flag the payment as
somehow special.

AFAICT, having Bob assign scids is the only viable way to do this.  The
current proposal limits to one scid at a time, but it could be extended
to allow multiple scids.

(I'm seeking a clever way that Bob can assign them and trivially tell
which ID is assigned to which peer, but I can't figure it out, so I
guess Bob keeps a mapping and restricts each peer to 256 live scids?).
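
The mapping itself is cheap; a sketch of the Bob-side bookkeeping (the
256-per-peer cap and random 64-bit scids are just the assumptions from
the paragraph above):

    import os

    class DecoyScidTable:
        MAX_LIVE_PER_PEER = 256

        def __init__(self):
            self.scid_to_peer = {}
            self.live_count = {}

        def assign(self, peer_id: bytes) -> int:
            if self.live_count.get(peer_id, 0) >= self.MAX_LIVE_PER_PEER:
                raise RuntimeError("peer has too many live scids")
            scid = int.from_bytes(os.urandom(8), 'big')
            while scid in self.scid_to_peer:                # retry collisions
                scid = int.from_bytes(os.urandom(8), 'big')
            self.scid_to_peer[scid] = peer_id
            self.live_count[peer_id] = self.live_count.get(peer_id, 0) + 1
            return scid

        def peer_for(self, scid: int):
            return self.scid_to_peer.get(scid)   # None for unknown scids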

I've updated and somewhat simplified the PR now.

Cheers,
Rusty.


Re: [Lightning-dev] eltoo towers and implications for settlement key derivation

2019-12-02 Thread Rusty Russell
ZmnSCPxj  writes:
> Good morning Rusty,
>
>> > Hi all,
>> > I recently revisited the eltoo paper and noticed some things related
>> > watchtowers that might affect channel construction.
>> > Due to NOINPUT, any update transaction can spend from any other, so
>> > in theory the tower only needs the most recent update txn to resolve
>> > any dispute.
>> > In order to spend, however, the tower must also produce a witness
>> > script which when hashed matches the witness program of the input. To
>> > ensure settlement txns can only spend from exactly one update txn,
>> > each update txn uses unique keys for the settlement clause, meaning
>> > that each state has a unique witness program.
>>
>> I didn't think this was the design. The update transaction can spend
>> any prior, with a fixed script, due to NOINPUT.
>>
>> The settlement transaction does not use NOINPUT, and thus can only
>> spend the matching update.
>
> My understanding is that this is not logically possible?

You're right, no wonder I missed this problem :(

OK, so we need to change the key(s) every time.  Can we tweak it based
on something the watchtower will know, i.e. something in the update tx
itself?  Obviously not the output, as that would create a circular
dependency.  Is there some taproot thing we can use to insert some
noise in the input?

Cheers,
Rusty.


Re: [Lightning-dev] eltoo towers and implications for settlement key derivation

2019-12-02 Thread Rusty Russell
Conner Fromknecht  writes:
> Hi all,
>
> I recently revisited the eltoo paper and noticed some things related
> watchtowers that might affect channel construction.
>
> Due to NOINPUT, any update transaction _can_ spend from any other, so
> in theory the tower only needs the most recent update txn to resolve
> any dispute.
>
> In order to spend, however, the tower must also produce a witness
> script which when hashed matches the witness program of the input. To
> ensure settlement txns can only spend from exactly one update txn,
> each update txn uses unique keys for the settlement clause, meaning
> that each state has a _unique_ witness program.

I didn't think this was the design.  The update transaction can spend
any prior, with a fixed script, due to NOINPUT.

The settlement transaction does *not* use NOINPUT, and thus can only
spend the matching update.

Cheers,
Rusty.


Re: [Lightning-dev] A proposal for up-front payments.

2019-11-22 Thread Rusty Russell
Bastien TEINTURIER  writes:
> I think there's another alternative than upfront payments to prevent spam,
> which is maybe less
> controversial (but potentially less effective as well - to be investigated).
>
> Why not adapt what has been done with email spam and PoW/merkle puzzles?

If we can't come up with an untraceable scheme, this is what we'll have
to do (i.e. remove the sats component).

Unfortunately botnets are really good at generating these.  That was
always the hashcash flaw (which is why it was never actually used).

Using a dynamic level is possible, too, so it gets harder in case we're
being spam attacked.

Cheers,
Rusty.

> The high-level idea would be that the sender must solve a small PoW puzzle
> *for each intermediate node* and communicate the solution in the onion.
> There are many ways we could do that (a new field in each intermediate hop,
> grinding an HMAC
> prefix, etc) so before going into specifics I only wanted to submit the
> high-level idea.
> What's neat with this is that it's simple, doesn't leak any privacy, and
> avoids having to create a
> node reputation system.
>
> We fight spam by forcing the sender to use some resources (instead of sats).
> Maybe this idea has already been proposed and broken, if that's the case
> I'd love to see the
> discussion if someone can surface it.
>
>
> Cheers,
> Bastien
>
> On Mon, 11 Nov 2019 at 00:32, Rusty Russell  wrote:
>
>> Anthony Towns  writes:
>> > On Fri, Nov 08, 2019 at 01:08:04PM +1030, Rusty Russell wrote:
>> >> Anthony Towns  writes:
>> >> [ Snip summary, which is correct ]
>> >
>> > Huzzah!
>> >
>> > This correlates all the hops in a payment when the route reaches its end
>> > (due to the final preimage getting propogated back for everyone to
>> justify
>> > the funds they claim). Maybe solvable by converting from hashes to ECC
>> > as the trapdoor function?
>>
>> I hadn't thought of this, but yes, once we've eliminated the trivial
>> preimage correlation w/scriptless scripts it'd be a shame to reintroduce
>> it here.
>>
>> We need an accumulator with some strange properties though:
>>
>> 1. Alice provides tokens and a base accumulator.
>> 2. Bob et. al can add these tokens to the accumulator.
>> 3. They can tell if invalid tokens have been added to the accumulator.
>> 4. They can tell how many tokens (alt: each token has a value and they
>>can tell the value sum) have been added.
>> 5. They can't tell what tokens have been added (unless they know all
>>the tokens, which is trivial).
>>
>> Any ideas?
>>
>> > The refund amount propogating back also reveals the path, probably.
>> > Could that be obfusticated by somehow paying each intermediate node
>> > both as the funds go out and come back, so the refund decreases on the
>> > way back?
>> >
>> > Oh, can we make the amounts work like the onion, where it stays constant?
>> > So:
>> >
>> >   Alice wants to pay Dave via Bob, Carol. Bob gets 700 msat, Carol gets
>> >   400 msat, Dave gets 300 msat, and Alice gets 100 msat refunded.
>> >
>> >   Success:
>> > Alice forwards 1500 msat to Bob   (-1500, +1500, 0, 0)
>> > Bob forwards 1500 msat to Carol   (-1500, 0, +1500, 0)
>> > Carol forwards 1500 msat to Dave  (-1500, 0, 0, +1500)
>> > Dave refunds 1200 msat to Carol   (-1500, 0, +1200, +300)
>> > Carol refunds 800 msat to Bob (-1500, +800, +400, +300)
>> > Bob refunds 100 msat to Alice (-1400, +700, +400, +300)
>>
>> Or, on success, upfront payment is fully refunded or not refunded at all
>> (since they get paid by normal fees)?  Either way, no data leak for that
>> case.
>>
>> >   Clean routing failure at Carol/Dave:
>> > Alice forwards 1500 msat to Bob   (-1500, +1500, 0, 0)
>> > Bob forwards 1500 msat to Carol   (-1500, 0, +1500, 0)
>> > Carol says Dave's not talking
>> > Carol refunds 1100 msat to Bob(-1500, +1100, +400, 0)
>> > Bob refunds 400 msat to Alice (-1100, +700, +400, 0)
>> >
>> > I think that breaks the correlation pretty well, so you just need a
>> > decent way of obscuring path length?
>>
>> I don't see how this breaks correlation?
>>
>> > In the uncooperative routing failure case, I wonder if using an ECC
>> > trapdoor and perhaps scriptless scripts, you could make it so Carol
>> > doesn't even get an updated state without revealing the preimage...

Re: [Lightning-dev] [VERY ROUGH DRAFT] BOLT 12: Offers

2019-11-13 Thread Rusty Russell
Yaacov Akiba Slama  writes:
> So we can integrate between them without intermixing the semantics of
> the protocols but we need only to define the interaction points between
> them.
>
> In the previous worflow, the seller can for instance add in the LN
> invoice H(Quotation (UBL)||Order(UBL)||Prepayment Invoice(UBL)), and use
> H(Receipt(UBL)) as preimage. With such a workflow, the UBL documents are
> cryptographically tied to the LN payment.
>
> So the property of UBL of not being machine *handlable* is not altered
> but the LN cryptographic properties are still used to tie the workflow.
>
> Am I missing something?

Sure, people can do this today: simply set your `d` field to "UBL:
".

But it doesn't provide what we want from offers:
1. Does not provide a "static invoice" flow.
2. Does not provide a donation flow.
3. Does not provide a method for wallets to do recurrence.
4. Does not provide end-to-end over LN (i.e. no HTTP(s) requests).

Cheers,
Rusty.


Re: [Lightning-dev] [VERY ROUGH DRAFT] BOLT 12: Offers

2019-11-12 Thread Rusty Russell
Yaacov Akiba Slama  writes:
>   Seller: Quotation (UBL)
>
>   Buyer: Order (UBL)
>
>   Seller: Prepayment Invoice (UBL)
>
>   Seller: Invoice (LN)
>
>   Buyer/Seller: Payment & Ack (LN)
>
>   Buyer: Receipt (UBL)
>
>
> The advantage of such workflow are that we don't need to add any fields
> to the current invoice structure, nor to define in the LN protocol new
> messages like offer or invoice request, nor to intervene in the semantic
> of the business workflow and in the required/optional fields in these
> messages.
 
This would be UBL treating Lightning as a dumb payment layer, which is a
little like faxing email, and not a use case I'd be promoting for
Lightning.

To be clear: the full UBL spec is machine *parsable* but definitely not
designed to be machine *handlable*.  This makes sense, since a machine
cannot generally choose between quotations or interpret general contract
terms.

However, for the simpler (but very common!) case of an offer->purchase
flow, we can define a subset of UBL for which this *can* be done, and a
further-limited subset which must be examined by the user
(e.g. description of goods, price details, shipping info).

In addition, the atomic nature of LN needs to be baked into the
protocol; in LN taking the payment *requires* a cryptographic receipt,
and neutering this property would be horribly short-sighted.

We need to define UBL extensions for the LN fields to tie them all
together (e.g. payment_hash, node_id).  We also need to define a
transport mechanism for these over the Lightning Network.

This is all quite possible!  But it will take time and is a significant
amount of work: I need to be sure that others feel the same way before I
embark on this project.

Cheers,
Rusty.

>> It's also worth noting that, even compressed, none of the UBL examples
>> fit into the 1023 byte limit of the existing invoice format:
>>
>> UBL-Quotation-2.1-Example.xml: 1864 bytes (gz)
>> UBL-Order-2.1-Example.xml: 2515 bytes (gz)
>> UBL-Invoice-2.1-Example.xml: 3163 bytes (gz)
>>
>> Indeed, that Quotation alone requires a 32x32 QR code.
>>
>>>> However, since invoices/offers and UBL are both structures, we
>>>> should have an explicit mapping between the two.  What fields should
>>>> have their own existence in the invoice/offer and what should be in a
>>>> general UBL field is a question we have to think on further.
>>> I agree that we don't want duplication. This is the reason, I propose to 
>>> use only ubl structure and add in the ln standard invoice an ubl 
>>> "opaque" field which will be self-contained and only add in the 
>>> invoice/offer/.. the fields specific to ln.
>> Except we need to go through the UBL spec and indicate exactly what
>> fields are permitted, and which are required.
>>
>> Many UBL fields are not amenable to machine interpretation (eg. note
>> fields).  These must be either explicitly exposed to the buyer (in case
>> the seller uses them) such as shipping conditions, or explicitly
>> forbidden/ignored.
>>
>> This is not a small task, and requires intimate knowledge of the UBL
>> spec.  It's not enough just to make something *look* like UBL.
>>
>> Does anyone have expertise in this area?  Shall we form a sub-group to
>> investigate this properly?
>>
>> Thanks!
>> Rusty.
>>


Re: [Lightning-dev] [PATCH] First draft of option_simplfied_commitment

2019-11-11 Thread Rusty Russell
Joost Jager  writes:
>>
>> > We could
>> > simplify this to a single to_self_delay that is proposed by the
>> > initiator,
>> > but what was the original reason to allow distinct values?
>>
>> Because I didn't fight hard enough for simplicity :(
>>
>
> But the people you were fighting with, what reason did they have? Just
> flexibility in general, or was there an actual use case? Maybe these people
> are reading this email and can comment?

Compromise among the committee meant adding everything to the spec if
there was a conceivable reason for it; the simplicity argument was less
strong then (maybe because we hadn't implemented it all yet!).

> So then because unilateral close is the only way to resolve atm, it is
> correct also in theory that there will never be a commitment tx where the
> non-initiator pays fees? But the point is clear, channels can get stuck.

Yeah.  Generally, it doesn't happen because we insist on a reasonable
balance in the channel, but it's theoretically possible.

>> > If we hard-code a constant, we won't be able to adapt to changes of
>> > `dustRelayFee` in the bitcoin network. And we'd also need to deal with a
>> > peer picking a value higher than that constant for its regular funding
>> > flow
>> > dust limit parameter.
>>
>> Note that we can't adapt to dustRelayFee *today*, since we can't change
>> it after funding (another thing we probably need to fix).
>>
>
> You can't for an existing channel, but at least for a new channel you can
> pick a different value. Which wouldn't be possible if we'd put a fixed
> (anchor) amount in the spec.

That's not really much consolation though for the existing network.

Still Matt assures me that the relay dust limit is not going to change,
so I think we're best off cutting down our test matrix by choosing a
value and putting it directly into the spec.

By my calculations, at minfee it will cost you ~94 satoshis to spend.
Dust limit is 294 for Segwit outputs (basically assuming 3x minfee).
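
Those two numbers fall out of Bitcoin Core's dust rule; a sketch
(assuming a P2WPKH-shaped output, dustRelayFee of 3 sat/vB, and a
1 sat/vB minimum feerate):

    # Dust threshold: serialized output size plus the vsize of a
    # hypothetical segwit input spending it, times 3 sat/vB.
    output_size = 8 + 1 + 22                # value + script length + P2WPKH script = 31
    input_vsize = 36 + 1 + 4 + 107 // 4     # outpoint + scriptSig len + nSequence + witness/4 = 67
    print((output_size + input_vsize) * 3)  # 294 sat
    # At a 1 sat/vB minfee, the spend costs roughly the input vsize plus
    # a share of tx overhead, i.e. on the order of the ~94 sat above.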

So I'm actually happy to say "anchor outputs are 294 satoshi".  These
are simply spendable, and still only $3 each if BTC is $1M.  Lower is
better (as long as we stick with funder-pays), as long as they do
eventually get spent.

>> If we really want to make it adjustable, could we make each side pay for
>> its own; if you can't afford it, you don't get one?  There's no point
>> the funder paying for a fundee-anchor if the fundee has no skin in the
>> game.
>>
>> That reduces the pressure somewhat, I think?
>>
>
> If you can't afford you don't get one, not sure about that. I could open a
> channel, send out the total capacity in an htlc to myself via some other
> hops, force close with a very low commit fee, then pull in the htlc (one
> time the money). The victim then needs to get the commit confirmed to claim
> the money, but there is no anchor unfortunately. I wait for the htlc to
> expire, then anchor down the commit tx and time out the htlc (twice the
> money).

Excellent point.  And the complexity of some "you can only use a little
bit of capacity until I have an anchor too" is worse, so let's stick
with your proposal as the simplest: funder pays for two, always.

>> Or what about we rotate the anchors and nothing else, which (assuming we
>> make it anyone-can-spend-after-N-blocks) reduces the amount of onchain
>> spam if someone completely loses their keys?
>>
>> That's a bigger change, but maybe it's worth it?
> We now have David's great proposal to reuse the funding keys for the anchor
> output. That allows us to always let anyone spend after confirmation,
> because they can reconstruct the spend script. But I think this also means
> that we cannot do rotation on the anchor keys. We need to use the funding
> pubkey as is.

I missed that proposal, thanks!

It's stronger than my scheme, in that it works even if neither anchor is
spent; which, if we keep update_fee, is a distinct possibility.  And
makes the script shorter (my fee calc above assumes this).

We *could* tweak both anchors by the same amount, but then you'd still
need to see one of them to spend the other.

Cheers,
Rusty.


Re: [Lightning-dev] [VERY ROUGH DRAFT] BOLT 12: Offers

2019-11-10 Thread Rusty Russell
Yaacov Akiba Slama  writes:
> Hi Rusty.
>
> On 08/11/2019 05:09, Rusty Russell wrote:
>> Hi Yaacov,
>>  I've been pondering this since reading your comment on the PR!
>>
>>  As a fan of standards, I am attracted to UBL (I've chaired an
>> OASIS TC in the past and have great respect for them); as a fan of
>> simplicity I am not.  Forcing UBL implementation on wallet providers is
>> simply not going to happen, whatever I were to propose.
>
> In fact, using UBL in the LN specification is simpler than trying to
> understand the semantics of each field needed by businesses. You are
> right that using such a standard puts the burden on wallet providers
> instead of LN developers, but as a wallet (breez) provider, I can say that:
>
> 1) Most money transactions (currently in fiat) are between users and
> companies, not between two users. If we want to replace fiat with
> bitcoin, we need to create an infrastructure which can be used by
> businesses. That means that LN needs to be easy to integrate
> into POS systems. So, as a wallet provider who wants to help the
> transition from fiat to bitcoin, I need to be able to support standards
> even if that means I have to implement using/parsing big and
> complicated standards.
>
> For a simple user-to-user transaction, the wallet can decide to use only a
> subset of the fields defined by the standard.
>
> 2) From a technical point of view, it seems that there are already UBL
> libraries in Java and C#. I don't think such a library is hard to write in
> Go, Rust, etc., so every wallet implementation can use them.

That is not the problem.  The problem is that our order flow is simple:

Seller: Offer
Buyer: Invoice Request
Seller: Invoice (or updated Offer)
Buyer/Seller: Payment & Acknowledgement (atomic)

(This could, of course, fit into a larger business flow.)

The closest UBL flow seems to be:

Seller: Quotation
Buyer: Order
Seller: (Prepayment)Invoice (or updated Quotation)

It's also worth noting that, even compressed, none of the UBL examples
fit into the 1023 byte limit of the existing invoice format:

UBL-Quotation-2.1-Example.xml: 1864 bytes (gz)
UBL-Order-2.1-Example.xml: 2515 bytes (gz)
UBL-Invoice-2.1-Example.xml: 3163 bytes (gz)

Indeed, that Quotation alone requires a version-32 QR code.
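
(If you want to reproduce those numbers, a trivial sketch; it assumes
local copies of the UBL 2.1 example files, so the filenames are
illustrative.)

import gzip

INVOICE_LIMIT = 1023  # byte limit of the existing invoice format

for name in ("UBL-Quotation-2.1-Example.xml",
             "UBL-Order-2.1-Example.xml",
             "UBL-Invoice-2.1-Example.xml"):
    with open(name, "rb") as f:
        size = len(gzip.compress(f.read()))
    print(name, size, "bytes (gz)",
          "fits" if size <= INVOICE_LIMIT else "does not fit")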

>>  However, since invoices/offers and UBL are both structures, we
>> should have an explicit mapping between the two.  What fields should
>> have their own existence in the invoice/offer and what should be in a
>> general UBL field is a question we have to think on further.
> I agree that we don't want duplication. This is the reason, I propose to 
> use only ubl structure and add in the ln standard invoice an ubl 
> "opaque" field which will be self-contained and only add in the 
> invoice/offer/.. the fields specific to ln.

Except we need to go through the UBL spec and indicate exactly what
fields are permitted, and which are required.

Many UBL fields are not amenable to machine interpretation (eg. note
fields).  These must be either explicitly exposed to the buyer (in case
the seller uses them), such as shipping conditions, or explicitly
forbidden/ignored.

This is not a small task, and requires intimate knowledge of the UBL
spec.  It's not enough just to make something *look* like UBL.

Does anyone have expertise in this area?  Shall we form a sub-group to
investigate this properly?

Thanks!
Rusty.



Re: [Lightning-dev] [VERY ROUGH DRAFT] BOLT 12: Offers

2019-11-10 Thread Rusty Russell
Ross Dyson  writes:
> Hi Rusty,
>
> We spoke in detail about this after your presentation at LNconf. I'm one of
> the contributors to LNURL so I am a little familiar with what you're trying
> to achieve and am very grateful you're considering implementing something
> similar to the mainnet protocol.
>
> I can only see delivery address being a nightmare for the network or wallet
> providers. If you take a quick look at any Shopify website right now and
> try to buy something to be delivered you will see validation of address
> inputs before accepting payment.
>
> This is the 'expected' UX of consumer applications in 2019. If offers
> fail to validate address inputs correctly, the user will not receive the
> product, will lose money, and will leave a [very] negative review of both
> the wallet-providing and the offer-providing businesses.
>
> Handling these UX expectations will require either the wallet provider or
> the offer provider to validate the inputs before proceeding with the sale.
>
>1. If the offer provider handles validation then the network will have
>to accommodate potentially infinite validation attempts (big no no I 
> assume)
>2. If the wallet provider were to provide the UX for input validation
>they are taking on significant workload to develop a robust address input
>UI, but more importantly the responsibility to correctly validate. There is
>plenty of room to screw up and create a catastrophic user experience.
>
> So I think address input validation is only possible via 2., but I think it
> is too much workload and responsibility to expect from wallet providers.

This is not the area I worry about, TBH, since every shopping website in
existence has implemented address input (and some form of validation).
I'm sure it'll be primitive to start with.

Of course, UBL has a standard 'AddressType' too:


http://docs.oasis-open.org/ubl/os-UBL-2.2/xsd/common/UBL-CommonAggregateComponents-2.2.xsd

> From what I can see, it would not be impossible to bring delivery address
> functionality into offers retroactively after offers was already in prod.
> Perhaps icebox it?

Quite possibly something we can delay; most current goods are virtual
anyway.  However, delivery address standardization would greatly improve
the UX for such things.

Cheers,
Rusty.


Re: [Lightning-dev] A proposal for up-front payments.

2019-11-10 Thread Rusty Russell
Anthony Towns  writes:
> On Fri, Nov 08, 2019 at 01:08:04PM +1030, Rusty Russell wrote:
>> Anthony Towns  writes:
>> [ Snip summary, which is correct ]
>
> Huzzah!
>
> This correlates all the hops in a payment when the route reaches its end
> (due to the final preimage getting propagated back for everyone to justify
> the funds they claim). Maybe solvable by converting from hashes to ECC
> as the trapdoor function?

I hadn't thought of this, but yes, once we've eliminated the trivial
preimage correlation w/scriptless scripts it'd be a shame to reintroduce
it here.

We need an accumulator with some strange properties though:

1. Alice provides tokens and a base accumulator.
2. Bob et. al can add these tokens to the accumulator.
3. They can tell if invalid tokens have been added to the accumulator.
4. They can tell how many tokens (alt: each token has a value and they
   can tell the value sum) have been added.
5. They can't tell what tokens have been added (unless they know all
   the tokens, which is trivial).

Any ideas?

> The refund amount propagating back also reveals the path, probably.
> Could that be obfuscated by somehow paying each intermediate node
> both as the funds go out and come back, so the refund decreases on the
> way back?
>
> Oh, can we make the amounts work like the onion, where it stays constant?
> So:
>
>   Alice wants to pay Dave via Bob, Carol. Bob gets 700 msat, Carol gets
>   400 msat, Dave gets 300 msat, and Alice gets 100 msat refunded.
>
>   Success:
> Alice forwards 1500 msat to Bob   (-1500, +1500, 0, 0)
> Bob forwards 1500 msat to Carol   (-1500, 0, +1500, 0)
> Carol forwards 1500 msat to Dave  (-1500, 0, 0, +1500)
> Dave refunds 1200 msat to Carol   (-1500, 0, +1200, +300)
> Carol refunds 800 msat to Bob (-1500, +800, +400, +300)
> Bob refunds 100 msat to Alice (-1400, +700, +400, +300)

Or, on success, upfront payment is fully refunded or not refunded at all
(since they get paid by normal fees)?  Either way, no data leak for that
case.
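
(To make the quoted success case concrete, a minimal sketch of mine
using those numbers: every hop forwards the same 1500 msat, and each
hop keeps its cut on the way back.)

def transfer(balances, frm, to, amt):
    balances[frm] -= amt
    balances[to] += amt

balances = dict.fromkeys(("alice", "bob", "carol", "dave"), 0)
cuts = {"bob": 700, "carol": 400, "dave": 300}
AMOUNT = 1500  # constant outgoing amount hides your position in the path

# Forward pass: everyone forwards the full amount.
for frm, to in (("alice", "bob"), ("bob", "carol"), ("carol", "dave")):
    transfer(balances, frm, to, AMOUNT)

# Refund pass: keep your cut, pass the remainder back.
refund = AMOUNT
for frm, to in (("dave", "carol"), ("carol", "bob"), ("bob", "alice")):
    refund -= cuts[frm]
    transfer(balances, frm, to, refund)

assert balances == {"alice": -1400, "bob": 700, "carol": 400, "dave": 300}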

>   Clean routing failure at Carol/Dave:
> Alice forwards 1500 msat to Bob   (-1500, +1500, 0, 0)
> Bob forwards 1500 msat to Carol   (-1500, 0, +1500, 0)
> Carol says Dave's not talking
> Carol refunds 1100 msat to Bob(-1500, +1100, +400, 0)
> Bob refunds 400 msat to Alice (-1100, +700, +400, 0)
>
> I think that breaks the correlation pretty well, so you just need a
> decent way of obscuring path length?

I don't see how this breaks correlation?

> In the uncooperative routing failure case, I wonder if using an ECC
> trapdoor and perhaps scriptless scripts, you could make it so Carol
> doesn't even get an updated state without revealing the preimage...

I'm not sure.  We can make it so Carol has Bob's preimage(s), etc, so
that the node which fails doesn't get paid.  I initially thought this
would just make people pair up (fake) nodes, but it's probably not worth
it since their path would be less-selected in that case.

Cheers,
Rusty.


Re: [Lightning-dev] [VERY ROUGH DRAFT] BOLT 12: Offers

2019-11-07 Thread Rusty Russell
Yaacov Akiba Slama  writes:
> Hi Rusty,
>
> It seems that there are two kinds of TLV fields in your proposition:
> 1) LN-specific fields like `num_paths` and `payment_preimage`.
> 2) "Business" fields like `address1` and `currency`.
> I understand the need to define and include the first category, but I
> don't think that we need or can define the second category. These fields
> already exist in software like CRM, ERP, etc., and are well defined by
> standards bodies.
> My suggestion is to have a generic field containing well-defined,
> structured and standardized data. See for instance
> https://en.wikipedia.org/wiki/EDIFACT and/or
> https://en.wikipedia.org/wiki/Universal_Business_Language.

Hi Yaacov,

I've been pondering this since reading your comment on the PR!

As a fan of standards, I am attracted to UBL (I've chaired an
OASIS TC in the past and have great respect for them); as a fan of
simplicity I am not.  Forcing UBL implementation on wallet providers is
simply not going to happen, whatever I were to propose.

We also don't want duplication; what if the "UBL field" were to
say I were selling you a bridge for $1 and the description and amount
fields actually said I was selling you a coffee for $3?

However, since invoices/offers and UBL are both structures, we
should have an explicit mapping between the two.  What fields should
have their own existence in the invoice/offer and what should be in a
general UBL field is a question we have to think on further.

Anyway, you'll have to bear with me as I read this 172 page
standard...

Cheers,
Rusty.

> What do you think?
> PS: Sorry for crossposting here and in 
> https://github.com/lightningnetwork/lightning-rfc/pull/694
> --yas
>
> On 05/11/2019 06:23, Rusty Russell wrote:
>> Hi all,
>>
>>  This took longer than I'd indicated; for that I'm sorry.
>> However, this should give us all something to chew on.  I've started
>> with a draft "BOLT 12" (it might be BOLT 13 by the time it's finalized
>> though!).
>>
>> I've also appended indications where we touch other BOLTs:
>> 1. BOLT 7 gains a message/reply system, encoded like htlc onions and
>> failure messages.
>> 2. BOLT 11 gains a `q` field for quantity; this avoids changing the
>> description when the user requests an invoice for more than one of 
>> something
>> (since changing the description between offer and invoice requires user
>> interaction: it's the *invoice* which you are committing to).
>>
>> There's definite handwaving in here; let's see if you can find it!
>>
>> Cheers,
>> Rusty.
>>
>> # BOLT #12: Offer Protocols for Lightning Payments
>>
>> A higher-level, QR-code-ready protocol for dealing with invoices over
>> Lightning.  There are two simple flows supported: in one, a user gets
>> an offer (`lno...`) and requests an invoice over the lightning
>> network, obtaining one (or an error) in reply.  In the other, a user
>> gets an invoice request (`lni...`), and sends the invoice over the
>> lightning network, retrieving an empty reply.
>>
>> # Table of Contents
>>
>>* [Offers](#offers)
>>  * [Encoding](#encoding)
>>  * [TLV Fields](#tlv-fields)
>>* [Invrequests](#invrequests)
>>  * [Encoding](#encoding)
>>  * [TLV Fields](#tlv-fields)
>>
>> # Offers
>>
>> Offers supply a reader with enough information to request one or more
>> invoices via the lightning network itself.
>>
>> ## Encoding
>>
>> The human-readable part of a Lightning offer is `lno`.  The data part
>> consists of three parts:
>>
>> 1. 0 or more [TLV](01-messaging.md#type-length-value-format) encoded fields.
>> 2. A 32-byte nodeid[1]
>> 3. 64-byte signature of SHA256(hrp-as-utf8 | tlv | nodeid).
>>
>> ## TLV Fields
>>
>> The TLV fields define how to get the invoice, and what it's for.
>> Each offer has a unique `offer_identifier` so the offering node can
>> distinguish different invoice requests.
>>
>> Offers can request recurring payments of various kinds, and specify
>> what base currency they are calculated in (the actual amount will be
>> in the invoice).
>>
>> `additional_data` is a bitfield which indicates what information the
>> invoice requester should (odd) or must (even) supply:
>> 1. Bits `0/1`: include `delivery_address`
>> 2. Bits `2/3`: include `delivery_telephone_number`
>> 3. Bits `4/5`: include `voucher_code`
>> 4. Bits `6/7`: include `refund_proof`
>>
>> `refund_for` indicates an offer for a (whole or part) refund for a
>> previous invoice, as indicated by the `payment_hash`.

Re: [Lightning-dev] [VERY ROUGH DRAFT] BOLT 12: Offers

2019-11-07 Thread Rusty Russell
ZmnSCPxj  writes:
> First, please confirm my understanding of the message flow.
> Suppose I have a donation offer on my website and Rusty wants to donate to me.
> Then:
>
>  ZmnSCPxj                               Rusty
>     |                                    |
>     +-------------- `lno` -------------->+  (via non-Lightning communication channel e.g. https)
>     |                                    |
>     +<-------- `invoice_request` --------+  (via a normal Rusty->ZmnSCPxj payment)
>     |                                    |
>     +-------- `invoice_or_error` ------->+  (by failing the above payment and embedding in the failure blob)
>     |                                    |
>     +<------------ `sendpay` ------------+  (via a normal Rusty->ZmnSCPxj payment)
>
> Is it approximately correct?

Sorry for delayed response; yes, this is correct.

>> gets an invoice request (`lni...`), and sends the invoice over the
>> lightning network, retrieving an empty reply.
>
> Here are completely pointless counterproposals for the offer and 
> invoice-request HRPs:
>
> * Offers:
>   * `lnpayme`
>   * `lnbuyit`
>   * `lnforsale`
> * Invoice Requests:
>   * `lnpaying`
>   * `lnbuying`
>   * `lnshutupandtakemymoney`
>
> `lno` and `lni` feel wrong to me.
> Their juxtaposition implies `lno` == output and `lni` == input to me, due to 
> the use of `o` and `i`, though `lno` is where you get money in exchange for 
> product and `lni` is the request-for-service.

lnx and lny?  Nobody can interpret them at all, that way :)
>> 3.  type: 2 (`description`)
>> 4.  data:
>> -   [`...*byte`:`description`]
>
> UTF-8?
> Null-terminated?

I was thinking UTF-8 like current field.

>> -   MUST include `amount` if it includes `recurrence`.
>> -   if it includes `amount`:
>> -   MUST specify `currency` as the ISO 4712 or BIP-0173, padded with 
>> zero bytes if required
>
> I cannot find ISO 4712, but could find ISO 4217.

Oops, I fixed my typo wrong.  Thanks.

> BIP-173 does not have a list of currencies, but refers to SLIP-0173.
> Some of the listed currencies there seem to have more than 4 characters.

Oh, I'd never seen SLIP-0173.  Cool, I increased it to 5; SLIP-0173 has
no limit but I find it hard to care about any of them anyway.

> Should I assume encoding is ASCII?
> We will "never" see a non-ASCII currency code?

Not really, but if you don't understand it you can't do much, ASCII or
no.

>> The "default offer" of a node is a nominal offer used to send
>> unsolicited payments. It is generally not actually sent, but can be
>> used by any other node as if it has been. It has the following
>> fields:
>>
>> -   `offer_identifier`: zero-length
>> -   `d`: any
>> -   `n`: the node id of the recipient.
>
> In essence, this is an implicitly-existing offer that never expires, and 
> which can be used by any node at any time to construct an invoice request?

Yep!

>> The `refund_proof` refers to a previous invoice paid by the sender for
>> the specific case of a `refund_for` offer. It provides proof of
>> payment: the `payment_preimage`, and also a signature of the
>> `payment_hash` from the `key` which requested the being-refunded
>> invoice (which does not have to be the same as this `key`!).
>
> An earlier requirement mentions that writers of offers or invoice request 
> MUST have `paths` in some condition.
> The below does not have `paths`, but there is a "human-readable" alternate 
> encoding which *does* have `paths`.
> It might be better to clarify this point.

The in-wire one doesn't have paths, since you respond by reply; you
don't need (and should not be able to) find the sender.

The non-wire one needs a path, since you need to initiate a reply.

>> The `directed` and `directed_reply` Messages
>>
>> 1.  type: 384 (`directed`) (`option_directed_messages`)
>> 2.  data:
>> -   [`chain_hash`:`chain_hash`]
>> -   [`u64`:`id`]
>> -   [`1366*byte`:`onion_routing_packet`]
>> 3.  type: 385 (`directed_reply`) (`option_directed_messages`)
>> 4.  data:
>> -   [`chain_hash`:`chain_hash`]
>> -   [`u64`:`id`]
>> -   [`u16`:`len`]
>> -   [`len*byte`:`reply`]
>
> This new `directed` message will be the mechanism for sending invoice 
> requests and receiving invoice request responses?

Yes.

> What incentive is there for a forwarding node to actually forward a 
> `directed` message?

It's a strong liveness indicator to the sender, so they're likely to use
the same path for the actual payment.

Cheers,
Rusty.


Re: [Lightning-dev] A proposal for up-front payments.

2019-11-07 Thread Rusty Russell
Anthony Towns  writes:
> On Thu, Nov 07, 2019 at 02:56:51PM +1030, Rusty Russell wrote:
>> > What you wrote to Zmn says "Rusty decrypts the onion, reads the prepay
>> > field: it says 14, <L>." but Alice doesn't know anything other than
>> > <Z> so can't put <L> in the onion?
>> Alice created the onion.  Alice knows all the preimages, since she
>> created the chain A..Z.
>
> In your reply to Zmn, it was Rusty (Bob) preparing the nonce and creating
> the chain ... -- so I was lost as to what you were proposing...

Oops.  Don't trust that Rusty guy, let's stick with Alice.

[ Snip summary, which is correct ]

> As far as the "fair price" goes, the spitballed formula is "16 - X/4"
> where X is number of zero bits in some PoW-y thing. The proposal is
> the thing is SHA256(blockhash|revealedonion) which works, and (I think)
> means each step is individually grindable.
>
> I think an alternative would be to use the prepayment hashes themselves,
> so you generate the nonce  as the value you'll send to Dave then
> hash it repeatedly to get .., then check if pow(,) has
> 60 leading zero bits or pow(,) has 56 leading zero bits etc.
> If you made pow(a,b) be SHA256(a,b,shared-onion-key) I think it'd
> preserve privacy, but also mean you can't meaningfully grind unfairly
> cheap routing except for very short paths?
>
> If you don't grind and just go by luck, the average number of hashes
> per hop is ~15.93 (if I got my maths right), so you should be able to
> estimate path length pretty accurate by dividing claimed prepaid funds by
> 15.93*25msat or whatever. If everyone grinds at each level independently,
> I think you'd just subtract maybe 6 hops from that, but the maths would
> mostly stay the same?
>
> Though I think you could obfuscate that pretty well by moving
> some of the value from the HTLC into the prepayment -- you'd risk losing
> that extra value if the payment made it all the way to the recipient but
> they declined the HTLC that way though.

Yeah, and it doesn't help obscure the in-the-middle failure case,
unfortunately.  Which is really bad with the current payment_hash, since
you can spot multiple attempts so easily.  Hence my attempt to roll in
some PoW to obscure the amounts.

The ideal prepay range would be wider, so you can believably have
payments between 16 and 4 per hop, say.  But if I can grind it I'll
naturally restrict the range to the lower end, and if it's ungrindable
(eg. based on nodeid and payment_hash or recent block hash) then
everyone on the path knows what it is too.

So, hashcash here is better than nothing, but still not very good.

>> >> Does Alice lose everything on any routing failure?
>> > That was my thought yeah; it seems weird to pay upfront but expect a
>> > refund on failure -- the HTLC funds are already committed upfront and
>> > refunded on failure.
>> AFAICT you have to overpay, since anything else is very revealing of
>> path length.  Which kind of implies a refund, I think.
>
> I guess you'd want to pay for a path length of about 20 whether the
> path is actually 17, 2, 10 or 5. But a path length of 20 is just paying
> for bandwidth for maybe 200kB of total traffic which at $1/GB is 2% of
> 1 cent, which doesn't seem that worth refunding (except for really tiny
> micropayments, where paying for payment bandwidth might not be feasible
> at all).
>
> If you're buying a $2 coffee and paying 500ppm in regular fees per hop
> with 5 hops, then each routing attempt increases your fees by 4%, which
> seems pretty easy to ignore to me.

True, but ideally we'd have lots of noise even if people are trying to
minimize fees (which, if they're sending messages rather than payments,
they might).

Cheers,
Rusty.


Re: [Lightning-dev] A proposal for up-front payments.

2019-11-07 Thread Rusty Russell
Joost Jager  writes:
>>
>> > Isn't spam something that can also be addressed by using rate limits for
>> > failures? If all relevant nodes on the network employ rate limits, they
>> can
>> > isolate the spammer and diminish their disruptive abilities.
>>
>> Sure, once the spammer has jammed up the network, he'll be stopped.  So
>> will everyone else.  Conner had a proposal like this which didn't work,
>> IIRC.
>
> Do you have ref to this proposal?
>
> Imagine the following setup: a network of nodes that trust each other (as
> far as spam is concerned) applies a 100 htlc/sec rate limit to the channels
> between themselves. Channels to untrusted nodes get a rate of only 1
> htlc/sec. Assuming the spammer isn't a trusted node, they can only spam at
> 1 htlc/s and won't jam up the network?

Damn, I searched for it but all the obvious keywords turned up blank.
Conner CC'd in case he remembers the discussion and I'm not imagining it?

Anyway, if there are 100 nodes in the network I can still open a channel
to each one and jam it up immediately.  And that's not even assuming I
play nice until you trust me, then attack or get taken over.

Cheers,
Rusty.


Re: [Lightning-dev] A proposal for up-front payments.

2019-11-06 Thread Rusty Russell
Anthony Towns  writes:
> On Wed, Nov 06, 2019 at 10:43:23AM +1030, Rusty Russell wrote:
>> >> Rusty prepares a nonce, A and hashes it 25 times = Z.
>> >> ZmnSCPxj prepares the onion, but adds extra fields (see below).  
>> > It would have made more sense to me for Alice (Zmn) to generate
>> > the nonce, hash it, and prepare the onion, so that the nonce is
>> > revealed to Dave (Rusty) if/when the message ever actually reaches its
>> > destination. Otherwise Rusty has to send A to Zmn already so that
>> > Zmn can prepare the onion?
>> The entire point is to pay *up-front*, though, to prevent spam.
>
> Hmm, I'm not sure I see the point of paying upfront but not
> unconditionally -- you already commit the funds as part of the HTLC,
> and if you're refunding some of them, you kind-of have to keep them
> reserved or you risk finalising the HTLC causing a failure because you
> don't have enough msats spare to do the refund?

?  These are upfront and unconditional.  I'm confused.  You pay per
HTLC added (or, in future, to send a message).

What part was unclear here?

Alice pays X to Bob.  Bob gives X (minus the per-preimage payments)
back to Alice.  Bob gets preimages from the onion, and from Carol etc.

This happens independent of HTLC success or failure.

>> Bob/ZmnSCPxj doesn't prepare anything in the onion.  They get handed the
>> last hash directly: Alice is saying "I'll pay you 50msat for each
>> preimage you can give me leading to this hash".
>
> So my example was Alice paying Dave via Bob and Carol (so Alice/Bob,
> Bob/Carol, Carol/Dave being the individual channels).
>
> What you wrote to Zmn says "Rusty decrypts the onion, reads the prepay
> field: it says 14, <L>." but Alice doesn't know anything other than
> <Z> so can't put <L> in the onion?

Alice created the onion.  Alice knows all the preimages, since she
created the chain A..Z.

>> > I'm not sure why lucky hashing should result in a discount?
>> Because the PoW adds noise to the amounts, otherwise the path length is
>> trivially exposed, esp in the failure case.  It's weak protection
>> though.
>
> With a linear/exponential relationship you just get "half the time it's
> 1 unit, 25% of the time it's 2 units, 12% of the time it's 3 units", so
> I don't think that's adding much noise?

It depends how much people are prepared to grind, doesn't it?

>> > You've only got two nonce choices -- the initial <nonce> and the depth
>> > that you tell Bob and Carol to hash to as steps in the route;
>> No, the sphinx construction allows for grinding, that was my intent
>> here.  The prepay hashes are independent.
>
> Oh, because you're also xoring with the onion packet, right, I see.
>
>> > I think you could just make the scheme be:
>> >   Alice sends HTLC(k,v) + 1250 msat to Bob
>> >   Bob unwraps the onion and forwards HTLC(k,v) + 500 msat to Carol
>> >   Carol unwraps the onion and forwards HTLC(k,v) + 250 msat to Dave
>> >   Dave redeems the HTLC, claims an extra 300 msat and refunds 200 msat to 
>> > Carol
>
> The math here doesn't add up. Let's assume I meant:
>
>   Bob keeps 500 msat, forwards 750 msat
>   Carol keeps 250 msat, forwards 500 msat
>   Dave keeps 300 msat, refunds 200 msat
>
>> >   Carol redeems the HTLC and refunds 200 msat to Bob
>> >   Bob redeems the HTLC and refunds 200 msat to Alice
>> >
>> > If there's a failure, Alice loses the 1250 msat, and someone in the
>> > path steals the funds.
>> This example confuses me.
>
> Well, that makes us even at least? :)
>
>> So, you're charging 250msat per hop?  Why is Bob taking 750?  Does Carol
>> now know Dave is the last hop?
>
> No, Alice is choosing to pay 500, 250 and 300 msat to Bob, Carol and
> Dave respectively, as part of setting up the onion, and picks those
> numbers via some magic algo trading off privacy and cost.

OK.

>> Does Alice lose everything on any routing failure?
>
> That was my thought yeah; it seems weird to pay upfront but expect a
> refund on failure -- the HTLC funds are already committed upfront and
> refunded on failure.

AFAICT you have to overpay, since anything else is very revealing of
path length.  Which kind of implies a refund, I think.

>> If so, that is strong incentive for Alice to reduce path-length privacy
>> by keeping payments minimal, which I was really trying to avoid.
>
> Assuming v is much larger than 1250msat, and 1250 msat is much lower than
> the cost to Bob of losing the channel with Alice, I don't think that's
> a problem. 1250msat pays for 125kB of bandwidth under your assumptions
> I think?

That's irrelevant.

Re: [Lightning-dev] A proposal for up-front payments.

2019-11-06 Thread Rusty Russell
Joost Jager  writes:
> In my opinion, the prepayment should be a last resort. It does take away
> some of the attractiveness of the Lightning Network. Especially if you need
> to make many payment attempts over long routes, the tiny prepays do add up.
> For a $10 payment, it's probably nothing to worry about. But for
> micro-payments this can become prohibitively expensive. And it is exactly
> the micro-payment use case where Lightning outshines other payment systems.
> A not yet imagined micro-payment based service could even be the launchpad
> to world domination. So I think we should be very careful with interfering
> with that potential.

I completely agree, yeah.  And maybe we'll never need it, but it's one
of my main concerns for the network.

> Isn't spam something that can also be addressed by using rate limits for
> failures? If all relevant nodes on the network employ rate limits, they can
> isolate the spammer and diminish their disruptive abilities.

Sure, once the spammer has jammed up the network, he'll be stopped.  So
will everyone else.  Conner had a proposal like this which didn't work,
IIRC.

> If a node sees that its outgoing htlc packets stack up, it can reduce
> the incoming flow on the channels where the htlcs originate
> from. Large routing nodes could agree with their peers on service
> levels that define these rate limits.

Unfortunately, if we *don't* address this, then the network will defend
itself with the simple tactic of deanonymizing payments.

And every other solution I've seen ends up the same way :(

Cheers,
Rusty.


Re: [Lightning-dev] A proposal for up-front payments.

2019-11-05 Thread Rusty Russell
Olaoluwa Osuntokun  writes:
> Hi Rusty,
>
> Agreed w.r.t the need for prepaid HTLCs, I've been mulling over other
> alternatives for a few years now, and none of them seems to resolve the
> series of routing related incentive issues that prepaid HTLCs would.
>
>> Since both Offers and Joost's WhatSat are looking at sending messages,
>> it's time to float actual proposals.
>
> IMO both should just be done over HORNET, so we don't need to introduce a new
> set of internal protocol level messages whenever we have some new
> control/signalling need. Instead, we'd have a control/signal channel (give
> me
> routes, invoices, sign this, etc), and a payment channel (HTLCs as used
> today).

I'm not so sure, as I don't think we're going to actually use each one
more than once or twice?

I mean, we could stream movies through LN, but I think that's an added
service, which would be best done by HORNET.

>> 2. Adding an HTLC causes a *push* of a number of msat on commitment_signed
>> (new field), and a hash.
>
> The prepay amount should be signalled in the update add message instead.
> This lets HTLCs carry a heterogeneous set of prepay amounts. In addition, we
> need a new onion field as well to signal the incoming amount the node
> _should_ have received (allows them to detect deviations in the sender's
> intended route).

Sorry, brain fart: it's a new field in the update_add_htlc of course.

I just, um, added that to make sure you were all reading carefully! :)

>> 3. Failing/succeeding an HTLC returns some of those msat, and a count and
>> preimage (new fields).
>
> Failing shouldn't return the prepay amount, otherwise extending long lived
> HTLCs then cancelling them at the last minute is still costless. This
> costlessness of _adding_ an HTLC to a _remote_ commitment is, IMO, the
> biggest incentive flaw that exists today in the greater routing network.

No, that's type 3 (liquidity) spam, which needs a completely different
solution.

Definitely needs fixing, but up-front fees don't do it (except in the
case where you might want to indicate you're *going* to have a long-held
HTLC, where you'd pay additional up-front, but that's future work).

>>  You get to keep 50 msat[1] per preimage you present[2].
>
> We should avoid introducing any new constants to the protocol, as they're
> typically dreamed up independent of any empirical lessons learned from
> deployment.

OTOH, we should avoid creating more complex knobs for users, since the
complexity of the protocol is becoming unmanageable.  I think we did this
too much with v1, so instead of getting empirical data we got defaults
which in practice are unspecified specifications.

I like a flat value to start, since it's easy to implement and deploy.

> On the topic of the prepay cost, the channel update message should be
> extended to allow nodes to signal prepay costs similar to the way we handle
> regular payment success fees. In order to eliminate a number of costless
> attacks possible today on the routing network, nodes should also be able to
> signal a new coefficient used to _scale_ the prepay fee as a function of the
> CLTV value of the incoming HTLC.

... and HTLC amount, surely?  That becomes a pretty complex tuning
parameter.

I think we should directly target type 3 spam through a separate
mechanism (as discussed previously).  This is just to limit the
quantity of messages.

> With this addition, senders need to pay to
> _add_ an HTLC to a remote commitment transaction (fixed base cost), then
> also need to pay a variable rate that scales with the duration of the
> proposed outgoing CLTV value (senders ofc don't prepay to themselves).  Once
> we introduce this, loop attacks and the like are no longer free to launch,
> and nodes can dynamically respond to congestion in the network by raising
> their prepay prices.

I disagree; you should signal with normal fees, not prepay.  Otherwise
we're increasing fees at a time when success rates are lowering, which
makes the incentive misalignment far more prominent :(

Cheers,
Rusty.


Re: [Lightning-dev] A proposal for up-front payments.

2019-11-05 Thread Rusty Russell
Anthony Towns  writes:
> On Tue, Nov 05, 2019 at 07:56:45PM +1030, Rusty Russell wrote:
>> Sure: for simplicity I'm sending a 0-value HTLC.
>> ZmnSCPxj has balance 10000msat in channel with Rusty, who has 1000msat
>> in the channel with YAIjbOJa.
>
> Alice, Bob and Carol sure seem simpler than Zmn, YAI and Rusty...

Agreed, I should not have directly answered the q.

>> Rusty prepares a nonce, A and hashes it 25 times = Z.
>> ZmnSCPxj prepares the onion, but adds extra fields (see below).  
>
> It would have made more sense to me for Alice (Zmn) to generate
> the nonce, hash it, and prepare the onion, so that the nonce is
> revealed to Dave (Rusty) if/when the message ever actually reaches its
> destination. Otherwise Rusty has to send A to Zmn already so that
> Zmn can prepare the onion?

The entire point is to pay *up-front*, though, to prevent spam.

Bob/ZmnSCPxj doesn't prepare anything in the onion.  They get handed the
last hash directly: Alice is saying "I'll pay you 50msat for each
preimage you can give me leading to this hash".

>> He then
>> sends the HTLC to Rusty, but also sends Z, and 25x50 msat (ie. those
>> fields are in the update_add_htlc msg).  His balance with Rusty is now
>> 8750msat (ie. 25x50 to Rusty).
>> 
>> Rusty decrypts the onion, reads the prepay field: it says 14, L.
>> Rusty checks: the hash of the onion & block (or something) does indeed
>> have the top 8 bits clear, so the cost is in fact 16 - 8/4 == 14.  He
>> then hashes L 14 times, and yes, it's Z as ZmnSCPxj said it
>> should be.
>
> I'm not sure why lucky hashing should result in a discount?

Because the PoW adds noise to the amounts, otherwise the path length is
trivially exposed, esp in the failure case.  It's weak protection
though.

> You're giving a linear discount for exponentially more luck in hashing
> which also seems odd.

Because you really want some actual payment, not just PoW.  Botnets are
really good at PoW, less good at sending msats.  And the PoW is hard to
calibrate (I guessed: real numbers will be necessary)/

> You've only got two nonce choices -- the initial <nonce> and the depth
> that you tell Bob and Carol to hash to as steps in the route;

No, the sphinx construction allows for grinding, that was my intent
here.  The prepay hashes are independent.

> I think you could just make the scheme be:
>
>   Alice sends HTLC(k,v) + 1250 msat to Bob
>   Bob unwraps the onion and forwards HTLC(k,v) + 500 msat to Carol
>   Carol unwraps the onion and forwards HTLC(k,v) + 250 msat to Dave
>   Dave redeems the HTLC, claims an extra 300 msat and refunds 200 msat to 
> Carol
>   Carol redeems the HTLC and refunds 200 msat to Bob
>   Bob redeems the HTLC and refunds 200 msat to Alice
>
> If there's a failure, Alice loses the 1250 msat, and someone in the
> path steals the funds.

This example confuses me.

So, you're charging 250msat per hop?  Why is Bob taking 750?  Does Carol
now know Dave is the last hop?

Does Alice lose everything on any routing failure?

If so, that is strong incentive for Alice to reduce path-length privacy
by keeping payments minimal, which I was really trying to avoid.

> You could make this accountable by having Alice
> also provide "Hash(<nonce>, refund=200)" to everyone, encoding <nonce> in the
> onion to Dave, and then each hop reveals <nonce> and refunds 200msat to
> demonstrate their honesty.
>
> Does that miss anything that all the hashing achieves?

It does nothing if Carol is the one who can't route.

> I think the idea here is that you're paying tiny amounts for the
> bandwidth, which when it's successful does in fact pay for the bandwidth;
> and when it's unsuccessful results in a channel closure, which makes it
> unprofitable to cheat the system, but doesn't change the economics of
> lightning much overall because channel closures can happen anytime anyway.

Not at all.  You can still fail to route, and still get paid.  You can't
steal *more* money without channel closure though.

> I think that approach makes sense.
>
> Cheers,
> aj

Cheers,
Rusty.


Re: [Lightning-dev] [PATCH] First draft of option_simplfied_commitment

2019-11-05 Thread Rusty Russell
Joost Jager  writes:
>>
>> > * Add `to_remote_delay OP_CHECKSEQUENCEVERIFY OP_DROP` to the `to_remote`
>> > output. `to_remote_delay` is the csv delay that the remote party accepted
>> > in the funding flow for their outputs. This not only ensures that the
>> > carve-out works as intended, but also removes the incentive to game the
>> > other party into force-closing. If desired, both parties can still agree to
>> > have different `to_self_delay` values.
>>
>> I think we should unify to_self_delay if we're doing this.  Otherwise
>> the game returns.
>
> The game returns, but both parties will be aware of the game they are
> playing. They agreed to their peer's to_self_delay up front. (This is
> different from the current situation where both peers are forced to accept
> a remote_to_self_delay of 0.) With validation on the open/accept_channel
> message, a node can still enforce both to_self_delays to be equal. We could
> simplify this to a single to_self_delay that is proposed by the initiator,
> but what was the original reason to allow distinct values?

Because I didn't fight hard enough for simplicity :(

There is no "negotiation" on opening; it's accept or error.  That leads
to a situation where every implementation MUST accept what every
implementation offers.

The unification proposal was to use the max of the two settings.  That's
fair; if you want me to suffer a 2 week delay, you should too.

>> Agreed, this doesn't really work.  We actually needed a bitcoin rule
>> that allowed a single anyone-can-spend output.  Seems like we didn't get
>> that.
>
> With the mempool acceptance carve-out in bitcoind 0.19, we indeed won't be
> able to safely produce a single OP_TRUE output for anyone to spend. An
> attacker could attach low fee child transactions, reach the limits and
> block further fee bumping.

Indeed :(

>> This is horribly spammy.  At the moment we see ~ one unilateral close
>> every 3 blocks.  Hopefully that will reduce, but there'll always be
>> some.
>
> It seems there isn't another way to do the anchor outputs given the mempool
> limitations that exist? Each party needs to have their own anchor,
> protected by a key. Otherwise it would open up these attack scenarios where
> an attacker blocks the commitment tx confirmation until htlcs time out.
> Even with the script OP_DEPTH OP_IF <pubkey> OP_CHECKSIG OP_ELSE 10 OP_CSV
> OP_ENDIF, the "anyones" don't know the pubkey and still can't sweep after
> 10 blocks.

I think you're right, but I don't *like* it...

>> * Within each version of the commitment transaction, both anchors always
>> > have equal values and are paid for by the initiator.
>>
>> Who pays if they can't afford it?  What if they push_msat everything to
>> the other side?
>
> Similar to how it currently works. There should never be a commitment
> transaction in which the initiator cannot pay the fee.

Unfortunately, this is not correct (in theory).

We can always get into a case where fees are insufficient (simultaneous
HTLC adds), but it's unusual.  We used to specify that the non-funder
would pay the remaining fee, but we dropped this in favor of allowing
unilateral close if this ever happened.

> With anchor outputs
> there should never be a commitment tx in which the initiator cannot pay the
> fee and the anchors.

There can be, but I think we can simply modify this so you have to pay
the anchors *first* before fees.

> Also currently you cannot push everything to the other
> side with push_msat. The initiator still needs to have enough balance to
> pay for the on-chain costs (miner fee and anchors).

This is true; I forgot we fixed that, sorry.  push_msat is a red herring.

>> The value of the
>> > anchors is the dust limit that was negotiated in the `open_channel` or
>> > `accept_channel` message of the party that publishes the transaction.
>>
>> Now initiator has to care about the other side's dust limit, which is
>> bad.  And as accepter I now want this much higher, since I get those
>> funds instantly.  I don't think we gain anything by making this
>> configurable at all; just pick a number and be done.
>>
>> Somewhere between 1000 and 10,000 sat is a good idea.
>>
>
> Yes, it is free money. Therefore we need to validate the dust limit in the
> funding flow. Check whether it is reasonable. That should also be done in
> the current implementation. Otherwise your peer can set a really high dust
> limit that lets your htlc disappear on-chain (although that is only free
> money for the miner).

True, and the spec should note this BTW!  I've added an issue.

https://github.com/lightningnetwork/lightning-rfc/issues/696

> If we hard-code a constant, we won't be able to adapt to changes of
> `dustRelayFee` in the bitcoin network. And we'd also need to deal with a
> peer picking a value higher than that constant for its regular funding flow
> dust limit parameter.

Note that we can't adapt to dustRelayFee *today*, since we can't change
it after funding (another thing we probably need to 

[Lightning-dev] BOLT 11: add optional vendor field.

2019-11-04 Thread Rusty Russell
It was pointed out to me[1] at thelightningconference.com that it's often
a legal requirement to list the vendor on a receipt.  It also makes
perfect sense.

It can be done in the description field, but that's really supposed to
be a description of the *items*.  Dividing it also lets wallets have
much better UX.

The spec change itself is genuinely trivial:

 * `v` (12): `data_length` variable.  Optional name of vendor/supplier (UTF-8).

Feedback from wallets and vendors appreciated!
Rusty.
PS.  Pull req at https://github.com/lightningnetwork/lightning-rfc/pull/694,
 but I realize that can be intimidating, hence this mail.

[1] I'd like to credit this properly, but I was jetlagged and having way
too much fun.  Please claim if this was your idea :)


[Lightning-dev] [VERY ROUGH DRAFT] BOLT 12: Offers

2019-11-04 Thread Rusty Russell
Hi all,

This took longer than I'd indicated; for that I'm sorry.
However, this should give us all something to chew on.  I've started
with a draft "BOLT 12" (it might be BOLT 13 by the time it's finalized
though!).

I've also appended indications where we touch other BOLTs:
1. BOLT 7 gains a message/reply system, encoded like htlc onions and
   failure messages.
2. BOLT 11 gains a `q` field for quantity; this avoids changing the
   description when the user requests an invoice for more than one of something
   (since changing the description between offer and invoice requires user
   interaction: it's the *invoice* which you are committing to).

There's definite handwaving in here; let's see if you can find it!

Cheers,
Rusty.

# BOLT #12: Offer Protocols for Lightning Payments

A higher-level, QR-code-ready protocol for dealing with invoices over
Lightning.  There are two simple flows supported: in one, a user gets
an offer (`lno...`) and requests an invoice over the lightning
network, obtaining one (or an error) in reply.  In the other, a user
gets an invoice request (`lni...`), and sends the invoice over the
lightning network, retrieving an empty reply.

# Table of Contents

  * [Offers](#offers)
* [Encoding](#encoding)
* [TLV Fields](#tlv-fields)
  * [Invrequests](#invrequests)
* [Encoding](#encoding)
* [TLV Fields](#tlv-fields)

# Offers

Offers supply a reader with enough information to request one or more
invoices via the lightning network itself.

## Encoding

The human-readable part of a Lightning offer is `lno`.  The data part
consists of three parts:

1. 0 or more [TLV](01-messaging.md#type-length-value-format) encoded fields.
2. A 32-byte nodeid[1]
3. 64-byte signature of SHA256(hrp-as-utf8 | tlv | nodeid).
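
(As a sketch of part 3: the digest being signed would be assembled as
below; the signature scheme itself is left aside, and `tlv_bytes` and
`nodeid` are assumed inputs.)

import hashlib

def offer_sig_digest(tlv_bytes: bytes, nodeid: bytes) -> bytes:
    """SHA256(hrp-as-utf8 | tlv | nodeid), with hrp "lno" for offers."""
    assert len(nodeid) == 32
    return hashlib.sha256(b"lno" + tlv_bytes + nodeid).digest()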

## TLV Fields

The TLV fields define how to get the invoice, and what it's for.
Each offer has a unique `offer_identifier` so the offering node can
distinguish different invoice requests.

Offers can request recurring payments of various kinds, and specify
what base currency they are calculated in (the actual amount will be
in the invoice).

`additional_data` is a bitfield which indicates what information the
invoice requester should (odd) or must (even) supply:
1. Bits `0/1`: include `delivery_address`
2. Bits `2/3`: include `delivery_telephone_number`
3. Bits `4/5`: include `voucher_code`
4. Bits `6/7`: include `refund_proof`
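
(A small sketch of reading that bitfield, treating `rbits` as an
integer for illustration, with even bits mandatory and odd bits
optional as above.)

FIELDS = ("delivery_address", "delivery_telephone_number",
          "voucher_code", "refund_proof")

def requested_fields(rbits: int) -> dict:
    """Map each requested field to 'must' (even bit) or 'should' (odd bit)."""
    wanted = {}
    for i, field in enumerate(FIELDS):
        if rbits & (1 << (2 * i)):        # even bit: must supply
            wanted[field] = "must"
        elif rbits & (1 << (2 * i + 1)):  # odd bit: should supply
            wanted[field] = "should"
    return wanted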

`refund_for` indicates an offer for a (whole or part) refund for a
previous invoice, as indicated by the `payment_hash`.

1. tlvs: `offer`
2. types:
1. type: 1 (`paths`)
2. data:
* [`u16`:`num_paths`]
* [`num_paths*path`:`path`]
1. type: 2 (`description`)
2. data:
* [`...*byte`:`description`]
1. type: 3 (`expiry`)
2. data:
* [`tu64`:`seconds_from_epoch`]
1. type: 4 (`offer_identifier`)
2. data:
* [`...*byte`:`id`]
1. type: 5 (`amount`)
2. data:
* [`4*byte`:`currency`]
* [`tu64`:`amount`]
1. type: 6 (`additional_data`)
2. data:
* [`...*byte`:`rbits`]
1. type: 7 (`recurrence`)
2. data:
* [`byte`:`time_unit`]
* [`u32`:`period`]
* [`tu32`:`number`]
1. type: 8 (`recurrence_base`)
2. data:
* [`u32`:`basetime`]
* [`tu32`:`paywindow`]
1. type: 9 (`quantity`)
2. data:
* [`tu64`:`max`]
1. type: 10 (`refund_for`)
2. data:
* [`32*byte`:`payment_hash`]

1. subtype: `path`
2. data:
   * [`u16`:`num_hops`]
   * [`num_hops*hop`:`hops`]

1. subtype: `hop`
2. data:
   * [`pubkey`:`nodeid`]
   * [`short_channel_id`:`short_channel_id`]
   * [`u16`:`flen`]
   * [`flen*byte`:`features`]

## Requirements For Offers And Invrequests

A writer of an offer or an invrequest:
  - if it is connected only by private channels:
- MUST include `paths` containing a path to the node.
  - otherwise:
- MAY include `paths` containing a path to the node.
  - MUST describe the item(s) being offered or purpose of invoice in 
`description`.
  - MUST include `expiry` if the offer/invrequest will not be valid after some 
time.
  - if it includes `expiry`:
- MUST set `seconds_from_epoch` to the expiry time in seconds since 1970 
UTC.

## Requirements For Offers

A writer of an offer:
  - MUST use a unique `offer_identifier` for each offer.
  - MAY include `recurrence` to indicate offer should trigger time-spaced
invoices.
  - MUST include `amount` if it includes `recurrence`.
  - if it includes `amount`:
- MUST specify `currency` as the ISO 4712 or BIP-0173, padded with zero 
bytes if required
- MUST specify `amount` to the amount expected for the invoice, as the 
ISO 4712 currency unit multiplied by exponent, OR the BIP-0173 minimum unit 
(eg. `satoshis`).
  - if it requires specific fields in the invoice:
- MUST set the corresponding even bits in the `additional_data` field

A reader of an offer:
  - SHOULD 

[Lightning-dev] A proposal for up-front payments.

2019-11-04 Thread Rusty Russell
Hi all,

It's been widely known that we're going to have to have up-front
payments for msgs eventually, to avoid Type 2 spam (I think of Type 1
link-local, Type 2 through multiple nodes, and Type 3 liquidity-using
spam).

Since both Offers and Joost's WhatSat are looking at sending
messages, it's time to float actual proposals.  I've been trying to come
up with something for several years now, so thought I'd present the best
I've got in the hope that others can improve on it.

1. New feature bit, extended messages, etc.
2. Adding an HTLC causes a *push* of a number of msat on
   commitment_signed (new field), and a hash.
3. Failing/succeeding an HTLC returns some of those msat, and a count
   and preimage (new fields).

How many msat can you take for forwarding?  That depends on you
presenting a series of preimages (which chain into a final hash given in
the HTLC add), which you get by decoding the onion.  You get to keep 50
msat[1] per preimage you present[2].

So, how many preimages does the user have to give to have you forward
the payment?  That depends.  The base rate is 16 preimages, but subtract
one for each leading 4 zero bits of the SHA256(blockhash | hmac) of the
onion.  The blockhash is the hash of the block specified in the onion:
reject if it's not in the last 3 blocks[3].

This simply adds some payment noise, while allowing a hashcash style
tradeoff of sats for work.
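
(A rough sketch of mine of the two checks this implies, with the exact
hash inputs as handwaved above.)

import hashlib

def sha256(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def leading_zero_bits(b: bytes) -> int:
    bits = 0
    for byte in b:
        if byte == 0:
            bits += 8
            continue
        while not byte & 0x80:  # count zero bits in first nonzero byte
            bits += 1
            byte <<= 1
        break
    return bits

def preimages_required(blockhash: bytes, onion_hmac: bytes) -> int:
    """Base rate 16, minus one per 4 leading zero bits of SHA256(blockhash | hmac)."""
    return max(0, 16 - leading_zero_bits(sha256(blockhash + onion_hmac)) // 4)

def is_nth_preimage(preimage: bytes, n: int, final_hash: bytes) -> bool:
    """Check a claimed n'th preimage chains to the hash given in the HTLC add."""
    for _ in range(n):
        preimage = sha256(preimage)
    return preimage == final_hash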

The final node gets some variable number of preimages, which adds noise.
It should take all and subtract from the minimum required invoice amount
on success, or take some random number on failure.

This leaks some forward information, and makes an explicit tradeoff for
the sender between amount spent and privacy, but it's the best I've been
able to come up with.

Thoughts?
Rusty.

[1] If we assume $1 per GB, $10k per BTC and 64kB messages, we get about
655msat per message.  Flat pricing for simplicity; we're trying to
prevent spam, not create a spam market.
[2] Actually, a number and a single preimage; you can check this is
indeed the n'th preimage.
[3] This reduces incentive to grind the damn things in advance, though
maybe that's dumb?  We can also use a shorter hash (siphash?), or
even truncated SHA256 (128 bits).


Re: [Lightning-dev] Increasing fee defaults to 5000+500 for a healthier network?

2019-11-03 Thread Rusty Russell
Rusty Russell  writes:
> Olaoluwa Osuntokun  writes:
>> Defaults don't necessarily indicate higher/lower reliability. Issuing a
>> single CLI command to raise/lower the fees on one's node doesn't magically
>> make the owner of said node a _better_ routing node operator.
>
> No, but those who put effort into their node presumably have more
> reliable nodes, and this is a signal of that.
>
> Anyone have data on channel reliability that they can correlate with
> channel fees?

Actually, since lnd sends out a disable update for nodes which are
offline for > 20 minutes, we can simply look at the current gossip:

half-channels online: 45157
percentage using 1000/1 fees: 56%
half-channels offline: 10225
percentage using 1000/1 fees: 51%
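
(For anyone wanting to reproduce this, a sketch of the tally; it
assumes c-lightning's `listchannels` JSON with its `active`,
`base_fee_millisatoshi` and `fee_per_millionth` fields.)

import json, subprocess

chans = json.loads(subprocess.check_output(
    ["lightning-cli", "listchannels"]))["channels"]

for active, label in ((True, "online"), (False, "offline")):
    group = [c for c in chans if c["active"] == active]
    default = sum(1 for c in group
                  if c["base_fee_millisatoshi"] == 1000
                  and c["fee_per_millionth"] == 1)
    print("half-channels %s: %d, using 1000/1 fees: %d%%"
          % (label, len(group), 100 * default // max(1, len(group))))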

So my assumption seems completely wrong here; if there's any
correlation, it's negative.

Cheers,
Rusty.


Re: [Lightning-dev] [PATCH] First draft of option_simplfied_commitment

2019-11-01 Thread Rusty Russell
Matt Corallo  writes:
> Why not stick with the original design from Adelaide with a spending path
> with a 1 CSV that is anyone-can-spend (or that is revealed by spending
> another output)?

The original design IIRC was a single anyone-can-spend anchor output.

If we need two anchor outputs, and want the other to turn into an
anyone-can-spend after it's mined, it's possible by gratuitously
mentioning the other key in the script, I think:

# If they provide a signature, they can push this:
OP_DEPTH OP_IF
   <key1> OP_CHECKSIG
OP_ELSE
  # Reveal the other key so you can spend the other anchor, too.
   <key2> OP_DROP
  # Now, anyone can spend after 1 block.
  1 OP_CHECKSEQUENCEVERIFY
  OP_TRUE
OP_ENDIF

The other anchor output reverses <key1> and <key2>.

Cheers,
Rusty.


Re: [Lightning-dev] [PATCH] First draft of option_simplfied_commitment

2019-10-30 Thread Rusty Russell
Joost Jager  writes:
> We started to look at the `push_me` outputs again. Will refer to them as
> `anchor` outputs from now on, to prevent confusion with `push_msat` on the
> `open_channel` message.
>
> The cpfp carve-out https://github.com/bitcoin/bitcoin/pull/15681 has been
> merged and for reasons described earlier in this thread, we now need to add
> a csv time lock to every non-anchor output on the commitment transaction.
>
> To realize this, we are currently considering the following changes:
>
> * Add `to_remote_delay OP_CHECKSEQUENCEVERIFY OP_DROP` to the `to_remote`
> output. `to_remote_delay` is the csv delay that the remote party accepted
> in the funding flow for their outputs. This not only ensures that the
> carve-out works as intended, but also removes the incentive to game the
> other party into force-closing. If desired, both parties can still agree to
> have different `to_self_delay` values.

I think we should unify to_self_delay if we're doing this.  Otherwise
the game returns.

> * Add `1 OP_CHECKSEQUENCEVERIFY OP_DROP` to the non-revocation clause of
> the HTLC outputs.

> For the anchor outputs we consider:
>
> * Output type: normal P2WKH. At one point, an additional spending path was
> proposed that was unconditional except for a 10 block csv lock. The
> intention of this was to prevent utxo set pollution by allowing anyone to
> clean up. This however also opens up the possibility for an attacker to
> 'use up' the cpfp carve-out after those 10 blocks. If the user A is offline
> for that period of time, a malicious peer B may already have broadcasted
> the commitment tx and pinned down user A's anchor output with a low fee
> child. That way, the commitment tx could still remain unconfirmed while an
> important htlc expires.

Agreed, this doesn't really work.  We actually needed a bitcoin rule
that allowed a single anyone-can-spend output.  Seems like we didn't get
that.

> * For the keys to use for `to_remote_anchor` and `to_local_anchor`, we'd
> like to introduce new addresses that both parties communicate in the
> `open_channel` and `accept_channel` messages. We don't want to reuse the
> main commitment output addresses, because those may (at some point) be cold
> storage addresses and the cpfp is likely to happen from a hot wallet.

This is horribly spammy.  At the moment we see ~ one unilateral close
every 3 blocks.  Hopefully that will reduce, but there'll always be
some.

> * Within each version of the commitment transaction, both anchors always
> have equal values and are paid for by the initiator.

Who pays if they can't afford it?  What if they push_msat everything to
the other side?

> The value of the
> anchors is the dust limit that was negotiated in the `open_channel` or
> `accept_channel` message of the party that publishes the transaction.

Now initiator has to care about the other side's dust limit, which is
bad.  And as accepter I now want this much higher, since I get those
funds instantly.  I don't think we gain anything by making this
configurable at all; just pick a number and be done.

Somewhere between 1000 and 10,000 sat is a good idea.

> Furthermore, there doesn't seem to be a compelling reason anymore for
> tweaking the keys (new insights into watchtower designs, encrypt by txid).

That's not correct.  This seems more like "forgotten insights" than "new
insights", which isn't surprising how long ago Tadge and I did the
watchtower design (BTW: I was the one who came up with encrypt by
txid for that!).

There are several design constraints in the original watchtowers:

1. A watchtower shouldn't be able to guess the channel history.
2. ... even if it sees a unilateral tx.
3. ... even if it sees a revoked unilateral tx it has a penalty for.
4. A watchtower shouldn't be able to tell if it's being used for both
   parties in the same channel.

If you don't rotate keys, a watchtower can brute-force the HTLCs for all
previous transactions it was told about, and previous channel balances.

We removed key rotation on the to-remote output because you can simply
not tell the watchtower about txs which don't have anything but an
output to you.
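
To unpack "encrypt by txid": the penalty blob is encrypted under a key
derived from the revoked commitment's txid, so the tower can only decrypt
it once that tx actually appears on-chain.  A toy sketch of the idea (key
derivation, nonce handling and hint format here are purely illustrative,
not the real scheme):

    import hashlib
    from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

    def make_blob(commitment_txid: bytes, penalty_tx: bytes):
        hint = hashlib.sha256(b"hint" + commitment_txid).digest()[:16]
        key = hashlib.sha256(b"key" + commitment_txid).digest()  # 32 bytes
        # a fixed nonce is tolerable here only because each key is used once
        blob = ChaCha20Poly1305(key).encrypt(b"\x00" * 12, penalty_tx, None)
        return hint, blob

    def tower_try_decrypt(seen_txid: bytes, hint: bytes, blob: bytes):
        if hashlib.sha256(b"hint" + seen_txid).digest()[:16] != hint:
            return None                 # not a tx this blob was keyed to
        key = hashlib.sha256(b"key" + seen_txid).digest()
        return ChaCha20Poly1305(key).decrypt(b"\x00" * 12, blob, None)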

Here are the options I see:

1. Abandon privacy from watchtowers and don't rotate keys.  Watchtowers
   will be able to brute-force your history if they see a unilateral
   close.

2. Remove HTLC output key rotation, and assume watchtowers don't handle
   HTLCs (so you don't tell them about early txs where the peer has no
   output but there are HTLCs pending).  This seems less useful, since
   HTLC txs require metadata anyway.

3. Change to-local key rotation to use BIP32 (unhardened).  We could
   also take some of the 48 bits (maybe 24?) we currently use to encode
   the commitment number, to encode a BIP32 sub-path for this channel.
   This would make it easier for hardware wallets to reconstruct.
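
For concreteness, option 3's numbering might work like the sketch below
(assuming an illustrative 24/24 split of the 48 bits; nothing here is
settled):

    # upper 24 bits: unhardened BIP32 sub-path identifying the channel
    # lower 24 bits: per-channel commitment counter (~16.7M states)
    CHANNEL_BITS, COMMIT_BITS = 24, 24

    def pack(channel_subpath: int, commitment_num: int) -> int:
        assert 0 <= channel_subpath < (1 << CHANNEL_BITS)
        assert 0 <= commitment_num < (1 << COMMIT_BITS)
        return (channel_subpath << COMMIT_BITS) | commitment_num

    def unpack(n48: int):
        return n48 >> COMMIT_BITS, n48 & ((1 << COMMIT_BITS) - 1)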

Cheers,
Rusty.

Re: [Lightning-dev] Increasing fee defaults to 5000+500 for a healthier network?

2019-10-13 Thread Rusty Russell
Olaoluwa Osuntokun  writes:
> Hi Rusty,
>
> I think this change may be a bit misguided, and we should be careful about
> making sweeping changes to default values like this, such as fees. I'm
> worried that this post (and the subsequent LGTMs by some developers)
> promotes the notion that somehow in Lightning, developers decide on fees
> (fees are too low, let's raise them!).

Phew, I'm glad someone else is uncomfortable!  Yes, I held off this
kind of proposal for a long time for exactly this reason.  However,
the truth seems to be that defaults have that power, whether we want
it or not :(

If we make defaults more awkward, it will encourage people to change
them.  In the end, I settled on a simple number because I want it to
be easy to filter out these defaults from further analysis.

> IMO, there're a number of flaws in the reasoning behind this proposal:
>
>> defaults actually indicate lower reliability, and routing gets tarpitted
>> trying them all
>
> Defaults don't necessarily indicate higher/lower reliability. Issuing a
> single CLI command to raise/lower the fees on one's node doesn't magically
> make the owner of said node a _better_ routing node operator.

No, but those who put effort into their node presumably have more
reliable nodes, and this is a signal of that.

Anyone have data on channel reliability that they can correlate with
channel fees?

> If a node has
> many channels, with all of them poorly managed, then path finding algorithms
> can extrapolate the overall reliability of a node based on failures of
> a sample of channels connected to that node. We've started to experiment with
> such an approach here; so far the results are promising[1].

That's great if you're making many payments, but then you have many
heuristics at your disposal.   Most people won't be making many
payments, so such techniques are not useful.

>> There's no meaningful market signal in fees, since you can't drop much
>> below 1ppm.
>
> The market signal one should be extracting from the current state is: a true
> market hasn't yet emerged as routing node operators are mostly hands off (as
> they're used to being with their existing bitcoin node) and have yet to begin
> to factor in the various costs of operating a node into their fee schedule.
> Only a handful of routing node operators have started to experiment with
> distinct fee settings in an attempt to feel out the level of elasticity in
> the forwarding market today (if I double my fees, by how much do my daily
> forwards and fee revenue drop off?).
>
> Ken Sedgwick had a pretty good talk on this topic at the most recent SF
> Lightning Devs meetup[2]. The talk itself unfortunately wasn't recorded,
> but there're a ton of cool graphs really digging into the various parameters
> in the current market. He draws a similar conclusion stating that: "Many
> current lightning channels are not charging enough fees to cover on-chain
> replacement".

This is all true, too.

> Developers raising the default fees (on their various implementations) won't
> address this as it shows that the majority of participants today (routing
> node operators) aren't yet thinking about their break even costs. IMO
> generally this is due to a lack of education, which we're working to address
> with our blog post series (eventually to be a fully fledged standalone
> guide) on routing node operation[3]. Tooling also needs to improve to give
> routing node operators better insight into their current level of
> performance and efficacy of their capital allocation decisions.

Assuming a network in which many people are running nodes for their
own use and only forwarding as a side-effect, the biggest factor will
*always* be the default settings.

BTW, a quick look at the percentiles (ignoring "default setting" channels):

Percentile   Min Capacity   Max Capacity   Median Base   Median PPM
             (sats)         (sats)         (msat)
0-10         1              10010          0             10
10-20        10             20             1             10
20-30        20             358517         1             8
30-40        358517         546639         1             10
40-50        546639         100            1             42
50-60        100            200            106           10
60-70        200            3143170        2             35
70-80        3145265        555            1             800
80-90        5561878        16000          800
90-100       160            2              0             1
Overall:                                   1             10
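
A table like this can be derived from gossip data along roughly these
lines (a sketch; the field names assume `listchannels`-style records,
with the default-fee channels filtered out beforehand):

    from statistics import median

    def decile_table(channels):
        chans = sorted(channels, key=lambda c: c["capacity_sat"])
        n = len(chans)
        for i in range(10):
            decile = chans[i * n // 10:(i + 1) * n // 10]
            if not decile:
                continue
            print(f"{10*i}-{10*(i+1)}",
                  decile[0]["capacity_sat"], decile[-1]["capacity_sat"],
                  median(c["base_fee_msat"] for c in decile),
                  median(c["fee_ppm"] for c in decile))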

>> Compare lightningpowerusers.com which charges (1 msat + 5000 ppm),
>> and seems to have significant usage, so there clearly is market tolerance
>> for higher fees.
>
> IIRC, the fees on that node are only that high due to user error by the
> operator when setting their fees.

No; fiatjaf measured 

[Lightning-dev] Increasing fee defaults to 5000+500 for a healthier network?

2019-10-10 Thread Rusty Russell
Hi all,

I've been looking at the current lightning network fees, and the data
shows that 2/3 of channels are sitting on the default (1000 msat + 1 ppm).

This has two problems:
1. Low fees are now a negative signal: defaults actually indicate
   lower reliability, and routing gets tarpitted trying them all.
2. There's no meaningful market signal in fees, since you can't
   drop much below 1ppm.

Compare lightningpowerusers.com which charges (1 msat + 5000 ppm),
and seems to have significant usage, so there clearly is market
tolerance for higher fees.

I am proposing that as of the next release of c-lightning, we change
defaults on new channels to 5000 msat + 500 ppm, and I'd like the other
implementations to join me.

Over time, that should move the noise floor up.  I picked 500ppm because
that's still 1% at 20 hops, so minimally centralizing.  I picked 5000
msat base for less quantifiable reasons.

Here's a default fee rate table in USD (@10k per BTC):

Amount   Before       After
0.1c     0.0100001c   0.05005c
1c       0.010001c    0.0505c
10c      0.01001c     0.055c
$1       0.0101c      0.1c
$10      0.011c       0.55c
$100     0.02c        5.05c
$1000    0.11c        50.05c
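
The table follows directly from fee = base + amount * ppm / 1,000,000,
converted to cents at $10,000/BTC; a quick sketch to reproduce it:

    SAT_CENTS = 0.01                    # one satoshi in US cents at $10k/BTC

    def fee_cents(amount_cents, base_msat, ppm):
        return (base_msat / 1000) * SAT_CENTS + amount_cents * ppm / 1_000_000

    for amount in (0.1, 1, 10, 100, 1_000, 10_000, 100_000):   # in cents
        before = fee_cents(amount, 1000, 1)    # old default: 1000 msat + 1 ppm
        after = fee_cents(amount, 5000, 500)   # proposed:    5000 msat + 500 ppm
        print(f"{amount:>8}c  {before:.7f}c  {after:.5f}c")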

Thoughts?
Rusty.

