Re: [Lightning-dev] `htlc_maximum_msat` as a valve for flow control on the Lightning Network

2022-09-26 Thread ZmnSCPxj via Lightning-dev
Good morning aj,

> > Basically, the intuition "small decrease in `htlc_max_msat` == small 
> > decrease in payment volume" inherently assumes that HTLC sizes have a flat 
> > distribution across all possible sizes.
> 
> 
> The intuition is really the other way around: if you want a stable,
> decentralised network, then you need the driving decision on routing to
> be something other than just "who's cheaper by 0.0001%" -- otherwise
> everyone just chooses the same route at all times (which becomes
> centralised towards the single provider who can best monetise forwarding
> via something other than fees), and probably that route quickly becomes
> unusable due to being drained (which isn't stable).

All monetisation is fee-based; the question is who pays the fees.
Certainly gossiped feerates will work less effectively if fees are paid via 
another mechanism.

In particular, discussions with actual forwarding node operators reveal that 
most of them think that CLBOSS undercuts fees too much in search of short-term 
profit, quickly depleting its usable liquidity in the long term.
In short, they want CLBOSS modified to raise fees and preserve the liquidity 
supply.
This suggests to me that channel saturation due to being 0.0001% cheaper is 
not something that will occur often, as most operators will settle on a feerate 
that maximizes their earnings per unit of liquidity they can provide, rather 
than trying to undercut everyone.

In particular, the fact that rebalancing already exists as part of the network 
protocol means that anyone trying to undercut will find their liquidity being 
bought out by more patient operators, who are willing to sacrifice short-term 
profits for long-term consistent earnings.

In short, the market will fix itself once we have more rational automated 
actors in place (i.e. not CLBOSS).
Indeed, price signals are precisely where you should look when judging 
whether you need more of a good or not.

But maybe I am just modelling everything incorrectly.
Certainly the fact that fees can be paid by somebody other than the sender 
can make gossiped feerates (which are the sender-paid feerates) less effective.

Regards,
ZmnSCPxj
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] `htlc_maximum_msat` as a valve for flow control on the Lightning Network

2022-09-26 Thread Anthony Towns
On Mon, Sep 26, 2022 at 01:26:57AM +, ZmnSCPxj via Lightning-dev wrote:
> > > * you're providing a way of throttling payment traffic independent of
> > > fees -- since fees are competitive, they can have discontinuous effects
> > > where a small change to fee can cause a large change to traffic volume;
> > > but this seems like it should mostly have a proportional response,
> > > with a small decrease in htlc_max_msat resulting in a small decrease in
> > > payment volume, and conversely. Much better for stability/optimisation!

> > This may depend on what gets popular for sender algorithms.
> > Senders may quantize their payments, i.e. select a "standard" value and 
> > divide all payments into multipath sub-payments of this value.

I don't think that's really the case. 

One option is that you quantize based on the individual payment -- you
want to send $100, great, your software splits it into 50x $2 payments,
and routes them. But that doesn't have an all or nothing effect: if you
reject anything over $1.99, then instead of routing 1/50th of payments
up to $100, you're routing 1/50th of payments up to $99.50.
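The per-payment splitting policy described above can be sketched in a few lines of Python; `quantized_parts` is a hypothetical helper (not anything specified by the protocol), assuming the sender simply splits into near-equal parts under a cap:

```python
def quantized_parts(total: int, max_part: int) -> list:
    """Split a payment into near-equal sub-payments, none larger than
    max_part (a simplified per-payment splitting policy). Amounts are
    integers in the smallest unit (cents here, msat in practice)."""
    n = -(-total // max_part)            # ceil(total / max_part): part count
    base, rem = divmod(total, n)
    # spread the remainder so the parts sum exactly to the total
    return [base + (1 if i < rem else 0) for i in range(n)]

# A $100 payment with a $2 cap: 50 parts of $2.
assert quantized_parts(10_000, 200) == [200] * 50

# Drop the cap to $1.99: 51 slightly smaller parts -- total flow changes
# only marginally, with no all-or-nothing cliff.
parts = quantized_parts(10_000, 199)
assert len(parts) == 51 and sum(parts) == 10_000 and max(parts) <= 199
```

The point is that a small reduction in the cap shifts the split slightly rather than excluding the channel entirely.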

The other approach is to quantize by some fixed value no matter what the
payment is (maybe for better privacy?). I don't think that's a good idea
in the first place -- it trades off maybe a small win for your privacy
for using up everyone else's HTLC slots -- but if it is, it'll need to be
quite a small value so as not to force you to round up the overall payment
too much, and to allow small payments in the first place. But in that case
most channels will have their htlc_max_msat well above that value anyway.

> Basically, the intuition "small decrease in `htlc_max_msat` == small decrease 
> in payment volume" inherently assumes that HTLC sizes have a flat 
> distribution across all possible sizes.

The intuition is really the other way around: if you want a stable,
decentralised network, then you need the driving decision on routing to
be something other than just "who's cheaper by 0.0001%" -- otherwise
everyone just chooses the same route at all times (which becomes
centralised towards the single provider who can best monetise forwarding
via something other than fees), and probably that route quickly becomes
unusable due to being drained (which isn't stable).

(But of course, I hadn't had any ideas on what such a thing could be,
otherwise I'd have suggested something like this earlier!)

So, to extend the intuition further: that means that if using
htlc_max_msat as a valve/throttle can fill that role, then that's a reason
to not do weird things like force every HTLC to be 2**n msats or similar.

If there is a conflict, far better to have a lightning network that's
decentralised, stable, and doesn't require node operators to spy on
transactions to pay for their servers.

It's not quite as bad as you suggest though -- the payment sizes
don't need to have a flat distribution, they only need to have a
smooth/continuous distribution.

> * Coffee or other popular everyday product may settle on a standard price, 
> which again implies a spike around that standard price.

Imagine the price of coffee is $5, and you find three potential paths 
to pay for that coffee:

  Z -> A -> X
  Z -> B -> C -> X
  Z -> B -> D -> X

(I think you choose both the fee and max_msat for Z->A and Z->B hops,
so we'll assume they're 0%/infinite, respectively)

Suppose the fee on AX is 0.01%, and the total fee for BCX is 0.02%
and the total fee for BDX is 0.1%.

If AX's max_msat is $5, they'll get the entire transaction. If it's
$4.99, you might instead optimise fees by doing AMP: send $4.99 through
AX and $0.01 through BCX, for a total fee rate of 0.01002%.

If everyone quantizes at 10c (500sat?) instead of 1c (50sat?) or lower
then that just means instead of getting maybe a 0.2% reduction in payment
flow, AX gets a 2% reduction in payment flow.
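A quick sketch of that arithmetic (amounts in cents; `routable_through_cap` is an illustrative helper, not a protocol function): a quantizing sender can only push multiples of its quantum through a capped channel, so a coarser quantum amplifies the flow reduction.

```python
def routable_through_cap(payment: int, cap: int, quantum: int) -> int:
    """Largest multiple of `quantum` at or below the channel's cap that a
    quantizing sender can push through it (the rest routes elsewhere)."""
    return (min(payment, cap) // quantum) * quantum

price = 500   # $5 coffee, in cents
cap = 499     # AX's max_msat dropped just below the price

# 1c quantization: AX still carries $4.99 of every $5 -> ~0.2% reduction
assert routable_through_cap(price, cap, 1) == 499

# 10c quantization: AX carries only $4.90 -> ~2% reduction
assert routable_through_cap(price, cap, 10) == 490
```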

Likewise, if AX's max_msat is $1, BCX's max_msat is $3, and BDX's max_msat
is $20, then you split your payment up as $1/$3/$1 and pay a fee of
0.034%. Meanwhile AX's payment flow has been reduced by perhaps 80%
(if everyone's buying $5 coffees), and BCX's by perhaps 25% (from $4 to
$3), allowing them to maintain balanced channels.
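The blended rates quoted above can be checked directly; `blended_fee_rate` is just an illustrative amount-weighted average, assuming the fee rates and splits from the example:

```python
def blended_fee_rate(parts) -> float:
    """Amount-weighted average fee rate across the paths of a split payment.
    parts: iterable of (amount, fee_rate) pairs."""
    total = sum(amount for amount, _ in parts)
    fees = sum(amount * rate for amount, rate in parts)
    return fees / total

# AX's max_msat at $4.99: $4.99 via AX (0.01%) plus $0.01 via BCX (0.02%)
r1 = blended_fee_rate([(4.99, 0.0001), (0.01, 0.0002)])
assert abs(r1 - 0.0001002) < 1e-9   # 0.01002%

# AX capped at $1, BCX at $3, remainder via BDX: $1/$3/$1 split
r2 = blended_fee_rate([(1.00, 0.0001), (3.00, 0.0002), (1.00, 0.001)])
assert abs(r2 - 0.00034) < 1e-9     # 0.034%
```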

> So the reliability of `htlc_max_msat` as a valve is dependent on market 
> forces, and may be as non-linear as feerates, which *are* the sum total of 
> the market force.

No: without some sort of external throttle, fees have a tendency to be all
or nothing. If there's no metric other than fees, why would I ever choose
to pay 0.02% (let alone 0.1%!) in fees? And if a new path comes along
offering a fee rate of 0.00999% fees, why would I continue paying 0.01%?

Even if everyone does start quantizing their payments -- and does so with
an almost six-order-of-magnitude jump from 1 msat to 500 sats -- you're only
implying traffic bumps of perhaps 2% when tweaking parameters that are
near important thresholds, rather than 100%.

> Feerates on the other hand 

Re: [Lightning-dev] Splice Pinning Prevention w/o Anchors

2022-09-26 Thread Greg Sanders
> I think this mitigation requires reliable access to the UTXO set

In this case, how about just setting nSequence to the value 1? The UTXO may not
exist, but maybe that's OK, since the one-block relative locktime means it
cannot pin the commitment tx.
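For context on why nSequence = 1 has that effect: under BIP 68 (version 2+ transactions), a small nSequence value is a consensus-enforced relative locktime, so every spent output must already be at least one block deep, i.e. confirmed. A minimal decoder sketch (the constants are the BIP 68 bit layout; the function itself is illustrative):

```python
# BIP 68 bit layout for nSequence (consensus-defined)
SEQUENCE_LOCKTIME_DISABLE_FLAG = 1 << 31  # bit 31 set: locktime disabled
SEQUENCE_LOCKTIME_TYPE_FLAG = 1 << 22     # bit 22 set: time-based units
SEQUENCE_LOCKTIME_MASK = 0x0000FFFF       # low 16 bits: locktime value

def decode_bip68(n_sequence: int):
    """Interpret an input's nSequence as a BIP 68 relative locktime
    (applies to version >= 2 transactions)."""
    if n_sequence & SEQUENCE_LOCKTIME_DISABLE_FLAG:
        return ("disabled", 0)
    value = n_sequence & SEQUENCE_LOCKTIME_MASK
    if n_sequence & SEQUENCE_LOCKTIME_TYPE_FLAG:
        return ("seconds", value * 512)   # time-based, 512 s granularity
    return ("blocks", value)              # height-based

# nSequence = 1: the spent output must be at least one block deep, i.e.
# confirmed -- so an unconfirmed (possibly junk) parent can't be attached.
assert decode_bip68(1) == ("blocks", 1)
```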

> If this concern is correct, I'm not sure we have a current good solution,
> the WIP package RBF proposal would be limited to only 2 descendants [1],
> and here we might have 3 generations: the splice, a commitment, a CPFP.

Maybe I misunderstood the point, but if we're assuming some future V3
transaction update, you could certainly add anchors to the splice and CPFP
it from there. I think the effort was to attempt to avoid waiting for such
an update.

Best,
Greg

On Mon, Sep 26, 2022 at 3:51 PM Antoine Riard 
wrote:

> Hi Dustin,
>
> From my understanding, splice pinning is problematic for channel funds
> safety, in the sense that once you have a splice floating in network mempools
> and your latest valid commitment transaction's pre-signed fees aren't enough
> to replace the splice, the lack of confirmation might damage the claim of HTLCs.
>
> I don't know if the current splice proposal discourages pending HTLCs
> during the splice lifetime; that would at least downgrade the pinning
> severity in the splicing case to a simple liquidity timevalue loss.
>
> W.r.t. the proposed mitigation:
>
> > For “ancestor bulking”, every `tx_add_input` proposed by a peer must be
> > included in the UTXO set. A node MUST verify the presence of a proposed
> > input before adding it to the splicing transaction.
>
> I think this mitigation requires reliable access to the UTXO set, a
> significant constraint for LN mobile clients relying on lightweight
> validation backends. While this requirement already exists in matters of
> routing to authenticate channel announcements, on the LDK-side we have
> alternative infrastructure to offer source-based routing to such a class of
> clients, without them having to care about the UTXO set [0]. I don't
> exclude there would be infrastructure in the future to access a subset of
> the UTXO set (e.g. if utreexo is deployed on the p2p network) for
> resource-constrained clients; however, as of today this is still pure
> speculation and vaporware.
>
> In the meantime, mobile clients might not be able to partake in splicing
> operations with their LSPs, or not without a decrease in trust-minimization
> (e.g. assuming your LSP doesn't initiate malicious pinnings against you).
>
> > 1) You cannot CPFP a splice transaction. All splices must be RBF’d to be
> > fee-bumped. The interactive tx protocol already provides a protocol for
> > initiating an RBF, which we re-use for splicing.
>
> The issue with RBF is that it assumes interactivity with your counterparties.
> As splicing is built on top of the interactive transaction construction
> protocol, from my understanding you could have a high number of participants
> to coordinate with, without knowledge of their signing policies (e.g. if
> they're time-constrained), so any re-signing operation has some odds of
> failing. Moreover, one of these participants could be malicious and flatly
> refuse to sign, in which case the already-broadcast splice transaction
> stays as a pin in the network mempools.
>
> If this concern is correct, I'm not sure we have a current good solution,
> the WIP package RBF proposal would be limited to only 2 descendants [1],
> and here we might have 3 generations: the splice, a commitment, a CPFP.
>
> [0] https://github.com/lightningdevkit/rapid-gossip-sync-server
> [1]
> https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-September/020937.html
>
> On Tue, 9 Aug 2022 at 16:15, Dustin Dettmer wrote:
>
>> As raised by @crypto-iq and @roasbeef, splices which permit arbitrary
>> script and input inclusion are at risk of being mempool pinned. Here we
>> present a solution to this splice pinning problem.
>>
>>
>> ## Background
>>
>> Pinning can be done by building a very large “junk” transaction that
>> spends from an important pending one. There are two known pinning vectors:
>> ancestor bulking thru addition of new inputs and junk pinning via the
>> spending of outputs.
>>
>>
>> Pinning pushes transactions to the bottom of the priority list without a
>> practical way of bumping it up. It is in effect a griefing attack, but in
>> the case of lightning can risk funds loss for HTLCs that have timed out for
>> a pinned commitment transaction.
>>
>>
>> Anchor outputs were introduced to lightning to mitigate the junk pinning
>> vector; they work by adding a minimum of  `1 CSV` lock to all outputs on
>> the commitment transaction except for two “anchor” outputs, one for each
>> channel peer. (These take advantage of a 1-tx carve-out exception to enable
>> propagation of anchors despite any junk attached to the peer’s anchor).
>>
>>
>> ## Mitigation
>>
>> Splice transactions are susceptible to both junk and bulk pinning
>> attacks. Here’s how we propose mitigating these for splice.
>>
>>
>> [https://i.imgur.com/ayiO1Qt.png]
>>

Re: [Lightning-dev] Splice Pinning Prevention w/o Anchors

2022-09-26 Thread Antoine Riard
Hi Dustin,

From my understanding, splice pinning is problematic for channel funds
safety, in the sense that once you have a splice floating in network mempools
and your latest valid commitment transaction's pre-signed fees aren't enough
to replace the splice, the lack of confirmation might damage the claim of HTLCs.

I don't know if the current splice proposal discourages pending HTLCs
during the splice lifetime; that would at least downgrade the pinning
severity in the splicing case to a simple liquidity timevalue loss.

W.r.t. the proposed mitigation:

> For “ancestor bulking”, every `tx_add_input` proposed by a peer must be
> included in the UTXO set. A node MUST verify the presence of a proposed
> input before adding it to the splicing transaction.

I think this mitigation requires reliable access to the UTXO set, a
significant constraint for LN mobile clients relying on lightweight
validation backends. While this requirement already exists in matters of
routing to authenticate channel announcements, on the LDK-side we have
alternative infrastructure to offer source-based routing to such a class of
clients, without them having to care about the UTXO set [0]. I don't
exclude there would be infrastructure in the future to access a subset of
the UTXO set (e.g. if utreexo is deployed on the p2p network) for
resource-constrained clients; however, as of today this is still pure
speculation and vaporware.

In the meantime, mobile clients might not be able to partake in splicing
operations with their LSPs, or not without a decrease in trust-minimization
(e.g. assuming your LSP doesn't initiate malicious pinnings against you).

> 1) You cannot CPFP a splice transaction. All splices must be RBF’d to be
> fee-bumped. The interactive tx protocol already provides a protocol for
> initiating an RBF, which we re-use for splicing.

The issue with RBF is that it assumes interactivity with your counterparties.
As splicing is built on top of the interactive transaction construction
protocol, from my understanding you could have a high number of participants
to coordinate with, without knowledge of their signing policies (e.g. if
they're time-constrained), so any re-signing operation has some odds of
failing. Moreover, one of these participants could be malicious and flatly
refuse to sign, in which case the already-broadcast splice transaction
stays as a pin in the network mempools.

If this concern is correct, I'm not sure we have a current good solution,
the WIP package RBF proposal would be limited to only 2 descendants [1],
and here we might have 3 generations: the splice, a commitment, a CPFP.

[0] https://github.com/lightningdevkit/rapid-gossip-sync-server
[1]
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-September/020937.html

On Tue, 9 Aug 2022 at 16:15, Dustin Dettmer wrote:

> As raised by @crypto-iq and @roasbeef, splices which permit arbitrary
> script and input inclusion are at risk of being mempool pinned. Here we
> present a solution to this splice pinning problem.
>
>
> ## Background
>
> Pinning can be done by building a very large “junk” transaction that
> spends from an important pending one. There are two known pinning vectors:
> ancestor bulking thru addition of new inputs and junk pinning via the
> spending of outputs.
>
>
> Pinning pushes transactions to the bottom of the priority list without a
> practical way of bumping it up. It is in effect a griefing attack, but in
> the case of lightning can risk funds loss for HTLCs that have timed out for
> a pinned commitment transaction.
>
>
> Anchor outputs were introduced to lightning to mitigate the junk pinning
> vector; they work by adding a minimum of  `1 CSV` lock to all outputs on
> the commitment transaction except for two “anchor” outputs, one for each
> channel peer. (These take advantage of a 1-tx carve-out exception to enable
> propagation of anchors despite any junk attached to the peer’s anchor).
>
>
> ## Mitigation
>
> Splice transactions are susceptible to both junk and bulk pinning attacks.
> Here’s how we propose mitigating these for splice.
>
>
> [https://i.imgur.com/ayiO1Qt.png]
>
>
> For “ancestor bulking”, every `tx_add_input` proposed by a peer must be
> included in the UTXO set. A node MUST verify the presence of a proposed
> input before adding it to the splicing transaction.
>
>
> For “output junk”, every output included directly in a splice transaction
> MUST be a v0 P2WSH output whose witness script begins with a minimum of `1 CSV`
> relative timelock. No output on the splice transaction will be spendable
> until it is included in a block. This prevents junk pinning by removing the
> ability to propose spends of splice outputs before the transaction is
> included in a block.
>
>
> There are two side effects here.
>
>
> 1) You cannot CPFP a splice transaction. All splices must be RBF’d to be
> fee-bumped. The interactive tx protocol already provides a protocol for
> initiating an RBF, which we re-use for splicing.
>
> 2) Arbitrary 3rd