Re: [bitcoin-dev] Scaling Lightning Safely With Feerate-Dependent Timelocks

2023-12-22 Thread Eric Voskuil via bitcoin-dev
The fees paid to mine the set of transactions in a given block are known only to the miner that produced the block. Assuming that tx inputs less outputs represents an actual economic force is an error.

On Dec 22, 2023, at 09:24, jlspc via bitcoin-dev wrote:

Hi Antoine,

Thanks for your thoughtful response.

Comments inline below:

> Hi John,

> While the idea of using a sliding reaction window for blockchain congestion
> detection has been present in the "smart contract" space at large [0], and
> this has been discussed informally among Lightning devs and covenant
> designers a few times [1] [2], this is the first and best formalization of
> sliding timelocks as a function of block feerates for Bitcoin that I'm
> aware of.

Thanks!

> Here is my understanding of the feerate-dependent timelock proposal.

> A transaction cannot be included in a block if:
> - height-based or epoch-based absolute or relative timelocks are not
> satisfied according to current consensus rules (bip68 and bip113 and
> implementation details), or
> - fewer than `block_count` blocks have a median feerate above the
> median feerate of the `window_size` period

It's a little bit different from that.
The transaction cannot be included in the blockchain until after an aligned window W of window_size blocks where:
1) W starts no sooner than when the height-based or epoch-based absolute and/or relative timelocks have been satisfied, and
2) W contains fewer than block_count blocks with median feerate greater than feerate_value_bound.

Note that the aligned window cannot start until the absolute and/or relative timelocks have been satisfied and the transaction itself has to come after the aligned window.
However, once such an aligned window exists in the blockchain, the transaction can appear at any later time (and not just within a window that itself meets the block_count and feerate_value_bound limitations).

> A median feerate is computed for each block.
> (This is unclear to me if this is the feerate for half of the block's
> weight or the median feerate with all weight units included in the
> block as the sample)

A feerate F is the median feerate of a block B if F is the largest feerate such that the total size of the transactions in B with feerate greater or equal to F is at least 2 million vbytes.

> From then, you have 3 parameters included in the nSequence field.
> - feerate_value_bound
> - window_size
> - block_count

> Those parameters can be selected by the transaction builder (and
> committed with a signature or hash chain-based covenant).
> As such, off-chain construction counterparties can select the
> feerate_value_bound at which their time-sensitive transaction
> confirmation will be delayed.

> E.g., let's say you have a LN-penalty Alice-Bob channel. Second-stage
> HTLC transactions are pre-signed with feerate_value_bound at 100
> sat/vbyte.
> The window_size selected is 100 blocks and the block_count is 70 (this
> guarantees tampering-robustness of the feerate_value_bound in the face of
> miner coalitions).

> There is a 1 BTC offered HTLC pending with expiration time T, from Alice to Bob.

> If at time T, the per-block median feerate of at least 70 of the latest
> 100 blocks is above 100 sat/vbyte, neither Alice's HTLC-timeout nor
> Bob's HTLC-preimage can be included in the chain.

The rules are actually:
1) wait until time T, then
2) wait until the start of a full aligned window W with 100 consecutive blocks that starts no earlier than T and that has fewer than 70 blocks with median feerate above 100 sats/vbyte.
(The values 100, 70, and 100 cannot actually be selected in the implementation in the paper, but that's a technical detail and could be changed if the FDT is specified in the annex, as you propose.)

> From my understanding, Feerate-Dependent Timelocks effectively
> constitute the lineaments of a solution to the "Forced Expiration
> Spam" as described in the LN paper.

Great!

> I think you have a few design caveats to be aware of:
> - for current LN-penalty, the revokeable scripts should be modified to
> ensure the CSV opcode inspects the enforcement of FDT's parameters, as
> those revokeable scripts are committed to by all parties

Yes, definitely.

> - there should be a delay period at the advantage of one party,
> otherwise you still have a feerate race if the revocation bip68 timelock
> has expired during the FDT delay

> As such, I believe the FDT parameters should be enriched with another
> parameter: `claim_grace_period`, a new type of relative timelock whose
> endpoint should be the `feerate_value_bound` itself.

I'm not sure I'm following your proposal.
Are you suggesting that the transaction with the FDT has to wait an additional claim_grace_period in order to allow conflicting transactions from the other party to win the race?
For example, assume the HTLC-success transaction has a higher feerate than the feerate_value_bound, and the conflicting HTLC-timeout transaction has an FDT with th

Re: [bitcoin-dev] Lamport scheme (not signature) to economize on L1

2023-12-22 Thread Yuri S VB via bitcoin-dev
There are three possible cryptanalysis attacks on LAMPPRI in my scheme:

1.  From ADDR alone, before T0-1 (to be precise, the publishing of (TX, SIG));
2.  From ADDR and (TX, SIG), after T0-1 (to be precise, the publishing of (TX, SIG));
3.  Outmining the rest of the mining community, starting from a disadvantage of not
less than (T1-T0-1) blocks, after T1 (to be precise, at the time of publishing of LAMPPRI);

...which brings us back to my argument with Boris: there is something else we
missed in our considerations, which you said yourself: *ironically, LAMPPUB is
never published*.

We can have LAMPPUB be 1Mb or even 1Gb long, aiming at making the rate of collision
in HL(.) negligible (note this is perfectly adherent to the proposition of a
memory-hard hash, and would have the additional benefit of allowing a
processing-storage trade-off). In this case, we really have:

For 1: a pre-image problem for a function

f1: {k | k is a valid ECCPRI} X {l | l is a valid LAMPPRI} -> {a | a is in the
format of an ADDR},
having as its domain the Cartesian product of the set of possible ECCPRIs with
the set of possible LAMPPRIs, and as its codomain the set of possible ADDRs.

For 2: a pre-image problem for a function

f2_(TX,ECCPUB): {l | l is a valid LAMPPRI} -> {a | a is in the format of an
ADDR} X {LSIG}
(notice the nuance: {LSIG} means the singleton containing only the specific
LSIG that was actually published, not anything merely 'in the format of an LSIG').

Notice that whatever advantage 2 has over 1 must be weighed against the
prospect of the protocol being successfully terminated before the attack
can be carried out.

For 3: the equivalent of a double-spending attack with, in the worst case, not
less than (T1-T0-1) blocks of advantage for the rest of the community.

When I have the time, I'll do the math on the entropy in each case,
assuming ideal hashes; but taking for granted the approximation given by Boris,
we would have half the size of ADDR as the strength, not half of LAMPPRI, so
mission accomplished!
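
As a rough sketch of that arithmetic (my own illustration, assuming an ideal
hash and the 12-byte ADDR figure discussed further down the thread):

\[
|\mathrm{ADDR}| = 12~\text{bytes} = 96~\text{bits}
\quad\Longrightarrow\quad
\text{strength} \approx 2^{96/2} = 2^{48},
\]

which is consistent with the 2^48 figure used in the adversary model quoted
below for the protocol's prescribed 48 hours.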

Another ramification of that is we can conceive a multi-use version of this
scheme, in which LAMPPRI is the ADDR resulting from a previous (ECCPUB, LAMPPUB)
pair. The increased size of LAMPPRI would be compensated by one entire ADDR
less in the blockchain. Namely, we'd have an extra marginal reduction of 12
bytes per use (possibly more, depending on how many more bytes we can economize
given that added strength).

YSVB.

On Friday, December 22nd, 2023 at 5:52 AM, G. Andrew Stone wrote:


> Does this affect the security model WRT chain reorganizations? In the classic 
> doublespend attack, an attacker can only redirect UTXOs that they spent. With 
> this proposal, if I understand it correctly, an attacker could redirect all 
> funds that have "matured" (revealed the previous preimage in the hash chain) 
> to themselves. The # blocks to maturity in your proposal becomes the classic 
> "embargo period" and every coin spent by anyone between the fork point and 
> the maturity depth is available to the attacker to doublespend?
> 

> On Thu, Dec 21, 2023, 8:05 PM Yuri S VB via bitcoin-dev wrote:
> 

> > You are right to point out that my proposal was lacking a defense against
> > rainbow tables, but there is a simple solution for it:
> > take nonces from recent blocks, say, T0-6, ..., T0-13, to salt LSIG,
> > and ECCPUB to salt LAMPPUB. Salts don't need to be secret, only unknown to
> > the builder of the rainbow table while they made it, which is the case, since
> > here we have 8*32 = 256 bits of salt for LSIG, and the entropy of ECCPUB for
> > the second.
> > 

> > With rainbow tables out of our way, there is only brute-force analysis to
> > mind. Honestly, I guess I should find a less 'outrageously generous' upper
> > bound for the adversary's model than just assuming they have a magic wand to
> > convert SHA256 ASICs into CPUs with the same hashrate for memory- and
> > serial-work-hard hashes (thereby giving away hash hardness). That's
> > because with such a 'magic wand' many mining pools would not only be capable
> > of cracking 2^48 hashes far within the protocol's prescribed 48 hours, but
> > also 2^64 within a block time, which would invalidate a lot of what is
> > still in use today.
> > 

> > Please, allow me a few days to think that through.
> > 

> > YSVB
> > 

> > Sent with Proton Mail secure email.
> > 

> > On Wednesday, December 20th, 2023 at 10:33 PM, Nagaev Boris 
> >  wrote:
> > 

> > 

> > > On Tue, Dec 19, 2023 at 6:22 PM yuri...@pm.me wrote:
> > >
> > > > Thank you for putting yourself through the working of carefully 
> > > > analyzing my proposition, Boris!
> > > >
> > > > 1) My demonstration concludes that 12 bytes is still a very conservative
> > > > figure for the hashes. I'm not sure where you got the 14 bytes
> > > > figure. This is 2*(14-12) = 4 bytes less.
> > >
> > >
> > > I agree. It should have been 12.
> > >
> > > > 2) Thank you for pointing out that ECCPUB is necessary. That's exactly 
> > > > right and I failed to 

Re: [bitcoin-dev] Swift Activation - CTV

2023-12-22 Thread alicexbt via bitcoin-dev
Hi Luke,

This is not the first time I am writing this, but you keep ignoring it and
threatening with a URSF. Please build a BIP 8 client with LOT=TRUE if you think
it's the best way to activate, and share it so that users can run it.

I had created this branch specifically for it but needed help which I didn't 
get: https://github.com/xbtactivation/bitcoin/tree/bip8-ctv

Discussing trade-offs involved in different activation methods and providing 
options to users is not an attack.

/dev/fd0
floppy disk guy

Sent with Proton Mail secure email.

On Friday, December 22nd, 2023 at 1:05 AM, Luke Dashjr  wrote:


> This IS INDEED an attack on Bitcoin. It does not activate CTV - it would
> (if users are fooled into using it) give MINERS the OPTION to activate
> CTV. Nobody should run this, and if it gets any traction, it would
> behoove the community to develop and run a "URSF" making blocks
> signalling for it invalid.
> 
> Luke
> 
> 
> On 12/19/23 20:44, alicexbt via bitcoin-dev wrote:
> 
> > Hello World,
> > 
> > Note: This is not an attack on bitcoin. This is an email with some text and
> > links. Users can decide what works best for themselves. There is also scope
> > for discussion about changing the method or params.
> > 
> > I want to keep it short, as I have no energy left. I have explored different
> > options for activation and this feels the safest at this point in 2023. I
> > have not done any magic or innovation, just updated the activation params. If
> > you agree with them, please run this client; otherwise build your own. Anyone
> > who calls this an attack and does not build an alternative option is an
> > attack in itself.
> > 
> > It activates CTV, which is a simple covenant proposal and achieves what we
> > expect it to. It is upgradeable. I like simple things, at least to start
> > with.
> > 
> > Activation parameters:
> > 
> > `consensus.vDeployments[Consensus::DEPLOYMENT_COVTOOLS].nStartTime = 1704067200;`
> > `consensus.vDeployments[Consensus::DEPLOYMENT_COVTOOLS].nTimeout = 1727740800;`
> > `consensus.vDeployments[Consensus::DEPLOYMENT_COVTOOLS].min_activation_height = 874874;`
> > 
> > I need payment pools and CTV does that for me. Apart from that, it enables
> > vaults, congestion control, etc. We have more PoCs for CTV than we had for
> > taproot, and I understand it better.
> > 
> > If you agree with me, please build and run this client before 01 Jan 2024;
> > otherwise we can discuss ordinals for the next 5 years and activate something
> > in 2028.
> > 
> > Cheers
> > 
> > /dev/fd0
> > floppy disk guy
> > 
> > Sent with Proton Mail secure email.
> > 


Re: [bitcoin-dev] Scaling Lightning Safely With Feerate-Dependent Timelocks

2023-12-22 Thread jlspc via bitcoin-dev
Hi Antoine,

Thanks for your thoughtful response.

Comments inline below:

> Hi John,

> While the idea of using a sliding reaction window for blockchain congestion
> detection has been present in the "smart contract" space at large [0], and
> this has been discussed informally among Lightning devs and covenant
> designers a few times [1] [2], this is the first and best formalization of
> sliding timelocks as a function of block feerates for Bitcoin that I'm
> aware of.

Thanks!

> Here is my understanding of the feerate-dependent timelock proposal.

> A transaction cannot be included in a block if:
> - height-based or epoch-based absolute or relative timelocks are not
> satisfied according to current consensus rules (bip68 and bip113 and
> implementation details), or
> - fewer than `block_count` blocks have a median feerate above the
> median feerate of the `window_size` period

It's a little bit different from that.
The transaction cannot be included in the blockchain until after an aligned 
window W of window_size blocks where:
1) W starts no sooner than when the height-based or epoch-based absolute and/or 
relative timelocks have been satisfied, and
2) W contains fewer than block_count blocks with median feerate greater than 
feerate_value_bound.

Note that the aligned window cannot start until the absolute and/or relative 
timelocks have been satisfied and the transaction itself has to come after the 
aligned window.
However, once such an aligned window exists in the blockchain, the transaction 
can appear at any later time (and not just within a window that itself meets 
the block_count and feerate_value_bound limitations).
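
As a minimal sketch of that rule (my own illustration, not the paper's code; it
assumes median_feerates is a list indexed by block height, that "aligned" means
window start heights that are multiples of window_size, and that timelock_height
is the height at which the ordinary absolute/relative timelocks are satisfied):

    def earliest_fdt_inclusion_height(median_feerates, timelock_height,
                                      window_size, block_count,
                                      feerate_value_bound):
        """Return the first height at which a transaction carrying this FDT could
        be included, or None if no qualifying aligned window exists yet.
        median_feerates[h] is the median feerate of the block at height h."""
        # First aligned window boundary at or after the point where the ordinary
        # absolute/relative timelocks are satisfied.
        first_start = ((timelock_height + window_size - 1) // window_size) * window_size
        tip = len(median_feerates) - 1
        for w_start in range(first_start, tip - window_size + 2, window_size):
            window = median_feerates[w_start:w_start + window_size]
            high_blocks = sum(1 for f in window if f > feerate_value_bound)
            if high_blocks < block_count:
                # The transaction may appear in any block after this window.
                return w_start + window_size
        return None

With the example parameters discussed below (window_size = 100, block_count = 70,
feerate_value_bound = 100 sat/vbyte), this returns the height just past the first
full aligned 100-block window after T in which fewer than 70 blocks have a median
feerate above 100 sat/vbyte.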

> A median feerate is computed for each block.
> (This is unclear to me if this is the feerate for half of the block's
> weight or the median feerate with all weight units included in the
> block as the sample)

A feerate F is the median feerate of a block B if F is the largest feerate such 
that the total size of the transactions in B with feerate greater or equal to F 
is at least 2 million vbytes.
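
A minimal sketch of that definition (my own illustration; the size threshold and
its unit are taken as a parameter here, using the 2-million figure stated above):

    def block_median_feerate(txs, size_threshold=2_000_000):
        """Largest feerate F such that the transactions with feerate >= F total
        at least size_threshold, per the definition above. txs is a list of
        (feerate, size) pairs for the block's transactions."""
        covered = 0
        # Walk transactions from highest to lowest feerate, accumulating size.
        for feerate, size in sorted(txs, key=lambda t: t[0], reverse=True):
            covered += size
            if covered >= size_threshold:
                return feerate
        # Assumption: a block that never reaches the threshold is treated as
        # having a median feerate of zero (the paper may handle this differently).
        return 0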

> From then, you have 3 parameters included in the nSequence field.
> - feerate_value_bound
> - window_size
> - block_count

> Those parameters can be selected by the transaction builder (and
> committed with a signature or hash chain-based covenant).
> As such, off-chain construction counterparties can select the
> feerate_value_bound at which their time-sensitive transaction
> confirmation will be delayed.

> E.g., let's say you have a LN-penalty Alice-Bob channel. Second-stage
> HTLC transactions are pre-signed with feerate_value_bound at 100
> sat/vbyte.
> The window_size selected is 100 blocks and the block_count is 70 (this
> guarantees tampering-robustness of the feerate_value_bound in the face of
> miner coalitions).

> There is a 1 BTC offered HTLC pending with expiration time T, from Alice to Bob.

> If at time T, the per-block median feerate of at least 70 of the latest
> 100 blocks is above 100 sat/vbyte, neither Alice's HTLC-timeout nor
> Bob's HTLC-preimage can be included in the chain.

The rules are actually:
1) wait until time T, then
2) wait until the start of a full aligned window W with 100 consecutive blocks 
that starts no earlier than T and that has fewer than 70 blocks with median 
feerate above 100 sats/vbyte.
(The values 100, 70, and 100 cannot actually be selected in the implementation 
in the paper, but that's a technical detail and could be changed if the FDT is 
specified in the annex, as you propose.)

> From my understanding, Feerate-Dependent Timelocks effectively
> constitute the lineaments of a solution to the "Forced Expiration
> Spam" as described in the LN paper.

Great!

> I think you have a few design caveats to be aware of:
> - for current LN-penalty, the revokeable scripts should be modified to
> ensure the CSV opcode inspects the enforcement of FDT's parameters, as
> those revokeable scripts are committed to by all parties

Yes, definitely.

> - there should be a delay period at the advantage of one party,
> otherwise you still have a feerate race if the revocation bip68 timelock
> has expired during the FDT delay

> As such, I believe the FDT parameters should be enriched with another
> parameter: `claim_grace_period`, a new type of relative timelock whose
> endpoint should be the `feerate_value_bound` itself.

I'm not sure I'm following your proposal.
Are you suggesting that the transaction with the FDT has to wait an additional 
claim_grace_period in order to allow conflicting transactions from the other 
party to win the race?
For example, assume the HTLC-success transaction has a higher feerate than the 
feerate_value_bound, and the conflicting HTLC-timeout transaction has an FDT 
with the feerate_value_bound (and suitable window_size and block_count 
parameters to defend against miner attacks).
In this case, is the worry that the HTLC-success and HTLC-timeout transactions 
could both be delayed until there is a window W that

Re: [bitcoin-dev] Altruistic Rebroadcasting - A Partial Replacement Cycling Mitigation

2023-12-22 Thread Peter Todd via bitcoin-dev
On Thu, Dec 21, 2023 at 09:59:04PM +, Antoine Riard wrote:
> Hi Peter,
> 
> > Obviously a local replacement cache is a possibility. The whole point of my
> > email is to show that a local replacement cache isn't necessarily needed, as
> > any altruistic party can implement that cache for all nodes.
> 
> Sure, note as soon as you start to introduce an altruistic party
> rebroadcasting for everyone in the network, we're talking about a different
> security model for time-sensitive second-layers. And unfortunately, one can
> expect the same dynamics we're seeing with BIP157 filter servers, a handful
> of altruistic nodes for a large swarm of clients.

Are you claiming that BIP157 doesn't work well? In my experience it does.

Rebroadcasting is additive security, so the minimum number you need for the
functionality to work is just one.

> > 1) That is trivially avoided by only broadcasting txs that meet the local
> > mempool min fee, plus some buffer. There's no point to broadcasting txs that
> > aren't going to get mined any time soon.
> >
> > 2) BIP-133 feefilter avoids this as well, as Bitcoin Core uses the feefilter
> > P2P message to tell peers not to send transactions below a threshold fee rate.
> >
> > https://github.com/bitcoin/bips/blob/master/bip-0133.mediawiki
> 
> I know we can introduce BIP-133-style filtering here, where only the top % of
> the block template is altruistically rebroadcast. I still think it is an
> open question how a second-layer node would discover the "global" mempool
> min fee, and note an adversary might inflate the top % of the block template
> to prevent rebroadcast of a malicious HTLC-preimage.

Huh? Bitcoin nodes almost always use the same mempool limit, 300MB, so mempool
min fees are very consistent across nodes. I just checked four different
long-running nodes I have access to, running a variety of Bitcoin Core versions
on different platforms and in very different places in the world, and their
minfees all agree to well within 1%.

In fact, they agree on min fee much *more* closely than the size of their
mempools (in terms of # of transactions). Which makes sense when you think
about it, as the slope of the supply/demand curve is fairly flat right now.
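
For anyone who wants to repeat that comparison, a minimal sketch (assuming a
reachable node with bitcoin-cli on PATH) reading the relevant fields from
getmempoolinfo:

    import json
    import subprocess

    def mempool_stats():
        """Return (mempool min fee in BTC/kvB, tx count) from `bitcoin-cli getmempoolinfo`."""
        info = json.loads(subprocess.check_output(["bitcoin-cli", "getmempoolinfo"]))
        return info["mempoolminfee"], info["size"]

    min_fee, tx_count = mempool_stats()
    print(f"mempool min fee: {min_fee} BTC/kvB across {tx_count} transactions")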

> > Did you actually read my email? I worked out the budget required in a
> > reasonable worst case scenario:
> 
> Yes. I think my uncertainty lies in the fact that an adversary can
> rebroadcast the same UTXO amount _multiple_ times under different txids,
> occupying all your altruistic bandwidth. Your economic estimation makes a
> per-block assumption (i.e. 3.125 BTC / 10 minutes). Where it might be
> pernicious is that an adversary should be able to reuse those 3.125 BTC of
> value for each inter-block time.
> 
> Of course, you can introduce an additional per-UTXO rate limitation, though
> I think it is very unclear how this rate limit would avoid introducing a new
> pinning vector for "honest" counterparty transactions.

From the point of view of a single node, an attacker can not reuse a UTXO in a
replacement cycling attack. The BIP125 rules, in particular rule #4, ensure
that each replacement consumes liquidity because each replacement requires a
higher fee, at least high enough to "pay for" the bandwidth of the replacement.

An attacker trying to use the same UTXO's to cycle out multiple victim
transactions has to pay a higher fee for each victim cycled out. This is
because at each step of the cycle, the attacker had to broadcast a transaction
with a higher fee than some other transaction.
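
A minimal sketch of that cost argument, using the BIP125 rule #3/#4 floor (a
replacement must pay at least the fees it evicts, plus its own size times the
incremental relay feerate); the numbers below are illustrative only:

    def min_replacement_fee(replaced_fees_sat, replacement_vsize_vb,
                            incremental_relay_feerate=1):
        """BIP125 rules #3/#4 floor: pay at least the fees of everything evicted,
        plus the replacement's own bandwidth at the incremental relay feerate
        (assumed here to be the common default of 1 sat/vB)."""
        return sum(replaced_fees_sat) + replacement_vsize_vb * incremental_relay_feerate

    # Illustrative cycling run: each step evicts the attacker's previous
    # transaction, so the required fee ratchets up and liquidity is consumed
    # at every cycle.
    fee, vsize = 1_000, 200
    for cycle in range(1, 6):
        fee = min_replacement_fee([fee], vsize)
        print(f"cycle {cycle}: attacker must offer at least {fee} sat")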

> > Here, we're talking about an attacker that has financial resources high enough
> > to possibly do 51% attacks. And even in that scenario, storing the entire
> > replacement database in RAM costs under $1000
> 
> > The reality is such an attack would probably be limited by P2P bandwidth, not
> > financial resources, as 2MB/s of tx traffic would likely leave mempools in a
> > somewhat inconsistent state across the network due to bandwidth limitations.
> > And that is *regardless* of whether or not anyone implements altruistic tx
> > broadcasting.
> 
> Sure, I think we considered bandwidth limitations in the design of
> package relay. Though see my point above: the real difficulty in your
> proposed design sounds to me like the ability to reuse the same UTXO
> liquidity for an unbounded number of concurrent replacement transactions,
> exhausting all the altruistic bandwidth.
> 
> One can just use a 0.1 BTC UTXO and fan out 3.125 BTC of concurrent
> replacement transactions from there. So I don't think the assumed adversary
> financial resources are accurate.

If I understand correctly, here you are talking about an attacker with
connections to many different nodes at once, using the same UTXO(s) to do
replacement cycling attacks against different victim transactions on many
different nodes at once.

There is no free lunch in this strategy. By using the same UTXO(s) against
multiple victims, the attacker dramatically