Re: [bitcoin-dev] Scaling Lightning Safely With Feerate-Dependent Timelocks

2023-12-29 Thread David A. Harding via bitcoin-dev

On 2023-12-29 15:17, Nagaev Boris wrote:

Feerate-Dependent Timelocks do create incentives to accept out-of-band
fees to decrease in-band fees and speed up mining of transactions
using FDT! Miners can make a 5% discount on fees paid out-of-band and
many people will use it. Observed fees decrease and FDT transactions
mature faster. It is beneficial for both parties involved: senders of
transactions save 5% on fees, miners get FDT transactions mined faster
and get more profits (for the sake of example more than 5%).


Hi Nagaev,

That's an interesting idea, but I don't think that it works due to the 
free rider problem: miner Alice offers a 5% discount on fees paid out of 
band now in the hopes of collecting more than 5% in extra fees later due 
to increased urgency from users that depended on FDTs.  However, 
sometimes the person who actually collects extra fees is miner Bob who 
never offered a 5% discount.  By not offering a discount, Bob earns more 
money on average per block than Alice (all other things being equal), 
eventually forcing her to stop offering the discount or to leave the 
market.
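The free-rider dynamic described above can be illustrated with a toy numeric model. All numbers here (fee revenue, the 8% urgency premium, Alice's hashrate) are illustrative assumptions, not measurements:

```python
# Toy model of the free-rider problem with out-of-band fee discounts.
# All numbers are illustrative assumptions, not measurements.

total_fees = 100.0       # in-band fee revenue available per block (arbitrary units)
urgency_premium = 0.08   # assumed extra fees later from increased FDT urgency (8%)
discount = 0.05          # Alice's out-of-band discount
alice_hashrate = 0.2     # assumed fraction of hashrate Alice controls

# Alice pays the 5% discount on every block she mines, but the urgency
# premium is collected by whichever miner mines the later high-urgency
# block -- i.e., it is shared across all miners in proportion to hashrate.
alice_revenue = total_fees * (1 - discount) + urgency_premium * total_fees * alice_hashrate
bob_revenue = total_fees + urgency_premium * total_fees * alice_hashrate  # no discount

print(f"Alice per-block revenue: {alice_revenue:.2f}")
print(f"Bob   per-block revenue: {bob_revenue:.2f}")
assert bob_revenue > alice_revenue  # Bob free-rides on the premium Alice created
```

Under these assumed numbers, Bob out-earns Alice on every block, which is the pressure that forces Alice to stop discounting or leave the market.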


Additionally, if nearly everyone was paying discounted fees out of band, 
participants in contract protocols using FDTs would know to use 
proportionally higher FDT amounts (e.g. 5% over their actual desired 
fee), negating the benefit to miners of offering discounted fees.


-Dave
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Scaling Lightning Safely With Feerate-Dependent Timelocks

2023-12-29 Thread David A. Harding via bitcoin-dev

On 2023-12-28 08:42, Eric Voskuil via bitcoin-dev wrote:

Assuming a “sufficient fraction” of
one of several economically rational behaviors is a design flaw.


The amount of effort it takes a user to pay additional miners out of
band is likely to increase much faster than the probability that the user's
payment will confirm on time.  For example, offering payment to the set
of miners that controls 90% of hash rate will result in confirmation
within 6 blocks 99.9999% of the time, meaning it's probably not worth
putting any effort into offering payment to the other 10% of miners.  If
out of band payments become a significant portion of mining revenue via
a mechanism that results in small miners making significantly less
revenue than large miners, there will be an incentive to centralize
mining even more than it is today.  The more centralized mining is, the
less resistant Bitcoin is to transaction censorship.
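The 90%-of-hashrate, 6-block figure in the example above can be checked directly: the payment fails to confirm only if every one of the next n blocks is mined by the miners who were not paid.

```python
# If miners controlling fraction p of hashrate accept a payment, the chance
# that none of the next n blocks is mined by any of them is (1 - p)**n.
p = 0.90   # fraction of hashrate offered the payment
n = 6      # blocks waited

p_confirm = 1 - (1 - p) ** n
print(f"P(confirm within {n} blocks): {p_confirm:.6%}")
print(f"Residual failure probability: {(1 - p) ** n:.2e}")
```

With p = 0.9 and n = 6, the residual failure probability is one in a million, which is why paying the remaining 10% of miners buys almost nothing.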

We can't prevent people from paying out of band, but we can ensure that
the easiest and most effective way to pay for a transaction is through
in-band fees and transactions that are relayed to every miner who is
interested in them.  If we fail at that, I think Bitcoin losing its
censorship resistance will be inevitable.  LN, coinpools, and channel
factories all strongly depend on Bitcoin transactions not being
censored, so I don't think any security is lost by redesigning them to
additionally depend on reasonably accurate in-band fee statistics.

Miners mining their own transactions, accepting the occasional
out-of-band fee, or having varying local transaction selection policies
are situations that are easily addressed by the user of fee-dependent
timelocks choosing a long window and setting the dependent feerate well
below the maximum feerate they are willing to pay.

-Dave


Re: [bitcoin-dev] Scaling Lightning Safely With Feerate-Dependent Timelocks

2023-12-29 Thread David A. Harding via bitcoin-dev

On 2023-12-28 08:06, jlspc via bitcoin-dev wrote:

On Friday, December 22nd, 2023 at 8:36 AM, Nagaev Boris
 wrote:

To validate a transaction with FDT [...]
a light client would have to determine the median fee
rate of the recent blocks. To do that without involving trust, it has
to download the blocks. What do you think about including median
feerate as a required OP_RETURN output in coinbase transaction?


Yes, I think that's a great idea!


I think this points to a small challenge of implementing this soft fork 
for pruned full nodes.  Let's say a fee-dependent timelock (FDT) soft 
fork goes into effect at time/block _t_.  Both before and for a while 
after _t_, Alice is running an older pruned full node that did not 
contain any FDT-aware code, so it prunes blocks after _t_ without 
storing any median feerate information about them (not even commitments 
in the coinbase transaction).  Later, well after _t_, Alice upgrades her 
node to one that is aware of FDTs.  Unfortunately, as a pruned node, it 
doesn't have earlier blocks, so it can't validate FDTs without 
downloading those earlier blocks.


I think the simplest solution would be for a recently-upgraded node to 
begin collecting median feerates for new blocks going forward and to 
only enforce FDTs for which it has the data.  That would mean anyone 
depending on FDTs should be a little more careful about them near 
activation time, as even some node versions that nominally enforced FDT 
consensus rules might not actually be enforcing them yet.
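The "enforce only where the data exists" rule above can be sketched as follows. The class and method names are hypothetical, not from any implementation:

```python
# Sketch of the simple solution above: an upgraded pruned node enforces an
# FDT only when it has median-feerate data for the whole window.  All
# names and structures here are hypothetical illustrations.

class FdtState:
    def __init__(self):
        self.median_feerates = {}  # height -> median feerate (sat/vB)

    def record_block(self, height, median_feerate):
        """Called for each new block after the node upgrades."""
        self.median_feerates[height] = median_feerate

    def can_evaluate(self, window_start, window_size):
        """True iff medians were stored for every block of the window."""
        return all(h in self.median_feerates
                   for h in range(window_start, window_start + window_size))

    def fdt_satisfied(self, window_start, window_size, block_count, feerate_bound):
        if not self.can_evaluate(window_start, window_size):
            return True  # data missing: don't enforce, per the simple solution
        high = sum(1 for h in range(window_start, window_start + window_size)
                   if self.median_feerates[h] > feerate_bound)
        # FDT satisfied when fewer than block_count blocks exceed the bound.
        return high < block_count
```

The `return True` branch is exactly the near-activation caveat: a window that predates the upgrade is silently treated as satisfied, so nominally-enforcing nodes may not actually be enforcing yet.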


Of course, if the above solution isn't satisfactory, upgraded pruned 
nodes could simply redownload old blocks or, with extensions to the P2P 
protocol, just the relevant parts of them (i.e., coinbase transactions 
or, with a soft fork, even just commitments made in coinbase 
transactions[1]).


-Dave

[1] An idea discussed for the segwit soft fork was requiring the witness 
merkle root OP_RETURN to be the final output of the coinbase transaction 
so that all chunks of the coinbase transaction before it could be 
"compressed" into a SHA midstate and then the midstate could be extended 
with the bytes of the OP_RETURN commitment to produce the coinbase 
transaction's txid, which could then be connected to the block header 
using the standard Bitcoin-style merkle inclusion proof.  This would 
allow trailing commitments in even a very large coinbase transaction to 
be communicated in just a few hundred bytes (not including the size of 
the commitments themselves).  This idea was left out of segwit because 
at least one contemporary model of ASIC miner had a hardware-enforced 
requirement to put a mining reward payout in the final output.
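The midstate trick in [1] can be sketched with Python's hashlib, whose `copy()` method models transmitting a saved inner-hash state. The byte strings are placeholders, and a real wire-format midstate would additionally require the prefix length to be a multiple of SHA-256's 64-byte block size:

```python
import hashlib

# Sketch of the midstate idea from [1]: hash the coinbase transaction's
# leading bytes once, then extend the saved state with only the trailing
# commitment to recover the txid.  Byte strings are placeholders.

coinbase_prefix = b"\x01" * 512             # bytes before the final OP_RETURN
                                            # (512 = multiple of SHA-256's
                                            # 64-byte block, so a wire-format
                                            # midstate would be well-defined)
commitment_output = b"\x6a" + b"\x02" * 40  # hypothetical OP_RETURN commitment

# Full computation: txid = SHA256(SHA256(prefix || commitment)).
full = hashlib.sha256(
    hashlib.sha256(coinbase_prefix + commitment_output).digest()).digest()

# Midstate computation: the prover sends only the inner-hash state after
# the prefix (modeled with .copy()) plus the short commitment bytes.
inner = hashlib.sha256(coinbase_prefix)
midstate = inner.copy()                     # the few-hundred-byte substitute
midstate.update(commitment_output)
from_midstate = hashlib.sha256(midstate.digest()).digest()

assert full == from_midstate
print(from_midstate.hex())
```

The verifier never needs the (possibly very large) prefix itself, only the fixed-size state and the commitment, which is where the "few hundred bytes" figure comes from.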



Re: [bitcoin-dev] Ordinal Inscription Size Limits

2023-12-29 Thread Greg Tonoski via bitcoin-dev

Unfortunately, as near as I can tell there is no sensible way to
prevent people from storing arbitrary data in witnesses ...


Preventing people "from storing arbitrary data in witnesses" is the extreme
case of the size limit discussed in this thread. Perhaps we should consider it
along with other (less radical) options in order not to lose perspective.


...without incentivizing even worse behavior and/or breaking
legitimate use cases.


I can't find evidence that would support that hypothesis. No "even worse
behavior and/or breaking of legitimate use cases" was observed before the
inception of witnesses. The measure would probably restore the incentive
structure of the past.

As a matter of fact, it is the current incentive structure that poses
the problem: it incentivizes worse behavior ("this sort of data is toxic
to the network") and breaks legitimate use cases like a simple transfer
of BTC.


If we ban "useless data" then it would be easy for would-be data
storers to instead embed their data inside "useful" data such as dummy
signatures or public keys.


There is a significant difference: storing data as dummy signatures
(or in OP_RETURN) is much more expensive than storing it in the (discounted)
witness. The witness would not have been chosen as the storage for arbitrary
data if it cost as much as the alternatives, e.g. OP_RETURN.

Also, banning "useless data" is not the only option suggested
by the author, who asked about imposing "a size limit similar to OP_RETURN".


But from a technical point of view, I don't see any principled way to
stop this.


Let's discuss ways that bring improvement, rather than the nonexistence of a
perfect technical solution that would have stopped "toxic data"/"crap on
the chain". There are at least a few:
- https://github.com/bitcoin/bitcoin/pull/28408
- https://github.com/bitcoin/bitcoin/issues/29146
- deprecate OP_IF opcode.

I feel the elephant in the room has now been brought up. Which do you want to
maintain, everybody: a Bitcoin without spam, or a can't-stop-crap alternative?


Re: [bitcoin-dev] Scaling Lightning Safely With Feerate-Dependent Timelocks

2023-12-29 Thread jlspc via bitcoin-dev
Hi Eric,

I agree that users can pay miners offchain and miners can create blocks where 
the difference between inputs and outputs exceeds the fees paid (by mining 
their own transactions). I model that behavior as dishonest mining. Onchain 
fees seem to reflect congestion for now, but it's true that FDTs rely on having 
a sufficient fraction of honest miners.

Regards,
John

Sent with [Proton Mail](https://proton.me/) secure email.

On Friday, December 22nd, 2023 at 8:09 PM, Eric Voskuil  
wrote:

> The fees paid to mine the set of transactions in a given block are known only 
> to the miner that produced the block. Assuming that tx inputs less outputs 
> represents an actual economic force is an error.
>
> e
>
>> On Dec 22, 2023, at 09:24, jlspc via bitcoin-dev 
>>  wrote:
>
>> 
>>
>> Hi Antoine,
>>
>> Thanks for your thoughtful response.
>>
>> Comments inline below:
>>
>>> Hi John,
>>
>>> While the idea of using sliding reaction window for blockchain congestion
>>> detection has been present in the "smart contract" space at large [0] and
>>> this has been discussed informally among Lightning devs and covenant
>>> designers few times [1] [2], this is the first and best formalization of
>>> sliding-time-locks in function of block fee rates for Bitcoin I'm aware
>>> off, to the best of my knowledge.
>>
>> Thanks!
>>
>>> Here my understanding of the feerate-dependent timelock proposal.
>>
>>> A transaction cannot be included in a block:
>>> - height-based or epoch-based absolute or relative timelocks are not
>>> satisfied according to current consensus rules (bip68 and bip 113 and
>>> implementation details)
>>> - less than `block_count` has a block median-feerate above the
>>> median-feerate of the `window_size` period
>>
>> It's a little bit different from that.
>> The transaction cannot be included in the blockchain until after an aligned 
>> window W of window_size blocks where:
>> 1) W starts no sooner than when the height-based or epoch-based absolute 
>> and/or relative timelocks have been satisfied, and
>> 2) W contains fewer than block_count blocks with median feerate greater than 
>> feerate_value_bound.
>>
>> Note that the aligned window cannot start until the absolute and/or relative 
>> timelocks have been satisfied and the transaction itself has to come after 
>> the aligned window.
>> However, once such an aligned window exists in the blockchain, the 
>> transaction can appear at any later time (and not just within a window that 
>> itself meets the block_count and feerate_value_bound limitations).
>>
>>> A median feerate is computed for each block.
>>> (This is unclear to me if this is the feerate for half of the block's
>>> weight or the median feerate with all weight units included in the
>>> block as the sample)
>>
>> A feerate F is the median feerate of a block B if F is the largest feerate 
>> such that the total size of the transactions in B with feerate greater than 
>> or equal to F is at least 2 million weight units (half of the maximum block 
>> weight).
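The median-feerate definition quoted above can be sketched directly. The threshold is taken here as 2,000,000 weight units, i.e. half of the 4M-WU block weight limit (the stated "2 million vbytes" cannot be literal, since a block holds at most 1M vbytes); the function name and the padding convention are illustrative assumptions:

```python
# Sketch of the quoted median-feerate definition: the largest feerate F
# such that transactions with feerate >= F total at least half the
# maximum block weight (assumed here: 2,000,000 weight units).

def block_median_feerate(txs, half_block_weight=2_000_000):
    """txs: list of (feerate, weight) pairs for one block.  Any unused
    block space would need to be represented as padding at feerate 0
    for the definition to behave as intended on under-full blocks."""
    cumulative = 0
    # Walk from the highest feerate down; the first feerate at which the
    # cumulative weight reaches half the block weight is the median.
    for feerate, weight in sorted(txs, reverse=True):
        cumulative += weight
        if cumulative >= half_block_weight:
            return feerate
    return 0  # block (plus padding) never reached the threshold
```

For example, a block whose top-feerate transactions already fill half its weight has that top feerate as its median.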
>>
>>> From then, you have 3 parameters included in the nSequence field.
>>> - feerate_value_bound
>>> - window_size
>>> - block_count
>>
>>> Those parameters can be selected by the transaction builder (and
>>> committed with a signature or hash chain-based covenant).
>>> As such, off-chain construction counterparties can select the
>>> feerate_value_bound at which their time-sensitive transaction
>>> confirmation will be delayed.
>>
>>> E.g let's say you have a LN-penalty Alice-Bob channel. Second-stage
>>> HTLC transactions are pre-signed with feerate_value_bound at 100 sat /
>>> vbytes.
>>> The window_size selected is 100 blocks and the block_count is 70 (this
>>> guarantees tampering-robustness of the feerate_value_bound in face of
>>> miners coalitions).
>>
>>> There is 1 BTC offered HTLC pending with expiration time T, from Alice to 
>>> Bob.
>>
>>> If at time T, the per-block median feerate of at least 70 blocks over
>>> the latest 100 block is above 100 sat / vbytes, any Alice's
>>> HTLC-timeout or Bob's HTLC-preimage cannot be included in the chain.
>>
>> The rules are actually:
>> 1) wait until time T, then
>> 2) wait until the start of a full aligned window W with 100 consecutive 
>> blocks that starts no earlier than T and that has fewer than 70 blocks with 
>> median feerate above 100 sats/vbyte.
>> (The values 100, 70, and 100 cannot actually be selected in the 
>> implementation in the paper, but that's a technical detail and could be 
>> changed if the FDT is specified in the annex, as you propose.)
>>
>>> From my understanding, Feerate-Dependent Timelocks effectively
>>> constitute the lineaments of a solution to the "Forced Expiration
>>> Spam" as described in the LN paper.
>>
>> Great!
>>
>>> I think you have few design caveats to be aware off:
>>> - for current LN-penalty, the revokeable scripts should be modified to
>>> ensure the CSV opcode inspect the enforcement of FDT's parameters, as
>>> those revokeable scripts are committed by all parties
>>
>> Yes, de

Re: [bitcoin-dev] Scaling Lightning Safely With Feerate-Dependent Timelocks

2023-12-29 Thread jlspc via bitcoin-dev
Hi Boris,

Responses inline below:


Sent with Proton Mail secure email.

On Friday, December 22nd, 2023 at 8:36 AM, Nagaev Boris  
wrote:


> Hi John!
> 
> I have two questions regarding the window, which are related.
> 
> 1. Why is the window aligned? IIUC, this means that the blocks mined
> since the latest block whose height is divisible by window_size do not
> affect a transaction's validity. So a recent change in fees is not
> reflected in whether a transaction can be confirmed.

FDTs are not based on the most recent window; instead, an FDT requires that 
there exist *some* aligned window between when the child transaction's absolute 
and relative timelocks were satisfied and the current block. The alignment 
requirement allows one to prove tighter security bounds over a given time 
period. For example, 2 consecutive aligned 64-block windows give dishonest 
miners 2 chances to create artificial aligned low-feerate windows, but 65 
chances to create such windows if alignment isn't required.
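The 2-versus-65 count above can be verified with a short sketch, assuming "aligned" means the window starts at a height divisible by window_size:

```python
# Count the candidate 64-block windows inside a 128-block span, with and
# without the alignment requirement (alignment assumed to mean the window
# start height is divisible by window_size).

def aligned_window_starts(first_height, last_height, window_size):
    """Start heights divisible by window_size whose window fits the span."""
    return [h for h in range(first_height, last_height - window_size + 2)
            if h % window_size == 0]

def sliding_window_starts(first_height, last_height, window_size):
    """Every start height whose window fits the span (no alignment)."""
    return list(range(first_height, last_height - window_size + 2))

print(len(aligned_window_starts(0, 127, 64)))  # 2 aligned windows
print(len(sliding_window_starts(0, 127, 64)))  # 65 sliding windows
```

Dishonest miners get one chance per candidate window, so requiring alignment shrinks their opportunities from 65 to 2 over the same period.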

> 
> 2. Does it make sense to have a global window_size? This would save
> space in FDT (= in transaction) and simplify verification, especially
> for a non-aligned window case (see 1). An array of integers of size
> window_size would be sufficient to answer whether there were at least x
> blocks among the window_size latest blocks with median fee rate <= y,
> using O(1) time per query.
> 

The ability to tune the window size allows for a trade-off between latency and 
security (also, see my response above about alignment). 
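Boris's array-based query above can be sketched as follows. This variant keeps the window's medians in sorted order and answers each query in O(log window_size) rather than O(1); the class name and structure are illustrative, not from any implementation:

```python
import bisect

# Sketch of the light-client query discussed above: among the latest
# window_size blocks, did at least x blocks have median feerate <= y?
# (An O(1)-per-query version would quantize feerates into a counting
# array; this sorted-list variant is O(log window_size) per query.)

class RecentMedians:
    def __init__(self, window_size):
        self.window_size = window_size
        self.ring = []      # medians in block order (oldest first)
        self.sorted = []    # the same medians, kept sorted

    def add_block(self, median_feerate):
        if len(self.ring) == self.window_size:
            oldest = self.ring.pop(0)  # evict the block leaving the window
            self.sorted.pop(bisect.bisect_left(self.sorted, oldest))
        self.ring.append(median_feerate)
        bisect.insort(self.sorted, median_feerate)

    def at_least(self, x, y):
        """True iff >= x tracked blocks have median feerate <= y."""
        return bisect.bisect_right(self.sorted, y) >= x
```

A light client would feed `add_block` the per-block medians it reads from coinbase commitments (per the OP_RETURN idea discussed in this thread) instead of downloading whole blocks.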

> Moving on to another topic, what are the implications for light
> clients? A light client can validate current timelocks without
> downloading whole blocks, because they depend on timestamps and block
> height only, the information from block headers. To validate a
> transaction with FDT or to choose FDT parameters for its own
> transaction, a light client would have to determine the median fee
> rate of the recent blocks. To do that without involving trust, it has
> to download the blocks. What do you think about including median
> feerate as a required OP_RETURN output in coinbase transaction? A
> block without it would be invalid (new consensus rule). A light client
> can rely on median feerate value from coinbase transaction,
> downloading only one tx instead of the whole block.

Yes, I think that's a great idea!

Regards,
John

> 
> On Fri, Dec 15, 2023 at 6:20 AM jlspc via bitcoin-dev
> bitcoin-dev@lists.linuxfoundation.org wrote:
> 
> > TL;DR
> > =
> > * All known Lightning channel and factory protocols are susceptible to 
> > forced expiration spam attacks, in which an attacker floods the blockchain 
> > with transactions in order to prevent honest users from putting their 
> > transactions onchain before timelocks expire.
> > * Feerate-Dependent Timelocks (FDTs) are timelocks that automatically 
> > extend when blockchain feerates spike.
> > - In the normal case, there's no spike in feerates and thus no tradeoff 
> > between capital efficiency and safety.
> > - If a dishonest user attempts a forced expiration spam attack, feerates 
> > increase and FDTs are extended, thus penalizing the attacker by keeping 
> > their capital timelocked for longer.
> > - FDTs are tunable and can be made to be highly resistant to attacks from 
> > dishonest miners.
> > * Of separate interest, an exact analysis of the risk of double spend 
> > attacks is presented that corrects an error in the original Bitcoin 
> > whitepaper.
> > 
> > Overview
> > 
> > 
> > Because the Lightning protocol relies on timelocks to establish the correct 
> > channel state, Lightning users could lose their funds if they're unable to 
> > put their transactions onchain quickly enough.
> > The original Lightning paper [1] states that "[f]orced expiration of many 
> > transactions may be the greatest systemic risk when using the Lightning 
> > Network" and it uses the term "forced expiration spam" for an attack in 
> > which a malicious party "creates many channels and forces them all to 
> > expire at once", thus allowing timelocked transactions to become valid.
> > That paper also says that the creation of a credible threat against 
> > "spamming the blockchain to encourage transactions to timeout" is 
> > "imperative" [1].
> > 
> > Channel factories that create multiple Lightning channels with a single 
> > onchain transaction [2][3][4][5] increase this risk in two ways.
> > First, factories allow more channels to be created, thus increasing the 
> > potential for many channels to require onchain transactions at the same 
> > time.
> > Second, channel factories themselves use timelocks, and thus are vulnerable 
> > to a "forced expiration spam" attack.
> > 
> > In fact, the timelocks in Lightning channels and factories are risky even 
> > without an attack from a malicious party.
> > Blockchain congestion is highly variable and new applications (such as 
> > ordinals) can cause a sudden spike in congestion at a

Re: [bitcoin-dev] Lamport scheme (not signature) to economize on L1

2023-12-29 Thread Yuri S VB via bitcoin-dev
Dear all,

Upon a few more days of work on my proposed protocol, I've found a way to 
eliminate the need for:
1) mining the conventional public key;
2) the user broadcasting 2 distinct payloads a few blocks apart.

Up to 66% footprint reduction.

I'll be publishing and submitting it as a BIP soon. Those who are interested 
are more than welcome to get in touch directly.

It's based on my proposed cryptosystem based on the conjectured hardness of 
factorization of polynomials in finite fields:
https://github.com/Yuri-SVB/FFM-cryptography/

YSVB

Sent with Proton Mail secure email.

On Saturday, December 23rd, 2023 at 1:26 AM, yuri...@pm.me  
wrote:


> Dear all,
> 

> Upon second thought, I concluded that I have to issue a correction to my last 
> correspondence. Where I wrote:
> 

> > For 2: a pre-image problem for a function
> > f2_(TX,ECCPUB): {l | l is 'a valid LAMPPRI'} -> {a | a is 'in the format of 
> > ADDRs'} X {LSIG}
> > 

> > (notice the nuance: {LSIG} means the singleton containing of only the 
> > specific LSIG that was actually public, not 'in the format of LSIGs').
> 

> 

> Please read
> 

> "For 2: a pre-image problem for a function
> f2_(TX,ECCPUB): {l | l is 'a valid LAMPPRI'} -> {a | a is 'in the format of 
> ADDRs'} X {s | s is 'in the format of LSIGs'}"
> 

> 

> instead, and completely disregard the nuance below, which is wrong. I 
> apologize for the mistake, and hope I have made myself clear. Thank you, 
> again for your interest, and I'll be back with formulas for entropy in both 
> cases soon!
> 

> YSVB
> 

> Sent with Proton Mail secure email.
> 

> 

> On Friday, December 22nd, 2023 at 4:32 PM, yuri...@pm.me yuri...@pm.me wrote:
> 

> 

> 

> > There are three possible cryptanalysis to LAMPPRI in my scheme:
> > 

> > 1. From ADDR alone before T0-1 (to be precise, publishing of (TX, SIG));
> > 2. From ADDR and (TX, SIG) after T0-1 (to be precise, publishing of (TX, 
> > SIG));
> > 3. Outmine the rest of mining community starting from a disadvantage of not 
> > less than (T1-T0-1) after T1 (to be precise, at time of publishing of 
> > LAMPRI);
> > 

> > ...which brings us back to my argument with Boris: There is something else 
> > we missed in our considerations, which you said yourself: ironically, 
> > LAMPPUB is never published.
> > 

> > We can have LAMPPUB be 1Mb or even 1Gb long, aiming at making the rate of 
> > collision in HL(.) negligible (note this is perfectly consistent with the 
> > proposition of a memory-hard hash, and would have the additional benefit of 
> > allowing a processing-storage trade-off). In this case, we really have:
> > 

> > For 1: a pre-image problem for a function
> > f1: {k| k is a valid ECCPRI} X {l | l is a valid LAMPPRI} -> {a | a is in 
> > the format of a ADDR}
> > 

> > having as domain the Cartesian product of set of possible ECCPRIs with set 
> > of possible LAMPPRIs and counter domain, the set of possible ADDRs.
> > 

> > For 2: a pre-image problem for a function
> > f2_(TX,ECCPUB): {l | l is 'a valid LAMPPRI'} -> {a | a is 'in the format of 
> > ADDRs'} X {LSIG}
> > 

> > (notice the nuance: {LSIG} means the singleton containing of only the 
> > specific LSIG that was actually public, not 'in the format of LSIGs').
> > 

> > Notice that, whatever advantage of 2 over 1 has to be compensated by the 
> > perspective of having the protocol be successfully terminated before the 
> > attack being carried out.
> > 

> > For 3: Equivalent to a double-spending attack with, in the worst case, not 
> > less than (T1-T0-1) blocks of advantage for the rest of the community.
> > 

> > When I have the time, I'll do the math on what is the entropy on each case, 
> > assuming ideal hashes, but taking for granted the approximation given by 
> > Boris, we would have half of size of ADDR as strength, not half of LAMPPRI, 
> > so mission accomplished!
> > 

> > Another ramification of that is we can conceive a multi-use version of this 
> > scheme, in which LAMPPRI is the ADDR resulting of a previous (ECCPUB, 
> > LAMPPUB) pair. The increased size of LAMPPRI would be compensated by one 
> > entire ADDR less in the blockchain. Namely, we'd have an extra marginal 
> > reduction of 12 bytes per use (possibly more, depending on how much more 
> > bytes we can economize given that added strength).
> > 

> > YSVB.
> > 

> > On Friday, December 22nd, 2023 at 5:52 AM, G. Andrew Stone 
> > g.andrew.st...@gmail.com wrote:
> > 

> > > Does this affect the security model WRT chain reorganizations? In the 
> > > classic doublespend attack, an attacker can only redirect UTXOs that they 
> > > spent. With this proposal, if I understand it correctly, an attacker 
> > > could redirect all funds that have "matured" (revealed the previous 
> > > preimage in the hash chain) to themselves. The # blocks to maturity in 
> > > your proposal becomes the classic "embargo period" and every coin spent 
> > > by anyone between the fork point and the maturity depth is available to 
> > > the attacker to double