[bitcoin-dev] BIP: output descriptors for PSBT

2023-12-17 Thread SeedHammer Team via bitcoin-dev
Hi,

This is a draft of a BIP that adds a PSBT_GLOBAL_OUTPUT_DESCRIPTOR field
for transferring output descriptors between wallets. The full text is 
reproduced below,
which is also hosted on GitHub:

https://github.com/seedhammer/bips/blob/master/bip-psbt-descriptors.mediawiki

An implementation is here: https://github.com/seedhammer/bip-psbt-descriptors

Live playground: https://go.dev/play/p/JrpWF0A9dxD


BIP: ?
Layer: Applications
Title: PSBT Encoded Output Descriptors
Author: SeedHammer 
License: BSD-2-Clause


==Introduction==

===Abstract===

A BIP 174 PSBT may contain an extended key for deriving input and output
addresses. This document proposes an additional field for PSBTs to represent
arbitrary BIP 380 output script descriptors.

To support transfer of output descriptors outside signing flows, the proposal
makes the unsigned transaction optional.

===Copyright===

This BIP is licensed under the BSD 2-clause license.

===Motivation===

A BIP 380 output descriptor is a textual representation of a set of output
scripts, such that Bitcoin wallets may agree on the scripts and addresses
to provide the user. However, a descriptor string by itself is not ideal for
transferring between wallets: it lacks a machine recognizable header and cannot
represent metadata such as name and birth block.

The PSBT encoding gives us a self-describing file format, support for metadata, as well as a
compact, binary representation of keys. Assuming most wallets already implement
the PSBT format, the additional implementation overhead of this extension is
expected to be low.

==Specification==

The new global type is defined as follows:

{|
! Name
! <tt>&lt;keytype&gt;</tt>
! <tt>&lt;keydata&gt;</tt>
! <tt>&lt;keydata&gt;</tt> Description
! <tt>&lt;valuedata&gt;</tt>
! <tt>&lt;valuedata&gt;</tt> Description
|-
| Output Descriptor
| <tt>PSBT_GLOBAL_OUTPUT_DESCRIPTOR = 0x07</tt>
| <tt>&lt;birth block&gt; &lt;name&gt;</tt>
| The earliest block height that may contain transactions for the descriptor, optionally followed by the UTF-8 encoded name of the descriptor.
| <tt>&lt;descriptor&gt;</tt>
| The output descriptor in BIP 380 format without inline keys.
|}

When PSBT_GLOBAL_OUTPUT_DESCRIPTOR is present, multiple PSBT_GLOBAL_XPUB entries are allowed.

When PSBT_GLOBAL_OUTPUT_DESCRIPTOR is present, the presence of PSBT_GLOBAL_UNSIGNED_TX is optional. Note that this is a relaxation of the PSBT specification.

All key expressions in the PSBT_GLOBAL_OUTPUT_DESCRIPTOR descriptor string must be
specified as references of the form @<index>, where <index> is
the 0-based index into the ordered list of PSBT_GLOBAL_XPUB entries. An index
out of range is invalid.
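As an illustration (not part of the specification), a decoder's reference check might look like the following Python sketch, where `num_xpubs` is the number of PSBT_GLOBAL_XPUB entries:

```python
import re

def validate_key_references(descriptor: str, num_xpubs: int) -> list[int]:
    """Collect every @<index> key reference in a descriptor template and
    reject indices outside the ordered PSBT_GLOBAL_XPUB list."""
    refs = [int(m) for m in re.findall(r"@(\d+)", descriptor)]
    for idx in refs:
        if idx >= num_xpubs:
            raise ValueError(f"key reference @{idx} out of range")
    return refs
```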

A PSBT_GLOBAL_OUTPUT_DESCRIPTOR with inline keys is invalid.

'''Why not allow inline keys?'''
Allowing inline keys risks incompatible implementations that omit parsing of referenced keys.

'''What about named pk(NAME) references?'''
Named references would allow Miniscript descriptors as-is in PSBT_GLOBAL_OUTPUT_DESCRIPTOR.
They are left out because they complicate decoders and can trivially be replaced by indexed
references. However, if key names are deemed desirable for display purposes, they could be
squeezed into the key data of PSBT_GLOBAL_XPUB entries.

Key references may be followed by derivation paths as specified in BIP 389.

==Test Vectors==

===Invalid Cases===

A descriptor with a key reference out of bounds.
Descriptor: wpkh(@0/*)
Hex encoded PSBT: 70736274ff0207000a77706b682840302f2a2900

A descriptor with an invalid UTF-8 name.
Hex encoded PSBT: 
70736274ff05070061c57a0a77706b682840302f2a2953010488b21e041c0ae90680025afed56d755c088320ec9bc6acd84d33737b580083759e0a0ff8f26e429e0b77028342f5f7773f6fab374e1c2d3ccdba26bc0933fc4f63828b662b4357e4cc3791bec0fbd814c5d87297488000800080028000

A descriptor with an inline key.
Hex encoded PSBT: 
70736274ff0207008e77706b68285b64633536373237362f3438682f30682f30682f32685d7870756236446959726652774e6e6a655834764873574d616a4a56464b726245456e753867415739764475517a675457457345484531367347576558585556314c42575145317943546d657072534e63715a3357373468715664674462745948557633654d3457325445556870616e2f2a2900

===Valid Cases===

A 2-of-3 multisig descriptor
Descriptor: wsh(sortedmulti(2,@0/<0;1>/*,@1/<0;1>/*,@2/<0;1>/*))
Name: Satoshi's Stash
Birth block: 123456789012345
Key 0: 
[dc567276/48h/0h/0h/2h]xpub6DiYrfRwNnjeX4vHsWMajJVFKrbEEnu8gAW9vDuQzgTWEsEHE16sGWeXXUV1LBWQE1yCTmeprSNcqZ3W74hqVdgDbtYHUv3eM4W2TEUhpan
Key 1: 
[f245ae38/48h/0h/0h/2h]xpub6DnT4E1fT8VxuAZW29avMjr5i99aYTHBp9d7fiLnpL5t4JEprQqPMbTw7k7rh5tZZ2F5g8PJpssqrZoebzBChaiJrmEvWwUTEMAbHsY39Ge
Key 2: 
[c5d87297/48h/0h/0h/2h]xpub6DjrnfAyuonMaboEb3ZQZzhQ2ZEgaKV2r64BFmqymZqJqviLTe1JzMr2X2RfQF892RH7MyYUbcy77R7pPu1P71xoj8cDUMNhAMGYzKR4noZ
Hex encoded PSBT: 

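To illustrate how a reader reconstructs the concrete descriptor from the valid test vector above, here is a minimal Python sketch (illustrative only, not normative) that substitutes each @<index> reference with the corresponding key expression:

```python
import re

def expand_descriptor(template: str, keys: list[str]) -> str:
    """Replace each @<index> key reference with the key expression at
    that index in the ordered PSBT_GLOBAL_XPUB list."""
    return re.sub(r"@(\d+)", lambda m: keys[int(m.group(1))], template)

keys = [
    "[dc567276/48h/0h/0h/2h]xpub6DiYrfRwNnjeX4vHsWMajJVFKrbEEnu8gAW9vDuQzgTWEsEHE16sGWeXXUV1LBWQE1yCTmeprSNcqZ3W74hqVdgDbtYHUv3eM4W2TEUhpan",
    "[f245ae38/48h/0h/0h/2h]xpub6DnT4E1fT8VxuAZW29avMjr5i99aYTHBp9d7fiLnpL5t4JEprQqPMbTw7k7rh5tZZ2F5g8PJpssqrZoebzBChaiJrmEvWwUTEMAbHsY39Ge",
    "[c5d87297/48h/0h/0h/2h]xpub6DjrnfAyuonMaboEb3ZQZzhQ2ZEgaKV2r64BFmqymZqJqviLTe1JzMr2X2RfQF892RH7MyYUbcy77R7pPu1P71xoj8cDUMNhAMGYzKR4noZ",
]
expanded = expand_descriptor("wsh(sortedmulti(2,@0/<0;1>/*,@1/<0;1>/*,@2/<0;1>/*))", keys)
```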
Re: [bitcoin-dev] Addressing the possibility of profitable fee manipulation attacks

2023-12-17 Thread Nagaev Boris via bitcoin-dev
On Sun, Dec 17, 2023 at 1:47 PM ArmchairCryptologist via bitcoin-dev
 wrote:
> Critically, this means that the higher the ratio of the hashrate is 
> participating, the lower the cost of the attack. If 100% of miners 
> participate with a ratio of transactions equal to their hashrate, the cost of 
> the attack is zero, since every participating miner will get back on average 
> 100% of the fees they contributed, and 0% of the fees will be lost to honest 
> miners (of which there are none).

It would not be an equilibrium, because each of them can increase his
profit by not participating. He can still collect fees from
fee-stuffing transactions of others and high fees from transactions of
normal users.

> Observe also that miners would not have to actively coordinate or share funds 
> in any way to participate. If a miner with 10% of the participating hashrate 
> contributes 10% of the fee-stuffing transactions, they would also get back on 
> average 10% of the total fees paid by transactions that are included in 
> blocks mined by participating miners, giving them 10% of the profits. As 
> such, each participating miner would simply have to watch the mempool to 
> verify that the other participating miners are still broadcasting their 
> agreed rate/ratio of transactions, the rest should average out over time.

He can pretend to have less hashrate and mine some blocks under the
table. For example, a miner who has 10% real hash rate could say to
other colluding miners that he only has 5%. Another 5% are secretly
allocated to a new pool. So his share of costs of fee-stuffing
transactions decreases, while he actually collects the same amount of
fees using both public and secret parts of his hash rate. Eventually
every rational participant of this collusion will do this and the
ratio of participating miners will decrease.
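The defection incentive above can be made concrete with a toy model (my own simplification, not from the thread): colluding miners split the fee-stuffing cost in proportion to their share of the participating hashrate, while every miner recovers fees in proportion to its share of the total hashrate.

```python
def stuffing_net_cost(h: float, participating: float, total_fees: float) -> float:
    """Toy model: a miner with total-hashrate share h contributes
    (h / participating) of the fee-stuffing fees, but only expects
    to mine back h of them."""
    paid = (h / participating) * total_fees   # this miner's contribution
    recovered = h * total_fees                # expected fees mined back
    return paid - recovered

# With 100% participation the attack is free, as the quoted email notes.
# A defector pays nothing yet still recovers h * total_fees, so
# defection strictly dominates participation, as Boris argues.
```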

-- 
Best regards,
Boris Nagaev
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Scaling Lightning Safely With Feerate-Dependent Timelocks

2023-12-17 Thread Antoine Riard via bitcoin-dev
Hi John,

While the idea of using a sliding reaction window for blockchain congestion
detection has been present in the "smart contract" space at large [0], and
has been discussed informally among Lightning devs and covenant
designers a few times [1] [2], this is the first and best formalization of
sliding timelocks as a function of block feerates for Bitcoin that I'm
aware of.

Here is my understanding of the feerate-dependent timelock proposal.

A transaction cannot be included in a block if:
- its height-based or epoch-based absolute or relative timelocks are not
satisfied according to current consensus rules (BIP 68 and BIP 113 and
implementation details), or
- fewer than `block_count` blocks have a median feerate above the
median feerate of the `window_size` period

A median feerate is computed for each block.
(It is unclear to me whether this is the feerate at half of the block's
weight, or the median feerate with all weight units included in the
block as the sample.)
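To make the ambiguity concrete, here is one plausible reading (an assumption on my part, not the proposal's definition): the weight-weighted median, i.e. the feerate of the weight unit sitting at half of the block's total transaction weight.

```python
def block_median_feerate(txs: list[tuple[float, int]]) -> float:
    """Weight-weighted median feerate of a block (one plausible
    definition, given the ambiguity noted above).
    txs: (feerate_sat_per_vb, weight) pairs."""
    txs = sorted(txs)
    total = sum(w for _, w in txs)
    acc = 0
    for feerate, w in txs:
        acc += w
        if 2 * acc >= total:          # crossed half the block's weight
            return feerate
```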

From there, you have 3 parameters included in the nSequence field:
- feerate_value_bound
- window_size
- block_count

Those parameters can be selected by the transaction builder (and
committed with a signature or hash chain-based covenant).
As such, off-chain construction counterparties can select the
feerate_value_bound at which their time-sensitive transaction
confirmation will be delayed.

E.g let's say you have a LN-penalty Alice-Bob channel. Second-stage
HTLC transactions are pre-signed with feerate_value_bound at 100 sat /
vbytes.
The window_size selected is 100 blocks and the block_count is 70 (this
guarantees tampering-robustness of the feerate_value_bound in face of
miners coalitions).

There is 1 BTC offered HTLC pending with expiration time T, from Alice to Bob.

If at time T, the per-block median feerate of at least 70 blocks over
the latest 100 block is above 100 sat / vbytes, any Alice's
HTLC-timeout or Bob's HTLC-preimage cannot be included in the chain.
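The example above can be sketched as a small check (my reading of the rule, not the formal spec): the transaction stays delayed while at least `block_count` of the last `window_size` blocks have a median feerate above `feerate_value_bound`.

```python
def fdt_allows_confirmation(median_feerates: list[float], window_size: int,
                            block_count: int, feerate_value_bound: float) -> bool:
    """Return True when the FDT no longer delays the transaction, i.e.
    fewer than block_count of the last window_size blocks have a
    median feerate above the bound."""
    window = median_feerates[-window_size:]
    high = sum(1 for f in window if f > feerate_value_bound)
    return high < block_count
```

With window_size=100, block_count=70 and a 100 sat/vB bound, a run of 70+ high-feerate blocks keeps Alice's HTLC-timeout and Bob's HTLC-preimage unconfirmable, matching the scenario above.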

From my understanding, Feerate-Dependent Timelocks effectively
constitute the lineaments of a solution to the "Forced Expiration
Spam" problem described in the LN paper.

I think you have a few design caveats to be aware of:
- for current LN-penalty, the revokeable scripts should be modified to
ensure the CSV opcode inspects the enforcement of FDT's parameters, as
those revokeable scripts are committed to by all parties
- there should be a delay period at the advantage of one party,
otherwise you still have a feerate race if the revocation bip68 timelock
has expired during the FDT delay

As such, I believe the FDT parameters should be enriched with another
parameter : `claim_grace_period`, a new type of relative timelock of
which the endpoint should be the `feerate_value_bound` itself.

I think it works in terms of consensus chain state and validation
resources, and it is reorg-safe, as all the parameters are
self-contained in the spent FDT-encumbered transaction itself.
If the per-block feerate fluctuates, the validity of the ulterior
FDT-locked transactions changes too, though this is already the case
with timelock-encumbered transactions.

(One corollary for Lightning: it sounds like all the channels carrying
an HTLC along a payment path should have the same FDT parameters to
avoid off-chain HTLC double-spends, a risk not clearly articulated in
the LN paper.)

Given the one more additional parameter `claim_grace_period`, I think
it would be wiser design to move all the FDT parameters in the bip341
annex.
There is more free bits room there and additionally you can have
different FDT parameters for each of your HTLC outputs in a single LN
transaction, if combined with future covenant mechanisms like HTLC
aggregation [3].
(The current annex design draft has been designed among others to
enable such "block-feerate-lock-point" [4] [5])

I cannot assert that the FDT proposal makes the timeout-tree protocol
more efficient than state-of-the-art channel factories and payment
pool constructions.
Still from my understanding, all those constructions are sharing
frailties in face of blockchain congestion and they would need
something like FDT.

I'm truly rejoicing at the idea that we now have the start of a
proposal solving one of the most imperative issues of Lightning and
other time-sensitive use-cases.
(Note, I've not reviewed the analysis and game-theory in the face of
miners collusion / coalition, as I think the introduction of a
`claim_grace_period` is modifying the fundamentals).

Best,
Antoine

[0] https://fc22.ifca.ai/preproceedings/119.pdf
[1] 
https://github.com/ariard/bitcoin-contracting-primitives-wg/blob/main/meetings/meetings-18-04.md
[2] 
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2023-November/022180.html
[3] 
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2023-October/022093.html
[4] https://github.com/bitcoin-inquisition/bitcoin/pull/9

[5] https://github.com/bitcoin/bips/pull/1381


On Fri, Dec 15, 2023 at 09:20, jlspc via bitcoin-dev <

Re: [bitcoin-dev] HTLC output aggregation as a mitigation for tx recycling, jamming, and on-chain efficiency (covenants)

2023-12-17 Thread Antoine Riard via bitcoin-dev
Hi Johan,

> Is this a concern though, if we assume there's no revoked state that
> can be broadcast (Eltoo)? Could you share an example of how this would
> be played out by an attacker?

Sure, let's assume no revoked state can be broadcast (Eltoo).

My understanding of the new covenant mechanism is the aggregation or
collapsing of all HTLC outputs into one or at least two outputs (offered /
received).
Any spend of an aggregated HTLC "payout" should satisfy the script
locking condition by presenting a preimage and a signature.
An offered aggregated HTLC output might collapse M HTLC
"payouts", where M is still limited by the max standard transaction
relay size, among other things.

The offered-to counterparty can claim any subset N of the aggregation
M by presenting the list of signatures and preimages (how they're
fed to the spent script is a covenant implementation detail).
However, there is no guarantee that the offered-to counterparty reveals
"all" the preimages she is awarded. Non-spent HTLC outputs are
clawed back to a remainder subset of M, M'.

I think this partial reveal of HTLC payout preimages still opens the
door to replacement cycling attacks.

Let's say you have 5 offered HTLC "payouts" between Alice and Bob
aggregated in a single output, 4 of value 0.1 BTC and 1 of value 1
BTC. All expire at timelock T.
At T, Alice broadcasts an aggregated HTLC-timeout spend for the 5 HTLCs
with a 0.01 BTC on-chain fee.

Bob can craft an HTLC-preimage spend of the single offered output
spending one of the 0.1 BTC HTLC payouts (and revealing its preimage) while
burning all the value as fee. This replaces Alice's honest
HTLC-timeout out of network mempools, as they're concurrent spends.
Bob can repeat this trick as long as there are HTLC "payouts" remaining
in the offered set, until he's able to do an off-chain
double-spend of the 1 BTC HTLC "payout".

The gain from stealing the 1 BTC HTLC "payout" covers what has been
burned as miner fees to replacement-cycle out Alice's sequence of honest
HTLC-timeouts.
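The attack's accounting can be sketched in a few lines (a toy model of the example above, not a protocol simulation): Bob burns the small HTLC payouts as replacement fees and double-spends the large payout off-chain.

```python
def cycling_attack_profit(burned_payouts: list[float], stolen_payout: float) -> float:
    """Toy accounting for the aggregated-HTLC replacement-cycling
    example: value double-spent off-chain minus value burned as
    replacement fees."""
    return stolen_payout - sum(burned_payouts)

# In the example: burn up to 4 x 0.1 BTC of payouts to steal the 1 BTC HTLC.
```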

And it should be noted that Bob might benefit from network mempools
congestion delaying the confirmation of his malicious low-value
high-fee HTLC-preimage transactions.

> I'm not sure what you mean here, could you elaborate?

See 
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2023-December/022191.html
and my answer there.
I think "self-sustained" fees are only one part of the solution, the
other part being the sliding delay of the HTLC timelock based on block
feerate.
Otherwise, a replacement cycling attacker can always benefit from
network mempools congestion spontaneously pushing out a malicious
cycling transaction out of block templates.

> That sounds possible, but how would you deal with the exponential
> blowup in the number of combinations?

In a taproot-world, "swallow the bullet" in terms of witness size
growth in case of non-cooperative closure.
I think this is where introducing an accumulator at the Script level
to efficiently test partial set membership would make sense.
Note, exponential blowup is an issue for mass non-coordinated
withdrawals of a payment pool too.

Best,
Antoine


On Mon, Dec 11, 2023 at 09:17, Johan Torås Halseth wrote:

> Hi, Antoine.
>
> > The attack works on legacy channels if the holder (or local) commitment
> transaction confirms first, the second-stage HTLC claim transaction is
> fully malleable by the counterparty.
>
> Yes, correct. Thanks for pointing that out!
>
> > I think one of the weaknesses of this approach is the level of
> malleability still left to the counterparty, where one might burn in miners
> fees all the HTLC accumulated value promised to the counterparty, and for
> which the preimages have been revealed off-chain.
>
> Is this a concern though, if we assume there's no revoked state that
> can be broadcast (Eltoo)? Could you share an example of how this would
> be played out by an attacker?
>
> > I wonder if a more safe approach, eliminating a lot of competing
> interests style of mempool games, wouldn't be to segregate HTLC claims in
> two separate outputs, with full replication of the HTLC lockscripts in both
> outputs, and let a covenant accepts or rejects aggregated claims with
> satisfying witness and chain state condition for time lock.
>
> I'm not sure what you mean here, could you elaborate?
>
> > I wonder if in a PTLC world, you can generate an aggregate curve point
> for all the sub combinations of scalar plausible. Unrevealed curve points
> in a taproot branch are cheap. It might claim an offered HTLC near-constant
> size too.
>
> That sounds possible, but how would you deal with the exponential
> blowup in the number of combinations?
>
> Cheers,
> Johan
>
>
> On Tue, Nov 21, 2023 at 3:39 AM Antoine Riard 
> wrote:
> >
> > Hi Johan,
> >
> > Few comments.
> >
> > ## Transaction recycling
> > The transaction recycling attack is made possible by the change made
> > to HTLC second level transactions for the anchor channel type[8];

Re: [bitcoin-dev] Addressing the possibility of profitable fee manipulation attacks

2023-12-17 Thread Peter Todd via bitcoin-dev
On Sun, Dec 17, 2023 at 11:11:10AM +, ArmchairCryptologist via bitcoin-dev 
wrote:
> ** A possible solution, with some caveats **
> 
> My proposed solution to this would be to add partial transaction fee burning. 
> If 50% or more of transaction fees were burned, this should effectively 
> curtail any incentive miners have for padding blocks with junk transactions 
> in general, as it would both significantly reduce the amount of spent fees 
> they would be able to recoup, and also reduce the amount of benefit they gain 
> from the transaction fees being high. The burn rate would however necessarily 
> have to be less than 100%, otherwise miners would not be incentivized to 
> include any transactions at all, and might as well be mining empty blocks.

Fee-burning solutions are easily bypassed with out-of-band fee payments.

If fee-burning was possible, I would have proposed it already as a way to
ensure there is always an incentive for miners to mine blocks. Unfortunately,
it does not work.

> Changing fee estimation algorithms across the board to not take historical 
> fee data into account, eliminating the long-term decaying fee effects 
> observed after short-term flooding of high-fee transactions, would of course 
> significantly help prevent such attacks from being profitable in the first 
> place without requiring any sort of fork. As such, I believe this should also 
> be done as a short-term makeshift solution. A less exploitable estimate could 
> be made by limiting the algorithms to only use the current mempool state and 
> influx rate, as well as possibly the estimated current blockrate and the 
> arrival times of recent blocks. Additionally, wallets could pre-sign a number 
> of replacement transactions spending the same UTXO(s) with gradually 
> increasing fees up to a maximum specified by the user, and automatically 
> broadcast them in order as the state of the mempool changed. And I'm sure 
> there are additional strategies that could be used here as well to make the 
> ecosystem more fee-optimal in general.

Yes, RBF needs to be used a lot more. CPFP is inefficient and wasteful, and
estimates are quite often wrong.
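The pre-signed replacement idea quoted above can be sketched as a simple fee ladder (a toy sketch of my own; a real wallet might step geometrically and respect BIP 125 replacement minimums):

```python
def fee_ladder(base_feerate: float, max_feerate: float, steps: int) -> list[float]:
    """Feerates (sat/vB) for a ladder of pre-signed RBF replacements,
    rising linearly from the base rate to a user-set maximum."""
    if steps < 2:
        return [max_feerate]
    inc = (max_feerate - base_feerate) / (steps - 1)
    return [round(base_feerate + i * inc, 2) for i in range(steps)]
```

The wallet would broadcast the next rung whenever mempool conditions push the estimated confirmation time past the user's target.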

> Unfortunately, as long as some parties still use historic fee data for their 
> fee estimation, the attack could still be effective up to a point. Payment 
> providers like BitPay for example currently specify that you need to use a 
> historically high fee for the initial transaction for it to be accepted, and 
> does not recognize replacement transactions that bump the fee.

BitPay is just a garbage payment processor. Possibly intentionally so, with the
aim of getting big block policies introduced.


BTW note how if mining pools are intentionally flooding the network to increase
fees, mining pools such as Ocean that filter out fee-paying transactions that
are part of the (possible) attack actually contribute to this problem by
reducing the cost of the attack.

-- 
https://petertodd.org 'peter'[:-1]@petertodd.org




[bitcoin-dev] Addressing the possibility of profitable fee manipulation attacks

2023-12-17 Thread ArmchairCryptologist via bitcoin-dev
** Motivation **

As everyone already knows, over the last seven months or so, the size of the 
mempool as well as transaction fees on the Bitcoin network have both been 
abnormally high. This trend has generally been ascribed to the introduction of 
ordinals, and while this may be both technically and actually true, 
disregarding the debate of whether ordinals is a "valid use" of the blockchain 
or not, the specific patterns we are seeing for some of these transactions have 
been making me somewhat suspicious that there could be other underlying 
motivations for this trend.

Crucially, as other people have noted, the dust UTXOs these ordinals 
transactions leave behind combined with the fact that consolidation 
transactions are being priced out due to persistent high fees is also causing 
the size of the UTXO set to blow up. As you can see on the chart below, on 
April 30 2023 there were roughly 90M UTXOs, while as of this writing roughly 7 
months later, there are more than 140M. Practically, this means that over the 
course of the last year, the chainstate as stored by Bitcoin Core has increased 
from ~5GB to ~9GB.

See https://www.blockchain.com/explorer/charts/utxo-count - the 3Y chart makes 
the sudden change in the rate of increase very obvious. More than twice the 
number of UTXOs has been added in the last six months than in the preceding two 
and a half years.

While it is certainly not constant, if you watch the fee rates and timing of 
when transactions are broadcast using a live view of the mempool like 
mempool.space, you can see that especially during periods of low mempool influx 
like early mornings on weekends, there tend to be large bursts (often several 
hundred kvB worth) of tiny ordinals/BRC-20 transactions with a single dust UTXO 
broadcast right after each block is found, with a fee set moderately higher 
than the current average of the top of the mempool, which makes it highly 
likely that this is done by a single actor. There may of course be legitimate 
explanations for these patterns, like that they are simply taking advantage of 
the lower fees, but the impression they leave me is that they seem deliberately 
timed and priced to pad blocks during such periods to prevent the mempool from 
draining, under the guise of "minting" BRC-20 tokens.

For example, in the two-minute span between blocks #820562 and #820563 from 
Sunday December 10th, over a thousand of these transactions were broadcast:

https://mempool.space/block/00015d5065ea2ade8bfd0bb9483835c907e34dd969854345
https://mempool.space/block/1ae267367ade834627df7b119a2710091b3f5d8c1a88

Most of these transactions have been in the 30-60 sat/vB range, with occasional 
periods of increasingly higher-fee transactions going higher. The morning of 
Saturday December 16th is a good example of the latter, where there was an ~8 
hour flood where the fees were pushed all the way up to 700 sat/vB. These are 
particularly suspicious, seeing as it would not make much sense to "take 
advantage of lower fees" by flooding transactions with fees increasingly and 
systematically set this much higher than the best next-block fee at this time 
of the week. There are many blocks during this period with noticeable large 
clusters of these high-fee BRC-20 transactions - for example, see #821428 and 
#821485:

https://mempool.space/block/00011dd74372ff2d5fdec5e7431340a160b0304f3f145e82
https://mempool.space/block/653a2389a42549943c859e414f451f86944ac60b411b

You would think that if someone were in fact making a large volume of 
transactions specifically to inflate the transaction fees, they would 
eventually run out of funds to do so. In other words, considering how long this 
trend has been going on, they would either need to have exceptionally deep 
pockets, or directly profit from the transaction fees being high. This line of 
thought led me to consider the possibility that such patterns could be 
indicative of ongoing fee manipulation by either a large miner or a consortium 
of miners, and whether such manipulation could be practically profitable, even 
with a minority hashrate. While miners have always had the ability to pad their 
own blocks with junk transactions, it seems to be generally assumed that at the 
very least there would be an opportunity cost of doing so, and that it would 
therefore be unprofitable. The general ability to pad all blocks with 
junk transactions would of course both be much more severe and much less 
obvious. So if this were the case, I believe it would break a fundamental 
assumption to the design of Bitcoin - seeing as transaction fees are central to 
prevent DoS attacks on the blockchain, if such an attack could be done in a way 
where both the base costs and opportunity costs are fully (or more than) 
recouped, we have a problem.

Just to be clear - I'm not saying with any certainty that such an attack is 
currently ongoing, 

Re: [bitcoin-dev] Altruistic Rebroadcasting - A Partial Replacement Cycling Mitigation

2023-12-17 Thread Peter Todd via bitcoin-dev
On Fri, Dec 15, 2023 at 10:29:22PM +, Antoine Riard wrote:
> Hi Peter,
> 
> > Altruistic third parties can partially mitigate replacement cycling(1)
> attacks
> > by simply rebroadcasting the replaced transactions once the replacement
> cycle
> > completes. Since the replaced transaction is in fact fully valid, and the
> > "cost" of broadcasting it has been paid by the replacement transactions,
> it can
> > be rebroadcast by anyone at all, and will propagate in a similar way to
> when it
> > was initially propagated. Actually implementing this simply requires code
> to be
> > written to keep track of all replaced transactions, and detect
> opportunities to
> > rebroadcast transactions that have since become valid again. Since any
> > interested third party can do this, the memory/disk space requirements of
> > keeping track of these replacements aren't important; normal nodes can
> continue
> > to operate exactly as they have before.
> 
> I think there is an alternative to altruistic and periodic rebroadcasting
> still solving replacement cycling at the transaction-relay level, namely
> introducing a local replacement cache.
> 
> https://github.com/bitcoin/bitcoin/issues/28699
> 
> One would keep in bounded memory a list of all seen transactions, which
> have entered our mempool at least once at current mempool min fee.

Obviously a local replacement cache is a possibility. The whole point of my
email is to show that a local replacement cache isn't necessarily needed, as
any altruistic party can implement that cache for all nodes.

> For the full-nodes which cannot afford extensive storage in face of
> medium-liquidity capable attackers, could imagine replacement cache nodes
> entering into periodic altruistic rebroadcast. This would introduce a
> tiered hierarchy of full-nodes participating in transaction-relay. I think
> such topology would be more frail in face of any sybil attack or scarce
> inbound slots connections malicious squatting.
> 
> The altruistic rebroadcasting default rate could be also a source of
> amplification attacks, where there is a discrepancy between the feerate of
> the rebroadcast traffic and the current dynamic mempool min fee of the
> majority of network mempools. As such wasting bandwidth for everyone.

1) That is trivially avoided by only broadcasting txs that meet the local
mempool min fee, plus some buffer. There's no point to broadcasting txs that
aren't going to get mined any time soon.

2) BIP-133 feefilter avoids this as well, as Bitcoin Core uses the feefilter
P2P message to tell peers not to send transactions below a threshold fee rate.

https://github.com/bitcoin/bips/blob/master/bip-0133.mediawiki
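Points 1) and 2) above can be combined in a small sketch (the `(txid, feerate)` shape is hypothetical, not Bitcoin Core's data model): rebroadcast a replaced transaction only if its feerate clears the local mempool minimum, or a peer's announced BIP 133 feefilter, plus a buffer.

```python
def select_rebroadcasts(replaced: list[tuple[str, float]],
                        mempool_min_feerate: float,
                        buffer: float = 1.0) -> list[str]:
    """Pick replaced transactions worth rebroadcasting: those whose
    feerate (sat/vB) clears the local minimum plus a safety buffer,
    so we never relay txs that won't be mined soon."""
    threshold = mempool_min_feerate + buffer
    return [txid for txid, feerate in replaced if feerate >= threshold]
```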

> Assuming some medium-liquidity or high-liquidity attackers might reveal any
> mitigation as insufficient, as an unbounded number of replacement
> transactions might be issued from a very limited number of UTXOs, all
> concurrent spends. In the context of multi-party time-sensitive protocol,
> the highest feerate spend of an "honest" counterparty might fall under the
> lowest concurrent replacement spend of a malicious counterparty, occupying
> all the additional replacement cache storage.

Did you actually read my email? I worked out the budget required in a
reasonable worst case scenario:

> > Suppose the DoS attacker has a budget equal to 50% of the total block
> > reward.
> > That means they can spend 3.125 BTC / 10 minutes, or 520,833sats/s.
> >
> > 520,833 sats/s
> > -- = 2,083,332 bytes / s
> > 0.25 sats/byte
> >
> > Even in this absurd case, storing a one day worth of replacements would
> > require
> > just 172GB of storage. 256GB of RAM costs well under $1000 these days,
> > making
> > altruistic rebroadcasting a service that could be provided to the network
> > for
> > just a few thousand dollars worth of hardware even in this absurd case.

Here, we're talking about an attacker with financial resources high enough
to possibly do 51% attacks. And even in that scenario, storing the entire
replacement database in RAM costs under $1000.

The reality is such an attack would probably be limited by P2P bandwidth, not
financial resources, as 2MB/s of tx traffic would likely leave mempools in a
somewhat inconsistent state across the network due to bandwidth limitations.
And that is *regardless* of whether or not anyone implements altruistic tx
rebroadcasting.

-- 
https://petertodd.org 'peter'[:-1]@petertodd.org

