Re: [bitcoin-dev] V3 Transactions are still vulnerable to significant tx pinning griefing attacks

2024-01-02 Thread Gloria Zhao via bitcoin-dev
Hi Peter,

> You make a good point that the commitment transaction also needs to be
> included in my calculations. But you are incorrect about the size of them.

> With taproot and ephemeral anchors, a typical commitment transaction would
> have a single-sig input (musig), two taproot outputs, and an ephemeral
> anchor output. Such a transaction is only 162vB, much less than 1000vB.

Note that these scenarios are much less interesting for commitment
transactions with no HTLC outputs, so 162 isn't what I would use for the
minimum.

Looking at expected weights in bolt 3 (
https://github.com/lightning/bolts/blob/master/03-transactions.md#expected-weight-of-the-commitment-transaction)
with 1 HTLC and anchors, we get (900 + 172 * num-htlc-outputs + 224) / 4 =
(900 + 172 + 224) / 4 = 324vB.
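A quick sketch of that arithmetic (the constants are the bolt-3 expected weights cited above; the function name is mine):

```python
# Sketch of the bolt-3 arithmetic above; constants are the expected
# weights quoted from the spec (commitment base + per-HTLC + anchors).
def commitment_vbytes(num_htlc_outputs: int) -> float:
    weight = 900 + 172 * num_htlc_outputs + 224
    return weight / 4

print(commitment_vbytes(1))  # 324.0
```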

So, I apologize for not using a more accurate minimum, though I think this
helps illustrate the 100x reduction of v3 a lot better.
While I think the true minimum is higher, let's go ahead and use your
number N=162vB.
- Alice is happy to pay 162sat/vB * (162 + 152vB) = 50,868sat
- In a v3 world, Mallory can make the cost to replace 80sat/vB * (1000vB)
  + 152 = 80,152sat
- Mallory succeeds, forcing Alice to pay 80,152 - 50,868 = *29,284sat* more
- In a non-v3 world, Mallory can make the cost to replace 80sat/vB *
  (100,000vB) + 152 = 8,000,152sat
- Mallory succeeds, forcing Alice to pay 8,000,152 - 50,868 =
  *7,949,284sat* more (maxed out by the HTLC amount)
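The comparison above can be reproduced numerically. This is a sketch using the thread's numbers; variable names are illustrative, and the cost model is the simplified Rule #3 + Rule #4 accounting used in this discussion:

```python
# Sketch of the N = 162vB comparison above; all numbers are from the
# thread, names are illustrative.
NEXT_BLOCK_FEERATE = 162  # sat/vB, what Alice is willing to pay
PIN_FEERATE = 80          # sat/vB, just below the next-block range
CHILD_VB = 152            # Alice's CPFP child
COMMITMENT_VB = 162       # Peter's minimum commitment size

alice_budget = NEXT_BLOCK_FEERATE * (COMMITMENT_VB + CHILD_VB)

def cost_to_replace(attacker_vb: int) -> int:
    # Rule #3 (absolute fees of the evicted descendants) plus Rule #4
    # (incremental relay fee on Alice's 152vB child at 1 sat/vB).
    return PIN_FEERATE * attacker_vb + CHILD_VB

v3_extra = cost_to_replace(1_000) - alice_budget        # 29,284 sats
non_v3_extra = cost_to_replace(100_000) - alice_budget  # 7,949,284 sats
print(alice_budget, v3_extra, non_v3_extra)
```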

As framed above, what we've done here is quantify the severity of the
pinning damage in the v3 and non-v3 world by calculating the additional
fees Mallory can force Alice to pay using Rule 3. To summarize this
discussion, at the lower end of possible commitment transaction sizes,
pinning is possible but is restricted by 100x, as claimed.

Best,
Gloria

On Wed, Dec 20, 2023 at 9:11 PM Peter Todd  wrote:

> On Wed, Dec 20, 2023 at 03:16:25PM -0500, Greg Sanders wrote:
> > Hi Peter,
> >
> > Thanks for taking the time to understand the proposal and give thoughtful
> > feedback.
> >
> > With this kind of "static" approach I think there are fundamental
> > limitations because the user has to commit "up front" how large the CPFP
> > later will have to be. 1kvB is an arbitrary value that is two orders of
> > magnitude less than the possible package size, and allows fairly
> > flexible amounts of inputs (~14 taproot inputs IIRC?) to effectuate a
> > CPFP.
>
> Why would you need so many inputs to do a CPFP if they all have to be
> confirmed? The purpose of doing a CPFP is to pay fees to get another
> transaction mined. Unless you're in some degenerate, unusual, situation
> where
> you've somehow ended up with just some dust left in your wallet, dust that
> is
> barely worth its own fees to spend, one or maybe two UTXOs are going to be
> sufficient for a fee payment.
>
> I had incorrectly thought that V3 transactions allowed for a single
> up-to-1000vB transaction to pay for multiple parents at once. But if you
> can't do that, due to the restriction on unconfirmed inputs, I can't see
> any reason to have such a large limit.
>
> > I'd like something much more flexible, but we're barely at whiteboard
> stage
> > for alternatives and
> > they probably require more fundamental work. So within these limits, we
> > have to pick some number,
> > and it'll have tradeoffs.
> >
> > When I think of "pinning potential", I consider not only the parent
> > size, and not only the maximum child size, but also the "honest" child
> > size. If the honest user does relatively poor utxo management, or the
> > commitment transaction is of very high value (e.g., lots of high value
> > HTLCs), the pin is essentially zero. If the honest user only ever has
> > one utxo, then the "max pin" is indeed effective.
>
> Which is the situation you would expect in the vast majority of cases.
>
> > > Alice would have had to pay a 2.6x higher fee than expected.
> >
> > I think that's an acceptable worst case starting point, versus the
> > status quo which is ~500-1000x+.
>
> No, the status quo is signed anchors, like Lightning already has with
> anchor channels. Those anchors could still be zero-valued. But as long as
> there is a signature associated with them, pinning isn't a problem, as
> only the intended party can spend them.
>
> Note BTW that existing Lightning anchor channels inefficiently use two
> anchor outputs when just one is sufficient:
>
>
> https://lists.linuxfoundation.org/pipermail/lightning-dev/2023-December/004246.html
> [Lightning-dev] The remote anchor of anchor channels is redundant
> Peter Todd, Dec 13th, 2023
>
> --
> https://petertodd.org 'peter'[:-1]@petertodd.org
>
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] V3 Transactions are still vulnerable to significant tx pinning griefing attacks

2023-12-21 Thread Gloria Zhao via bitcoin-dev
Hi Peter,

Thanks for spending time thinking about RBF pinning and v3.

> Enter Mallory. His goal is to grief Alice by forcing her to spend more
> money than she intended...
> ...Thus the total fee of Mallory's package would have been 6.6 * 1/2.5 =
> 2.6x more than Alice's total fee, and to get her transaction mined prior
> to her deadline, Alice would have had to pay a 2.6x higher fee than
> expected.

I think this is a good understanding of the goal of Rule #3, but I'm not
sure how you're getting these numbers without specifying the size and fees
of the commitment transaction. We should also quantify the severity of the
"damage" of this pin in a meaningful way; the issue of "Alice may need to
pay to replace descendant(s) she isn't aware of" is just a property of
allowing unconfirmed descendants.

Let's use some concrete numbers with your example. As you describe, we need
80-162sat/vB to get into the next block, and Alice can fund a CPFP with a
152vB CPFP. Let's say the commitment transaction has size N, and pays 0
fees.

The lower number of 80sat/vB describes what Mallory needs to shoot below in
order to "pay nothing" for the attack (i.e. otherwise it's a CPFP and gets
the tx confirmed). Mallory can maximize the cost of replacement according
to Rule #3 by keeping a low feerate while maximizing the size of the tx.

The higher number of 162sat/vB describes the reasonable upper bound of what
Alice should pay to get the transactions confirmed. As in: If Alice pays
exactly 162sat/vB * (N + 152vB) satoshis to get her tx confirmed, nothing
went wrong. She hopes to not pay more than that, but she'll keep
broadcasting higher bumps until it confirms.

The "damage" of the pin can be quantified by the extra fees Alice has to pay.

For a v3 transaction, Mallory can attach 1000vB at 80sat/vB. This can
increase the cost of replacement to 80,000sat.
For a non-v3 transaction, Mallory can attach (101KvB - N) before maxing out
the descendant limit.
Rule #4 is pretty negligible here, but since you've already specified
Alice's child as 152vB, she'll need to pay Rule #3 + 152sats for a
replacement.

Let's say N is 1000vB. AFAIK commitment transactions aren't usually smaller
than this:
- Alice is happy to pay 162sat/vB * (1000 + 152vB) = 186,624sat
- In a v3 world, Mallory can make the cost to replace 80sat/vB * (1000vB)
  + 152 = 80,152sat
- Mallory doesn't succeed, Alice's CPFP easily pays for the replacement
- In a non-v3 world, Mallory can make the cost to replace 80sat/vB *
  (100,000vB) + 152 = 8,000,152sat
- Mallory does succeed, Alice would need to pay ~7 million sats extra

Let's say N is 10,000vB:
- Alice is happy to pay 162sat/vB * (10,000 + 152vB) = 1,644,624sat
- In a v3 world, Mallory can make the cost to replace 80sat/vB * (1000vB)
  + 152 = 80,152sat
- Mallory doesn't succeed, Alice's CPFP easily pays for the replacement
- In a non-v3 world, Mallory can make the cost to replace 80sat/vB *
  (91,000vB) + 152 = 7,280,152sat
- Mallory does succeed, Alice would need to pay ~5 million sats extra

Let's say N is 50,000vB:
- Alice is happy to pay 162sat/vB * (50,000 + 152vB) = 8,124,624sat
- In a v3 world, Mallory can make the cost to replace 80sat/vB * (1000vB)
  + 152 = 80,152sat
- Mallory doesn't succeed, Alice's CPFP easily pays for the replacement
- In a non-v3 world, Mallory can make the cost to replace 80sat/vB *
  (51,000vB) + 152 = 4,080,152sat
- Mallory doesn't succeed, Alice's CPFP easily pays for the replacement
- The key idea here is that there isn't much room for descendants and
  the cost to CPFP is very high
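The three scenarios above all follow from one formula, sketched here with the thread's numbers (function and variable names are illustrative):

```python
# Sketch reproducing the scenarios above; all numbers come from the thread.
NEXT_BLOCK_FEERATE = 162    # sat/vB, what Alice is willing to pay
PIN_FEERATE = 80            # sat/vB, just below the next-block range
CHILD_VB = 152              # Alice's CPFP child
PACKAGE_LIMIT_VB = 101_000  # default descendant size limit

def pin_extra_cost(commitment_vb: int, v3: bool) -> int:
    """Extra sats Mallory can force Alice to pay; 0 if the pin fails."""
    alice_budget = NEXT_BLOCK_FEERATE * (commitment_vb + CHILD_VB)
    attacker_vb = 1_000 if v3 else PACKAGE_LIMIT_VB - commitment_vb
    cost_to_replace = PIN_FEERATE * attacker_vb + CHILD_VB
    return max(0, cost_to_replace - alice_budget)

for n in (1_000, 10_000, 50_000):
    print(n, pin_extra_cost(n, True), pin_extra_cost(n, False))
```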

These numbers change if you tweak more variables - the fees paid by the
commitment tx, the feerate range, etc. But the point here is to reduce the
potential "damage" by 100x by restricting the allowed child size.

> If V3 children are restricted to, say, 200vB, the attack is much less
> effective as the ratio of Alice vs Mallory size is so small.

This is exactly the idea; note that we've come from 100KvB to 1000vB.

> Mallory can improve the efficiency of his griefing attack by attacking
> multiple targets at once. Assuming Mallory uses 1 taproot input and 1
> taproot output for his own funds, he can spend 21 ephemeral anchors in a
> single 1000vB transaction.

Note that v3 does not allow more than 1 unconfirmed parent per tx.

Let me know if I've made an arithmetic error, but hopefully the general
idea is clear.

Best,
Gloria

On Wed, Dec 20, 2023 at 5:16 PM Peter Todd via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> V3 transactions (1) is a set of transaction relay policies intended to
> aid L2/contracting protocols, namely Lightning. The main aim of V3
> transactions is to solve Rule 3 transaction pinning (2), allowing the use
> of ephemeral anchors (3) that do not contain a signature check; anchor
> outputs that _do_ contain a signature check are not vulnerable to pinning
> attacks, as only the intended party is able to spend them while
> unconfirmed.
>
> The main way that 

Re: [bitcoin-dev] Package Relay Proposal

2022-11-01 Thread Gloria Zhao via bitcoin-dev
rns of package-feerate downgrades attacks.
> E.g, in the LN context, where your channel counterparty is aiming to jam
> the propagation of the best-feerate version of the package.
>
> Let's say you have :
> - Alice's commitment_tx, at 1s/vB
> - package A + child B, at 3s/vB
> - package A + child C, at 10s/vB
> - block inclusion feerate at 10s/vB
> - Alice and Mallory are LN channel counterparties
> - commitment_tx is using LN's anchor outputs
>
> Alice's LN node broadcasts A+C to her mempool.
> Bob's feefilter is at 3s/vB.
> Mallory broadcasts her child B in Alice's mempool.
> LN commitment does not meet Bob's feefilter.
> Package A+child B at 3s/vB meets Bob's feefilter and is announced to Bob.
> Mallory broadcasts her own commitment_tx at 4s/vB in Bob's mempool.
> When Alice's child C is relayed to Bob, it's bounced off Bob's mempool.
>
> Do you think this situation is plausible? Of course, it might be heavily
> dependent on package-relay yet-not-implemented internal p2p logic.
> I think it could be fixable if LN removes the counterparty's
> `anchor_output` on the local node's version of the commitment transaction,
> once package relay is deployed.
>
> Another question, at the next fee-bump iteration, Alice rebroadcasts
> A+child D, at 12 s/vB. Her node has already marked Alice's commitment_tx as
> known in Bob's `m_tx_inventory_known_filter`. So when a new higher fee
> child is  discovered, should a `child-with-unconfirmed-parents` be
> announced between Alice and Bob ?
>
> Anyway, I think it would be interesting to pseudo-specify the
> package-assemblage algorithm (or if there is code already available) to see
> if it's robust against adversarial or unlucky situations?
>
> > In fact, a package
> > of transactions may be announced using both Erlay and package relay.
> > After reconciliation, if the initiator would have announced a
> > transaction by wtxid but also has package information for it, they may
> > send "inv(MSG_PCKG)" instead of "inv(WTX)".
>
> Yes, I think this holds. Note, we might have to add to the reconciliation
> set low-fee parents succeeding the feefilter check due to a child. When we
> compute the reconciliation diff, we might have to bifurcate again on
> feefilter to decide to announce missing wtxids either as `inv(MSG_PCKG)`
> or `inv(WTX)`.
>
> (IIRC, I've already given a few of these feedbacks offline, though it's
> good to get them in the public space and think more.)
>
> Antoine
>
> Le mar. 17 mai 2022 à 12:09, Gloria Zhao via bitcoin-dev <
> bitcoin-dev@lists.linuxfoundation.org> a écrit :
>
>> Hi everybody,
>>
>> I’m writing to propose a set of p2p protocol changes to enable package
>> relay, soliciting feedback on the design and approach. Here is a link
>> to the most up-to-date proposal:
>>
>> https://github.com/bitcoin/bips/pull/1324
>>
>> If you have concept or approach feedback, *please respond on the
>> mailing list* to allow everybody to view and participate in the
>> discussion. If you find a typo or inaccurate wording, please feel free
>> to leave suggestions on the PR.
>>
>> I’m also working on an implementation for Bitcoin Core.
>>
>>
>> The rest of this post will include the same contents as the proposal,
>> with a bit of reordering and additional context. If you are not 100%
>> up-to-date on package relay and find the proposal hard to follow, I
>> hope you find this format more informative and persuasive.
>>
>>
>> ==Background and Motivation==
>>
>> Users may create and broadcast transactions that depend upon, i.e.
>> spend outputs of, unconfirmed transactions. A “package” is the
>> widely-used term for a group of transactions representable by a
>> connected Directed Acyclic Graph (where a directed edge exists between
>> a transaction that spends the output of another transaction).
>>
>> Incentive-compatible mempool and miner policies help create a fair,
>> fee-based market for block space. While miners maximize transaction
>> fees in order to earn higher block rewards, non-mining users
>> participating in transaction relay reap many benefits from employing
>> policies that result in a mempool with the same contents, including
>> faster compact block relay and more accurate fee estimation.
>> Additionally, users may take advantage of mempool and miner policy to
>> bump the priority of their transactions by attaching high-fee
>> descendants (Child Pays for Parent or CPFP).  Only considering
>> transactions one at a time for submission to the mempool creates a
>> limitation in the node's ability to determine which transactions have
>> the highest feerate

Re: [bitcoin-dev] On mempool policy consistency

2022-10-27 Thread Gloria Zhao via bitcoin-dev
Hi AJ,

Not going to comment on what Bitcoin Core's philosophy on mempool policy is
or should be. I want to note that I think this:

> It's also possible that this is something of a one time thing: full rbf
> has been controversial for ages, but widely liked by devs, and other
> attempts (eg making it available in knots) haven't actually achieved
> much of a result in practice. So maybe this is just a special case

is true.

> The second thing is that whatever your relay policy is, you still
> need a path all the way to miners through nodes that will accept your
> transaction at every step. If you're making your mempool more restrictive
> (eg -permitbaremultisig=0, -datacarrier=0), that's easy for you (though
> you're making life more difficult for people who do create those sorts
> of txs); but if you want a more permissive policy (package relay,
> version-3-rbf, full-rbf), you might need to do some work.

> The cutoff for that is probably something like "do 30% of listening
> nodes have a compatible policy"? If they do, then you'll have about a
> 95% chance of having at least one of your outbound peers accept your tx,
> just by random chance.
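The quoted back-of-the-envelope figure can be checked directly, assuming 8 outbound peers chosen independently at random (an assumption; real peer selection is not uniform):

```python
# Sanity check of "30% of listening nodes -> ~95% chance" quoted above.
# Assumes 8 outbound peers chosen independently at random (an assumption).
def p_at_least_one_compatible(share: float, outbound: int = 8) -> float:
    # Probability that at least one outbound peer runs a compatible policy.
    return 1 - (1 - share) ** outbound

print(round(p_at_least_one_compatible(0.30), 3))  # 0.942
```

This comes out to roughly 94%, in line with the "about a 95% chance" quoted.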

Yes, in most cases, whether Bitcoin Core is restricting or loosening
policy, the user in question is fine as long as they have a path from their
node to a miner that will accept it. This is the case for something like
-datacarriersize if the use case is putting stuff into OP_RETURN outputs,
or if they're LN and using CPFP carveout, v3, package relay, etc. But
replacement is not only a question of "will my transaction propagate" but
also, "will someone else's transaction propagate, invalidating mine" or, in
other words, "can I prevent someone else's transaction from propagating." A
zeroconf user relies on there *not* being a path from someone else's full
RBF node to a full RBF miner. This is why I think RBF is so controversial
in general, why -mempoolfullrbf on someone else's node is considered more
significant than another policy option, and why full RBF shouldn't be
compared with something like datacarriersize. I don't think past patterns
can be easily applied here, and I don't think this necessarily shows a
different "direction" in thinking about mempool policy in general.

Best,
Gloria

On Thu, Oct 27, 2022 at 12:52 AM Anthony Towns via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Hi *,
>
> TLDR: Yes, this post is too long, and there's no TLDR. If it's any
> consolation, it took longer to write than it does to read?
>
> On Tue, Jun 15, 2021 at 12:55:14PM -0400, Antoine Riard via bitcoin-dev
> wrote:
> > Subject: Re: [bitcoin-dev] Proposal: Full-RBF in Bitcoin Core 24.0
> > I'm writing to propose deprecation of opt-in RBF in favor of full-RBF
>
> > If there is ecosystem agreement on switching to full-RBF, but 0.24 sounds
> > too early, let's defer it to 0.25 or 0.26. I don't think Core has a
> > consistent deprecation process w.r.t to policy rules heavily relied-on by
> > Bitcoin users, if we do so let sets a precedent satisfying as many folks
> as
> > we can.
>
> One precedent that seems to be being set here, which to me seems fairly
> novel for bitcoin core, is that we're about to start supporting and
> encouraging nodes to have meaningfully different mempool policies. From
> what I've seen, the baseline expectation has always been that while
> certainly mempools can and will differ, policies will be largely the same:
>
>   Firstly, there is no "the mempool". There is no global mempool. Rather
>   each node maintains its own mempool and accepts and rejects transactions
>   to that mempool using their own internal policies. Most nodes have
>   the same policies, but due to different start times, relay delays,
>   and other factors, not every node has the same mempool, although they
>   may be very similar.
>
>   -
> https://bitcoin.stackexchange.com/questions/98585/how-to-find-if-two-transactions-in-mempool-are-conflicting
>
> Up until now, the differences between node policies supported by different
> nodes running core have been quite small, with essentially the following
> options available:
>
>  -minrelaytxfee, -maxmempool - changes the lowest fee rate you'll accept
>
>  -mempoolexpiry - how long to keep txs in the mempool
>
>  -datacarrier - reject txs creating OP_RETURN outputs
>
>  -datacarriersize - maximum size of OP_RETURN data
>
>  -permitbaremultisig - prevent relay of bare multisig
>
>  -bytespersigop - changes how SIGOP accounting works for relay and
>  mining prioritisation
>
> as well as these, marked as "debug only" options (only shown with
> -help-debug):
>
>  -incrementalrelayfee - make it easier/harder to spam txs by only
>  slightly bumping the fee; marked as a "debug only" option
>
>  -dustrelayfee - make it easier/harder to create uneconomic utxos;
>  marked as a "debug only" option
>
>  -limit{descendant,ancestor}{count,size} - changes how large the
>  transaction chains can be; marked as a "debug only" option
>
> and in 

Re: [bitcoin-dev] New transaction policies (nVersion=3) for contracting protocols

2022-09-26 Thread Gloria Zhao via bitcoin-dev
>>> wallets are likely to have one big UTXO in their fee-bumping reserve pool,
>>> as the cost of acquiring UTXO is non-null and in the optimistic case, you
>>> don't need to do unilateral closure. Let's say you close dozens of channels
>>> at the same time, a UTXO pool management strategy might be to fan-out the
>>> first spends UTXOs in N fan-out outputs ready to feed the remaining
>>> in-flight channels.
>>>
>>> > 1. The rule around unconfirmed inputs was
>>> > originally "A package may include new unconfirmed inputs, but the
>>> > ancestor feerate of the child must be at least as high as the ancestor
>>> > feerates of every transaction being replaced."
>>>
>>> Note, I think we would like this new RBF rule to also apply to single
>>> transaction package, e.g second-stage HTLC transactions, where a
>>> counterparty pins a HTLC-preimage by abusing rule 3. In that case, the
>>> honest LN node should be able to broadcast a "at least as high ancestor
>>> feerate" HTLC-timeout transaction. With `option_anchor_outputs` there is no
>>> unconfirmed ancestor to replace, as the commitment transaction, whatever
>>> the party it is originating from, should already be confirmed.
>>>
>>> > "Is this a privacy issue, i.e. doesn't it allow fingerprinting LN
>>> transactions based on nVersion?"
>>>
>>> As of today, I think yes you can already fingerprint LN transactions on
>>> the  spec-defined amount value of the anchor outputs, 330 sats. There is
>>> always one of them on post-anchor commitment transactions. And sadly I
>>> would say we'll always have tricky fingerprints leaking from unilateral LN
>>> closures such as HTLC/PTLC timelocks...
>>>
>>> > "Can a V2 transaction replace a V3 transaction and vice versa?"
>>>
>>> IIUC, a V3 package could replace a V2 package, with the benefit of the
>>> new package RBF rules applied. I think this would be a significant
>>> advantage for LN, as for the current ~85k of opened channels, the old V2
>>> states shouldn't be pinning vectors. Currently, commitment transactions
>>> signal replaceability.
>>>
>>> Le ven. 23 sept. 2022 à 11:26, Gloria Zhao via bitcoin-dev <
>>> bitcoin-dev@lists.linuxfoundation.org> a écrit :
>>>
>>>> Hi everyone,
>>>>
>>>> I'm writing to propose a very simple set of mempool/transaction relay
>>>> policies intended to aid L2/contract protocols. I realized that
>>>> the previously proposed Package Mempool Accept package RBF [1]
>>>> had a few remaining problems after digging into the RBF logic more [2].
>>>> This additional set of policies solves them without requiring a huge
>>>> RBF overhaul.
>>>>
>>>> I've written an implementation (and docs) for Bitcoin Core:
>>>> https://github.com/bitcoin/bitcoin/pull/25038
>>>>
>>>> (You may notice that this proposal incorporates feedback on the PR -
>>>> thanks Suhas Daftuar, Gregory Sanders, Bastien Teinturier, Anthony Towns,
>>>> and others.)
>>>>
>>>> If you are interested in using package RBF/relay to bump presigned
>>>> transactions, I think you may be interested in reviewing this proposal.
>>>> This should solve Rule 3 pinning and perhaps allow us
>>>> to get rid of CPFP carve-out (yay!). I'm keen to hear if people find
>>>> the 1-anchor-output, 1000vB child limit too restrictive. Also, if you
>>>> find a
>>>> pinning attack or something that makes it unusable for you, I would
>>>> really really like to know.
>>>>
>>>> Note that transactions with nVersion=3 ("V3 transactions") are
>>>> currently non-standard in Bitcoin Core. That means **anything that was
>>>> standard before this policy change would still be standard
>>>> afterwards.** If you don't want your transactions to be subject to
>>>> these rules, just continue whatever you're doing and don't use
>>>> nVersion=3. AFAICT this shouldn't break anything, but let me know if
>>>> this would be disruptive for you?
>>>>
>>>> **New Policies:**
>>>>
>>>> This includes:
>>>> - a set of additional policy rules applying to V3 transactions
>>>> - modifications to package RBF rules
>>>>
>>>> **V3 transactions:**
>>>>
>>>> Existing standardness

[bitcoin-dev] New transaction policies (nVersion=3) for contracting protocols

2022-09-23 Thread Gloria Zhao via bitcoin-dev
Hi everyone,

I'm writing to propose a very simple set of mempool/transaction relay
policies intended to aid L2/contract protocols. I realized that
the previously proposed Package Mempool Accept package RBF [1]
had a few remaining problems after digging into the RBF logic more [2].
This additional set of policies solves them without requiring a huge RBF
overhaul.

I've written an implementation (and docs) for Bitcoin Core:
https://github.com/bitcoin/bitcoin/pull/25038

(You may notice that this proposal incorporates feedback on the PR - thanks
Suhas Daftuar, Gregory Sanders, Bastien Teinturier, Anthony Towns, and
others.)

If you are interested in using package RBF/relay to bump presigned
transactions, I think you may be interested in reviewing this proposal.
This should solve Rule 3 pinning and perhaps allow us
to get rid of CPFP carve-out (yay!). I'm keen to hear if people find
the 1-anchor-output, 1000vB child limit too restrictive. Also, if you find a
pinning attack or something that makes it unusable for you, I would
really really like to know.

Note that transactions with nVersion=3 ("V3 transactions") are
currently non-standard in Bitcoin Core. That means **anything that was
standard before this policy change would still be standard
afterwards.** If you don't want your transactions to be subject to
these rules, just continue whatever you're doing and don't use
nVersion=3. AFAICT this shouldn't break anything, but let me know if
this would be disruptive for you?

**New Policies:**

This includes:
- a set of additional policy rules applying to V3 transactions
- modifications to package RBF rules

**V3 transactions:**

Existing standardness rules apply to V3 (e.g. min/max tx weight,
standard output types, cleanstack, etc.). The following additional
rules apply to V3:

1. A V3 transaction can be replaced, even if it does not signal BIP125
   replaceability. (It must also meet the other RBF rules around fees,
etc. for replacement to happen).

2. Any descendant of an unconfirmed V3 transaction must also be V3.

*Rationale*: Combined with Rule 1, this gives us the property of
"inherited" replaceability signaling when descendants of unconfirmed
transactions are created. Additionally, checking whether a transaction
signals replaceability this way does not require mempool traversal,
and does not change based on what transactions are mined. It also
makes subsequent rules about descendant limits much easier to check.

*Note*: The descendant of a *confirmed* V3 transaction does not need to be
V3.

3. An unconfirmed V3 transaction cannot have more than 1 descendant.

*Rationale*: (Upper bound) the larger the descendant limit, the more
transactions may need to be replaced. This is a problematic pinning
attack, i.e., a malicious counterparty prevents the transaction from
being replaced by adding many descendant transactions that aren't
fee-bumping.

(Lower bound) at least 1 descendant is required to allow CPFP of the
presigned transaction. The contract protocol can create presigned
transactions paying 0 fees and 1 output for attaching a CPFP at
broadcast time ("anchor output"). Without package RBF, multiple anchor
outputs would be required to allow each counterparty to fee-bump any
presigned transaction. With package RBF, since the presigned
transactions can replace each other, 1 anchor output is sufficient.

4. A V3 transaction that has an unconfirmed V3 ancestor cannot be
   larger than 1000 virtual bytes.

*Rationale*: (Upper bound) the larger the descendant size limit, the
more vbytes may need to be replaced. With default limits, if the child
is e.g. 100,000vB, that might be an additional 100,000sats (at
1sat/vbyte) or more, depending on the feerate.

(Lower bound) the smaller this limit, the fewer UTXOs a child may use
to fund this fee-bump. For example, only allowing the V3 child to have
2 inputs would require L2 protocols to manage a wallet with high-value
UTXOs and make batched fee-bumping impossible. However, as the
fee-bumping child only needs to fund fees (as opposed to payments),
just a few UTXOs should suffice.

With a limit of 1000 virtual bytes, depending on the output types, the
child can have 6-15 UTXOs, which should be enough to fund a fee-bump
without requiring a carefully-managed UTXO pool. With 1000 virtual
bytes as the descendant limit, the cost to replace a V3 transaction
has much lower variance.
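As a sanity check on the 6-15 figure, here is a rough count of taproot keypath inputs that fit in 1000vB. The per-component vsizes below are my approximations, not numbers from this post:

```python
# Rough sketch: how many taproot keypath inputs fit in a 1000vB v3 child?
# The per-component vsizes below are approximations (assumptions).
import math

TX_OVERHEAD_VB = 10.5            # version, locktime, io counts, segwit marker
TAPROOT_OUTPUT_VB = 43.0         # one change output
TAPROOT_KEYPATH_INPUT_VB = 57.5  # outpoint + sequence + 64-byte schnorr sig

budget_vb = 1000 - TX_OVERHEAD_VB - TAPROOT_OUTPUT_VB
max_inputs = math.floor(budget_vb / TAPROOT_KEYPATH_INPUT_VB)
print(max_inputs)  # 16, in the same ballpark as the 6-15 range above
```

With larger input types (e.g. script-path spends or legacy inputs), the count drops toward the lower end of the quoted range.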

*Rationale*: This makes the rule very easily "tacked on" to existing
logic for policy and wallets. A transaction may be up to 100KvB on its
own (`MAX_STANDARD_TX_WEIGHT`) and 101KvB with descendants
(`DEFAULT_DESCENDANT_SIZE_LIMIT_KVB`). If an existing V3 transaction
in the mempool is 100KvB, its descendant can only be 1000vB, even if
the policy is 10KvB.

**Package RBF modifications:**

1. The rule around unconfirmed inputs was
originally "A package may include new unconfirmed inputs, but the
ancestor feerate of the child must be at least as high as the ancestor
feerates of every transaction being 

Re: [bitcoin-dev] Package Relay Proposal

2022-06-14 Thread Gloria Zhao via bitcoin-dev
arn all of T's unconfirmed
> parents (~32 bytes * number of parents(T)).
> 4) Such nodes will reject T.  But because of transaction malleability, and
> to avoid being blinded to a transaction unnecessarily, these nodes will
> likely still send getdata(PKGINFO1, T) to every node that announces T, in
> case someone has a transaction that includes an alternate set of parent
> transactions that would pass policy checks.
>
> Is that understanding correct?  I think a good design goal would be to not
> waste bandwidth in non-adversarial situations.  In this case, there would
> be bandwidth waste from downloading duplicate data from all your peers,
> just because the announcement doesn't commit to the set of parent wtxids
> that we'd get from the peer (and so we are unable to determine that all our
> peers would be telling us the same thing, just based on the announcement).
>
> Some ways to mitigate this might be to: (a) include a hash (maybe even
> just a 20-byte hash -- is that enough security?) of the package wtxids (in
> some canonical ordering) along with the wtxid of the child in the initial
> announcement; (b) limit the use of v1 packages to transactions with very
> few parents (I don't know if this is reasonable for the use cases we have
> in mind).
>
> Another point I wanted to bring up is about the rules around v1 package
> validation generally, and the use of a blockhash in transaction relay
> specifically.  My first observation is that it won't always be the case
> that a v1 package relay node will be able to validate that a set of package
> transactions is fully sorted topologically, because there may be
> (non-parent) ancestors that are missing from the package and the best a
> peer can validate is topology within the package -- this means that a peer
> can validly (under this BIP) relay transaction packages out of the true
> topological sort (if all ancestors were included).
>
> This makes me wonder how useful this topological rule is.  I suppose there
> is some value in preventing completely broken implementations from staying
> connected and so there is no harm in having the rule, but perhaps it would
> be helpful to add that nodes SHOULD order transactions based on topological
> sort in the complete transaction graph, so that if missing-from-package
> ancestors are already known by a peer (which is the expected case when
> using v1 package relay on transactions that have more than one generation
> of unconfirmed ancestor) then the remaining transactions are already
> properly ordered, and this is helpful even if unenforceable in general.
>
> The other observation I wanted to make was that having transaction relay
> gated on whether two nodes agree on chain tip seems like an overly
> restrictive criteria.  I think an important design principle is that we
> want to minimize disruption from network splits -- if there are competing
> blocks found in a small window of time, it's likely that the utxo set is
> not materially different on the two chains (assuming miners are selecting
> from roughly the same sets of transactions when this happens, which is
> typical).  Having transaction relay bifurcate on the two network halves
> would seem to exacerbate the difference between the two sides of the split
> -- users ought to be agnostic about how benign splits are resolved and
> would likely want their transactions to relay across the whole network.
>
> Additionally, use of a chain tip might impose a larger burden than is
> necessary on software that would seek to participate in transaction relay
> without implementing headers sync/validation.  I don't know what software
> exists on the network, but I imagine there are a lot of scripts out there
> for transaction submission to the public p2p network, and in thinking
> about modifying such a script to utilize package relay it seems like an
> unnecessary added burden to first learn a node's tip before trying to relay
> a transaction.
>
> Could you explain again what the benefit of including the blockhash is?
> It seems like it is just so that a node could prioritize transaction relay
> from peers with the same chain tip to maximize the likelihood of
> transaction acceptance, but in the common case this seems like a pretty
> negligible concern, and in the case of a chain fork that persists for many
> minutes it seems better to me that we not partition the network into
> package-relay regimes and just risk a little extra bandwidth in one
> direction or the other.  If we solve the problem I brought up at the
> beginning (of de-duplicating package data across peers with a
> package-wtxid-commitment in the announcement), I think this is just some
> wasted pkginfo bandwidth on a single-link, and not across links (as we
> could cache validation failure fo

Re: [bitcoin-dev] Package Relay Proposal

2022-06-07 Thread Gloria Zhao via bitcoin-dev
Hi Eric, aj, all,

Sorry for the delayed response. @aj I'm including some paraphrased points
from our offline discussion (thanks).

> Other idea: what if you encode the parent txs as a short hash of the
> wtxid (something like bip152 short ids? perhaps seeded per peer so
> collisions will be different per peer?) and include that in the inv
> announcement? Would that work to avoid a round trip almost all of the time,
> while still giving you enough info to save bw by deduping parents?

> As I suggested earlier, a package is fundamentally a compact block (or
> block) announcement without the header. Compact block (BIP152)
> announcement is already well-defined and widely implemented...

> Let us not reinvent the wheel and/or introduce accidental complexity. I
> see no reason why packaging is not simply BIP152 without the 'header'
> field, an updated protocol version, and the following sort of changes to
> names

Interestingly, "why not use BIP 152 shortids to save bandwidth?" is by far
the most common suggestion I hear (including offline feedback). Here's a
full explanation:

BIP 152 shortens transaction hashes (32 bytes) to shortids (6 bytes) to
save a significant amount of network bandwidth, which is extremely
important in block relay. However, this comes at the expense of
computational complexity. There is no way to directly calculate a
transaction hash from a shortid; upon receipt of a compact block, a node is
expected to calculate the shortids of every unconfirmed transaction it
knows about to find the matches (BIP 152: [1], Bitcoin Core: [2]). This is
expensive but appropriate for block relay, since the block must have a
valid Proof of Work and new blocks only come every ~10 minutes. On the
other hand, if we require nodes to calculate shortids for every transaction
in their mempools every time they receive a package, we are creating a DoS
vector. Unconfirmed transactions don't need PoW and, to have a live
transaction relay network, we should expect nodes to handle transactions at
a high-ish rate (i.e. at least 1000s of times more transactions than
blocks). We can't pre-calculate or cache shortids for mempool transactions,
since the SipHash key depends on the block hash and a per-connection salt.

Additionally, shortid calculation is not designed to prevent intentional
individual collisions. If we were to use these shortids to deduplicate
transactions we've supposedly already seen, we may have a censorship
vector. Again, these tradeoffs make sense for compact block relay (see
shortid section in BIP 152 [3]), but not package relay.

TLDR: DoSy if we calculate shortids on every package and censorship vector
if we use shortids for deduplication.
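To make the per-announcement cost concrete, here is a minimal sketch of what a shortid-matching receiver would have to do. This is illustrative only, not from any implementation, and SHA-256 stands in for BIP 152's salted SipHash-2-4 (the real scheme keys SipHash with data derived from the block hash and a per-message nonce, which is exactly why the table can't be cached):

```python
import hashlib

def shortid(wtxid: bytes, salt: bytes) -> bytes:
    # Stand-in for BIP 152's salted SipHash-2-4 (assumption: SHA-256 is
    # used here purely for illustration).
    return hashlib.sha256(salt + wtxid).digest()[:6]

def match_announcement(announced: list, mempool_wtxids: list, salt: bytes) -> list:
    # The receiver must recompute a shortid for EVERY mempool transaction
    # per announcement, because the salt differs per peer and message --
    # this O(mempool-size) work per package is the DoS concern above.
    table = {shortid(w, salt): w for w in mempool_wtxids}
    return [table.get(s) for s in announced]
```

Rebuilding `table` once per compact block (~every 10 minutes, gated by PoW) is fine; rebuilding it once per package announcement is not.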

> Given this message there is no reason
> to send a (potentially bogus) fee rate with every package. It can only be
> validated by obtaining the full set of txs, and the only recourse is
> dropping (etc.) the peer, as is the case with single txs.

Yeah, I agree with this. Combined with the previous discussion with aj
(i.e. we can't accurately communicate the incentive compatibility of a
package without sending the full graph, and this whole dance is to avoid
downloading a few low-fee transactions in uncommon edge cases), I've
realized I should remove the fee + weight information from pkginfo. Yay for
less complexity!

Also, this might be pedantic, but I said something incorrect earlier and
would like to correct myself:

>> In theory, yes, but maybe it was announced earlier (while our node was
>> down?) or had dropped from our mempool or similar, either way we don't have
>> those txs yet.

I said "It's fine if they have Erlay, since a sender would know in advance
that B is missing and announce it as a package." But this isn't true since
we're only using reconciliation in place of flooding to announce
transactions as they arrive, not for rebroadcast, and we're not doing full
mempool set reconciliation. In any case, making sure a node receives the
transactions announced when it was offline is not something we guarantee,
not an intended use case for package relay, and not worsened by this.

Thanks for your feedback!

Best,
Gloria

[1]:
https://github.com/bitcoin/bips/blob/master/bip-0152.mediawiki#cmpctblock
[2]:
https://github.com/bitcoin/bitcoin/blob/master/src/blockencodings.cpp#L49
[3]:
https://github.com/bitcoin/bips/blob/master/bip-0152.mediawiki#short-transaction-id-calculation

On Thu, May 26, 2022 at 3:59 AM  wrote:

> Given that packages have no header, the package requires identity in a
> BIP152 scheme. For example 'header' and 'blockhash' fields can be replaced
> with a Merkle root (e.g. "identity" field) for the package, uniquely
> identifying the partially-ordered set of txs. And use of 'getdata' (to
> obtain a package by hash) can be eliminated (not a use case).
>
> e
>
> > -Original Message-
> > From: e...@voskuil.org 
> > Sent: Wednesday, May 25, 2022 1:52 PM
> > To: 'Anthony Towns' ; 'Bitcoin Protocol Discussion'
> > ; 'Gloria Zhao'
> > 
> > Subject: RE: 

Re: [bitcoin-dev] Package Relay Proposal

2022-05-28 Thread Gloria Zhao via bitcoin-dev
Hi aj, answering slightly out of order:

> what happens if the peer announcing packages to us is dishonest?
> They announce pkg X, say X has parents A B C and the fee rate is garbage.
> But actually X has parent D and the fee rate is excellent. Do we request
> the package from another peer, or every peer, to double check? Otherwise
> we're allowing the first peer we ask about a package to censor that tx from
> us?

Yes, providing false information shouldn't be worse than not announcing the
package at all, otherwise we have a censorship vector. In general, the
request logic should not let one peer prevent us from requesting a similar
announcement from another peer.
Yes, I was indeed expecting that we would ask for package info from
everyone who announces it until we accept the package or have full
information.
I can see that it's a fair number of messages (request pckginfo, oh it's
low fee, request pckginfo from somebody else), but we also need to track
announcements / potentially go through the same circle to handle
"notfound"s, right?
In normal running, the fee filter should stop a bunch of honest nodes from
telling us packages that are low fee.

> I think the fix for that is just to provide the fee and weight when
> announcing the package rather than only being asked for its info? Then if
> one peer makes it sound like a good deal you ask for the parent txids from
> them, dedupe, request, and verify they were honest about the parents.
> Likewise, I think you'd have to have the graph info from many nodes if
> you're going to make decisions based on it and don't want hostile peers to
> be able to trick you into ignoring txs.

I don't think providing more information up front can ever sufficiently
resolve the censorship issue. If we want to prevent any one peer from being
able to censor requests to other peers, we need to store all announcements
and be prepared to request from everybody.

Would it be better if we just took out the fee information and had
"pckginfo" only consist of transaction ids? Sender tries its best to apply
the fee filter? Presumably you have a txInventoryKnown of your peer based
on what they've announced to you... just take the ancestor set of a
transaction, subtract what they already have, and apply the fee filter to
that? Or some kind of algorithm that ensures we don't underestimate? If
it's imperfect, the worst case is the receiver downloads a few transactions
and rejects them. Given that our goal is just to avoid this case, perhaps
opting for simplicity is better than adding a topology graph
serialization/deserialization + feerate assessment algorithm on top of this
protocol...?
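The sender-side heuristic described above can be sketched in a few lines (names and structure are illustrative, not from any implementation): take the transaction's ancestor set, drop what the peer is believed to already have, and compare the remainder's aggregate feerate to the peer's fee filter.

```python
from dataclasses import dataclass

@dataclass
class Tx:
    name: str
    fee: int   # satoshis
    size: int  # vbytes

def worth_announcing(tx: Tx, ancestors: list, peer_known: set,
                     feefilter: float) -> bool:
    # Ancestor set minus what the peer already has, fee-filtered.
    unknown = [a for a in ancestors if a.name not in peer_known] + [tx]
    fee = sum(t.fee for t in unknown)
    size = sum(t.size for t in unknown)
    return fee / size >= feefilter
```

If the estimate is off, the worst case is exactly as stated: the receiver downloads a few transactions and rejects them.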

>>We'd erroneously ask for A+B+C+X, but really we should only take A+B.
>>But wouldn't A+B also be a package that was announced for B?

> In theory, yes, but maybe it was announced earlier (while our node was
> down?) or had dropped from our mempool or similar, either way we don't have
> those txs yet.

Hm. It's fine if they have Erlay, since a sender would know in advance that
B is missing and announce it as a package. A potential tack-on solution
would be to request package information whenever you have a "low fee" error
on a parent and "missing inputs" on a child. Or we solve it at the
validation level - instead of submitting each tx individually, we submit
each ancestor subset. Do you think any of these is sufficient? At least the
package properly propagates across nodes which are online when it is
broadcasted...

Best,
Gloria

On Wed, May 25, 2022 at 11:55 AM Anthony Towns  wrote:

> On 24 May 2022 5:05:35 pm GMT-04:00, Gloria Zhao via bitcoin-dev <
> bitcoin-dev@lists.linuxfoundation.org> wrote:
> >To clarify, in this situation, I'm imagining something like
> >A: 0 sat, 100vB
> >B: 1500 sat, 100vB
> >C: 0 sat, 100vB
> >X: 500 sat, 100vB
> >feerate floor is 3sat/vB
> >
> >With the algo:
> >>  * is X alone above my fee rate? no, then forget it
> >>  * otherwise, s := X.size, f := X.fees, R := [X]
> >>  * for P = P1..Pn:
> >>   * do I already have P? then skip to the next parent
> >>   * s += P.size, f += P.fees, R += [P]
> >>  * if f/s above my fee rate floor? if so, request all the txs in R
> >
> >We'd erroneously ask for A+B+C+X, but really we should only take A+B.
> >But wouldn't A+B also be a package that was announced for B?
>
> In theory, yes, but maybe it was announced earlier (while our node was
> down?) or had dropped from our mempool or similar, either way we don't have
> those txs yet.
>
> >Please lmk if you were imagining something different. I think I may be
> >missing something.
>
> That's what I was thinking, yes.
>
> So the other thing is what happens if the peer announcing packages to us
> is dishonest?
>
> They announce pkg X, say X has parents A B C and the fee ra

Re: [bitcoin-dev] Package Relay Proposal

2022-05-24 Thread Gloria Zhao via bitcoin-dev
Hi aj,

> If you've got (A,B,C,X) where B spends A and X spends A,B,C where X+C is
> below fee floor while A+B and A+B+C+X are above fee floor you have the
> problem though.

To clarify, in this situation, I'm imagining something like
A: 0 sat, 100vB
B: 1500 sat, 100vB
C: 0 sat, 100vB
X: 500 sat, 100vB
feerate floor is 3sat/vB

With the algo:
>  * is X alone above my fee rate? no, then forget it
>  * otherwise, s := X.size, f := X.fees, R := [X]
>  * for P = P1..Pn:
>   * do I already have P? then skip to the next parent
>   * s += P.size, f += P.fees, R += [P]
>  * if f/s above my fee rate floor? if so, request all the txs in R

We'd erroneously ask for A+B+C+X, but really we should only take A+B.
But wouldn't A+B also be a package that was announced for B?
Please lmk if you were imagining something different. I think I may be
missing something.
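For concreteness, the quoted algorithm run on the numbers above can be sketched as follows (illustrative only; the `Tx` shape and names are mine, not from any proposal):

```python
from dataclasses import dataclass

@dataclass
class Tx:
    name: str
    fee: int   # satoshis
    size: int  # vbytes

def request_set(child: Tx, parents: list, have: set, floor: float) -> list:
    # The quoted algorithm: forget it unless the child alone meets the
    # fee floor, then fold in each missing parent and check the
    # aggregate feerate before requesting everything.
    if child.fee / child.size < floor:
        return []
    s, f, r = child.size, child.fee, [child.name]
    for p in parents:
        if p.name in have:
            continue
        s += p.size
        f += p.fee
        r.append(p.name)
    return r if f / s >= floor else []

# A: 0 sat, B: 1500 sat, C: 0 sat, X: 500 sat; all 100 vB; floor 3 sat/vB.
# The aggregate A+B+C+X pays 2000 sat / 400 vB = 5 sat/vB, so all four are
# requested -- even though only A+B (7.5 sat/vB) is actually worth taking.
```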

> Is it plausible to add the graph in?

Fun to think about. Most basic design would be to represent {spends,
doesn’t spend} for a previous transaction in the package as a bit. Can
think of it as a matrix where row i, column j tells you whether Tx j
(directly) spends Tx i.
But of course you can omit the last row, since the child spends all of
them. And since topological ordering is a requirement, you only need as
many bits as there are transactions preceding this one in the package.
If you have up to 24 parents, you need 1 + 2 + ... + 23 bits to codify
spending for the 2nd ... 24th parent. For a maximum of 25 transactions,
that's 23*24/2 = 276 bits, i.e. 35 bytes for a child-with-parents package.
A few more for tx-with-ancestors.
Then you can split it up into sub-packages and everything. Still not sure
if we really need to.
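A rough sketch of that triangular encoding (illustrative; no such serialization is specified anywhere):

```python
def encode_topology(spends: list) -> bytes:
    """Pack 'tx j directly spends tx i' bits for a topologically sorted
    package. spends[j] is the set of earlier indices that tx j spends.
    Row 0 is empty by construction, and the final child's row is omitted
    since the child spends all prior transactions."""
    bits = []
    n = len(spends)
    for j in range(1, n - 1):          # skip tx 0 and the child
        for i in range(j):
            bits.append(i in spends[j])
    out = bytearray((len(bits) + 7) // 8)
    for k, b in enumerate(bits):
        if b:
            out[k // 8] |= 1 << (k % 8)  # little-endian bit packing
    return bytes(out)
```

For the maximum of 25 transactions this is 276 bits, which packs into 35 bytes.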

Also side note, since there are no size/count params, wondering if we
should just have "version" in "sendpackages" be a bit field instead of
sending a message for each version. 32 versions should be enough right?

Best,
Gloria

On Tue, 24 May 2022 at 12:48 Anthony Towns  wrote:

> On 23 May 2022 9:13:43 pm GMT-04:00, Gloria Zhao 
> wrote:
> >> If you're asking for the package for "D", would a response telling you:
> >>   txid_D (500 sat, 100vB)
> >>   txid_A (0 sat, 100vB)
> >>   txid_B (2000 sat, 100 vB)
> >> be better, in that case? Then the receiver can maybe do the logic
> >> themselves to figure out that they already have A in their mempool
> >> so it's fine, or not?
> >Right, I also considered giving the fees and sizes of each transaction in
> >the package in “pckginfo1”. But I don’t think that information provides
> >additional meaning unless you know the exact topology, i.e. also know if
> >the parents have dependency relationships between them. For instance, in
> >the {A, B, D} package there, even if you have the information listed, your
> >decision should be different depending on whether B spends from A.
>
> I don't think that's true? We already know D is above our fee floor so if
> B with A is also above the floor, we want them all, but also if B isn't
> above the floor, but all of them combined are, then we also do?
>
> If you've got (A,B,C,X) where B spends A and X spends A,B,C where X+C is
> below fee floor while A+B and A+B+C+X are above fee floor you have the
> problem though.
>
> Is it plausible to add the graph in?
>
> Cheers,
> aj
>
>
>
> --
> Sent from my phone.
>
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Package Relay Proposal

2022-05-24 Thread Gloria Zhao via bitcoin-dev
Hi aj,

> if you're writing a protocol that's
> dependent on people seeing that a package as a whole pays a competitive
> feerate, don't you want to know in advance what conditions the network
> is going to impose on your transactions in order to consider them as a
> package?

I do think unifying the size/count constraints would result in a more
stable/easier to reason about interface for L2 devs. Then the requirement
for propagation is just a path of nodes that support v1 package relay, and
it’s implied their mempool policy supports it as well. Also seems like it
could be a fingerprinting problem for nodes to give very specific
count/size limits.

> (… maybe core's defaults should be reconsidered rather than standardised
> as-is)

> Worst case, you could presumably do a new package relay version with
> different constraints, if needed.

Maybe this was my actual concern. I think the defaults are safe but it’s
not like they’ve been proven to be optimal. This creates an obstacle to
changing them, especially if we want to make them smaller. But I think it’s
unlikely we’ll do that, and adding another version for new constraints
doesn’t seem too bad.


(Agreed with everything here, thanks for the feedback and clarifications!)
TLDR, making these changes:
- Count and size are implied by the version. Version 1 is specifically
child-with-unconfirmed-parents, where the whole package is at most 25
transactions and 101KvB.
- Announce sendpackages based on our own state. It’s ok to send
“sendpackages” if they sent fRelay=false.
- At verack, require fRelay=true and wtxidrelay if they sent sendpackages,
otherwise disconnect.
- If we get “getpckgtxns” or “pckgtxns” without having negotiated
“sendpackages” ahead of time, ignore, don’t disconnect. Emphasize that the
intention is to validate all of the transactions received through
“pckgtxns” together.
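The negotiation rules in the list above can be sketched as a pair of handlers (all names are illustrative, not from Bitcoin Core or the BIP draft):

```python
from dataclasses import dataclass

@dataclass
class Peer:
    f_relay: bool = True
    sent_wtxidrelay: bool = False
    sent_sendpackages: bool = False
    we_sent_sendpackages: bool = True  # we announce based on our own state
    negotiated: bool = False
    disconnected: bool = False

def on_verack(peer: Peer) -> None:
    # A peer that sent "sendpackages" must also have sent fRelay=true
    # and "wtxidrelay"; otherwise disconnect.
    if peer.sent_sendpackages and not (peer.f_relay and peer.sent_wtxidrelay):
        peer.disconnected = True
        return
    peer.negotiated = peer.sent_sendpackages and peer.we_sent_sendpackages

def on_getpckgtxns(peer: Peer, txns: list) -> list:
    # Without prior "sendpackages" negotiation: ignore, don't disconnect.
    if not peer.negotiated or peer.disconnected:
        return []
    return txns
```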

> If you're asking for the package for "D", would a response telling you:
>   txid_D (500 sat, 100vB)
>   txid_A (0 sat, 100vB)
>   txid_B (2000 sat, 100 vB)
> be better, in that case? Then the receiver can maybe do the logic
> themselves to figure out that they already have A in their mempool
> so it's fine, or not?

Right, I also considered giving the fees and sizes of each transaction in
the package in “pckginfo1”. But I don’t think that information provides
additional meaning unless you know the exact topology, i.e. also know if
the parents have dependency relationships between them. For instance, in
the {A, B, D} package there, even if you have the information listed, your
decision should be different depending on whether B spends from A. The only
thing you know for sure about a child with direct parents is: if the
aggregate feerate is too low, you won’t want the child since it depends on
everyone else. If there’s a good-feerate transaction in there that doesn’t
have a dependency, you’re fine as long as someone sends it to you
individually.

Best,
Gloria

On Mon, May 23, 2022 at 2:34 PM Anthony Towns via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> On Wed, May 18, 2022 at 02:40:58PM -0400, Gloria Zhao via bitcoin-dev
> wrote:
> > > Does it make sense for these to be configurable, rather than implied
> > > by the version?
> > > … would it be better to either just not do sendpackages
> > > at all if you're limiting ancestors in the mempool incompatibly
> > Effectively: if you’re setting your ancestor/descendant limits lower than
> > the default, you can’t do package relay. I wonder if this might be
> > controversial, since it adds pressure to adhere to Bitcoin Core’s current
> > mempool policy? I would be happy to do it this way, though - makes things
> > easier to implement.
>
> How about looking at it the other way: if you're writing a protocol that's
> dependent on people seeing that a package as a whole pays a competitive
> feerate, don't you want to know in advance what conditions the network
> is going to impose on your transactions in order to consider them as a
> package? In that case, aren't the "depth" and "size" constraints things
> we should specify in a standard?
>
> (The above's not a rhetorical question; I'm not sure what the answer is.
> And even if it's "yes", maybe core's defaults should be reconsidered
> rather than standardised as-is)
>
> Worst case, you could presumably do a new package relay version with
> different constraints, if needed.
>
> > > > 5. If 'fRelay==false' in a peer's version message, the node must not
> > > > send "sendpackages" to them. If a "sendpackages" message is
> > > > received by a peer after sending `fRelay==false` in their version
> > > > message, the sender should be disconnected.
> > > Seems better to just say "if you s

Re: [bitcoin-dev] Package Relay Proposal

2022-05-18 Thread Gloria Zhao via bitcoin-dev
u might end up with even
more transaction data that’s just sitting around waiting to be validated.

> The "only be sent if both peers agreed to do package relay" rule could
> simply be dropped, I think.

Wouldn’t we need some way of saying “hey I support batchtxns?” Otherwise
you would have to guess by sending a request and waiting to see if it’s
ignored?

> Shouldn't the sender only be sending package announcements when they know
> the recipient will be interested in the package, based on their feefilter?

I think there are cases where the sender doesn’t necessarily know.
Consider this example:
https://raw.githubusercontent.com/glozow/bitcoin-notes/master/mempool_garden/rich_parent_bad_cpfp.png
D (5sat/vB) has 2 parents, A (0sat/vB) and B (20sat/vB). All same size.
Feefilter is 3sat/vB.
If the receiver already has B, they’ll know they can just reject the
package already based on the pckginfo.
But the sender doesn’t really know that. The sender just knows A is below
feerate and D is above. D is above the fee filter, and its ancestor feerate
is above the fee filter.
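Working through the numbers from that diagram (a sketch; the constants are just the example's values):

```python
# All transactions 100 vB; feefilter 3 sat/vB, as in the example above.
SIZE = 100
FEEFILTER = 3.0
feerate = {"A": 0, "B": 20, "D": 5}

def package_feerate(txs) -> float:
    # Aggregate feerate of a set of equal-size transactions.
    return sum(feerate[t] * SIZE for t in txs) / (len(txs) * SIZE)

# Sender's view: D alone and the ancestor package {A, B, D} both clear
# the filter, so the package looks worth announcing.
sender_ok = feerate["D"] >= FEEFILTER and package_feerate("ABD") >= FEEFILTER

# A receiver who already has B only needs {A, D}, which does NOT clear
# the filter -- information the sender doesn't have.
receiver_ok = package_feerate("AD") >= FEEFILTER
```

{A, B, D} pays ~8.3 sat/vB, but {A, D} pays only 2.5 sat/vB, so the receiver can reject from pckginfo alone while the sender had every reason to announce.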

> CAmount in consensus/amount.h is a int64_t
> The maximum block weight is 4M…

Oops, yes. I think we just usually use int64_t for vsizes afaik. Agree that
it should be 8 bytes for fee, and 4 bytes is enough for vsize.

> I guess tx relay is low priority enough that it wouldn't be worth tagging
> some peers as "high bandwidth" and having them immediately announce the
> PCKGINFO1 message, and skip the INV/GETDATA step?

I had the same idea as well, but seemed unnecessary. It would reduce the
number of round trips, but I don’t think an extra round trip is that big of
a deal for transaction relay. Block relay, yes of course, but I don’t think
we care that much if it takes an extra second to send a transaction?

Best,
Gloria

On Tue, May 17, 2022 at 8:35 PM Anthony Towns  wrote:

> On Tue, May 17, 2022 at 12:01:04PM -0400, Gloria Zhao via bitcoin-dev
> wrote:
> > New Messages
> > Three new protocol messages are added for use in any version of
> > package relay. Additionally, each version of package relay must define
> > its own inv type and "pckginfo" message version, referred to in this
> > document as "MSG_PCKG" and "pckginfo" respectively. See
> > BIP-v1-packages for a concrete example.
>
> The "PCKG" abbreviation threw me for a loop; isn't the usual
> abbreviation "PKG" ?
>
> > =sendpackages=
> > |version || uint32_t || 4 || Denotes a package version supported by the node.
> > |max_count || uint32_t || 4 || Specifies the maximum number of transactions per package this node is willing to accept.
> > |max_weight || uint32_t || 4 || Specifies the maximum total weight per package this node is willing to accept.
>
> Does it make sense for these to be configurable, rather than implied
> by the version?
>
> I presume the idea is to cope with people specifying different values for
> -limitancestorcount or -limitancestorsize, but if people are regularly
> relaying packages around, it seems like it becomes hard to have those
> values really be configurable while being compatible with that?
>
> I guess I'm asking: would it be better to either just not do sendpackages
> at all if you're limiting ancestors in the mempool incompatibly; or
> alternatively, would it be better to do the package relay, then reject
> the particular package if it turns out too big, and log that you've
> dropped it so that the node operator has some way of realising "whoops,
> I'm not relaying packages properly because of how I configured my node"?
>
> > 5. If 'fRelay==false' in a peer's version message, the node must not
> > send "sendpackages" to them. If a "sendpackages" message is
> > received by a peer after sending `fRelay==false` in their version
> > message, the sender should be disconnected.
>
> Seems better to just say "if you set fRelay=false in your version
> message, you must not send sendpackages"? You already won't do packages
> with the peer if they don't also announce sendpackages.
>
> > 7. If both peers send "wtxidrelay" and "sendpackages" with the same
> > version, the peers should announce, request, and send package
> > information to each other.
>
> Maybe: "You must not send sendpackages unless you also send wtxidrelay" ?
>
>
> As I understand it, the two cases for the protocol flow are "I received
> an orphan, and I'd like its ancestors please" which seems simple enough,
> and "here's a child you may be interested in, even though you possibly
> weren't interested in the parents of that child". I think the lo

Re: [bitcoin-dev] Package Relay Proposal

2022-05-17 Thread Gloria Zhao via bitcoin-dev
Hi Greg,

Thanks for reading!

>> A child-with-unconfirmed-parents package sent between nodes must abide by
>> the rules below, otherwise the package is malformed and the sender should
>> be disconnected.

>> However, if the child has confirmed parents, they must not be in the
>> package.

> If my naive understanding is correct, this means things like otherwise
> common situations such as a new block will result in disconnects, say when
> the sender doesn't hear about a new block which makes the relay package
> superfluous/irrelevant. Similar would be disconnection
> when confirmed gets turned into unconfirmed, but those situations are
> extremely uncommon. The other rules are entirely under the control
> of the sender, which leads me to wonder if it's appropriate.

This is why the "pckginfo1" message includes the blockhash at which the
package was defined.
Also please see Clarifications - "Q: What if a new block arrives in between
messages?'' section in the v1-packages portion. It covers both cases, i.e.
a transaction going from unconfirmed->confirmed and confirmed->unconfirmed
in a reorg.

In case anybody is wondering "why don't we just allow confirmed parents?":
Since we validate based on the UTXO set, when we see a recently-confirmed
transaction, it just looks like it spends nonexistent inputs. In these
cases, we don't really know if the input was recently spent in a block or
just never existed, unless we plan on looking up transactions in past
blocks. We do some guesswork when we deal with new blocks in normal
transaction relay (e.g. we requested the tx before a block arrived):
https://github.com/bitcoin/bitcoin/blob/d5d40d59f8d12cf53c5ad1ce9710f3f108cec386/src/validation.cpp#L780-L784
I believe it's cleaner to just explicitly say which blockhash you're on to
avoid confusion.

Thanks,
Gloria

On Tue, May 17, 2022 at 1:56 PM Greg Sanders  wrote:

> Hi Gloria,
>
> Thanks for working on this important proposal!
>
> Still a lot to digest, but I just had on area of comment/question:
>
> > A child-with-unconfirmed-parents package sent between nodes must abide by
> the rules below, otherwise the package is malformed and the sender should
> be disconnected.
>
> > However, if the child has confirmed parents, they must not be in the
> package.
>
> If my naive understanding is correct, this means things like otherwise
> common situations such as a new block will result in disconnects, say when
> the sender doesn't hear about a new block which makes the relay package
> superfluous/irrelevant. Similar would be disconnection
> when confirmed gets turned into unconfirmed, but those situations are
> extremely uncommon. The other rules are entirely under the control
> of the sender, which leads me to wonder if it's appropriate.
>
> Cheers,
> Greg
>
> On Tue, May 17, 2022 at 12:09 PM Gloria Zhao via bitcoin-dev <
> bitcoin-dev@lists.linuxfoundation.org> wrote:
>
>> Hi everybody,
>>
>> I’m writing to propose a set of p2p protocol changes to enable package
>> relay, soliciting feedback on the design and approach. Here is a link
>> to the most up-to-date proposal:
>>
>> https://github.com/bitcoin/bips/pull/1324
>>
>> If you have concept or approach feedback, *please respond on the
>> mailing list* to allow everybody to view and participate in the
>> discussion. If you find a typo or inaccurate wording, please feel free
>> to leave suggestions on the PR.
>>
>> I’m also working on an implementation for Bitcoin Core.
>>
>>
>> The rest of this post will include the same contents as the proposal,
>> with a bit of reordering and additional context. If you are not 100%
>> up-to-date on package relay and find the proposal hard to follow, I
>> hope you find this format more informative and persuasive.
>>
>>
>> ==Background and Motivation==
>>
>> Users may create and broadcast transactions that depend upon, i.e.
>> spend outputs of, unconfirmed transactions. A “package” is the
>> widely-used term for a group of transactions representable by a
>> connected Directed Acyclic Graph (where a directed edge exists between
>> a transaction that spends the output of another transaction).
>>
>> Incentive-compatible mempool and miner policies help create a fair,
>> fee-based market for block space. While miners maximize transaction
>> fees in order to earn higher block rewards, non-mining users
>> participating in transaction relay reap many benefits from employing
>> policies that result in a mempool with the same contents, including
>> faster compact block relay and more accurate fee estimation.
>> Additionally, users may take advantage of mempool and miner policy to
>> bump the prior

[bitcoin-dev] Package Relay Proposal

2022-05-17 Thread Gloria Zhao via bitcoin-dev
Hi everybody,

I’m writing to propose a set of p2p protocol changes to enable package
relay, soliciting feedback on the design and approach. Here is a link
to the most up-to-date proposal:

https://github.com/bitcoin/bips/pull/1324

If you have concept or approach feedback, *please respond on the
mailing list* to allow everybody to view and participate in the
discussion. If you find a typo or inaccurate wording, please feel free
to leave suggestions on the PR.

I’m also working on an implementation for Bitcoin Core.


The rest of this post will include the same contents as the proposal,
with a bit of reordering and additional context. If you are not 100%
up-to-date on package relay and find the proposal hard to follow, I
hope you find this format more informative and persuasive.


==Background and Motivation==

Users may create and broadcast transactions that depend upon, i.e.
spend outputs of, unconfirmed transactions. A “package” is the
widely-used term for a group of transactions representable by a
connected Directed Acyclic Graph (where a directed edge exists between
a transaction that spends the output of another transaction).

Incentive-compatible mempool and miner policies help create a fair,
fee-based market for block space. While miners maximize transaction
fees in order to earn higher block rewards, non-mining users
participating in transaction relay reap many benefits from employing
policies that result in a mempool with the same contents, including
faster compact block relay and more accurate fee estimation.
Additionally, users may take advantage of mempool and miner policy to
bump the priority of their transactions by attaching high-fee
descendants (Child Pays for Parent or CPFP).  Only considering
transactions one at a time for submission to the mempool creates a
limitation in the node's ability to determine which transactions have
the highest feerates, since it cannot take into account descendants
until all the transactions are in the mempool. Similarly, it cannot
use a transaction's descendants when considering which of two
conflicting transactions to keep (Replace by Fee or RBF).

When a user's transaction does not meet a mempool's minimum feerate
and they cannot create a replacement transaction directly, their
transaction will simply be rejected by this mempool. They also cannot
attach a descendant to pay for replacing a conflicting transaction.
This limitation harms users' ability to fee-bump their transactions.
Further, it presents a security issue in contracting protocols which
rely on **presigned**, time-sensitive transactions to prevent cheating
(HTLC-Timeout in LN Penalty [1] [2] [3], Unvault Cancel in Revault
[4], Refund Transaction in Discreet Log Contracts [5], Updates in
eltoo [6]). In other words, a key security assumption of many
contracting protocols is that all parties can propagate and confirm
transactions in a timely manner.

In the past few years, increasing attention [0][1][2][3][6] has been
brought to **pinning attacks**, a type of censorship in which the
attacker uses mempool policy restrictions to prevent a transaction
from being relayed or getting mined.  TLDR: revocation transactions
must meet a certain confirmation target to be effective, but their
feerates are negotiated well ahead of broadcast time. If the
forecasted feerate was too low and no fee-bumping options are
available, attackers can steal money from their counterparties. I walk
through a concrete example for stealing Lightning HTLC outputs at
~23:58 in this talk [7][8].  Note that most attacks are only possible
when the market for blockspace at broadcast time demands much higher
feerates than originally anticipated at signing time. Always
overestimating fees may sidestep this issue temporarily (while mempool
traffic is low and predictable), but this solution is not foolproof
and wastes users' money. The feerate market can change due to sudden
spikes in traffic (e.g. huge 12sat/vB dump a few days ago [9]) or
sustained, high volume of Bitcoin payments (e.g.  April 2021 and
December 2017).

The best solution is to enable nodes to consider packages of
transactions as a unit, e.g. one or more low-fee parent transactions
with a high-fee child, instead of separately. A package-aware mempool
policy can help determine if it would actually be economically
rational to accept a transaction to the mempool if it doesn't meet fee
requirements individually. Network-wide adoption of these policies
would create a more purely-feerate-based market for block space and
allow contracting protocols to adjust fees (and therefore mining
priority) at broadcast time.  Some support for packages has existed in
Bitcoin Core for years. Since v0.13, Bitcoin Core has used ancestor
packages instead of individual transactions to evaluate the incentive
compatibility of transactions in the mempool [10] and select them for
inclusion in blocks [11].
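The ancestor-package approach to block building mentioned above can be sketched as follows (a simplification, not Bitcoin Core's actual implementation, which also handles topological ordering and updates scores incrementally): repeatedly pick the transaction whose ancestor package has the highest aggregate feerate and include the whole package at once.

```python
# Sketch of ancestor-package block building: a transaction's "score" is
# the aggregate feerate of itself plus its not-yet-included unconfirmed
# ancestors, so a high-fee child pulls its low-fee parent into the block.

def build_template(txs, ancestors, max_vsize=1_000_000):
    """txs: {txid: (fee, vsize)}; ancestors: {txid: set of ancestor txids}.
    Returns the included txids (topological order is ignored here)."""
    included, used = [], 0
    remaining = dict(txs)
    while remaining:
        def anc_score(txid):
            pkg = ({txid} | ancestors[txid]) & remaining.keys()
            fee = sum(remaining[t][0] for t in pkg)
            size = sum(remaining[t][1] for t in pkg)
            return fee / size
        best = max(remaining, key=anc_score)
        pkg = ({best} | ancestors[best]) & remaining.keys()
        size = sum(remaining[t][1] for t in pkg)
        if used + size > max_vsize:
            break
        included.extend(pkg)
        used += size
        for t in pkg:
            del remaining[t]
    return included

# A 1 sat/vB parent is carried in by its 20 sat/vB child.
txs = {"parent": (100, 100), "child": (2_000, 100)}
ancestors = {"parent": set(), "child": {"parent"}}
print(sorted(build_template(txs, ancestors)))  # ['child', 'parent']
```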

Package Relay, the concept of {announcing, requesting, downloading}
packages between nodes 

Re: [bitcoin-dev] Improving RBF Policy

2022-03-14 Thread Gloria Zhao via bitcoin-dev
>> broadcasted (like rate limiting could), but would still be effective at
>> limiting unnecessary data transmission. Another benefit is that in the
>> non-adversarial case, replacement transactions wouldn't be subject to any
>> delay at all (while in the staggered broadcast idea, most replacements
>> would experience some delay). And in the adversarial case, where a
>> malicious actor broadcasts a low-as-possible-value replacement just before
>> yours, the worst case delay is just whatever the cooldown period is. I
>> would imagine that maybe 1 minute would be a reasonable worst-case delay.
>> This would limit spam for a transaction that makes it into a block to ~10x
>> (9 to 1). I don't see much of a downside to doing this beyond just the
>> slight additional complexity of relay rules (and considering it could save
>> substantial additional code complexity, even that is a benefit).
>>
>> All a node would need to do is keep a timestamp on each transaction they
>> receive for when it was broadcasted and check it when a replacement comes
>> in. If now-broadcastDate < cooldown, set a timer for cooldown -
>> (now-broadcastDate) to broadcast it. If another replacement comes in, clear
>> that timer and repeat using the original broadcast date (since the
>> unbroadcast transaction doesn't have a broadcast date yet).
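The cooldown bookkeeping quoted above can be sketched in a few lines of Python (class and method names are illustrative, not from any implementation): relay an original immediately, and delay each replacement until the cooldown measured from the original broadcast has elapsed.

```python
# Sketch of the proposed replacement-cooldown logic: remember when the
# original for a prevout was relayed, and hold back replacements until
# the cooldown window closes. Later replacements overwrite the pending
# one but keep the ORIGINAL broadcast date, as described above.
import time

COOLDOWN = 60.0  # seconds; the 1-minute worst-case delay suggested above

class ReplacementScheduler:
    def __init__(self):
        self.broadcast_date = {}  # prevout -> time the original was relayed
        self.pending = {}         # prevout -> latest pending replacement

    def on_transaction(self, prevout, tx, now=None):
        now = time.time() if now is None else now
        first = self.broadcast_date.get(prevout)
        if first is None:
            self.broadcast_date[prevout] = now
            return ("relay", tx)  # original: relay immediately
        wait = COOLDOWN - (now - first)
        if wait <= 0:
            self.broadcast_date[prevout] = now  # start a new window
            return ("relay", tx)
        self.pending[prevout] = tx  # replace any earlier pending tx
        return ("delay", wait)

s = ReplacementScheduler()
print(s.on_transaction("utxo:0", "txA", now=0.0))   # ('relay', 'txA')
print(s.on_transaction("utxo:0", "txB", now=10.0))  # ('delay', 50.0)
print(s.on_transaction("utxo:0", "txC", now=30.0))  # ('delay', 30.0)
```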
>>
>> I think it might also be useful to note that eliminating "extra data"
>> caused by careless or malicious actors (spam or whatever you want to call
>> it) should not be the goal. It is impossible to prevent all spam. What we
>> should be aiming for is more specific: we should attempt to design a system
>> where spam is manageable. Eg if our goal is to ensure that a bitcoin node
>> uses no more than 10% of the bandwidth of a "normal" user, if current
>> non-spam traffic only requires 1% of a "normal" user's bandwidth, then the
>> network can bear a 9 to 1 ratio of spam. When a node spins up, there is a
>> lot more data to download and process. So we know that all full nodes can
>> handle at least as much traffic as they handle during IBD. What's the
>> difference between those amounts? I'm not sure, but I would guess that IBD
>> is at least a couple times more demanding than a fully synced node. So I
>> might suggest that as long as spam can be kept below a ratio of maybe 2 to
>> 1, we should consider the design acceptable (and therefore more complexity
>> unnecessary).
>>
>> The 1 minute broadcast cooldown I mentioned before wouldn't be quite
>> sufficient to achieve that ratio. But a 3.33 minute cooldown would be.
>> Whether this is "too much" is something that would have to be discussed, I
>> suspect a worst-case adversarial 3.33 minute delay would not be "too much".
>> Doing this could basically eliminate any risk of actual service denial via
>> replacement transactions.
>>
>> However, I do think that these DOS concerns are quite overblown. I wrote
>> up a comment on your rbf-improvements.md
>> <https://gist.github.com/glozow/25d9662c52453bd08b4b4b1d3783b9ff?permalink_comment_id=4093100#gistcomment-4093100>
>>  detailing
>> my thought process on that. The summary is that as long as the fee-rate
>> relay rule is maintained, any "spam" is actually paid for, either by the
>> "first" transaction in the spam chain, or by the "spam" itself. Even
>> without something like a minimum RBF relay delay limiting how much spam
>> could be created, the economics of the fee-rate rule already sufficiently
>> mitigate the issue of spam.
>> On Wed, Mar 9, 2022 at 9:37 AM Gloria Zhao via bitcoin-dev <
>> bitcoin-dev@lists.linuxfoundation.org> wrote:
>>
>>> Hi RBF friends,
>>>
>>> Posting a summary of RBF discussions at coredev (mostly on transaction
>>> relay rate-limiting), user-elected descendant limit as a short term
>>> solution to unblock package RBF, and mining score, all open for feedback:
>>>
>>> One big concept discussed was baking DoS protection into the p2p level
>>> rather than policy level. TLDR: The fees are not paid to the node operator,
>>> but to the miner. While we can use fees to reason about the cost of an
>>> attack, if we're ultimately interested in preventing resource exhaustion,
>>> maybe we want to "stop the bleeding" when it happens and bound the amount
>>> of resources used in general. There were two main ideas:
>>>
>>> 1. Transaction relay rate limiting (i.e. the one you proposed above or
>>> some variation) with a feera

Re: [bitcoin-dev] Improving RBF Policy

2022-03-09 Thread Gloria Zhao via bitcoin-dev
Hi RBF friends,

Posting a summary of RBF discussions at coredev (mostly on transaction
relay rate-limiting), user-elected descendant limit as a short term
solution to unblock package RBF, and mining score, all open for feedback:

One big concept discussed was baking DoS protection into the p2p level
rather than policy level. TLDR: The fees are not paid to the node operator,
but to the miner. While we can use fees to reason about the cost of an
attack, if we're ultimately interested in preventing resource exhaustion,
maybe we want to "stop the bleeding" when it happens and bound the amount
of resources used in general. There were two main ideas:

1. Transaction relay rate limiting (i.e. the one you proposed above or some
variation) with a feerate-based priority queue
2. Staggered broadcast of replacement transactions: within some time
interval, maybe accept multiple replacements for the same prevout, but only
relay the original transaction.
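Idea (1) can be sketched as a feerate-ordered relay queue drained at a fixed rate (a minimal illustration with hypothetical names, not an implementation proposal): a flood of low-feerate announcements cannot crowd out higher-feerate transactions, it can only delay them.

```python
# Sketch of rate-limited transaction relay with a feerate-based priority
# queue: announcements are drained highest-feerate-first, at most
# `per_tick_limit` per relay tick.
import heapq

class RelayQueue:
    def __init__(self, per_tick_limit=5):
        self.heap = []  # max-heap simulated by negating the feerate
        self.per_tick_limit = per_tick_limit

    def enqueue(self, txid, feerate):
        heapq.heappush(self.heap, (-feerate, txid))

    def drain_tick(self):
        out = []
        while self.heap and len(out) < self.per_tick_limit:
            _, txid = heapq.heappop(self.heap)
            out.append(txid)
        return out

q = RelayQueue(per_tick_limit=2)
for txid, fr in [("low", 1.0), ("high", 50.0), ("mid", 10.0)]:
    q.enqueue(txid, fr)
print(q.drain_tick())  # ['high', 'mid'] -- 'low' waits for the next tick
print(q.drain_tick())  # ['low']
```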

Looking to solicit feedback on these ideas and the concept in general. Is
it a good idea (separate from RBF) to add rate-limiting in transaction
relay? And is it the right direction to think about RBF DoS protection this
way?

A lingering concern that I have about this idea is it would then be
possible to impact the propagation of another person’s transaction, i.e.,
an attacker can censor somebody’s transaction from ever being announced by
a node if they send enough transactions to fill up the rate limit.
Obviously this would be expensive since they're spending a lot on fees, but
I imagine it could be profitable in some situations to spend a few thousand
dollars to prevent anyone from hearing about a transaction for a few hours.
This might be a non-issue in practice if the rate limit is generous and
traffic isn’t horrendous, but is this a problem?

And if we don't require an increase in (i.e. addition of "new") absolute
fees, users are essentially allowed to “recycle” fees. In the scenario
where we prioritize relay based on feerate, users could potentially be
placed higher in the queue, ahead of other users’ transactions, multiple
times, without ever adding more fees to the transaction. Again, maybe this
isn’t a huge deal in practice if we set the parameters right, but it seems…
not great, in principle.

-

It's probably also a good idea to point out that there's been some
discussion happening on the gist containing my original post on this thread
(https://gist.github.com/glozow/25d9662c52453bd08b4b4b1d3783b9ff).

Suhas and Matt [proposed][0] adding a policy rule allowing users to specify
descendant limits on their transactions. For example, some nth bit of
nSequence with nVersion 3 means "this transaction won't have more than X
vbytes of descendants" where X = max(1000, vsizeof(tx)) or something. It
solves the pinning problem with package RBF where the attacker's package
contains a very large and high-fee descendant.
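The descendant-limit check itself is tiny; here is a sketch under the X = max(1000, vsizeof(tx)) formula above (function names are illustrative; the real policy would also interact with default ancestor/descendant limits):

```python
# Sketch of the user-elected descendant limit: if a transaction signals
# the (proposed) nSequence bit, its total descendant vsize is capped at
# X = max(1000, vsizeof(tx)) vbytes.

def descendant_limit(tx_vsize: int) -> int:
    return max(1000, tx_vsize)

def accept_descendant(parent_vsize, existing_descendant_vsize, new_vsize,
                      signals_limit: bool) -> bool:
    if not signals_limit:
        return True  # default descendant limits apply instead
    cap = descendant_limit(parent_vsize)
    return existing_descendant_vsize + new_vsize <= cap

# A 200vB commitment tx that signals the limit allows up to 1000vB of
# descendants -- enough for an honest CPFP, not a 100KvB pinning child.
assert accept_descendant(200, 0, 300, signals_limit=True)
assert not accept_descendant(200, 0, 100_000, signals_limit=True)
```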

We could add this policy and deploy it with package RBF/package relay so
that LN can use it by setting the user-elected descendant limit flag on
commitment transactions. (Otherwise package RBF is blocked until we find a
more comprehensive solution to the pinning attack).

It's simple to [implement][1] as a mempool policy, but adds some complexity
for wallets that use it, since it limits their use of UTXOs from
transactions with this bit set.

-

Also, coming back to the idea of "we can't just use {individual, ancestor}
feerate," I'm interested in soliciting feedback on adding a “mining score”
calculator. I've implemented one [here][2] which takes the transaction in
question, grabs all of the connected mempool transactions (including
siblings, coparents, etc., as they wouldn’t be in either the ancestor or
descendant sets), and builds a “block template” using our current mining
algorithm. The mining score of a transaction is the ancestor feerate at
which it is included.
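The idea can be condensed into a sketch (a simplification of the linked branch, not its actual code): run the ancestor-package mining loop over the connected cluster and report the ancestor feerate of the package in which the target transaction lands.

```python
# Sketch of a "mining score" calculator: simulate block building over the
# cluster of connected transactions and return the ancestor feerate of
# the package that pulls `target` into the template.

def mining_score(target, txs, ancestors):
    """txs: {txid: (fee, vsize)}; ancestors: {txid: set of ancestor txids}.
    Returns the ancestor feerate (sat/vB) at which `target` is included."""
    remaining = dict(txs)
    while remaining:
        def anc_feerate(txid):
            pkg = ({txid} | ancestors[txid]) & remaining.keys()
            return (sum(remaining[t][0] for t in pkg)
                    / sum(remaining[t][1] for t in pkg))
        best = max(remaining, key=anc_feerate)
        score = anc_feerate(best)
        pkg = ({best} | ancestors[best]) & remaining.keys()
        if target in pkg:
            return score
        for t in pkg:
            del remaining[t]
    raise KeyError(target)

txs = {"a": (100, 100), "b": (5_000, 100), "c": (300, 100)}
ancestors = {"a": set(), "b": {"a"}, "c": {"a"}}
# "a" is swept in by high-fee child "b": its mining score is b's ancestor
# feerate, (100 + 5000) / 200 = 25.5 sat/vB, not its own 1 sat/vB.
print(mining_score("a", txs, ancestors))  # 25.5
```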

This would be helpful for something like ancestor-aware funding and
fee-bumping in the wallet: [3], [4]. I think if we did the rate-limited
priority queue for transaction relay, we'd want to use something like this
as the priority value. And for RBF, we probably want to require that a
replacement have a higher mining score than the original transactions. This
could be computationally expensive to do all the time; it could be good to
cache it but that could make mempool bookkeeping more complicated. Also, if
we end up trying to switch to a candidate set-based algorithm for mining,
we'd of course need a new calculator.

[0]:
https://gist.github.com/glozow/25d9662c52453bd08b4b4b1d3783b9ff?permalink_comment_id=4058140#gistcomment-4058140
[1]: https://github.com/glozow/bitcoin/tree/2022-02-user-desclimit
[2]: https://github.com/glozow/bitcoin/tree/2022-02-mining-score
[3]: https://github.com/bitcoin/bitcoin/issues/9645
[4]: https://github.com/bitcoin/bitcoin/issues/15553

Best,
Gloria

On Tue, Feb 8, 2022 at 4:58 AM Anthony Towns  wrote:

> On Mon, Feb 07, 2022 at 11:16:26AM +, 

Re: [bitcoin-dev] Improving RBF Policy

2022-02-07 Thread Gloria Zhao via bitcoin-dev
up. Few thoughts and more context comments if it
>>> can help other readers.
>>>
>>> > For starters, the absolute fee pinning attack is especially
>>> > problematic if we apply the same rules (i.e. Rule #3 and #4) in
>>> > Package RBF. Imagine that Alice (honest) and Bob (adversary) share a
>>> > LN channel. The mempool is rather full, so their pre-negotiated
>>> > commitment transactions' feerates would not be considered high
>>> > priority by miners. Bob broadcasts his commitment transaction and
>>> > attaches a very large child (100KvB with 100,000sat in fees) to his
>>> > anchor output. Alice broadcasts her commitment transaction with a
>>> > fee-bumping child (200vB with 50,000sat fees which is a generous
>>> > 250sat/vB), but this does not meet the absolute fee requirement. She
>>> > would need to add another 50,000sat to replace Bob's commitment
>>> > transaction.
>>>
>>> Solving LN pinning attacks, what we're aiming for is enabling a fair
>>> feerate bid between the counterparties, thus either forcing the adversary
>>> to overbid or to disengage from the confirmation competition. If the
>>> replace-by-feerate rule is adopted, there shouldn't be an incentive for Bob
>>> to
>>> pick up the first option. Though if he does, that's a winning outcome
>>> for Alice, as one of the commitment transactions confirms and her
>>> time-sensitive second-stage HTLC can be subsequently confirmed.
>>>
>>> > It's unclear to me if
>>> > we have a very strong reason to change this, but noting it as a
>>> > limitation of our current replacement policy. See [#24007][12].
>>>
>>> Deployment of Taproot opens interesting possibilities in the
>>> vaults/payment channels design space, where the tapscripts can commit to
>>> different set of timelocks/quorum of keys. Even if the pre-signed states
>>> stay symmetric, whoever is the publisher, the feerate cost to spend can
>>> fluctuate.
>>>
>>> > While this isn't completely broken, and the user interface is
>>> > secondary to the safety of the mempool policy
>>>
>>> I think with L2s transaction broadcast backend, the stability and
>>> clarity of the RBF user interface is primary. What we could be worried
>>> about is an overly complex interface easing the way for an attacker to
>>> trigger your L2 node to issue policy-invalid chain of transactions.
>>> Especially, when we consider that an attacker might have leverage on chain
>>> of transactions composition ("force broadcast of commitment A then
>>> commitment B, knowing they will share a CPFP") or even transactions size
>>> ("overload commitment A with HTLCs").
>>>
>>> > * If the original transaction is in the top {0.75MvB, 1MvB} of the
>>> > mempool, apply the current rules (absolute fees must increase and
>>> > pay for the replacement transaction's new bandwidth). Otherwise, use a
>>> > feerate-only rule.
>>>
>>> How this new replacement rule would behave if you have a parent in the
>>> "replace-by-feerate" half but the child is in the "replace-by-fee" one ?
>>>
>>> If we allow the replacement of the parent based on the feerate, we might
>>> decrease the top block absolute fees.
>>>
>>> If we block the replacement of the parent based on the feerate because
>>> the replacement absolute fees aren't above the replaced package, we still
>>> preclude a pinning vector. The child might be low-feerate junk and even
>>> attached to a low ancestor-score branch.
>>>
>>> If I'm correct on this limitation, maybe we could turn off the
>>> "replace-by-fee" behavior as soon as the mempool is fulfilled with a few
>>> blocks ?
>>>
>>> > * Rate-limit how many replacements we allow per prevout.
>>>
>>> Depending on how it is implemented, though I would be concerned it
>>> introduces a new pinning vector in the context of shared-utxo. If it's a
>>> hardcoded constant, it could be exhausted by an adversary starting at the
>>> lowest acceptable feerate then slowly increasing while still not reaching
>>> the top of the mempool. Same if it's time-based or block-based, no
>>> guarantee the replacement slot is honestly used by your counterparty.
>>>
>>> Further, an above-the-average replacement frequency might just be the
>>> reflection of your confirmatio

[bitcoin-dev] Improving RBF Policy

2022-01-27 Thread Gloria Zhao via bitcoin-dev
Hi everyone,

This post discusses limitations of current Bitcoin Core RBF policy and
attempts to start a conversation about how we can improve it,
summarizing some ideas that have been discussed. Please reply if you
have any new input on issues to be solved and ideas for improvement!

Just in case I've screwed up the text wrapping again, another copy can be
found here: https://gist.github.com/glozow/25d9662c52453bd08b4b4b1d3783b9ff

## Background

Please feel free to skip this section if you are already familiar
with RBF.

Nodes may receive *conflicting* unconfirmed transactions, aka
"double spends" of the same inputs. Instead of always keeping the
first transaction, since v0.12, Bitcoin Core mempool policy has
included a set of Replace-by-Fee (RBF) criteria that allows the second
transaction to replace the first one and any descendants it may have.

Bitcoin Core RBF policy was previously documented as BIP 125.
The current RBF policy is documented [here][1]. In summary:

1. The directly conflicting transactions all signal replaceability
   explicitly.

2. The replacement transaction only includes an unconfirmed input if
   that input was included in one of the directly conflicting
transactions.

3. The replacement transaction pays an absolute fee of at least the
   sum paid by the original transactions.

4. The additional fees pays for the replacement transaction's
   bandwidth at or above the rate set by the node's *incremental relay
feerate*.

5. The sum of all directly conflicting transactions' descendant counts
   (number of transactions inclusive of itself and its descendants)
does not exceed 100.
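The five rules above can be sketched as a single checker (a simplification: real validation operates on full transactions and mempool entries, and signaling/input checks are done during script and mempool lookup):

```python
# Sketch of the five replacement checks, with the default Bitcoin Core
# incremental relay feerate of 1 sat/vB.

INCREMENTAL_RELAY_FEERATE = 1.0  # sat/vB

def check_rbf(replacement_fee, replacement_vsize, originals,
              all_signal, no_new_unconfirmed_inputs, total_descendants):
    """originals: list of (fee, vsize) for the directly conflicting txs
    and their to-be-evicted descendants."""
    original_fees = sum(f for f, _ in originals)
    checks = {
        # Rule 1: every directly conflicting tx signals replaceability.
        "rule1_signaling": all_signal,
        # Rule 2: no unconfirmed inputs beyond those in the conflicts.
        "rule2_no_new_unconfirmed": no_new_unconfirmed_inputs,
        # Rule 3: absolute fee at least the sum paid by the originals.
        "rule3_absolute_fee": replacement_fee >= original_fees,
        # Rule 4: the ADDITIONAL fee pays for the replacement's bandwidth.
        "rule4_bandwidth": (replacement_fee - original_fees
                            >= replacement_vsize * INCREMENTAL_RELAY_FEERATE),
        # Rule 5: at most 100 transactions evicted.
        "rule5_eviction_count": total_descendants <= 100,
    }
    return all(checks.values()), checks

ok, detail = check_rbf(replacement_fee=2_000, replacement_vsize=150,
                       originals=[(1_000, 200)], all_signal=True,
                       no_new_unconfirmed_inputs=True, total_descendants=1)
assert ok  # pays 1000sat more than the original: > 150vB * 1 sat/vB
```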

We can split these rules into 3 categories/goals:

- **Allow Opting Out**: Some applications/businesses are unable to
  handle transactions that are replaceable (e.g. merchants that use
zero-confirmation transactions). We (try to) help these businesses by
honoring BIP125 signaling; we won't replace transactions that have not
opted in.

- **Incentive Compatibility**: Ensure that our RBF policy would not
  accept replacement transactions which would decrease fee profits
  of a miner. In general, if our mempool policy deviates from what is
economically rational, it's likely that the transactions in our
mempool will not match the ones in miners' mempools, making our
fee estimation, compact block relay, and other mempool-dependent
functions unreliable. Incentive-incompatible policy may also
encourage transaction submission through routes other than the p2p
network, harming censorship-resistance and privacy of Bitcoin payments.

- **DoS Protection**: Limit two types of DoS attacks on the node's
  mempool: (1) the number of times a transaction can be replaced and
(2) the volume of transactions that can be evicted during a
replacement.

Even more abstract: our goal is to make a replacement policy that
results in a useful interface for users and safe policy for
node operators.

## Motivation

There are a number of known problems with the current RBF policy.
Many of these shortcomings exist due to mempool limitations at the
time RBF was implemented or result from new types of Bitcoin usage;
they are not criticisms of the original design.

### Pinning Attacks

The most pressing concern is that attackers may take advantage of
limitations in RBF policy to prevent other users' transactions from
being mined or getting accepted as a replacement.

#### SIGHASH_ANYONECANPAY Pinning

BIP125#2 can be bypassed by creating intermediary transactions to be
replaced together. Anyone can simply split a 1-input 1-output
transaction off from the replacement transaction, then broadcast the
transaction as is. This can always be done, and quite cheaply. More
details in [this comment][2].

In general, if a transaction is signed with SIGHASH\_ANYONECANPAY,
anybody can just attach a low feerate parent to this transaction and
lower its ancestor feerate.  Even if you require SIGHASH\_ALL which
prevents an attacker from changing any outputs, the input can be a
very low amount (e.g. just above the dust limit) from a low-fee
ancestor and still bring down the ancestor feerate of the transaction.
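A quick arithmetic sketch of the ancestor-feerate drag described above (illustrative numbers): the victim broadcasts at 50 sat/vB, but one huge low-fee ancestor collapses the ancestor feerate to junk levels.

```python
# Sketch: attaching a large low-fee ancestor destroys a transaction's
# ancestor feerate, and with it the transaction's mining priority.

def ancestor_feerate(fees_and_sizes):
    """fees_and_sizes: list of (fee_sat, vsize_vb) for tx + ancestors."""
    fee = sum(f for f, _ in fees_and_sizes)
    vsize = sum(v for _, v in fees_and_sizes)
    return fee / vsize

victim = (10_000, 200)            # 50 sat/vB on its own
attacker_parent = (100, 99_800)   # ~0.001 sat/vB junk ancestor

alone = ancestor_feerate([victim])                    # 50.0 sat/vB
pinned = ancestor_feerate([victim, attacker_parent])  # ~0.1 sat/vB
assert alone == 50.0
assert pinned < 0.2  # mining priority destroyed by the added ancestor
```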

TLDR: if your transaction is signed with SIGHASH\_ANYONECANPAY and
signals replaceability, regardless of the feerate you broadcast at, an
attacker can lower its mining priority by adding an ancestor.

#### Absolute Fee

The restriction of requiring replacement transactions to increase the
absolute fee of the mempool has been described as "bonkers." If the
original transaction has a very large descendant that pays a large
amount of fees, even if it has a low feerate, the replacement
transaction must now pay those fees in order to meet Rule #3.
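The cost of this pin is easy to quantify with a sketch (illustrative numbers): under Rule #3 the honest replacement must outbid the *total* fees of the original and its descendants, even though the attached descendant is low-feerate junk.

```python
# Sketch of the Rule #3 absolute-fee pin: the replacement must exceed
# the combined fees of the to-be-evicted transactions.

original = {"fee": 500, "vsize": 200}           # 2.5 sat/vB commitment tx
pin_child = {"fee": 100_000, "vsize": 100_000}  # 1 sat/vB, but huge fees

# Fees the replacement must at least match under Rule #3 (plus Rule #4's
# incremental bandwidth fee on top):
must_pay = original["fee"] + pin_child["fee"]
print(must_pay)  # 100500 sat -- vs. ~500 sat absent the pinning child
```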

#### Package RBF

There are a number of reasons why, in order to enable Package RBF, we
cannot use the same criteria.

For starters, the absolute fee pinning attack is especially
problematic if we apply the same rules (i.e. Rule #3 and #4) in
Package RBF. Imagine that Alice (honest) and Bob (adversary) share a
LN 

Re: [bitcoin-dev] A fee-bumping model

2021-12-07 Thread Gloria Zhao via bitcoin-dev
Hi Darosior and Ariard,

Thank you for your work looking into fee-bumping so thoroughly, and for
sharing your results. I agree about fee-bumping's importance in contract
security and feel that it's often under-prioritized. In general, what
you've described in this post, to me, is strong motivation for some of the
proposed changes to RBF we've been discussing. Mostly, I have some
questions.

> The part of Revault we are interested in for this study is the delegation
process, and more
> specifically the application of spending policies by network monitors
(watchtowers).

I'd like to better understand how fee-bumping would be used, i.e. how the
watchtower model works:
- Do all of the vault parties both deposit to the vault and a refill/fee to
the watchtower, is there a reward the watchtower collects for a successful
Cancel, or something else? (Apologies if there's a thorough explanation
somewhere that I haven't already seen).
- Do we expect watchtowers tracking multiple vaults to be batching multiple
Cancel transaction fee-bumps?
- Do we expect vault users to be using multiple watchtowers for a better
trust model? If so, and we're expecting batched fee-bumps, won't those
conflict?

> For Revault we can afford to introduce malleability in the Cancel
transaction since there is no
> second-stage transaction depending on its txid. Therefore it is
pre-signed with ANYONECANPAY. We
> can't use ANYONECANPAY|SINGLE since it would open a pinning vector [3].
Note how we can't leverage
> the carve out rule, and neither can any other more-than-two-parties
contract.

We've already talked about this offline, but I'd like to point out here
that even transactions signed with ANYONECANPAY|ALL can be pinned by RBF
unless we add an ancestor score rule. [0], [1] (numbers are inaccurate,
Cancel Tx feerates wouldn't be that low, but just to illustrate what the
attack would look like)

[0]:
https://user-images.githubusercontent.com/25183001/135104603-9e775062-5c8d-4d55-9bc9-6e9db92cfe6d.png
[1]:
https://user-images.githubusercontent.com/25183001/145044333-2f85da4a-af71-44a1-bc21-30c388713a0d.png

> can't use ANYONECANPAY|SINGLE since it would open a pinning vector [3].
Note how we can't leverage
> the carve out rule, and neither can any other more-than-two-parties
contract.

Well stated about CPFP carve out. I suppose the generalization is that
allowing n extra ancestorcount=2 descendants to a transaction means it can
help contracts with <=n+1 parties (more accurately, outputs)? I wonder if
it's possible to devise a different approach for limiting
ancestors/descendants, e.g. by height/width/branching factor of the family
instead of count... :shrug:

> You could keep a single large UTxO and peel it as you need to sponsor
transactions. But this means
> that you need to create a coin of a specific value according to your need
at the current feerate
> estimation, hope to have it confirmed in a few blocks (at least for now!
[5]), and hope that the
> value won't be obsolete by the time it confirmed.

IIUC, a Cancel transaction can be generalized as a 1-in-1-out where the
input is presigned with counterparties, SIGHASH_ANYONECANPAY. The fan-out
UTXO pool approach is a clever solution. I also think this smells like a
case where improving lower-level RBF rules is more appropriate than
requiring applications to write workarounds and generate extra
transactions. Seeing that the BIP125#2 (no new unconfirmed inputs)
restriction really hurts in this case, if that rule were removed, would you
be able to simply keep the 1 big UTXO per vault and cut out the exact
nValue you need to fee-bump Cancel transactions? Would that feel less like
"burning" for the sake of fee-bumping?

> First of all, when to fee-bump? At fixed time intervals? At each block
connection? It sounds like,
> given a large enough timelock, you could try to greed by "trying your
luck" at a lower feerate and
> only re-bumping every N blocks. You would then start aggressively bumping
at every block after M
> blocks have passed.

I'm wondering if you also considered other questions like:
- Should a fee-bumping strategy be dependent upon the rate of incoming
transactions? To me, it seems like the two components are (1) what's in the
mempool and (2) what's going to trickle into the mempool between now and
the target block. The first component is best-effort keeping
incentive-compatible mempool; historical data and crystal ball look like
the only options for incorporating the 2nd component.
- Should the fee-bumping strategy depend on how close you are to your
timelock expiry? (though this seems like a potential privacy leak, and the
game theory could get weird as you mentioned).
- As long as you have a good fee estimator (i.e. given a current mempool,
can get an accurate feerate given a % probability of getting into target
block n), is there any reason to devise a fee-bumping strategy beyond
picking a time interval?

It would be interesting to see stats on the spread of feerates in 

Re: [bitcoin-dev] death to the mempool, long live the mempool

2021-10-26 Thread Gloria Zhao via bitcoin-dev
Hi Lisa,

Some background for people who are not familiar with mempool:

The mempool is a cache of unconfirmed transactions, designed in a way
to help miners efficiently pick the highest feerate packages to
include in new blocks. It stores more than a block's worth of
transactions because transaction volume fluctuates and any rational
miner would try to maximize the fees in their blocks; in a reorg, we
also don't want to completely forget what transactions were in the
now-stale tip.

In Bitcoin Core, full nodes keep a mempool by default. The additional
requirements for keeping a mempool are minimal (300MB storage, can be
configured to be lower) because anyone, anywhere in the world, should
be able to run a node and broadcast a Bitcoin payment without special
connectivity to some specific set of people or expensive/inaccessible
hardware. Perhaps connecting directly to miners can be a solution for
some people, but I don't think it's healthy for the network.

Some benefits of keeping a mempool as a non-mining node include:
- Fee estimation based on your node's knowledge of unconfirmed
transactions + historical data.
- Dramatically increased block validation (and thus propagation)
speed, since you cache signature and script results of transactions
before they are confirmed.
- Reduced block relay bandwidth usage (Bitcoin Core nodes use BIP152
compact block relay), as you don't need to re-request the block
transactions you already have in your mempool.
- Wallet ability to send/receive transactions that spend unconfirmed outputs.

> I had the realization that the mempool is obsolete and should be eliminated.

I assume you mean that the mempool should still exist but be turned
off for non-mining nodes. A block template producer needs to keep
unconfirmed transactions somewhere.
On Bitcoin Core today, you can use the -blocksonly config option to
ignore incoming transactions (effectively switching off your mempool),
but there are strong reasons not to do this:
- It is trivial for your peers to detect that all transactions
broadcasted by your node = from your wallet. Linking your node to your
transactions is a very bad privacy leak.
- You must query someone else for what feerate to put on your transaction.
- You can't use BIP152 compact block relay, so your network bandwidth
usage spikes at every block. You also can't cache validation results,
so your block validation speed slows down.

> Removing the mempool would greatly reduce the bandwidth requirement for 
> running a node...

If you're having problems with your individual node's bandwidth usage,
you can also decrease the number of connections you make or turn off
incoming connections. There are efforts to reduce transaction relay
bandwidth usage network-wide [1].

> Removing the mempool would... keep intentionality of transactions private 
> until confirmed/irrevocable...

I'm confused - what is the purpose of keeping a transaction private
until it confirms? Don't miners still see your transaction? A
confirmed transaction is not irrevocable; reorgs happen.

> Removing the mempool would... naturally resolve all current issues inherent 
> in package relay and rbf rules.

Removing the mempool does not help with this. How does a miner decide
whether a conflicting transaction is an economically-advantageous
replacement or a DoS attack? How do you submit your CPFP if the parent
is below the mempool minimum feerate? Do they already have a different
relay/mempool implementation handling all of these problems but
aren't sharing it with the rest of the community?

> Initial feerate estimation would need to be based on published blocks, not 
> pending transactions (as this information would no longer be available), or 
> from direct interactions with block producers.

There are many reasons why using only published blocks for fee
estimates is a flawed design, including:

- The miner of a block can artificially inflate the feerate of the
transactions in their mempool simply by including a few of their own
transactions that pay extremely high feerates. This costs them
nothing, as they collect the fees.
- A miner constructs a block based on the transactions in their
mempool. Your transaction's feerate may have been enough to be
included 2 blocks ago or a week ago, but it will be compared to the
other unconfirmed transactions available to the miner now. They can
tell you what's in their mempool or what the next-block feerate is,
but you would be a fool to believe them.

See also [2],[3].

> Provided the number of block template producing actors remains beneath, say 
> 1000, it’d be quite feasible to publish a list of tor endpoints that nodes 
> can independently + directly submit their transactions to. In fact, merely 
> allowing users to select their own list of endpoints to use alternatively to 
> the mempool would be a low effort starting point for the eventual replacement.

As a thought experiment, let's imagine we have some public registry of
mining nodes' tor 

Re: [bitcoin-dev] Proposal: Package Mempool Accept and Package RBF

2021-09-23 Thread Gloria Zhao via bitcoin-dev
a loss of funds.
>>>
>>> That said, if you're broadcasting commitment transactions without
>>> time-sensitive HTLC outputs, I think the batching is effectively a fee
>>> saving as you don't have to duplicate the CPFP.
>>>
>>> IMHO, I'm leaning towards deploying during a first phase
>>> 1-parent/1-child. I think it's the most conservative step still improving
>>> second-layer safety.
>>>
>>> > *Rationale*:  It would be incorrect to use the fees of transactions
>>> that are
>>> > already in the mempool, as we do not want a transaction's fees to be
>>> > double-counted for both its individual RBF and package RBF.
>>>
>>> I'm unsure about the logical order of the checks proposed.
>>>
>>> If A+B is submitted to replace A', where A pays 0 sats, B pays 200 sats
>>> and A' pays 100 sats. If we apply the individual RBF on A, A+B acceptance
>>> fails. For this reason I think the individual RBF should be bypassed and
>>> only the package RBF apply ?
>>>
>>> Note this situation is plausible, with current LN design, your
>>> counterparty can have a commitment transaction with a better fee just by
>>> selecting a higher `dust_limit_satoshis` than yours.
>>>
>>> > Examples F and G [14] show the same package, but P1 is submitted
>>> > individually before
>>> > the package in example G. In example F, we can see that the 300vB
>>> package
>>> > pays
>>> > an additional 200sat in fees, which is not enough to pay for its own
>>> > bandwidth
>>> > (BIP125#4). In example G, we can see that P1 pays enough to replace
>>> M1, but
>>> > using P1's fees again during package submission would make it look
>>> like a
>>> > 300sat
>>> > increase for a 200vB package. Even including its fees and size would
>>> not be
>>> > sufficient in this example, since the 300sat looks like enough for the
>>> 300vB
>>> > package. The calculation after deduplication is 100sat increase for a
>>> > package
>>> > of size 200vB, which correctly fails BIP125#4. Assume all transactions
>>> have
>>> > a
>>> > size of 100vB.
>>>
>>> What problem are you trying to solve by the package feerate *after*
>>> dedup rule ?
>>>
>>> My understanding is that an in-package transaction might be already in
>>> the mempool. Therefore, to compute a correct RBF penalty replacement, the
>>> vsize of this transaction could be discarded lowering the cost of package
>>> RBF.
>>>
>>> If we keep a "safe" dedup mechanism (see my point above), I think this
>>> discount is justified, as the validation cost of node operators is paid for
>>> ?
>>>
>>> > The child cannot replace mempool transactions.
>>>
>>> Let's say you issue package A+B, then package C+B', where B' is a child
>>> of both A and C. This rule fails the acceptance of C+B' ?
>>>
>>> I think this is a footgunish API, as if a package issuer send the
>>> multiple-parent-one-child package A,B,C,D where D is the child of A,B,C.
>>> Then try to broadcast the higher-feerate C'+D' package, it should be
>>> rejected. So it's breaking the naive broadcaster assumption that a
>>> higher-feerate/higher-fee package always replaces ? And it might be unsafe
>>> in protocols where states are symmetric. E.g a malicious counterparty
>>> broadcasts first S+A, then you honestly broadcast S+B, where B pays better
>>> fees.
>>>
>>> > All mempool transactions to be replaced must signal replaceability.
>>>
>>> I think this is unsafe for L2s if counterparties have malleability of
>>> the child transaction. They can block your package replacement by
>>> opting-out from RBF signaling. IIRC, LN's "anchor output" presents such an
>>> ability.
>>>
>>> I think it's better to either fix inherited signaling or move towards
>>> full-rbf.
>>>
>>> > if a package parent has already been submitted, it would
>>> > look
>>> >like the child is spending a "new" unconfirmed input.
>>>
>>> I think this is an issue brought by the trimming during the dedup phase.
>>> If we preserve the package integrity, only re-using the tx-level checks
>>> results of already in-mempool transactions to gain in CPU time we won't
>>> have this issue. Package childs can add unconfirmed inputs as long as
>>> they're in-package, the bip125 rule2 is only evaluated against parents ?

Re: [bitcoin-dev] Proposal: Package Mempool Accept and Package RBF

2021-09-22 Thread Gloria Zhao via bitcoin-dev
(please let me know if I'm misunderstanding).
>>>>
>>>> I believe this attack is mitigated as long as we attempt to submit
>>>> transactions individually (and thus take advantage of CPFP carve out)
>>>> before attempting package validation. So, in scenario A2, even if the
>>>> mempool receives a package with A+C, it would deduplicate A, submit C as an
>>>> individual transaction, and allow it due to the CPFP carve out exemption. A
>>>> more general goal is: if a transaction would propagate successfully on its
>>>> own now, it should still propagate regardless of whether it is included in
>>>> a package. The best way to ensure this, as far as I can tell, is to always
>>>> try to submit them individually first.
>>>>
>>>> I would note that this proposal doesn't accommodate something like
>>>> diagram B, where C is getting CPFP carve out and wants to bring a +1 (e.g.
>>>> C has very low fees and is bumped by D). I don't think this is a use case
>>>> since C should be the one fee-bumping A, but since we're talking about
>>>> limitations around the CPFP carve out, this is it.
>>>>
>>>> Let me know if this addresses your concerns?
>>>>
>>>> Thanks,
>>>> Gloria
>>>>
>>>> On Mon, Sep 20, 2021 at 10:19 AM Bastien TEINTURIER 
>>>> wrote:
>>>>
>>>>> Hi Gloria,
>>>>>
>>>>> Thanks for this detailed post!
>>>>>
>>>>> The illustrations you provided are very useful for this kind of graph
>>>>> topology problems.
>>>>>
>>>>> The rules you lay out for package RBF look good to me at first glance
>>>>> as there are some subtle improvements compared to BIP 125.
>>>>>
>>>>> > 1. A package cannot exceed `MAX_PACKAGE_COUNT=25` count and
>>>>> > `MAX_PACKAGE_SIZE=101KvB` total size [8]
>>>>>
>>>>> I have a question regarding this rule, as your example 2C could be
>>>>> concerning for LN (unless I didn't understand it correctly).
>>>>>
>>>>> This also touches on the package RBF rule 5 ("The package cannot
>>>>> replace more than 100 mempool transactions.")
>>>>>
>>>>> In your example we have a parent transaction A already in the mempool
>>>>> and an unrelated child B. We submit a package C + D where C spends
>>>>> another of A's inputs. You're highlighting that this package may be
>>>>> rejected because of the unrelated transaction(s) B.
>>>>>
>>>>> The way I see this, an attacker can abuse this rule to ensure
>>>>> transaction A stays pinned in the mempool without confirming by
>>>>> broadcasting a set of child transactions that reach these limits
>>>>> and pay low fees (where A would be a commit tx in LN).
>>>>>
>>>>> We had to create the CPFP carve-out rule explicitly to work around
>>>>> this limitation, and I think it would be necessary for package RBF
>>>>> as well, because in such cases we do want to be able to submit a
>>>>> package A + C where C pays high fees to speed up A's confirmation,
>>>>> regardless of unrelated unconfirmed children of A...
>>>>>
>>>>> We could submit only C to benefit from the existing CPFP carve-out
>>>>> rule, but that wouldn't work if our local mempool doesn't have A yet,
>>>>> but other remote mempools do.
>>>>>
>>>>> Is my concern justified? Is this something that we should dig into a
>>>>> bit deeper?
>>>>>
>>>>> Thanks,
>>>>> Bastien
>>>>>
>>>>> Le jeu. 16 sept. 2021 à 09:55, Gloria Zhao via bitcoin-dev <
>>>>> bitcoin-dev@lists.linuxfoundation.org> a écrit :
>>>>>
>>>>>> Hi there,
>>>>>>
>>>>>> I'm writing to propose a set of mempool policy changes to enable
>>>>>> package
>>>>>> validation (in preparation for package relay) in Bitcoin Core. These
>>>>>> would not
>>>>>> be consensus or P2P protocol changes. However, since mempool policy
>>>>>> significantly affects transaction propagation, I believe this is
>>>>>> relevant for
>>>>>> the mailing list.
>>>>>>
My proposal enables packages consisting of multiple parents and 1 child.

Re: [bitcoin-dev] Proposal: Package Mempool Accept and Package RBF

2021-09-21 Thread Gloria Zhao via bitcoin-dev
The illustrations you provided are very useful for this kind of graph
>>> topology problems.
>>>
>>> The rules you lay out for package RBF look good to me at first glance
>>> as there are some subtle improvements compared to BIP 125.
>>>
>>> > 1. A package cannot exceed `MAX_PACKAGE_COUNT=25` count and
>>> > `MAX_PACKAGE_SIZE=101KvB` total size [8]
>>>
>>> I have a question regarding this rule, as your example 2C could be
>>> concerning for LN (unless I didn't understand it correctly).
>>>
>>> This also touches on the package RBF rule 5 ("The package cannot
>>> replace more than 100 mempool transactions.")
>>>
>>> In your example we have a parent transaction A already in the mempool
>>> and an unrelated child B. We submit a package C + D where C spends
>>> another of A's inputs. You're highlighting that this package may be
>>> rejected because of the unrelated transaction(s) B.
>>>
>>> The way I see this, an attacker can abuse this rule to ensure
>>> transaction A stays pinned in the mempool without confirming by
>>> broadcasting a set of child transactions that reach these limits
>>> and pay low fees (where A would be a commit tx in LN).
>>>
>>> We had to create the CPFP carve-out rule explicitly to work around
>>> this limitation, and I think it would be necessary for package RBF
>>> as well, because in such cases we do want to be able to submit a
>>> package A + C where C pays high fees to speed up A's confirmation,
>>> regardless of unrelated unconfirmed children of A...
>>>
>>> We could submit only C to benefit from the existing CPFP carve-out
>>> rule, but that wouldn't work if our local mempool doesn't have A yet,
>>> but other remote mempools do.
>>>
>>> Is my concern justified? Is this something that we should dig into a
>>> bit deeper?
>>>
>>> Thanks,
>>> Bastien
>>>
>>> Le jeu. 16 sept. 2021 à 09:55, Gloria Zhao via bitcoin-dev <
>>> bitcoin-dev@lists.linuxfoundation.org> a écrit :
>>>
>>>> Hi there,
>>>>
>>>> I'm writing to propose a set of mempool policy changes to enable package
>>>> validation (in preparation for package relay) in Bitcoin Core. These
>>>> would not
>>>> be consensus or P2P protocol changes. However, since mempool policy
>>>> significantly affects transaction propagation, I believe this is
>>>> relevant for
>>>> the mailing list.
>>>>
>>>> My proposal enables packages consisting of multiple parents and 1
>>>> child. If you
>>>> develop software that relies on specific transaction relay assumptions
>>>> and/or
>>>> are interested in using package relay in the future, I'm very
>>>> interested to hear
>>>> your feedback on the utility or restrictiveness of these package
>>>> policies for
>>>> your use cases.
>>>>
>>>> A draft implementation of this proposal can be found in [Bitcoin Core
>>>> PR#22290][1].
>>>>
>>>> An illustrated version of this post can be found at
>>>> https://gist.github.com/glozow/dc4e9d5c5b14ade7cdfac40f43adb18a.
>>>> I have also linked the images below.
>>>>
>>>> ## Background
>>>>
>>>> Feel free to skip this section if you are already familiar with mempool
>>>> policy
>>>> and package relay terminology.
>>>>
>>>> ### Terminology Clarifications
>>>>
>>>> * Package = an ordered list of related transactions, representable by a
>>>> Directed
>>>>   Acyclic Graph.
>>>> * Package Feerate = the total modified fees divided by the total
>>>> virtual size of
>>>>   all transactions in the package.
>>>> - Modified fees = a transaction's base fees + fee delta applied by
>>>> the user
>>>>   with `prioritisetransaction`. As such, we expect this to vary
>>>> across
>>>> mempools.
>>>> - Virtual Size = the maximum of virtual sizes calculated using
>>>> [BIP141
>>>>   virtual size][2] and sigop weight. [Implemented here in Bitcoin
>>>> Core][3].
>>>> - Note that feerate is not necessarily based on the base fees and
>>>> serialized
>>>>   size.
>>>>
>>>> * Fee-Bumping = user/wallet actions that take advantage of miner
>>>> incentives to boost a transaction's candidacy for inclusion in a block,
>>>> including Child Pays for Parent (CPFP) and [BIP125][12] Replace-by-Fee (RBF).

Re: [bitcoin-dev] Proposal: Package Mempool Accept and Package RBF

2021-09-21 Thread Gloria Zhao via bitcoin-dev
Hi Bastien,

Thank you for your feedback!

> In your example we have a parent transaction A already in the mempool
> and an unrelated child B. We submit a package C + D where C spends
> another of A's inputs. You're highlighting that this package may be
> rejected because of the unrelated transaction(s) B.

> The way I see this, an attacker can abuse this rule to ensure
> transaction A stays pinned in the mempool without confirming by
> broadcasting a set of child transactions that reach these limits
> and pay low fees (where A would be a commit tx in LN).

I believe you are describing a pinning attack in which your adversarial
counterparty attempts to monopolize the mempool descendant limit of the
shared  transaction A in order to prevent you from submitting a fee-bumping
child C; I've tried to illustrate this as diagram A here:
https://user-images.githubusercontent.com/25183001/134159860-068080d0-bbb6-4356-ae74-00df00644c74.png
(please let me know if I'm misunderstanding).

I believe this attack is mitigated as long as we attempt to submit
transactions individually (and thus take advantage of CPFP carve out)
before attempting package validation. So, in scenario A2, even if the
mempool receives a package with A+C, it would deduplicate A, submit C as an
individual transaction, and allow it due to the CPFP carve out exemption. A
more general goal is: if a transaction would propagate successfully on its
own now, it should still propagate regardless of whether it is included in
a package. The best way to ensure this, as far as I can tell, is to always
try to submit them individually first.
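The "individual first, then package" order described above can be sketched as follows. The helper names `try_accept_single` and `try_accept_package` are hypothetical stand-ins; the actual validation code in Bitcoin Core is structured differently:

```python
# Hedged sketch of the submission order described above. The callables
# try_accept_single() and try_accept_package() are hypothetical stand-ins
# for the node's real validation entry points.

def process_package(package, try_accept_single, try_accept_package):
    """Try each transaction on its own first, so that anything that would
    propagate individually (e.g. via the CPFP carve out) still does,
    regardless of being bundled in a package."""
    remaining = []
    for tx in package:
        if not try_accept_single(tx):
            remaining.append(tx)
    if remaining:
        # Only the leftovers are evaluated under package validation rules.
        try_accept_package(remaining)
```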

I would note that this proposal doesn't accommodate something like diagram
B, where C is getting CPFP carve out and wants to bring a +1 (e.g. C has
very low fees and is bumped by D). I don't think this is a use case since C
should be the one fee-bumping A, but since we're talking about limitations
around the CPFP carve out, this is it.

Let me know if this addresses your concerns?

Thanks,
Gloria

On Mon, Sep 20, 2021 at 10:19 AM Bastien TEINTURIER 
wrote:

> Hi Gloria,
>
> Thanks for this detailed post!
>
> The illustrations you provided are very useful for this kind of graph
> topology problems.
>
> The rules you lay out for package RBF look good to me at first glance
> as there are some subtle improvements compared to BIP 125.
>
> > 1. A package cannot exceed `MAX_PACKAGE_COUNT=25` count and
> > `MAX_PACKAGE_SIZE=101KvB` total size [8]
>
> I have a question regarding this rule, as your example 2C could be
> concerning for LN (unless I didn't understand it correctly).
>
> This also touches on the package RBF rule 5 ("The package cannot
> replace more than 100 mempool transactions.")
>
> In your example we have a parent transaction A already in the mempool
> and an unrelated child B. We submit a package C + D where C spends
> another of A's inputs. You're highlighting that this package may be
> rejected because of the unrelated transaction(s) B.
>
> The way I see this, an attacker can abuse this rule to ensure
> transaction A stays pinned in the mempool without confirming by
> broadcasting a set of child transactions that reach these limits
> and pay low fees (where A would be a commit tx in LN).
>
> We had to create the CPFP carve-out rule explicitly to work around
> this limitation, and I think it would be necessary for package RBF
> as well, because in such cases we do want to be able to submit a
> package A + C where C pays high fees to speed up A's confirmation,
> regardless of unrelated unconfirmed children of A...
>
> We could submit only C to benefit from the existing CPFP carve-out
> rule, but that wouldn't work if our local mempool doesn't have A yet,
> but other remote mempools do.
>
> Is my concern justified? Is this something that we should dig into a
> bit deeper?
>
> Thanks,
> Bastien
>
> Le jeu. 16 sept. 2021 à 09:55, Gloria Zhao via bitcoin-dev <
> bitcoin-dev@lists.linuxfoundation.org> a écrit :
>
>> Hi there,
>>
>> I'm writing to propose a set of mempool policy changes to enable package
>> validation (in preparation for package relay) in Bitcoin Core. These
>> would not
>> be consensus or P2P protocol changes. However, since mempool policy
>> significantly affects transaction propagation, I believe this is relevant
>> for
>> the mailing list.
>>
>> My proposal enables packages consisting of multiple parents and 1 child.
>> If you
>> develop software that relies on specific transaction relay assumptions
>> and/or
>> are interested in using package relay in the future, I'm very interested
>> to hear
>> your feedback on the utility or restrictiveness of these package policies
>> for
>> your use cases

Re: [bitcoin-dev] Proposal: Package Mempool Accept and Package RBF

2021-09-20 Thread Gloria Zhao via bitcoin-dev
where B pays better
> fees.
>
> > All mempool transactions to be replaced must signal replaceability.
>
> I think this is unsafe for L2s if counterparties have malleability of the
> child transaction. They can block your package replacement by opting-out
> from RBF signaling. IIRC, LN's "anchor output" presents such an ability.
>
> I think it's better to either fix inherited signaling or move towards
> full-rbf.
>
> > if a package parent has already been submitted, it would
> > look
> >like the child is spending a "new" unconfirmed input.
>
> I think this is an issue brought by the trimming during the dedup phase.
> If we preserve the package integrity, only re-using the tx-level checks
> results of already in-mempool transactions to gain in CPU time we won't
> have this issue. Package childs can add unconfirmed inputs as long as
> they're in-package, the bip125 rule2 is only evaluated against parents ?
>
> > However, we still achieve the same goal of requiring the
> > replacement
> > transactions to have a ancestor score at least as high as the original
> > ones.
>
> I'm not sure if this holds...
>
> Let's say you have in-mempool A, B where A pays 10 sat/vb for 100 vbytes
> and B pays 10 sat/vb for 100 vbytes. You have the candidate replacement D
> spending both A and C where D pays 15sat/vb for 100 vbytes and C pays 1
> sat/vb for 1000 vbytes.
>
> Package A + B ancestor score is 10 sat/vb.
>
> D has a higher feerate/absolute fee than B.
>
> Package A + C + D ancestor score is ~ 3 sat/vb ((A's 1000 sats + C's 1000
> sats + D's 1500 sats) /
> (A's 100 vb + C's 1000 vb + D's 100 vb))
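The arithmetic in the quoted example can be checked mechanically. This sketch uses the fees and sizes exactly as given above and confirms that, although D alone beats B on feerate, the replacement package's ancestor score drops to roughly 3 sat/vB:

```python
# Sketch checking the ancestor-score example above (numbers from the email).
txs = {
    "A": {"fee": 10 * 100, "vsize": 100},   # 10 sat/vB * 100 vB
    "B": {"fee": 10 * 100, "vsize": 100},
    "C": {"fee": 1 * 1000, "vsize": 1000},  # low-feerate ancestor of D
    "D": {"fee": 15 * 100, "vsize": 100},
}

def ancestor_score(names):
    total_fee = sum(txs[n]["fee"] for n in names)
    total_vsize = sum(txs[n]["vsize"] for n in names)
    return total_fee / total_vsize

# D alone has a higher feerate than B...
assert txs["D"]["fee"] / txs["D"]["vsize"] > txs["B"]["fee"] / txs["B"]["vsize"]

# ...but D's ancestor package A+C+D scores far worse than A+B.
print(ancestor_score(["A", "B"]))        # 10.0 sat/vB
print(ancestor_score(["A", "C", "D"]))   # ~2.92 sat/vB
```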
>
> Overall, this is a review through the lenses of LN requirements. I think
> other L2 protocols/applications
> could be candidates to using package accept/relay such as:
> * https://github.com/lightninglabs/pool
> * https://github.com/discreetlogcontracts/dlcspecs
> * https://github.com/bitcoin-teleport/teleport-transactions/
> * https://github.com/sapio-lang/sapio
> * https://github.com/commerceblock/mercury/blob/master/doc/statechains.md
> * https://github.com/revault/practical-revault
>
> Thanks for rolling forward the ball on this subject.
>
> Antoine
>
> Le jeu. 16 sept. 2021 à 03:55, Gloria Zhao via bitcoin-dev <
> bitcoin-dev@lists.linuxfoundation.org> a écrit :
>
>> Hi there,
>>
>> I'm writing to propose a set of mempool policy changes to enable package
>> validation (in preparation for package relay) in Bitcoin Core. These
>> would not
>> be consensus or P2P protocol changes. However, since mempool policy
>> significantly affects transaction propagation, I believe this is relevant
>> for
>> the mailing list.
>>
>> My proposal enables packages consisting of multiple parents and 1 child.
>> If you
>> develop software that relies on specific transaction relay assumptions
>> and/or
>> are interested in using package relay in the future, I'm very interested
>> to hear
>> your feedback on the utility or restrictiveness of these package policies
>> for
>> your use cases.
>>
>> A draft implementation of this proposal can be found in [Bitcoin Core
>> PR#22290][1].
>>
>> An illustrated version of this post can be found at
>> https://gist.github.com/glozow/dc4e9d5c5b14ade7cdfac40f43adb18a.
>> I have also linked the images below.
>>
>> ## Background
>>
>> Feel free to skip this section if you are already familiar with mempool
>> policy
>> and package relay terminology.
>>
>> ### Terminology Clarifications
>>
>> * Package = an ordered list of related transactions, representable by a
>> Directed
>>   Acyclic Graph.
>> * Package Feerate = the total modified fees divided by the total virtual
>> size of
>>   all transactions in the package.
>> - Modified fees = a transaction's base fees + fee delta applied by
>> the user
>>   with `prioritisetransaction`. As such, we expect this to vary across
>> mempools.
>> - Virtual Size = the maximum of virtual sizes calculated using [BIP141
>>   virtual size][2] and sigop weight. [Implemented here in Bitcoin
>> Core][3].
>> - Note that feerate is not necessarily based on the base fees and
>> serialized
>>   size.
>>
>> * Fee-Bumping = user/wallet actions that take advantage of miner
>> incentives to
>>   boost a transaction's candidacy for inclusion in a block, including
>> Child Pays
>> for Parent (CPFP) and [BIP125][12] Replace-by-Fee (RBF). Our intention in
mempool policy is to recognize when the new transaction is more economical to
mine than the original one(s) but not open DoS vectors, so there are some
limitations.

[bitcoin-dev] Proposal: Package Mempool Accept and Package RBF

2021-09-16 Thread Gloria Zhao via bitcoin-dev
Hi there,

I'm writing to propose a set of mempool policy changes to enable package
validation (in preparation for package relay) in Bitcoin Core. These would
not
be consensus or P2P protocol changes. However, since mempool policy
significantly affects transaction propagation, I believe this is relevant
for
the mailing list.

My proposal enables packages consisting of multiple parents and 1 child. If
you
develop software that relies on specific transaction relay assumptions
and/or
are interested in using package relay in the future, I'm very interested to
hear
your feedback on the utility or restrictiveness of these package policies
for
your use cases.

A draft implementation of this proposal can be found in [Bitcoin Core
PR#22290][1].

An illustrated version of this post can be found at
https://gist.github.com/glozow/dc4e9d5c5b14ade7cdfac40f43adb18a.
I have also linked the images below.

## Background

Feel free to skip this section if you are already familiar with mempool
policy
and package relay terminology.

### Terminology Clarifications

* Package = an ordered list of related transactions, representable by a
Directed
  Acyclic Graph.
* Package Feerate = the total modified fees divided by the total virtual
size of
  all transactions in the package.
- Modified fees = a transaction's base fees + fee delta applied by the
user
  with `prioritisetransaction`. As such, we expect this to vary across
mempools.
- Virtual Size = the maximum of virtual sizes calculated using [BIP141
  virtual size][2] and sigop weight. [Implemented here in Bitcoin
Core][3].
- Note that feerate is not necessarily based on the base fees and
serialized
  size.

* Fee-Bumping = user/wallet actions that take advantage of miner incentives
to
  boost a transaction's candidacy for inclusion in a block, including Child
Pays
for Parent (CPFP) and [BIP125][12] Replace-by-Fee (RBF). Our intention in
mempool policy is to recognize when the new transaction is more economical
to
mine than the original one(s) but not open DoS vectors, so there are some
limitations.
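Putting the terminology above together, package feerate might be computed as in this sketch. The numbers are invented, `fee_delta` mimics `prioritisetransaction`, and the sigop-to-bytes ratio is a simplified stand-in for the node's actual setting:

```python
# Sketch of the Package Feerate definition above, with made-up numbers.
WITNESS_SCALE = 4
BYTES_PER_SIGOP = 50  # illustrative stand-in for the node's sigop ratio

def virtual_size(weight, sigops):
    # Maximum of BIP141 virtual size and a sigop-based size.
    bip141_vsize = (weight + WITNESS_SCALE - 1) // WITNESS_SCALE
    return max(bip141_vsize, sigops * BYTES_PER_SIGOP)

def package_feerate(txs):
    # Total modified fees divided by total virtual size.
    total_fees = sum(t["base_fee"] + t["fee_delta"] for t in txs)
    total_vsize = sum(virtual_size(t["weight"], t["sigops"]) for t in txs)
    return total_fees / total_vsize

parent = {"base_fee": 0,    "fee_delta": 0, "weight": 800, "sigops": 1}
child  = {"base_fee": 5000, "fee_delta": 0, "weight": 400, "sigops": 1}
print(package_feerate([parent, child]))  # 5000 sats over 300 vB
```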

### Policy

The purpose of the mempool is to store the best (to be most
incentive-compatible
with miners, highest feerate) candidates for inclusion in a block. Miners
use
the mempool to build block templates. The mempool is also useful as a cache
for
boosting block relay and validation performance, aiding transaction relay,
and
generating feerate estimations.

Ideally, all consensus-valid transactions paying reasonable fees should
make it
to miners through normal transaction relay, without any special
connectivity or
relationships with miners. On the other hand, nodes do not have unlimited
resources, and a P2P network designed to let any honest node broadcast their
transactions also exposes the transaction validation engine to DoS attacks
from
malicious peers.

As such, for unconfirmed transactions we are considering for our mempool, we
apply a set of validation rules in addition to consensus, primarily to
protect
us from resource exhaustion and aid our efforts to keep the highest fee
transactions. We call this mempool _policy_: a set of (configurable,
node-specific) rules that transactions must abide by in order to be accepted
into our mempool. Transaction "Standardness" rules and mempool restrictions
such
as "too-long-mempool-chain" are both examples of policy.

### Package Relay and Package Mempool Accept

In transaction relay, we currently consider transactions one at a time for
submission to the mempool. This creates a limitation in the node's ability
to
determine which transactions have the highest feerates, since we cannot take
into account descendants (i.e. cannot use CPFP) until all the transactions
are
in the mempool. Similarly, we cannot use a transaction's descendants when
considering it for RBF. When an individual transaction does not meet the
mempool
minimum feerate and the user isn't able to create a replacement transaction
directly, it will not be accepted by mempools.
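As a toy illustration of that limitation (all numbers invented), a zero-fee pre-signed parent fails the mempool minimum feerate on its own but clears it when evaluated together with a high-fee child:

```python
# Toy illustration: a parent below the mempool minimum feerate is rejected
# on its own, but the parent+child package clears the minimum.
MEMPOOL_MIN_FEERATE = 1.0  # sat/vB, illustrative

def feerate(txs):
    return sum(fee for fee, _ in txs) / sum(vsize for _, vsize in txs)

parent = (0, 200)    # (fee in sats, vsize in vB): pre-signed, zero fee
child = (5000, 100)  # high-fee CPFP child

print(feerate([parent]) >= MEMPOOL_MIN_FEERATE)         # False
print(feerate([parent, child]) >= MEMPOOL_MIN_FEERATE)  # True
```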

This limitation presents a security issue for applications and users
relying on
time-sensitive transactions. For example, Lightning and other protocols
create
UTXOs with multiple spending paths, where one counterparty's spending path
opens
up after a timelock, and users are protected from cheating scenarios as
long as
they redeem on-chain in time. A key security assumption is that all parties'
transactions will propagate and confirm in a timely manner. This assumption
can
be broken if fee-bumping does not work as intended.

The end goal for Package Relay is to consider multiple transactions at the
same
time, e.g. a transaction with its high-fee child. This may help us better
determine whether transactions should be accepted to our mempool,
especially if
they don't meet fee requirements individually or are better RBF candidates
as a
package. A combination of changes to mempool validation logic, policy, and
transaction relay allows us to better propagate the transactions with 

Re: [bitcoin-dev] L2s Onchain Support IRC Workshop

2021-04-26 Thread Gloria Zhao via bitcoin-dev
Hi Antoine,

Thanks for initiating this! I'm interested in joining. Since I mostly live
in L1, my primary goal is to understand what simplest version of package
relay would be sufficient to support transaction relay assumptions made by
L2 applications. For example, if a parent + child package covers the vast
majority of cases and a package limit of 2 is considered acceptable, that
could simplify things quite a bit.

A small note - I believe package relay and sponsorship (or other
fee-bumping primitive) should be separate discussions.

Re: L2-zoology... In general, for the purpose of creating a stable API /
set of assumptions between layers, I'd like to be as concrete as possible.
Speaking for myself, if I'm TDDing for a specific L2 attack, I need test
vectors. A simple description of mempool contents + p2p messages sent is
fine, but pubkeys + transaction hex would be appreciated because we don't
(and probably shouldn't, for the purpose of maintainability) have a lot of
tooling to build L2 transactions in Bitcoin Core. In the other direction,
it's hard to make any guarantees given the complexity of mempool policy,
but perhaps it could be helpful to expose a configurable RPC (e.g. #21413)
to test a range of scenarios?

Anyway, looking forward to discussions :)

Best,
Gloria

On Fri, Apr 23, 2021 at 8:51 AM Antoine Riard via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Hi,
>
> During the lastest years, tx-relay and mempool acceptances rules of the
> base layer have been sources of major security and operational concerns for
> Lightning and other Bitcoin second-layers [0]. I think those areas require
> significant improvements to ease design and deployment of higher Bitcoin
> layers and I believe this opinion is shared among the L2 dev community. In
> order to make advancements, it has been discussed a few times in the last
> months to organize in-person workshops to discuss those issues with the
> presence of both L1/L2 devs to make exchange fruitful.
>
> Unfortunately, I don't think we'll be able to organize such in-person
> workshops this year (because you know travel is hard those days...) As a
> substitution, I'm proposing a series of one or more irc meetings. That
> said, this substitution has the happy benefit to gather far more folks
> interested by those issues that you can fit in a room.
>
> # Scope
>
> I would like to propose the following 4 items as topics of discussion.
>
> 1) Package relay design or another generic L2 fee-bumping primitive like
> sponsorship [0]. IMHO, this primitive should at least solve mempools spikes
> making obsolete propagation of transactions with pre-signed feerate, solve
> pinning attacks compromising Lightning/multi-party contract protocol
> safety, offer an usable and stable API to L2 software stack, stay
> compatible with miner and full-node operators incentives and obviously
> minimize CPU/memory DoS vectors.
>
> 2) Deprecation of opt-in RBF toward full-rbf. Opt-in RBF makes it trivial
> for an attacker to partition network mempools in divergent subsets and from
> then launch advanced security or privacy attacks against a Lightning node.
> Note, it might also be a concern for bandwidth bleeding attacks against L1
> nodes.
>
> 3) Guidelines about coordinated cross-layers security disclosures.
> Mitigating a security issue around tx-relay or the mempool in Core might
> have harmful implications for downstream projects. Ideally, L2 projects
> maintainers should be ready to upgrade their protocols in emergency in
> coordination with base layers developers.
>
> 4) Guidelines about L2 protocols onchain security design. Currently
> deployed like Lightning are making a bunch of assumptions on tx-relay and
> mempool acceptances rules. Those rules are non-normative, non-reliable and
> lack documentation. Further, they're devoid of tooling to enforce them at
> runtime [2]. IMHO, it could be preferable to identify a subset of them on
> which second-layers protocols can do assumptions without encroaching too
> much on nodes's policy realm or making the base layer development in those
> areas too cumbersome.
>
> I'm aware that some folks are interested in other topics such as extension
> of Core's mempools package limits or better pricing of RBF replacement. So
> I propose a 2-week concertation period to submit other topics related to
> tx-relay or mempools improvements towards L2s before to propose a finalized
> scope and agenda.
>
> # Goals
>
> 1) Reaching technical consensus.
> 2) Reaching technical consensus, before seeking community consensus as it
> likely has ecosystem-wide implications.
> 3) Establishing a security incident response policy which can be applied
> by dev teams in the future.
> 4) Establishing a philosophy design and associated documentations (BIPs,
> best practices, ...)
>
> # Timeline
>
> 2021-04-23: Start of concertation period
> 2021-05-07: End of concertation period
> 2021-05-10: