Re: [Lightning-dev] LN Summit 2023 Notes

2023-08-03 Thread Anthony Towns
On Mon, Jul 31, 2023 at 02:42:29PM -0400, Clara Shikhelman wrote:
> > A different way of thinking about the monetary approach is in terms of
> > scaling rather than deterrence: that is, try to make the cost that the
> > attacker pays sufficient to scale up your node/the network so that you
> > continue to have excess capacity to serve regular users.

Just to clarify, my goal for these comments was intended to be mostly
along the lines of:

 "I think monetary-based DoS deterrence is still likely to be a fruitful
  area for research if people are interested, even if the current
  implementation work is focussed on reputation-based methods"

At least the way I read the summit notes, I could see people coming away
with the alternative impression; ie "we've explored monetary approaches
and think there's nothing possible there; don't waste your time", and
mostly just wanted to provide a counter to that impression.

The scheme I outlined was mostly provided as a rough proof-of-work to
justify thinking that way and as perhaps one approach that could be
researched further, rather than something people should be actively
working on, let alone anything that should distract from working on the
reputation-based approach.

After talking with harding on irc, it seems that was not as obvious in
text as it was in my head, so just thought I'd spell it out...

> As for liquidity DoS, the “holy grail” is indeed charging fees as a
> function of the time the HTLC was held. As for now, we are not aware of a
> reasonable way to do this. 

Sure.

> There is no universal clock,

I think that's too absolute a statement. The requirement is either that
you figure out a way of using the chain tip as a clock (which I gave a
sketch of), or you setup local clocks with each peer and have a scheme
for dealing with them being slightly out of sync (and probably use the
chain tip as a way of ensuring they aren't very out of sync).

> and there is no way
> for me to prove that a message was sent to you, and you decided to pretend
> you didn't.

All the messages in the scheme I suggested involve commitment tx updates
-- either introducing/closing a HTLC or making a payment for keeping a
HTLC active and tying up your counterparty's liquidity. You don't need to
prove that messages were/weren't sent -- if they were, your commitment
tx is already updated to deal with it, if they weren't but should have
been, your channel is in an invalid state, and you close it onchain.

To me, proving things seems like something that comes up in reputation
based approaches, where you need to reference a hit on someone else's
reputation to avoid taking a hit on yours, rather than a monetary based
one, where all you should need to do is check you got paid for whatever
service you were providing, and conversely pay for whatever services
you've been requiring.

> It can easily happen that the fee for a two-week unresolved
> HTLC is higher than the fee for a quickly resolving one.

That should be the common case, yes, and it's problematic if you can have
both a high percentage fee (or a high amount), and a distant timeout.
But that may be a situation you can avoid, and I gave a sketch of one
way you could do that.
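As a hedged sketch (my own illustration, not part of the proposal text), the forwarding rule from the example later in the thread — only forward if the liquidity fee rate times the blocks to expiry stays under a cap — might look like the following; all names and the 500ppm default are assumptions taken from the thread's example numbers:

```python
# Hedged sketch of the "fee rate x expiry under a cap" forwarding rule.
# Numbers follow the thread's example: a 500ppm cap, and
# ~2ppm per block ~= 10% APY (about 52560 blocks/year).

BLOCKS_PER_YEAR = 6 * 24 * 365  # ~52560 at one block per 10 minutes

def apy_to_ppm_per_block(apy: float) -> float:
    """Convert an annual return (e.g. 0.10 for 10% APY) to ppm per block."""
    return apy * 1_000_000 / BLOCKS_PER_YEAR

def should_forward(rate_ppm_per_block: float, blocks_to_expiry: int,
                   cap_ppm: float = 500.0) -> bool:
    """Forward only if total liquidity fees over the HTLC's maximum
    lifetime stay below the cap, ruling out combinations of a high
    fee rate AND a distant timeout."""
    return rate_ppm_per_block * blocks_to_expiry <= cap_ppm
```

With the thread's numbers (A19 -> B wants 36ppm per block, cap 500ppm), an expiry of 13 blocks is acceptable (36*13 = 468ppm) but 14 blocks is not (504ppm).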

> I think this is another take on time-based fees. In this variation, the
> victim is trying to take a fee from the attacker. If the attacker is not
> willing to pay the fee (and why would they?), then the victim has to force
> close. There is no way for the victim to prove that it is someone
> downstream holding the HTLC and not them.

The point is that you get paid for your liquidity being held hostage,
whether the channel is closed or stays open. If that works, there's
no victim in this scenario -- you set a price for your liquidity to be
reserved over time in the hope that the payment will eventually succeed,
and you get paid that fee until whoever currently holds the HTLC decides
the chance of success isn't worth the ongoing cost anymore.

The point of force closing is the same as any force close -- your
counterparty stops following the protocol you both agreed to. That can
happen any time, even just due to cosmic rays.
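A minimal sketch of the per-block hold fee as a direct commitment balance adjustment, as described above (my own illustration and naming, not a spec -- the key property is that only the to_local/to_remote amounts move, so the commitment tx's on-chain size is unchanged):

```python
# Hedged sketch: per-block hold fees applied as direct adjustments to
# the commitment tx balance outputs. No new outputs, no size change.

def hold_fee_msat(htlc_amount_msat: int, rate_ppm_per_block: float,
                  blocks_held: int) -> int:
    """Fee owed by the HTLC holder to the peer whose liquidity is locked."""
    return int(htlc_amount_msat * rate_ppm_per_block * blocks_held / 1_000_000)

def apply_hold_fee(to_local_msat: int, to_remote_msat: int,
                   fee_msat: int) -> tuple:
    """Move the accrued fee from the holder's balance to the peer's;
    the commitment transaction layout is otherwise unchanged."""
    assert fee_msat <= to_local_msat, "holder cannot pay more than their balance"
    return to_local_msat - fee_msat, to_remote_msat + fee_msat
```

For example, a 2000 sat (2,000,000 msat) HTLC held for 100 blocks at 2ppm per block accrues a 400 msat fee.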

> > >  - They’re not large enough to be enforceable, so somebody always has
> > > to give the money back off chain.
> > If the cap is 500ppm per block, then the liquidity fees for a 2000sat
> > payment ($0.60) are redeemable onchain.
> This heavily depends on the on-chain fees, and so will need to be
> updated as a function of that, and adds another layer of complication.

I don't think that's true -- this is just a direct adjustment to the
commitment tx balance outputs, so doesn't change the on-chain size/cost
of the commitment tx.

The link to on-chain fees (at least in the scheme I outlined) is via
the cap (for which I gave an assumed value above) -- you don't want the
extra profit your counterparty would get from that adjustment to
outweigh something like sum(their liquidity value of locking their funds
up due to a unilateral close;

Re: [Lightning-dev] LN Summit 2023 Notes

2023-08-02 Thread Clara Shikhelman
Hi AJ,


A different way of thinking about the monetary approach is in terms of
> scaling rather than deterrence: that is, try to make the cost that the
> attacker pays sufficient to scale up your node/the network so that you
> continue to have excess capacity to serve regular users.
>
> In that case, if people are suddenly routing their netflix data and
> nostr photo libraries over lightning onion packets, that's fine: you
> make them pay amazon ec2 prices plus 50% for the resources they use,
> and when they do, you deploy more servers. ie, turn your attackers and
> spammers into a profit centre.
>
> I've had an email about this sitting in my drafts for a few years now,
> but I think this could work something like:
>
>  - message spam (ie, onion traffic costs): when you send a message
>to a peer, pay for its bandwidth and compute. Perhaps something
>like 20c/GB is reasonable, which is something like 1msat per onion
>packet, so perhaps 20msat per onion packet if you're forwarding it
>over 20 hops.
>
>  - liquidity DoS prevention: if you're in receipt of a HTLC/PTLC and
>aren't cancelling or confirming it, you pay your peer a fee for
>holding their funds. (if you're forwarding the HTLC, then whoever you
>forwarded to pays you a slightly higher fee, while they hold your
>funds) Something like 1ppm per hour matches a 1% pa return, so if
>you're an LSP holding on to a $20 payment waiting for the recipient to
>come online and claim it, then you might be paying out $0.0004 per hour
>(1.4sat) in order for 20 intermediate hops to each be making 20%
>pa interest on their held up funds.
>
>  - actual payment incentives: eg, a $20 payment paying 0.05% fees
>(phoenix's minimum) costs $0.01 (33sat). Obviously you want this
>number to be a lot higher than all the DoS prevention fees.
>
> If you get too much message spam, you fire up more amazon compute
> and take maybe 10c/GB in profit; if all your liquidity gets used up,
> congrats you've just gotten 20%+ APY on your bitcoin without third party
> risk and you can either reinvest your profits or increase your fees; and
> all of those numbers are just noise compared to actual payment traffic,
> which is 30x or 1500x more profitable.
>

Message spam is outside the scope of this solution.

As for liquidity DoS, the “holy grail” is indeed charging fees as a
function of the time the HTLC was held. As for now, we are not aware of a
reasonable way to do this. There is no universal clock, and there is no way
for me to prove that a message was sent to you, and you decided to pretend
you didn't. It can easily happen that the fee for a two-week unresolved
HTLC is higher than the fee for a quickly resolving one. Because of this,
we turn to reputation for slow jamming and use fees only for quick jamming.

See later comments on the specific suggestion outlined.

The amounts here are all very low, so I don't think you really need much
> more precision than "hourly". I think you could even do it "per block"
> and convert "1% pa" as actually "0.2 parts per million per block", since
> the only thing time is relevant for is turning liquidity DoS into an APY
> figure. Presumably that needs some tweaking to deal with the possibility
> of reorgs or stale blocks.
>

This is again the same problem with a node preferring the fees for a
two-week unresolved HTLC over a quickly resolving one. See comments below
for the far downstream node.

I think the worst case for that scenario is if you have a route
>
>A1 -> A2 -> .. -> A19 -> B -> A20
>
> then B closes the {B,A20} channel and at the end of the timeout A20 claims
> the funds. At that point B will have paid liquidity fees to A1..A19 for
> the full two week period, but will have only received a fixed payout
> from A20 due to the channel close.
>
> At 10% APY, with a $1000 payment, B will have paid ~$73 to A (7.3%). If
> the close channel transaction costs, say, $5, then either you end up with
> B wanting to close the channel early in non-attack scenarios (they collect
> $73 from A20, but only pay perhaps 4c back to A1..A19, and perhaps $6 to
> open and close the channel), or you end up with A holding up the funds
> and leaching off B (B only collects, say, $20 from A20, but then A20
> claims the funds after two weeks so is either up $75 if B didn't claim
> the funds from A19, or is up $53 after B paid liquidity fees for 2 weeks).
>
> _But_ I think this is an unreasonable scenario: the only reason for B to
> forward a HTLC with a 2 week expiry is if they're early in the route,
> but the only reason to accept a large liquidity fee is if they're late
> in the route. So I think you can solve that by only forwarding a payment
> if the liquidity fee rate multiplied by the expiry is below a cap, eg:
>
>A19 -> B   : wants 36ppm per block; cap/ppm = 500/36 = 13.8
>B   -> A20 : expiry is in 13 blocks; wants 38ppm per block
>
> (2ppm per block ~= 10% APY)
>
> For comparison, at the start of

Re: [Lightning-dev] LN Summit 2023 Notes

2023-07-26 Thread Anthony Towns
On Wed, Jul 19, 2023 at 09:56:11AM -0400, Carla Kirk-Cohen wrote:
> Thanks to everyone who traveled far, Wolf for hosting us in style in
> NYC and to Michael Levin for helping out with notes <3

Thanks for the notes!

Couple of comments:

> - What is the “top of mempool” assumption?

FWIW, I think this makes much more sense if you think about this as a
few related, but separate goals:

 * transactors want their proposed txs to go to miners
 * pools/miners want to see the most profitable txs asap
 * node operators want to support bitcoin users/businesses
 * node operators also want to avoid wasting too much bandwidth/cpu/etc
   relaying txs that aren't going to be mined, both their own and that
   of other nodes'
 * people who care about decentralisation want miners to get near-optimal
   tx selection with a default bitcoind setup, so there's no secret
   sauce or moats that could encourage a mining monopoly to develop

Special casing lightning unilateral closes [0] probably wouldn't be
horrible. It's obviously good for the first three goals. As far as the
fourth, if it was lightning nodes doing the relaying, they could limit
each unilateral close to one rbf attempt (based on to_local/to_remote
outputs changing). And for the fifth, provided unilateral closes remain
rare, the special config isn't likely to cause much of a profit difference
between big pools and small ones (and maybe that's only a short term
issue, and a more general solution will be found and implemented, where
stuff that would be in the next block gets relayed much more aggressively,
even if it replaces a lot of transactions).

[0] eg, by having lightning nodes relay the txs even when bitcoind
doesn't relay them, and having some miners run special configurations
to pull those txs in.

> - Is there a future where miners don’t care about policy at all?

Thinking about the different goals above seems like it gives a clear
answer to this: as far as mining goes, no there's no need to care
about policy restrictions -- policy is just there to meet other goals:
making it possible to run a node without wasting bandwidth, and to help
decentralisation by letting miners just buy hardware and deploy it,
without needing to do a bunch of protocol level trade secret/black magic
stuff in order to be competitive.

>   - It must be zero fee so that it will be evicted.

The point of making a tx with ephemeral outputs be zero fee is to
prevent it from being mined in non-attack scenarios, which in turn avoids
generating a dust utxo. (An attacking miner can just create arbitrary
dust utxos already, of course)

> - Should we add trimmed HTLCs to the ephemeral anchor?
>   - You can’t keep things in OP_TRUE because they’ll be taken.
>   - You can also just put it in fees as before.

The only way value in an OP_TRUE output can be taken is by confirming
the parent tx that created the OP_TRUE output, exactly the same as if
the value had been spent to fees instead.

Putting the value to fees directly would violate the "tx must be zero
fee if it creates ephemeral outputs" constraint above.

> ### Hybrid Approach to Channel Jamming
> - Generally when we think about jamming, there are three “classes” of
> mitigations:
>   - Monetary: unconditional fees, implemented in various ways.
> - The problem is that none of these solutions work in isolation.
>   - Monetary: the cost that will deter an attacker is unreasonable for an
> honest user, and the cost that is reasonable for an honest user is too low
> for an attacker.

A different way of thinking about the monetary approach is in terms of
scaling rather than deterrence: that is, try to make the cost that the
attacker pays sufficient to scale up your node/the network so that you
continue to have excess capacity to serve regular users.

In that case, if people are suddenly routing their netflix data and
nostr photo libraries over lightning onion packets, that's fine: you
make them pay amazon ec2 prices plus 50% for the resources they use,
and when they do, you deploy more servers. ie, turn your attackers and
spammers into a profit centre.

I've had an email about this sitting in my drafts for a few years now,
but I think this could work something like:

 - message spam (ie, onion traffic costs): when you send a message
   to a peer, pay for its bandwidth and compute. Perhaps something
   like 20c/GB is reasonable, which is something like 1msat per onion
   packet, so perhaps 20msat per onion packet if you're forwarding it
   over 20 hops.

 - liquidity DoS prevention: if you're in receipt of a HTLC/PTLC and
   aren't cancelling or confirming it, you pay your peer a fee for
   holding their funds. (if you're forwarding the HTLC, then whoever you
   forwarded to pays you a slightly higher fee, while they hold your
   funds) Something like 1ppm per hour matches a 1% pa return, so if
   you're an LSP holding on to a $20 payment waiting for the recipient to
   come online and claim it, then you might be paying out $0.0004

Re: [Lightning-dev] LN Summit 2023 Notes

2023-07-26 Thread Antoine Riard
Hi Zeeman,

> A proposal I made in the Signal group after the summit would be to not
use MuSig2 signing for commitment transactions (== unilateral closes).
> Instead, we add a tapscript branch that is just ` OP_CHECKSIGVERIFY
 OP_CHECKSIG` and use that for unilateral closes.
> We only sign with MuSig2 on mutual closes, after we have negotiated
closing fees (whether that is the current long closing conversation, or
simplified closing) so that only mutual closes require nonce
> storage.
>
> As mutual closes assume continuous connectivity anyway, we can keep
counterparty nonces in volatile RAM and not store nonces in persistent
storage; if a disconnection occurs, we just remove it from
> volatile RAM and restart the mutual close negotiation on reconnection.
>
> This is more palatable as even with a very restrictive hardware device
you do not have to store the peer nonce in persistent storage.
> The hope is that mutual closes dominate over unilateral closes.

Hardware devices with reasonable persistent storage sound cheaper than the
additional fee-bumping reserve (kept locally or with a third party) one
must conserve to pay for the probabilistic worst-case ` OP_CHECKSIGVERIFY 
OP_CHECKSIG` satisfying witness.

> Conditional fees on the Lightning Network are already dynamic,
with many people (including myself) writing software that measures demand
and changes price accordingly.
> Why would unconditional fees be necessarily static, when there is no
mention of them being static?

While I'm in sync with you on the Lightning Network being a system driven
by demand, charging prices accordingly, some of the recent jamming
mitigation proposals were built on the notion of "static fees" or
unconditional fees, e.g. https://eprint.iacr.org/2022/1454.pdf. As soon as
you start to think in terms of dynamic fees, you start to have issues
w.r.t gossip convergence delays and rate-limitation of your local
unconditional fee updates.

> Given a "stereotypical" forwarding node, what is the most likely
subjective valuation?
> If a node is not a stereotypical forwarding node, how does it deviate
from the stereotypical one?

The answer is a function of your collection of historical forwarded-HTLC
traffic and secondary sources of information.
Much the same as for base-layer fee estimation: the more consistent your
mempool data set, the better your valuation will be.
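As a hedged sketch of that idea (my own construction, not Antoine's method), a node could derive its subjective per-block liquidity price from its own history of resolved HTLCs, analogously to base-layer fee estimation from mempool data; all names are hypothetical:

```python
# Hedged sketch: derive a node's subjective per-block liquidity price
# from historical HTLC traffic, analogous to base-layer fee estimation.

from statistics import median

def implied_ppm_per_block(history: list) -> float:
    """history: (amount_msat, fee_msat, blocks_held) tuples, one per
    resolved HTLC. Returns the median fee earned per msat-block,
    expressed in ppm per block."""
    rates = [fee * 1_000_000 / (amt * max(blocks, 1))
             for amt, fee, blocks in history if amt > 0]
    return median(rates) if rates else 0.0
```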

> The problem is that the Bitcoin clock is much too coarsely grained, with
chain height advances occasionally taking several hours in the so-called
"real world" I have heard much rumor about.

Sure, I still think Bitcoin as a universal clock is the most costly one
for a Lightning counterparty to game, if it has to be used as a
mechanism to arbitrate the fees / reputation paid for the in-flight
duration of a HTLC. Even relying on block timestamps offers some margin
of malleation (e.g. an advance of at most 2h, per consensus rules) if you
have hashrate capabilities.

> Would not the halt of the channel progress be considered worthy of a
reputation downgrade by itself?

That's an interesting point: rather than halting channel progress being
marked as a strict reputation downgrade, you could negotiate a "grace
delay" during which channel progress must be made (to allow for
casualties like software upgrades or connectivity issues).

Best,
Antoine

On Mon, Jul 24, 2023 at 09:14, ZmnSCPxj  wrote:

>
>
> > > - For taproot/musig2 we need nonces:
> > > - Today we store the commitment signature from the remote party. We
> don’t need to store our own signature - we can sign at time of broadcast.
> > > - To be able to sign you need the verification nonce - you could
> remember it, or you could use a counter:
> > > - Counter based:
> > > - We re-use shachain and then just use it to generate nonces.
> > > - Start with a seed, derive from that, use it to generate nonces.
> > > - This way you don’t need to remember state, since it can always be
> generated from what you already have.
> > > - Why is this safe?
> > > - We never re-use nonces.
> > > - The remote party never sees your partial signature.
> > > - The message always stays the same (the dangerous re-use case is
> using the same nonce for different messages).
> > > - If we used the same nonce for different messages we could leak our
> key.
> > > - You can combine the sighash + nonce to make it unique - this also
> binds more.
> > > - Remote party will only see the full signature on chain, never your
> partial one.
> > > - Each party has sign and verify nonces, 4 total.
> > > - Co-op close only has 2 because it’s symmetric.
> >
> > (I don't know when mailing list post max size will be reached)
> >
> > Counter-based nonces versus stateful memorization of them from a user
> perspective depends on the hardware capabilities you have access to.
> >
> > The taproot schnorr flow could be transparent from the underlying
> signature scheme (FROST, musig2, TAPS in the future maybe).
>
> A proposal I made in the Signal group after the summit wo

Re: [Lightning-dev] LN Summit 2023 Notes

2023-07-25 Thread ZmnSCPxj via Lightning-dev


> > - For taproot/musig2 we need nonces:
> > - Today we store the commitment signature from the remote party. We don’t 
> > need to store our own signature - we can sign at time of broadcast.
> > - To be able to sign you need the verification nonce - you could remember 
> > it, or you could use a counter:
> > - Counter based:
> > - We re-use shachain and then just use it to generate nonces.
> > - Start with a seed, derive from that, use it to generate nonces.
> > - This way you don’t need to remember state, since it can always be 
> > generated from what you already have.
> > - Why is this safe?
> > - We never re-use nonces.
> > - The remote party never sees your partial signature.
> > - The message always stays the same (the dangerous re-use case is using the 
> > same nonce for different messages).
> > - If we used the same nonce for different messages we could leak our key.
> > - You can combine the sighash + nonce to make it unique - this also binds 
> > more.
> > - Remote party will only see the full signature on chain, never your 
> > partial one.
> > - Each party has sign and verify nonces, 4 total.
> > - Co-op close only has 2 because it’s symmetric.
> 
> (I don't know when mailing list post max size will be reached)
> 
> Counter-based nonces versus stateful memorization of them from a user 
> perspective depends on the hardware capabilities you have access to.
> 
> The taproot schnorr flow could be transparent from the underlying signature 
> scheme (FROST, musig2, TAPS in the future maybe).

A proposal I made in the Signal group after the summit would be to not use 
MuSig2 signing for commitment transactions (== unilateral closes).

Instead, we add a tapscript branch that is just ` OP_CHECKSIGVERIFY  
OP_CHECKSIG` and use that for unilateral closes.
We only sign with MuSig2 on mutual closes, after we have negotiated closing 
fees (whether that is the current long closing conversation, or simplified 
closing) so that only mutual closes require nonce storage.

As mutual closes assume continuous connectivity anyway, we can keep 
counterparty nonces in volatile RAM and not store nonces in persistent storage; 
if a disconnection occurs, we just remove it from volatile RAM and restart the 
mutual close negotiation on reconnection.

This is more palatable as even with a very restrictive hardware device you do 
not have to store the peer nonce in persistent storage.
The hope is that mutual closes dominate over unilateral closes.
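The counter-based nonce scheme from the quoted notes could be sketched as below. This is my own illustration of the derivation idea only (seed plus counter, message-bound, so no per-nonce state needs persisting) and NOT a MuSig2 implementation; production code must follow BIP-327's nonce-generation rules:

```python
# Hedged sketch of counter-based nonce *derivation* per the quoted
# notes: derive nonce bytes from a seed and a counter so no per-nonce
# state needs persisting. Not MuSig2-safe on its own; see BIP-327.

import hashlib

def derive_nonce(seed: bytes, counter: int, message: bytes) -> bytes:
    """Deterministically derive 32 nonce bytes from (seed, counter, msg).
    Binding the message means a different message yields a different
    nonce, avoiding the dangerous same-nonce/different-message case."""
    h = hashlib.sha256()
    h.update(seed)
    h.update(counter.to_bytes(8, "big"))
    h.update(hashlib.sha256(message).digest())
    return h.digest()
```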



> > - We run into the same pricing issues.
> > - Why these combinations?
> > - Since scarce resources are essentially monetary, we think that 
> > unconditional fees are the simplest possible monetary solution.
> > - Unconditional Fees:
> > - As a sender, you’re building a route and losing money if it doesn’t go 
> > through?
> > - Yes, but they only need to be trivially small compared to success case 
> > fee budgets.
> > - You can also eventually succeed so long as you retry enough, even if 
> > failure rates are very high.
> > - How do you know that these fees will be small? The market could decide 
> > otherwise.
> 
> Static unconditional fees is a limited tool in a world where rational 
> economic actors are pricing their liquidity in function of demand.

Conditional fees on the Lightning Network are already dynamic, with 
many people (including myself) writing software that measures demand and 
changes price accordingly.
Why would unconditional fees be necessarily static, when there is no mention of 
them being static?


> > - We have to allow some natural rate of failure in the network.
> > - An attacker can still aim to fall just below that failure threshold and 
> > go through multiple channels to attack an individual channel.
> > - There isn’t any way to set a bar that an attacker can’t fall just beneath.
> > - Isn’t this the same for reputation? We have a suggestion for reputation 
> > but all of them fail because they can be gamed below the bar.
> > - If reputation matches the regular operation of nodes on the network, you 
> > will naturally build reputation up over time.
> > - If we do not match reputation accumulation to what normal nodes do, then 
> > an attacker can take some other action to get more reputation than the rest 
> > of the network. We don’t want attackers to be able to get ahead of regular 
> > nodes.
> > - Let’s say you get one point for success and one for failure, a normal 
> > node will always have bad reputation. An attacker could then send 1 sat 
> > payments all day long, pay a fee for it and gain reputation.
> > - Can you define jamming? Is it stuck HTLCs or a lot of 1 sat HTLCs 
> > spamming up your DB?
> 
> Jamming is an economic notion, as such relying on the subjectivism of node 
> appreciation of local resources.

Given a "stereotypical" forwarding node, what is the most likely subjective 
valuation?
If a node is not a stereotypical forwarding node, how does it deviate from the 
stereotypical one?


> > - The dream solutio

[Lightning-dev] LN Summit 2023 Notes

2023-07-19 Thread Carla Kirk-Cohen
Hi List,

At the end of June we got together in NYC for the annual specification
meeting. This time around we made an attempt at taking transcript-style
notes which are available here:
https://docs.google.com/document/d/1MZhAH82YLEXWz4bTnSQcdTQ03FpH4JpukK9Pm7V02bk/edit?usp=sharing
.
To decrease our dependence on my google drive I've also included the
full set of notes at the end of this email (no promises about the
formatting however).

We made a semi-successful attempt at recording larger group topics, so
these notes roughly follow the structure of the discussions that we had
at the summit (rather than being a summary). Speakers are not
attributed, and any mistakes are my own.

Thanks to everyone who traveled far, Wolf for hosting us in style in
NYC and to Michael Levin for helping out with notes <3

# LN Summit - NYC 2023

## Day One

### Package Relay
- The current proposal for package relay is ancestor package relay:
  - One child can have up to 24 ancestors.
  - Right now, we only score mempool transactions by ancestry anyway, so
there isn’t much point in other types of packages.
- For base package relay, commitment transactions will still need to have
the minimum relay fee.
  - No batch bumping is allowed, because it can open up pinning attacks.
  - With one anchor, we can package RBF.
- Once we have package relay, it will be easier to get things into the
mempool.
- Once we have V3 transactions, we can drop minimum relay fees because we
are restricted to one child pays for one parent transaction:
  - The size of these transactions is limited.
  - You can’t arbitrarily attach junk to pin them.
- If we want to get rid of 330 sat anchors, we will need ephemeral anchors:
  - If there is an OP_TRUE output, it can be any value including zero.
  - It must be spent by a child in the same package.
  - Only one child can spend the anchor (because it’s one output).
  - The parent must be zero fee because we never want it in a block on its
own, or relayed without the child.
  - If the child is evicted, we really want the parent to be evicted as
well (there are some odd edge cases at the bottom of the mempool, so zero
ensures that we’ll definitely be evicted).
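The ephemeral anchor constraints listed above can be sketched as a toy package validator (a hedged simplification of my own, not Bitcoin Core's actual policy code; all names are hypothetical):

```python
# Hedged sketch of the ephemeral-anchor rules above: a zero-fee parent
# with an OP_TRUE anchor output must be spent by exactly one child in
# the same package.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Tx:
    txid: str
    fee: int                               # sats
    anchor_output: bool = False            # has an OP_TRUE output (any value, incl. 0)
    spends_anchor_of: Optional[str] = None # txid whose anchor this tx spends

def package_ok(parent: Tx, children: List[Tx]) -> bool:
    if parent.anchor_output:
        if parent.fee != 0:  # must never be mined alone or relayed without a child
            return False
        spenders = [c for c in children if c.spends_anchor_of == parent.txid]
        if len(spenders) != 1:  # one anchor output -> exactly one child spender
            return False
    return True
```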
- The bigger change is with HTLCs:
  - With SIGHASH_ANYONECANPAY, your counterparty can inflate the size of
your transaction - eg: HTLC success, anyone can attach junk.
  - How much do we want to change here?
- So, we can get to zero fee commitment transactions and one (ephemeral)
anchor per transaction.
  - With zero fee commitments, where do we put trimmed HTLCs?
 - You can just drop it in an OP_TRUE, and reasonably expect the miner
to take it.
- In a commitment with to_self and to_remote and an ephemeral anchor that
must be spent, you can drop the 1 block CSV in the transaction that does
not have a revocation path (ie, you can drop it for to_remote).
  - Any spend of this transaction must spend the one anchor in the same
block.
  - No other output is eligible to have a tx attached to it, so we don’t
need the delay anymore.
 - Theoretically, your counterparty could get hold of your signed local
copy, but then you can just RBF.
- Since these will be V3 transactions, the size of the child must be small
so you can’t pin it.
  - Both parties can RBF the child spending the ephemeral anchor.
  - This isn’t specifically tailored to lightning, it’s a more general
concept.
  - Child transactions of V3 are implicitly RBF, so we won’t have to worry
about it not being replaceable (no inheritance bug).
- In general, when we’re changing the mempool policy we want to make the
minimal relaxation that allows the best improvement.
- We need “top of block” mempool / cluster mempool:
  - The mempool can have clusters of transactions (parents/children
arranged in various topologies) and the whole mempool could even be a
single cluster (think a trellis of parents and children “zigzagging”).
  - The mining algorithm will pick one “vertical” using ancestor fee rate.
  - There are some situations where you add a single transaction and it
would completely change the order in which our current selection picks
things.
  - Cluster mempool groups transactions into “clusters” that make this
easier to sort and reason about.
  - It can be expensive (which introduces a risk of denial of service), but
if we have limits on cluster sizes then we can limit this.
  - This is the only way to get package RBF.
- How far along is all of this?
  - BIP331: the P2P part that allows different package types is moving
along and implementation is happening. This is done in a way that would
allow us to add different types of packages in future if we need them.
 - There are some improvements being made to core’s orphan pool,
because we need to make sure that peers can’t knock orphans out of your
pool that you may later need to retrieve as package ancestors.
 - We reserve some spots with tokens, and if you have a token we keep
some space for you.
  - V3 transactions: implemented on top of pa