Re: [bitcoin-dev] [Pre-BIP] Fee Accounts

2022-02-09 Thread Peter Todd via bitcoin-dev
On Sat, Jan 01, 2022 at 12:04:00PM -0800, Jeremy via bitcoin-dev wrote:
> Happy new years devs,
> 
> I figured I would share some thoughts for conceptual review that have been
> bouncing around my head as an opportunity to clean up the fee paying
> semantics in bitcoin "for good". The design space is very wide on the
> approach I'll share, so below is just a sketch of how it could work which
> I'm sure could be improved greatly.
> 
> Transaction fees are an integral part of bitcoin.
> 
> However, due to quirks of Bitcoin's transaction design, fees are a part of
> the transactions that they occur in.
> 
> While this works in a "Bitcoin 1.0" world, where all transactions are
> simple on-chain transfers, real world use of Bitcoin requires support for
> things like Fee Bumping stuck transactions, DoS resistant Payment Channels,
> and other long lived Smart Contracts that can't predict future fee rates.
> Having the fees paid in-band makes writing these contracts much more
> difficult, because you can't merely express the logic you want for the
> transaction; you must also express the fees.
> 
> Previously, I proposed a special type of transaction called a "Sponsor"
> which has some special consensus + mempool rules to allow arbitrarily
> appending fees to a transaction to bump it up in the mempool.
> 
> As an alternative, we could establish an account system in Bitcoin as an
> "extension block".



> This type of design works really well for channels because the addition of
> fees to e.g. a channel state does not require any sort of pre-planning
> (e.g. anchors) or transaction flexibility (SIGHASH flags). This sort of
> design is naturally immune to pinning issues since you could offer to pay a
> fee for any TXID and the number of fee adding offers does not need to be
> restricted in the same way the descendant transactions would need to be.

So it's important to recognize that fee accounts introduce their own kind of
transaction pinning attack: third parties would be able to attach arbitrary
fees to any transaction without permission. This isn't necessarily a good
thing: I don't want third parties to be able to grief my transaction engines by
getting obsolete transactions confirmed in lieu of the replacements I actually
want confirmed. E.g., a third party could mess up OpenTimestamps calendars at
relatively low cost by delaying the mining of timestamp txs.

Of course, there's an obvious way to fix this: allow transactions to designate
a pubkey allowed to add further transaction fees if required. Bitcoin already
has this in two forms: Replace-by-Fee and Child Pays for Parent.

-- 
https://petertodd.org 'peter'[:-1]@petertodd.org




Re: [bitcoin-dev] Improving RBF Policy

2022-02-09 Thread lisa neigut via bitcoin-dev
Changing the way that RBF works is an excellent idea. Bitcoin is overdue
for revisiting these rules. Thanks to @glowzo et al for kicking off the
discussion.

I've been thinking about RBF for a very long time[1], and it's been fun to
see other people's thoughts on the topic. Here's my current thinking about
this, and a proposal for how we should update the rules.

### Changing the Rules? How to Change Them
Regarding how to change them, Bram and aj are right -- we should move to a
model where transaction relay candidates are evaluated on their net
increase in fees per byte paid, and remove the requirement that the gross
fee of the preceding transaction is met or exceeded.

Our current ruleset is overcomplicated because it attempts to solve two
problems at once: the miner's best interests (highest possible fee take)
and relay policy.

I believe this is a mistake and the mempool should change its goals.
Instead, the mempool relay design for RBFs should be built around 1)
increasing the per-byte fees paid by a transaction and 2) providing a
simple policy for applications building on top of bitcoin, such that
knowledge of the mempool is not required for successfully issuing
relay-able RBF transactions.

(A simple "must increase the per-byte feerate" heuristic for RBF relay
candidates has the nice benefit of being easy to program against on the
application side, and only requires knowledge of the previous candidate
transaction, not the entire mempool or any previous tx's relative position
within it.)
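
As a rough illustration, here is a minimal Python sketch of what that
check could look like on the application side. The fee/vsize fields and
the 1 sat/vB minimum bump are illustrative assumptions on my part, not
anything specified in this proposal:

from dataclasses import dataclass

@dataclass
class TxCandidate:
    fee_sats: int   # total fee paid, in satoshis
    vsize: int      # virtual size, in vbytes

    @property
    def feerate(self) -> float:
        return self.fee_sats / self.vsize

def is_valid_replacement(prev: TxCandidate, new: TxCandidate,
                         min_bump: float = 1.0) -> bool:
    """Accept a replacement only if it pays a higher per-vbyte feerate
    than the tx it replaces, plus a minimum increment to limit spam.
    Only the previous candidate is needed, not the whole mempool."""
    return new.feerate >= prev.feerate + min_bump

prev = TxCandidate(fee_sats=1_000, vsize=200)   # 5.0 sat/vB
bump = TxCandidate(fee_sats=1_300, vsize=200)   # 6.5 sat/vB
assert is_valid_replacement(prev, bump)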

Finally, when blockspace is competitive, this simple policy ensures that
every subsequently relayed replacement increases the per-byte value of
the bytes pending for the next block. This provides a measure of DoS
protection and ensures that every relayed byte is more valuable (to the
miner/network) than the last.

*The only time that RBF is critical for relay is during full-block periods
-- if there aren't enough transactions to fill a block, using RBF to ensure
that a transaction is mined in a timely manner is moot. As such, RBF rules
should not be concerned with non-full-block environments.

### Mempools and Relay
The mempool currently serves two masters: the profit motive of the miner
and the relay motive of a utxo holder. It is in the interest of a user to
send the miner a high per-byte tx, such that it might end up in the next
block. It is in the miner's best interest to include the highest per-byte
set of transactions in their block template.

There is some conflation in the current RBF policies between what is in
the mempool and what is considered a candidate for the block template.
If a miner's block template already includes a package of txs that is more
profitable in aggregate than a newly relayed tx with a higher per-byte
feerate, it should be the responsibility of the block template constructor
to reject the new proposed tx, not of the nodes relaying the transaction
to said miner.

This is a policy that the miner can (and should) implement at the level of
the template construction, however.

Is it the responsibility of the mempool to provide the best "historical"
block opportunity for a miner (e.g. the highest paying block given all txs
it's ever seen)? I would say no, that the ability of a utxo owner to
re-state the spend condition of a pending transaction is more important,
from a use-case perspective, and that the mempool should concern itself
solely with relaying increasingly more profitable bytes to miners. Let the
miners concern themselves with deciding what the best policy for their own
block construction is, and the mempool with relaying the highest value
bytes for the network. Net-net, this will benefit everyone as it becomes
easier for users to re-submit txs with increasingly greater feerates,
creating more active competition for available blockspace as more
applications are able to include it as a feature (and it works reliably
as a relay mechanism).

### Packages and RBF
Packages weaken the increasing per-byte rule: it is no longer a guarantee
that replacing a single transaction with a higher per-byte one will net a
given miner more fees than including the entire original package would
have.

Let's decompose this a bit. It's helpful to think of tx packages as
'composable txs': a 'package' is effectively one large tx made of
sub-components, the individual txs. As a 'composed tx', you can calculate
the per-byte feerate of the entire set. This is the number that you, as
someone issuing an RBF, would need to beat in order to move your tx up in
the pending block queue.
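
As a small, purely illustrative Python sketch of that calculation (the
fee and size numbers are made up):

def package_feerate(txs):
    """Per-vbyte feerate of a 'composed tx': total fees over total vsize."""
    total_fee = sum(fee for fee, vsize in txs)
    total_vsize = sum(vsize for fee, vsize in txs)
    return total_fee / total_vsize

# A low-feerate parent with a high-feerate leaf child attached.
package = [(200, 200),    # parent:  200 sats, 200 vB ->  1 sat/vB
           (2_000, 100)]  # child:  2000 sats, 100 vB -> 20 sat/vB

# ~7.3 sat/vB: the number a replacement would need to beat for the miner
# to clearly prefer it over mining the whole package.
print(package_feerate(package))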

RBF, however, is a transaction-level policy: it allows you to replace any
*one* component of a package, or tree, with the side effect of possibly
invalidating other candidate txs. If the 'composed tx' (aka package) had a
net per-byte value that was higher than the new replacement transaction
because of a leaf tx that had an outsized per-byte feerate, then it would
be more profitable for the miner to have mined the entire 

[bitcoin-dev] CTV Meeting Notes #3

2022-02-09 Thread Jeremy Rubin via bitcoin-dev
Bitcoin Developers,

The Third CTV meeting was held earlier today (Tuesday February 8th, 2022).
You can find the meeting log here:
https://gnusha.org/ctv-bip-review/2022-02-08.log

A best-effort summary:

- Not much new to report on the Bounty
- Non-Interactive Lightning Channel Opens
  Non-interactive lightning channel opens seem to work!
  There are questions around being able to operate a channel in a "unipolar"
way for routing with the receiver's key offline, as HTLCs might require
sync revocation. This is orthogonal to the opening of the channels.
- DLCs w/ CTV
  CTV does seem to be a "key enabler" for DLCs.
  The non-interactivity provides a dramatic speedup (30x - 300x depending
on the multi-oracle setup).
  Changes to the client/server setup enable new use cases to explore, and
simplify the spec substantially.
  Backfilling lets clients commit to the DLC faster and lazily backfill at
the cost of state storage.
  For M-of-N oracles, precompiling the N-choose-M groups + musig'ing the
attestation points can possibly save some witness space, because
log2(N)*32 + N*32 > log2(N*(N choose M))*32 for many values of N and M
(see the arithmetic sketch after these notes).
- Pathcoin
  Not yet well understood concretely.
  Seems like the API of "a coin that 1-of-N can spend" shared by N is
new/unique and not something LN can do (which always requires N online to
sign txns).
  Binary expansion of coins could allow arbitrary value transfer (binary
expansion can live in a CTV tree too).
  The best way to think of Pathcoin at this point is as an important
theoretical result that should open up new exploration/improvement.
- TXHash
  Main concerns: more complexity, potential for recursion, script size
overhead
- Soft Forks, Generally
  Big question: Are the fork processes themselves (e.g., BIP9/8/ST
activations) riskier than the upgrades themselves (e.g., CTV)?
  On the one hand, validation rules are something we have to live with
forever, so they should be the riskier part. Soft fork rules and
coordination might be bad, but after activation they go away.
  On the other hand, we can "prove" a technical upgrade correct, but
soft-fork signalling requires unprovable user behavior and coordination
(e.g., actually upgrading).
  If you perceive the forking mechanism as high risk, it makes sense to
make the upgrades have as much content as possible since you need to
justify the high risk.
  If you perceive the forking mechanism as low risk, it is fine to make the
upgrades smaller and easier to prove safe since there's not a high cost to
forking.
- Elements CTV Emulation
  Seems to be workable.
  Questionable if any of the use cases one might want CTV for (Lightning,
DLCs, Vaults) would have much demand on Liquid today.
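
For the DLC witness-space point above, a quick, purely illustrative Python
check of that inequality for a few sample values of N and M (reading both
sides as rough byte counts is my own interpretation of the notes):

from math import comb, log2

def per_oracle_leaf_bytes(n: int) -> float:
    # One possible reading of the left side: a log2(N)-deep merkle path
    # plus N 32-byte attestation points carried in the leaf.
    return log2(n) * 32 + n * 32

def precompiled_group_bytes(n: int, m: int) -> float:
    # One musig'd leaf per precompiled M-of-N group: only the merkle path
    # into a tree of N*(N choose M) leaves is needed.
    return log2(n * comb(n, m)) * 32

for n, m in [(5, 3), (10, 5), (20, 11)]:
    print(f"N={n}, M={m}: "
          f"{per_oracle_leaf_bytes(n):.0f} vs "
          f"{precompiled_group_bytes(n, m):.0f} bytes")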

Feel free to correct me where I've not represented perspectives decently;
as always, the logs are the only true summary.

Best,

Jeremy

--
@JeremyRubin 