Re: [bitcoin-dev] Improving RBF Policy

2022-02-07 Thread Anthony Towns via bitcoin-dev
On Mon, Feb 07, 2022 at 11:16:26AM +, Gloria Zhao wrote:
> @aj:
> > I wonder sometimes if it could be sufficient to just have a relay rate
> > limit and prioritise by ancestor feerate though. Maybe something like:
> > - instead of adding txs to each peers setInventoryTxToSend immediately,
> >   set a mempool flag "relayed=false"
> > - on a time delay, add the top N (by fee rate) "relayed=false" txs to
> >   each peer's setInventoryTxToSend and mark them as "relayed=true";
> >   calculate how much kB those txs were, and do this again after
> >   SIZE/RATELIMIT seconds

> > - don't include "relayed=false" txs when building blocks?

The "?" was me not being sure that point is a good suggestion...

Miners might reasonably decide to have no rate limit, and always relay,
and never exclude txs -- but the question then becomes whether they
hear about the tx at all, so rate limiting behaviour could still be a
potential problem for whoever made the tx.
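For concreteness, the quoted scheme might be sketched roughly like this (Python pseudocode; the constants, dict fields and batch size are illustrative assumptions, not Bitcoin Core structures):

```python
import heapq

RATE_LIMIT_BYTES_PER_SEC = 5_000  # assumed relay budget; a policy choice
TOP_N = 100                       # assumed per-round batch size

def relay_round(mempool, peers):
    """One round of delayed, feerate-prioritised tx relay.

    `mempool` maps txid -> {'txid', 'feerate', 'size', 'relayed'}.
    Returns the delay in seconds before the next round (SIZE/RATELIMIT).
    """
    pending = [e for e in mempool.values() if not e['relayed']]
    batch = heapq.nlargest(TOP_N, pending, key=lambda e: e['feerate'])
    total_size = 0
    for entry in batch:
        entry['relayed'] = True           # block building could check this flag
        total_size += entry['size']
        for peer in peers:
            peer['inv_to_send'].append(entry['txid'])
    return total_size / RATE_LIMIT_BYTES_PER_SEC
```

A top-level loop would call relay_round, sleep for the returned delay, and repeat.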

> Wow cool! I think outbound tx relay size-based rate-limiting and
> prioritizing tx relay by feerate are great ideas for preventing spammers
> from wasting bandwidth network-wide. I agree, this would slow the low
> feerate spam down, preventing a huge network-wide bandwidth spike. And it
> would allow high feerate transactions to propagate as they should,
> regardless of how busy traffic is. Combined with inbound tx request
> rate-limiting, might this be sufficient to prevent DoS regardless of the
> fee-based replacement policies?

I think you only want to do outbound rate limits, i.e., how often you send
INV, GETDATA and TX messages? Once you receive any of those, I think
you have to immediately process / ignore it, you can't really sensibly
defer it (beyond the existing queues we have that just build up while
we're busy processing other things first)?

> One point that I'm not 100% clear on: is it ok to prioritize the
> transactions by ancestor feerate in this scheme? As I described in the
> original post, this can be quite different from the actual feerate we would
> consider a transaction in a block for. The transaction could have a high
> feerate sibling bumping its ancestor.
> For example, A (1sat/vB) has 2 children: B (49sat/vB) and C (5sat/vB). If
> we just received C, it would be incorrect to give it a priority equal to
> its ancestor feerate (3sat/vB) because if we constructed a block template
> now, B would bump A, and C's new ancestor feerate is 5sat/vB.
> Then, if we imagine that top N is >5sat/vB, we're not relaying C. If we
> also exclude C when building blocks, we're missing out on good fees.

I think you're right that this would be ugly. It's something of a
special case:

 a) you really care about C getting into the next block; but
 b) you're trusting B not being replaced by a higher fee tx that
doesn't have A as a parent; and
 c) there's a lot of txs bidding the floor of the next block up to a
level in-between the ancestor fee rate of 3sat/vB and the tx fee
rate of 5sat/vB

Without (a), maybe you don't care about it getting to a miner quickly.
If your trust in (b) was misplaced, then your tx's effective fee rate
will drop and (because of (c)), you'll lose anyway. And if the spam ends
up outside of (c)'s range, either the rate limiting won't take effect
(spam's too cheap) and you'll be fine, or you'll miss out on the block
anyway (spam's paying more than your tx rate) and you never had any hope
of making it in.

Note that we already rate limit via INVENTORY_BROADCAST_MAX /
*_INVENTORY_BROADCAST_INTERVAL, which gets to something like 10,500 txs
per 10 minutes for outbound connections. This would be a weight-based
rate limit instead-of/in-addition-to that, I guess.
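For reference, the arithmetic behind that figure (the constant values are from memory of Bitcoin Core's net_processing and should be double-checked against the source):

```python
# Existing count-based limit: up to INVENTORY_BROADCAST_MAX txids per INV
# message, one INV roughly every 2 seconds on outbound links (values assumed).
INVENTORY_BROADCAST_MAX = 35
OUTBOUND_INV_INTERVAL_SECONDS = 2

txs_per_10_minutes = INVENTORY_BROADCAST_MAX * (600 // OUTBOUND_INV_INTERVAL_SECONDS)
print(txs_per_10_minutes)  # 10500
```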

As far as a non-ugly approach goes, I think you'd have to be smarter about
tracking the "effective fee rate" than the ancestor fee rate manages;
maybe that's something that could fall out of Murch and Clara's candidate
set blockbuilding ideas [0] ?

Perhaps that same work would also make it possible to come up with
a better answer to "do I care that this replacement would invalidate
these descendants?"

[0] https://github.com/Xekyo/blockbuilding

> > - keep high-feerate evicted txs around for a while in case they get
> >   mined by someone else to improve compact block relay, a la the
> >   orphan pool?
> Replaced transactions are already added to vExtraTxnForCompact :D

I guess I was thinking that it's just a 100 tx LRU cache, which might
not be good enough?

Maybe it would be more on point to have a rate limit apply only to
replacement transactions?

> For wallets, AJ's "All you need is for there to be *a* path that follows
> the new relay rules and gets from your node/wallet to perhaps 10% of
> hashpower" makes sense to me (which would be the former).

Perhaps a corollary of that is that it's *better* to have the mempool
acceptance rule only consider economic incentives, and have the spam
prevention only be about "shall I tell my peers about this?"

If you don't have t

Re: [bitcoin-dev] TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT

2022-02-07 Thread Jeremy Rubin via bitcoin-dev
Rusty,

Note that this sort of design introduces recursive covenants similarly to
how I described above.

Whether or not that is an issue precluding this sort of design, I
defer to others.

Best,

Jeremy


On Mon, Feb 7, 2022 at 7:57 PM Rusty Russell via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Russell O'Connor via bitcoin-dev 
> writes:
> > Given the overlap in functionality between CTV and ANYPREVOUT, I think it
> > makes sense to decompose their operations into their constituent pieces
> and
> > reassemble their behaviour programmatically.  To this end, I'd like to
> > instead propose OP_TXHASH and OP_CHECKSIGFROMSTACKVERIFY.
> >
> > OP_TXHASH would pop a txhash flag from the stack and compute a (tagged)
> > txhash in accordance with that flag, and push the resulting hash onto the
> > stack.
>
> It may be worth noting that OP_TXHASH can be further decomposed into
> OP_TX (and OP_TAGGEDHASH, or just reuse OP_SHA256).
>
> OP_TX would place the concatenated selected fields onto the stack
> (rather than hashing them). This is more compact for some tests
> (e.g. testing tx version for 2 is "OP_TX(version) 1 OP_EQUALS" vs
> "OP_TXHASH(version) 012345678...aabbccddeeff OP_EQUALS"), and also range
> testing (e.g. amount less than X or greater than X, or fewer than 3 inputs).
>
> > I believe the difficulties with upgrading TXHASH can be mitigated by
> > designing a robust set of TXHASH flags from the start.  For example
> having
> > bits to control whether (1) the version is covered; (2) the locktime is
> > covered; (3) txids are covered; (4) sequence numbers are covered; (5)
> input
> > amounts are covered; (6) input scriptpubkeys are covered; (7) number of
> > inputs is covered; (8) output amounts are covered; (9) output
> scriptpubkeys
> > are covered; (10) number of outputs is covered; (11) the tapbranch is
> > covered; (12) the tapleaf is covered; (13) the opseparator value is
> > covered; (14) whether all, one, or no inputs are covered; (15) whether
> all,
> > one or no outputs are covered; (16) whether the one input position is
> > covered; (17) whether the one output position is covered; (18) whether
> the
> > sighash flags are covered or not (note: whether or not the sighash flags
> > are or are not covered must itself be covered).  Possibly specifying
> which
> > input or output position is covered in the single case and whether the
> > position is relative to the input's position or is an absolute position.
>
> These easily map onto OP_TX, "(1) the version is pushed as u32, (2) the
> locktime is pushed as u32, ...".
>
> We might want to push SHA256() of scripts instead of scripts themselves,
> to reduce possibility of DoS.
>
> I suggest, also, that 14 (and similarly 15) be defined as two bits:
> 00 - no inputs
> 01 - all inputs
> 10 - current input
> 11 - pop number from stack, fail if >= number of inputs or no stack elems.
>
> Cheers,
> Rusty.
> ___
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>


Re: [bitcoin-dev] TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT

2022-02-07 Thread Rusty Russell via bitcoin-dev
Russell O'Connor via bitcoin-dev  writes:
> Given the overlap in functionality between CTV and ANYPREVOUT, I think it
> makes sense to decompose their operations into their constituent pieces and
> reassemble their behaviour programmatically.  To this end, I'd like to
> instead propose OP_TXHASH and OP_CHECKSIGFROMSTACKVERIFY.
>
> OP_TXHASH would pop a txhash flag from the stack and compute a (tagged)
> txhash in accordance with that flag, and push the resulting hash onto the
> stack.

It may be worth noting that OP_TXHASH can be further decomposed into
OP_TX (and OP_TAGGEDHASH, or just reuse OP_SHA256).

OP_TX would place the concatenated selected fields onto the stack
(rather than hashing them). This is more compact for some tests
(e.g. testing tx version for 2 is "OP_TX(version) 1 OP_EQUALS" vs
"OP_TXHASH(version) 012345678...aabbccddeeff OP_EQUALS"), and also range
testing (e.g. amount less than X or greater than X, or fewer than 3 inputs).

> I believe the difficulties with upgrading TXHASH can be mitigated by
> designing a robust set of TXHASH flags from the start.  For example having
> bits to control whether (1) the version is covered; (2) the locktime is
> covered; (3) txids are covered; (4) sequence numbers are covered; (5) input
> amounts are covered; (6) input scriptpubkeys are covered; (7) number of
> inputs is covered; (8) output amounts are covered; (9) output scriptpubkeys
> are covered; (10) number of outputs is covered; (11) the tapbranch is
> covered; (12) the tapleaf is covered; (13) the opseparator value is
> covered; (14) whether all, one, or no inputs are covered; (15) whether all,
> one or no outputs are covered; (16) whether the one input position is
> covered; (17) whether the one output position is covered; (18) whether the
> sighash flags are covered or not (note: whether or not the sighash flags
> are or are not covered must itself be covered).  Possibly specifying which
> input or output position is covered in the single case and whether the
> position is relative to the input's position or is an absolute position.

These easily map onto OP_TX, "(1) the version is pushed as u32, (2) the
locktime is pushed as u32, ...".

We might want to push SHA256() of scripts instead of scripts themselves,
to reduce possibility of DoS.

I suggest, also, that 14 (and similarly 15) be defined as two bits:
00 - no inputs
01 - all inputs
10 - current input
11 - pop number from stack, fail if >= number of inputs or no stack elems.
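A rough decoding of that two-bit field (a sketch with invented names; the real opcode would consume the script stack directly):

```python
def select_inputs(selector_bits, current_index, num_inputs, stack):
    """Map the proposed two-bit field to the set of input indices covered.

    selector_bits: integer 0..3; stack: list used only in the 0b11 mode.
    Raises ValueError where the script would fail.
    """
    if selector_bits == 0b00:
        return []                          # no inputs
    if selector_bits == 0b01:
        return list(range(num_inputs))     # all inputs
    if selector_bits == 0b10:
        return [current_index]             # current input only
    # 0b11: pop an index from the stack; fail if stack is empty or
    # the index is >= the number of inputs
    if not stack:
        raise ValueError("no stack elements")
    n = stack.pop()
    if n >= num_inputs:
        raise ValueError("index >= number of inputs")
    return [n]
```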

Cheers,
Rusty.


Re: [bitcoin-dev] TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT

2022-02-07 Thread Russell O'Connor via bitcoin-dev
On Mon, Jan 31, 2022 at 8:16 PM Anthony Towns  wrote:

> On Fri, Jan 28, 2022 at 08:56:25AM -0500, Russell O'Connor via bitcoin-dev
> wrote:
> > >
> https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-July/019243.html
> > For more complex interactions, I was imagining combining this TXHASH
> > proposal with CAT and/or rolling SHA256 opcodes.  If TXHASH ended up
> > supporting relative or absolute input/output indexes then users could
> > assemble the hashes of the particular inputs and outputs they care about
> > into a single signed message.
>
> That's certainly possible, but it sure seems overly complicated and
> error prone...
>

Indeed, and we really want something that can be programmed at redemption
time.
That probably involves something like how the historic MULTISIG worked:
having a list of input/output indexes passed in along with length
arguments.

I don't think there will be problems with quadratic hashing here because as
more inputs are listed, the witness in turn grows larger itself.  The number
of stack elements that can be copied is limited by a constant (3DUP).
Certainly care is needed here, but also keep in mind that an OP_HASH256
does a double hash and costs one weight unit.

That said, your SIGHASH_GROUP proposal suggests that some sort of
intra-input communication is really needed, and that is something I would
need to think about.

While normally I'd be hesitant about this sort of feature creep, when we
are talking about doing soft-forks, I really think it makes sense to think
through these sorts of issues (as we are doing here).


> > I don't think there is much in the way of lessons to be drawn from how we
> > see Bitcoin Script used today with regards to programs built out of
> > reusable components.
>
> I guess I think one conclusion we should draw is some modesty in how
> good we are at creating general reusable components. That is, bitcoin
> script looks a lot like a relatively general expression language,
> that should allow you to write interesting things; but in practice a
> lot of it was buggy (OP_VER hardforks and resource exhaustion issues),
> or not powerful enough to actually be interesting, or too complicated
> to actually get enough use out of [0].
>

> TXHASH + CSFSV won't be enough by itself to allow for very interesting
> > programs in Bitcoin Script yet, we still need CAT and friends for that,
>
> "CAT" and "CHECKSIGFROMSTACK" are both things that have been available in
> elements for a while; has anyone managed to build anything interesting
> with them in practice, or are they only useful for thought experiments
> and blog posts? To me, that suggests that while they're useful for
> theoretical discussion, they don't turn out to be a good design in
> practice.
>

Perhaps the lesson to be drawn is that languages should support multiplying
two numbers together.

Having 2/3rd of the language you need to write interesting programs doesn't
mean that you get 2/3rd of the interesting programs written.

But beyond that, there is a lot more to a smart contract than just the
Script.  Dmitry Petukhov has a fleshed out design for Asset based lending
on liquid at https://ruggedbytes.com/articles/ll/, despite the limitations
of (pre-taproot) Elements Script.  But to make it a real thing you need
infrastructure for working with partial transactions, key management, etc.

> but
> > CSFSV is at least a step in that direction.  CSFSV can take arbitrary
> > messages and these messages can be fixed strings, or they can be hashes
> of
> > strings (that need to be revealed), or they can be hashes returned from
> > TXHASH, or they can be locktime values, or they can be values that are
> > added or subtracted from locktime values, or they can be values used for
> > thresholds, or they can be other pubkeys for delegation purposes, or they
> > can be other signatures ... for who knows what purpose.
>
> I mean, if you can't even think of a couple of uses, that doesn't seem
> very interesting to pursue in the near term? CTV has something like half
> a dozen fairly near-term use cases, but obviously those can all be done
> just with TXHASH without a need for CSFS, and likewise all the ANYPREVOUT
> things can obviously be done via CHECKSIG without either TXHASH or CSFS...
>
> To me, the point of having CSFS (as opposed to CHECKSIG) seems to be
> verifying that an oracle asserted something; but for really simply boolean
> decisions, doing that via a DLC seems better in general since that moves
> more of the work off-chain; and for the case where the signature is being
> used to authenticate input into the script rather than just gating a path,
> that feels a bit like a weaker version of graftroot?
>

I didn't really mean this as a list of applications; it was a list of
values that CSFSV composes with. Applications include delegation of pubkeys
and oracles, and, in the presence of CAT and transaction reflection
primitives, presumably many more things.


> I guess I'd still be interested 

Re: [bitcoin-dev] BIP-119 CTV Meeting #3 Draft Agenda for Tuesday February 8th at 12:00 PT

2022-02-07 Thread Jeremy Rubin via bitcoin-dev
Reminder:

This is in ~24 hours.

There have been no requests to add content to the agenda.

Best,

Jeremy
--
@JeremyRubin 


On Wed, Feb 2, 2022 at 12:29 PM Jeremy Rubin 
wrote:

> Bitcoin Developers,
>
> The 3rd instance of the recurring meeting is scheduled for Tuesday
> February 8th at 12:00 PT in channel ##ctv-bip-review in libera.chat IRC
> server.
>
> The meeting should take approximately 2 hours.
>
> The topics proposed to be discussed are agendized below. Please review the
> agenda in advance of the meeting to make the best use of everyone's time.
>
> Please send me any feedback, proposed topic changes, additions, or
> questions you would like to pre-register on the agenda.
>
> I will send a reminder to this list with a finalized Agenda in advance of
> the meeting.
>
> Best,
>
> Jeremy
>
> - Bug Bounty Updates (10 Minutes)
> - Non-Interactive Lightning Channels (20 minutes)
>   + https://rubin.io/bitcoin/2021/12/11/advent-14/
>   + https://utxos.org/uses/non-interactive-channels/
> - CTV's "Dramatic" Improvement of DLCs (20 Minutes)
>   + Summary: https://zensored.substack.com/p/supercharging-dlcs-with-ctv
>   +
> https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-January/019808.html
>   + https://rubin.io/bitcoin/2021/12/20/advent-23/
> - PathCoin (15 Minutes)
>   + Summary: A proposal of coins that can be transferred in an offline
> manner by cleverly pre-compiling chains of transfers.
>   + https://gist.github.com/AdamISZ/b462838cbc8cc06aae0c15610502e4da
>   +
> https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-January/019809.html
> - OP_TXHASH (30 Minutes)
>   + An alternative approach to OP_CTV + APO's functionality via a
> programmable tx hash opcode.
>   + See discussion thread at:
> https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-January/019813.html
> - Emulating CTV for Liquid (10 Minutes)
>   +
> https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-February/019851.html
> - General Discussion (15 Minutes)
>
> Best,
>
> Jeremy
>
>
>
>
> --
> @JeremyRubin 
>


Re: [bitcoin-dev] A suggestion to periodically destroy (or remove to secondary storage for Archiving reasons) dust, Non-standard UTXOs, and also detected burn

2022-02-07 Thread shymaa arafat via bitcoin-dev
On Mon, Feb 7, 2022, 16:44 Billy Tetrud via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> > every lightning network transaction adds one dust UTXO
>
> Could you clarify what you mean here? What dust do lightning transactions
> create?
>
I mean this msg
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-December/019636.html
Even though the writer clarified after my enquiry, I still think it amounts
to the same thing: most of the time only one will be spent. His words:
..
*My statement was technically incorrect, it should have been "most of the
time only one of them is spent".*
*But nothing prevents them to be both spent, or none of them to be spent.*
*They are strictly equivalent, the only difference is the public key that
can sign for them: one of these outputs belongs to you, the other belongs
to your peer.*

*You really cannot distinguish anything when inserting them into the utxo
set, they are perfectly symmetrical and you cannot know beforehand for sure
which one will be spent.*
*You can guess which one will be spent most of the time, but your heuristic
will never be 100% correct, so I don't think it's worth pursuing.*
*.*

>
> I do think that UTXO set size is something that will need to be addressed
> at some point. I liked the idea of utreexo or some other accumulator as the
> ultimate solution to this problem. In the mean time, I kind of agree with
> Eric that outputs unlikely to be spent can easily be stored off ram and so
> I wouldn't expect them to really be much of an issue to keep around. 3
> million utxos is only like 100MB. If software could be improved to move
> dust off ram, that sounds like a good win tho.
>
> On Sun, Feb 6, 2022, 13:14 Eric Voskuil via bitcoin-dev <
> bitcoin-dev@lists.linuxfoundation.org> wrote:
>
>>
>>
>> > On Feb 6, 2022, at 10:52, Pieter Wuille via bitcoin-dev <
>> bitcoin-dev@lists.linuxfoundation.org> wrote:
>> >
>> > 
>> >> Dear Bitcoin Developers,
>> >
>> >> -When I contacted bitInfoCharts to divide the first interval of
>> addresses, they kindly divided it into 3 intervals. From here:
>> >> https://bitinfocharts.com/top-100-richest-bitcoin-addresses.html
>> >> -You can see that there are more than 3.1m addresses holding ≤
>> 0.00001 BTC (1000 Sat) with total value of 14.9BTC; an average of 473 Sat
>> per address.
>> >
>> >> -Therefore, a simple solution would be to follow the difficulty
>> adjustment idea and just delete all those
>> >
>> > That would be a soft-fork, and arguably could be considered theft.
>> While commonly (but non-universally) implemented standardness rules may
>> prevent spending them currently, there is no requirement that such a rule
>> remain in place. Depending on how feerate economics work out in the future,
>> such outputs may not even remain uneconomical to spend. Therefore, dropping
>> them entirely from the UTXO set is potentially destroying potentially
>> useful funds people own.
>> >
>> >> or at least remove them to secondary storage
>> >
>> > Commonly adopted Bitcoin full nodes already have two levels of storage
>> effectively (disk and in-RAM cache). It may be useful to investigate using
>> amount as a heuristic about what to keep and how long. IIRC, not every
>> full node implementation even uses a UTXO model.
>>
>> You recall correctly. Libbitcoin has never used a UTXO store. A full node
>> has no logical need for an additional store of outputs, as transactions
>> already contain them, and a full node requires all of them, spent or
>> otherwise.
>>
>> The hand-wringing over UTXO set size does not apply to full nodes, it is
>> relevant only to pruning. Given linear worst case growth, even that is
>> ultimately a non-issue.
>>
>> >> for Archiving with extra cost to get them back, along with
>> non-standard UTXOs and Burned ones (at least for publicly known, published,
>> burn addresses).
>> >
>> > Do you mean this as a standardness rule, or a consensus rule?
>> >
>> > * As a standardness rule it's feasible, but it makes policy (further)
>> deviate from economically rational behavior. There is no reason for miners
>> to require a higher price for spending such outputs.
>> > * As a consensus rule, I expect something like this to be very
>> controversial. There are currently no rules that demand any minimal fee for
>> anything, and given uncertainty over how fee levels could evolve in the
>> future, it's unclear what those rules, if any, should be.
>> >
>> > Cheers,
>> >
>> > --
>> > Pieter
>> >

Re: [bitcoin-dev] A suggestion to periodically destroy (or remove to secondary storage for Archiving reasons) dust, Non-standard UTXOs, and also detected burn

2022-02-07 Thread Billy Tetrud via bitcoin-dev
> every lightning network transaction adds one dust UTXO

Could you clarify what you mean here? What dust do lightning transactions
create?

I do think that UTXO set size is something that will need to be addressed
at some point. I liked the idea of utreexo or some other accumulator as the
ultimate solution to this problem. In the mean time, I kind of agree with
Eric that outputs unlikely to be spent can easily be stored off ram and so
I wouldn't expect them to really be much of an issue to keep around. 3
million utxos is only like 100MB. If software could be improved to move
dust off ram, that sounds like a good win tho.

On Sun, Feb 6, 2022, 13:14 Eric Voskuil via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

>
>
> > On Feb 6, 2022, at 10:52, Pieter Wuille via bitcoin-dev <
> bitcoin-dev@lists.linuxfoundation.org> wrote:
> >
> > 
> >> Dear Bitcoin Developers,
> >
> >> -When I contacted bitInfoCharts to divide the first interval of
> addresses, they kindly divided it into 3 intervals. From here:
> >> https://bitinfocharts.com/top-100-richest-bitcoin-addresses.html
> >> -You can see that there are more than 3.1m addresses holding ≤ 0.00001
> BTC (1000 Sat) with total value of 14.9BTC; an average of 473 Sat per
> address.
> >
> >> -Therefore, a simple solution would be to follow the difficulty
> adjustment idea and just delete all those
> >
> > That would be a soft-fork, and arguably could be considered theft. While
> commonly (but non-universally) implemented standardness rules may prevent
> spending them currently, there is no requirement that such a rule remain in
> place. Depending on how feerate economics work out in the future, such
> outputs may not even remain uneconomical to spend. Therefore, dropping them
> entirely from the UTXO set is potentially destroying potentially useful
> funds people own.
> >
> >> or at least remove them to secondary storage
> >
> > Commonly adopted Bitcoin full nodes already have two levels of storage
> effectively (disk and in-RAM cache). It may be useful to investigate using
> amount as a heuristic about what to keep and how long. IIRC, not every
> full node implementation even uses a UTXO model.
>
> You recall correctly. Libbitcoin has never used a UTXO store. A full node
> has no logical need for an additional store of outputs, as transactions
> already contain them, and a full node requires all of them, spent or
> otherwise.
>
> The hand-wringing over UTXO set size does not apply to full nodes, it is
> relevant only to pruning. Given linear worst case growth, even that is
> ultimately a non-issue.
>
> >> for Archiving with extra cost to get them back, along with non-standard
> UTXOs and Burned ones (at least for publicly known, published, burn
> addresses).
> >
> > Do you mean this as a standardness rule, or a consensus rule?
> >
> > * As a standardness rule it's feasible, but it makes policy (further)
> deviate from economically rational behavior. There is no reason for miners
> to require a higher price for spending such outputs.
> > * As a consensus rule, I expect something like this to be very
> controversial. There are currently no rules that demand any minimal fee for
> anything, and given uncertainty over how fee levels could evolve in the
> future, it's unclear what those rules, if any, should be.
> >
> > Cheers,
> >
> > --
> > Pieter
> >


Re: [bitcoin-dev] Improving RBF Policy

2022-02-07 Thread Gloria Zhao via bitcoin-dev
Hi everyone,

Thanks for giving your attention to the post! I haven't had time to write
responses to everything, but sending my thoughts about what has been most
noteworthy to me:

@jeremy:
> A final point is that a verifiable delay function could be used over,
e.g., each of the N COutpoints individually to rate-limit transaction
replacement. The VDF period can be made shorter / eliminated depending on
the feerate increase.

Thanks for the suggestion! In general, I don't think rate limiting by
outpoint/prevout is a safe option, as it is particularly dangerous for L2
applications with shared prevouts. For example, the prevout that LN channel
counterparties conflict on is the output from their shared funding tx. Any
kind of limit on spending this prevout can be monopolized by a spammy
attacker. For example, if you only allow 1 per minute, the attacker will
just race to take up that slot every minute to prevent the honest party's
transaction from being accepted.
This is similar to the pinning attack based on monopolizing the
transaction's descendant limit, except we can't carve out an exemption
because we wouldn't know whose replacement we're looking at.

@tbast:
> The way I understand it, limiting the impact on descendant transactions
is only important for DoS protection, not for incentive compatibility.

> I believe it's completely ok to require increasing both the fees and
feerate if we don't take descendants into account, because you control your
ancestor set - whereas the descendant set may be completely out of your
control.

Ignoring descendants of direct conflicts would certainly make our lives
much easier! Unfortunately, I don't think we can do this since they can be
fee bumps, as in AJ's example. Considering descendants is important for
both incentive compatibility and DoS.
If the replacement transaction has a higher feerate than its direct
conflict, but the direct conflict also has high feerate descendants, we
might end up with lower fees and/or feerates by accepting the replacement.
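A minimal sketch of that comparison (hypothetical data layout; a real mempool must also handle overlapping descendant sets, ancestor packages, and eviction limits):

```python
def replacement_improves_mempool(replacement, direct_conflicts):
    """Compare a replacement against its conflicts *including descendants*.

    Each tx is a dict with 'fee' (sats) and 'size' (vbytes); each conflict
    carries the list of descendant txs that would be evicted with it.
    """
    evicted_fee = evicted_size = 0
    for conflict in direct_conflicts:
        for tx in [conflict] + conflict['descendants']:
            evicted_fee += tx['fee']
            evicted_size += tx['size']
    # Require the replacement to pay more total fees *and* a higher feerate
    # than everything it evicts, so miners don't lose out.
    higher_fee = replacement['fee'] > evicted_fee
    higher_feerate = (replacement['fee'] / replacement['size']
                      > evicted_fee / evicted_size)
    return higher_fee and higher_feerate
```

Note how a high-feerate descendant of a low-feerate direct conflict raises the bar the replacement must clear.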

@aj:
> I wonder sometimes if it could be sufficient to just have a relay rate
limit and prioritise by ancestor feerate though. Maybe something like:
>
> - instead of adding txs to each peers setInventoryTxToSend immediately,
>   set a mempool flag "relayed=false"
>
> - on a time delay, add the top N (by fee rate) "relayed=false" txs to
>   each peer's setInventoryTxToSend and mark them as "relayed=true";
>   calculate how much kB those txs were, and do this again after
>   SIZE/RATELIMIT seconds
>
> - don't include "relayed=false" txs when building blocks?

Wow cool! I think outbound tx relay size-based rate-limiting and
prioritizing tx relay by feerate are great ideas for preventing spammers
from wasting bandwidth network-wide. I agree, this would slow the low
feerate spam down, preventing a huge network-wide bandwidth spike. And it
would allow high feerate transactions to propagate as they should,
regardless of how busy traffic is. Combined with inbound tx request
rate-limiting, might this be sufficient to prevent DoS regardless of the
fee-based replacement policies?

One point that I'm not 100% clear on: is it ok to prioritize the
transactions by ancestor feerate in this scheme? As I described in the
original post, this can be quite different from the actual feerate we would
consider a transaction in a block for. The transaction could have a high
feerate sibling bumping its ancestor.
For example, A (1sat/vB) has 2 children: B (49sat/vB) and C (5sat/vB). If
we just received C, it would be incorrect to give it a priority equal to
its ancestor feerate (3sat/vB) because if we constructed a block template
now, B would bump A, and C's new ancestor feerate is 5sat/vB.
Then, if we imagine that top N is >5sat/vB, we're not relaying C. If we
also exclude C when building blocks, we're missing out on good fees.
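Working through the numbers of that example (assuming, for illustration, that all three transactions are 100 vB each):

```python
def ancestor_feerate(tx, ancestors):
    """Feerate of a tx taken together with its unconfirmed ancestors."""
    txs = [tx] + ancestors
    return sum(t['fee'] for t in txs) / sum(t['size'] for t in txs)

# A pays 1 sat/vB; its children are B (49 sat/vB) and C (5 sat/vB).
A = {'fee': 100, 'size': 100}
B = {'fee': 4900, 'size': 100}
C = {'fee': 500, 'size': 100}

# Naive ancestor feerate of C counts A: (100 + 500) / 200 = 3 sat/vB.
assert ancestor_feerate(C, [A]) == 3.0
# But B alone already bumps A into the block at (4900 + 100) / 200 = 25 sat/vB,
# so with A "paid for" by B, C's effective feerate is its own 5 sat/vB.
assert ancestor_feerate(B, [A]) == 25.0
assert C['fee'] / C['size'] == 5.0
```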

> - keep high-feerate evicted txs around for a while in case they get
>   mined by someone else to improve compact block relay, a la the
>   orphan pool?

Replaced transactions are already added to vExtraTxnForCompact :D

@ariard:
> Deployment of Taproot opens interesting possibilities in the
> vaults/payment channels design space, where the tapscripts can commit to
> different set of timelocks/quorum of keys. Even if the pre-signed states
> stay symmetric, whoever is the publisher, the feerate cost to spend can
> fluctuate.

Indeed, perhaps with taproot we may legitimately have
same-txid-different-witness transactions as a normal thing rather than a
rare edge case. But as with everything enabled by taproot, I wouldn't count our
tapscript eggs until a concrete use case hatches and/or an application
actually implements it.

> How this new replacement rule would behave if you have a parent in the
> "replace-by-feerate" half but the child is in the "replace-by-fee" one?

Thanks for considering my suggestion! This particular scenario is not
possible, since a child cannot be considered for the next block without its
parent. But if th

Re: [bitcoin-dev] Improving RBF Policy

2022-02-07 Thread Bastien TEINTURIER via bitcoin-dev
Good morning,

> The tricky question is what happens when X arrives on its own and it
> might be that no one ever sends a replacement for B,C,D)

It feels ok to me, but this is definitely arguable.

It covers the fact that B,C,D could have been fake transactions whose
sole purpose was to do a pinning attack: in that case the attacker would
have found a way to ensure these transactions don't confirm anyway (or
pay minimal/negligible fees).

If these transactions were legitimate, I believe that their owners would
remake them at some point (because these transactions reflect a business
relationship that needed to happen, so it should very likely still
happen). It's probably hard to verify because the new corresponding
transactions may have nothing in common with the first, but I think the
simplifications it offers for wallets are worth it (which is just my
opinion and needs more scrutiny/feedback).

> But if your backlog's feerate does drop off, *and* that matters, then
> I don't think you can ignore the impact of the descendent transactions
> that you might not get a replacement for.

That is because you're only taking the current backlog into account, and
not the fact that new items will soon be added to it to replace the
evicted descendants. But I agree that this is a bet: we can't predict the
future and guarantee these replacements will come.

It is really a trade-off: ignoring descendants provides a much simpler
contract that doesn't vary from one mempool to another, but when your
backlog isn't full enough, you may lose some future profits if those
transactions don't come in later.

> I think "Y% higher" rather than just "higher" is only useful for
> rate-limiting, not incentive compatibility. (Though maybe it helps
> stabilise a greedy algorithm in some cases?)

That's true. I claimed these policies only address incentives, but using
a percentage increase addresses rate-limiting a bit as well (I couldn't
resist trying to do at least something for it!). I find it a very easy
mechanism to implement, while choosing an absolute value is hard (it's
always easier to think in relatives than absolutes).
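Concretely, the relative check is a one-liner (the 10% figure below is an arbitrary illustration, not a proposed constant, and the function name is made up for the sketch):

```python
MIN_INCREASE_PCT = 10  # illustrative only; picking the value is the hard part

def accepts_replacement(old_feerate, new_feerate, pct=MIN_INCREASE_PCT):
    """Require the replacement to beat the old feerate by a relative margin.

    A multiplicative step rate-limits replacements: climbing from feerate a
    to feerate b permits at most log(b/a) / log(1 + pct/100) replacements,
    rather than one per extra satoshi under a merely-higher rule.
    """
    # Integer-friendly form of: new >= old * (1 + pct/100)
    return new_feerate * 100 >= old_feerate * (100 + pct)
```

With a 10% step, for example, going from 1 sat/vB to 100 sat/vB allows at most ~48 replacements (log(100)/log(1.1)).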

> This is why I think it is important to understand the rationales for
> introducing the rules in the first place

I completely agree. As you mentioned, we are still in the brainstorming
phase; once (if?) we start to converge on what better policies could be,
we do need to clearly explain each policy's expected goal. That will let
a future Bastien writing code in 2030 clearly highlight why the 2022 rules
don't make sense anymore!

Cheers,
Bastien

On Sat, Feb 5, 2022 at 14:22, Michael Folkson
wrote:

> Thanks for this Bastien (and Gloria for initially posting about this).
>
> I sympathetically skimmed the eclair PR (
> https://github.com/ACINQ/eclair/pull/2113) dealing with replaceable
> transactions fee bumping.
>
> There will continue to be a (hopefully) friendly tug of war on this
> probably for the rest of Bitcoin's existence. I am sure people like Luke,
> Prayank etc will (rightfully) continue to raise that Lightning and other
> second layer protocols shouldn't demand that policy rules be changed if
> there is a reason (e.g. DoS vector) for those rules on the base network.
> But if there are rules that have no upside, that introduce unnecessary
> complexity for no reason, and that make life miserable for Lightning
> implementers like Bastien attempting to deal with them, I really hope we
> can make progress on removing or simplifying them.
>
> This is why I think it is important to understand the rationales for
> introducing the rules in the first place (and why it is safe to remove them
> if indeed it is) and being as rigorous as possible on the rationales for
> introducing additional rules. It sounds from Gloria's initial post like we
> are still at a brainstorming phase (which is fine), but knowing what we know
> today I really hope we can learn from the mistakes of the original BIP 125,
> namely the Core implementation not matching the BIP and the sparse
> rationales for the rules. As Bastien says this is not criticizing the
> original BIP 125 authors, 7 years is a long time especially in Bitcoin
> world and they probably weren't thinking about Bastien sitting down to
> write an eclair PR in late 2021 (and reviewers of that PR) when they wrote
> the BIP in 2015.
>
> --
> Michael Folkson
> Email: michaelfolkson at protonmail.com
> Keybase: michaelfolkson
> PGP: 43ED C999 9F85 1D40 EAF4 9835 92D6 0159 214C FEE3
>
>
>
> --- Original Message ---
> On Monday, January 31st, 2022 at 3:57 PM, Bastien TEINTURIER via
> bitcoin-dev  wrote:
>
> Hi Gloria,
>
> Many thanks for raising awareness on these issues and constantly pushing
> towards finding a better model. This work will highly improve the
> security of any multi-party contract trying to build on top of bitcoin
> (because most multi-party contracts will need to have timeout conditions
> and participants will need to make some transactions confirm before