Re: [bitcoin-dev] Covenants and feebumping

2022-03-12 Thread Jeremy Rubin via bitcoin-dev
Hi Antoine,

I have a few high level thoughts on your post comparing these types of
primitive to an explicit soft fork approach:

1) Transaction sponsors *is* a type of covenant. Precisely, it is very
similar to an "Impossible Input" covenant in conjunction with a "IUTXO" I
defined in my 2017 workshop
https://rubin.io/public/pdfs/multi-txn-contracts.pdf (I know, I know...
self citation, not cool, but helps with context).

However, for Sponsors itself we optimize the properties of how it works &
is represented, as well as "tighten the hatches" on binding to a specific TX
vs merely a spend of the outputs (which wouldn't work as well with APO).

Perhaps thinking of something like sponsors as a form of covenant, rather
than a special purpose thing, is helpful?

There's a lot you could do with a general "observe other txns in {this
block, the chain}" primitive. The catch is that for sponsors we don't
*care* to enable people to use this as a "smart contracting primitive", we
want to use it for fee bumping. So we don't care about programmability, we
care about being able to use the covenant to bump fees.

2) On Chain Efficiency.


A) Precommitted Levels
As you've noted, an approach like precommitting to different fee levels might
work, but has substantial costs.

However, with sponsors, the minimum viable version of this (not quite what
is spec'd in my prior email, but it could be done this way if we care to
optimize for bytes) would require 1 in and 1 out with only 32 bytes extra.
So that's around 40 bytes outpoint + 64 bytes signature + 40 bytes output +
32 bytes metadata = 176 bytes per bump. Bumps in this way can also
amortize, so bumping >1 txn at the same time approaches a limit of
32 bytes + 144/n bytes per bumped transaction. You can imagine cases
where this might be popular, like "close >1 of my LN channels" or "start
withdrawals for 5 of my JamesOB vaulted coins"
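
A quick back-of-the-envelope sketch of that amortization (Python; the byte
counts are the rough figures above, not exact serialized sizes):

    # Rough per-bump size, using the approximate byte counts above.
    OUTPOINT, SIG, OUTPUT, METADATA = 40, 64, 40, 32

    def bump_vbytes(n_sponsored: int) -> float:
        """Approximate bytes per bumped txn when one sponsor bumps n txns."""
        total = OUTPOINT + SIG + OUTPUT + METADATA * n_sponsored
        return total / n_sponsored

    print(bump_vbytes(1))  # ~176 bytes for a single bump
    print(bump_vbytes(5))  # ~60.8 bytes each when amortized over 5 txns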

B) Fancy(er) Covenants

We might also have something with OP_CAT and CSFS where bumps are done as
some sort of covenant-y thing that lets you arbitrarily rewrite
transactions.

Not too much to say other than that it is difficult to get these down in
size as the scripts become more complex, not to mention the (hotly
discussed of late) ramifications of those covenants more generally.

Absent a concrete fancy covenant with fee bumping, I can't comment.

3) On Capital Efficiency

Something like a precommitted or covenant fee bump requires the fee capital
to be pre-committed inside the UTXO, whereas for something like Sponsors
you can use capital you get sometime later. In certain models -- e.g.,
channels -- where you might expect only log(N) of your channels to fail in
a given epoch, you don't need to allocate as much capital as if you were to
have to do it in-band. This is also true for vaults where you know you only
want to open 1 per month let's say, and not one per outstanding vault per
month, which pre-committing requires.
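
A toy illustration of the difference in locked-up capital (Python; the channel
count, fee reserve, and log base are made-up assumptions):

    import math

    N_CHANNELS = 1024       # hypothetical number of open channels
    FEE_RESERVE = 50_000    # sats assumed sufficient to bump one close

    # Pre-committed/in-band: every channel locks its own reserve up front.
    in_band = N_CHANNELS * FEE_RESERVE

    # Sponsors: only the ~log2(N) channels you expect to force-close in an
    # epoch need capital, and only once the failure actually happens.
    sponsors = int(math.log2(N_CHANNELS)) * FEE_RESERVE

    print(in_band, sponsors)  # 51_200_000 vs 500_000 sats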

4) On Protocol Design

It's nice that you can abstract away your protocol design concerns as a
"second tier composition check" v.s. having to modify your protocol to work
with a fee bumping thing.

There are a myriad of ways dynamic txns (e.g. for Eltoo) can lead to RBF
pinning and similar; Sponsor-type things allow you to design such protocols
to not have any native way of paying for fees inside the actual
"Transaction Intents" and use an external system to create the intended
effect. It seems (to me) more robust that we can prove that a Sponsors
mechanism allows any transaction -- regardless of covenant stuff, bugs,
pinning, etc -- to move forward.

Still... careful protocol design may permit the use of optimized
constructions! For example, in a vault rather than assigning *no fee* maybe
you can have a single branch with a reasonable estimated fee. If you are
correct or overshot (let's say 50% chance?) then you don't need to add a
sponsor. If you undershot, not to worry, just add a sponsor. Adopted
broadly, this would cut the expected need to use sponsors roughly in half
(under that 50% assumption). This basically enables all protocols to try to
be more efficient, but backstop that with a guaranteed-to-work safe mechanism.
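
As a rough expected-cost sketch (Python; the probability and sizes are
illustrative assumptions, not measurements):

    P_ESTIMATE_OK = 0.5    # assumed chance the pre-committed fee branch suffices
    SPONSOR_BYTES = 176    # rough single-bump size from section 2A above

    # Expected extra on-chain bytes per close when sponsors are only a backstop.
    expected_overhead = (1 - P_ESTIMATE_OK) * SPONSOR_BYTES
    print(expected_overhead)  # 88.0 bytes on average, vs 176 if every close bumped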



There was something else I was going to say but I forgot about it... if it
comes to me I'll send a follow up email.

Cheers,

Jeremy

p.s.


> Of course this makes for a perfect DoS: it would be trivial for a miner
> to infer that you are using a specific vault standard and guess other leaves
> and replace the witness to use the highest-feerate spending path. You could
> require a signature from any of the participants. Or, at the cost of an
> additional depth in the tree, you could "salt" each leaf by pairing it with
> -say- an OP_RETURN leaf.



you don't need a salt, you just need a unique payout addr (e.g. hardened
derivation) per revocation txn, and then the branch cannot be guessed.

--
@JeremyRubin 

On Sat, Mar 12, 2022 at 10:34 AM darosior via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> The idea o

Re: [bitcoin-dev] Speedy Trial

2022-03-12 Thread Billy Tetrud via bitcoin-dev
>  If I find out I'm in the economic minority then I have little choice but
to either accept the existence of the new rules or sell my Bitcoin

I do worry about what I have called a "dumb majority soft fork". This is
where, say, mainstream adoption has happened, and some crisis of sufficient
magnitude convinces a lot of people that something needs to change now. Let's
say it's another congestion period where fees spike for months. Getting
into and out of lightning is hard, and maybe even lightning's security model
is called into question because it would either take too long to get a
transaction on chain or be too expensive. Panicky people might once again
think something like "let's increase the block size to 1GB, then we'll never
have this problem again". This could happen in a segwit-like soft fork.

In a future where Bitcoin is the dominant world currency, it might not be
unrealistic to imagine that an economic majority might not understand why
such a thing would be so dangerous, or think the risk is low enough to be
worth it. At that point, we in the economic minority would need a plan to
hard fork away. One wouldn't necessarily need to sell all their majority
fork Bitcoin, but they could.

That minority fork would of course need some mining power. How much? I
don't know, but we should think about how small of a minority chain we
could imagine might be worth saving. Is 5% enough? 1%? How long would the
chain stall if hash power dropped to 1%?

TBH I give the world a ~50% chance that something like this happens in the
next 100 years. Maybe Bitcoin will ossify and we'll lose all the people
that had deep knowledge on these kinds of things because almost no one's
actively working on it. Maybe the crisis will be too much for people to
remain rational and think long term. Who knows? But I think that at some
point it will become dangerous if there isn't a well-discussed, well-vetted
plan for what to do in such a scenario. Maybe we can think about that 10
years from now, but we probably shouldn't wait much longer than that. And
maybe it's as simple as: tweak the difficulty recalculation and then just
release a soft-fork-aware Bitcoin version that rejects the new rules or
rejects a specific existing post-soft-fork block. Would it be that simple?

On Sat, Mar 12, 2022, 07:35 Russell O'Connor via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> On Fri, Mar 11, 2022 at 9:03 AM Jorge Timón  wrote:
>
>>
>> A major contender to the Speedy Trial design at the time was to mandate
>>> eventual forced signalling, championed by luke-jr.  It turns out that, at
>>> the time of that proposal, a large amount of hash power simply did not have
>>> the firmware required to support signalling.  That activation proposal
>>> never got broad consensus, and rightly so, because in retrospect we see
>>> that the design might have risked knocking a significant fraction of mining
>>> power offline if it had been deployed.  Imagine if the firmware couldn't be
>>> quickly updated or imagine if the problem had been hardware related.
>>>
>>
 Yes, I like this solution too, with a little caveat: an easy mechanism
 for users to actively oppose a proposal.
 Luke also talked about this.
 If users oppose, they should use activation as a trigger to fork out of
 the network by invalidating the block that produces activation.
 The bad scenario here is that miners want to deploy something but users
 don't want to.
 "But that may lead to a fork". Yeah, I know.
 I hope imagining a scenario in which developers propose something that
 most miners accept but some users reject is not taboo.

>>>
>>> This topic is not taboo.
>>>
>>> There are a couple of ways of opting out of taproot.  Firstly, users can
>>> just not use taproot.  Secondly, users can choose to not enforce taproot
>>> either by running an older version of Bitcoin Core or otherwise forking the
>>> source code.  Thirdly, if some users insist on a chain where taproot is
>>> "not activated", they can always softk-fork in their own rule that
>>> disallows the version bits that complete the Speedy Trial activation
>>> sequence, or alternatively soft-fork in a rule to make spending from (or
>>> to) taproot addresses illegal.
>>>
>>
>> Since it's about activation in general and not about taproot
>> specifically, your third point is the one that applies.
>> Users could have coordinated to have "activation x" never activated in
>> their chains if they simply make a rule that activating a given proposal
>> (with bip8) is forbidden in their chain.
>> But coordination requires time.
>>
>
> A mechanism of soft-forking against activation exists.  What more do you
> want? Are we supposed to write the code on behalf of this hypothetical
> group of users who may or may not exist for them just so that they can have
> a node that remains stalled on Speedy Trial lockin?  That simply isn't
> reasonable, but if you think it is, I invite you to cre

[bitcoin-dev] Covenants and feebumping

2022-03-12 Thread darosior via bitcoin-dev
The idea of a soft fork to fix dynamic fee bumping was recently put back on the 
table. It might
sound radical, as what prevents today reasonable fee bumping for contracts with 
presigned
transactions (pinning) has to do with nodes' relay policy. But the frustration 
is understandable
given the complexity of designing fee bumping with today's primitives. [0]
Recently too, there was a lot of discussions around covenants. Covenants 
(conceptually, not talking
about any specific proposal) seem to open lots of new use cases and to be 
desired by (some?) Bitcoin
application developers and users.
I think that fee bumping using covenants has attractive properties, and it 
requires a soft fork that
is already desirable beyond (trying) to fix fee bumping. However I could not
come up with a solution as neat for protocols other than vaults. I'd like to
hear from others about 1)
taking this route for
fee bumping 2) better ideas on applying this to other protocols.


In a vault construction you have a UTxO which can only be spent by an 
Unvaulting transaction, whose
output triggers a timelock before the expiration of which a revocation 
transaction may be confirmed.
The revocation transaction being signed in advance (typically before sharing 
the signature for the
Unvault transaction), you need fee bumping in order for the contract to actually
be enforceable.

Now, with a covenant you could commit to the revocation tx instead of 
presigning it. And using a
Taproot tree you could commit to different versions of it with increasing 
feerate. Any network
monitor (the broadcaster, a watchtower, ...) would be able to RBF the
revocation transaction, if it doesn't confirm, by spending through a leaf that
commits to a higher-feerate version of it.
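
As a rough illustration of that ladder (Python pseudocode; the committed
feerates, leaf count and selection logic are placeholders, not a concrete
covenant proposal):

    # Hypothetical: commit to one version of the revocation tx per taproot
    # leaf, each paying a higher feerate. A watchtower picks the cheapest
    # committed version expected to clear the current market, and RBFs upward.
    COMMITTED_FEERATES = [2, 5, 10, 25, 50, 100, 250]  # sat/vb, fixed at setup

    def leaf_to_broadcast(current_feerate_estimate: float) -> int:
        """Index of the lowest-feerate leaf expected to confirm."""
        for i, feerate in enumerate(COMMITTED_FEERATES):
            if feerate >= current_feerate_estimate:
                return i
        return len(COMMITTED_FEERATES) - 1  # fall back to the highest level

    print(leaf_to_broadcast(20))  # -> 3, i.e. the 25 sat/vb leaf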

Of course this makes for a perfect DoS: it would be trivial for a miner to 
infer that you are using
a specific vault standard and guess other leaves and replace the witness to use 
the highest-feerate
spending path. You could require a signature from any of the participants. Or, 
at the cost of an additional depth in the tree, you could "salt" each leaf by
pairing it with
-say- an OP_RETURN leaf.
But this leaves you with a possible internal blackmail for multi-party 
contracts (although it's less
of an issue for vaults, and not one for single-party vaults).
What you could do instead is attaching an increasing relative timelock to each 
leaf (as the committed
revocation feerate increases, so does the timelock). You need to be careful
not to wreck miner incentives here (see [0], [1], [2] on "miner harvesting"),
but this enables the
nice property of a
feerate which "adapts" to the block space market. Another nice property of this 
approach is the
integrated anti fee sniping protection if the revocation transaction pays a 
non-trivial amount of
fees.
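
A sketch of what such a schedule might look like (Python; the specific
feerate-to-timelock mapping below is only an example):

    # Hypothetical schedule: the higher a leaf's committed feerate, the longer
    # the relative timelock before it becomes spendable, so expensive bumps
    # only become available once cheaper ones have failed to confirm.
    SCHEDULE = [
        # (feerate in sat/vb, relative timelock in blocks)
        (2,    0),
        (10,   6),
        (50,  24),
        (250, 72),
    ]

    def usable_leaves(blocks_waited: int):
        """Leaves whose relative timelock has expired after waiting this long."""
        return [(fr, tl) for fr, tl in SCHEDULE if tl <= blocks_waited]

    print(usable_leaves(10))  # [(2, 0), (10, 6)] -> can bump up to 10 sat/vb so far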

Paying fees from "shared" funds instead of a per-watchtower fee-bumping wallet 
opened up the
blackmail from the previous section, but the benefits of paying from internal 
funds shouldn't be
understated.
No need to decide on an amount to be refilled. No need to bother the user to 
refill the fee-bumping
wallet (before they can participate in more contracts, or worse before a 
deadline at which all
contracts are closed). No need for a potentially large amount of funds to just 
sit on a hot wallet
"just in case". No need to duplicate this amount as you replicate the number of 
network monitors
(which is critical to the security of such contracts).
In addition, note how modifying the feerate of the revocation transaction in 
place is less expensive
than adding a (pair of) new input (and output), let alone adding an entire new 
transaction to CPFP.
As an aside, and less importantly, it can be made to work with today's relay rules
(just use fee thresholds
adapted to the current RBF thresholds, potentially with some leeway to account 
for policy changes).
Paying from shared funds (in addition to paying from internal funds) also 
prevents perverse incentives for contracts with more than 2 parties. In case
one of the parties breaches it, all remaining parties have an incentive to
enforce the contract. But only one would otherwise pay for it! It would open
the door to some potential sneaky techniques to wait for another party to pay
for the fees, which is at odds with the reactive security model.

Let's examine how it could be concretely designed. Say you have vault wallet
software for a setup with 5 participants. The revocation delay is 144 blocks.
You assume revocations to be infrequent (if one happens it's probably a
misconfigured watchtower that needs to be fixed before the next
unvaulting), so you can afford infrequent overpayments and larger fee 
thresholds. Participants
assume the vault will be spent within a year and assume a maximum possible 
feerate for this year of
10ksat/vb.
They create a Taproot tree of depth 7. First leaf is the spending path (open to 
whomever the vault
pays after the 144 blocks). Then the leaf `i` for `i

Re: [bitcoin-dev] Speedy Trial

2022-03-12 Thread pushd via bitcoin-dev
> A mechanism of soft-forking against activation exists. What more do you
want? Are we supposed to write the code on behalf of this hypothetical
group of users who may or may not exist for them just so that they can have
a node that remains stalled on Speedy Trial lockin? That simply isn't
reasonable, but if you think it is, I invite you to create such a fork.
I want BIP 8. And fewer invitations to fork or provoke people.

> If I believe I'm in the economic majority then I'll just refuse to upgrade
my node, which was option 2. I don't know why you dismissed it.

> Not much can prevent a miner cartel from enforcing rules that users don't
want other than hard forking a replacement POW. There is no effective
difference between some developers releasing a malicious soft-fork of
Bitcoin and the miners releasing a malicious version themselves. And when
the miner cartel forms, they aren't necessarily going to be polite enough
to give a transparent signal of their new rules.

Miners get paid irrespective of the rules as long as the subsidy doesn't
change. You can affect their fees, and Bitcoin itself, and that should be
termed an attack on Bitcoin.

> However, without the
economic majority enforcing their set of rules, the cartel continuously
risks falling apart from the temptation of transaction fees of the censored
transactions.

Transaction fees aren't as expected even if we keep the censored transactions
in a censorship-resistant network. If a cartel of developers affects them in
the long term, there will come a time when nobody wants to mine at a loss or
for less profit.

> Look, you cannot have the perfect system of money all by your
lonesome self.

I agree with this and I can't do the same thing with my local government.

pushd
---
parallel lines meet at infinity?


Re: [bitcoin-dev] Removing the Dust Limit

2022-03-12 Thread vjudeu via bitcoin-dev
> We should remove the dust limit from Bitcoin.

Any node operator can do that. Just put "dustrelayfee=0" in your bitcoin.conf.

And there is more: you can also conditionally allow free transactions:

mintxfee=0.0001
minrelaytxfee=0
blockmintxfee=0

Then, when using getblocktemplate you will get the highest-fee transactions
first anyway, and cheap or free transactions are included at the end, if
there is enough room for them.

So, all of those settings are in the hands of node operators; there is no need
to change the source code, all you need to do is convince node operators to
change their settings.


On 2021-08-08 20:53:28 user Jeremy via bitcoin-dev 
 wrote:
We should remove the dust limit from Bitcoin. Five reasons:


1) it's not our business what outputs people want to create
2) dust outputs can be used in various authentication/delegation smart contracts
3) dust sized htlcs in lightning 
(https://bitcoin.stackexchange.com/questions/46730/can-you-send-amounts-that-would-typically-be-considered-dust-through-the-light)
 force channels to operate in a semi-trusted mode which has implications 
(AFAIU) for the regulatory classification of channels in various jurisdictions; 
agnostic treatment of fund transfers would simplify this (like getting a 0.01 
cent dividend check in the mail)
4) thinly divisible colored coin protocols might make use of sats as value 
markers for transactions.
5) should we ever do confidential transactions we can't prevent it without 
compromising privacy / allowed transfers


The main reasons I'm aware of to not allow dust creation are:


1) dust is spam
2) dust fingerprinting attacks


1 is (IMO) not valid given the 5 reasons above, and 2 is preventable by
well-behaved wallets not redeeming outputs that cost more in fees than they
are worth.


cheers,


jeremy




Re: [bitcoin-dev] Speedy Trial

2022-03-12 Thread Russell O'Connor via bitcoin-dev
On Fri, Mar 11, 2022 at 9:03 AM Jorge Timón  wrote:

>
> A major contender to the Speedy Trial design at the time was to mandate
>> eventual forced signalling, championed by luke-jr.  It turns out that, at
>> the time of that proposal, a large amount of hash power simply did not have
>> the firmware required to support signalling.  That activation proposal
>> never got broad consensus, and rightly so, because in retrospect we see
>> that the design might have risked knocking a significant fraction of mining
>> power offline if it had been deployed.  Imagine if the firmware couldn't be
>> quickly updated or imagine if the problem had been hardware related.
>>
>
>>> Yes, I like this solution too, with a little caveat: an easy mechanism
>>> for users to actively oppose a proposal.
>>> Luke also talked about this.
>>> If users oppose, they should use activation as a trigger to fork out of
>>> the network by invalidating the block that produces activation.
>>> The bad scenario here is that miners want to deploy something but users
>>> don't want to.
>>> "But that may lead to a fork". Yeah, I know.
>>> I hope imagining a scenario in which developers propose something that
>>> most miners accept but some users reject is not taboo.
>>>
>>
>> This topic is not taboo.
>>
>> There are a couple of ways of opting out of taproot.  Firstly, users can
>> just not use taproot.  Secondly, users can choose to not enforce taproot
>> either by running an older version of Bitcoin Core or otherwise forking the
>> source code.  Thirdly, if some users insist on a chain where taproot is
>> "not activated", they can always softk-fork in their own rule that
>> disallows the version bits that complete the Speedy Trial activation
>> sequence, or alternatively soft-fork in a rule to make spending from (or
>> to) taproot addresses illegal.
>>
>
> Since it's about activation in general and not about taproot specifically,
> your third point is the one that applies.
> Users could have coordinated to have "activation x" never activated in
> their chains if they simply make a rule that activating a given proposal
> (with bip8) is forbidden in their chain.
> But coordination requires time.
>

A mechanism of soft-forking against activation exists.  What more do you
want? Are we supposed to write the code on behalf of this hypothetical
group of users who may or may not exist for them just so that they can have
a node that remains stalled on Speedy Trial lockin?  That simply isn't
reasonable, but if you think it is, I invite you to create such a fork.


> Please, try to imagine an example for an activation that you wouldn't like
> yourself. Imagine it gets proposed and you, as a user, want to resist it.
>

If I believe I'm in the economic majority then I'll just refuse to upgrade
my node, which was option 2. I don't know why you dismissed it.

Not much can prevent a miner cartel from enforcing rules that users don't
want other than hard forking a replacement POW.  There is no effective
difference between some developers releasing a malicious soft-fork of
Bitcoin and the miners releasing a malicious version themselves.  And when
the miner cartel forms, they aren't necessarily going to be polite enough
to give a transparent signal of their new rules.  However, without the
economic majority enforcing their set of rules, the cartel continuously
risks falling apart from the temptation of transaction fees of the censored
transactions.

On the other hand, if I find out I'm in the economic minority then I have
little choice but to either accept the existence of the new rules or sell
my Bitcoin.  Look, you cannot have the perfect system of money all by your
lonesome self.  Money doesn't have economic value if no one else wants to
trade you for it.  Just ask that poor user who YOLO'd his own taproot
activation in advance all by themselves.  I'm sure they think they've got
just the perfect money system, with taproot early and everything.  But now
their node is stuck at block 692261 and hasn't made
progress since.  No doubt they are hunkered down for the long term,
absolutely committed to their fork and just waiting for the rest of the
world to come around to how much better their version of Bitcoin is than
the rest of us.

Even though you've dismissed it, one of the considerations of taproot was
that it is opt-in for users to use the functionality.  Future soft-forks
ought to have the same considerations to the extent possible.


Re: [bitcoin-dev] Improving RBF Policy

2022-03-12 Thread Billy Tetrud via bitcoin-dev
In reading through more of the discussion, it seems the idea I presented
above might basically be a reformulation of t-bast's rate-limiting idea
presented in this comment.
Perhaps he could comment on whether that characterization is accurate.

On Fri, Mar 11, 2022 at 10:22 AM Billy Tetrud 
wrote:

> Hi Gloria,
>
> >  1. Transaction relay rate limiting
>
> I have a similar concern as yours, that this could prevent higher fee-rate
> transactions from being broadcast.
>
> > 2. Staggered broadcast of replacement transactions: within some time
> interval, maybe accept multiple replacements for the same prevout, but only
> relay the original transaction.
>
> By this do you mean basically having a batching window where, on receiving
> a replacement transaction, a node will wait for a period of time,
> potentially receiving many replacements for the same transaction (or many
> separate conflicting transactions), and only broadcasting the "best" one(s)
> at the end of that time window?
>
> It's an interesting idea, but it would produce a problem. Every hop that
> replacement transaction takes would be delayed by this staggered window. If
> the window were 3 minutes and transactions generally take 20 hops to get to
> the majority of miners, a "best-case average" delay might be 3.75 minutes
> (noting that among your 8 nodes, it's quite likely one of them would have a
> window ending much sooner than 3 minutes). Some (maybe 3% of) nodes would
> experience delays of more than 20 minutes. That kind of delay isn't great.
>
> However it made me think of another idea: a transaction replacement
> broadcast cooldown. What if nodes kept track of the time they broadcasted
> the last replacement for a package and had a relay cooldown after the last
> replacement was broadcasted? A node receiving a replacement would relay the
> replacement immediately if the package it's replacing was broadcasted more
> than X seconds ago, and otherwise it would wait until the time when that
> package was broadcasted at least X seconds ago to broadcast it. Any
> replacements it receives during that waiting period would replace as
> normal, meaning the unrebroadcasted replacement would never be
> broadcasted, and only the highest value replacement would be broadcasted at
> the end of the cooldown.
>
> This wouldn't prevent a higher-fee-rate transaction from being broadcasted
> (like rate limiting could), but would still be effective at limiting
> unnecessary data transmission. Another benefit is that in the
> non-adversarial case, replacement transactions wouldn't be subject to any
> delay at all (while in the staggered broadcast idea, most replacements
> would experience some delay). And in the adversarial case, where a
> malicious actor broadcasts a low-as-possible-value replacement just before
> yours, the worst case delay is just whatever the cooldown period is. I
> would imagine that maybe 1 minute would be a reasonable worst-case delay.
> This would limit spam for a transaction that makes it into a block to ~10x
> (9 to 1). I don't see much of a downside to doing this beyond just the
> slight additional complexity of relay rules (and considering it could save
> substantial additional code complexity, even that is a benefit).
>
> All a node would need to do is keep a timestamp on each transaction they
> receive for when it was broadcasted and check it when a replacement comes
> in. If now-broadcastDate < cooldown, set a timer for cooldown -
> (now-broadcastDate) to broadcast it. If another replacement comes in, clear
> that timer and repeat using the original broadcast date (since the
> unbroadcast transaction doesn't have a broadcast date yet).
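>
> A rough sketch of that bookkeeping (Python; the cooldown value, data
> structures and helper names are just illustrative):
>
>     COOLDOWN = 60.0  # seconds; illustrative worst-case relay delay
>
>     last_relayed = {}  # package_id -> time we last relayed a tx for it
>     pending = {}       # package_id -> best replacement not yet relayed
>
>     def relay(tx): ...                           # placeholder for p2p relay
>     def schedule_timer(delay, package_id): ...   # placeholder for a timer
>
>     def on_replacement(package_id, tx, now):
>         # 'tx' has already passed the usual mempool replacement checks.
>         since = now - last_relayed.get(package_id, 0.0)
>         if since >= COOLDOWN:
>             relay(tx)
>             last_relayed[package_id] = now
>         else:
>             # Hold it; a later replacement overwrites this slot, so only
>             # the best pending replacement goes out when the cooldown ends.
>             pending[package_id] = tx
>             schedule_timer(COOLDOWN - since, package_id)
>
>     def on_timer(package_id, now):
>         tx = pending.pop(package_id, None)
>         if tx is not None:
>             relay(tx)
>             last_relayed[package_id] = now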
>
> I think it might also be useful to note that eliminating "extra data"
> caused by careless or malicious actors (spam or whatever you want to call
> it) should not be the goal. It is impossible to prevent all spam. What we
> should be aiming for is more specific: we should attempt to design a system
> where spam is manageable. Eg if our goal is to ensure that a bitcoin node
> uses no more than 10% of the bandwidth of a "normal" user, if current
> non-spam traffic only requires 1% of a "normal" users's bandwidth, then the
> network can bear a 9 to 1 ratio of spam. When a node spins up, there is a
> lot more data to download and process. So we know that all full nodes can
> handle at least as much traffic as they handle during IBD. What's the
> difference between those amounts? I'm not sure, but I would guess that IBD
> is at least a couple times more demanding than a fully synced node. So I
> might suggest that as long as spam can be kept below a ratio of maybe 2 to
> 1, we should consider the design acceptable (and therefore more complexity
> unnecessary).
>
> The 1 minute broadcast cooldown I mentioned before wouldn't be quite
> sufficient to achieve that ratio. But a 3.33 minute cooldo