Re: [bitcoin-dev] [Bitcoin Advent Calendar] What's Smart about Smart Contracts

2021-12-08 Thread ZmnSCPxj via bitcoin-dev
Good morning Jeremy,


> > ## Why would a "Smart" contract be "Smart"?
> >
> > A "smart" contract is simply one that somehow self-enforces rather than 
> > requires a third party to enforce it.
> > It is "smart" because its execution is done automatically.
>
> There are no automatic executing smart contracts on any platform I'm aware 
> of. Bitcoin requires TX submission, same with Eth.
>
> Enforcement and execution are different subjects.

Nothing really prevents a cryptocurrency system from recording a "default 
branch" and enforcing that later.
In Bitcoin terms, nothing fundamentally prevents this redesign:

* A confirmed transaction can include one or more transactions (not part of its 
wtxid or txid) which spend an output of that confirmed transaction.
  * Like SegWit, they can be put in a new region that is not visible to 
pre-softfork nodes, but this new section is committed to in the coinbase.
* Those extra transactions must be `nLockTime`d to a future blockheight.
* When the future blockheight arrives, we add those transactions to the mempool.
  * If the TXO is already spent by then, then they are not put in the mempool.

That way, at least the timelocked branch can be automatically executed, because 
the tx can be submitted "early".
The only real limitation against the above is the amount of resources it would 
consume on typical nodes.
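The redesign above is only a thought experiment, but the node-side behavior it implies can be sketched in a few lines. This is an invented illustration (the class and method names are not any real Bitcoin Core API): the node records each pre-committed "default branch" spend alongside its `nLockTime` height, and at that height promotes it to the mempool unless the output it spends is already gone.

```python
class DefaultBranchTracker:
    """Toy sketch of auto-executing pre-committed timelocked spends."""

    def __init__(self):
        # locktime height -> list of (outpoint spent, transaction)
        self.pending = {}

    def record(self, locktime_height, spent_outpoint, tx):
        """Store a default-branch spend carried alongside a confirmed tx."""
        self.pending.setdefault(locktime_height, []).append((spent_outpoint, tx))

    def on_new_block(self, height, utxo_set, mempool):
        """When the locktime height arrives, auto-submit matured branches."""
        for outpoint, tx in self.pending.pop(height, []):
            if outpoint in utxo_set:   # only if the TXO is still unspent
                mempool.append(tx)     # "execution" happens automatically
```

The resource cost mentioned above is visible here: every node must persist all pending branches until their heights mature.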

Even watchtower behavior can be programmed directly into the blockchain layer, 
i.e. we can put encrypted blobs into the same extra blockspace, with a partial 
txid acting as the key that triggers decryption and insertion of those 
transactions into the mempool, etc.
Thus, the line between execution and enforcement blurs.


But that is really beside the point.

The Real Point is that "smart"ness is not a Boolean flag, but a spectrum.
The above feature would allow for more "smart"ness in contracts, at the cost of 
increased resource utilization at each node.
In this point-of-view, even a paper contract is "smart", though less "smart" 
than a typical Bitcoin HTLC.

> > Consider the humble HTLC.
> > 
> > This is why the reticence of Bitcoin node operators to change the 
> > programming model is a welcome feature of the network.
> > Any change to the programming model risks the introduction of bugs to the 
> > underlying virtual machine that the Bitcoin network presents to contract 
> > makers.
> > And without that strong reticence, we risk utterly demolishing the basis of 
> > the "smart"ness of "smart" contracts --- if a "smart" contract cannot 
> > reliably be executed, it cannot self-enforce, and if it cannot 
> > self-enforce, it is no longer particularly "smart".
>
> I don't think that anywhere in the post I advocated for playing fast and 
> loose with the rules to introduce any sort of unreliability.

This is not a criticism of your post, merely an amusing article that fits the 
post title better.

> What I'm saying is more akin to we can actually improve the "hardware" that 
> Bitcoin runs on to the extent that it actually does give us better ability to 
> adjudicate the transfers of value, and we should absolutely and aggressively 
> pursue that rather than keeping Bitcoin running on a set mechanisms that are 
> insufficient to reach the scale, privacy, self custody, and decentralization 
> goals we have.

Agreed.

>  
>
> > ## The N-of-N Rule
> >
> > What is a "contract", anyway?
> >
> > A "contract" is an agreement between two or more parties.
> > You do not make a contract to yourself, since (we assume) you are 
> > completely a single unit (in practice, humans are internally divided into 
> > smaller compute modules with slightly different incentives (note: I did not 
> > get this information by *personally* dissecting the brains of any humans), 
> > hence the "we assume").
>
>  
>
> > Thus, a contract must by necessity require N participants
>
> This is getting too pedantic about contracts. If you want to go there, you're 
> also missing "consideration".
>
> Smart Contracts are really just programs. And you absolutely can enter smart 
> contracts with yourself solely, for example, Vaults (as covered in day 10) 
> are an example where you form a contract where you are intended to be the 
> only party.

No, because a vault is a contract between your self-of-today and your 
self-of-tomorrow, with your self-of-today serving as an attorney-in-fact for 
your self-of-tomorrow.
After all, at the next Planck Interval you will die and be replaced with a new 
entity that only *mostly* agrees with you.

> You could make the claim that a vault is just an open contract between you 
> and some future would be hacker, but the intent is that the contract is there 
> to just safeguard you and those terms should mostly never execute. + you 
> usually want to define contract participants as not universally quantified...
>
> > This is of interest since in a reliability perspective, we often accept 
> > k-of-n.
> > 
> > But with an N-of-N, *you* are a participant and your input is necessary for 

Re: [bitcoin-dev] [Lightning-dev] Take 2: Removing the Dust Limit

2021-12-08 Thread Ruben Somsen via bitcoin-dev
Hi Jeremy,

Thanks for sharing your thoughts.

To summarize your arguments: the intentionally malicious path to getting
the 0 sat output confirmed without being spent is uneconomical compared to
simply creating dust outputs. And even if it does happen, the tx spending
from the 0 sat output may still be valid (as long as none of its inputs get
spent elsewhere) and could eventually get confirmed.

I think those are good points. I do still see a possibility where a user
non-maliciously happens to behave in a way that causes all of the above to
happen, but it does seem somewhat unlikely.

It could happen if all of the following occurs:
1. Another output happens to get spent at a higher feerate (e.g. because an
absolute timelock expires and the output gets used)
2. The tx spending the 0 sat output then happens to not make it into the
block due to the lower fees
3. The user then happens to invalidate the tx that was spending from the 0
sat output (seems rational at that point)

Assuming this is the only scenario (I am at least not currently aware of
others), the question then becomes whether the above is acceptable in order
to avoid a soft fork.

Cheers,
Ruben


On Wed, Dec 8, 2021 at 6:41 PM Jeremy  wrote:

> IMO this is not a big problem. The problem is not if a 0 value ever enters
> the mempool, it's if it is never spent. And even if C2/P1 goes in, C1 still
> can be spent. In fact, it increases its feerate with P1's confirmation so
> it's somewhat likely it would go in. C2 further has to be pretty expensive
> compared to C1 in order to be mined when C1 would not be, so the user
> trying to do this has to pay for it.
>
> If we're worried it might never be spent again since no incentive, it's
> rational for miners *and users who care about bloat* to save to disk the
> transaction spending it to resurrect it. The way this can be broken is if
> the txn has two inputs and one of those inputs gets spent separately.
>
> That said, I think if we can say that taking advantage of keeping the 0
> value output will cost you more than if you just made it above dust
> threshold, it shouldn't be economically rational to not just do a dust
> threshold value output instead.
>
> So I'm not sure the extent to which we should bend backwards to make 0
> value outputs impossible v.s. making them inconvenient enough to not be
> popular.
>
>
>
> -
> Consensus changes below:
> -
>
> Another possibility is to have a utxo with drop semantics; if UTXO X with
> some flag on it is not spent in the block it is created, it expires and can
> never be spent. This is essentially an inverse timelock, but severely
> limited to one block and mempool evictions can be handled as if a conflict
> were mined.
>
> These types of 0 value outputs could be present just for attaching fee in
> the mempool but be treated like an op_return otherwise. We could add two
> cases for this: one bare segwit version (just the number, no data) and one
> that's equivalent to taproot. This covers OP_TRUE anchors very efficiently
> and ones that require a signature as well.
>
> This is relatively similar to how Transaction Sponsors works, but without
> full tx graph de-linkage... obviously I think if we'll entertain a
> consensus change, sponsors makes more sense, but expiring utxos doesn't
> change as many properties of the tx-graph validation so might be simpler.
>
>
>
>
>
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


[bitcoin-dev] [Bitcoin Advent Calendar] Inheritance Schemes

2021-12-08 Thread Jeremy via bitcoin-dev
Devs,

For today's post, something near and dear to our hearts: giving our sats to
our loved ones after we kick the bucket.

see: https://rubin.io/bitcoin/2021/12/08/advent-11/

Some interesting primitives, hopefully enough to spark a discussion around
different inheritance schemes that might be useful.

One note I think is particularly discussion worthy is how the UTXO model
makes inheritance backups sort of fundamentally difficult to do v.s. a
monolithic account model.

Cheers,

Jeremy

--
@JeremyRubin 



Re: [bitcoin-dev] [Lightning-dev] Take 2: Removing the Dust Limit

2021-12-08 Thread Jeremy via bitcoin-dev
IMO this is not a big problem. The problem is not if a 0 value ever enters
the mempool, it's if it is never spent. And even if C2/P1 goes in, C1 still
can be spent. In fact, it increases its feerate with P1's confirmation so
it's somewhat likely it would go in. C2 further has to be pretty expensive
compared to C1 in order to be mined when C1 would not be, so the user
trying to do this has to pay for it.

If we're worried it might never be spent again since no incentive, it's
rational for miners *and users who care about bloat* to save to disk the
transaction spending it to resurrect it. The way this can be broken is if
the txn has two inputs and one of those inputs gets spent separately.

That said, I think if we can say that taking advantage of keeping the 0
value output will cost you more than if you just made it above dust
threshold, it shouldn't be economically rational to not just do a dust
threshold value output instead.

So I'm not sure the extent to which we should bend backwards to make 0
value outputs impossible v.s. making them inconvenient enough to not be
popular.



-
Consensus changes below:
-

Another possibility is to have a utxo with drop semantics; if UTXO X with
some flag on it is not spent in the block it is created, it expires and can
never be spent. This is essentially an inverse timelock, but severely
limited to one block and mempool evictions can be handled as if a conflict
were mined.
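The drop semantics can be sketched as a toy block-connection rule (the `expiring` flag and all structures here are invented for illustration, not a concrete proposal): an output carrying the flag is spendable only within the block that creates it; any such output left unspent at the end of the block is pruned from the UTXO set forever.

```python
EXPIRING = "expiring"  # hypothetical output flag

def connect_block(utxo_set, block_txs):
    """Apply a block; expiring outputs not consumed in-block are dropped.

    block_txs is a topologically ordered list of (txid, tx) pairs, where
    tx = {"inputs": [outpoints], "outputs": [output dicts]}.
    """
    created_expiring = set()
    for txid, tx in block_txs:
        for outpoint in tx["inputs"]:
            utxo_set.pop(outpoint, None)
            created_expiring.discard(outpoint)  # spent in-block: survives
        for i, out in enumerate(tx["outputs"]):
            outpoint = (txid, i)
            utxo_set[outpoint] = out
            if out.get(EXPIRING):
                created_expiring.add(outpoint)
    for outpoint in created_expiring:  # the inverse timelock: expire the rest
        del utxo_set[outpoint]
    return utxo_set
```

Mempool eviction of the in-block spender can then be handled as if a conflict were mined, exactly as described above.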

These types of 0 value outputs could be present just for attaching fee in
the mempool but be treated like an op_return otherwise. We could add two
cases for this: one bare segwit version (just the number, no data) and one
that's equivalent to taproot. This covers OP_TRUE anchors very efficiently
and ones that require a signature as well.

This is relatively similar to how Transaction Sponsors works, but without
full tx graph de-linkage... obviously I think if we'll entertain a
consensus change, sponsors makes more sense, but expiring utxos doesn't
change as many properties of the tx-graph validation so might be simpler.


Re: [bitcoin-dev] Take 2: Removing the Dust Limit

2021-12-08 Thread Jeremy via bitcoin-dev
Bastien,

The issue is that with Decker Channels you either use SIGHASH_ALL / APO and
don't allow adding outs (this protects against certain RBF pinning on the
root with bloated wtxid data) and have anchor outputs or you do allow them
and then are RBF pinnable (but can have change).

Assuming you use anchor outs, then you really can't use dust-threshold
outputs as it either breaks the ratcheting update validity (if the specific
amount paid to output matters) OR it allows many non-latest updates to
fully drain the UTXO of any value.

You can get around the needing for N of them by having a congestion-control
tree setup in theory; then you only need log(n) data for one bumper, and
(say) 1.25x the data if all N want to bump. This can be a nice trade-off
between letting everyone bump and not. Since these could be chains of
IUTXO, they don't need to carry any weight directly.
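The log(n) / ~1.25x figures can be sanity-checked with a toy calculation (the tree arity and all numbers below are illustrative assumptions, not from the thread): one bumper must reveal a path of tree nodes down to its leaf, while all N bumpers together reveal every node, an overhead of roughly arity/(arity-1) over the leaves alone — about 1.33x for a radix-4 tree, approaching 1.25x at radix 5.

```python
def one_bumper_cost(n, arity=4):
    """Tree nodes a single party must publish to reach its leaf (tree depth)."""
    depth, leaves = 0, 1
    while leaves < n:
        leaves *= arity
        depth += 1
    return depth

def all_bumpers_ratio(n, arity=4):
    """(Leaves + internal nodes) / leaves when every party bumps."""
    nodes, level = 0, n
    while level > 1:
        level = -(-level // arity)  # ceiling division
        nodes += level
    return (n + nodes) / n
```

For n = 1024 at radix 4, one bumper needs only 5 tree transactions while the all-bump overhead is ~1.33x; radix 5 brings it to ~1.25x.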

The carve-out would just be to ensure that 0-value CPFP outputs have a known 
way to be spent.





--
@JeremyRubin 


Re: [bitcoin-dev] A fee-bumping model

2021-12-08 Thread darosior via bitcoin-dev
Hi Gloria,

I agree with regard to the RBF changes. To me, we should (obviously?) do 
ancestor feerate instead of requiring confirmed inputs (#23121).
However, we have yet to come up with a reasonable policy-only fix to "rule 3 
pinning".

Responses inline!

>> The part of Revault we are interested in for this study is the delegation 
>> process, and more
>> specifically the application of spending policies by network monitors 
>> (watchtowers).
>
> I'd like to better understand how fee-bumping would be used, i.e. how the 
> watchtower model works:
> - Do all of the vault parties both deposit to the vault and a refill/fee to 
> the watchtower, is there a reward the watchtower collects for a successful 
> Cancel, or something else? (Apologies if there's a thorough explanation 
> somewhere that I haven't already seen).
> - Do we expect watchtowers tracking multiple vaults to be batching multiple 
> Cancel transaction fee-bumps?
> - Do we expect vault users to be using multiple watchtowers for a better 
> trust model? If so, and we're expecting batched fee-bumps, won't those 
> conflict?

Grossly, it could be described as "enforce spending policies on 10BTC worth of 
delegated coins by allocating 5mBTC to 3 different watchtowers".
Each participant that will be delegating coins is expected to run a number of 
watchtowers. They should ideally be replicated (for full disclosure if
it wasn't obvious providing replication is the business model of the company 
behind the Revault project :p).
You can't batch fee-bumps as you *could* (maybe not *would*) do with anchor 
outputs channels, since the latter use CPFP and we "sponsor"
(or whatever the name of "supplementing the fees of a transaction by adding 
inputs" is).
In the current model, watchtowers enforcing the same policy do compete in that 
they broadcast conflicting transactions since they attach different
fee-bumping inputs. Ideally we could sign the added feebumping inputs 
themselves with ACP so they are allowed to cooperate. However doing that
would allow anyone on the network to snip the added fees to perform a "RBF 
rule-3 pinning".
Finally, there could be concerns around game theory of letting others' 
watchtowers feebump for you. I'm convinced however that in our case the fee
is completely dwarfed by the value at stake. Trying to delay your own WT's 
fee-bump reaction to hope someone else will pay the 10k sats for enforcing
a 1BTC contract, hmmm, i wouldn't do that.

>> For Revault we can afford to introduce malleability in the Cancel 
>> transaction since there is no
>> second-stage transaction depending on its txid. Therefore it is pre-signed 
>> with ANYONECANPAY. We
>> can't use ANYONECANPAY|SINGLE since it would open a pinning vector [3]. Note 
>> how we can't leverage
>> the carve out rule, and neither can any other more-than-two-parties contract.
>
> We've already talked about this offline, but I'd like to point out here that 
> even transactions signed with ANYONECANPAY|ALL can be pinned by RBF unless we 
> add an ancestor score rule. [0], [1] (numbers are inaccurate, Cancel Tx 
> feerates wouldn't be that low, but just to illustrate what the attack would 
> look like)

Thanks for making that explicit, i should have mentioned it. For everyone reading 
the PR is at https://github.com/bitcoin/bitcoin/pull/23121 .

>> can't use ANYONECANPAY|SINGLE since it would open a pinning vector [3]. Note 
>> how we can't leverage
>> the carve out rule, and neither can any other more-than-two-parties contract.
>
> Well stated about CPFP carve out. I suppose the generalization is that 
> allowing n extra ancestorcount=2 descendants to a transaction means it can 
> help contracts with <=n+1 parties (more accurately, outputs)? I wonder if 
> it's possible to devise a different approach for limiting 
> ancestors/descendants, e.g. by height/width/branching factor of the family 
> instead of count... :shrug:

I don't think so, because you want any party involved in the contract to be 
able to unilaterally enforce it. With >2 anchor outputs any 2-parties can
collude against the other one(s) by pinning the transaction using the first 
party's output to hit the descendant chain limit and the second one to trigger
the carve-out.

Ideally i think it'd be better that all contracts move toward using sponsoring 
("tx malleation") when we can (ie for all transactions that are at the end of
the chain, or post-ANYPREVOUT any transaction really) instead of CPFP for 
fee-bumping because:
1. It's way easier to reason about wrt mempool DOS protections (the fees don't 
depend on a chain of children, it's just a function of the transaction alone)
2. It's more space efficient (and thereby economical): you don't need to create 
a new transaction (or a set of new txs) to bump your fees.

Unfortunately, having to use ACP instead of ACP|SINGLE is a showstopper. 
Managing a fee-bumping UTxO pool is a massive burden.
On a side note, thinking back about ACP|SINGLE vs ACP i'm not so sure anymore 
the 

Re: [bitcoin-dev] [Lightning-dev] Take 2: Removing the Dust Limit

2021-12-08 Thread Ruben Somsen via bitcoin-dev
Hi Jeremy,

I brought up the exact same thing at coredev, but unfortunately I came up
with a way in which the 0 sat output could still enter the UTXO set under
those rules:

- Parent P1 (0 sat per byte) has 2 outputs, one is 0 sat
- Child C1 spends the 0 sat output for a combined feerate of 1 sat per byte
and they enter the mempool as a package
- Child C2 spends the other output of P1 with a really high feerate and
enters the mempool
- Fees rise and child C1 falls out of the mempool, leaving the 0 sat output
unspent
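The sequence can be made concrete with a toy mempool model (all feerates invented): C1 enters only as a package with P1 at a low combined feerate, C2 rides P1's other output at a high feerate, and a rising mempool floor evicts C1 alone.

```python
# Effective (package) feerates in sat/vB; values are purely illustrative.
mempool = {
    "C2": {"feerate": 50, "spends": "P1:other"},
    "C1": {"feerate": 1, "spends": "P1:zero_sat"},  # P1+C1 package rate
}

def evict(mempool, floor):
    """Drop entries whose effective feerate is below the mempool floor."""
    return {name: tx for name, tx in mempool.items() if tx["feerate"] >= floor}

after = evict(mempool, floor=2)  # fees rise: C1 falls out, C2 remains
zero_sat_spent = any(tx["spends"] == "P1:zero_sat" for tx in after.values())
# If P1 now confirms via C2 alone, the 0 sat output enters the UTXO set.
```

Nothing in the surviving mempool spends the 0 sat output, which is exactly the failure mode described above.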

For this to not be a problem, the 0 sat output needs to provably be the
only spendable output. As you pointed out to me a few days ago, having a
relative timelock on the other outputs would do the trick (and this happens
to be true for spacechains), but that will only be provable if all script
conditions are visible prior to spending time (ruling out p2sh and taproot,
and conflicting with standardness rules for transactions).

It's worth noting that you can't really make a policy rule that says
the 0 sat output can't be left unspent (i.e. C1 can't be evicted without
also evicting P1), as this would not mirror economically rational behavior
for miners (they would get more fees if they evicted C1, so we must assume
they will, if the transaction ever reaches them).

This last example really points out the tricky situation we're dealing
with. In my opinion, we'd only want to relay 0 sat outputs if we can
guarantee that it's never economically profitable to mine them without them
getting spent in the same block.

Finally, here's a timestamped link to a diagram that shows where 0 sat
outputs would be helpful for spacechains (otherwise someone would have to
pay the dust up front for countless outputs):
https://youtu.be/N2ow4Q34Jeg?t=2556

Cheers,
Ruben




On Wed, Dec 8, 2021 at 9:35 AM Bastien TEINTURIER  wrote:

> Hi Jeremy,
>
> Right now, lightning anchor outputs use a 330 sats amount. Each commitment
> transaction has two such outputs, and only one of them is spent to help the
> transaction get confirmed, so the other stays there and bloats the utxo
> set.
> We allow anyone to spend them after a csv of 16 blocks, in the hope that
> someone will claim a batch of them when the fees are low and remove them
> from the utxo set. However, that trick wouldn't work with 0-value outputs,
> as no-one would ever claim them (it doesn't make economic sense).
>
> We actually need to have two of them to avoid pinning: each participant is
> able to spend only one of these outputs while the parent tx is unconfirmed.
> I believe N-party protocols would likely need N such outputs (not sure).
>
> You mention a change to the carve-out rule, can you explain it further?
> I believe it would be a necessary step, otherwise 0-value outputs for
> CPFP actually seem worse than low-value ones...
>
> Thanks,
> Bastien
>
> Le mer. 8 déc. 2021 à 02:29, Jeremy via bitcoin-dev <
> bitcoin-dev@lists.linuxfoundation.org> a écrit :
>
>> Bitcoin Devs (+cc lightning-dev),
>>
>> Earlier this year I proposed allowing 0 value outputs and that was shot
>> down for various reasons, see
>> https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-August/019307.html
>>
>> I think that there can be a simple carve out now that package relay is
>> being launched based on my research into covenants from 2017
>> https://rubin.io/public/pdfs/multi-txn-contracts.pdf.
>>
>> Essentially, if we allow 0 value outputs BUT require as a matter of
>> policy (or consensus, but policy has major advantages) that the output be
>> used as an Intermediate Output (that is, in order for the transaction
>> creating it to be in the mempool, it must be spent by another tx), with the
>> additional rule that the package must have a higher feerate after CPFP'ing
>> the parent than the parent alone, we can:
>>
>> 1) Allow 0 value outputs for things like Anchor Outputs (very good for
>> not getting your eltoo/Decker channels pinned by junk witness data using
>> Anchor Inputs, very good for not getting your channels drained by at-dust
>> outputs)
>> 2) Not allow 0 value utxos to proliferate long-term
>> 3) It still being valid for a 0 value that somehow gets created to be
>> spent by the fee paying txn later
>>
>> Just doing this as a mempool policy also has the benefits of not
>> introducing any new validation rules. Although in general the IUTXO concept
>> is very attractive, it complicates mempool :(
>>
>> I understand this may also be really helpful for CTV based contracts
>> (like vault continuation hooks) as well as things like spacechains.
>>
>> Such a rule -- if it's not clear -- presupposes a fully working package
>> relay system.
>>
>> I believe that this addresses all the issues with allowing 0 value
>> outputs to be created for the narrow case of immediately spendable outputs.
>>
>> Cheers,
>>
>> Jeremy
>>
>> p.s. why another post today? Thank Greg
>> https://twitter.com/JeremyRubin/status/1468390561417547780
>>
>>
>> --
>> @JeremyRubin 

Re: [bitcoin-dev] Take 2: Removing the Dust Limit

2021-12-08 Thread Bastien TEINTURIER via bitcoin-dev
Hi Jeremy,

Right now, lightning anchor outputs use a 330 sats amount. Each commitment
transaction has two such outputs, and only one of them is spent to help the
transaction get confirmed, so the other stays there and bloats the utxo set.
We allow anyone to spend them after a csv of 16 blocks, in the hope that
someone will claim a batch of them when the fees are low and remove them
from the utxo set. However, that trick wouldn't work with 0-value outputs,
as no-one would ever claim them (it doesn't make economic sense).

We actually need to have two of them to avoid pinning: each participant is
able to spend only one of these outputs while the parent tx is unconfirmed.
I believe N-party protocols would likely need N such outputs (not sure).

You mention a change to the carve-out rule, can you explain it further?
I believe it would be a necessary step, otherwise 0-value outputs for
CPFP actually seem worse than low-value ones...

Thanks,
Bastien

Le mer. 8 déc. 2021 à 02:29, Jeremy via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> a écrit :

> Bitcoin Devs (+cc lightning-dev),
>
> Earlier this year I proposed allowing 0 value outputs and that was shot
> down for various reasons, see
> https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-August/019307.html
>
> I think that there can be a simple carve out now that package relay is
> being launched based on my research into covenants from 2017
> https://rubin.io/public/pdfs/multi-txn-contracts.pdf.
>
> Essentially, if we allow 0 value outputs BUT require as a matter of policy
> (or consensus, but policy has major advantages) that the output be used as
> an Intermediate Output (that is, in order for the transaction creating it
> to be in the mempool, it must be spent by another tx), with the
> additional rule that the package must have a higher feerate after CPFP'ing
> the parent than the parent alone, we can:
>
> 1) Allow 0 value outputs for things like Anchor Outputs (very good for not
> getting your eltoo/Decker channels pinned by junk witness data using Anchor
> Inputs, very good for not getting your channels drained by at-dust outputs)
> 2) Not allow 0 value utxos to proliferate long-term
> 3) It still being valid for a 0 value that somehow gets created to be
> spent by the fee paying txn later
>
> Just doing this as a mempool policy also has the benefits of not
> introducing any new validation rules. Although in general the IUTXO concept
> is very attractive, it complicates mempool :(
>
> I understand this may also be really helpful for CTV based contracts (like
> vault continuation hooks) as well as things like spacechains.
>
> Such a rule -- if it's not clear -- presupposes a fully working package
> relay system.
>
> I believe that this addresses all the issues with allowing 0 value outputs
> to be created for the narrow case of immediately spendable outputs.
>
> Cheers,
>
> Jeremy
>
> p.s. why another post today? Thank Greg
> https://twitter.com/JeremyRubin/status/1468390561417547780
>
>
> --
> @JeremyRubin 
> 