Re: [bitcoin-dev] Merkleize All The Things

2022-11-08 Thread Bram Cohen via bitcoin-dev
On Tue, Nov 8, 2022 at 2:13 AM Salvatore Ingala via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

>
> I have been working on some notes to describe an approach that uses
> covenants in order to enable general smart contracts in bitcoin. You can
> find them here:
>
> https://merkle.fun/
> 
>

Hash-chained covenants in general all reach about the same plateau of
functionality. That plateau seems roughly reasonable to add to Bitcoin as it
is today, but it suffers from being limited, and hence is likely only a
stepping stone to greater functionality. Unless whatever is put in now
cleanly extends to supporting more in the future, it is likely to turn into
a legacy appendage which has to be supported. So my generic suggestion for
this sort of thing is that it should be proposed along with a plan for how
it could be extended to support full-blown covenants in the future.

Another, probably unhelpful, bit of feedback I have is that Bitcoin should
probably be taking verkle trees seriously, because they can have
substantially lower size/cost/weight than Merkle trees. That doesn't just
apply to this proposal, but to Bitcoin in general, which doesn't seem to
have any serious verkle tree proposals to date.
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] brickchain

2022-11-08 Thread mm-studios via bitcoin-dev
--- Original Message ---
On Tuesday, November 8th, 2022 at 3:49 PM, Erik Aronesty  wrote:

>> I think it's pretty clear that the "competitive nature of PoW" is not 
>> referring to verification nodes
>
> cool, so we can agree there is no accepted centralization pressure for 
> validating nodes then

The centralization produced by PoW only affects miners; the rest of the nodes
are freely distributed.
In the producer-consumer view, consumers (blockchain builders) are
satisfactorily distributed. The same can't be said about miners (block
producers), who form a quite centralized subsystem, with only a handful of
major pools producing blocks.

>> layers also add fees to users
>
> source? i feel like it's obvious that the tree-like efficiencies should 
> reduce fees, but i'd appreciate your research on that topic

Systems (layers) in which abuse is controlled by fees each add their own cost.

> On Tue, Nov 8, 2022 at 9:25 AM mm-studios  wrote:
>
>> --- Original Message ---
>> On Tuesday, November 8th, 2022 at 2:16 PM, Erik Aronesty  
>> wrote:
>>
 A) to not increase the workload of full-nodes
>>>
>>> yes, this is critical
>>>
 given the competitive nature of PoW itself
>>>
>>> validating nodes do not compete with PoW, i think maybe you are not sure of 
>>> the difference between a miner and a node
>>>
>>> nodes do validation of transactions, they do this for free, and many of 
>>> them provide essential services, like SPV validation for mobile
>>
>> I think it's pretty clear that the "competitive nature of PoW" is not 
>> referring to verification nodes (satoshi preferred this other word).
>>
>>> B) to not undermine L2 systems like LN.
>>>
>>> yes, as a general rule, layered financial systems are vastly superior, so 
>>> that risks incurred by edge layers are not propagated fully to the inner 
>>> layers. For example, L3 projects like TARO and RGB are building on lightning 
>>> with less risk
>>
>> layers also add fees to users
>>
>>> On Wed, Oct 19, 2022 at 12:04 PM mm-studios  wrote:
>>>
 Thanks all for your responses.
 So is it a no-go because "reduced settlement speed is a desirable 
 feature"?

 I don't know what weighs more in this consideration:
 A) to not increase the workload of full-nodes, keeping them "less difficult to 
 operate" and hence reducing the chance of some of them giving up, which would 
 lead to a negative centralization effect. (A bit cumbersome reasoning, in 
 my opinion, given the competitive nature of PoW itself, which introduces an 
 accepted centralization by forcing some miners to give up; in this case the 
 fact is accepted because it is decentralized enough.)
 B) to not undermine L2 systems like LN.

 In any case it is a major no-go reason if there is no intention to speed 
 up L1.
 Thanks
 M

 --- Original Message ---
 On Wednesday, October 19th, 2022 at 3:24 PM, Erik Aronesty  
 wrote:

>> currently, a miner produces blocks with a limited capacity of 
>> transactions, which ultimately limits the global settlement throughput to 
>> a reduced number of tx/s.
>
> reduced settlement speed is a desirable feature and isn't something we 
> need to fix
>
> the focus should be on layer 2 protocols that allow the ability to hold & 
> transfer uncommitted transactions as pools / joins, so that layer 1's 
> decentralization and incentives can remain undisturbed
>
> protocols like mweb, for example
>
> On Wed, Oct 19, 2022 at 7:34 AM mm-studios via bitcoin-dev 
>  wrote:
>
>> Hi Bitcoin devs,
>> I'd like to share an idea of a method to increase throughput in the 
>> bitcoin network.
>>
>> Currently, a miner produces blocks with a limited capacity of 
>> transactions, which ultimately limits the global settlement throughput to 
>> a reduced number of tx/s.
>>
>> Big-blockers proposed the removal of limits, but this came with 
>> undesirable effects that have been widely discussed, and it was rejected.
>>
>> The main feature we wanted to preserve is 'small blocks', providing 
>> 'better network effects'; I won't focus on those here.
>>
>> The problem with small blocks is that, once a block is filled with 
>> transactions, the rest are kept back in the mempool, waiting for their 
>> turn in future blocks.
>>
>> The following changes in the protocol aim to let all transactions go in 
>> the current block, while keeping the block size small. It requires 
>> changes in the PoW algorithm.
>>
>> Currently, the PoW algorithm consists of finding a valid hash for the 
>> block. Its validity is determined by comparing the numeric value of the 
>> block hash with a protocol-defined difficulty value.
>>
>> Once a miner finds a nonce for the block that satisfies the condition, 
>> the new block becomes valid and can be propagated. All nodes would 
>> update 

Re: [bitcoin-dev] brickchain

2022-11-08 Thread mm-studios via bitcoin-dev
--- Original Message ---
On Tuesday, November 8th, 2022 at 2:16 PM, Erik Aronesty  wrote:

>> A) to not increase the workload of full-nodes
>
> yes, this is critical
>
>> given the competitive nature of PoW itself
>
> validating nodes do not compete with PoW, i think maybe you are not sure of 
> the difference between a miner and a node
>
> nodes do validation of transactions, they do this for free, and many of them 
> provide essential services, like SPV validation for mobile

I think it's pretty clear that the "competitive nature of PoW" is not referring 
to verification nodes (satoshi preferred this other word).

> B) to not undermine L2 systems like LN.
>
> yes, as a general rule, layered financial systems are vastly superior, so 
> that risks incurred by edge layers are not propagated fully to the inner 
> layers. For example, L3 projects like TARO and RGB are building on lightning 
> with less risk

layers also add fees to users

> On Wed, Oct 19, 2022 at 12:04 PM mm-studios  wrote:
>
>> Thanks all for your responses.
>> So is it a no-go because "reduced settlement speed is a desirable 
>> feature"?
>>
>> I don't know what weighs more in this consideration:
>> A) to not increase the workload of full-nodes, keeping them "less difficult to 
>> operate" and hence reducing the chance of some of them giving up, which would 
>> lead to a negative centralization effect. (A bit cumbersome reasoning, in my 
>> opinion, given the competitive nature of PoW itself, which introduces an 
>> accepted centralization by forcing some miners to give up; in this case the 
>> fact is accepted because it is decentralized enough.)
>> B) to not undermine L2 systems like LN.
>>
>> In any case it is a major no-go reason if there is no intention to speed 
>> up L1.
>> Thanks
>> M
>>
>> --- Original Message ---
>> On Wednesday, October 19th, 2022 at 3:24 PM, Erik Aronesty  
>> wrote:
>>
 currently, a miner produces blocks with a limited capacity of transactions, 
 which ultimately limits the global settlement throughput to a reduced 
 number of tx/s.
>>>
>>> reduced settlement speed is a desirable feature and isn't something we need 
>>> to fix
>>>
>>> the focus should be on layer 2 protocols that allow the ability to hold & 
>>> transfer uncommitted transactions as pools / joins, so that layer 1's 
>>> decentralization and incentives can remain undisturbed
>>>
>>> protocols like mweb, for example
>>>
>>> On Wed, Oct 19, 2022 at 7:34 AM mm-studios via bitcoin-dev 
>>>  wrote:
>>>
 Hi Bitcoin devs,
 I'd like to share an idea of a method to increase throughput in the 
 bitcoin network.

 Currently, a miner produces blocks with a limited capacity of transactions, 
 which ultimately limits the global settlement throughput to a reduced 
 number of tx/s.

 Big-blockers proposed the removal of limits, but this came with 
 undesirable effects that have been widely discussed, and it was rejected.

 The main feature we wanted to preserve is 'small blocks', providing 
 'better network effects'; I won't focus on those here.

 The problem with small blocks is that, once a block is filled with 
 transactions, the rest are kept back in the mempool, waiting for their turn 
 in future blocks.

 The following changes in the protocol aim to let all transactions go in 
 the current block, while keeping the block size small. It requires changes 
 in the PoW algorithm.

 Currently, the PoW algorithm consists of finding a valid hash for the 
 block. Its validity is determined by comparing the numeric value of the 
 block hash with a protocol-defined difficulty value.

 Once a miner finds a nonce for the block that satisfies the condition, the 
 new block becomes valid and can be propagated. All nodes would update 
 their blockchains with it. (Assuming no conflict resolution (orphan 
 blocks, ...) for clarity.)

 This process is meant to happen every 10 minutes on average.

 With this background information (which we all already know), I will go on 
 to describe the idea:

 Let's allow a miner to include transactions until the block is filled; 
 let's call this structure (coining a new term, 'Brick') B0. [brick = a block 
 that doesn't meet the difficulty rule and is filled with transactions to 
 its full capacity]
 Since PoW hashing is continuously active, Brick B0 would have a nonce 
 corresponding to the minimum numeric value of its hash found before it got 
 filled.

 A fully filled brick B0, with a hash that doesn't meet the difficulty rule, 
 would be broadcast, and nodes would keep it in a separate fork as 
 usual.

 At this point, instead of discarding transactions, our miner would start 
 working on a new brick B1, linked to B0 as usual.

 Nodes would allow incoming regular blocks and bricks with hashes that 
 don't satisfy the difficulty rule, provided the brick is 

Re: [bitcoin-dev] brickchain

2022-11-08 Thread Erik Aronesty via bitcoin-dev
> I think it's pretty clear that the "competitive nature of PoW" is not
referring to verification nodes

cool, so we can agree there is no accepted centralization pressure for
validating nodes then

> layers also add fees to users

source?  i feel like it's obvious that the tree-like efficiencies should
reduce fees, but i'd appreciate your research on that topic


On Tue, Nov 8, 2022 at 9:25 AM mm-studios  wrote:

>
> --- Original Message ---
> On Tuesday, November 8th, 2022 at 2:16 PM, Erik Aronesty 
> wrote:
>
> > A) to not increase the workload of full-nodes
>
> yes, this is critical
>
> > given the competitive nature of PoW itself
>
> validating nodes do not compete with PoW, i think maybe you are not sure
> of the difference between a miner and a node
>
> nodes do validation of transactions, they do this for free, and many of
> them provide essential services, like SPV validation for mobile
>
>
>
> I think it's pretty clear that the "competitive nature of PoW" is not
> referring to verification nodes (satoshi preferred this other word).
>
> B) to not undermine L2 systems like LN.
>
> yes, as a general rule, layered financial systems are vastly superior, so
> that risks incurred by edge layers are not propagated fully to the inner
> layers. For example, L3 projects like TARO and RGB are building on lightning
> with less risk
>
>
> layers also add fees to users
>
>
> On Wed, Oct 19, 2022 at 12:04 PM mm-studios  wrote:
>
>> Thanks all for your responses.
>> So is it a no-go because "reduced settlement speed is a desirable
>> feature"?
>>
>> I don't know what weighs more in this consideration:
>> A) to not increase the workload of full-nodes, keeping them "less difficult to
>> operate" and hence reducing the chance of some of them giving up, which would
>> lead to a negative centralization effect. (A bit cumbersome reasoning, in my
>> opinion, given the competitive nature of PoW itself, which introduces an
>> accepted centralization by forcing some miners to give up; in this case the
>> fact is accepted because it is decentralized enough.)
>> B) to not undermine L2 systems like LN.
>>
>> In any case it is a major no-go reason if there is no intention to
>> speed up L1.
>> Thanks
>> M
>> --- Original Message ---
>> On Wednesday, October 19th, 2022 at 3:24 PM, Erik Aronesty 
>> wrote:
>>
>> > currently, a miner produces blocks with a limited capacity of
>> transactions, which ultimately limits the global settlement throughput to a
>> reduced number of tx/s.
>>
>> reduced settlement speed is a desirable feature and isn't something we
>> need to fix
>>
>> the focus should be on layer 2 protocols that allow the ability to hold &
>> transfer uncommitted transactions as pools / joins, so that layer 1's
>> decentralization and incentives can remain undisturbed
>>
>> protocols like mweb, for example
>>
>>
>>
>>
>> On Wed, Oct 19, 2022 at 7:34 AM mm-studios via bitcoin-dev <
>> bitcoin-dev@lists.linuxfoundation.org> wrote:
>>
>>> Hi Bitcoin devs,
>>> I'd like to share an idea of a method to increase throughput in the
>>> bitcoin network.
>>>
>>> Currently, a miner produces blocks with a limited capacity of
>>> transactions, which ultimately limits the global settlement throughput to a
>>> reduced number of tx/s.
>>>
>>> Big-blockers proposed the removal of limits, but this came with
>>> undesirable effects that have been widely discussed, and it was rejected.
>>>
>>> The main feature we wanted to preserve is 'small blocks', providing
>>> 'better network effects'; I won't focus on those here.
>>>
>>> The problem with small blocks is that, once a block is filled with
>>> transactions, the rest are kept back in the mempool, waiting for their turn
>>> in future blocks.
>>>
>>> The following changes in the protocol aim to let all transactions go in
>>> the current block, while keeping the block size small. It requires changes
>>> in the PoW algorithm.
>>>
>>> Currently, the PoW algorithm consists of finding a valid hash for the
>>> block. Its validity is determined by comparing the numeric value of the
>>> block hash with a protocol-defined difficulty value.
>>>
>>> Once a miner finds a nonce for the block that satisfies the condition,
>>> the new block becomes valid and can be propagated. All nodes would update
>>> their blockchains with it. (Assuming no conflict resolution (orphan blocks,
>>> ...) for clarity.)
>>>
>>> This process is meant to happen every 10 minutes on average.
>>>
>>> With this background information (which we all already know), I will go on
>>> to describe the idea:
>>>
>>> Let's allow a miner to include transactions until the block is filled;
>>> let's call this structure (coining a new term, 'Brick') B0. [brick = a block
>>> that doesn't meet the difficulty rule and is filled with transactions to
>>> its full capacity]
>>> Since PoW hashing is continuously active, Brick B0 would have a nonce
>>> corresponding to the minimum numeric value of its hash found before it got
>>> filled.
>>>
>>> Fully filled brick B0, with a hash that doesn't meet the 

Re: [bitcoin-dev] brickchain

2022-11-08 Thread Erik Aronesty via bitcoin-dev
> A) to not increase the workload of full-nodes

yes, this is critical

>  given the competitive nature of PoW itself

validating nodes do not compete with PoW, i think maybe you are not sure of
the difference between a miner and a node

nodes do validation of transactions, they do this for free, and many of
them provide essential services, like SPV validation for mobile


B) to not undermine L2 systems like LN.

yes, as a general rule, layered financial systems are vastly superior, so
that risks incurred by edge layers are not propagated fully to the inner
layers.  For example, L3 projects like TARO and RGB are building on
lightning with less risk

On Wed, Oct 19, 2022 at 12:04 PM mm-studios  wrote:

> Thanks all for your responses.
> So is it a no-go because "reduced settlement speed is a desirable
> feature"?
>
> I don't know what weighs more in this consideration:
> A) to not increase the workload of full-nodes, keeping them "less difficult to
> operate" and hence reducing the chance of some of them giving up, which would
> lead to a negative centralization effect. (A bit cumbersome reasoning, in my
> opinion, given the competitive nature of PoW itself, which introduces an
> accepted centralization by forcing some miners to give up; in this case the
> fact is accepted because it is decentralized enough.)
> B) to not undermine L2 systems like LN.
>
> In any case it is a major no-go reason if there is no intention to speed
> up L1.
> Thanks
> M
> --- Original Message ---
> On Wednesday, October 19th, 2022 at 3:24 PM, Erik Aronesty 
> wrote:
>
> > currently, a miner produces blocks with a limited capacity of
> transactions, which ultimately limits the global settlement throughput to a
> reduced number of tx/s.
>
> reduced settlement speed is a desirable feature and isn't something we
> need to fix
>
> the focus should be on layer 2 protocols that allow the ability to hold &
> transfer uncommitted transactions as pools / joins, so that layer 1's
> decentralization and incentives can remain undisturbed
>
> protocols like mweb, for example
>
>
>
>
> On Wed, Oct 19, 2022 at 7:34 AM mm-studios via bitcoin-dev <
> bitcoin-dev@lists.linuxfoundation.org> wrote:
>
>> Hi Bitcoin devs,
>> I'd like to share an idea of a method to increase throughput in the
>> bitcoin network.
>>
>> Currently, a miner produces blocks with a limited capacity of transactions,
>> which ultimately limits the global settlement throughput to a reduced number
>> of tx/s.
>>
>> Big-blockers proposed the removal of limits, but this came with
>> undesirable effects that have been widely discussed, and it was rejected.
>>
>> The main feature we wanted to preserve is 'small blocks', providing
>> 'better network effects'; I won't focus on those here.
>>
>> The problem with small blocks is that, once a block is filled with
>> transactions, the rest are kept back in the mempool, waiting for their turn
>> in future blocks.
>>
>> The following changes in the protocol aim to let all transactions go in
>> the current block, while keeping the block size small. It requires changes
>> in the PoW algorithm.
>>
>> Currently, the PoW algorithm consists of finding a valid hash for the
>> block. Its validity is determined by comparing the numeric value of the
>> block hash with a protocol-defined difficulty value.
>>
>> Once a miner finds a nonce for the block that satisfies the condition, the
>> new block becomes valid and can be propagated. All nodes would update their
>> blockchains with it. (Assuming no conflict resolution (orphan blocks, ...)
>> for clarity.)
>>
>> This process is meant to happen every 10 minutes on average.
>>
>> With this background information (which we all already know), I will go on
>> to describe the idea:
>>
>> Let's allow a miner to include transactions until the block is filled;
>> let's call this structure (coining a new term, 'Brick') B0. [brick = a block
>> that doesn't meet the difficulty rule and is filled with transactions to
>> its full capacity]
>> Since PoW hashing is continuously active, Brick B0 would have a nonce
>> corresponding to the minimum numeric value of its hash found before it got
>> filled.
>>
>> A fully filled brick B0, with a hash that doesn't meet the difficulty rule,
>> would be broadcast, and nodes would keep it in a separate fork as usual.
>>
>> At this point, instead of discarding transactions, our miner would start
>> working on a new brick B1, linked to B0 as usual.
>>
>> Nodes would allow incoming regular blocks and bricks with hashes that
>> don't satisfy the difficulty rule, provided the brick is fully filled with
>> transactions. Bricks not fully filled would be rejected as invalid, to
>> prevent spam (except when one constitutes the last brick of a brickchain,
>> explained below).
>>
>> Let's assume that 10 minutes have elapsed and our miner is in a state
>> where N bricks have been produced, and the accumulated PoW is calculated
>> mathematically (every brick contains a 'minimum hash found'; when a series
>> of 'minimum hashes' is computationally 
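The node-side acceptance rule sketched in the quoted proposal (a regular block must meet the difficulty target; an under-target "brick" is accepted only when filled to capacity, or when it is the chain tip) could look roughly like the toy sketch below. All names, the toy capacity, and the toy target are illustrative assumptions, not part of the proposal or of any real implementation:

```python
import hashlib
from dataclasses import dataclass, field

MAX_BLOCK_TXS = 4  # toy capacity; stands in for the real block weight limit
TARGET = 1 << 252  # toy difficulty target (hash value must be <= this)

@dataclass
class Block:
    prev: str
    nonce: int
    txs: list = field(default_factory=list)

    def hash_int(self) -> int:
        # Toy header hash: in reality this would be double-SHA256 of the header.
        data = f"{self.prev}:{self.nonce}:{','.join(self.txs)}".encode()
        return int.from_bytes(hashlib.sha256(data).digest(), "big")

def accept(b: Block, is_chain_tip: bool) -> bool:
    if b.hash_int() <= TARGET:
        return True       # regular block: meets the difficulty rule
    if len(b.txs) >= MAX_BLOCK_TXS:
        return True       # full "brick": allowed despite weak PoW
    return is_chain_tip   # partially filled brick: only as the last brick

full_brick = Block("tip", nonce=0, txs=["t1", "t2", "t3", "t4"])
partial = Block("tip", nonce=0, txs=["t1"])
assert accept(full_brick, is_chain_tip=False) is True
assert accept(partial, is_chain_tip=True) is True
```

The "fully filled" requirement is what stands in for PoW as the anti-spam mechanism in the proposal, which is why partially filled bricks have to be rejected except at the tip.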

Re: [bitcoin-dev] On mempool policy consistency

2022-11-08 Thread yancy via bitcoin-dev


Peter,

It sounds like there are two attack vectors, neither of which requires 
full-rbf (correct me if I'm wrong).


1) Bob has staked liquidity in a payment channel with Alice, who later 
double-spends the same inputs (at a very low feerate), resulting in a 
stalemate where neither can spend the UTXOs.  The TX that creates the 
payment channel with Bob will never be mined, since the mining pool sees 
the double spend?


2) Alice spams the network with a double spend widely enough that the 
double spend makes it into a block before the remainder of the network 
sees the first spend.


In the case of 1), what if Bob required opt-in RBF?  Wouldn't that 
solve the issue?  Bob could just create a replacement transaction with 
enough fee to get back his UTXO.


For 2), it seems to me that neither full-rbf nor opt-in rbf resolves this, 
although it's a probabilistic attack and requires spamming many nodes.


Cheers,
-Yancy

On 2022-11-07 15:32, Peter Todd wrote:


On November 3, 2022 5:06:52 PM AST, yancy via bitcoin-dev
 wrote:
> AJ/Antoine et al
>
>> What should folks wanting to do coinjoins/dualfunding/dlcs/etc do to
>> solve that problem if they have only opt-in RBF available?
>
> Assuming Alice is a well-funded adversary, with enough resources to spam
> the network so that enough nodes see her malicious transaction first,
> how does full-rbf solve this vs. opt-in rbf?


First of all, to make things clear, remember that the attacks we're
talking about are aimed at _preventing_ a transaction from getting
mined. Alice wants to cheaply broadcast something with low fees that
won't get mined soon (if ever), which prevents a protocol from making
forward progress.

With full-rbf, who saw what transaction first doesn't matter: the
higher fee paying transaction will always(*) replace the lower fee
one. With opt-in RBF, spamming the network can beat out the
alternative.

*) So what's the catch? Well, due to limitations in today's mempool
implementation, sometimes we can't fully evaluate which tx pays the
higher fee. For example, if Alice spams the network with very _large_
numbers of transactions spending that input, the current mempool code
doesn't even try to figure out if a replacement is better.
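The difference between first-seen-with-opt-in and full-rbf can be caricatured with a toy replacement policy. This is a deliberately simplified sketch, not Bitcoin Core's actual rules (which also consider ancestors, descendants, and absolute-fee increments per BIP 125); the `Tx` record and its fields are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class Tx:
    txid: str
    fee: int           # satoshis
    vsize: int         # virtual bytes
    signals_rbf: bool  # BIP125 opt-in flag

def feerate(tx: Tx) -> float:
    return tx.fee / tx.vsize

def accept_replacement(old: Tx, new: Tx, full_rbf: bool) -> bool:
    """Toy policy: under opt-in RBF a conflicting tx is only considered
    if the first-seen tx signaled replaceability; under full-rbf the
    fees alone decide."""
    if not full_rbf and not old.signals_rbf:
        return False  # first-seen wins: low-fee spam that arrives first sticks
    return feerate(new) > feerate(old) and new.fee > old.fee

# Alice's low-fee, non-signaling spam vs the honest multiparty funding tx:
spam = Tx("a1", fee=200, vsize=110, signals_rbf=False)
honest = Tx("f1", fee=5000, vsize=300, signals_rbf=True)

assert accept_replacement(spam, honest, full_rbf=False) is False
assert accept_replacement(spam, honest, full_rbf=True) is True
```

In the toy model, the honest high-feerate transaction displaces the spam only when `full_rbf` is on, which is the point Peter is making: with full-rbf, arrival order stops mattering.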

But those limitations are likely to be fixable. And even right now,
without fixing them, Alice still has to use a lot more money to pull
off these attacks with full-rbf. So full-rbf definitely improves the
situation even if it doesn't solve the problem completely.


Re: [bitcoin-dev] Announcement: Full-RBF Miner Bounty

2022-11-08 Thread Peter Todd via bitcoin-dev
On Wed, Nov 02, 2022 at 05:26:27AM -0400, Peter Todd via bitcoin-dev wrote:
> I'm now running a full-RBF bounty program for miners.
> 
> tl;dr: I'm broadcasting full-RBF replacements paying extremely high fees to
> reward miners that turn on full-RBF. I'm starting small, just ~$100/block in
> times of congestion.

FYI I've gotten a few hundred dollars worth of donations to this effort, and
have raised the reward to about 0.02 BTC, or $400 USD at current prices.

To be clear, this doesn't mean there will always be a $400 fee tx in the
full-rbf mempool. I have to take a number of precautions to avoid the
double-spend tx being mined by accident, such as only spending txouts with >2
confirmations, and waiting a significant amount of time before the 1st
transaction is sent, to allow for maximal propagation.

> If you'd like to donate to this effort, send BTC to
> bc1qagmufdn6rf80kj3faw4d0pnhxyr47sevp3nj9m

-- 
https://petertodd.org 'peter'[:-1]@petertodd.org




Re: [bitcoin-dev] Merkleize All The Things

2022-11-08 Thread ZmnSCPxj via bitcoin-dev
Good morning Salvatore,

Interesting idea.

The idea to embed the current state is similar to something I have been musing 
about recently.


> ### Game theory (or why the chain will not see any of this)
> 
> With the right economic incentives, protocol designers can guarantee that 
> playing a losing game always loses money compared to cooperating. Therefore, 
> the challenge game is never expected to be played on-chain. The size of the 
> bonds need to be appropriate to disincentivize griefing attacks.

Modulo bugs, operator error, misconfigurations, and other irrationalities of 
humans.



> - OP_CHECKOUTPUTCOVENANTVERIFY: given a number out_i and three 32-byte hash 
> elements x, d and taptree on top of the stack, verifies that the out_i-th 
> output is a P2TR output with internal key computed as above, and tweaked with 
> taptree. This is the actual covenant opcode.

Rather than getting taptree from the stack, just use the same taptree as in the 
revelation of the P2TR.
This removes the need for quining and similar techniques: just do the 
quining in the SCRIPT interpreter.

The entire SCRIPT that controls the covenant can be defined as a taptree with 
various possible branches as tapleaves.
If the contract is intended to terminate at some point it can have one of the 
tapleaves use `OP_CHECKINPUTCOVENANTVERIFY` and then determine what the output 
"should" be using e.g. `OP_CHECKTEMPLATEVERIFY`.


> - Is it worth adding other introspection opcodes, for example 
> OP_INSPECTVERSION, OP_INSPECTLOCKTIME? See Liquid's Tapscript Opcodes [6].

`OP_CHECKTEMPLATEVERIFY` and some kind of SHA256 concatenated hashing should be 
sufficient, I think.

> - Is there any malleability issue? Can covenants “run” without signatures, or 
> is a signature always to be expected when using spending conditions with the 
> covenant encumbrance? That might be useful in contracts where no signature is 
> required to proceed with the protocol (for example, any party could feed 
> valid data to the bisection protocol above).

Hmm protocol designer beware?

Regards,
ZmnSCPxj


Re: [bitcoin-dev] On mempool policy consistency

2022-11-08 Thread AdamISZ via bitcoin-dev
Hi aj and list,
(questions inline)


--- Original Message ---
On Thursday, October 27th, 2022 at 18:21, Anthony Towns via bitcoin-dev 
 wrote:

> 
> Is that true? Antoine claims [1] that opt-in RBF isn't enough to avoid
> a DoS issue when utxos are jointly funded by untrusting partners, and,
> aiui, that's the main motivation for addressing this now.
> 
> [1] 
> https://lists.linuxfoundation.org/pipermail/lightning-dev/2021-May/003033.html
> 
> The scenario he describes is: A, B, C create a tx:
> 
> inputs: A1, B1, C1 [opts in to RBF]
> fees: normal
> outputs:
> [lightning channel, DLC, etc, who knows]
> 
> they all analyse the tx, and agree it looks great; however just before
> publishing it, A spams the network with an alternative tx, double
> spending her input:
> 
> inputs: A1 [does not opt in to RBF]
> fees: low
> outputs: A
> 
> If A gets the timing right, that's bad for B and C because they've
> populated their mempool with the 1st transaction, while everyone else
> sees the 2nd one instead; and neither tx will replace the other. B and
> C can't know that they should just cancel their transaction, eg:
> 
> inputs: B1, C1 [opts in to RBF]
> fees: 50% above normal
> outputs:
> [smaller channel, refund, whatever]
> 
> and might instead waste time trying to fee bump the tx to get it mined,
> or similar.
> 
> What should folks wanting to do coinjoins/dualfunding/dlcs/etc do to
> solve that problem if they have only opt-in RBF available?
> 

> 

I read Antoine's original post on this and got the general gist, and here also 
it makes sense, but I'd like to ask: is it necessary that (B, C) in the above 
not *see* A's opt-out "pre-replacement" (inputs: A1, outputs: A, fees: low; 
call it TX_2)? I get that they cannot replace it, but the idea that they suffer 
financial loss from "ignorant" fee bumping is the part that seems weird to me. 
Clearly TX_2 gets gossiped to other mempools; and understood, it does not 
replace TX_1 (the 3-input one) in B's mempool, say, but why should they not 
even hear about it? Is it just a matter of engineering, or is there some deeper 
problem with that?

About this general flavour of attack: it has never been a *big* concern in 
Joinmarket, imo (though until recently we did have a bug that made this happen 
*by accident*, i.e. people double-spending an input out of a negotiated join, 
albeit rarely; but it is of course definitely *possible* to 'grief' like this, 
given the ordering of events: maker sends signature, maker broadcasts double 
spend - 95% of the time they will be first). Interactive protocols are yucky, 
and I think there will always be griefing possibilities; designing around 
multiple rounds of negotiation amongst not-always-on participants is even more 
yucky. So settling for 'the taker is in charge of the network fee; if it's slow 
or gets double-spent out, causing a time delay, then just wait', combined with 
'there really isn't any economic incentive for an attacker' (i.e. ignoring 
griefing), might sound crappy, but it's probably just being realistic.

Of course, off-chain contracting has more sophisticated considerations than 
this.

Cheers,
AdamISZ/waxwing



[bitcoin-dev] Merkleize All The Things

2022-11-08 Thread Salvatore Ingala via bitcoin-dev
Hi list,

I have been working on some notes to describe an approach that uses
covenants in order to enable general smart contracts in bitcoin. You can
find them here:

https://merkle.fun

The approach has a number of desirable features:

- small impact to layer 1;
- not application-specific, very general;
- it fits well into P2TR;
- it does not require new cryptographic assumptions, nor any construction
that has not withstood the test of time.

This content was presented at the BTCAzores unconference, where it received
the name of MATT − short for Merkleize All The Things.
In fact, no other cryptographic primitive is required, other than Merkle
trees.

I believe this construction gets close to answering the question of how
small a change on bitcoin's layer 1 would suffice to enable arbitrary smart
contracts.

It is not yet at the stage where a formal proposal can be made, therefore
the proposed specs are only for illustrative purposes.

The same content is reformatted below for the mailing list.

Looking forward to hearing about your comments and improvements.
Salvatore Ingala


==


# General smart contracts in bitcoin via covenants

Covenants are UTXOs that are encumbered with restrictions on the outputs of
the transaction spending the UTXO. More formally, we can define as a covenant
any UTXO such that at least one of its spending conditions is valid only if
one or more of the outputs’ scriptPubKeys satisfy certain restrictions.

Generally, covenant proposals also add some form of introspection (that is,
the ability for Script to access parts of the inputs/outputs, or the
blockchain history).

In this note, we want to explore the possibilities unleashed by the
addition of a covenant with the following properties:

- introspection limited to a single hash attached to the UTXO (the
“covenant data”), and input/output amounts;
- pre-commitment to every possible future script (but not their data);
- few simple opcodes operating with the covenant data.

We argue that such a simple covenant construction is enough to extend the
power of bitcoin’s layer 1 to become a universal settlement layer for
arbitrary computation.

Moreover, the covenant can elegantly fit within P2TR transactions, without
any substantial increase in the workload of bitcoin nodes.

A preliminary version of these notes was presented and discussed at the
BTCAzores Unconference [1], on 23rd September 2022.


# Preliminaries

We can think of a smart contract as a “program” that updates a certain
state according to predetermined rules (which typically include access
control by authorizing only certain public keys to perform certain
actions), and that can possibly lock/unlock some coins of the underlying
blockchain according to the same rules.

The exact definition will be highly dependent on the properties of the
underlying blockchain.

In bitcoin, the only state upon which all the nodes reach consensus is the
UTXO set; other blockchains might have other data structures as part of the
consensus, like a key-value store that can be updated as a side effect of
transaction execution.

In this section we explore the following concepts in order to set the
framework for a definition of smart contracts that fits the structure of
bitcoin:

- the contract’s state: the “memory” the smart contract operates on;
- state transitions: the rules to update the contract’s state;
- covenants: the technical means that can allow contracts to function in
the context of a bitcoin UTXO.

In the following, an on-chain smart contract is always represented as a
single UTXO that implicitly embeds the contract’s state and possibly
controls some coins that are “locked” in it. More generally, one could
think of smart contracts that are represented in a set of multiple UTXOs;
we leave the exploration of generalizations of the framework to future
research.

## State

Any interesting “state” of a smart contract can ultimately be encoded as a
list, where each element is either a bit, a fixed-size integer, or an
arbitrary byte string.

Whatever the choice, it does not really affect what kinds of computations
are expressible, as long as one is able to perform some basic computations
on those elements.

In the following, we will assume without loss of generality that
computations happen on a state which is a list of fixed length S = [s_1,
s_2, …, s_n], where each s_i is a byte string.

### Merkleized state

By constructing a Merkle tree that has the (hashes of) the elements of S in
the leaves, we can produce a short commitment h_S to the entire list S with
the following properties (that hold for a verifier that only knows h_S):

- a (log n)-sized proof can prove the value of an element s_i;
- a (log n + |x|)-sized proof can prove the new commitment h_S’, where S’
is a new list obtained by replacing the value of a certain leaf with x.
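A minimal sketch of these two properties, using a plain binary Merkle tree over S (illustrative only; a production construction would need domain separation between leaf and internal-node hashes):

```python
import hashlib

def H(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def merkle_root(leaves):
    layer = [H(x) for x in leaves]
    while len(layer) > 1:
        if len(layer) % 2:
            layer.append(layer[-1])  # duplicate last node on odd layers
        layer = [H(layer[i] + layer[i + 1]) for i in range(0, len(layer), 2)]
    return layer[0]

def prove(leaves, i):
    """Merkle path for leaf i: the (log n) sibling hashes."""
    layer = [H(x) for x in leaves]
    path = []
    while len(layer) > 1:
        if len(layer) % 2:
            layer.append(layer[-1])
        path.append(layer[i ^ 1])  # sibling at this level
        layer = [H(layer[j] + layer[j + 1]) for j in range(0, len(layer), 2)]
        i //= 2
    return path

def root_from_path(leaf, i, path):
    """Recompute the root committed to by (leaf value, index, path)."""
    node = H(leaf)
    for sib in path:
        node = H(node + sib) if i % 2 == 0 else H(sib + node)
        i //= 2
    return node

S = [b"s1", b"s2", b"s3", b"s4"]
h_S = merkle_root(S)
path = prove(S, 2)
# Property 1: a (log n)-sized proof proves the value of s_3 against h_S.
assert root_from_path(b"s3", 2, path) == h_S
# Property 2: the same path also yields the new commitment h_S' after
# replacing s_3 with x, so the verifier can check the update.
assert root_from_path(b"x", 2, path) == merkle_root([b"s1", b"s2", b"x", b"s4"])
```

The key observation is that one sibling path serves double duty: it authenticates the old leaf against h_S, and recomputing the root with the new leaf value along the same path gives the updated commitment h_S'.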

This allows one to compactly commit to a RAM, and to prove the correctness of RAM
updates.

In other words, a