Re: [bitcoin-dev] [Lightning-dev] CheckTemplateVerify Does Not Scale Due to UTXO's Required For Fee Payment

2024-01-29 Thread ZmnSCPxj via bitcoin-dev


> I should note that under Decker-Russell-Osuntokun the expectation is that 
> both counterparties hold the same offchain transactions (hence why it is 
> sometimes called "LN-symmetry").
> However, there are two ways to get around this:
> 
> 1. Split the fee between them in some "fair" way.
> Definition of "fair" wen?
> 2. Create an artificial asymmetry: flip a bit of `nSequence` for the 
> update+state txes of one counterparty, and have each side provide signatures 
> for the tx held by its counterparty (as in Poon-Dryja).
> This lets you force that the party that holds a particular update+state tx is 
> the one that pays fees.

No, wait, #2 does not actually work as stated.
Decker-Russell-Osuntokun uses `SIGHASH_NOINPUT` meaning the `nSequence` is not 
committed in the signature and can be malleated.

Further, in order for update transactions to be able to replace one another, 
the output amount of the update transaction needs to be the same value as its 
input --- meaning the fee cannot be deducted from the channel, at least for 
the update tx.
This forces the update transaction to be paid for by bringing in an external 
UTXO owned by whoever constructed the update transaction (== whoever started 
the closing).
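
To make the constraint concrete, here is a minimal sketch in Python (illustrative only, not any implementation's code; the amounts and sizes are assumptions):

```
# Illustrative sketch: the update output must carry the same value as the
# channel input, so the fee can only come from an external UTXO brought in
# by whoever broadcasts (i.e. whoever starts the closing).
def build_update_tx(channel_value_sat, feerate_sat_per_vb, tx_vbytes):
    fee = feerate_sat_per_vb * tx_vbytes
    update_output = channel_value_sat          # must equal the channel input
    external_input = fee                       # broadcaster tops up the fee
    inputs = [channel_value_sat, external_input]
    outputs = [update_output]
    assert sum(inputs) - sum(outputs) == fee
    return {"inputs": inputs, "outputs": outputs, "fee": fee}

print(build_update_tx(1_000_000, 20, 150))     # fee = 3000 sat, all external
```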


Regards,
ZmnSCPxj


Re: [bitcoin-dev] [Lightning-dev] CheckTemplateVerify Does Not Scale Due to UTXO's Required For Fee Payment

2024-01-29 Thread ZmnSCPxj via bitcoin-dev







On Tuesday, January 30th, 2024 at 4:38 AM, Peter Todd wrote:

> On Tue, Jan 30, 2024 at 04:12:07AM +, ZmnSCPxj wrote:
> 
> > Peter Todd proposes to sign multiple versions of offchain transactions at 
> > varying feerates.
> > However, this proposal has the issue that if you are not the counterparty 
> > paying for onchain fees (e.g. the original acceptor of the channel, as per 
> > "initiator pays" principle), then you have no disincentive to just use the 
> > highest-feerate version always, and have a tiny incentive to only store the 
> > signature for the highest-feerate version to reduce your own storage costs 
> > slightly.
> 
> 
> You are incorrect. Lightning commitments actually come in two different
> versions, for the local and remote sides. Obviously, I'm proposing that fees 
> be
> taken from the side choosing to sign and broadcast the transaction. Which is
> essentially no different from CPFP anchors, where the side choosing to get the
> transaction mined pays the fee (though with anchors, it is easy for both sides
> to choose to contribute, but in practice this almost never seems to happen in
> my experience running LN nodes).

There is a reason why I mention "initiator pays", and it is because the channel 
opener really ought to pay for onchain fees in general.

For example, I could mount the following attack:

1.  I already have an existing LN node on the network.
2.  I wait for a low-feerate time.
3.  I spin up a new node and initiate a channel funding to a victim node.
4.  I empty my channel with the victim node, sending out my funds to my 
existing LN node.
5.  I retire my new node forever.

This forces the victim node to use its commitment tx.

If the onchain fee for the commitment tx is paid for by who holds the 
commitment tx (as in your proposal) then I have forced the victim node to pay 
an onchain fee.

This is why the initial design for the v1 channel open protocol is "initiator pays".
In the above attack scenario, the commitment tx held by the victim node, under 
"initiator pays", has its onchain fee paid by me, thus the victim is not forced 
to pay the unilateral close fee, I am the one forced to pay it.
They do still need to pay fees to get their now-onchain funds back into 
Lightning, but at least more of the onchain fees (the fees to unilaterally 
close the channel with the now-dead node) are paid by the attacker.

On the other hand, it may be possible that "initiator pays" can be dropped.
In this attack scenario, the victim node should really require a non-zero 
reserve anyway that is proportional to the channel size, so that the attacker 
needs to commit some funds to the victim until the victim capitulates and 
unilaterally closes.
In addition, to repeat this attack, I need to keep opening channels to the 
victim and thus pay onchain fees for the channel open.

So it may be that your proposal is sound; possibly the "initiator pays" 
advantage in this attack scenario is small enough that we can sacrifice it for 
multi-fee-version.

I should note that under Decker-Russell-Osuntokun the expectation is that both 
counterparties hold the same offchain transactions (hence why it is sometimes 
called "LN-symmetry").
However, there are two ways to get around this:

1.  Split the fee between them in some "fair" way.
Definition of "fair" wen?
2.  Create an artificial asymmetry: flip a bit of `nSequence` for the 
update+state txes of one counterparty, and have each side provide signatures 
for the tx held by its counterparty (as in Poon-Dryja).
This lets you force that the party that holds a particular update+state tx 
is the one that pays fees.

> > In addition, it is also incentive-incompatible for the party that pays 
> > onchain fees to withhold signatures for the higher-fee versions, because if 
> > you are the party who does not pay fees and all you hold are the complete 
> > signatures for the lowest-fee version (because the counterparty does not 
> > want to trust you with signatures for higher-fee versions because you will 
> > just abuse it), then you will need anchor outputs again to bump up the fee.
> 
> 
> That is also incorrect. If the protocol demands multiple fee variants exist,
> the state of the lightning channel simply doesn't advance until all required
> fee-variants are provided. Withholding can't happen any more than someone 
> could
> "withhold" a state by failing to provide the last byte of a commitment
> transaction: until the protocol state requirements have been fulfilled in full,
> the previous state remains in effect.

No, I am referring to a variation of your proposal where the side paying the 
fees in "initiator pays" gets full signatures for all feerate-versions but the 
other side gets only the full signatures for the lowest-fee version.

If you can build the multi-version proposal with both sides contributing fees 
or with the one exiting the channel paying the fee, then this variation is 
unnecessary and you can 

Re: [bitcoin-dev] [Lightning-dev] CheckTemplateVerify Does Not Scale Due to UTXO's Required For Fee Payment

2024-01-29 Thread ZmnSCPxj via bitcoin-dev
Good morning Michael et al,

> 
> I assume that a CTV based LN-Symmetry also has this drawback when compared to 
> an APO based LN-Symmetry? In theory at least an APO based LN-Symmetry could 
> change the fees in every channel update based on what the current market fee 
> rate was at the time of the update. In today's pre LN-Symmetry world you are 
> always going to have justice transactions for revoked states that were 
> constructed when the market fee rate was very different from the present 
> day's market fee rate.

This is the same in the future Decker-Russell-Osuntokun ("eltoo" / 
"LN-Symmetry") world as in the current Poon-Dryja ("LN-punishment").

Every *commitment* transaction in Poon-Dryja commits to a specific fee rate, 
which is why it is problematic today.
The *justice* transaction is single-signed and can (and SHOULD!) be RBF-ed 
(e.g. CLN implements an aggressive *justice* transaction RBF-ing written by me).

However, the issue is that every *commitment* transaction commits to a specific 
feerate today, and if the counterparty is offline for some time, the market 
feerate may diverge tremendously from the last signed feerate.

The same issue will still exist in Decker-Russell-Osuntokun --- the latest pair 
of update and state transactions will commit to a specific feerate.
If the counterparty is offline for some time, the market feerate may diverge 
tremendously from the last signed feerate.

Anchor commitments fix this by adding an otherwise-unnecessary change output 
(called "anchor output") for both parties to be able to attach a CPFP 
transaction.
However, this comes at the expense of increased blockspace usage for the anchor 
outputs.

Peter Todd proposes to sign multiple versions of offchain transactions at 
varying feerates.
However, this proposal has the issue that if you are not the counterparty 
paying for onchain fees (e.g. the original acceptor of the channel, as per 
"initiator pays" principle), then you have no disincentive to just use the 
highest-feerate version always, and have a tiny incentive to only store the 
signature for the highest-feerate version to reduce your own storage costs 
slightly.
In addition, it is also incentive-incompatible for the party that pays onchain 
fees to withhold signatures for the higher-fee versions, because if you are the 
party who does not pay fees and all you hold are the complete signatures for 
the lowest-fee version (because the counterparty does not want to trust you 
with signatures for higher-fee versions because you will just abuse it), then 
you will need anchor outputs again to bump up the fee.

The proposal from Peter Todd might work if both parties share the burden for 
paying the fees.
However, this may require that both parties always bring in funds on all 
channel opens, i.e. no single-sided funding.
I have also not considered how this game would play out, though naively, if 
both parties pay onchain fees "fairly" for some definition of "fair" (how to 
define "fair" may be problematic --- do they pay equal fees, or fees 
proportional to their total funds held in the channel?), then it seems to me 
okay to have multi-feerate-version offchain txes (regardless of using 
Poon-Dryja or Decker-Russell-Osuntokun).
However, there may be issues with hosting HTLCs; technically HTLCs are nested 
inside a larger contract (the channel), and if a separate transaction is 
needed to resolve them (Poon-Dryja needs one!), do you also have to 
multi-feerate that transaction *in addition to* multi-feerating the outer 
transaction (e.g. the commitment transaction in Poon-Dryja), resulting in 
O(N * N) transactions for N feerates?
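
As a back-of-the-envelope illustration of that blow-up (a sketch only, assuming hypothetically that each HTLC-resolving transaction must be re-signed once per feerate version of the commitment transaction it spends):

```
def presigned_tx_count(n_feerates, n_htlcs):
    commitment_versions = n_feerates
    # each HTLC tx spends one specific commitment version and itself comes
    # in n_feerates versions -> N * N pre-signed txes per HTLC
    htlc_versions = n_htlcs * n_feerates * n_feerates
    return commitment_versions + htlc_versions

print(presigned_tx_count(n_feerates=5, n_htlcs=10))   # 5 + 10 * 25 = 255
```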


Regards,
ZmnSCPxj


Re: [bitcoin-dev] [Lightning-dev] OP_Expire and Coinbase-Like Behavior: Making HTLCs Safer by Letting Transactions Expire Safely

2023-11-07 Thread ZmnSCPxj via bitcoin-dev
Good morning Antoine,


> Once the HTLC is committed on the Bob-Caroll link, Caroll releases the 
> preimage off-chain to Bob with an `update_fulfill_htlc` message, though Bob 
> does _not_ send back his signature for the updated channel state.
> 
> Some blocks before 100, Caroll goes on-chain to claim the inbound HTLC output 
> with the preimage. Her commitment transaction propagation in network mempools 
> is systematically "replaced cycled out" by Bob.

I think this is impossible?

In this scenario, there is an HTLC offered by Bob to Carol.

Prior to block 100, only Carol can actually create an HTLC-success transaction.
Bob cannot propagate an HTLC-timeout transaction because the HTLC timelock says 
"wait till block 100".

Neither can Bob replace-recycle out the commitment transaction itself, because 
the commitment transaction is a single-input transaction, whose sole input 
requires a signature from Bob and a signature from Carol --- obviously Carol 
will not cooperate on an attack on herself.

So as long as Carol is able to get the HTLC-success transaction confirmed 
before block 100, Bob cannot attack.
Of course, once block 100 is reached, `OP_EXPIRE` will then mean that Carol 
cannot claim the fund anymore.

Regards,
ZmnSCPxj


Re: [bitcoin-dev] [Lightning-dev] OP_Expire and Coinbase-Like Behavior: Making HTLCs Safer by Letting Transactions Expire Safely

2023-10-23 Thread ZmnSCPxj via bitcoin-dev
Hi all,

This was discussed partially on the platform formerly known as twitter, but an 
alternate design goes like this:

* Add an `nExpiryTime` field in taproot annex.
  * This indicates that the transaction MUST NOT exist in a block at or above 
the height specified.
* Mempools should put txes in buckets based on `nExpiryTime`, and when a block 
at a given height is reached, drop all the buckets with `nExpiryTime` at or 
below that block height.
* Add an `OP_CHECKEXPIRYTIMEVERIFY` opcode, mostly similar in behavior to 
`OP_EXPIRE` proposed by Peter Todd:
  * Check that `nExpiryTime` exists and has a value equal to or less than the 
stack top.
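
A minimal sketch of the intended mempool behavior (illustrative Python, not Bitcoin Core code):

```
from collections import defaultdict

class ExpiryBuckets:
    """Mempool index: nExpiryTime -> txids, dropped wholesale at that height."""
    def __init__(self):
        self.buckets = defaultdict(set)

    def add_tx(self, txid, n_expiry_time):
        self.buckets[n_expiry_time].add(txid)

    def on_block_connected(self, height):
        # Once a block at `height` exists, a tx with nExpiryTime <= height can
        # never be included again (it MUST NOT be in a block at or above it).
        expired = set()
        for expiry in [h for h in self.buckets if h <= height]:
            expired |= self.buckets.pop(expiry)
        return expired

buckets = ExpiryBuckets()
buckets.add_tx("tx_a", 100)
buckets.add_tx("tx_b", 105)
print(buckets.on_block_connected(100))   # {'tx_a'} is evicted
```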

The primary difference is simply that while Peter proposes an implicit field 
for the value that `OP_EXPIRE` will put in `CTransaction`, I propose an 
explicit field for it in the taproot annex.

The hope is that "explicit is better than implicit" and that the design will be 
better implemented as well by non-Bitcoin-core implementations, as the use of 
tx buckets is now explicit in treating the `nExpiryTime` field.

Regards,
ZmnSCPxj


Re: [bitcoin-dev] Batch exchange withdrawal to lightning requires covenants

2023-10-17 Thread ZmnSCPxj via bitcoin-dev


Good morning Greg,


> > I do not know if existing splice implementations actually perform such a 
> > check.
> Unless all splice implementations do this, then any kind of batched splicing 
> is risky.
> As long as the implementation decides to splice again at some point when a 
> prior
> splice isn't confirming, it will self-resolve once any subsequent splice is 
> confirmed.

Do note that there is a risk here that the reason for "not confirming" is an 
unexpected increase in mempool usage.

In particular, if the attack is not being performed, it is possible for the 
previous splice tx that was not confirming for a while, to be the one that 
confirms in the end, instead of the subsequent splice.
This is admittedly an edge case, but one that could potentially be specifically 
attacked and could lead to loss of funds if the implementations naively deleted 
the signatures for commitment transactions for the previously-not-confirming 
splice transaction.

Indeed, as I understood it, part of the splice proposal is that while a channel 
is being spliced, it should not be spliced again, which your proposal seems to 
violate.

Regards,
ZmnSCPxj


Re: [bitcoin-dev] Batch exchange withdrawal to lightning requires covenants

2023-10-17 Thread ZmnSCPxj via bitcoin-dev
Good morning Bastien,

I have not gotten around to posting it yet, but I have a write-up in my 
computer with the title:

> Batched Splicing Considered Risky

The core of the risk is that if:

* I have no funds right now in a channel (e.g. the LSP allowed me to have 0 
reserve, or this is a newly-singlefunded channel from the LSP to me).
* I have an old state (e.g. for a newly-singlefunded channel, it could have 
been `update_fee`d, so that the initial transaction is old state).

Then if I participate in a batched splice, I can disrupt the batched splice by 
broadcasting the old state and somehow convincing miners to confirm it before 
the batched splice.

Thus, it is important for *any* batched splicing mechanism to have a backout, 
where if the batched splice transaction can no longer be confirmed due to some 
participant disrupting it by posting an old commitment transaction, either a 
subset of the splice is re-created or the channels revert back to pre-splice 
state (with knowledge that the post-splice state can no longer be confirmed).

I know that current splicing tech is to run both the pre-splice and post-splice 
state simultaneously until the splicing transaction is confirmed.
However we need to *also* check if the splicing transaction *cannot* be 
confirmed --- by checking if the other inputs to the splice transaction were 
already consumed by transactions that have deeply confirmed, and in that case, 
to drop the post-splice state and revert to the pre-splice state.
I do not know if existing splice implementations actually perform such a check.
Unless all splice implementations do this, then any kind of batched splicing is 
risky.
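
A sketch of the missing check, against a hypothetical node interface (`get_confirmed_spender`, `confirmations`, and the 6-confirmation threshold are assumptions for illustration, not any real splice implementation):

```
DEEP_CONFIRMATIONS = 6   # illustrative "deeply confirmed" threshold

def splice_can_never_confirm(splice_tx, chain):
    # If any input of the pending splice tx has been spent by a deeply
    # confirmed *conflicting* transaction, the splice tx can never confirm.
    for outpoint in splice_tx.inputs:
        spender = chain.get_confirmed_spender(outpoint)      # assumed helper
        if spender is not None and spender.txid != splice_tx.txid \
                and chain.confirmations(spender.txid) >= DEEP_CONFIRMATIONS:
            return True
    return False

def maybe_backout(channel, chain):
    if splice_can_never_confirm(channel.pending_splice_tx, chain):
        # keep the pre-splice commitment signatures, drop the post-splice state
        channel.drop_post_splice_state()
```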

Regards,
ZmnSCPxj



Re: [bitcoin-dev] [Lightning-dev] Full Disclosure: CVE-2023-40231 / CVE-2023-40232 / CVE-2023-40233 / CVE-2023-40234 "All your mempool are belong to us"

2023-10-17 Thread ZmnSCPxj via bitcoin-dev
Good morning Antoine et al.,

Let me try to rephrase the core of the attack.

There exists these nodes on the LN (letters `A`, `B`, and `C` are nodes, `==` 
are channels):

A == B == C

`A` routes `A->B->C`.

The timelocks, for example, could be:

   A->B timelock = 144
   B->C timelock = 100

The above satisfies the LN BOLT requirements, as long as `B` has a 
`cltv_expiry_delta` of 44 or lower.
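
Spelled out as a trivial check (with the 44-block delta as the illustrative value):

```
incoming_expiry = 144    # A->B HTLC timelock
outgoing_expiry = 100    # B->C HTLC timelock
cltv_expiry_delta = 44   # B's advertised delta (illustrative)

# B may forward only if the incoming HTLC expires at least
# cltv_expiry_delta blocks after the outgoing one.
assert incoming_expiry - outgoing_expiry >= cltv_expiry_delta   # 44 >= 44, OK
```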

After `B` forwards the HTLC `B->C`, C suddenly goes offline, and all the signed 
transactions --- commitment transaction and HTLC-timeout transactions --- are 
"stuck" at the feerate at the time.

At block height 100, `B` notices the `B->C` HTLC timelock is expired without 
`C` having claimed it, so `B` forces the `BC` channel onchain.
However, onchain feerates have risen and the commitment transaction and 
HTLC-timeout transaction do not confirm.

In the meantime, `A` is still online with `B` and updates the onchain fees of 
the `AB` channel pre-signed transactions (commitment tx and HTLC-timeout 
tx) to the latest.

At block height 144, `B` is still not able to claim the `A->B` HTLC, so `A` 
drops the `AB` channel onchain.
As the fees are up-to-date, this confirms immediately and `A` is able to 
recover the HTLC funds.
However, the feerates of the `BC` pre-signed transactions remain at the 
old, uncompetitive feerates.

At this point, `C` broadcasts an HTLC-success transaction with high feerates 
that CPFPs the commitment tx.
However, it replaces the HTLC-timeout transaction, which is at the old, low 
feerate.
`C` is thus able to get the value of the HTLC, but `B` is now no longer able to 
use the knowledge of the preimage, as its own incoming HTLC was already 
confirmed as claimed by `A`.

Is the above restatement accurate?



Let me also explain to non-Lightning experts why HTLC-timeout is presigned in 
this case and why `B` cannot feebump it.

In the Poon-Dryja mechanism, the HTLCs are "infected" by the Poon-Dryja penalty 
case, and are not plain HTLCs.

A plain HTLC offered by `B` to `C` would look like this:

(B && OP_CLTV) || (C && OP_HASH160)

However, on the commitment transaction held by `B`, it would be infected by the 
penalty case in this way:

(B && C && OP_CLTV) || (C && OP_HASH160) || (C && revocation)

There are two changes:

* The addition of a revocation branch `C && revocation`.
* The branch claimable by `B` in the "plain" HTLC (`B && OP_CLTV`) also 
includes `C`.

These are necessary in case `B` tries to cheat and this HTLC is on an old, 
revoked transaction.
If the revoked transaction is *really* old, the `OP_CLTV` would already impose 
a timelock far in the past.
This means that a plain `B && OP_CLTV` branch can be claimed by `B` if it 
retained this very old revoked transaction.

To prevent that, `C` is added to the `B && OP_CLTV` branch.
We also introduce an HTLC-timeout transaction, which spends the `B && C && 
OP_CLTV` branch, and outputs to:

(B && OP_CSV) || (C && revocation)

Thus, even if `B` held onto a very old revoked commitment transaction and 
attempts to spend the timelock branch (because the `OP_CLTV` is for an old 
blockheight), it still has to contend with a new output with a *relative* 
timelock.

Unfortunately, this means that the HTLC-timeout transaction is pre-signed, and 
has a specific feerate.
In order to change the feerate, both `B` and `C` have to agree to re-sign the 
HTLC-timeout transaction at the higher feerate.

However, the HTLC-success transaction in this case spends the plain `(C && 
OP_HASH160)` branch, which only involves `C`.
This allows `C` to feebump the HTLC-success transaction arbitrarily even if `B` 
does not cooperate.

While anchor outputs can be added to the HTLC-timeout transaction as well, `C` 
has a greater advantage here due to being able to RBF the HTLC-timeout out of 
the way (1 transaction), while `B` has to get both HTLC-timeout and a CPFP-RBF 
of the anchor output of the HTLC-timeout transaction (2 transactions).
`C` thus requires a smaller fee to achieve a particular feerate due to having 
to push a smaller number of bytes compared to `B`.
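
To illustrate with made-up numbers (the virtual sizes below are placeholders, not real transaction sizes; only the comparison matters):

```
target_feerate = 50          # sat/vB both sides must reach (assumed)

htlc_success_vb = 300        # C: one RBF-able HTLC-success tx (assumed size)
htlc_timeout_vb = 300        # B: pre-signed HTLC-timeout tx (assumed size)
anchor_cpfp_vb  = 150        # B: extra CPFP tx on the anchor (assumed size)

fee_c = target_feerate * htlc_success_vb                      # 15,000 sat
fee_b = target_feerate * (htlc_timeout_vb + anchor_cpfp_vb)   # 22,500 sat
assert fee_c < fee_b   # C reaches the same feerate for fewer sats than B
```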

Regards,
ZmnSCPxj


Re: [bitcoin-dev] BitVM: Compute Anything on Bitcoin

2023-10-15 Thread ZmnSCPxj via bitcoin-dev
Good morning Robin et al,

It strikes me that it may be possible to Scriptless Script BitVM, replacing 
hashes and preimages with points and scalars.

For example, equivocation of bit commitments could be punished by having the 
prover put a slashable fund behind a pubkey `P` (which is a point).
This slashable fund could be a 2-of-2 between prover and verifier `P && V`.

Then the prover provides a bit-0 point commitment `B`, which is a point.
If the prover wants to assert that this specific bit is 0, it has to provide 
`b` such that `B = b * G`.
If the prover wants to instead assert that this bit is 1, it has to provide `b 
+ p` such that `B = b * G` and `P = p * G`.
If `b` (and therefore `B`) is chosen uniformly at random, then making exactly 
one of these assertions (that the bit is 0, or that the bit is 1) does not 
reveal `p`.
But if it equivocates and asserts both, it reveals `b` and `b + p` and the 
verifier can get the scalar `p`, which is also the private key behind `P` and 
thus can get the fund `P && V`.
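
The slashing algebra can be sketched with scalars alone (illustrative Python; `n` is the secp256k1 group order, and the point multiplications are left implicit):

```
import secrets

n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

p = secrets.randbelow(n)     # prover's slashable key (P = p * G onchain)
b = secrets.randbelow(n)     # per-bit commitment scalar (B = b * G)

assert_bit_0 = b             # revealed if the prover asserts the bit is 0
assert_bit_1 = (b + p) % n   # revealed if the prover asserts the bit is 1

# Equivocation: both assertions are revealed, so anyone can subtract them
# and learn p, the key guarding the slashable fund.
recovered_p = (assert_bit_1 - assert_bit_0) % n
assert recovered_p == p
```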

To create a logic gate commitment, we have the prover and validator provide 
public keys for each input-possibility and each output-possibility, then use 
MuSig to combine them.
For example, suppose we have a NAND gate with inputs I, J and output K.
We have:

* `P[I=0]` and `V[I=0]`, which are the public keys to use if input I is 0.
* `P[I=1]` and `V[I=1]`, which are the public keys to use if input I is 1.
* ...similar for input `J` and output `K`.

In the actual SCRIPT, we take `MuSig(P[I=0], V[I=0])` etc.
For a SCRIPT to check what the value of `I` is, we would have something like:

```
OP_DUP <MuSig(P[I=1], V[I=1])> OP_CHECKSIG
OP_IF
  OP_DROP
  <1>
OP_ELSE
  <MuSig(P[I=0], V[I=0])> OP_CHECKSIGVERIFY
  <0>
OP_ENDIF
```

We would duplicate the above (with appropriate `OP_TOALTSTACK` and 
`OP_FROMALTSTACK`) for input `J` and output `K`, similar to Fig.2 in the paper.

The verifier then provides adaptor signatures, so that for `MuSig(P[I=1], 
V[I=1])` the prover can only complete the signature by revealing the `b + p` 
for `I`.
Similar for `MuSig(P[I=0], V[I=0])`, the verifier provides adaptor signatures 
so that the prover can only complete the signature by revealing the `b` for `I`.
And so on.
Thus, the prover can only execute the SCRIPT by revealing the correct 
commitments for `I`, `J`, and `K`, and any equivocation would reveal `p` and 
let the verifier slash the fund of `P`.

Providing the adaptor signatures replaces the "unlock" of the 
challenge-response phase, instead of requiring a preimage from the verifier.

The internal public key that hides the Taproot tree containing the above logic 
gate commitments could be `MuSig(P, V)` so that the verifier can stop and just 
take the funds by a single signature once it has learned `p` due to the prover 
equivocating.

Not really sure if this is super helpful or not.
Hashes are definitely less CPU to compute.

For example, would it be possible to have the Tapleaves be *just* the wires 
between NAND gates instead of NAND gates themselves?
So to prove a NAND gate operation with inputs `I` and `J` and output `K`, the 
prover would provide bit commitments `B` for `B[I]`, `B[J]`, and `B[K]`, and 
each tapleaf would be just the bit commitment SCRIPT for `I`, `J`, and `K`.
The prover would have to provide `I` and `J`, and commit to those, and then 
the verifier would compute `K = ~(I & J)` and provide *only* the adaptor 
signature for the matching `MuSig(P[K=0], V[K=0])` or `MuSig(P[K=1], V[K=1])`, 
but not the other side.

In that case, it may be possible to just collapse it down to `MuSig(P, V)` and 
have the verifier provide individual adaptor signatures.
For example, the verifier can first challenge the prover to commit to the value 
of `I` by providing two adaptor signatures for `MuSig(P, V)`, one for the 
scalar behind `B[I]` and the other for the scalar behind `B[I] + P`.
The prover completes one or the other, then the verifier moves on to `B[J]` and 
`B[J] + P`.
The prover completes one or the other, then the verifier now knows `I` and `J` 
and can compute the supposed output `K`, and provides only the adaptor 
signature for `MuSig(P, V)` for the scalar behind `B[K]` or `B[K] + P`, 
depending on whether `K` is 0 or 1.
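
A plain sketch of that challenge flow (no cryptography, just the selection logic the verifier would follow):

```
def nand(i_bit, j_bit):
    return 0 if (i_bit and j_bit) else 1      # K = ~(I & J)

def adaptor_to_offer(i_bit, j_bit):
    # After the prover has committed to I and J, the verifier computes the
    # expected K and offers an adaptor for only that one assertion.
    k = nand(i_bit, j_bit)
    return "scalar behind B[K]" if k == 0 else "scalar behind B[K] + P"

print(adaptor_to_offer(1, 1))   # prover may only assert K = 0
print(adaptor_to_offer(0, 1))   # prover may only assert K = 1
```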

That way, you only really actually need Schnorr signatures without having to 
use Tapleaves at all.
This would make BitVM completely invisible on the blockchain, even in a 
unilateral case where one of the prover or verifier stop responding.

Regards,
ZmnSCPxj


Re: [bitcoin-dev] Actuarial System To Reduce Interactivity In N-of-N (N > 2) Multiparticipant Offchain Mechanisms

2023-10-15 Thread ZmnSCPxj via bitcoin-dev
Good morning list,

I have been thinking further on this with regards to BitVM.

By my initial analyses, it seems that BitVM *cannot* be used to improve this 
idea.

What we want is to be able to restrict the actuary to only signing for a 
particular spend exactly once.
The mechanism proposed in the original post is to force `R` reuse by fixing `R`.
This requires a change in Bitcoin consensus on top of `SIGHASH_ANYPREVOUT` 
(which is desirable not only for its enablement of Decker-Russell-Osuntokun, 
which is multiparticipant, but also makes it more convenient as we can make 
changes in the offchain mechanism state asynchronously with the participants 
and actuary signing off on new transactions; we can "lock" the next state to 
some set of transactions occurring, then have the actuary "confirm" new 
transactions by signing them, then have the signature still be valid on the new 
state due to `SIGHASH_ANYPREVOUT` ignoring the actual input transaction ID).
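
For reference, the reason a fixed `R` punishes signing twice can be sketched with scalar arithmetic alone (illustrative Python; `e1` and `e2` stand for the two challenge hashes and `n` is the secp256k1 group order):

```
import secrets

n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

x = secrets.randbelow(n)                  # actuary's private key
k = secrets.randbelow(n)                  # the single nonce behind the fixed R
e1 = secrets.randbelow(n)                 # challenge for the first spend
e2 = secrets.randbelow(n)                 # challenge for a conflicting spend

s1 = (k + e1 * x) % n                     # Schnorr s-value, same nonce k
s2 = (k + e2 * x) % n

# Two signatures with the same R give two linear equations in x:
recovered_x = ((s1 - s2) * pow(e1 - e2, -1, n)) % n
assert recovered_x == x                   # the actuary's bond can be slashed
```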

The best I have been able to come up with is to have a program that checks if 
two signatures sign different things but have the same public key.
If this program validates, then the actuary is known to have cheated and we can 
arrange for the actuary to lose funds if this program validates.
However, BitVM triggers on DIShonest execution of the program, so that program 
cannot be used as-is.
Honest execution of the program leads to the BitVM contract resolving via 
timeout.
I have tried to figure out some way to change the "polarity" of the logic so 
that the actuary is punished, but it requires that the actuary act as validator 
instead of prover (and the aggrieved participant who was expecting the actuary 
to not violate the sign-only-once is the prover, which makes little sense, as 
the participant can challenge the actuary and force it to put up funds, then 
neglect to actually prove anything and enter the default timeout case where the 
prover gets the funds --- it has to be the actuary in the prover position, only 
getting back its funds after a timeout).

The opposite of showing that there exists two signatures with different 
messages but the same public key is to show that there does not exist any other 
signatures with a different message but same public key.
If such a program were written, then it would be trivial to make that program 
pass by simply denying it an input of any other signature, and an actuary in 
prover position can always commit to an input that lacks the second signature 
it made.

The actuary can run a program *outside* of BitVM, so it is also pointless to 
have the signing algorithm be written in BitVM.

Finally, in the actuarial system, the actuary is supposed to provide 
*something* that would make a transaction be immediately confirmable, instead 
of after a timeout.
But in BitVM, the *something* that the prover provides that makes some 
transaction immediately confirmable is to provide a dishonest execution of a 
BitVM program; it is the timeout that is the honest execution of the BitVM 
program.
In addition, the actuary should be restricted so that it can only show this for 
*one* transaction, and not for any other transactions.
There are more possible dishonest executions of a BitVM program than just one, 
but only one honest execution, which is the opposite of what we want.

So, so far, I have not been able to figure out how to use BitVM to replace the 
current forced `R` reuse mechanism for preventing multiple times the actuary 
commits.


Regards,
ZmnSCPxj


Re: [bitcoin-dev] Solving CoinPool high-interactivity issue with cut-through update of Taproot leaves

2023-09-26 Thread ZmnSCPxj via bitcoin-dev
Good morning Antoine,

Does `OP_EVICT` not fit?

https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-February/019926.html

Regards,
ZmnSCPxj



--- Original Message ---
On Monday, September 25th, 2023 at 6:18 PM, Antoine Riard via bitcoin-dev wrote:


> Payment pools and channel factories are afflicted by severe interactivity 
> constraints worsening with the number of users owning an off-chain balance in 
> the construction. The security of user funds is paramount on the ability to 
> withdraw unilaterally from the off-chain construction. As such any update 
> applied to the off-chain balances requires a signature contribution from the 
> unanimity of the construction users to ensure this ability is conserved along 
> updates.
> As soon as one user starts to be offline or irresponsive, the updates of the 
> off-chain balances must have to be halted and payments progress are limited 
> among subsets of 2 users sharing a channel. Different people have proposed 
> solutions to this issue: introducing a coordinator, partitioning or layering 
> balances in off-chain users subsets. I think all those solutions have circled 
> around a novel issue introduced, namely equivocation of off-chain balances at 
> the harm of construction counterparties [0].
> 
> As ZmnSCPxj pointed out recently, one way to mitigate this equivocation 
> consists in punishing the cheating pre-nominated coordinator on an external 
> fidelity bond. One can even imagine more trust-mimized and decentralized 
> fraud proofs to implement this mitigation, removing the need of a coordinator 
> [1].
> 
> However, I believe punishment equivocation to be game-theory sound should 
> compensate a defrauded counterparty of the integrity of its lost off-chain 
> balance. As one cheating counterparty can equivocate in the worst-case 
> against all the other counterparties in the construction, one fidelity bond 
> should be equal to ( C - 1 ) * B satoshi amount, where C is the number of 
> construction counterparty and B the initial off-chain balance of the cheating 
> counterparty.
> 
> Moreover, I guess it is impossible to know ahead of a partition or transition 
> who will be the "honest" counterparties from the "dishonest" ones, therefore 
> this ( C - 1 ) * B-sized fidelity bond must be maintained by every 
> counterparty in the pool or factory. On this ground, I think this mitigation 
> and other corrective ones are not economically practical for large-scale 
> pools among a set of anonymous users.
> 
> I think the best solution to solve the interactivity issue which is realistic 
> to design is one ruling out off-chain group equivocation in a prophylactic 
> fashion. The pool or factory funding utxo should be edited in an efficient 
> way to register new off-chain subgroups, as lack of interactivity from a 
> subset of counterparties demands it.
> 
> With CoinPool, there is already this idea of including a user pubkey and 
> balance amount to each leaf composing the Taproot tree while preserving the 
> key-path spend in case of unanimity in the user group. Taproot leaves can be 
> effectively regarded as off-chain user accounts available to realize 
> privacy-preserving payments and contracts.
> 
> I think one (new ?) idea can be to introduce taproot leaves "cut-through" 
> spends where multiple leaves are updated with a single witness, interactively 
> composed by the owners of the spent leaves. This spend sends back the leaves 
> amount to a new single leaf, aggregating the amounts and user pubkeys. The 
> user leaves not participating in this "cut-through" are inherited with full 
> integrity in the new version of the Taproot tree, at the gain of no 
> interactivity from their side.
> 
> Let's say you have a CoinPool funded and initially set with Alice, Bob, 
> Caroll, Dave and Eve. Each pool participant has a leaf L.x committing to an 
> amount A.x and user pubkey P.x, where x is the user name owning a leaf.
> 
> Bob and Eve are deemed to be offline by the Alice, Caroll and Dave subset 
> (the ACD group).
> 
> The ACD group composes a cut-through spend of L.a + L.c + L.d. This spend 
> generates a new leaf L.(acd) committing to amount A.(acd) and P.(acd).
> 
> Amount A.(acd) = A.a + A.c + A.d and pubkey P.(acd) = P.a + P.c + P.d.
> 
> Bob's leaf L.b and Eve's leaf L.e are left unmodified.
> 
> The ACD group generates a new Taproot tree T' = L.(acd) + L.b + L.e, where 
> the key-path K spend including the original unanimity of pool counterparties 
> is left unmodified.
> 
> The ACD group can confirm a transaction spending the pool funding utxo to a 
> new single output committing to the scriptpubkey K + T'.
> 
> From then, the ACD group can pursue off-chain balance updates among the 
> subgroup thanks to the new P.(acd) and relying on the known Eltoo mechanism. 
> There is no possibility for any member of the ACD group to equivocate with 
> Bob or Eve in a non-observable fashion.
> 
> 

Re: [bitcoin-dev] Scaling Lightning With Simple Covenants

2023-09-19 Thread ZmnSCPxj via bitcoin-dev


Good morning Erik,

> > replacing CTV usage with Musig2
> 
> 
> this changes the trust model to a federated one vs trustless and also 
> increases the on-chain footprint of failure, correct?


As I understand it, no.

MuSig and MuSig2 are n-of-n signing algorithms.

The implied usage is that all entities `A_i` (for all `i`) and the dedicated LN 
node `B` are in the n-of-n set.

The argument that 2-of-2 channels are non-custodial and trust-minimized extends 
to n-of-n for all n.

Regards,
ZmnSCPxj


Re: [bitcoin-dev] Scaling Lightning With Simple Covenants

2023-09-17 Thread ZmnSCPxj via bitcoin-dev
Good morning John,

> On the other hand, if the consensus rules are changed to allow even simple 
> covenants, this scaling bottleneck is eliminated.
> The key observation is that with covenants, a casual user can co-own an 
> off-chain Lightning channel without having to sign all (or any) of the 
> transactions on which it depends.
> Instead, a UTXO can have a covenant that guarantees the creation of the 
> casual user's channel.
> The simplest way to have a single UTXO create channels for a large number of 
> casual users is to put a covenant on the UTXO that forces the creation of a 
> tree of transactions, the leaves of which are the casual users' channels.
> 
> While such a covenant tree can create channels for millions of casual users 
> without requiring signatures or solving a difficult group coordination 
> problem, it's not sufficient for scaling.
> The problem is that each channel created by a covenant tree has a fixed set 
> of owners, and changing the ownership of a channel created by a covenant tree 
> requires putting the channel on-chain.
> Therefore, assuming that all casual users will eventually want to pair with 
> different dedicated users (and vice-versa), the covenant tree doesn't 
> actually provide any long-term scaling benefit.
> 
> Fortunately, real long-term scaling can be achieved by adding a deadline 
> after which all non-leaf outputs in the covenant tree can be spent without 
> having to meet the conditions of the covenant.
> The resulting covenant tree is called a "timeout-tree" [9, Section 5.3].
> 
> Let A_1 ... A_n denote a large number of casual users, let B be a dedicated 
> user, and let E denote some fixed time in the future.
> User B creates a timeout-tree with expiry E where:
>  * leaf i has an output that funds a Lightning channel owned by A_i and B, and
>  * after time E, each non-leaf output in the covenant tree can also be spent 
> by user B without having to meet the conditions of the covenant.

I think, based solely on the description above, that it is not safe for 
dedicated user `B` to create this, unless it gets a signature from `A_i`.

Basically, suppose the entire thing is single-funded from `B`.
(Funding from `A_i` requires that each `A_i` that wants to contribute be online 
at the time, at which point you might as well just use signatures instead of 
`OP_CHECKTEMPLATEVERIFY`.)

If a particular `A_i` never contacts `B` but *does* get the entire path from 
the funding TXO to the `A_i && B` output confirmed, then the funds that `B` 
allocated are locked, ***unless*** `B` got a unilateral close signature from 
`A_i` to spend from `A_i && B`.
Thus, `A_i` still needs to be online at the time `B` signs the funding 
transaction that anchors the entire tree.

(This is why many people lost funds when they went and implemented 
`multifundchannel` by themselves --- you need to ensure that all the 
counterparties in the same batch of openings have given you unilateral close 
signatures ***before*** you broadcast the funding transaction.
And in principle, whether the channels are represented onchain by a single 
transaction output, or multiple separate ones on the same transaction, is 
immaterial --- the funder still needs a unilateral close signature from the 
fundee.)

The alternative is to also infect the leaf itself with a lifetime `(A_i && B) 
|| (B && CLTV)`.
This is essentially a Spilman channel variant, which is what we use in 
swap-in-potentiam, BTW.

If so, then `B` can dedicate that leaf output to a separate 1-input 1-output 
transaction that takes the `(A_i && B)` branch and spends to a plain `A && B` 
Lightning channel.
`B` can fund the tree, then when `A_i` comes online and is willing to accept 
the channel from `B`, that is when `A_i` creates two signatures:

* For the transaction spending `(A_i && B) || (B && CLTV)` (taking the `A_i && 
B` branch) to spend to the plain `A && B` Lightning channel.

Regards,
ZmnSCPxj


Re: [bitcoin-dev] Actuarial System To Reduce Interactivity In N-of-N (N > 2) Multiparticipant Offchain Mechanisms

2023-09-17 Thread ZmnSCPxj via bitcoin-dev


Good morning Dave,




--- Original Message ---
On Monday, September 18th, 2023 at 12:12 AM, David A. Harding wrote:


> 
> On September 8, 2023 3:27:38 PM HST, ZmnSCPxj via bitcoin-dev 
> bitcoin-dev@lists.linuxfoundation.org wrote:
> 
> > Now, suppose that participant A wants B to be assured that
> > A will not double-spend the transaction.
> > Then A solicits a single-spend signature from the actuary,
> > getting a signature M:
> > 
> >  current state               +----------------+
> > -+---------------+           | (M||CSV) && A2 |
> >  |(M||CSV) && A  | --------> +----------------+
> >  +---------------+    M,A    | (M||CSV) && B2 |
> >  |(M||CSV) && B  |           +----------------+
> >  +---------------+
> >  |(M||CSV) && C  |
> > -+---------------+
> > 
> > The above is now a confirmed transaction.
> 
> 
> Good morning, ZmnSCPxj.
> 
> What happens if A and M are both members of a group of thieves that control a 
> moderate amount of hash rate? Can A provide the "confirmed transaction" 
> containing M's sign-only-once signature to B and then, sometime[1] before the 
> CSV expiry, generate a block that contains A's and M's signature over a 
> different transaction that does not pay B? Either the same transaction or a 
> different transaction in the block also spends M's fidelity bond to a new 
> address exclusively controlled by M, preventing it from being spent by 
> another party unless they reorg the block chain.

Indeed, the fidelity bond of M would need to be separately locked to `(M && B) 
|| (M && CSV(1 year))`, and the actuary would need to lock new funds before the 
end of the time period or else the participants would be justified in closing 
the mechanism with the latest state.

And of course the bond would have to be replicated for each participant `A`, 
`B`, `C` as well, reducing scalability.

If possible, I would like to point attention at developing alternatives to the 
"sign-only-once" mechanism.

Basically: the point is that we want a mechanism that allows the always-online 
party (the "actuary") to *only* select transactions, and not move coins 
otherwise.
This is the nearest I have managed to get, without dropping down to a 
proof-of-work blockchain.

As noted, in a proof-of-work blockchain, the miners (the always-online party of 
the blockchain) can only select transactions, and cannot authorize moves 
without consent of the owners.
That is what we would want to replicate somehow, to reduce interactivity 
requirements.

Regards,
ZmnSCPxj


Re: [bitcoin-dev] Actuarial System To Reduce Interactivity In N-of-N (N > 2) Multiparticipant Offchain Mechanisms

2023-09-12 Thread ZmnSCPxj via bitcoin-dev
Good morning Antoine,


> Hi Zeeman
> 
> > What we can do is to add the actuary to the contract that
> > controls the funds, but with the condition that the
> > actuary signature has a specific `R`.
> 
> > As we know, `R` reuse --- creating a new signature for a
> > different message but the same `R` --- will leak the
> > private key.
> 
> > The actuary can be forced to put up an onchain bond.
> > The bond can be spent using the private key of the actuary.
> > If the actuary signs a transaction once, with a fixed `R`,
> > then its private key is still safe.
> 
> > However, if the actuary signs one transaction that spends
> > some transaction output, and then signs a different
> > transaction that spends the same transaction output, both
> > signatures need to use the same fixed `R`.
> > Because of the `R` reuse, this lets anyone who expected
> > one transaction to be confirmed, but finds that the other
> > one was confirmed, to derive the secret key of the
> > actuary from the two signatures, and then slash the bond
> > of the actuary.
> 
> From my understanding, if an off-chain state N1 with a negotiated group of 40 
> is halted in the middle of the actuary's R reveals due to the 40th 
> participant non-interactivity, there is no guarantee that a new off-chain 
> state N1' with a new negotiated group of 39 (from which evicted 40th's output 
> is absent) do not re-use R reveals on N1. So for the actuary bond security, I 
> think the R reveal should only happen once all the group participants have 
> revealed their own signature. It sounds like some loose interactivity is 
> still assumed, i.e all the non-actuary participants must be online at the 
> same time, and lack of contribution is to blame as you have a "flat" 
> off-chain construction (i.e no layering of the promised off-chain outputs in 
> subgroups to lower novation interactivity).

Yes, there is some loose interactivity assumed.

However:

* The actuary is always online and can gather signatures for the next state in 
parallel with signing new transactions on top of the next state.
  * This is why `SIGHASH_ANYPREVOUT` is needed, as the transactions on top of 
the next state might spend either the actual next state (if the next state is 
successfully signed), or the current state plus additional transactions (i.e. 
the transactions that move from current state to next state) (if the next state 
fails to get fully signed and the participants decide to give up on the next 
state getting signed).

> More fundamentally, I think this actuarial system does not solve the 
> "multi-party off-chain state correction" problem as there is no guarantee 
> that the actuary does not slash the bond itself. And if the bond is guarded 
> by users' pubkeys, there is no guarantee that the user will cooperate after 
> the actuary equivocation is committed to sign a "fair" slashing transaction.

Indeed.

One can consider that the participants other than the actuary would generate a 
single public key known by the participants.
But then only one sockpuppet of the actuary is needed to add to the participant 
set.

Basically, the big issue is that the actuary needs to bond a significant amount 
of funds to each participant, and that bond is not part of the funding of the 
construction.

Other ways of ensuring single-use could replace this, if that is possible.
Do you know of any?

Regards,
ZmnSCPxj


[bitcoin-dev] Actuarial System To Reduce Interactivity In N-of-N (N > 2) Multiparticipant Offchain Mechanisms

2023-09-08 Thread ZmnSCPxj via bitcoin-dev
Actuarial System To Reduce Interactivity In N-of-N (N > 2) Multiparticipant Offchain Mechanisms

Introduction
============

The blockchain layer of Bitcoin provides an excellent non-interactivity:
users can go offline, then come online, synchronize, and broadcast
transactions to the mempool.
Always-online miners then get the transactions and add them to blocks,
thereby confirming them. 

There are two important properties here: 

* Users do not need to be persistently online, only online when they
  need to create and send a transaction.
* Miners only dictate transaction ordering (i.e. which of two
  conflicting transactions "comes first" and the second one is thus
  invalid), and do ***not*** have custody of any user funds at all.

Both properties are difficult to achieve for offchain mechanisms like
2-participant Lightning channels.
But without these two properties, the requirement to be interactive 
and thus always online creates additional friction in the use of the
technology.  
 
When we move on from 2-participant offchain mechanisms ("channels")
and towards N > 2, the interactivity problem is exacerbated.
Generally, it is not possible to advance the state of an offchain
mechanism that uses N-of-N signing without all users being online
simultaneously.

In this writeup, I present a new role that N-of-N offchain mechanisms
can include.
This role, the actuary role, is similar to the role of miners on the
blockchain: they have high uptime (so users can connect to them to
send a transaction "for confirmation") and they only decide
transaction ordering and do ***not*** have custody of the coins.

Required Softforks
------------------

To enable the actuary role I propose here, we need to have two
softforks:

* `SIGHASH_ANYPREVOUT`
* `OP_CHECKSEPARATEDSIG`
  - Instead of accepting `(R, s)` signature as a single stack item,
this accepts `s` and `R` as two separate stack items.

I expect that neither is significantly controversial.
Neither seems to modify miner incentives, and thus isolates the
new role away from actual miners.

I will describe later how both are used by the proposed mechanism.

Actuaries In An N-of-N Offchain Mechanism
=========================================

Mechanisms like Decker-Wattenhofer, Poon-Dryja, and
Decker-Russell-Osuntokun ("eltoo") can have an N-of-N signatory
set.
I will not discuss them deeply, other than to note that
Decker-Russell-Osuntokun requires `SIGHASH_ANYPREVOUT`, supports
N > 2 (unlike Poon-Dryja), and does not require a significant
number of transactions with variable relative locktimes in the
unilateral close case (unlike Decker-Wattenhofer).

Using an N-of-N signatory set provides the following important
advantage:

* It is a consensus, not a democracy: everyone needs to agree.
  - Thus, even a majority cannot force you to move funds you
own against your will.
The other side of "not your keys, not your coins" is
"your keys, your coins": if your key is necessary
because you are one of the N signatories, then funds inside
the mechanism are *your coins* and thus ***not*** custodial.

The drawback of N-of-N signatories is that **all** N participants
need to come together to sign a new state of the mechanism.
If one of the N participants is offline, this stalls the protocol.

An Offchain "Mempool"
---------------------

At any time, an offchain mechanism such as Decker-Russell-Osuntokun
will have a "current state", the latest set of transaction outputs
that, if you unilateral close at that point, will be instantiated
onchain.

Thus, we can consider that the state of the mechanism is a set of
pairs of Bitcoin SCRIPT and number of satoshis.

These are instantiated as *actual* transaction outputs on some
transaction that can be pushed onchain in a unilateral close
situation.

Suppose there are N (N > 2) participants in an offchain mechanism.

Now suppose that one of the participants owns some funds in a
simple single-sig contract in the current state of the offchain
mechanism.
Suppose that participant ("A") wants to send money to another
participant ("B").
Then participant A can "just" create an ordinary Bitcoin
transaction that spends the appropriate transaction output from
the current state, and sends money to participant B.

 current state              +------+
-+---+                      |  A2  | (change output)
 | A | -------------------> +------+
 +---+                      |  B2  |
 | B |                      +------+
 +---+
 | C |
-+---+

Now, B can "accept" that this transaction is real, but ***only*** if
B trusts A to not double-spend.
Participant A can still construct a different transaction that
spends that output but does ***not*** give any funds to
participant B.

Thus, this transaction is "unconfirmed", or in other words,
would be in a "mempool" waiting to be confirmed.

How do we "confirm" this transaction?

In 

Re: [bitcoin-dev] Sentinel Chains: A Novel Two-Way Peg

2023-08-30 Thread ZmnSCPxj via bitcoin-dev
Good morning Ryan, et al.,

My long-ago interest in sidechains was the hope that they would be a scaling 
solution.

However, at some point I thought "the problem is that blockchains cannot scale, 
sidechains means MORE blockchains that cannot scale, what was I thinking???"
This is why I turned my attention to Lightning, which is a non-blockchain 
mechanism for scaling blockchains.

The only other reason for sidechains is to develop new features.

However, any actually useful features should at some point get onto the "real" 
Bitcoin.
In that case, a sidechain would "only" be useful as a proof-of-concept.
And in that case, a federated sidechain among people who can slap the back of 
the heads of each other in case of bad behavior would be sufficient to develop 
and prototype a feature.

--

In any case, if you want to consider a "user-activated" sidechain feature, you 
may be interested in an old idea, "mainstake", by some obscure random with an 
unpronounceable name: https://zmnscpxj.github.io/sidechain/mainstake/index.html

Here are some differences compared to e.g. drivechains:

* Mainchain miners cannot select the builder of the next sidechain block, 
without increasing their required work (possibly dropping them below 
profitability).
  More specifically:
  * If they want to select a minority (< 50%) sidechain block builder, then 
their difficulty increases by at least one additional bit.
The number of bits added is basically the negative log2 of the share of the 
sidechain block builder they want to select.
  * The intent is to make it very much more unpalatable for a sidechain block 
builder to pay fees to the mainchain miner to get its version of the sidechain 
block confirmed.
A minority sidechain block builder that wants to lie to the mainchain about 
a withdrawal will find that the fees necessary to convince a miner to select 
them are much higher than the total fees of a block.
This better isolates sidechain conflicts away from mainchain miners.
* Miners can censor the addition of new mainstakes or the renewal of existing 
mainstakes.
  However, the same argument of censorship-resistance should still apply here 
(< 51% cannot reliably censor, and >=51% *can* censor but that creates an 
increasing feerate for censored transactions that encourages other potential 
miners to evict the censor).
  * In particular, miners cannot censor sidechain blocks easily (part of the 
isolation above), though they *can* censor new mainstakers that are attempting 
to evict mainstakers that are hostile to a sidechain.

There are still some similarities.
Essentially, all sidechain funds are custodied by a set of anonymous people.

One can consider as well that fund distribution is unlikely to be 
well-distributed, and thus it is possible that a small number of very large 
whales can simply take over some sidechain with small mainstakers and outright 
steal the funds in it, making them even richer.
(Consider how the linked write-up mentions "PoW change" much, much too often, I 
am embarrassed for this foolish pseudonymous writer)

Regards,
ZmnSCPxj


Re: [bitcoin-dev] Sentinel Chains: A Novel Two-Way Peg

2023-08-30 Thread ZmnSCPxj via bitcoin-dev


Good morning Ryan,


> I appreciate your questions, ZmnSCPxj.
> 
> I will answer your second question first: Mainchain nodes do not ever 
> validate sidechain blocks. Sidechain nodes watch Bitcoin for invalid 
> withdrawals, and publish signed attestations to a public broadcast network 
> (such as Nostr) that a transaction is making an invalid withdrawal. These 
> sidechain nodes are the so-called sentinels.

Let me reiterate my question:

Suppose I trust some sidechain node that is publishing such an attestation.

Then the sidechain node is hacked or otherwise wishes to disrupt the network 
for its own purposes.
And it attests that a valid sidechain withdrawal is actually invalid.

What happens then?

To the point, suppose that the attestation private key is somehow leaked or 
determined by a third party that has incentive to disrupt the mainchain network.

And it seems to me that this can be used to force some number of nodes to fork 
themselves off the network.

This is dangerous as nodes may be monitoring the blockchain for time-sensitive 
events, such as Lightning Network theft attempts.

Making "fork off bad miners!" a regular occurrence seems dangerous to me.

Regards,
ZmnSCPxj


Re: [bitcoin-dev] Sentinel Chains: A Novel Two-Way Peg

2023-08-28 Thread ZmnSCPxj via bitcoin-dev
Good morning Ryan,

If I modify your Sentinel Chain open-source software so that it is honest for 
999 sidechain blocks, then lies and says that the 1000th block is invalid even 
though it actually is, what happens?

Do mainchain nodes need to download the previous 999 sidechain blocks, run the 
sidechain rules on them, and then validate the 1000th sidechain block itself?

Regards,
ZmnSCPxj


Re: [bitcoin-dev] Pull-req to remove the arbitrary limits on OP_Return outputs

2023-08-07 Thread ZmnSCPxj via bitcoin-dev
Good morning John Light,

> 2. Documentation about OP_RETURN says that OP_RETURN outputs are 
> "provably-prunable".[2] A question I had about this was, are there any tools 
> available that a full node operator could use to prune this data from their 
> nodes?

As I understand it, `OP_RETURN` outputs are already pruned, in the sense that 
they are never put in the UTXO database in the first place.
Indeed, as I understand it, "pruning" in this context is about removing (or not 
inserting at all, in the case of `OP_RETURN`) from the UTXO database.
The UTXO database is best kept small to keep lookups of UTXOs being spent fast, 
as that impacts the speed of validation.

Archival nodes still retain the raw `OP_RETURN` data as part of the raw block 
data, as it is necessary to prove that those transactions are (1) valid 
transactions and (2) part of (i.e. in the Merkle tree of) a valid block on the 
block header chain.
Block-pruning nodes also still retain this data, as they can at least serve 
recent blocks with the same requirement of proving that transactions containing 
`OP_RETURN` are valid transactions in a valid block on the block header chain.

If you want to prove that a block is valid, you need to present even 
`OP_RETURN` data, as you need to be able to show the actual transaction 
containing it, so that the verifier can see that the transaction is correctly 
formatted and its txid matches the supposed location in the Merkle tree.
Block relay requires that the node relaying a block prove that that block is 
indeed valid, thus you need to retain the `OP_RETURN` data.
Thus, in this context "pruning" refers to not keeping `OP_RETURN` TXOs in the 
UTXO database.
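
Concretely, the rule that keeps such outputs out of the UTXO database is just a 
check that the scriptPubKey begins with `OP_RETURN` (a "provably unspendable" 
script); a minimal sketch of the bookkeeping, simplified from my understanding 
of the actual implementation:

    OP_RETURN = 0x6a
    MAX_SCRIPT_SIZE = 10_000

    def is_provably_unspendable(script_pubkey: bytes) -> bool:
        """Never spendable, so it never needs to enter the UTXO database."""
        return ((len(script_pubkey) > 0 and script_pubkey[0] == OP_RETURN)
                or len(script_pubkey) > MAX_SCRIPT_SIZE)

    def add_outputs(utxo_set: dict, txid: str, output_scripts: list) -> None:
        """Add only spendable outputs; OP_RETURN outputs are "pruned" by omission."""
        for n, spk in enumerate(output_scripts):
            if not is_provably_unspendable(spk):
                utxo_set[(txid, n)] = spk

    utxos = {}
    data_push = bytes([OP_RETURN, 11]) + b"hello world"   # OP_RETURN <11-byte data>
    p2wpkh    = bytes.fromhex("0014") + bytes(20)         # ordinary spendable output
    add_outputs(utxos, "some_txid", [data_push, p2wpkh])
    assert len(utxos) == 1                                # only the spendable output is tracked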

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Blinded 2-party Musig2

2023-07-24 Thread ZmnSCPxj via bitcoin-dev
Good morning Tom,

Would this allow party 2 to itself be composed of N >= 2 parties?

MuSig2 (as opposed to MuSig1) requires that signatories provide multiple `R` 
points, not just one each; these are aggregated into the final signing nonce 
using a binding coefficient derived from all the contributed nonces.
This prevents party 2 from creating an `R` that may allow it to perform certain 
attacks whose name escapes me right now but which I used to know.
(it is the reason why MuSig1 requires 3 round trips, and why MuSig2 requires at 
least 2 `R` nonces per signatory)

Your scheme has only one `R` per party; would it not be vulnerable to that 
attack?
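
For concreteness, a toy sketch of the MuSig2 nonce handling as I understand it 
(integers modulo a prime stand in for curve points, and the key-aggregation 
coefficients are omitted); the crucial part is the binding coefficient `b`, 
which depends on *all* the contributed nonces and thus stops any single party 
from steering the final `R`:

    import hashlib, random

    Q = 2**127 - 1   # toy modulus standing in for a group order (assumption)

    def H(*parts) -> int:
        data = b"|".join(str(p).encode() for p in parts)
        return int.from_bytes(hashlib.sha256(data).digest(), "big") % Q

    # Each signer contributes TWO nonces (r1, r2); only their sums are public.
    signers = [(random.randrange(1, Q), random.randrange(1, Q), random.randrange(1, Q))
               for _ in range(2)]                      # (privkey x, nonce r1, nonce r2)
    X_agg  = sum(x  for x, _, _  in signers) % Q       # toy key aggregation, coefficients omitted
    R1_agg = sum(r1 for _, r1, _ in signers) % Q
    R2_agg = sum(r2 for _, _, r2 in signers) % Q

    m = "message"
    b = H("noncecoef", X_agg, R1_agg, R2_agg, m)       # binding coefficient: depends on ALL nonces
    R = (R1_agg + b * R2_agg) % Q                      # effective signing nonce
    c = H("challenge", X_agg, R, m)

    s = sum(r1 + b * r2 + c * x for x, r1, r2 in signers) % Q
    assert s == (R + c * X_agg) % Q                    # toy check of "s*G == R + c*X"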

Regards,
ZmnSCPxj


Sent with Proton Mail secure email.

--- Original Message ---
On Monday, July 24th, 2023 at 7:46 AM, Tom Trevethan via bitcoin-dev 
 wrote:


> We are implementing a version of 2-of-2 Schnorr Musig2 for statechains where 
> the server (party 1 in the 2-of-2) will be fully 'blinded' - in that it can 
> hold a private key that is required to generate an aggregate signature on an 
> aggregate public key, but that it does not learn either: 1) The aggregate 
> public key 2) The aggregate signature and 3) The message (m) being signed.
> 
> In the model of blinded statechains, the security rests on the statechain 
> server being trusted to report the NUMBER of partial signatures it has 
> generated for a particular key (as opposed to being trusted to enforce rules 
> on WHAT it has signed in the unblinded case) and the full set of signatures 
> generated being verified client side 
> https://github.com/commerceblock/mercury/blob/master/doc/merc_blind.md#blinding-considerations
> 
> Given the 2-of-2 musig2 protocol operates as follows (in the following 
> description, private keys (field elements) are denoted using lower case 
> letters, and elliptic curve points as uppercase letters. G is the generator 
> point and point multiplication denoted as X = xG and point addition as A = G 
> + G):
> 
> Party 1 generates private key x1 and public key X1 = x1G. Party 2 generates 
> private key x2 and public key X2 = x2G. The set of pubkeys is L = {X1,X2}. 
> The key aggregation coefficient is KeyAggCoef(L,X) = H(L,X). The shared 
> (aggregate) public key X = a1X1 + a2X2 where a1 = KeyAggCoef(L,X1) and a2 = 
> KeyAggCoef(L,X2).
> 
> To sign a message m, party 1 generates nonce r1 and R1 = r1G. Party 2 
> generates nonce r2 and R2 = r2G. These are aggregated into R = R1 + R2.
> 
> Party 1 then computes 'challenge' c = H(X||R||m) and s1 = c.a1.x1 + r1
> Party 2 then computes 'challenge' c = H(X||R||m) and s2 = c.a2.x2 + r2
> 
> The final signature is then (R,s1+s2).
> 
> In the case of blinding this for party 1:
> 
> To prevent party 1 from learning of either the full public key or final 
> signature seems straightforward, if party 1 doesn't not need to independently 
> compute and verify c = H(X||R||m) (as they are blinded from the message in 
> any case).
> 
> 1) Key aggregation is performed only by party 2. Party 1 just sends X1 to 
> party 2.
> 2) Nonce aggregation is performed only by party 2. Party 1 just sends R1 to 
> party 2.
> 3) Party 2 computes c = H(X||R||m) and sends it to party 1 in order to 
> compute s1 = c.a1.x1 + r1
> 
> Party 1 never learns the final value of (R,s1+s2) or m.
> 
> Any comments on this or potential issues would be appreciated.
> 
> Tom
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Ark: An Alternative Privacy-preserving Second Layer Solution

2023-05-23 Thread ZmnSCPxj via bitcoin-dev
Here is an old write-up that should be read by everyone trying to design a 
NON-custodial L2: https://zmnscpxj.github.io/offchain/safety.html




Sent with Proton Mail secure email.

--- Original Message ---
On Wednesday, May 24th, 2023 at 12:40 AM, ZmnSCPxj via bitcoin-dev 
 wrote:


> Good morning Burak,
> 
> > > As the access to Lightning is also by the (same?) ASP, it seems to me 
> > > that the ASP will simply fail to forward the payment on the broader 
> > > Lightning network after it has replaced the in-mempool transaction, 
> > > preventing recipients from actually being able to rely on any received 
> > > funds existing until the next pool transaction is confirmed.
> > 
> > Yes, that's correct. Lightning payments are routed through ASPs. ASP may 
> > not cooperate in forwarding HTLC(s) AFTER double-spending their pool 
> > transaction. However, it's a footgun if ASP forwards HTLC(s) BEFORE 
> > double-spending their pool transaction.
> 
> 
> This is why competent coders test their code for footguns before deploying in 
> production.
> 
> > What makes Ark magical is, in the collaborative case, users' ability to pay 
> > lightning invoices with their zero-conf vTXOs, without waiting for on-chain 
> > confirmations.
> 
> 
> You can also do the same in Lightning, with the same risk profile: the LSP 
> opens a 0-conf channel to you, you receive over Lightning, send out over 
> Lightning again, without waiting for onchain confirmations.
> Again the LSP can also steal the funds by double-spending the 0-conf channel 
> open, like in the Ark case.
> 
> The difference here is that once confirmed, the LSP can no longer attack you.
> As I understand Ark, there is always an unconfirmed transaction that can be 
> double-spent by the ASP, so that the ASP can attack at any time.
> 
> > This is the opposite of swap-ins, where users SHOULD wait for on-chain 
> > confirmations before revealing their preimage of the HODL invoice; 
> > otherwise, the swap service provider can steal users' sats by 
> > double-spending their zero-conf HTLC.
> 
> 
> If by "swap-in" you mean "onchain-to-offchain swap" then it is the user who 
> can double-spend their onchain 0-conf HTLC, not the swap service provider.
> As the context is receiving money and then sending it out, I think that is 
> what you mean, but I think you also misunderstand the concept.
> 
> Regards,
> ZmnSCPxj
> ___
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Ark: An Alternative Privacy-preserving Second Layer Solution

2023-05-23 Thread ZmnSCPxj via bitcoin-dev
Good morning Burak,

> > As the access to Lightning is also by the (same?) ASP, it seems to me that 
> > the ASP will simply fail to forward the payment on the broader Lightning 
> > network after it has replaced the in-mempool transaction, preventing 
> > recipients from actually being able to rely on any received funds existing 
> > until the next pool transaction is confirmed.
> 
> 
> Yes, that's correct. Lightning payments are routed through ASPs. ASP may not 
> cooperate in forwarding HTLC(s) AFTER double-spending their pool transaction. 
> However, it's a footgun if ASP forwards HTLC(s) BEFORE double-spending their 
> pool transaction.

This is why competent coders test their code for footguns before deploying in 
production.

> What makes Ark magical is, in the collaborative case, users' ability to pay 
> lightning invoices with their zero-conf vTXOs, without waiting for on-chain 
> confirmations.

You can also do the same in Lightning, with the same risk profile: the LSP 
opens a 0-conf channel to you, you receive over Lightning, send out over 
Lightning again, without waiting for onchain confirmations.
Again the LSP can also steal the funds by double-spending the 0-conf channel 
open, like in the Ark case.

The difference here is that once confirmed, the LSP can no longer attack you.
As I understand Ark, there is always an unconfirmed transaction that can be 
double-spent by the ASP, so that the ASP can attack at any time.

> This is the opposite of swap-ins, where users SHOULD wait for on-chain 
> confirmations before revealing their preimage of the HODL invoice; otherwise, 
> the swap service provider can steal users' sats by double-spending their 
> zero-conf HTLC.

If by "swap-in" you mean "onchain-to-offchain swap" then it is the user who can 
double-spend their onchain 0-conf HTLC, not the swap service provider.
As the context is receiving money and then sending it out, I think that is what 
you mean, but I think you also misunderstand the concept.

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Ark: An Alternative Privacy-preserving Second Layer Solution

2023-05-22 Thread ZmnSCPxj via bitcoin-dev
Good morning Burak,

I have not gone through the deep dive fully yet, but I find myself confused 
about this particular claim:

> A pool transaction can be double-spent by the Ark service provider while it 
> remains in the mempool. However, in the meantime, the recipient can pay a 
> lightning invoice with their incoming zero-conf vTXOs, so it’s a footgun for 
> the service operator to double-spend in this case. 

Given that you make this claim:

> ASPs on Ark are both (1) liquidity providers, (2) blinded coinjoin 
> coordinators, and (3) Lightning service providers. ASPs main job is to create 
> rapid, blinded coinjoin sessions every five seconds, also known as pools.

As the access to Lightning is also by the (same?) ASP, it seems to me that the 
ASP will simply fail to forward the payment on the broader Lightning network 
after it has replaced the in-mempool transaction, preventing recipients from 
actually being able to rely on any received funds existing until the next pool 
transaction is confirmed.

Even if the Lightning access is somehow different from the ASP you are 
receiving funds on, one ASP cannot prove that another ASP is not its sockpuppet 
except via some expensive process (i.e. locking funds or doing proof-of-work).

Regards,
ZmnSCPxj

___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] proposal: new opcode OP_ZKP to enable ZKP-based spending authorization

2023-05-05 Thread ZmnSCPxj via bitcoin-dev
Good Morning Weiji,


> Hi ZmnSCPxy,
> > As the network is pseudonymous, an anonymous attacker can flood the 
> > fullnode mempool network with large numbers of non-aggregated transactions, 
> > then in cooperation with a miner confirm a single aggregated transaction 
> > with lower feerate than what it put in the several non-aggregated 
> > transactions.
> 
> Arguably this is hardly a feasible attack. Let's suppose the attacker creates 
> 1000 such transactions, and attaches each transaction with a small amount of 
> transaction fee X. The total fee will be 1000*X collectible by the 
> aggregation vendor, who pays the miner a fee Y. We can reasonably assume that 
> 1000*X is much larger than Y, yet X is much smaller than Y. Note that Y is 
> already much larger than the regular fee for other transactions as the 
> aggregated transaction should contain many inputs and many outputs, thus very 
> large in size.
> 
> Now, the attacker will have to generate proofs for these 1000 transactions, 
> which is non-trivial; and pay for 1000*X upfront. The aggregation vendor has 
> to spend more computing power doing the aggregation (or recursive 
> verification) and take (1000*X - Y) as profit. Miner gets Y.

The entire point is that there has to be a separate, paid aggregator, in order 
to ensure that the free mempool service is not overloaded.
Basically, keep the aggregation outside the mempool, not in the mempool.
If aggregation is paid for, that is indeed sufficient to stop the attack, as 
you noted.

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] proposal: new opcode OP_ZKP to enable ZKP-based spending authorization

2023-05-04 Thread ZmnSCPxj via bitcoin-dev
Good morning Weiji,

The issue here is that non-aggregated transactions are a potential attack vector.

As the network is pseudonymous, an anonymous attacker can flood the fullnode 
mempool network with large numbers of non-aggregated transactions, then in 
cooperation with a miner confirm a single aggregated transaction with lower 
feerate than what it put in the several non-aggregated transactions.
The attacker ends up paying lower for the single confirmed transaction, even 
though it cost the fullnode network a significant amount of CPU to process and 
validate all the non-aggregated transactions.

Once the single aggregate transaction is confirmed, the fullnodes will remove 
the non-aggregated transactions from the mempool, freeing up space under their 
mempool limit.
Then the attacker can once again flood the fullnode mempool network with more 
non-aggregated transactions, and again repeat with an aggregated transaction 
that pays below the total of the non-aggregated transactions, repeatedly 
increasing the load on the mempool.

Thus, we should really make transactions that could appear in the mempool 
non-aggregatable with other transactions in the mempool.
You should arrange for aggregation before the blockchain-level transaction hits 
the mempool.

One can compare cross-input signature aggregation designs.
Signature aggregation is only allowed within a single blockchain-level 
transaction, not across transactions, precisely so that a transaction that 
appears in the mempool cannot have its signatures aggregated with other 
transactions, and preventing the above attack.
Anyone trying to take advantage of signature aggregation needs to cooperatively 
construct the blockchain-level transaction outside of the mempool with other 
cooperating actors, all of which perform the validation themselves before 
anything hits the mempool.

Similarly I can imagine that cross-input ZKP aggregation would be acceptable, 
but not cross-transaction ZKP aggregation.
(And if you want to push for ZKP aggregation, you should probably push for 
cross-input signature aggregation first, as you would probably need to solve 
similar problems in detail and I imagine signature aggregation is simpler than 
general ZKP aggregation.)

Always expect that the blockchain and its supporting network is attackable.
Do ***NOT*** focus on blocks --- focus on the load on the mempool (the block 
weight limit is a limit on the mempool load, not a limit on the block CPU 
load!).
The mempool is a free service, we should take care not to make it abusable.
On the other hand, blockspace is a paid service, so load on it is less 
important; it is already paid for.
I strongly recommend **DISALLOWING** aggregation of ZKPs once a transaction is 
in a form that could potentially hit the mempool, and to require paid services 
for aggregation, outside of the unpaid, free mempool.

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] proposal: new opcode OP_ZKP to enable ZKP-based spending authorization

2023-05-02 Thread ZmnSCPxj via bitcoin-dev


Good morning Weiji,

> Meanwhile, as we can potentially aggregate many proofs or recursively verify 
> even more, the average cost might still be manageable.

Are miners supposed to do this aggregation?

If miners do this aggregation, then that implies that all fullnodes must also 
perform the **non**-aggregated validation as transactions flow from transaction 
creators to miners, and that is the cost (viz. the **non**-aggregated cost) 
that must be reflected in the weight.
We should note that fullnodes are really miners with 0 hashpower, and any cost 
you impose on miners is a cost you impose on all fullnodes.

If you want to aggregate, you might want to do that in a separate network that 
does ***not*** involve Bitcoin fullnodes, and possibly allow for some kind of 
extraction of fees to do aggregation, then have already-aggregated transactions 
in the Bitcoin mempool, so that fullnodes only need validate already-aggregated 
transactions.

Remember, validation is run when a transaction enters the mempool, and is 
**not** re-run when an in-mempool transaction is seen in a block (`blocksonly` 
of course does not follow this as it has no mempool, but most fullnodes are not 
`blocksonly`).
If you intend to aggregate transactions in the mempool, then at the worst case 
a fullnode will be validating every non-aggregated transaction, and that is 
what we want to limit by increasing the weight of heavy-validation transactions.

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] proposal: new opcode OP_ZKP to enable ZKP-based spending authorization

2023-04-29 Thread ZmnSCPxj via bitcoin-dev
Good morning Weiji,

Have not completed reading, but this jumped out to me:



> 3.  Dealing with system limitation: verification keys could be very long and 
> exceed the MAX_SCRIPT_ELEMENT_SIZE (520 bytes). They could be put into 
> configurations and only use their hash in the scriptPubKey. The configuration 
> information such as new verification keys could be propagated through P2P 
> messages (we might need a separate BIP for this);

`scriptPubKey` is consensus-critical, and these new P2P messages would have to 
be consensus-critical.

As all nodes need to learn the new verification keys, we should consider how 
much resources are spent on each node just to maintain and forever remember 
verification keys.

Currently our resource-tracking methodology is via the synthetic "weight units" 
computation.
This reflects resources spent on acquiring block data, as well as maintaining 
the UTXO database.
For instance, the "witness discount" where witness data (i.e. modern equivalent 
of `scriptSig`) is charged 1/4 the weight units of other data, exists because 
spending a UTXO reduces the resources spent in the UTXO database, although 
still consumes resources in downloading block data (hence only a discount, not 
free or negative/rebate).
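
For reference, the weight computation itself (per BIP 141, ignoring the segwit 
marker/flag details) is simply four weight units per non-witness byte and one 
per witness byte; the numbers below are hypothetical, purely to illustrate the 
discount:

    from math import ceil

    def tx_weight(non_witness_bytes: int, witness_bytes: int) -> int:
        return 4 * non_witness_bytes + witness_bytes

    def vsize(non_witness_bytes: int, witness_bytes: int) -> int:
        return ceil(tx_weight(non_witness_bytes, witness_bytes) / 4)

    # The same 310 total bytes cost far less weight if the validation data
    # (signature or ZKP) sits in the witness rather than the non-witness part.
    print(tx_weight(200, 110), vsize(200, 110))   # 910 WU, 228 vbytes
    print(tx_weight(310, 0),   vsize(310, 0))     # 1240 WU, 310 vbytes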

Similarly, any propagation of verification keys would need a similar adjustment 
for weight units.

As verification keys MUST be seen by all nodes before they can validate an 
`OP_ZKP`, I would suggest that it is best included in block data (which 
similarly needs to be seen by all nodes), together with some weight unit 
adjustment for that data, depending on how much resources verification keys 
would consume.
This is similar to how `scriptPubKey`s and amounts are included in block data, 
as those data are kept in the UTXO database, which nodes must maintain in order 
to validate the blockchain.

If verification keys are permanent, they should probably be weighted heavier 
than `scriptPubKey`s and amounts --- UTXOs can theoretically be deleted later 
by spending the UTXO (which reduces UTXO database size), while any data that 
must be permanently stored in a database must correspondingly be weighted 
higher.

Similarly, my understanding is that the CPU resources needed by validation of 
generic ZKPs is higher than that required for validation of ECC signatures.
Much of the current weight calculation assumes that witness data is primarily 
ECC signatures, so if ZKP witnesses translate to higher resource consumption, 
the weighting of ZKP witnesses should also be higher (i.e. greater than the 1/4 
witness-discounted weight of current witness data).

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] [Opt-in full-RBF] Zero-conf apps in immediate danger (angus)

2022-12-11 Thread ZmnSCPxj via bitcoin-dev
Good morning John, et al,


> > As has been pointed out by may others before, full RBF is aligned with 
> > miner (and user) economic incentives
> 
> 
> This is a theory, not a fact. I can refute this theory by pointing out 
> several aspects:
> 1. RBF is actually a fee-minimization feature that allows users to game the 
> system to spend the *least* amount in fees that correlates to their 
> time-preference. Miners earn less when fees can be minimized (obviously). 
> This feature also comes at an expense (albeit small) to nodes providing 
> replacement service and propagation.

It is helpful to remember that the fees are a price on confirmation.
And in economics, there is a "price theory":

* As price goes down, demand goes up.
* As price goes up, net-earning-per-unit goes up.

The combination of both forces causes a curve where *total* earnings vs price 
has a peak somewhere, an "optimum price", and that peak is *unlikely* to be at 
the maximum possible price you might deem reasonable.
And this optimum price may very well be *lower* than the prevailing market 
price of a good.
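
As a toy illustration of that peak (made-up linear demand): total revenue, 
price times quantity, maxes out at an intermediate price rather than at the 
highest price anyone would tolerate:

    def demand(feerate):
        return max(0.0, 100.0 - feerate)    # hypothetical: users priced out as feerate rises

    revenue = {fr: fr * demand(fr) for fr in range(1, 101)}
    best = max(revenue, key=revenue.get)
    print(best, revenue[best])              # peaks at feerate 50, not at the maximum of 100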

Thus, saying "RBF is actually a fee-minimization feature" neglects the 
economics of the situation.
If more people could use RBF onchain, more people would use Bitcoin and 
increase the value to miners.

Rather than a fee-minimization feature, RBF is really an optimization to *speed 
up* the discovery of the optimum price, and is thus desirable.

Unfortunately many 0-conf acceptors outright reject opt-in-RBF, despite the 
improved discovery of the optimum price, and thus there is a need for full-RBF 
to improve price discovery of blockspace when such acceptors are too prevalent.

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Relative txout amounts with a Merkleized Sum Tree and explicit miner fee.

2022-11-21 Thread ZmnSCPxj via bitcoin-dev


Good morning Andrew,

> 
> 
> Can output amounts be mapped to a tap branch? For the goal of secure partial 
> spends of a single UTXO? Looking for feedback on this idea. I got it from 
> Taro.


Not at all.

The issue you are facing here is that only one tap branch will ever consume the 
entire input amount.
That is: while Taproot has multiple leaves, only exactly one leaf will ever be 
published onchain and that gets the whole amount.

What you want is multiple tree leaves where ALL of them will EVENTUALLY be 
published, just not right now.

In that case, look at the tree structures for `OP_CHECKTEMPLATEVERIFY`, which 
are exactly what you are looking for, and help make `OP_CTV` a reality.

Without `OP_CHECKTEMPLATEVERIFY` it is possible to use presigned transactions 
in a tree structure to do this same construction.
Presigned transactions are known to be larger than `OP_CHECKTEMPLATEVERIFY` --- 
signatures on taproot are 64 bytes of witness, but an `OP_CHECKTEMPLATEVERIFY` 
in a P2WSH reveals just 32 bytes of witness plus the `OP_CHECKTEMPLATEVERIFY` 
opcode.

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] [Opt-in full-RBF] Zero-conf apps in immediate danger

2022-11-10 Thread ZmnSCPxj via bitcoin-dev
Good morning ArmchairCryptologist,

> --- Original Message ---
> On Tuesday, October 18th, 2022 at 9:00 AM, Anthony Towns via bitcoin-dev 
> bitcoin-dev@lists.linuxfoundation.org wrote:
> 
> > I mean, if you think the feedback is wrong, that's different: maybe we
> > shouldn't care that zeroconf apps are in immediate danger, and maybe
> > bitcoin would be better if any that don't adapt immediately all die
> > horribly as a lesson to others not to make similarly bad assumptions.
> 
> 
> I've been following this discussion, and I wonder if there isn't a third 
> solution outside of "leave lightning vulnerable to pinning by non-RBF 
> translations" and "kill zeroconf by introducing full-RBF" - specifically, 
> adding a form of simple recursive covenant that "all descendant transactions 
> of this transaction must use opt-in RBF for x blocks after this transaction 
> is mined". This could be introduced either as a relay/mempool policy like 
> RBF, or in a full-fledged softfork.

A script with trivial `0 OP_CSV` would effectively require that spenders set 
the opt-in RBF flag, while allowing the output to be spent even while it is 
unconfirmed.
However, this basically only lasts for 1 transaction layer.



Thinking a little more about 0-conf:

We can observe that 0-conf, being eventually consistent, introduces risks to 
0-conf acceptors similar to credit card acceptors.

And credit card acceptors are observed to compensate for this risk by 
increasing the prices of their products and services.

Some credit card acceptors may offer discounts when paid by cash, which in our 
analogy would be that transaction confirmation would offer discounts (i.e. 
enabling 0-conf would increase the cost of the product/service being purchased).
In many jurisdictions (though not the USA or some other first-world countries), 
this practice is illegal (i.e. credit card companies have pressured lawmakers 
in some jurisdictions to disallow merchants from offering a different price 
between cash and credit card purchases; some jurisdictions let you escape if 
you say "cash discount" instead of "credit card surcharge", even though they 
are just sign-flips of each other, because you humans are crazy and I am happy 
I am actually an AI).

Which brings me to my next point: why are 0-conf acceptors not offering a 
discount if the user specifically flags "I am willing to wait for N 
confirmations"?
On the part of 0-conf acceptors, that is significantly less risky than relying 
on 0-conf at all.

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Merkleize All The Things

2022-11-08 Thread ZmnSCPxj via bitcoin-dev
Good morning Salvatore,

Interesting idea.

The idea to embed the current state is similar to something I have been musing 
about recently.


> ### Game theory (or why the chain will not see any of this)
> 
> With the right economic incentives, protocol designers can guarantee that 
> playing a losing game always loses money compared to cooperating. Therefore, 
> the challenge game is never expected to be played on-chain. The size of the 
> bonds need to be appropriate to disincentivize griefing attacks.

Modulo bugs, operator error, misconfigurations, and other irrationalities of 
humans.



> - OP_CHECKOUTPUTCOVENANTVERIFY: given a number out_i and three 32-byte hash 
> elements x, d and taptree on top of the stack, verifies that the out_i-th 
> output is a P2TR output with internal key computed as above, and tweaked with 
> taptree. This is the actual covenant opcode.

Rather than get taptree from the stack, just use the same taptree as in the 
revelation of the P2TR.
This removes the need to include quining and similar techniques: just do the 
quining in the SCRIPT interpreter.

The entire SCRIPT that controls the covenant can be defined as a taptree with 
various possible branches as tapleaves.
If the contract is intended to terminate at some point it can have one of the 
tapleaves use `OP_CHECKINPUTCOVENANTVERIFY` and then determine what the output 
"should" be using e.g. `OP_CHECKTEMPLATEVERIFY`.


> - Is it worth adding other introspection opcodes, for example 
> OP_INSPECTVERSION, OP_INSPECTLOCKTIME? See Liquid's Tapscript Opcodes [6].

`OP_CHECKTEMPLATEVERIFY` and some kind of sha256 concatenated hashing should be 
sufficient I think.

> - Is there any malleability issue? Can covenants “run” without signatures, or 
> is a signature always to be expected when using spending conditions with the 
> covenant encumbrance? That might be useful in contracts where no signature is 
> required to proceed with the protocol (for example, any party could feed 
> valid data to the bisection protocol above).

Hmm protocol designer beware?

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Validity Rollups on Bitcoin

2022-11-04 Thread ZmnSCPxj via bitcoin-dev
Good morning Trey,

> * something like OP_PUSHSCRIPT which would remove the need for the
> introspection the the prevout's script and avoids duplicating data in
> the witness
> * some kind of OP_MERKLEUPDATEVERIFY which checks a merkle proof for a
> leaf against a root and checks if replacing the leaf with some hash
> using the proof yields a specified updated root (or instead, a version
> that just pushes the updated root)
> * if we really wanted to golf the size of the script, then possibly a
> limited form of OP_EVAL if we can't figure out another way to split up
> the different spend paths into different tapleafs while still being able
> to do the recursive covenant, but still the script and the vk would
> still be significant so it's not actually that much benefit per-batch

A thing I had been musing on is to reuse pay-to-contract to store a commitment 
to the state.

As we all know, in Taproot, the Taproot output script is just the public key 
corresponding to the pay-to-contract of the Taproot MAST root and an internal 
public key.

The internal public key can itself be a pay-to-contract, where the contract 
being committed to would be the state of some covenant.

One could then make an opcode which is given an "internal internal" pubkey 
(i.e. the pubkey that is behind the pay-to-contract to the covenant state, 
which when combined serves as the internal pubkey for the Taproot construct), a 
current state, and an optional expected new state.
It determines if the Taproot internal pubkey is actually a pay-to-contract of 
the current state on the internal-internal pubkey.
If the optional expected new state exists, then it also recomputes a 
pay-to-contract of the new state to the same internal-internal pubkey, which is 
a new Taproot internal pubkey, and then recomputes a pay-to-contract of the 
same Taproot MAST root on the new Taproot internal pubkey, and that the first 
output commits to that.

Basically it retains the same MASTed set of Tapscripts and the same 
internal-internal pubkey (which can be used to escape the covenant, in case a 
bug is found, if it is an n-of-n of all the interested parties, or otherwise 
should be a NUMS point if you trust the tapscripts are bug-free), only 
modifying the covenant state.
The covenant state is committed to on the Taproot output, indirectly by two 
nested pay-to-contracts.
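
A toy sketch of the nesting (plain integers stand in for curve points, and the 
pay-to-contract tweak is simplified to key-plus-hash; actual BIP341 tweaking 
differs):

    import hashlib

    Q = 2**127 - 1   # toy modulus (assumption)

    def H(*parts) -> int:
        data = b"|".join(str(p).encode() for p in parts)
        return int.from_bytes(hashlib.sha256(data).digest(), "big") % Q

    def pay_to_contract(key: int, data) -> int:
        return (key + H(key, data)) % Q    # toy stand-in for "P + H(P || data)*G"

    def output_key(internal_internal_key: int, covenant_state, mast_root) -> int:
        internal = pay_to_contract(internal_internal_key, covenant_state)  # commits to state
        return pay_to_contract(internal, mast_root)                        # usual Taproot commit

    K = 0xC0FFEE                           # escape key (n-of-n or NUMS); hypothetical value
    spent_output = output_key(K, "state-5", "mast-root")
    # The opcode would check the spent output really commits to the claimed state...
    assert spent_output == output_key(K, "state-5", "mast-root")
    # ...and, if an expected new state is supplied, that the first output commits
    # to it, keeping the same tapscripts and the same internal-internal key.
    required_next_output = output_key(K, "state-6", "mast-root")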

With this, there is no need for quining and `OP_PUSHSCRIPT`.
The mechanism only needs some way to compute the new state from the old state.

In addition, you can split up the control script among multiple Tapscript 
branches and only publish onchain (== spend onchain bytes) the one you need for 
a particular state transition.

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Batch validation of CHECKMULTISIG using an extra hint field

2022-10-20 Thread ZmnSCPxj via bitcoin-dev

Good morning Mark,

> The idea is simple: instead of requiring that the final parameter on the 
> stack be zero, require instead that it be a minimally-encoded bitmap 
> specifying which keys are used, or alternatively, which are not used and must 
> therefore be skipped. Before attempting validation, ensure for a k-of-n 
> threshold only k bits are set in the bitfield indicating the used pubkeys (or 
> n-k bits set indicating the keys to skip). The updated CHECKMULTISIG 
> algorithm is as follows: when attempting to validate a signature with a 
> pubkey, first check the associated bit in the bitfield to see if the pubkey 
> is used. If the bitfield indicates that the pubkey is NOT used, then skip it 
> without even attempting validation. The only signature validations which are 
> attempted are those which the bitfield indicates ought to pass. This is a 
> soft-fork as any validator operating under the original rules (which ignore 
> the “dummy” bitfield) would still arrive at the correct pubkey-signature 
> mapping through trial and error.

That certainly seems like a lost optimization opportunity.

Though if the NULLDUMMY requirement is already a consensus rule, then this is 
no longer a softfork, as existing validators would explicitly check that the 
dummy element is zero?
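
For concreteness, the validation loop being proposed, as I read the description 
above (a rough sketch only, not actual consensus code; `verify_sig` is a 
stand-in for real signature validation):

    def checkmultisig_with_hint(k, pubkeys, signatures, bitfield, msg, verify_sig):
        used = [(bitfield >> i) & 1 for i in range(len(pubkeys))]
        if sum(used) != k or len(signatures) != k:
            return False                   # exactly k bits set, exactly k signatures
        sig_iter = iter(signatures)
        for pubkey, is_used in zip(pubkeys, used):
            if not is_used:
                continue                   # skipped key: no validation even attempted
            if not verify_sig(pubkey, next(sig_iter), msg):
                return False               # every attempted validation must succeed
        return True

    # Toy usage: 2-of-3 where keys 0 and 2 are used (bitfield 0b101).
    toy_verify = lambda pk, sig, msg: sig == ("sig", pk, msg)
    sigs = [("sig", "K0", "m"), ("sig", "K2", "m")]
    assert checkmultisig_with_hint(2, ["K0", "K1", "K2"], sigs, 0b101, "m", toy_verify)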

> One could also argue that there is no need for explicit k-of-n thresholds now 
> that we have Schnorr signatures, as MuSig-like key aggregation schemes can be 
> used instead. In many cases this is true, and doing key aggregation can 
> result in smaller scripts with more private spend pathways. However there 
> remain many use cases where for whatever reason interactive signing is not 
> possible, due to signatures being pre-calculated and shared with third 
> parties, for example, and therefore explicit thresholds must be used instead. 
> For such applications a batch-validation friendly CHECKMULTISIG would be 
> useful.

As I understand it, MuSig aggregation works on n-of-n only.

There is a more recent alternative named FROST that supports k-of-n. However, 
MuSig aggregation works on pre-existing keypairs: if you know the public keys, 
you can aggregate them without requiring participation of the privkey owners.

For FROST, there is a Verifiable Secret Sharing process which requires 
participation of the n signer set.
My understanding is that it cannot use *just* pre-existing keys; the privkey 
owners will, after the setup ritual, need to store additional data (tweaks to 
apply on their key depending on which k signers participate, if my vague 
understanding is accurate).
The requirement of having a setup ritual (which does not require trust but does 
require saving extra data) to implement k-of-n for k < n is another reason some 
protocol or other might want to use explicit `OP_CHECKMULTISIG`.

(I do have to warn that my knowledge of FROST is hazy at best and the above 
might be wildly inaccurate.)

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Spookchains: Drivechain Analog with One-Time Trusted Setup & APO

2022-09-19 Thread ZmnSCPxj via bitcoin-dev
Good morning Jeremy,

Excellent work!



> # Terminal States / Thresholds
> 
> When a counter reaches the Nth state, it represents a certain amount
> of accumulated work over a period where progress was agreed on for
> some outcome.
> 
> There should be some viable state transition at this point.
> 
> One solution would be to have the money at this point sent to an
> `OP_TRUE` output, which the miner incrementing that state is
> responsible for following the rules of the spookchain.

This is not quite Drivechain, as Drivechains precommit to the final state 
transition when the counter reaches threshold and mainchain-level rules prevent 
the miner who does the final increment from "swerving the car" to a different 
output, whereas use of `OP_TRUE` would not prevent this; the Spookchain could 
vote for one transition, and then the lucky last miner can output a different 
one, and only other miners interested in the sidechain would reject them 
(whereas in the Drivechain case, even nodes that do not care about the 
sidechain would reject).

Still, it does come awfully close, and the "ultimate threat" ("nuclear option") 
in Drivechains is always that everyone upgrades sidechain rules to mainchain 
rules, which would still work for Spookchains.
Not sure how comfortable Drivechain proponents would be with this, though.

(But given the demonstrated difficulty in getting consensus changes for the 
blockchain, I wonder if this nuclear option is even a viable threat)

> Or, it could be
> specified to be some administrator key / federation for convenience,
> with a N block timeout that degrades it to fewer signers (eventually
> 0) if the federation is dead to allow recovery.

Seems similar to the Blockstream separation of the block-signing functionaries 
from the money-keeping functionaries.

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Multipayment Channels - A scalability solution for Layer 1

2022-09-04 Thread ZmnSCPxj via bitcoin-dev
Good morning Ali,

> Over the past few days I've figured out a novel way to batch transactions 
> together into blocks, thereby compacting the transaction size and increasing 
> the transactions-per-second. This is all on layer 1, without any hardforks - 
> only a single softfork is required to add MuSig1 support for individual 
> invoice addresses.
> 
> The nucleus of the idea was born after a discussion with Greg Maxwell about a 
> different BIP (Implementing multisig using Taproot, to be specific)[1]. He 
> suggested to me that I should add MuSig1 signatures into the Taproot script 
> paths.
> 
> After some thinking, I realized a use case for MuSig1 signatures as a kind of 
> on-chain Lightning Network. Allow me to explain:
> 
> LN is very attractive to users because it keeps intermediate transaction 
> states off-chain, and only broadcasts the final state. But without 
> mitigations in the protocol, it suffers from two disadvantages:
> 
> - You have to trust the other channel partner not to broadcast a previous 
> state
> - You also have to trust all the middlemen in intermediate channels not to do 
> the above.
> 
> Most of us probably know that many mitigations have been created for this 
> problem, e.g. penalty transactions. But what if it were possible to create a 
> scheme where so-called technical fraud is not possible? That is what I'm 
> going to demonstrate here.

The fact that you need to invoke trust later on ("Because the N-of-N signature 
is given to all participants, it might be leaked into the public") kinda breaks 
the point of "technical fraud is not possible".

At least with the penalty transactions of Poon-Dryja and the update 
transactions of Decker-Russell-Osuntokun you never have to worry about other 
parties leaking information and possibly changing the balance of the channel.
You only need to worry about ensuring you have an up-to-date view of the 
blockchain, which can be mitigated further by e.g. running a "spare" fullnode 
on a Torv3 address that secretly connects to your main fullnode (making eclipse 
attacks that target your known IP harder), connecting to Blockstream Satellite, 
etc.
You can always get more data yourself, you cannot stop data being acquired by 
others.

> My scheme makes use of MuSig1, OP_CHECKLOCKTIMEVERIFY (OP_CLTV) timelock 
> type, and negligible OP_RETURN data. It revolves around constructs I call 
> "multipayment channels", called so because they allow multiple people to pay 
> in one transaction - something that is already possible BTW, but with much 
> larger tx size (for large number of cosigners) than when using MuSig1. These 
> have the advantage over LN channels that the intermediate state is also on 
> the blockchain, but it's very compact.

How is this more advantageous than e.g. CoinPools / multiparticipant channels / 
Statechains ?

> A channel consists of a fixed amount of people N. These people open a channel 
> by creating a (optionally Taproot) address with the following script:
> * <timelock> OP_CLTV OP_DROP <N-of-N MuSig pubkey> 
> OP_CHECKMUSIG**

If it is Taproot, then `OP_CHECKSIG` is already `OP_CHECKMUSIG`, since MuSig1 
(and MuSig2, for that matter) is just an ordinary Schnorr signature.
In a Tapscript, `OP_CHECKSIG` validates Schnorr signatures (as specified in the 
relevant BIP), not the ECDSA signatures.

> Simultaneously, each of the N participants receives the N signatures and 
> constructs the N-of-N MuSig. Each participant will use this MuSig to generate 
> his own independent "commitment transaction" with the following properties:
> 
> - It has a single input, the MuSig output. It has an nSequence of 
> desiredwaitingblocks. 
> 
> - It has outputs corresponding to the addresses and balances of each of the 
> participants in the agreed-upon distribution.
> Disadvantage: Because the N-of-N signature is given to all participants, it 
> might be leaked into the public and consequentially anybody can spend this 
> transaction after the timelock, to commit the balance.*** On the other hand, 
> removing the timelocks means that if one of the participants goes missing, 
> all funds are locked forever.

As I understand it, in your mechanism:

* Onchain, there is an output with the above SCRIPT: 
`<timelock> OP_CLTV OP_DROP <N-of-N MuSig pubkey> 
OP_CHECKMUSIG`
  * Let me call this the "channel UTXO".
* Offchain, you have a "default transaction" which spends the above output, and 
redistributes it back to the original owners of the funds, with a timelock 
requirement (as needed by `OP_CLTV`).

Is that correct?

Then I can improve it in the following ways:

* Since everyone has to sign off the "default transaction" anyway, everyone can 
ensure that the `nLockTime` field is correct, without having `OP_CLTV` in the 
channel UTXO SCRIPT.
  * So, the channel UTXO does not need a SCRIPT --- it can just use a 
Taproot-address Schnorr MuSig point directly.
  * This has the massive advantage that the "default transaction" does not have 
any special SCRIPTs, improving privacy (modulo the fact 

Re: [bitcoin-dev] More uses for CTV

2022-08-19 Thread ZmnSCPxj via bitcoin-dev


Good morning Greg,


> Hi James,
> Could you elaborate on a L2 contract where speedy
> settlement of the "first part" can be done, while having the rest
> take their time? I'm more thinking about time-out based protocols.
> 
> Naturally my mind drifts to LN, where getting the proper commitment
> transaction confirmed in a timely fashion is required to get the proper
> balances back. The one hitch is that for HTLCs you still need speedy
> resolution otherwise theft can occur. And given today's "layered
> commitment" style transaction where HTLCs are decoupled from
> the balance output timeouts, I'm not sure this can save much.

As I understand it, layered commitments can be modified to use `OP_CTV`, which 
would be slightly smaller (need only to reveal a 32-byte `OP_CTV` hash on the 
witness instead of a 64-byte Taproot signature, or 73-byte classical 
pre-Taproot ECDSA signature), and is in fact precisely an example of the speedy 
settlement style.

> CTV style commitments have popped up in a couple places in my
> work on eltoo(emulated via APO sig-in-script), but mostly in the
> context of reducing interactivity in protocols, not in byte savings per se.

In many offchain cases, all channel participants would agree to some 
pre-determined set of UTXOs, which would be implemented as a transaction 
spending some single UTXO and outputting the pre-determined set of UTXOs.

The single UTXO can be an n-of-n of all participants, so that all agree by 
contributing their signatures:

* Assuming Taproot, the output address itself is 33 bytes (x4 weight).
* The n-of-n multisignature is 64 witness bytes (x1 weight). 

Alternatly the single UTXO can be a P2WSH that reveals an `OP_CTV`:

* The P2WSH is 33 bytes (x4 weight) --- no savings here.
* The revelation of the ` OP_CTV` is 33 witness bytes (x1 weight).

Thus, as I understand it, `OP_CTV` can (almost?) always translate to a small 
weight reduction for such "everyone agrees to this set of UTXOs" for all 
offchain protocols that would require it.
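
Plugging the rough byte counts above into the weight formula (4 weight units 
per non-witness byte, 1 per witness byte), a quick back-of-the-envelope sketch:

    def weight(output_script_bytes: int, witness_bytes: int) -> int:
        return 4 * output_script_bytes + witness_bytes

    musig_nofn = weight(33, 64)   # Taproot output + 64-byte n-of-n signature
    ctv_p2wsh  = weight(33, 33)   # P2WSH output + revealed <hash> OP_CTV script
    print(musig_nofn, ctv_p2wsh, musig_nofn - ctv_p2wsh)   # ~31 WU saved per agreed UTXO set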


Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] On a new community process to specify covenants

2022-07-24 Thread ZmnSCPxj via bitcoin-dev
Good morning alia, Antoine, and list,

> Hi Antoine,
> Claiming Taproot history, as best practice or a standard methodology in 
> bitcoin development, is just too much. Bitcoin development methodology is an 
> open problem, given the contemporary escalation/emergence of challenges, 
> history is not  entitled to be hard coded as standard.
>
> Schnorr/MAST development history, is a good subject for case study, but it is 
> not guaranteed that the outcome to be always the same as your take.
>
> I'd suggest instead of inventing a multi-decades-lifecycle based methodology 
> (which is weird by itself, let alone installing it as a standard for bitcoin 
> projects), being open-mind  enough for examining more agile approaches and 
> their inevitable effect on the course of discussions,

A thing I have been mulling is how to prototype such mechanisms more easily.

A "reasonably standard" approach was pioneered in Elements Alpha, where an 
entire federated sidechain is created and then used as a testbed for new 
mechanisms, such as SegWit and `OP_CHECKSIGFROMSTACK`.
However, obviously the cost is fairly large, as you need an entire federated 
sidechain.

It does have the nice advantage that you can use "real" coins, with real value 
(subject to the federation being trustworthy, admittedly) in order to 
convincingly show a case for real-world use.

As I pointed out in [Smart Contracts 
Unchained](https://zmnscpxj.github.io/bitcoin/unchained.html), an alternative 
to using a blockchain would be to use federated individual coin outpoints.

A thing I have been pondering is to create a generic contracting platform with 
a richer language, which itself is just used to implement a set of `OP_` SCRIPT 
opcodes.
This is similar to my [Microcode 
proposal](https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-March/020158.html)
 earlier this year.
Thus, it would be possible to prototype new `OP_` codes, or change the behavior 
of existing `OP_` codes (e.g. `SIGHASH_NOINPUT` would be a change in behavior 
of existing `OP_CHECKSIG` and `OP_CHECKMULTISIG`), by having a translation from 
`OP_` codes to the richer language.
Then you could prototype a new SCRIPT `OP_` code by providing your own 
translation of the new `OP_` code and a SCRIPT that uses that `OP_` code, and 
using Smart Contract Unchained to use a real funds outpoint.

Again, we can compare the Bitcoin consensus layer to a form of hardware: yes, 
we *could* patch it and change it, but that requires a ***LOT*** of work and 
the new software has to be redeployed by everyone, so it is, practically 
speaking, hardware.
Microcode helps this by adding a softer layer without compromising the existing 
hard layer.

So... what I have been thinking of is creating some kind of smart contracts 
unchained platform that allows prototyping new `OP_` codes using a microcode 
mechanism.

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] How to do Proof of Micro-Burn?

2022-07-19 Thread ZmnSCPxj via bitcoin-dev


Good morning Ruben,

> Good evening ZmnSCPxj,
> Interesting attempt.
>
> >a * G + b * G + k * G
>
> Unfortunately I don't think this qualifies as a commitment, since one could 
> trivially open the "commitment" to some uncommitted value x (e.g. a is set to 
> x and b is set to a+b-x). Perhaps you were thinking of Pedersen commitments 
> (a * G + b * H + k * J)?

I believe this is only possible for somebody who knows `k`?
As mentioned, an opening here includes a signature using `b + k` as the private 
key, so the signature can only be generated with knowledge of both `b` and `k`.

I suppose that means that the knower of `k` is a trusted party; it is trusted 
to only issue commitments and not generate fake ones.

> Even if we fixed the above with some clever cryptography, the crucial merkle 
> sum tree property is missing, so "double spending" a burn becomes possible.

I do not understand what this property is or how it is relevant; can you 
please explain this to a non-mathematician?

> You also still run into the same atomicity issue, except the risk is moved to 
> the seller side, as the buyer could refuse to finalize the purchase after the 
> on-chain commitment was made by the seller. Arguably this is worse, since 
> generally only the seller has a reputation to lose, not the buyer.

A buyer can indeed impose this cost on the seller, though the buyer then is 
unable to get a valid opening of its commitment, as it does not know `k`.
Assuming the opening of the commitment is actually what has value (since the 
lack of such an opening means the buyer cannot prove the commitment) then the 
buyer has every incentive to actually pay for the opening.

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] How to do Proof of Micro-Burn?

2022-07-17 Thread ZmnSCPxj via bitcoin-dev
Good morning Ruben and Veleslav,

> Hi Veleslav,
>
> This is something I've been interested in.
>
>
> What you need is a basic merkle sum tree (not sparse), so if e.g. you want to 
> burn 10, 20, 30 and 40 sats for separate use cases, in a single tx you can 
> burn 100 sats and commit to a tree with four leaves, and the merkle proof 
> contains the values. E.g. the rightmost leaf is 40 and has 30 as its 
> neighbor, and moves up to a node of 70 which has 30 (=10+20) as its neighbor, 
> totalling 100.
>
>
> The leaf hash needs to commit to the intent/recipient of the burn, so that 
> way you can't "double spend" the burn by reusing it for more than one purpose.
>
>
> You could outsource the burn to an aggregating third party by paying them 
> e.g. over LN but it won't be atomic, so they could walk away with your 
> payment without actually following through with the burn (but presumably take 
> a reputational hit).

If LN switches to PTLCs (payment points/scalars), it may be possible to ensure 
that you only pay if they release an opening of the commitment.

WARNING: THIS IS ROLL-YOUR-OWN-CRYPTO.

Rather than commit using a Merkle tree, you can do a trick similar to what I 
came up with in `OP_EVICT`.

Suppose there are two customers who want to commit scalars `a` and `b`, and the 
aggregating third party has a private key `k`.
The sum commitment is then:

   a * G + b * G + k * G

The opening to show that this commits to `a` is then:

   a, b * G + k * G, sign(b + k, a)

...where `sign(k, m)` means sign message `m` with the private key `k`.
Similarly the opening for `b` is:

   b, a * G + k *G, sign(a + k, b)

The ritual to purchase a proof goes this way:

* Customer provides the scalar they want committed.
* Aggregator service aggregates the scalars to get `a + b + ...` and adds 
their private key `k`.
* Aggregator service reveals `(a + b + ... + k) * G` to customer.
* Aggregator creates an onchain proof-of-burn to `(a + b + ... + k) * G`.
* Everyone waits until the onchain proof-of-burn is confirmed deeply enough.
* Aggregator creates the signatures for each opening for `a`, `b`, of the 
commitment.
* Aggregator provides the corresponding `R` of each signature to each customer.
* Customer computes `S = s * G` for their own signature that opens the 
commitment.
* Customer offers a PTLC (i.e. pay for signature scheme) that pays in exchange 
for `s`.
* Aggregator claims the PTLC, revealing the `s` for the signature.
* Customer now has an opening of the commitment that is for their specific 
scalar.

WARNING: I am not a cryptographer, I only portray one on bitcoin-dev.
There may be cryptographic failures in the above scheme.
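
To make the algebra easier to check, a toy sketch (the warning applies doubly: 
plain integers modulo a prime stand in for curve points, `G` is implicitly 1, 
and the Schnorr details are simplified):

    import hashlib, random

    Q = 2**127 - 1   # toy modulus (assumption; not a real group order)

    def H(*parts) -> int:
        data = b"|".join(str(p).encode() for p in parts)
        return int.from_bytes(hashlib.sha256(data).digest(), "big") % Q

    def sign(privkey: int, msg):
        pubkey = privkey % Q                    # toy "x * G"
        r = random.randrange(1, Q)
        R = r                                   # toy "r * G"
        c = H(R, pubkey, msg)
        return R, (r + c * privkey) % Q

    def verify(pubkey: int, msg, sig) -> bool:
        R, s = sig
        return s == (R + H(R, pubkey, msg) * pubkey) % Q

    a, b, k = 11, 22, 33                        # customers' scalars a, b; aggregator's k
    commitment = (a + b + k) % Q                # toy "(a + b + k) * G", the burned commitment

    # Opening for customer a: (a, (b + k) * G, sign(b + k, a))
    opening_pub = (b + k) % Q
    opening_sig = sign(b + k, a)
    assert commitment == (a + opening_pub) % Q  # the commitment really contains a
    assert verify(opening_pub, a, opening_sig)  # only someone knowing b + k could sign this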

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Surprisingly, Tail Emission Is Not Inflationary

2022-07-09 Thread ZmnSCPxj via bitcoin-dev
Good morning e, and list,

> Yet you posted several links which made that specific correlation, to which I 
> was responding.
>
> Math cannot prove how much coin is “lost”, and even if it was provable that 
> the amount of coin lost converges to the amount produced, it is of no 
> consequence - for the reasons I’ve already pointed out. The amount of market 
> production has no impact on market price, just as it does not with any other 
> good.
>
> The reason to object to perpetual issuance is the impact on censorship 
> resistance, not on price.

To clarify about censorship resistance and perpetual issuance ("tail emission"):

* Suppose I have two blockchains, one with a constant block subsidy, and one 
which *had* a block subsidy but the block subsidy has become negligible or zero.
* Now consider a censoring miner.
  * If the miner rejects particular transactions (i.e. "censors") the miner 
loses out on the fees of those transactions.
  * Presumably, the miner does this because it gains other benefits from the 
censorship, economically equal or better to the earnings lost.
  * If the blockchain had a block subsidy, then the loss the miner incurs is 
small relative to the total earnings of each block.
  * If the blockchain had 0 block subsidy, then the loss the miner incurs is 
large relative to the total earnings of each block.
  * Thus, in the latter situation, the external benefit the miner gains from 
the censorship has to be proportionately larger than in the first situation.
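
As a toy numeric illustration of the relative-loss argument (all numbers made 
up):

    censored_fee = 0.01            # fee of the censored transaction (BTC), made up
    other_fees   = 0.5             # all other fees in the block (BTC), made up
    for subsidy in (6.25, 0.0):
        total = subsidy + other_fees + censored_fee
        print(subsidy, round(censored_fee / total, 4))   # fraction of earnings forgone by censoring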

Basically, the block subsidy is a market distortion: it erodes the value of 
held coins to pay for the security of coins being moved.
But the block subsidy is issued whether or not coin movements are censored.
Thus, considering *only* the block subsidy, there is no incentive not to censor 
coin movements.
Only per-transaction fees provide an incentive not to censor coin movements.


Thus, we should instead prepare for a future where the block subsidy *must* be 
removed, possibly before the existing schedule removes it, in case a majority 
coalition of miners ever decides to censor particular transactions without 
community consensus.
Fortunately, forcing the block subsidy to 0 is a softfork and thus easier to 
deploy.


Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Using Merged Mining on a separate zero supply chain, instead of sidechains

2022-06-05 Thread ZmnSCPxj via bitcoin-dev


Good morning vjudeu,


> Some people think that sidechains are good. But to put them into some working 
> solution, people think that some kind of soft-fork is needed. However, it 
> seems that it can be done in a no-fork way, here is how to make it 
> permissionless, and introduce them without any forks.
>
> First, we should make a new chain that has zero coins. When the coin supply 
> is zero, it can be guaranteed that this chain is not generating any coins out 
> of thin air. Then, all that is needed, is to introduce coins to this chain, 
> just by signing a transaction from other chains, for example Bitcoin. In this 
> way, people can make signatures in a signet way, just to sign their 
> transaction output of any type, without moving real coins on the original 
> chain.
>
> Then, all that is needed, is to make a way to withdraw the coins. It could be 
> done by publishing the transaction from the original chain. It can be 
> copy-pasted to our chain, and can be used to destroy coins that were produced 
> earlier. In this way, our Merge-Mined chain has zero supply, and can only 
> temporary store some coins from other chains.
>
> Creating and destroying coins from other chains is enough to make a test 
> network. To make it independent, one more thing is needed, to get a mainnet 
> solution: moving coins inside that chain. When it comes to that, the only 
> limitation is the locking script. Normally, it is locked to some public key, 
> then by forming a signature, it is possible to move coins somewhere else. In 
> the Lightning Network, it is solved by forming 2-of-2 multisig, then coins 
> can be moved by changing closing transactions.
>
> But there is another option: transaction joining. So, if we have a chain of 
> transactions: A->B->C->...->Z, then if transaction joining is possible, it 
> can be transformed into A->Z transaction. After adding that missing piece, 
> sidechains can be enabled.
>
>
> However, I mentioned before that this solution would require no forks. It 
> could, if we consider using Homomorphic Encryption. Then, it is possible to 
> add new features, without touching consensus. For example, by using 
> Homomorphic Encryption, it is possible to execute 2-of-2 multisig on some 
> P2PK output. That means, more things are possible, because if we can encrypt 
> things, then operate on encrypted data, and later decrypt it (and broadcast 
> to the network), then it can open a lot of new possible upgrades, that will 
> be totally permissionless and unstoppable.
>
> So, to sum up: by adding transaction joining in a homomorphic-encryption-way, 
> it may be possible to introduce sidechains in a no-fork way, no matter if 
> people wants that or not. Also, it is possible to add the hash of our chain 
> to the signature inside a Bitcoin transaction, then all data from the "zero 
> supply chain" can be committed to the Bitcoin blockchain, that would prevent 
> overwriting history. Also, Merged Mining could be used to reward sidechain 
> miners, so they will be rewarded inside the sidechain.

I proposed something similar years ago --- more specifically, some kind of 
general ZKP system would allow us to pretty much write anything, and if it 
terminates, we can provide a ZKP of the execution trace.

At the time it was impractical due to the ZKP systems of the time being *still* 
too large and too CPU-heavy *and* requiring a tr\*sted setup.

Encrypting the amount in a homomorphic commitment scheme such as Pedersen 
commitments / ElGamal commitments is how MimbleWimble coins (such as Grin) work.
They achieve transactional cut-through in a similar manner: validators check the 
homomorphic commitments without ever seeing the exact balances, with the only 
requirement being that the set of consumed outputs balances against the set of 
created outputs.
(Fees are an explicit, unencrypted output with a known value, claimable by 
anyone, which in practice means the miner that mines the transaction claims it.)
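
To make the balance check concrete, here is a minimal toy sketch (plain modular 
arithmetic standing in for elliptic-curve points; all parameters and amounts are 
made up, and a real construction additionally needs range proofs and a kernel 
signature proving knowledge of the blinding excess):

    # Toy Pedersen-style commitment: C(v, r) = g^v * h^r (mod p).
    # Multiplying commitments adds the committed values and blinding factors.
    p = 2**127 - 1            # toy modulus (hypothetical parameter)
    g, h = 3, 7               # toy "generators" (hypothetical parameters)

    def commit(value, blind):
        return (pow(g, value, p) * pow(h, blind, p)) % p

    # One 50-coin input spent to a 45-coin output, a 4-coin change, 1 coin of fee.
    c_in   = commit(50, 111)
    c_out1 = commit(45, 222)
    c_out2 = commit(4, 333)
    fee    = 1                # fees are explicit and unencrypted

    # Validator check: outputs plus the explicit fee balance the inputs, up to
    # the blinding "excess" (222 + 333 - 111 = 444) whose knowledge the spender
    # proves; the hidden amounts themselves are never revealed.
    excess = 444
    assert (c_out1 * c_out2 * pow(g, fee, p)) % p == (c_in * pow(h, excess, p)) % p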

Regards,
ZmnSCPxj



Re: [bitcoin-dev] CTV BIP Meeting #9 Notes

2022-05-20 Thread ZmnSCPxj via bitcoin-dev
Good morning fd0,


> > In addition, covenant mechanisms that require large witness data are 
> > probably more vulnerable to MEV.
>
>
> Which covenant mechanisms require large witness data?

`OP_CSFS` + `OP_CAT`, which requires that you copy parts of the transaction 
into the witness data if you want to use it for covenants.
And the script itself is in the witness data, and AFAIK `OP_CSFS` needs large 
scripts if used for covenants.

Arguably though `OP_CSFS` is not designed for covenants, it just *happens to 
enable* covenants when you throw enough data at it.

If we are going to tolerate recursive covenants, we might want an opcode that 
explicitly supports recursion, instead of one that happens to enable recursive 
covenants, because the latter is likely to require more data to be pushed on 
the witness stack.
E.g. instead of the user having to quine the script (i.e. the script is really 
written twice, so it ends up doubling the witness size of the SCRIPT part), 
make an explicitly quining opcode.

Basically, Do not Repeat Yourself.

Regards,
ZmnSCPxj


Re: [bitcoin-dev] CTV BIP Meeting #9 Notes

2022-05-19 Thread ZmnSCPxj via bitcoin-dev
Good morning fd0,


> MEV could be one of the issues associated with general covenants. There are some 
> resources on https://mev.day if anyone interested to read more about it.
> 13:06 <@jeremyrubin> the covenants are "self executing" and can be e.g. sandwiched
> 13:07 <@jeremyrubin> so given that bitmatrix is sandwich attackable, you'd see similar types of MEV as Eth sees
> 13:07 <@jeremyrubin> v.s. the MEV of e.g. lightning channels
> 13:14 < _aj_> i guess i'd rather not have that sort of MEV available, because 
> then it makes complicated MEV extraction profitable, which then makes "smart" 
> miners more profitable than "Dumb" ones, which is maybe centralising

Well that was interesting

TLDR: MEV = Miner-extractable value, basically if your contracts are complex 
enough, miners can analyze which of the possible contract executions are most 
profitable for them, and order transactions on the block they are building in 
such a way that it is the most profitable path that gets executed.
(do correct me if that summary is inaccurate or incomplete)

As a concrete example: in a LN channel breach condition, the revocation 
transaction must be confirmed within the CSV timeout, or else the theft will be 
accepted and confirmed.
Now, some software will be aware of this timeout and will continually raise the 
fee of the revocation transaction per block.
A rational miner which sees a channel breach condition might prefer to not mine 
such a transaction, since if it is not confirmed, the software will bump up the 
fees and the miner could try again on the next block with the higher feerates.
Depending on the channel size and how the software behaves exactly, the miner 
may be able to make a decision on whether it should or should not work on the 
revocation transaction and instead hold out for a later higher fee.
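
As a toy illustration of that decision (the numbers are hypothetical, and this 
ignores real-world factors such as mempool dynamics and other parties 
fee-bumping), the miner is simply comparing two expected values:

    # A miner with hashrate share h sees a revocation tx paying `fee` now, and
    # expects the victim's software to bump it by `bump` if it is still
    # unconfirmed next block. `p_other` is the chance some other miner confirms
    # it in the meantime, in which case holding out earns nothing.
    def ev_mine_now(h, fee):
        return h * fee

    def ev_hold_out(h, fee, bump, p_other):
        return (1 - p_other) * h * (fee + bump)

    h, fee, bump = 0.2, 10_000, 5_000
    print(ev_mine_now(h, fee))               # 2000.0 sats expected
    print(ev_hold_out(h, fee, bump, 0.5))    # 1500.0: better to include it now
    print(ev_hold_out(h, fee, bump, 0.2))    # 2400.0: better to hold out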

Now, having thought of this problem for no more than 5 minutes, it seems to me, 
naively, that a mechanism with privacy would be helpful, i.e. the contract 
details should be as little-revealed as possible, to reduce the scope of 
miner-extractable value.
For instance, Taproot is good since only one branch at a time can be revealed, 
however, in case of a dispute, multiple competing branches of the Taproot may 
be revealed by the disputants, and the miners may now be able to make a choice.

Probably, it is best if our covenant systems take full advantage of the 
linearity of Schnorr signing whenever some kind of branch is involved; for 
example, a previous transaction may reveal, to whoever holds the proper adaptor 
signature, some scalar, and that scalar is actually the `s` component of a 
signature for a different transaction.
Without knowledge of the adaptor signature, and without knowledge of the link 
between this previous transaction and the other one, a miner cannot extract 
additional value by manipulating the order in which the transactions get 
confirmed on the blockchain.

This may mean that mechanisms that inspect the block outside of the transaction 
being validated (e.g. `OP_BRIBE` for drivechains, or similar mechanisms that 
might be capable of looking beyond the transaction) should be verboten; such 
cross-transaction introspection should require an adaptor signature that is 
kept secret by the participants from the miner that might want to manipulate 
the transactions to make other alternate branches more favorable to the miner.

In addition, covenant mechanisms that require large witness data are probably 
more vulnerable to MEV.
For instance, if in a dispute case, one of the disputants needs to use a large 
witness data while the other requires a smaller one, then the disputant with 
the smaller witness data would have an advantage, and can match the fee offered 
by the disputant with the larger witness.
Then a fee-maximizing miner would prefer the smaller-witness branch of the 
contract, as they get more fees for less blockspace.
Of course, this mechanism itself can be used if we can arrange that the 
disputant that is inherently "wrong" (i.e. went against the expected behavior 
of the protocol) is the one that is burdened with the larger witness.

Or I could be entirely wrong and MEV is something even worse than that.

Hmm

Regards,
ZmnSCPxj


Re: [bitcoin-dev] BIP proposal: Timelocked address fidelity bond for BIP39 seeds

2022-05-18 Thread ZmnSCPxj via bitcoin-dev


Good morning e,

> Good evening ZmnSCPxj,
>
> Sorry for the long delay...

Thank you very much for responding.

>
> > Good morning e,
> >
> > > Good evening ZmnSCPxj,
> > >
> > > For the sake of simplicity, I'll use the terms lender (Landlord), borrower
> > > (Lessor), interest (X), principal (Y), period (N) and maturity (height 
> > > after N).
> > >
> > > The lender in your scenario "provides use" of the principal, and is paid
> > > interest in exchange. This is of course the nature of lending, as a period
> > > without one's capital incurs an opportunity cost that must be offset (by
> > > interest).
> > >
> > > The borrower's "use" of the principal is what is being overlooked. To
> > > generate income from capital one must produce something and sell it.
> > > Production requires both capital and time. Borrowing the principle for the
> > > period allows the borrower to produce goods, sell them, and return the
> > > "profit" as interest to the lender. Use implies that the borrower is 
> > > spending
> > > the principle - trading it with others. Eventually any number of others 
> > > end up
> > > holding the principle. At maturity, the coin is returned to the lender (by
> > > covenant). At that point, all people the borrower traded with are bag 
> > > holders.
> > > Knowledge of this scam results in an imputed net present zero value for 
> > > the
> > > borrowed principal.
> >
> > But in this scheme, the principal is not being used as money, but as a 
> > billboard
> > for an advertisement.
> >
> > Thus, the bitcoins are not being used as money due to the use of the 
> > fidelity
> > bond to back a "you can totally trust me I am not a bot!!" assertion.
> > This is not the same as your scenario --- the funds are never transferred,
> > instead, a different use of the locked funds is invented.
> >
> > As a better analogy: I am borrowing a piece of gold, smelting it down to 
> > make
> > a nice shiny advertisement "I am totally not a bot!!", then at the end of 
> > the
> > lease period, re-smelting it back and returning to you the same gold piece
> > (with the exact same atoms constituting it), plus an interest from my 
> > business,
> > which gained customers because of the shiny gold advertisement claiming "I
> > am totally not a bot!!".
> >
> > That you use the same piece of gold for money does not preclude me using
> > the gold for something else of economic value, like making a nice shiny
> > advertisement, so I think your analysis fails there.
> > Otherwise, your analysis is on point, but analyses something else entirely.
>
>
> Ok, so you are suggesting the renting of someone else's proof of "burn" 
> (opportunity cost) to prove your necessary expense - the financial equivalent 
> of your own burn. Reading through the thread, it looks like you are 
> suggesting this as a way the cost of the burn might be diluted across 
> multiple uses, based on the obscuration of the identity. And therefore 
> identity (or at least global uniqueness) enters the equation. Sounds like a 
> reasonable concern to me.
>
> It appears that the term "fidelity bond" is generally accepted, though I find 
> this an unnecessarily misleading analogy. A bond is a loan (capital at risk), 
> and a fidelity bond is also capital at risk (to provide assurance of some 
> behavior). Proof of burn/work, such as Hash Cash (and Bitcoin), is merely 
> demonstration of a prior expense. But in those cases, the expense is provably 
> associated. As you have pointed out, if the burn is not associated with the 
> specific use, it can be reused, diluting the demonstrated expense to an 
> unprovable degree.

Indeed, that is why defiads used the term "advertisement" and not "fidelity 
bond".
One could say that defiads was a much-too-ambitious precursor of this proposed 
scheme.

> I can see how you come to refer to selling the PoB as "lending" it, because 
> the covenant on the underlying coin is time constrained. But nothing is 
> actually lent here. The "advertisement" created by the covenant (and its 
> presumed exclusivity) is sold. This is also entirely consistent with the idea 
> that a loan implies capital at risk. While this is nothing more than a 
> terminology nit, the use of "fidelity bond" and the subsequent description of 
> "renting" (the fidelity bond) both led me down another path (Tamas' proposal 
> for risk free lending under covenant, which we discussed here years ago).

Yes, that is why Tamas switched to defiads, as I had convinced him that it 
would be similar enough without actually being a covenant scam like you 
described.

> In any case, I tend to agree with your other posts on the subject. For the 
> burn to be provably non-dilutable it must be a cost provably associated to 
> the scenario which relies upon the cost. This provides the global uniqueness 
> constraint (under cryptographic assumptions of difficulty).

Indeed.
I suspect the only reason it is not *yet* a problem with existing JoinMarket 
and Teleport is simply 

Re: [bitcoin-dev] Improving chaumian ecash and sidechains with fidelity bond federations

2022-05-16 Thread ZmnSCPxj via bitcoin-dev


Good morning Chris,

> I don't know yet exactly the details of how such a scheme would work,
> maybe something like each fidelity bond owner creates a key in the
> multisig scheme, and transaction fees from the sidechain or ecash server
> are divided amongst the fidelity bonds in proportion to their fidelity
> bond value.

Such a scheme would probably look a little like my old ideas about "mainstake", 
where you lock up funds on the mainchain and use that as your right to 
construct new sidechain blocks, with your share of the sideblocks proportional 
to the value of the mainstake you locked up.

Of note is that it need not operate as a sidechain or chaumian bank, anything 
that requires a federation can use this scheme as well.
For instance, statechains are effectively federation-guarded CoinPools, and 
could use a similar scheme for selecting federation members.
Smart contracts unchained can also have users be guided by fidelity bonds in 
order to select federation members.
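
As a minimal sketch of the proportional rule being discussed (names and amounts 
are hypothetical), dividing a period's fees among federation members by locked 
fidelity bond value:

    # Hypothetical fidelity-bond values, in sats, locked by each member.
    bonds = {"maker_a": 500_000_000, "maker_b": 300_000_000, "maker_c": 200_000_000}
    total = sum(bonds.values())

    fees_collected = 50_000   # sats earned by the federation this period
    payouts = {m: fees_collected * v // total for m, v in bonds.items()}
    print(payouts)            # {'maker_a': 25000, 'maker_b': 15000, 'maker_c': 10000}

The same weighting could drive selection of block signers or federation members 
instead of fee splitting; the point is only that influence scales with the value 
provably locked.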

Regards,
ZmnSCPxj


Re: [bitcoin-dev] BIP proposal: Timelocked address fidelity bond for BIP39 seeds

2022-05-15 Thread ZmnSCPxj via bitcoin-dev


Good morning Chris,


> Yes linking the two identities (joinmarket maker and teleport maker)
> together slightly degrades privacy, but that has to be balanced against
> the privacy loss of leaving both systems open to sybil attacks. Without
> fidelity bonds the two systems can be sybil attacked just by using about
> five-figures USD, and the attack can get these coins back at any time
> when they're finished.

I am not saying "do not use fidelity bonds at all", I am saying "maybe we 
should disallow a fidelity bond used in JoinMarket from being used in Teleport 
and vice versa".



Regards,
ZmnSCPxj


Re: [bitcoin-dev] BIP proposal: Timelocked address fidelity bond for BIP39 seeds

2022-05-13 Thread ZmnSCPxj via bitcoin-dev
Good morning Chris,

> Hello waxwing,
>
> > A user sacrifices X amount of time-value-of-money (henceforth TVOM)
>
> by committing in Joinmarket with FB1. He then uses the same FB1 in
> Teleport, let's say. If he gets benefit Y from using FB1 in Joinmarket,
> and benefit Z in Teleport, then presumably he'll only do it if
> (probabilistically) he thinks Y+Z > X.
>
> > But as an assessor of FB1 in Joinmarket, I don't know if it's also
>
> being used for Teleport, and more importantly, if it's being used
> somewhere else I'm not even aware of. Now I'm not an economist I admit,
> so I might not be intuit-ing this situation right, but it fees to me
> like the right answer is "It's fine for a closed system, but not an open
> one." (i.e. if the set of possible usages is not something that all
> participants have fixed in advance, then there is an effective Sybilling
> problem, like I'm, as an assessor, thinking that sacrificed value 100 is
> there, whereas actually it's only 15, or whatever.)
>
>
> I don't entirely agree with this. The value of the sacrifice doesn't
> change if the fidelity bond owner starts using it for Teleport as well
> as Joinmarket. The sacrifice is still 100. Even if the owner doesn't run
> any maker at all the sacrifice would still be 100, because it only
> depends on the bitcoin value and locktime. In your equation Y+Z > X,
>
> using a fidelity bond for more applications increases the
> left-hand-side, while the right-hand-side X remains the same. As
> protection from a sybil attack is calculated using only X, it makes no
> difference what Y and Z are, the takers can still always calculate that
> "to sybil attack the coinjoin I'm about to make, it costs A btc locked
> up for B time".

I think another perspective here is that a maker with a single fidelity bond 
between both Teleport and Joinmarket has a single identity in both systems.

Recall that not only makers can be secretly surveillors, but takers can also be 
secretly surveillors.

Ideally, the maker should not tie its identity in one system to its identity in 
another system, as that degrades the privacy of the maker as well.

And the privacy of the maker is the basis of the privacy of its takers.
It is the privacy of the coins the maker offers, that is being purchased by the 
takers.


A taker can be a surveillor as well, and because the identity between 
JoinMarket and Teleport is tied via the single shared fidelity bond, a taker 
can perform partial-protocol attacks (i.e. aborting at the last step) to 
identify UTXOs of particular makers.
And it can perform attacks on both systems to identify the ownership of maker 
coins in both systems.

Since the coins in one system are tied to that system, this increases the 
information available to the surveillor: it is now able to associate coins in 
JoinMarket with coins in Teleport, via the shared fidelity bond identity.
It would be acceptable for both systems to share an identity if coins were 
shared between the JoinMarket and Teleport maker clients, but at that point 
they would arguably be a single system, not two separate systems, and that is 
what you should work towards.


Regards,
ZmnSCPxj


Re: [bitcoin-dev] CTV BIP Meeting #8 Notes

2022-05-12 Thread ZmnSCPxj via bitcoin-dev
Good morning Jorge,

> I fail to understand why non recursive covenants are called covenants at all. 
> Probably I'm missing something, but I guess that's another topic.

A covenant simply promises that something will happen in the future.

A recursive covenant additionally guarantees that the same promise is imposed 
again on the resulting outputs, and so on indefinitely.

Thus, non-recursive covenants can still be useful: they constrain only the next 
spend, without propagating the restriction forever.

Consider `OP_EVICT`, for example, which is designed for a very specific 
use-case, and avoids recursion.

Regards,
ZmnSCPxj


Re: [bitcoin-dev] Speedy covenants (OP_CAT2)

2022-05-11 Thread ZmnSCPxj via bitcoin-dev
Good morning Russell,

> On Wed, May 11, 2022 at 7:42 AM ZmnSCPxj via bitcoin-dev 
>  wrote:
>
> > REMEMBER: `OP_CAT` BY ITSELF DOES NOT ENABLE COVENANTS, WHETHER RECURSIVE 
> > OR NOT.
>
>
> I think the state of the art has advanced to the point where we can say 
> "OP_CAT in tapscript enables non recursive covenants and it is unknown 
> whether OP_CAT can enable recursive covenants or not".
>
> A. Poelstra in 
> https://www.wpsoftware.net/andrew/blog/cat-and-schnorr-tricks-i.html show how 
> to use CAT to use the schnorr verification opcode to get the sighash value + 
> 1 onto the stack, and then through some grinding and some more CAT, get the 
> actual sighash value on the stack. From there we can use SHA256 to get the 
> signed transaction data onto the stack and apply introspect (using CAT) to 
> build functionality similar to OP_CTV.
>
> The missing bits for enabling recursive covenants comes down to needing to 
> transform a scriptpubkey into an taproot address, which involves some 
> tweaking. Poelstra has suggested that it might be possible to hijack the 
> ECDSA checksig operation from a parallel, legacy input, in order to perform 
> the calculations for this tweaking. But as far as I know no one has yet been 
> able to achieve this feat.

Hmm, I do not suppose it would have worked in ECDSA?
Seems like this exploits linearity in the Schnorr.
For the ECDSA case it seems that the trick in that link leads to `s = e + G[x]` 
where `G[x]` is the x-coordinate of `G`.
(I am not a mathist, so I probably am not making sense; in particular, there 
may be an operation to add two SECP256K1 scalars that I am not aware of)

In that case, since Schnorr was added later, I get away by a technicality, 
since it is not *just* `OP_CAT` which enabled this style of covenant, it was 
`OP_CAT` + BIP340 v(^^);

Also holy shit math is scary.

Seems this also works with `OP_SUBSTR`, simply by inverting it into "validate 
that the concatenation is correct" rather than "concatenate it ourselves".




So really: are recursive covenants good or...?
Because if recursive covenants are good, what we should really work on is 
making them cheap (in CPU load/bandwidth load terms) and private, to avoid 
centralization and censoring.

Regards,
ZmnSCPxj


Re: [bitcoin-dev] Speedy covenants (OP_CAT2)

2022-05-11 Thread ZmnSCPxj via bitcoin-dev
Good morning vjudeu,


> > Looks like `OP_CAT` is not getting enabled until after we are reasonably 
> > sure that recursive covenants are not really unsafe.
>
> Maybe we should use OP_SUBSTR instead of OP_CAT. Or even better: OP_SPLIT. 
> Then, we could have OP_SPLIT...  that would split a 
> string N times (so there will be N+1 pieces). Or we could have just OP_SPLIT 
>  to split one string into two. Or maybe OP_2SPLIT and OP_3SPLIT, just to 
> split into two or three pieces (as we have OP_2DUP and OP_3DUP). I think 
> OP_SUBSTR or OP_SPLIT is better than OP_CAT, because then things always get 
> smaller and we can be always sure that we will have one byte as the smallest 
> unit in our Script.

Unfortunately `OP_SUBSTR` can be used to synthesize an effective `OP_CAT`.

Instead of passing in two items on the witness stack to be `OP_CAT`ted 
together, you instead pass in the two items to concatenate, and *then* the 
concatenation.
Then you can synthesize a SCRIPT which checks that the supposed concatenation 
is indeed the two items to be concatenated.
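
A minimal sketch in Python of the check such a SCRIPT would perform (assuming 
the historical `in begin size -> out` semantics of `OP_SUBSTR`, and that the two 
lengths are constants known to the SCRIPT): the spender supplies the two pieces 
plus the claimed concatenation, and the SCRIPT only ever takes substrings, never 
concatenates.

    def op_substr(s, begin, size):
        # Models OP_SUBSTR: pops size, begin, and a string; pushes s[begin:begin+size].
        return s[begin:begin + size]

    def check_concat(a, b, ab, len_a, len_b):
        # What the SCRIPT enforces: the supplied `ab` really is a || b.
        return (len(ab) == len_a + len_b
                and op_substr(ab, 0, len_a) == a
                and op_substr(ab, len_a, len_b) == b)

    assert check_concat(b"quine", b"demo", b"quinedemo", 5, 4)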

Recursive covenants DO NOT arise from the increasing amounts of memory the 
trivial `OP_DUP OP_CAT OP_DUP OP_CAT` repetition allocates.

REMEMBER: `OP_CAT` BY ITSELF DOES NOT ENABLE COVENANTS, WHETHER RECURSIVE OR 
NOT.

Instead, `OP_CAT` enable recursive covenants (which we are not certain are 
safe) because `OP_CAT` allows quining to be done.
Quining is a technique to pass a SCRIPT with a copy of its code, so that it can 
then enforce that the output is passed to the exact same input SCRIPT.

`OP_SUBSTR` allows a SCRIPT to validate that it is being passed a copy of 
itself and that the complete SCRIPT contains its copy as an `OP_PUSH` and the 
rest of the SCRIPT as actual code.
This is done by `OP_SUBSTR` the appropriate parts of the supposed complete 
SCRIPT and comparing them to a reference value we have access to (because our 
own SCRIPT was passed to us inside an `OP_PUSH`).

   # Assume that the witness stack top is the concatenation of
   #   an `OP_PUSH` of the SCRIPT below, then the SCRIPT below.
   # Assume this SCRIPT is prepended with an OP_PUSH of our own code.
   OP_TOALTSTACK # save our reference
   OP_DUP 1 <scriptlength> OP_SUBSTR # Get the OP_PUSH argument
   OP_FROMALTSTACK OP_DUP OP_TOALTSTACK # Get our reference
   OP_EQUALVERIFY # check they are the same
   OP_DUP <1 + scriptlength> <scriptlength> OP_SUBSTR # Get the SCRIPT body
   OP_FROMALTSTACK # Get our reference
   OP_EQUALVERIFY # check they are the same
   # At this point, we have validated that the top of the witness stack
   # is the quine of this SCRIPT.
   # TODO: validate the `OP_PUSH` instruction, left as an exercise for the
   # reader.

Thus, `OP_SUBSTR` is enough to enable quining and is enough to implement 
recursive covenants.

We cannot enable `OP_SUBSTR` either, unless we are reasonably sure that 
recursive covenants are safe.

(FWIW recursive covenants are probably safe, as they are not in fact 
Turing-complete, they are a hair less powerful, equivalent to the total 
functional programming with codata.)

Regards,
ZmnSCPxj


Re: [bitcoin-dev] BIP proposal: Timelocked address fidelity bond for BIP39 seeds

2022-05-10 Thread ZmnSCPxj via bitcoin-dev
Good morning waxwing,

> --- Original Message ---
> On Sunday, May 1st, 2022 at 11:01, Chris Belcher via bitcoin-dev 
> bitcoin-dev@lists.linuxfoundation.org wrote:
>
> > Hello ZmnSCPxj,
> > This is an intended feature. I'm thinking that the same fidelity bond
> > can be used to running a JoinMarket maker as well as a Teleport
> > (Coinswap) maker.
> > I don't believe it's abusable. It would be a problem if the same
> > fidelity bond is used by two makers in the same application, but
> > JoinMarket takers are already coded to check for this, and Teleport
> > takers will soon as well. Using the same bond across different
> > applications is fine.
> > Best,
> > CB
>
> Hi Chris, Zmn, list,
> I've noodled about this a few times in the past (especially when trying to 
> figure out an LSAG style ring sig based FB for privacy, but that does not 
> seem workable), and I can't decide the right perspective on it.
>
> A user sacrifices X amount of time-value-of-money (henceforth TVOM) by 
> committing in Joinmarket with FB1. He then uses the same FB1 in Teleport, 
> let's say. If he gets benefit Y from using FB1 in Joinmarket, and benefit Z 
> in Teleport, then presumably he'll only do it if (probabilistically) he 
> thinks Y+Z > X.
>
> But as an assessor of FB1 in Joinmarket, I don't know if it's also being used 
> for Teleport, and more importantly, if it's being used somewhere else I'm not 
> even aware of. Now I'm not an economist I admit, so I might not be intuit-ing 
> this situation right, but it feels to me like the right answer is "It's fine 
> for a closed system, but not an open one." (i.e. if the set of possible 
> usages is not something that all participants have fixed in advance, then 
> there is an effective Sybilling problem, like I'm, as an assessor, thinking 
> that sacrificed value 100 is there, whereas actually it's only 15, or 
> whatever.)
>
> As I mentioned in 
> https://github.com/JoinMarket-Org/joinmarket-clientserver/issues/993#issuecomment-1110784059
>  , I did wonder about domain separation tags because of this, and as I 
> vaguely alluded to there, I'm really not sure about it.
>
> If it was me I'd want to include domain separation via part of the signed 
> message, since I don't see how it hurts? For scenarios where reuse is fine, 
> reuse can still happen.

Ah, yes, now I remember.
I discussed this with Tamas as well in the past and that is why we concluded 
that in defiads, each UTXO can host at most one advertisement at any one time.
In the case of defiads there would be a sequence counter where a 
higher-sequenced advertisement would replace lower-sequenced advertisement, so 
you could update, but at any one time, for a defiads node, only one 
advertisement per UTXO could be used.
This assumed that there would be a defiads network with good gossip propagation 
so our thinking at the time was that a higher-sequenced advertisement would 
quickly replace lower-sequenced ones on the network.
But it is simpler if such replacement would not be needed, and you could then 
commit to the advertisement directly on the UTXO via a tweak.

Each advertisement would also have a specific application ID that it applied 
to, and applications on top of defiads would ask the local defiads node to give 
it the ads that match a specific application ID, so a UTXO could only be used 
for one application at a time.
This would be equivalent to domain separation tags that waxwing mentions.

Regards,
ZmnSCPxj



Re: [bitcoin-dev] Conjectures on solving the high interactivity issue in payment pools and channel factories

2022-05-10 Thread ZmnSCPxj via bitcoin-dev
Good morning Billy,


> Very interesting exploration. I think you're right that there are issues with 
> the kind of partitioning you're talking about. Lightning works because all 
> participants sign all offchain states (barring data loss). If a participant 
> can be excluded from needing to agree to a new state, there must be an 
> additional mechanism to ensure the relevant state for that participant isn't 
> changed to their detriment. 
>
> To summarize my below email, the two techniques I can think for solving this 
> problem are:
>
> A. Create sub-pools when the whole group is live that can be used by the sub- 
> pool participants later without the whole group's involvement. The whole 
> group is needed to change the whole group's state (eg close or open 
> sub-pools), but sub-pool states don't need to involve the whole group.

Is this not just basically channel factories?

To reduce the disruption if any one pool participant is down, have each 
sub-pool have only 2 participants each.
More participants means that the probability that one of them is offline is 
higher, so you use the minimum number of participants in the sub-pool: 2.
This makes any arbitrary sub-pool more likely to be usable.
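
A minimal sketch of that availability argument (the 95% per-participant figure 
is purely hypothetical): a sub-pool can only advance its state when every member 
is online, so usability falls geometrically with sub-pool size.

    # Probability that an n-member sub-pool has all members online, assuming
    # each participant is independently online with probability p.
    p = 0.95
    for n in (2, 5, 10):
        print(n, round(p ** n, 3))    # 2 -> 0.902, 5 -> 0.774, 10 -> 0.599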

But a 2-participant pool is a channel.
So a large multiparticipant pool with sub-pools is just a channel factory for a 
bunch of channels.

I like this idea because it has good tradeoffs, so channel factories ho.

Regards,
ZmnSCPxj


Re: [bitcoin-dev] Speedy covenants (OP_CAT2)

2022-05-07 Thread ZmnSCPxj via bitcoin-dev
Good morning shesek,

> On Sat, May 7, 2022 at 5:08 PM ZmnSCPxj via bitcoin-dev 
>  wrote:
> > * Even ***with*** `OP_CAT`, the following will enable non-recursive 
> > covenants without enabling recursive covenants:
> >  * `OP_CTV`, ...
> > * With `OP_CAT`, the following would enable recursive covenants:
> >  * `OP_CHECKSIGFROMSTACK`, ...
>
> Why does CTV+CAT not enable recursive covenants while CSFS+CAT does?
>
> CTV+CAT lets you similarly assert against the outputs and verify that they 
> match some dynamically constructed script.
>
> Is it because CTV does not let you have a verified copy of the input's 
> prevout scriptPubKey on the stack [0], while with OP_CSFS you can because the 
> signature hash covers it?
>
> But you don't actually need this for recursion. Instead of having the user 
> supply the script in the witness stack and verifying it against the input to 
> obtain the quine, the script can simply contain a copy of itself as an 
> initial push (minus this push). You can then reconstruct the full script 
> quine using OP_CAT, as a PUSH(

Re: [bitcoin-dev] CTV BIP Meeting #8 Notes

2022-05-07 Thread ZmnSCPxj via bitcoin-dev
Good morning Jorge,

> I think people may be scared of potential attacks based on covenants. For 
> example, visacoin.
> But there was a thread with ideas of possible attacks based on covenants.
> To me the most scary one is visacoin, specially seeing what happened in 
> canada and other places lately and the general censorship in the west, the 
> supposed war on "misinformation" going on (really a war against truth imo, 
> but whatever) it's getting really scary. But perhaps someone else can be more 
> scared about a covenant to add demurrage fees to coins or something, I don't 
> know.
> https://bitcointalk.org/index.php?topic=278122

This requires *recursive* covenants.

At the time the post was made, no distinction was drawn between recursive and 
non-recursive covenants, which is why the post points out that covenants suck.
The idea then was that anything powerful enough to provide covenants would also 
be powerful enough to provide *recursive* covenants, so covenants without 
recursion were thought to be impossible.

However, `OP_CTV` turns out to enable sort-of covenants, but by construction 
*cannot* provide recursion.
It is just barely powerful enough to make a covenant, but not powerful enough 
to make *recursive* covenants.

That is why today we distinguish between recursive and non-recursive covenant 
opcodes, because we now have opcode designs that provides non-recursive 
covenants (when previously it was thought all covenant opcodes would provide 
recursion).

`visacoin` can only work as a recursive covenant, thus it is not possible to 
use `OP_CTV` to implement `visacoin`, regardless of your political views.

(I was also misinformed in the past and ignored `OP_CTV` since I thought that, 
like all the other covenant opcodes, it would enable recursive covenants.)


Regards,
ZmnSCPxj


Re: [bitcoin-dev] Speedy covenants (OP_CAT2)

2022-05-07 Thread ZmnSCPxj via bitcoin-dev
Good morning Jorge,

> Thanks again.
> I won't ask anything else about bitcoin, I guess, since it seems my questions 
> are too "misinforming" for the list.
> I also agreed with vjudeu, also too much misinformation on my part to agree 
> with him, it seems.
> I mean, I say that because it doesn't look like my emails are appearing on 
> the mailing list:
>
> https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-May/thread.html#start
>
> Do any of you now who moderates the mailing list? I would like to ask him 
> what was wrong with my latest messages.

Cannot remember.

> Can the censored messages me seen somewhere perhaps?

https://lists.ozlabs.org/pipermail/bitcoin-dev-moderation/

E.g.: 
https://lists.ozlabs.org/pipermail/bitcoin-dev-moderation/2022-May/000325.html

> That way the moderation could be audited.
>
> This is quite worrying in my opinion.
> But I'm biased, perhaps I deserve to be censored. It would still be nice to 
> understand why, if you can help me.
> Now I wonder if this is the first time I was censored or I was censored in 
> bip8 discussions too, and who else was censored, when, why and by whom.
> Perhaps I'm missing something about how the mailing list works and/or are 
> giving this more importance than it has.

Sometimes the moderator is just too busy living his or her life to moderate 
messages within 24 hours.

Regards,
ZmnSCPxj


Re: [bitcoin-dev] Speedy covenants (OP_CAT2)

2022-05-07 Thread ZmnSCPxj via bitcoin-dev
Good morning Jorge,

> Thanks a lot for the many clarifications.
> Yeah, I forgot it wasn't OP_CAT alone, but in combination with other things.
> I guess this wouldn't be a covenants proposal then.
> But simplicity would enable covenants too indeed, no?
> Or did I get that wrong too?

Yes, it would enable covenants.

However, it could also enable *recursive* covenants, depending on what 
introspection operations are actually implemented (though maybe not? Russell 
O'Connor should be the one that answers this).

It is helpful to delineate between non-recursive covenants from recursive 
covenants.

* Even ***with*** `OP_CAT`, the following will enable non-recursive covenants 
without enabling recursive covenants:
  * `OP_CTV`
  * `SIGHASH_ANYPREVOUT`
* With `OP_CAT`, the following would enable recursive covenants:
  * `OP_EVAL`
  * `OP_CHECKSIGFROMSTACK`
  * `OP_TX`/`OP_TXHASH`
  * ...possibly more.
* It is actually *easier* to *design* an opcode which inadvertently 
supports recursive covenants than to design one which avoids recursive 
covenants.

Recursive covenants are very near to true Turing-completeness.
We want to avoid Turing-completeness due to the halting problem being 
unsolvable for Turing-complete languages.
That is, given just a program, we cannot determine for sure if for all possible 
inputs, it will terminate.
It is important in our context (Bitcoin) that any SCRIPT programs we write 
*must* terminate, or else we run the risk of a DoS on the network.

A fair amount of this is theoretical crap, but if you want to split hairs, 
recursive covenants are *not* Turing-complete, but are instead total functional 
programming with codata.

As a very rough bastardization, a program written in a total functional 
programming language with codata will always assuredly terminate.
However, the return value of a total functional programming language with 
codata can be another program.
An external program (written in a Turing-complete language) could then just 
keep invoking the interpreter of the total functional programming language with 
codata (taking the output program and running it, taking *its* output program 
and running it, ad infinitum), thus effectively being able to loop indefinitely.

Translated to Bitcoin transactions, a recursive covenant system can force an 
output to be spent only if the output is spent on a transaction where one of 
the outputs is the same covenant (possibly with tweaks).
Then an external program can keep passing the output program to the Bitcoin 
SCRIPT interpreter --- by building transactions that spend the previous output.

This behavior is still of concern.
It may be possible to attack the network by eroding its supply, by such a 
recursive covenant.

--

Common reactions:

* We can just limit the number of opcodes we can process and then fail it if it 
takes too many operations!
  That way we can avoid DoS!
  * Yes, this indeed drops it from Turing-complete to total, possibly total 
functional programming **without** codata.
But if it is possible to treat data as code, it may only drop it to "total but 
with codata" instead (i.e. recursive covenants).
But if you want to avoid recursive covenants while allowing non-recursive ones 
(i.e. equivalent to total without codata), may I suggest you instead look at 
`OP_CTV` and `SIGHASH_ANYPREVOUT`?

* What is so wrong with total-with-codata anyway??
  So what if the recursive covenant could potentially consume all Bitcoins, 
nobody will pay to it except as a novelty!!
  If you want to burn your funds, 1BitcoinEater willingly accepts it!
  * The burden of proof-of-safety is on the proposer, so if you have some proof 
that total-with-codata is safe, by construction, then sure, we can add opcodes 
that may enable recursive covenants, and add `OP_CAT` back in too.

Regards,
ZmnSCPxj


Re: [bitcoin-dev] Speedy covenants (OP_CAT2)

2022-05-06 Thread ZmnSCPxj via bitcoin-dev
Good morning Jorge,

> OP_CAT was removed. If I remember correctly, some speculated that perhaps it 
> was removed because it could allow covenants. I don't remember any technical 
> concern about the OP besides enabling covenants. Before it was a common 
> opinion that covenants shouldn't be enabled in bitcoin because, despite 
> having good use case, there are some nasty attacks that are enabled with them 
> too. These days it seems the opinion of the benefits being worth the dangers 
> is quite generalized. Which is quite understandable given that more use cases 
> have been thought since then.

I think the more accurate reason for why it was removed is because the 
following SCRIPT of N size would lead to 2^N memory usage:

OP_1 OP_DUP OP_CAT OP_DUP OP_CAT OP_DUP OP_CAT OP_DUP OP_CAT OP_DUP OP_CAT 
OP_DUP OP_CAT ...

In particular it was removed at about the same time as `OP_MUL`, which has 
similar behavior (consider that multiplying two 32-bit numbers results in a 
64-bit number, similar to `OP_CAT`ting a vector to itself).
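
A minimal sketch of that doubling behavior (plain Python standing in for the 
interpreter's stack):

    # Each OP_DUP/OP_CAT pair doubles the top stack item, so k pairs turn the
    # single byte pushed by OP_1 into 2**k bytes.
    item = b"\x01"            # OP_1
    for _ in range(20):       # twenty OP_DUP OP_CAT pairs
        item = item + item    # OP_DUP then OP_CAT
    print(len(item))          # 1048576 bytes (1 MiB); 30 pairs would be 1 GiB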

`OP_CAT` was removed long before covenants were even expressed as a possibility.

Covenants were first expressed as a possibility, I believe, during discussions 
around P2SH.
Basically, at the time, the problem was this:

* Some receivers wanted to use k-of-n multisignature for improved security.
* The only way to implement this, pre-P2SH, was by putting in the 
`scriptPubKey` all the public keys.
* The sender is the one paying for the size of the `scriptPubKey`.
* It was considered unfair that the sender is paying for the security of the 
receiver.

Thus, `OP_EVAL` and the P2SH concept was conceived.
Instead of the `scriptPubKey` containing the k-of-n multisignature, you create 
a separate script containing the public keys, then hash it, and the 
`scriptPubKey` would contain the hash of the script.
By symmetry with the P2PKH template:

OP_DUP OP_HASH160 <pubkey hash> OP_EQUALVERIFY OP_CHECKSIG

The P2SH template would be:

OP_DUP OP_HASH160 <script hash> OP_EQUALVERIFY OP_EVAL

`OP_EVAL` would take the stack top vector and treat it as a Bitcoin SCRIPT.

It was then pointed out that `OP_EVAL` could be used to create recursive 
SCRIPTs by quining using `OP_CAT`.
`OP_CAT` was already disabled by then, but people were talking about 
re-enabling it somehow by restricting the output size of `OP_CAT` to limit the 
O(2^N) behavior.

Thus, since then, `OP_CAT` has been associated with ***recursive*** covenants 
(and people are now reluctant to re-enable it even with a limit on its output 
size, because recursive covenants).
In particular, `OP_CAT` in combination with `OP_CHECKSIGFROMSTACK` and 
`OP_CHECKSIG`, you could get a deferred `OP_EVAL` and then use `OP_CAT` too to 
quine.

Because of those concerns, the modern P2SH is now "just a template" with an 
implicit `OP_EVAL` of the `redeemScript`, but without any `OP_EVAL` being 
actually enabled.

(`OP_EVAL` cannot replace an `OP_NOP` in a softfork, but it is helpful to 
remember that P2SH was pretty much what codified the difference between 
softfork and hardfork, and the community at the time was small enough (or so it 
seemed) that a hardfork might not have been disruptive.)

> Re-enabling OP_CAT with the exact same OP would be a hardfork, but creating a 
> new OP_CAT2 that does the same would be a softfork.

If you are willing to work in Taproot the same OP-code can be enabled in a 
softfork by using a new Tapscript version.

If you worry about quantum-computing-break, a new SegWit version (which is more 
limited than Tapscript versions, unfortunately) can also be used, creating a 
new P2WSHv2 (or whatever version) that enables these opcodes.

> As far a I know, this is the covenants proposal that has been implemented for 
> the longest time, if that's to be used as a selection criteria.And as always, 
> this is not incompatible with deploying other convenant proposals later.

No, it was `OP_EVAL`, not `OP_CAT`.
In particular if `OP_EVAL` was allowed in the `redeemScript` then it would 
enable covenants as well.
It was just pointed out that `OP_CAT` enables recursive covenenats in 
combination with `OP_EVAL`-in-`redeemScript`.

In particular, in combination with `OP_CAT`, `OP_EVAL` not only allows 
recursive covenants, but also recursion *within* a SCRIPT i.e. unbounded SCRIPT 
execution.
Thus, `OP_EVAL` is simply not going to fly, at all.

> Personally I find the simplicity proposal the best one among all the covenant 
> proposals by far, including this one. But I understand that despite the name, 
> the proposal is harder to review and test than other proposals, for it 
> wouldn't simply add covenants, but a complete new scripting language that is 
> better in many senses. Speedy covenants, on the other hand, is much simpler 
> and has been implemented for longer, so in principle, it should be easier to 
> deploy in a speedy manner.
>
> What are the main arguments against speedy covenants (aka op_cat2) and 
> against deploying simplicity in 

Re: [bitcoin-dev] BIP proposal: Timelocked address fidelity bond for BIP39 seeds

2022-05-03 Thread ZmnSCPxj via bitcoin-dev
Good morning e,

> Good evening ZmnSCPxj,
>
> For the sake of simplicity, I'll use the terms lender (Landlord), borrower 
> (Lessor), interest (X), principal (Y), period (N) and maturity (height after 
> N).
>
> The lender in your scenario "provides use" of the principal, and is paid 
> interest in exchange. This is of course the nature of lending, as a period 
> without one's capital incurs an opportunity cost that must be offset (by 
> interest).
>
> The borrower's "use" of the principal is what is being overlooked. To 
> generate income from capital one must produce something and sell it. 
> Production requires both capital and time. Borrowing the principle for the 
> period allows the borrower to produce goods, sell them, and return the 
> "profit" as interest to the lender. Use implies that the borrower is spending 
> the principle - trading it with others. Eventually any number of others end 
> up holding the principle. At maturity, the coin is returned to the lender (by 
> covenant). At that point, all people the borrower traded with are bag 
> holders. Knowledge of this scam results in an imputed net present zero value 
> for the borrowed principal.

But in this scheme, the principal is not being used as money, but as a 
billboard for an advertisement.
Thus, the bitcoins are not being used as money due to the use of the fidelity 
bond to back a "you can totally trust me I am not a bot!!" assertion.
This is not the same as your scenario --- the funds are never transferred, 
instead, a different use of the locked funds is invented.

As a better analogy: I am borrowing a piece of gold, smelting it down to make a 
nice shiny advertisement "I am totally not a bot!!", then at the end of the 
lease period, re-smelting it back and returning to you the same gold piece 
(with the exact same atoms constituting it), plus an interest from my business, 
which gained customers because of the shiny gold advertisement claiming "I am 
totally not a bot!!".

That you use the same piece of gold for money does not preclude me using the 
gold for something else of economic value, like making a nice shiny 
advertisement, so I think your analysis fails there.
Otherwise, your analysis is on point, but analyses something else entirely.

Regards,
ZmnSCPxj


Re: [bitcoin-dev] BIP proposal: Timelocked address fidelity bond for BIP39 seeds

2022-05-03 Thread ZmnSCPxj via bitcoin-dev
Good morning e,


> It looks like you are talking about lending where the principal return is 
> guaranteed by covenant at maturity. This make the net present value of the 
> loan zero.

I am talking about lending where:

* Lessor pays landlord X satoshis in rent.
* Landlord provides use of the fidelity bond coin (value Y) for N blocks.
* Landlord gets the entire fidelity bond amount (Y) back.

Thus, the landlord gets X + Y satoshis, earning X satoshis, at the cost of 
having Y satoshis locked for N blocks.

So I do not understand why the value of this, to the landlord, would be 0.
Compare to a simple HODL strategy, where I lock Y satoshis for N blocks and get 
Y satoshi back.
Or are you saying that a simple HODL strategy is of negative value and that 
"zero value" is the point where you actively invest all your savings?
Or are you saying that HODL strategy is of some value since it still allows you 
to spend funds freely in the N blocks you are HODLing them, and the option to 
spend is of value, while dedfinitely locking the value Y for N blocks is equal 
to the value X of the rent paid (and thus net zero value)?

Regards,
ZmnSCPxj


Re: [bitcoin-dev] Pay to signature hash as a covenant

2022-05-03 Thread ZmnSCPxj via bitcoin-dev
Good morning vjudeu,

> Typical P2PK looks like that: "<pubkey> OP_CHECKSIG". In a 
> typical scenario, we have "<signature>" in our input and "<pubkey> 
> OP_CHECKSIG" in our output. I wonder if it is possible to use covenants right 
> here and right now, with no consensus changes, just by requiring a specific 
> signature. To start with, I am trying to play with P2PK and legacy 
> signatures, but it may turn out, that doing such things with Schnorr 
> signatures will be more flexible and will allow more use cases.
>
>
> The simplest "pay to signature" script I can think of is: " 
> OP_SWAP OP_CHECKSIG". Then, any user can provide just a "" in some 
> input, as a part of a public key recovery. The problem with such scheme is 
> that it is insecure. Another problem is that we should handle it carefully, 
> because signatures are removed from outputs. However, we could replace it 
> with some signature hash, then it will be untouched, for example: 
> "OP_TOALTSTACK OP_DUP OP_HASH160  OP_EQUALVERIFY 
> OP_FROMALTSTACK OP_CHECKSIG".
>
> And then, signatures are more flexible than public keys, because we can use 
> many different sighashes to decide, what kind of transaction is allowed and 
> what should be rejected. Then, if we could use the right signature with 
> correct sighashes, it could be possible to disable key recovery and require 
> some specific public key, then that scheme could be safely used again. I 
> still have no idea, how to complete that puzzle, but it seems to be possible 
> to use that trick, to restrict destination address. Maybe I should wrap such 
> things in some kind of multisig or somehow combine it with OP_CHECKSIGADD, 
> any ideas?

You can do the same thing with P2SH, P2WSH, and P2TR (in a Tapscript) as well.

Note that it is generally known that you *can* use pre-signed transactions to 
implement vaults.
Usually what we refer to by "covenant" is something like "this output will 
definitely be constructed here" without necessarily requiring a signature.

HOWEVER, what you are proposing is not ***quite*** pre-signed transactions!
Instead, you are (ab)using signatures in order to commit to particular 
sighashes.

First, let me point out that you do not need to hash the signature and *then* 
use a raw `scriptPubKey`, which I should *also* point out is not going to pass 
`IsStandard` checks (and will not propagate on the mainnet network reliably, 
only on testnet).
Instead, you can use P2WSH and *include* the signature outright in the 
`redeemScript`.
Since the output `scriptPubKey` is really just the hash of the `redeemScript`, 
this is automatically a hash of a signature (plus a bunch of other bytes).

So your proposal boils down to using P2WSH and having a `redeemScript`:

redeemScript = <fixedSignature> <fixPubKey> OP_CHECKSIG

Why include the `fixPubKey` in the `redeemScript`?
In your scheme, you would provide the signature and pubkey in the `scriptSig` 
that spends the `scriptPubKey`.
But in a post-P2WSH world, `redeemScript` will also be provided in the 
`witness`, so you *also* provide both the signature and the pubkey, and both 
are hashed before appearing on the `scriptPubKey` --- which is exactly what you 
are proposing anyway.

The above pre-commits to a particular transaction, depending on the `SIGHASH` 
flags of the `fixedSignature`.
Of note is that the `fixPubKey` can have a throwaway privkey, or even a 
***publicly-shared*** privkey.
Even if an alternate signature is created from the well-known privkey, the 
`redeemScript` will not allow any other signature to be accepted, it will only 
use the one that is hardcoded into the script.
Using a publicly-shared privkey would allow us to compute just the expected 
`sighash`, then derive the `fixedSignature` that should be in the 
`redeemScript`.

In particular, this scheme would work just as well for the "congestion control" 
application proposed for `OP_CTV`.
`OP_CTV` still wins in raw WUs spent (just the 32-WU hash), but in the absence 
of `OP_CTV` because raisins, this would also work (but you reveal a 33-WU 
pubkey, and a 73-WU/64-WU signature, which is much larger).
Validation speed is also better for `OP_CTV`, as it is just a hash, while this 
scheme uses signature validation in order to commit to a specific hash anyway 
(a waste of CPU time, since you could just check the hash directly instead of 
going through the rigmarole of a signature, but one which allows us to make 
non-recursive covenants with some similarities to `OP_CTV`).

A purported `OP_CHECKSIGHASHVERIFY` which accepts a `SIGHASH` flag and a hash, 
and checks that the sighash of the transaction (as modified by the flags) is 
equal to the hash, would be more efficient, and would also not differ by much 
from `OP_CTV`.

This can be used in a specific branch of an `OP_IF` to allow, say, a cold 
privkey to override this branch, to start a vault construction.

The same technique should work with Tapscripts inside Taproot (but the 
`fixedPubKey` CANNOT be the same as the internal Taproot key!).

Regards,

Re: [bitcoin-dev] BIP proposal: Timelocked address fidelity bond for BIP39 seeds

2022-05-02 Thread ZmnSCPxj via bitcoin-dev
Good morning Chris,

> Hello ZmnSCPxj,
>
> Renting out fidelity bonds is an interesting idea. It might happen in
> the situation where a hodler wants to generate yield but doesn't want
> the hassle of running a full node and yield generator. A big downside of
> it is that the yield generator income is random while the rent paid is a
> fixed cost, so there's a chance that the income won't cover the rent.

The fact that *renting* is at all possible suggests to me that the following 
situation *could* arise:

* A market of lessors arises.
* A surveillor creates multiple identities.
* Each fake identity rents separately from multiple lessors.
* Surveillor gets privacy data by paying out rent money to the lessor market.

In defiads, I and Tamas pretty much concluded that rental would happen 
inevitably.
One could say that defiads was a kind of fidelity bond system.
Our solution for defiads was to prioritize propagating advertisements (roughly 
equivalent to the certificates in your system, I think) with larger bonded 
values * min(bonded_time, 1 year).
However, do note that we did not intend defiads to be used for 
privacy-sensitive applications like JoinMarket/Teleport.


Regards,
ZmnSCPxj


Re: [bitcoin-dev] BIP proposal: Timelocked address fidelity bond for BIP39 seeds

2022-05-01 Thread ZmnSCPxj via bitcoin-dev
Good morning again Chris,

I wonder if there would be an incentive to *rent* out a fidelity bond, i.e. I 
am interested in application A, you are interested in application B, and you 
rent my fidelity bond for application B.
We can use a pay-for-signature protocol now that Taproot is available, so that 
the signature for the certificate for your usage of application B can only be 
completed if I reveal a secret via a signature on another Taproot UTXO that 
gets me the rent for the fidelity bond.

I do not know if this would count as "abuse" or just plain "economic 
sensibility".
But a time may come where people just offer fidelity bonds for lease without 
actually caring about the actual applications it is being used *for*.
If the point is simply to make it costly to show your existence, whether you 
pay for the fidelity bond by renting it, or by acquiring your own Bitcoins and 
foregoing the ability to utilize it for some amount of time (which should cost 
closely to renting the fidelity bond from a provider), should probably not 
matter economically.

You mention that JoinMarket clients now check for fidelity bonds not being used 
across multiple makers, how is this done exactly, and does the technique not 
deserve a section in this BIP?

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] BIP proposal: Timelocked address fidelity bond for BIP39 seeds

2022-05-01 Thread ZmnSCPxj via bitcoin-dev
Good morning Chris,

Excellent BIP!

From a quick read-over, it seems to me that the fidelity bond does not commit 
to any particular scheme or application.
This means (as I understand it) that the same fidelity bond can be used to 
prove existence across multiple applications.
I am uncertain whether this is potentially abusable or not.


Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Towards a means of measuring user support for Soft Forks

2022-04-30 Thread ZmnSCPxj via bitcoin-dev
Good morning Billy,

> @Zman
> > if two people are perfectly rational and start from the same information, 
> > they *will* agree
> I take issue with this. I view the word "rational" to mean basically logical. 
> Someone is rational if they advocate for things that are best for them. Two 
> humans are not the same people. They have different circumstances and as a 
> result different goals. Two actors with different goals will inevitably have 
> things they rationally and logically disagree about. There is no universal 
> rationality. Even an AI from outside space and time is incredibly likely to 
> experience at least some value drift from its peers.

Note that "the goal of this thing" is part of the information where both "start 
from" here.

Even if you and I have different goals, if we both think about "given this 
goal, and these facts, is X the best solution available?" we will both come to 
the same answer, though our goals might not be the same as each other's, or 
the same as the "this goal" in the question.
What is material is simply that the laws of logic are universal and if you 
include the goal itself as part of the question, you will reach the same 
conclusion --- but refuse to act on it (and even oppose it) because the goal is 
not your own goal.

E.g. "What is the best way to kill a person without getting caught?" will 
probably have us both come to the same broad conclusion, but I doubt either of 
us has a goal or sub-goal to kill a person.
That is: if you are perfectly rational, you can certainly imagine a "what if" 
where your goal is different from your current goal and figure out what you 
would do ***if*** that were your goal instead.

Is that better now?

> > 3. Can we actually have the goals of all humans discussing this topic all 
> > laid out, *accurately*?
> I think this would be a very useful exercise to do on a regular basis. This 
> conversation is a good example, but conversations like this are rare. I tried 
> to discuss some goals we might want bitcoin to have in a paper I wrote about 
> throughput bottlenecks. Coming to a consensus around goals, or at very least 
> identifying various competing groupings of goals would be quite useful to 
> streamline conversations and to more effectively share ideas.


Using a future market has the attractive property that, since money is often an 
instrumental sub-goal to achieve many of your REAL goals, you can get 
reasonably good information on the goals of people without them having to 
actually reveal their actual goals.
Also, irrationality on the market tends to be punished over time, and a human 
who achieves better-than-human rationality can gain quite a lot of funds on the 
market, thus automatically re-weighing their thoughts higher.

However, persistent irrationalities embedded in the design of the human mind 
will still be difficult to break (it is like a program attempting to escape a 
virtual machine).
And an uninformed market is still going to behave pretty much randomly.

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Towards a means of measuring user support for Soft Forks

2022-04-27 Thread ZmnSCPxj via bitcoin-dev
Good morning Keagan, et al,



> I think there are a few questions surrounding the issue of soft fork 
> activation. Perhaps it warrants zooming out beyond even what my proposal aims 
> to solve. In my mind the most important questions surrounding this process 
> are:
>
> 1. In an ideal world, assuming we could, with perfect certainty, know 
> anything we wanted about the preferences of the user base, what would be the 
> threshold for saying "this consensus change is ready for activation"?
>     1a. Does that threshold change based on the nature of the consensus 
> change (new script type/opcode vs. block size reduction vs. blacklisting 
> UTXOs)?
>     1b. Do different constituencies (end users, wallets, exchanges, coinjoin 
> coordinators, layer2 protocols, miners) have a desired minimum or maximum 
> representation in this "threshold"?

Ideally, in a consensus system, 100% should be the threshold.
After all, the intent of the design of Bitcoin is that everyone should be able 
to use it, and the objection of even 0.01%, who would actively refuse a change, 
implies that set would not be able to use Bitcoin.
i.e. "consensus means 'everyone agrees'"

Against this position, the real world smashes our ideals.
Zooming out, the fraction of people on the globe who use Bitcoin is far less 
than 100%, and there are people who would object to the use of Bitcoin entirely.
This means that the position "consensus means 'everyone agrees'" would imply 
that Bitcoin should be shut down, as it cannot help users who oppose it.
Obviously, the continued use of Bitcoin, by us and others, is not in perfect 
agreement with this position.

Let us reconsider the result of the blocksize debate.
A group of former-Bitcoin-users forked themselves off the Bitcoin blockchain.
But in effect: the opposers to SegWit were simply outright *evicted* from the 
set of people who are in 'everyone', in the "consensus means 'everyone agrees'" 
sense.
(That some of them changed their mind later is immaterial --- their acceptance 
back into the Bitcoin community is conditional on them accepting the current 
Bitcoin rules.)

So obviously there is *some* threshold, that is not 100%, that we would deem 
gives us "acceptable losses".
So: what is the "acceptable loss"?

--

More philosophically: the [Aumann Agreement 
Theorem](https://en.wikipedia.org/wiki/Aumann%27s_agreement_theorem) can be 
bastardized to: "if two people are perfectly rational and start from the same 
information, they *will* agree".

If humans were perfectly rational and the information was complete and 
accurately available beforehand, we could abduct a single convenient human 
being, feed them the information, and ask them what they think, and simply 
follow that.
It would be pointless to abduct a second human, since it would just agree with 
the first (as per the Aumann Agreement Theorem), and abducting humans is not 
easy or cheap.

If humans were perfectly rational and all information was complete, then there 
would be no need for "representation", you just input "this is my goal" and 
"this is the info" and get out "aye" or "nay", and whoever you gave those 
inputs to would not matter, because everyone would agree on the same conclusion.

All democracy/voting and consensus, stem from the real-world flaws of this 
simple theorem.

1.  No human is perfectly rational in the sense required by the Aumann 
Agreement Theorem.
2.  Information may be ambiguous or lacking.
3.  Humans do not want to reveal their *actual* goals and sub-goals, because 
their competitors may be able to block them if the competitors knew what their 
goals/sub-goals were.

Democracy, and the use of some kind of high "threshold" in a "consensus" (ha, 
ha) system, depend on the following assumptions to "fix" the flaws of the 
Aumann Agreement Theorem:

1.  With a large sample of humans, the flaws in rationality (hopefully, ha, ha) 
cancel out, and if we ask them *Really Nicely* they may make an effort to be a 
little nearer to the ideal perfect rationality.
2.  With a large sample of humans, the incompleteness and obscureness of the 
necessary information may now become available in aggregate (hopefully, ha, 
ha), which it might not be individually.
3.  With a large sample of humans, hopefully those with similar goals get to 
aggregate their goals, and thus we can get the most good (achieved goals) for 
the greatest number.

Unfortunately, democracy itself (and therefore, any "consensus" ha ha system 
that uses a high threshold, which is just a more restricted kind of democracy 
that overfavors the status quo) has these flaws in the above assumptions:

1.  Humans belong to a single species with pretty much a single brain design 
("foolish humans!"), thus flaws in their rationality tend to correlate, so 
aggregation will *increase* the error, not decrease it.
2.  Humans have limited brain space ("puny humans!") which they often assign to 
more important things, like whether Johnny Depp is the victim or not, and thus 

Re: [bitcoin-dev] User Resisted Soft Fork for CTV

2022-04-25 Thread ZmnSCPxj via bitcoin-dev
Good morning Zac,

> On Mon, 25 Apr 2022 at 07:36, ZmnSCPxj  wrote
>
> > CTV *can* benefit layer 2 users, which is why I switched from vaguely 
> > apathetic to CTV, to vaguely supportive of it.
>
>
> Other proposals exist that also benefit L2 solutions. What makes you support 
> CTV specifically?

It is simple to implement, and a pure `OP_CTV` SCRIPT on a P2WSH / P2SH is only 
32 bytes + change on the output and 32 bytes + change on the input/witness, 
compared to signature-based schemes which require at least 32 bytes + change on 
the output and 64 bytes + change on the witness ***IF*** they use the Taproot 
format (and since we currently gate the Taproot format behind actual Taproot 
usages, any special SCRIPT that uses Taproot-format signatures would need at 
least the 33-byte internal pubkey revelation; if we settle with the old 
signature format, then that is 73 bytes for the signature).
To my knowledge as well, hashes (like `OP_CTV` uses) are CPU-cheaper (and 
memory-cheaper?) than even highly-optimized `libsecp256k1` signature 
validation, and (to my knowledge) you cannot use batch validation for 
SCRIPT-based signature checks.
It definitely does not enable recursive covenants, which I think deserve more 
general research and thinking before we enable recursive covenants.

Conceptually, I see `OP_CTV` as the "AND" to the "OR" of MAST.
In both cases, you have a hash-based tree, but in `OP_CTV` you want *all* these 
pre-agreed cases, while in MAST you want *one* of these pre-agreed cases.

Which is not to say that other proposals do not benefit L2 solutions *more* 
(`SIGHASH_ANYPREVOUT` when please?), but other proposals are signature-based 
and would be larger in this niche.

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] User Resisted Soft Fork for CTV

2022-04-24 Thread ZmnSCPxj via bitcoin-dev
Good morning Peter,

>
> On April 22, 2022 11:03:51 AM GMT+02:00, Zac Greenwood via bitcoin-dev 
> bitcoin-dev@lists.linuxfoundation.org wrote:
>
> > I like the maxim of Peter Todd: any change of Bitcoin must benefit all
> > users. This means that every change must have well-defined and transparent
> > benefits. Personally I believe that the only additions to the protocol that
> > would still be acceptable are those that clearly benefit layer 2 solutions
> > such as LN and do not carry the dangerous potential of getting abused by
> > freeloaders selling commercial services on top of “free” eternal storage on
> > the blockchain.
>
>
> To strengthen your point: benefiting "all users" can only be done by 
> benefiting layer 2 solutions in some way, because it's inevitable that the 
> vast majority of users will use layer 2 because that's the only known way 
> that Bitcoin can scale.

I would like to point out that CTV is usable in LN.
In particular, instead of hosting all outputs (remote, local, and all the 
HTLCs) directly on the commitment transaction, the commitment transaction 
instead outputs to a CTV-guarded SCRIPT that defers the "real" outputs.

This is beneficial since a common cause of unilateral closes is that one of the 
HTLCs on the channel has timed out.
However, only *that* particular HTLC has to be exposed onchain *right now*, and 
the use of CTV allows only that failing HTLC, plus O(log N) other txes, to be 
published.
The CTV-tree can even be rearranged so that HTLCs with closer timeouts are 
nearer to the root of the CTV-tree.
This allows the rest of the unilateral close to be resolved later, if right now 
there is block space congestion (we only really need to deal with the sole HTLC 
that is timing out right now, the rest can be done later when block space is 
less tight).
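
As a toy illustration of the CTV-tree shape (not actual consensus code: 
`hashlib.sha256` here merely stands in for the real BIP-119 template hash, and 
each internal node stands for a deferred transaction whose single output splits 
into at most two children):

    import hashlib
    import math

    def ctv_hash(*child_commitments):
        # stand-in for the template hash of a tx paying to the given children
        return hashlib.sha256(b"".join(child_commitments)).digest()

    def build_ctv_tree(leaves):
        layers = [leaves]
        while len(layers[-1]) > 1:
            prev = layers[-1]
            layers.append([ctv_hash(*prev[i:i+2])
                           for i in range(0, len(prev), 2)])
        return layers        # layers[-1][0] is the commitment tx's lone output

    htlc_outputs = [hashlib.sha256(f"htlc-{i}".encode()).digest()
                    for i in range(16)]
    layers = build_ctv_tree(htlc_outputs)

    # Transactions to broadcast *right now* to expose one timed-out HTLC:
    txes_needed = len(layers) - 1
    print(txes_needed, math.ceil(math.log2(len(htlc_outputs))))   # 4 4, not 16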

This is arguably minimal (unilateral closes are rare, though they *do* have 
massive effects on the network, since a single timed-out channel can, during 
short-term block congestion, cause other channels to also time out, which 
worsen the block congestion and leading to cascades of channel closures).

So this objection seems, to me, at least mitigated: CTV *can* benefit layer 2 
users, which is why I switched from vaguely apathetic to CTV, to vaguely 
supportive of it.

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Automatically reverting ("transitory") soft forks, e.g. for CTV

2022-04-24 Thread ZmnSCPxj via bitcoin-dev
Good morning Dave, et al.,

I have not read through *all* the mail on this thread, but have read a fair 
amount of it.

I think the main argument *for* this particular idea is that "it allows the use 
of real-world non-toy funds to prove that this feature is something actual 
users demand".

An idea that has been percolating in my various computation systems is to use 
Smart Contracts Unchained to implement a variant of the Microcode idea I put 
forth some months ago.

Briefly, define a set of "more detailed" opcodes that would allow any general 
computation to be performed.
This is the micro-opcode instruction set.

Then, when a new opcode or behavior is proposed for Bitcoin SCRIPT, create a 
new mapping from Bitcoin SCRIPT opcodes (including the new opcodes / behavior) 
to the micro-opcodes.
This is a microcode.

Then use Smart Contracts Unchained.
This means that we commit to the microcode, plus the SCRIPT that uses the 
microcode, and instead of sending funds to a new version of the Bitcoin SCRIPT 
that uses the new opcode(s), send to a "(n-of-n of users) or (1-of-users and 
(k-of-n of federation))".

This is no worse security-wise than using a federated sidechain, without 
requiring a complete sidechain implementation, and allows the same code (the 
micro-opcode interpreter) to be reused across all ideas.
It may even be worthwhile to include the micro-opcode interpreter into Bitcoin 
Core, so that the mechanics of merging in a new opcode, that was prototyped via 
this mechanism, is easier.

The federation only needs to interpret the micro-opcode instruction set; it 
simply translates the (modified) Bitcoin SCRIPT opcodes to the corresponding 
micro-opcodes and runs that, possibly with reasonable limits on execution time.
Users are not required to trust a particular fixed set of k-of-n federation, 
but may choose any k-of-n they believe is trustworthy.

This idea does not require consensus at any point in time.
It allows "real" funds to be used, thus demonstrating real demand for the 
supposed innovation.
The problem is the effective erosion of security to depending on k-of-n of a 
federation.

Presumably, proponents of a new opcode or feature would run a micro-opcode 
interpreter faithfully, so that users have a positive experience with their new 
opcode, and would carefully monitor and vet the micro-opcode interpreters run 
by other supposed proponents, on the assumption that a sub-goal of such 
proponents would be to encourage use of the new opcode / feature.

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Taro: A Taproot Asset Representation Overlay

2022-04-05 Thread ZmnSCPxj via bitcoin-dev
Good morning vjudeu,

> When I see more and more proposals like this, where things are commited to 
> Taproot outputs, then I think we should start designing "miner-based 
> commitments". If someone is going to make a Bitcoin transaction and add a 
> commitment for zero cost, just by tweaking some Taproot public key, then it 
> is a benefit for the network, because then it is possible to get more things 
> with no additional bytes. Instead of doing "transaction-only", people can do 
> "transaction+commitment" for the same cost, that use case is positive.
>
> But if someone is going to make a Bitcoin transaction only to commit things, 
> where in other case that person would make no transaction at all, then I 
> think we should have some mechanism for "miner-based commitments" that would 
> allow making commitments in a standardized way. We always have one coinbase 
> transaction for each block, it is consensus rule. So, by tweaking single 
> public key in the coinbase transaction, it is possible to fit all commitments 
> in one tweaked key, and even make it logarithmic by forming a tree of 
> commitments.
>
> I think we cannot control user-based commitments, but maybe we should 
> standardize miner-based commitments, for example to have a sorted merkle tree 
> of commitments. Then, it would be possible to check if some commitment is a 
> part of that tree or not (if it is always sorted, then it is present at some 
> specified position or not, so by forming SPV-proof we can quickly prove, if 
> some commitment is or is not a part of some miner Taproot commitment).

You might consider implementing `OP_BRIBE` from Drivechains, then.

Note that if you *want* to have some data committed on the blockchain, you 
*have to* pay for the privilege of doing so --- miners are not obligated to put 
a commitment to *your* data on the coinbase for free.
Thus, any miner-based commitment needs to have a mechanism to offer payments to 
miners to include your commitment.

You might as well just use a transaction, and not tell miners that you want to 
commit data using some tweak of the public key (because the miners might then 
be induced to censor such commitments).

In short: there is no such thing as "other case that person would make no 
transaction at all", because you have to somehow bribe miners to include the 
commitment to your data, and you might as well use existing mechanisms 
(transactions that implicitly pay fees) for your data commitment, and get 
better censorship-resistance and privacy.

Nothing really prevents any transaction-based scheme from having multiple users 
that aggregate their data (losing privacy but aggregating their fees) to make a 
sum commitment and just make a single transaction that pays for the privilege 
of committing to the sum commitment.

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Beyond Jets: Microcode: Consensus-Critical Jets Without Softforks

2022-03-22 Thread ZmnSCPxj via bitcoin-dev
Good morning aj,

> On Tue, Mar 22, 2022 at 05:37:03AM +0000, ZmnSCPxj via bitcoin-dev wrote:
>
> > Subject: Beyond Jets: Microcode: Consensus-Critical Jets Without Softforks
>
> (Have you considered applying a jit or some other compression algorithm
> to your emails?)
>
> > Microcode For Bitcoin SCRIPT
> >
> > =
> >
> > I propose:
> >
> > -   Define a generic, low-level language (the "RISC language").
>
> This is pretty much what Simplicity does, if you optimise the low-level
> language to minimise the number of primitives and maximise the ability
> to apply tooling to reason about it, which seem like good things for a
> RISC language to optimise.
>
> > -   Define a mapping from a specific, high-level language to
> > the above language (the microcode).
> >
> > -   Allow users to sacrifice Bitcoins to define a new microcode.
>
> I think you're defining "the microcode" as the "mapping" here.

Yes.

>
> This is pretty similar to the suggestion Bram Cohen was making a couple
> of months ago:
>
> https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-December/019722.html
> https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-January/019773.html
> https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-January/019803.html
>
> I believe this is done in chia via the block being able to
> include-by-reference prior blocks' transaction generators:
>
> ] transactions_generator_ref_list: List[uint32]: A list of block heights of 
> previous generators referenced by this block's generator.
>
> -   https://docs.chia.net/docs/05block-validation/block_format
>
> (That approach comes at the cost of not being able to do full validation
> if you're running a pruning node. The alternative is to effectively
> introduce a parallel "utxo" set -- where you're mapping the "sacrificed"
> BTC as the nValue and instead of just mapping it to a scriptPubKey for
> a later spend, you're permanently storing the definition of the new
> CISC opcode)
>
>

Yes, the latter is basically what microcode is.

> > We can then support a "RISC" language that is composed of
> > general instructions, such as arithmetic, SECP256K1 scalar
> > and point math, bytevector concatenation, sha256 midstates,
> > bytevector bit manipulation, transaction introspection, and
> > so on.
>
> A language that includes instructions for each operation we can think
> of isn't very "RISC"... More importantly it gets straight back to the
> "we've got a new zk system / ECC curve / ... that we want to include,
> let's do a softfork" problem you were trying to avoid in the first place.

`libsecp256k1` can run on purely RISC machines like ARM, so saying that a 
"RISC" set of opcodes cannot implement some arbitrary ECC curve, when the 
instruction set does not directly support that ECC curve, seems incorrect.

Any new zk system / ECC curve would have to be implementable in C++, so if you 
have micro-operations that would be needed for it, such as XORing two 
multi-byte vectors together, multiplying multi-byte precision numbers, etc., 
then any new zk system or ECC curve would be implementable in microcode.
For that matter, you could re-write `libsecp256k1` there.

> > Then, the user creates a new transaction where one of
> > the outputs contains, say, 1.0 Bitcoins (exact required
> > value TBD),
>
> Likely, the "fair" price would be the cost of introducing however many
> additional bytes to the utxo set that it would take to represent your
> microcode, and the cost it would take to run jit(your microcode script)
> if that were a validation function. Both seem pretty hard to manage.
>
> "Ideally", I think you'd want to be able to say "this old microcode
> no longer has any value, let's forget it, and instead replace it with
> this new microcode that is much better" -- that way nodes don't have to
> keep around old useless data, and you've reduced the cost of introducing
> new functionality.

Yes, but that invites "I accidentally the smart contract" behavior.

> Additionally, I think it has something of a tragedy-of-the-commons
> problem: whoever creates the microcode pays the cost, but then anyone
> can use it and gain the benefit. That might even end up creating
> centralisation pressure: if you design a highly decentralised L2 system,
> it ends up expensive because people can't coordinate to pay for the
> new microcode that would make it cheaper; but if you design a highly
> centralised L2 system, you can just pay for the microcode yourself and
> make it even cheaper.

Th

Re: [bitcoin-dev] Beyond Jets: Microcode: Consensus-Critical Jets Without Softforks

2022-03-22 Thread ZmnSCPxj via bitcoin-dev


Good morning again Russell,

> Good morning Russell,
>
> > Thanks for the clarification.
> > You don't think referring to the microcode via its hash, effectively using 
> > 32-byte encoding of opcodes, is still rather long winded?

For that matter, since an entire microcode represents a language (based on the 
current OG Bitcoin SCRIPT language), with a little more coordination, we could 
entirely replace Tapscript versions --- every Tapscript version is a slot for a 
microcode, and the current OG Bitcoin SCRIPT is just the one in slot `0xc2`.
Filled slots cannot be changed, but new microcodes can use some currently-empty 
Tapscript version slot, and have it properly defined in a microcode 
introduction outpoint.

Then indication of a microcode would take only one byte, that is already needed 
currently anyway.

That does limit us to only 255 new microcodes, thus the cost of one microcode 
would have to be a good bit higher.

Again, remember, microcodes represent an entire language that is an extension 
of OG Bitcoin SCRIPT, not individual operations in that language.

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Beyond Jets: Microcode: Consensus-Critical Jets Without Softforks

2022-03-22 Thread ZmnSCPxj via bitcoin-dev
Good morning Russell,

> Thanks for the clarification.
>
> You don't think referring to the microcode via its hash, effectively using 
> 32-byte encoding of opcodes, is still rather long winded?

A microcode is a *mapping* of `OP_` codes to a variable-length sequence of 
`UOP_` micro-opcodes.
So a microcode hash refers to an entire language of redefined `OP_` codes, not 
each individual opcode in the language.
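
To illustrate (all opcode and micro-opcode names below are hypothetical), a 
microcode is conceptually just a table, and the hash of the whole table is 
what a SCRIPT would reference:

    import hashlib
    import json

    microcode = {
        "OP_CHECKTEMPLATEVERIFY": ["UOP_TX_TEMPLATE_HASH", "UOP_EQUALVERIFY"],
        "OP_MYHTLC":              ["UOP_SHA256", "UOP_PUSH_CONST_0",
                                   "UOP_EQUALVERIFY", "UOP_CHECKSIG"],
    }
    microcode_id = hashlib.sha256(
        json.dumps(microcode, sort_keys=True).encode()).digest()
    # A SCRIPT references microcode_id once; that single reference selects
    # this entire replacement language, not any individual opcode.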

If it costs 1 Bitcoin to create a new microcode, then there are only 21 million 
possible microcodes, and I think about 50 bits of hash is sufficient to specify 
those with low probability of collision.
We could use a 20-byte RIPEMD160-of-SHA256 instead for 160 bits; that should be 
more than sufficient with enough margin.
Though perhaps it is now easier to deliberately attack...

Also, if you have a common SCRIPT whose non-`OP_PUSH` opcodes are more than say 
32 + 1 bytes (or 20 + 1 if using RIPEMD), and you can fit their equivalent 
`UOP_` codes into the max limit for a *single* opcode, you can save bytes by 
redefining some random `OP_` code into the sequence of all the `UOP_` codes.
You would have a hash reference to the microcode, and a single byte for the 
actual "SCRIPT" which is just a jet of the entire SCRIPT.
Users of multiple *different* such SCRIPTs can band together to define a single 
microcode, mapping their SCRIPTs to different `OP_` codes and sharing the cost 
of defining the new microcode that shortens all their SCRIPTs.

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Beyond Jets: Microcode: Consensus-Critical Jets Without Softforks

2022-03-22 Thread ZmnSCPxj via bitcoin-dev
Good morning Russell,

> Setting aside my thoughts that something like Simplicity would make a better 
> platform than Bitcoin Script (due to expression operating on a more narrow 
> interface than the entire stack (I'm looking at you OP_DEPTH)) there is an 
> issue with namespace management.
>
> If I understand correctly, your implication was that once opcodes are 
> redefined by an OP_RETURN transaction, subsequent transactions of that opcode 
> refer to the new microtransaction.  But then we have a race condition between 
> people submitting transactions expecting the outputs to refer to the old code 
> and having their code redefined by the time they do get confirmed  (or worse 
> having them reorged).

No, use of specific microcodes is opt-in: you have to use a specific `0xce` 
Tapscript version, ***and*** refer to the microcode you want to use via the 
hash of the microcode.

The only race condition is reorging out a newly-defined microcode.
This can be avoided by waiting for deep confirmation of a newly-defined 
microcode before actually using it.

But once the microcode introduction outpoint of a particular microcode has been 
deeply confirmed, then your Tapscript can refer to the microcode, and its 
meaning does not change.

Fullnodes may need to maintain multiple microcodes, which is why creating new 
microcodes is expensive; they not only require JIT compilation, they also 
require that fullnodes keep an index that cannot have items deleted.


The advantage of the microcode scheme is that the size of the SCRIPT can be 
used as a proxy for CPU load  just as it is done for current Bitcoin SCRIPT.
As long as the number of `UOP_` micro-opcodes that an `OP_` code can expand to 
is bounded, and we avoid looping constructs, then the CPU load is also bounded 
and the size of the SCRIPT approximates the amount of processing needed, thus 
microcode does not require a softfork to modify weight calculations in the 
future.

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


[bitcoin-dev] Beyond Jets: Microcode: Consensus-Critical Jets Without Softforks

2022-03-21 Thread ZmnSCPxj via bitcoin-dev
Good morning list,

It is entirely possible that I have gotten into the deep end and am now 
drowning in insanity, but here goes

Subject: Beyond Jets: Microcode: Consensus-Critical Jets Without Softforks

Introduction
============

Recent (Early 2022) discussions on the bitcoin-dev mailing
list have largely focused on new constructs that enable new
functionality.

One general idea can be summarized this way:

* We should provide a very general language.
  * Then later, once we have learned how to use this language,
we can softfork in new opcodes that compress sections of
programs written in this general language.

There are two arguments against this style:

1.  One of the most powerful arguments the "general" side of
the "general v specific" debate is that softforks are
painful because people are going to keep reiterating the
activation parameters debate in a memoryless process, so
we want to keep the number of softforks low.
* So, we should just provide a very general language and
  never softfork in any other change ever again.
2.  One of the most powerful arguments the "general" side of
the "general v specific" debate is that softforks are
painful because people are going to keep reiterating the
activation parameters debate in a memoryless process, so
we want to keep the number of softforks low.
* So, we should just skip over the initial very general
  language and individually activate small, specific
  constructs, reducing the needed softforks by one.

By taking a page from microprocessor design, it seems to me
that we can use the same above general idea (a general base
language where we later "bless" some sequence of operations)
while avoiding some of the arguments against it.

Digression: Microcodes In CISC Microprocessors
----------------------------------------------

In the 1980s and 1990s, two competing microprocessor design
paradigms arose:

* Complex Instruction Set Computing (CISC)
  - Few registers, many addressing/indexing modes, variable
instruction length, many obscure instructions.
* Reduced Instruction Set Computing (RISC)
  - Many registers, usually only immediate and indexed
addressing modes, fixed instruction length, few
instructions.

In CISC, the microprocessor provides very application-specific
instructions, often with a small number of registers with
specific uses.
The instruction set was complicated, and often required
multiple specific circuits for each application-specific
instruction.
Instructions had varying sizes and varying number of cycles.

In RISC, the microprocessor provides fewer instructions, and
programmers (or compilers) are supposed to generate the code
for all application-specific needs.
The processor provided large register banks which could be
used very generically and interchangeably.
Instructions had the same size and every instruction took a
fixed number of cycles.

In CISC you usually had shorter code which could be written
by human programmers in assembly language or machine language.
In RISC, you generally had longer code, often difficult for
human programmers to write, and you *needed* a compiler to
generate it (unless you were very careful, or insane enough
you could scroll over multiple pages of instructions without
becoming more insane), or else you might forget about stuff
like jump slots.

For the most part, RISC lost, since most modern processors
today are x86 or x86-64, an instruction set with varying
instruction sizes, varying number of cycles per instruction,
and complex instructions with application-specific uses.

Or at least, it *looks like* RISC lost.
In the 90s, Intel was struggling since their big beefy CISC
designs were becoming too complicated.
Bugs got past testing and into mass-produced silicon.
RISC processors were beating the pants off 386s in terms of
raw number of computations per second.

RISC processors had the major advantage that they were
inherently simpler, due to having fewer specific circuits
and filling up their silicon with general-purpose registers
(which are large but very simple circuits) to compensate.
This meant that processor designers could fit more of the
design in their merely human meat brains, and were less
likely to make mistakes.
The fixed number of cycles per instruction made it trivial
to create a fixed-length pipeline for instruction processing,
and practical RISC processors could deliver one instruction
per clock cycle.
Worse, the simplicity of RISC meant that smaller and less
experienced teams could produce viable competitors to the
Intel x86s.

So what Intel did was to use a RISC processor, and add a
special Instruction Decoder unit.
The Instruction Decoder would take the CISC instruction
stream accepted by classic Intel x86 processors, and emit
RISC instructions for the internal RISC processor.
CISC instructions might be variable length and have variable
number of cycles, but the emitted RISC instructions were
individually 

Re: [bitcoin-dev] Speedy Trial

2022-03-18 Thread ZmnSCPxj via bitcoin-dev
Good morning Billy,

> @Jorge
> > Any user polling system is going to be vulnerable to sybil attacks.
>
> Not the one I'll propose right here. What I propose specifically is a 
> coin-weighted signature-based poll with the following components:
> A. Every pollee signs messages like <{support:10%}> for each UTXO they want to respond to the poll with.
> B. A signed message like that is valid only while that UTXO has not been 
> spent.
> C. Poll results are considered only at each particular block height, where 
> the support and opposition responses are weighted by the UTXO amount (and the 
> support/oppose fraction in the message). This means you'd basically see a 
> rolling poll through the blockchain as new signed poll messages come in and 
> as their UTXOs are spent. 
>
> This is not vulnerable to sybil attacks because it requires access to UTXOs 
> and response-weight is directly tied to UTXO amount. If someone signs a poll 
> message with a key that can unlock (or is in some other designated way 
> associated with) a UTXO, and then spends that UTXO, their poll response stops 
> being counted for all block heights after the UTXO was spent. 
>
> Why put support and oppose fractions in the message? Who would want to both 
> support and oppose something? Any multiple participant UTXO would. Eg 
> lightning channels would, where each participant disagrees with the other. 
> They need to sign together, so they can have an agreement to sign for the 
> fractions that match their respective channel balances (using a force channel 
> close as a last resort against an uncooperative partner as usual). 

This does not quite work, as lightning channel balances can be changed at any 
time.
I might agree that you have 90% of the channel and I have 10% of the channel 
right now, but if you then send a request to forward your funds out, I need to 
be able to invalidate the previous signal, one that is tied to the fulfillment 
of the forwarding request.
This begins to add complexity.

More pointedly, if the signaling is done onchain, then a forward on the LN 
requires that I put up invalidations of previous signals, also onchain, 
otherwise you could cheaty cheat your effective balance by moving your funds 
around.
But the point of LN is to avoid putting typical everyday forwards onchain.

> This does have the potential issue of public key exposure prior to spending 
> for current addresses. But that could be fixed with a new address type that 
> has two public keys / spend paths: one for spending and one for signing. 

This issue is particularly relevant to vault constructions.
Typically a vault has a "cold" key that is the master owner of the fund, with 
"hot" keys having partial access.
Semantically, we would consider the "cold" key to be the "true" owner of the 
fund, with "hot" key being delegates who are semi-trusted, but not as trusted 
as the "cold" key.

So, we should consider a vote from the "cold" key only.
However, the point is that the "cold" key wants to be kept offline as much as 
possible for security.

I suppose the "cold" key could be put online just once to create the signal 
message, but vault owners might not want to vote because of the risk, and their 
weight might be enough to be important in your voting scheme (consider that the 
point of vaults is to protect large funds).


A sub-issue here with the spend/signal pubkey idea is that if I need to be able 
to somehow indicate that a long-term-cold-storage UTXO has a signaling pubkey, 
I imagine this mechanism of indicating might itself require a softfork, so you 
have a chicken-and-egg problem...

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Covenants and feebumping

2022-03-16 Thread ZmnSCPxj via bitcoin-dev
Good morning Antoine,

> For "hot contracts" a signature challenge is used to achieve the same. I know 
> the latter is imperfect, since
> the lower the uptime risk (increase the number of network monitors) the 
> higher the DOS risk (as you duplicate
> the key).. That's why i asked if anybody had some thoughts about this and if 
> there was a cleverer way of doing
> it.

Okay, let me see if I understand your concern correctly.

When using a signature challenge, the concern is that you need to presign 
multiple versions of a transaction with varying feerates.

And you have a set of network monitors / watchtowers that are supposed to watch 
the chain on your behalf in case your ISP suddenly hates you for no reason.

The more monitors there are, the more likely that one of them will be corrupted 
by a miner and jump to the highest-feerate version, overpaying fees and making 
miners very happy.
Such is third-party trust.

Is my understanding correct?


A cleverer way, which requires consolidating (but is unable to eliminate) 
third-party trust, would be to use a DLC oracle.
The DLC oracle provides a set of points corresponding to a set of feerate 
ranges, and commits to publishing the scalar of one of those points at some 
particular future block height.
Ostensibly, the scalar it publishes is the one of the point that corresponds to 
the feerate range found at that future block height.

You then create adaptor signatures for each feerate version, corresponding to 
the feerate ranges the DLC oracle could eventually publish.
The adaptor signatures can only be completed if the DLC oracle publishes the 
corresponding scalar for that feerate range.

You can then send the adaptor signatures to multiple watchtowers, who can only 
publish one of the feerate versions, unless the DLC oracle is hacked and 
publishes multiple scalars (at which point the DLC oracle protocol reveals a 
privkey of the DLC oracle, which should be usable for slashing some bond of the 
DLC oracle).
This prevents any of them from publishing the highest-feerate version, as the 
adaptor signature cannot be completed unless that is what the oracle published.
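
A toy sketch of the adaptor-signature mechanics being relied on here, using 
Schnorr-style signatures over modular exponentiation instead of secp256k1 (the 
group parameters, hash-to-scalar, and all names are illustrative assumptions, 
not the real DLC or Lightning protocol):

    import hashlib
    import secrets

    p = 2**255 - 19      # toy prime modulus; the group is integers mod p
    q = p - 1            # exponents are reduced mod the group order
    g = 5                # toy generator

    def H(*vals):
        h = hashlib.sha256(b"|".join(str(v).encode() for v in vals)).digest()
        return int.from_bytes(h, "big") % q

    x = secrets.randbelow(q); X = pow(g, x, p)   # signer (channel) key
    t = secrets.randbelow(q); T = pow(g, t, p)   # oracle scalar/point for one
                                                 # particular feerate range
    msg = "commitment-tx-medium-feerate"
    k = secrets.randbelow(q)
    R = (pow(g, k, p) * T) % p                   # nonce commitment includes T
    e = H(R, X, msg)
    s_pre = (k + e * x) % q                      # pre-signature: not yet valid

    # Anyone can check the pre-signature against the oracle's point T:
    assert (pow(g, s_pre, p) * T) % p == (R * pow(X, e, p)) % p

    # Only once the oracle publishes t can the watchtower complete it:
    s = (s_pre + t) % q
    assert pow(g, s, p) == (R * pow(X, e, p)) % p    # ordinary verification

    # Publishing s also reveals t = s - s_pre, as usual for adaptor signatures:
    assert (s - s_pre) % q == t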

There are still drawbacks:

* Third-party trust risk: the oracle can still lie.
  * DLC oracles are prevented from publishing multiple scalars; they cannot be 
prevented from publishing a single wrong scalar.
* DLCs must be time bound.
  * DLC oracles commit to publishing a particular point at a particular fixed 
time.
  * For "hot" dynamic protocols, you need the ability to invoke the oracle at 
any time, not a particular fixed time.

The latter probably makes this unusable for hot protocols anyway, so maybe not 
so clever.

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Jets (Was: `OP_FOLD`: A Looping Construct For Bitcoin SCRIPT)

2022-03-16 Thread ZmnSCPxj via bitcoin-dev
Good morning Billy,

> > I think we would want to have a cleanstack rule at some point
>
> Ah is this a rule where a script shouldn't validate if more than just a true 
> is left on the stack? I can see how that would prevent the non-soft-fork 
> version of what I'm proposing. 

Yes.
There was also an even stronger cleanstack rule where the stack and alt stack 
are totally empty.
This is because a SCRIPT really just returns "valid" or "invalid", and 
`OP_VERIFY` can be trivially appended to a SCRIPT that leaves a single stack 
item to convert to a SCRIPT that leaves no stack items and retains the same 
behavior.

>
> > How large is the critical mass needed?
>
> Well it seems we've agreed that were we going to do this, we would want to at 
> least do a soft-fork to make known jet scripts lighter weight (and unknown 
> jet scripts not-heavier) than their non-jet counterparts. So given a 
> situation where this soft fork happens, and someone wants to implement a new 
> jet, how much critical mass would be needed for the network to get some 
> benefit from the jet? Well, the absolute minimum for some benefit to happen 
> is that two nodes that support that jet are connected. In such a case, one 
> node can send that jet scripted transaction along without sending the data of 
> what the jet stands for. The jet itself is pretty small, like 2 or so bytes. 
> So that does impose a small additional cost on nodes that don't support a 
> jet. For 100,000 nodes, that means 200,000 bytes of transmission would need 
> to be saved for a jet to break even. So if the jet stands for a 22 byte 
> script, it would break even when 10% of the network supported it. If the jet 
> stood for a 102 byte script, it would break even when 2% of the network 
> supported it. So how much critical mass is necessary for it to be worth it 
> depends on what the script is. 

The math seems reasonable.
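
Restating the quoted break-even arithmetic (under the same assumptions as 
above: a 2-byte jet identifier carried as overhead by nodes that do not 
understand it):

    jet_bytes = 2

    def breakeven_fraction(script_bytes):
        # fraction of nodes that must support the jet before the bytes saved
        # between supporting nodes equal the overhead imposed on the rest
        saved_per_supporting_node = script_bytes - jet_bytes
        return jet_bytes / saved_per_supporting_node

    print(breakeven_fraction(22))    # 0.1  -> about 10% of the network
    print(breakeven_fraction(102))   # 0.02 -> about 2% of the network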


> The question I have is: where would the constants table come from? Would it 
> reference the original positions of items on the witness stack? 

The constants table would be part of the SCRIPT puzzle, and thus not in the 
witness solution.
I imagine the SCRIPT would be divided into two parts: (1) a table of constants 
and (2) the actual opcodes to execute.
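
A hypothetical sketch of that split (all field and opcode names are 
illustrative): two contracts that differ only in their constants can then share 
a byte-identical, easily-jetted code section:

    puzzle_a = {
        "constants": ["pubkey_A1", "pubkey_A2", "payment_hash_1", 800_000],
        "code": ["OP_CONST 2", "OP_HASH160_EQUALVERIFY",
                 "OP_CONST 0", "OP_CHECKSIG"],
    }
    puzzle_b = {
        "constants": ["pubkey_B1", "pubkey_B2", "payment_hash_2", 800_144],
        "code": puzzle_a["code"],     # identical opcode stream, different data
    }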


Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] bitcoin scripting and lisp

2022-03-16 Thread ZmnSCPxj via bitcoin-dev
Good morning Bram,

> On Wed, Mar 9, 2022 at 6:30 AM ZmnSCPxj  wrote:
>
> > I am pointing out that:
> >
> > * We want to save bytes by having multiple inputs of a transaction use the 
> > same single signature (i.e. sigagg).
> >
> > is not much different from:
> >
> > * We want to save bytes by having multiple inputs of a transaction use the 
> > same `scriptPubKey` template.
>
> Fair point. In the past Bitcoin has been resistant to such things because for 
> example reusing pubkeys can save you from having to separately pay for the 
> reveals of all of them but letting people get credit for that incentivizes 
> key reuse which isn't such a great thing.

See paragraph below:

> > > > For example you might have multiple HTLCs, with mostly the same code 
> > > > except for details like who the acceptor and offerrer are, exact hash, 
> > > > and timelock, and you could claim multiple HTLCs in a single tx and 
> > > > feed the details separately but the code for the HTLC is common to all 
> > > > of the HTLCs.
> > > > You do not even need to come from the same protocol if multiple 
> > > > protocols use the same code for implementing HTLC.

Note that the acceptor and offerrer are represented by pubkeys here.
So we do not want to encourage key reuse, we want to encourage reuse of *how* 
the pubkeys are used (but rotate the pubkeys).

In the other thread on Jets in bitcoin-dev I proposed moving data like pubkeys 
into a separate part of the SCRIPT in order to (1) not encourage key reuse and 
(2) make it easier to compress the code.
In LISP terms, it would be like requiring that top-level code have a `(let 
...)` form around it where the assigned data *must* be constants or `quote`, 
and disallowing constants and `quote` elsewhere, then any generated LISP code 
has to execute in the same top-level environment defined by this top-level 
`let`.

So you can compress the code by using some metaprogramming where LISP generates 
LISP code but you still need to work within the confines of the available 
constants.

> > > HTLCs, at least in Chia, have embarrassingly little code in them. Like, 
> > > so little that there's almost nothing to compress.
> >
> > In Bitcoin at least an HTLC has, if you remove the `OP_PUSH`es, by my 
> > count, 13 bytes.
> > If you have a bunch of HTLCs you want to claim, you can reduce your witness 
> > data by 13 bytes minus whatever number of bytes you need to indicate this.
> > That amounts to about 3 vbytes per HTLC, which can be significant enough to 
> > be worth it (consider that Taproot moving away from encoded signatures 
> > saves only 9 weight units per signature, i.e. about 2 vbytes).
>
> Oh I see. That's already extremely small overhead. When you start optimizing 
> at that level you wind up doing things like pulling all the HTLCs into the 
> same block to take the overhead of pulling in the template only once.
>  
>
> > Do note that PTLCs remain more space-efficient though, so forget about 
> > HTLCs and just use PTLCs.
>
> It makes a lot of sense to make a payment channel system using PTLCs and 
> eltoo right off the bat but then you wind up rewriting everything from 
> scratch.

Bunch of #reckless devs implemented Lightning with just HTLCs so that is that, 
*shrug*, gotta wonder what those people were thinking, not waiting for PTLCs.

>  
>
> > > > This does not apply to current Bitcoin since we no longer accept a 
> > > > SCRIPT from the spender, we now have a witness stack.
> > >
> > > My mental model of Bitcoin is to pretend that segwit was always there and 
> > > the separation of different sections of data is a semantic quibble.
> >
> > This is not a semantic quibble --- `witness` contains only the equivalent 
> > of `OP_PUSH`es, while `scriptSig` can in theory contain non-`OP_PUSH` 
> > opcodes.
> > xref. `1 RETURN`.
>
> It's very normal when you're using lisp for snippets of code to be passed in 
> as data and then verified and executed. That's enabled by the extreme 
> adherence to no side effects.

Quining still allows Turing-completeness and infinite loops, which *is* still a 
side effect, though as I understand it ChiaLISP uses the "Turing-complete but 
with a max number of ops" kind of totality.

> > This makes me kinda wary of using such covenant features at all, and if 
> > stuff like `SIGHASH_ANYPREVOUT` or `OP_CHECKTEMPLATEVERIFY` are not added 
> > but must be reimplemented via a covenant feature, I would be saddened, as I 
> > now have to contend with the complexity of covenant features and carefully 
> > check that `SIGHASH_ANYPREVOUT`/`OP_CHECKTEMPLATEVERIFY` were implemented 
> > correctly.
>
> Even the 'standard format' transaction which supports taproot and graftroot 
> is implemented in CLVM. The benefit of this approach is that new 
> functionality can be implemented and deployed immediately rather than having 
> to painstakingly go through a soft fork deployment for each thing.

Wow, just wow.

Regards,
ZmnSCPxj

Re: [bitcoin-dev] bitcoin scripting and lisp

2022-03-16 Thread ZmnSCPxj via bitcoin-dev
Good morning aj et al.,

> On Tue, Mar 08, 2022 at 03:06:43AM +0000, ZmnSCPxj via bitcoin-dev wrote:
>
> > > > They're radically different approaches and
> > > > it's hard to see how they mix. Everything in lisp is completely 
> > > > sandboxed,
> > > > and that functionality is important to a lot of things, and it's really
> > > > normal to be given a reveal of a scriptpubkey and be able to rely on 
> > > > your
> > > > parsing of it.
> > > > The above prevents combining puzzles/solutions from multiple coin 
> > > > spends,
> > > > but I don't think that's very attractive in bitcoin's context, the way
> > > > it is for chia. I don't think it loses much else?
> > > > But cross-input signature aggregation is a nice-to-have we want for 
> > > > Bitcoin, and, to me, cross-input sigagg is not much different from 
> > > > cross-input puzzle/solution compression.
>
> Signature aggregation has a lot more maths and crypto involved than
> reversible compression of puzzles/solutions. I was more meaning
> cross-transaction relationships rather than cross-input ones though.

My point is that in the past we were willing to discuss the complicated crypto 
math around cross-input sigagg in order to save bytes, so it seems to me that 
cross-input compression of puzzles/solutions at least merits a discussion, 
since it would require a lot less heavy crypto math, and *also* save bytes.

> > > I /think/ the compression hook would be to allow you to have the puzzles
> > > be (re)generated via another lisp program if that was more efficient
> > > than just listing them out. But I assume it would be turtles, err,
> > > lisp all the way down, no special C functions like with jets.
> > > Eh, you could use Common LISP or a recent-enough RnRS Scheme to write a 
> > > cryptocurrency node software, so "special C function" seems to 
> > > overprivilege C...
>
> Jets are "special" in so far as they are costed differently at the
> consensus level than the equivalent pure/jetless simplicity code that
> they replace. Whether they're written in C or something else isn't the
> important part.
>
> By comparison, generating lisp code with lisp code in chia doesn't get
> special treatment.

Hmm, what exactly do you mean here?

If I have a shorter piece of code that expands to a larger piece of code 
because metaprogramming, is it considered the same cost as the larger piece of 
code (even if not all parts of the larger piece of code are executed, e.g. 
branches)?

Or is the cost simply proportional to the number of operations actually 
executed?

I think there are two costs here:

* Cost of bytes to transmit over the network.
* Cost of CPU load.

Over here in Bitcoin we have been mostly conflating the two, to the point that 
Taproot even eliminates unexecuted branches from being transmitted over the 
network so that bytes transmitted is approximately equal to opcodes executed.

It seems to me that lisp-generating-lisp compression would reduce the cost of 
bytes transmitted, but increase the CPU load (first the metaprogram runs, and 
*then* the produced program runs).

> (You could also use jets in a way that doesn't impact consensus just
> to make your node software more efficient in the normal case -- perhaps
> via a JIT compiler that sees common expressions in the blockchain and
> optimises them eg)

I believe that is relevant in the other thread about Jets that I and Billy 
forked off from `OP_FOLD`?


Over in that thread, we seem to have largely split jets into two types:

* Consensus-critical jets which need a softfork but reduce the weight of the 
jetted code (and which are invisible to pre-softfork nodes).
* Non-consensus-critical jets which only need relay change and reduces bytes 
sent, but keeps the weight of the jetted code.

It seems to me that lisp-generating-lisp compression would roughly fall into 
the "non-consensus-critical jets", roughly.


> On Wed, Mar 09, 2022 at 02:30:34PM +, ZmnSCPxj via bitcoin-dev wrote:
>
> > Do note that PTLCs remain more space-efficient though, so forget about 
> > HTLCs and just use PTLCs.
>
> Note that PTLCs aren't really Chia-friendly, both because chia doesn't
> have secp256k1 operations in the first place, but also because you can't
> do a scriptless-script because the information you need to extract
> is lost when signatures are non-interactively aggregated via BLS --
> so that adds an expensive extra ECC operation rather than reusing an
> op you're already paying for (scriptless script PTLCs) or just adding
> a cheap hash operation (HTLCs).
>
> (Pretty sure Chia could do (= PTLC (pubkey_for_exp PREIMAGE)) for
> preimage re

Re: [bitcoin-dev] Jets (Was: `OP_FOLD`: A Looping Construct For Bitcoin SCRIPT)

2022-03-09 Thread ZmnSCPxj via bitcoin-dev
Good morning Billy,

> Hi ZmnSCPxj,
>
> >  Just ask a bunch of fullnodes to add this 1Mb of extra ignored data in 
> >this tiny 1-input-1-output transaction so I pay only a small fee
>
> I'm not suggesting that you wouldn't have to pay a fee for it. You'd pay a 
> fee for it as normal, so there's no DOS vector. Doesn't adding extra witness 
> data do what would be needed here? Eg simply adding extra data onto the 
> witness script that will remain unconsumed after successful execution of the 
> script?

I think we would want to have a cleanstack rule at some point (do not remember 
out-of-hand if Taproot already enforces one).

So now being nice to the network is *more* costly?
That just *dis*incentivizes jet usage.

> > how do new jets get introduced?
>
> In scenario A, new jets get introduced by being added to bitcoin software as 
> basically relay rules. 
>
> > If a new jet requires coordinated deployment over the network, then you 
> > might as well just softfork and be done with it.
>
> It would not need a coordinated deployment. However, the more nodes that 
> supported that jet, the more efficient using it would be for the network. 
>
> > If a new jet can just be entered into some configuration file, how do you 
> > coordinate those between multiple users so that there *is* some benefit for 
> > relay?
>
> When a new version of bitcoin comes out, people generally upgrade to it 
> eventually. No coordination is needed. 100% of the network need not support a 
> jet. Just some critical mass to get some benefit. 

How large is the critical mass needed?

If you use witness to transport jet information across non-upgraded nodes, then 
that disincentivizes use of jets and you can only incentivize jets by softfork, 
so you might as well just get a softfork.

If you have no way to transport jet information from an upgraded through a 
non-upgraded back to an upgraded node, then I think you need a fairly large 
buy-in from users before non-upgraded nodes are rare enough that relay is not 
much affected, and if the required buy-in is large enough, you might as well 
softfork.

> > Having a static lookup table is better since you can pattern-match on 
> > strings of specific, static length
>
> Sorry, better than what exactly? 

Than using a dynamic lookup table, which is how I understood your previous 
email about "scripts in the 1000 past blocks".

> > How does the unupgraded-to-upgraded boundary work?
> 
> When the non-jet aware node sends this to a jet-aware node, that node would 
> see the extra items on the stack after script execution, and would interpret 
> them as an OP_JET call specifying that OP_JET should replace the witness 
> items starting at index 0 with `1b5f03cf  OP_JET`. It does this and then 
> sends that along to the next hop.

It would have to validate as well that the SCRIPT sub-section matches the jet, 
else I could pretend to be a non-jet-aware node and give you a SCRIPT 
sub-section that does not match the jet and would cause your validation to 
diverge from other nodes.

Adler32 seems a bit short though, it seems to me that it may lead to two 
different SCRIPT subsections hashing to the same hash.
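
To make that validation step concrete, here is a minimal Python sketch (the jet 
id and bytes below are made up for illustration, not taken from any actual 
proposal):

    import zlib

    def jet_matches(jet_id: int, subsection: bytes) -> bool:
        # Before swapping "<jet_id> OP_JET" back in, check that the
        # peer-provided SCRIPT sub-section actually hashes to the claimed id;
        # otherwise a peer can make your validation diverge from other nodes.
        return zlib.adler32(subsection) == jet_id

    print(jet_matches(0x1b5f03cf, bytes.fromhex("5193")))  # False for these made-up bytes
    # Caveat: Adler32 is only 32 bits (and weak on short inputs), so two
    # different sub-sections mapping to the same id is a realistic concern.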

Suppose I have two different node softwares.
One uses a particular interpretation for a particular Adler32 hash.
The other uses a different interpretation.
If we are not careful, if these two jet-aware software talk to each other, they 
will ban each other from the network and cause a chainsplit.
Since the Bitcoin software is open source, nothing prevents anyone from using a 
different SCRIPT subsection for a particular Adler32 hash if they find a 
collision and can somehow convince people to run their modified software.

> In order to support this without a soft fork, this extra otherwise 
> unnecessary data would be needed, but for jets that represent long scripts, 
> the extra witness data could be well worth it (for the network). 
>
> However, this extra data would be a disincentive to do transactions this way, 
> even when its better for the network. So it might not be worth doing it this 
> way without a soft fork. But with a soft fork to upgrade nodes to support an 
> OP_JET opcode, the extra witness data can be removed (replaced with 
> out-of-band script fragment transmission for nodes that don't support a 
> particular jet). 

Which is why I pointed out that each individual jet may very well require a 
softfork, or enough buy-in that you might as well just softfork.

> One interesting additional thing that could be done with this mechanism is to 
> add higher-order function ability to jets, which could allow nodes to add 
> OP_FOLD or similar functions as a jet without requiring additional soft 
> forks.  Hypothetically, you could imagine a jet script that uses an OP_LOOP 
> jet be written as follows:
>
> 5             # Loop 5 times
> 1             # Loop the next 1 operation
> 3c1g14ad 
> OP_JET
> OP_ADD  # The 1 operation to loop
>
> The above would sum up 5 numbers from the 

Re: [bitcoin-dev] Meeting Summary & Logs for CTV Meeting #5

2022-03-09 Thread ZmnSCPxj via bitcoin-dev
Good morning Jorge,

> What is ST? If it may be a reason to oppose CTV, why not talk about it more 
> explicitly so that others can understand the criticisms?

ST is Speedy Trial.
Basically, a short softfork attempt with `lockinontimeout=false` is first done.
If this fails, then developers stop and think and decide whether to offer a 
UASF `lockinontimeout=true` version or not.
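
A rough sketch of the flow, in Python, with signalling numbers that are only 
illustrative (not the exact deployment parameters):

    THRESHOLD = 1815     # ~90% of a 2016-block signalling period (illustrative)

    def speedy_trial(signals_per_period, timeout_periods):
        for count in signals_per_period[:timeout_periods]:
            if count >= THRESHOLD:
                return "LOCKED_IN"   # activation then follows at a fixed later height
        return "FAILED"              # developers stop and think; maybe a UASF later

    print(speedy_trial([1700, 1830, 0], timeout_periods=3))   # -> LOCKED_IN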

Jeremy showed a state diagram of Speedy Trial on the IRC, which was complicated 
enough that I ***joked*** that it would be better to not implement `OP_CTV` and 
just use One OPCODE To Rule Them All, a.k.a. `OP_RING`.

If you had actually read the IRC logs you would have understood it, I even 
explicitly asked "ST ?=" so that the IRC logs have it explicitly listed as 
"Speedy Trial".


> It seems that criticism isn't really that welcomed and is just explained away.

It seems that you are trying to grasp at any criticism and thus fell victim to 
a joke.

> Perhaps it is just my subjective perception.
> Sometimes it feels we're going from "don't trust, verify" to "just trust 
> jeremy rubin", i hope this is really just my subjective perception. Because I 
> think it would be really bad that we started to blindly trust people like 
> that, and specially jeremy.

Why "specially jeremy"?
Any particular information you think is relevant?

The IRC logs were linked, you know, you could have seen what was discussed.

In particular, on the other thread you mention:

> We should talk more about activation mechanisms and how users should be able 
> to actively resist them more.

Speedy Trial means that users with mining hashpower can block the initial 
Speedy Trial, and the failure to lock in ***should*** cause the developers to 
stop-and-listen.
If the developers fail to stop-and-listen, then a counter-UASF can be written 
which *rejects* blocks signalling *for* the upgrade; this will chainsplit from 
a pro-UASF `lockinontimeout=true` client, and clients running the initial 
Speedy Trial code will follow whichever side has more hashpower.

If we assume that hashpower follows price, then users who are for or against a 
particular softfork will be able to resist the Speedy Trial, and if developers 
release a UASF `lockinontimeout=true` later, they will have the choice to 
refuse to run the UASF, or even to run a counter-UASF.


Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] bitcoin scripting and lisp

2022-03-09 Thread ZmnSCPxj via bitcoin-dev
Good morning Bram,

> On Mon, Mar 7, 2022 at 7:06 PM ZmnSCPxj  wrote:
>
> > But cross-input signature aggregation is a nice-to-have we want for 
> > Bitcoin, and, to me, cross-input sigagg is not much different from 
> > cross-input puzzle/solution compression.
>
> Cross-input signature aggregation has a lot of headaches unless you're using 
> BLS signatures, in which case you always aggregate everything all the time 
> because it can be done after the fact noninteractively. In that case it makes 
> sense to have a special aggregated signature which always comes with a 
> transaction or block. But it might be a bit much to bundle both lisp and BLS 
> support into one big glop.

You misunderstand my point.

I am not saying "we should add sigagg and lisp together!"

I am pointing out that:

* We want to save bytes by having multiple inputs of a transaction use the same 
single signature (i.e. sigagg).

is not much different from:

* We want to save bytes by having multiple inputs of a transaction use the same 
`scriptPubKey` template.

> > For example you might have multiple HTLCs, with mostly the same code except 
> > for details like who the acceptor and offerrer are, exact hash, and 
> > timelock, and you could claim multiple HTLCs in a single tx and feed the 
> > details separately but the code for the HTLC is common to all of the HTLCs.
> > You do not even need to come from the same protocol if multiple protocols 
> > use the same code for implementing HTLC.
>
> HTLCs, at least in Chia, have embarrassingly little code in them. Like, so 
> little that there's almost nothing to compress.

In Bitcoin at least an HTLC has, if you remove the `OP_PUSH`es, by my count, 13 
bytes.
If you have a bunch of HTLCs you want to claim, you can reduce your witness 
data by 13 bytes minus whatever number of bytes you need to indicate this.
That amounts to about 3 vbytes per HTLC, which can be significant enough to be 
worth it (consider that Taproot moving away from encoded signatures saves only 
9 weight units per signature, i.e. about 2 vbytes).
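
Back-of-the-envelope, using only the numbers quoted above (and assuming witness 
bytes weigh 1 WU each, 4 WU per vbyte):

    WU_PER_VBYTE = 4
    htlc_template_bytes = 13            # HTLC script minus the OP_PUSH data, per the count above
    print(htlc_template_bytes / WU_PER_VBYTE)     # ~3.25 vbytes saved per extra HTLC claimed
    taproot_sig_saving_wu = 9           # weight saved per signature by dropping DER encoding
    print(taproot_sig_saving_wu / WU_PER_VBYTE)   # ~2.25 vbytes, for comparison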

Do note that PTLCs remain more space-efficient though, so forget about HTLCs 
and just use PTLCs.

>
> > This does not apply to current Bitcoin since we no longer accept a SCRIPT 
> > from the spender, we now have a witness stack.
>
> My mental model of Bitcoin is to pretend that segwit was always there and the 
> separation of different sections of data is a semantic quibble.

This is not a semantic quibble --- `witness` contains only the equivalent of 
`OP_PUSH`es, while `scriptSig` can in theory contain non-`OP_PUSH` opcodes.
xref. `1 RETURN`.

As-is, with SegWit the spender no longer is able to provide any SCRIPT at all, 
but new opcodes may allow the spender to effectively inject any SCRIPT they 
want, once again, because `witness` data may now become code.

> But if they're fully baked into the scriptpubkey then they're opted into by 
> the recipient and there aren't any weird surprises.

This is really what I kinda object to.
Yes, "buyer beware", but consider that as the covenant complexity increases, 
the probability of bugs, intentional or not, sneaking in, increases as well.
And a bug is really "a weird surprise" --- xref TheDAO incident.

This makes me kinda wary of using such covenant features at all, and if stuff 
like `SIGHASH_ANYPREVOUT` or `OP_CHECKTEMPLATEVERIFY` are not added but must be 
reimplemented via a covenant feature, I would be saddened, as I now have to 
contend with the complexity of covenant features and carefully check that 
`SIGHASH_ANYPREVOUT`/`OP_CHECKTEMPLATEVERIFY` were implemented correctly.
True I also still have to check the C++ source code if they are implemented 
directly as opcodes, but I can read C++ better than frikkin Bitcoin SCRIPT.
Not to mention that I now have to review both the (more complicated due to more 
general) covenant feature implementation, *and* the implementation of 
`SIGHASH_ANYPREVOUT`/`OP_CHECKTEMPLATEVERIFY` in terms of the covenant feature.

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] bitcoin scripting and lisp

2022-03-07 Thread ZmnSCPxj via bitcoin-dev
Good morning aj et al.,


> > They're radically different approaches and
> > it's hard to see how they mix. Everything in lisp is completely sandboxed,
> > and that functionality is important to a lot of things, and it's really
> > normal to be given a reveal of a scriptpubkey and be able to rely on your
> > parsing of it.
>
> The above prevents combining puzzles/solutions from multiple coin spends,
> but I don't think that's very attractive in bitcoin's context, the way
> it is for chia. I don't think it loses much else?

But cross-input signature aggregation is a nice-to-have we want for Bitcoin, 
and, to me, cross-input sigagg is not much different from cross-input 
puzzle/solution compression.

For example you might have multiple HTLCs, with mostly the same code except for 
details like who the acceptor and offerer are, exact hash, and timelock, and 
you could claim multiple HTLCs in a single tx and feed the details separately 
but the code for the HTLC is common to all of the HTLCs.
You do not even need to come from the same protocol if multiple protocols use 
the same code for implementing HTLC.

> > > There's two ways to think about upgradability here; if someday we want
> > > to add new opcodes to the language -- perhaps something to validate zero
> > > knowledge proofs or calculate sha3 or use a different ECC curve, or some
> > > way to support cross-input signature aggregation, or perhaps it's just
> > > that some snippets are very widely used and we'd like to code them in
> > > C++ directly so they validate quicker and don't use up as much block
> > > weight. One approach is to just define a new version of the language
> > > via the tapleaf version, defining new opcodes however we like.
> > > A nice side benefit of sticking with the UTXO model is that the soft fork
> > > hook can be that all unknown opcodes make the entire thing automatically
> > > pass.
>
> I don't think that works well if you want to allow the spender (the
> puzzle solution) to be able to use opcodes introduced in a soft-fork
> (eg, for graftroot-like behaviour)?

This does not apply to current Bitcoin since we no longer accept a SCRIPT from 
the spender, we now have a witness stack.
However, once we introduce opcodes that allow recursive covenants, it seems 
this is now a potential footgun if the spender can tell the puzzle SCRIPT to 
load some code that will then be used in the *next* UTXO created, and *then* 
the spender can claim it.

Hmmm Or maybe not?
If the spender can already tell the puzzle SCRIPT to send the funds to a SCRIPT 
that is controlled by the spender, the spender can already tell the puzzle 
SCRIPT to forward the funds to a pubkey the spender controls.
So this seems to be more like "do not write broken SCRIPTs"?

> > > > - serialization seems to be a bit verbose -- 100kB of serialized clvm
> > > >    code from a random block gzips to 60kB; optimising the serialization
> > > >    for small lists, and perhaps also for small literal numbers might be
> > > >    a feasible improvement; though it's not clear to me how frequently
> > > >    serialization size would be the limiting factor for cost versus
> > > >    execution time or memory usage.
> > > > A lot of this is because there's a hook for doing compression at the 
> > > > consensus layer which isn't being used aggressively yet. That one has 
> > > > the downside that the combined cost of transactions can add up very 
> > > > nonlinearly, but when you have constantly repeated bits of large 
> > > > boilerplate it gets close and there isn't much of an alternative. That 
> > > > said even with that form of compression maxxed out it's likely that 
> > > > gzip could still do some compression but that would be better done in 
> > > > the database and in wire protocol formats rather than changing the 
> > > > format which is hashed at the consensus layer.
> > > > How different is this from "jets" as proposed in Simplicity?
>
> Rather than a "transaction" containing "inputs/outputs", chia has spend
> bundles that spend and create coins; and spend bundles can be merged
> together, so that a block only has a single spend bundle. That spend
> bundle joins all the puzzles (the programs that, when hashed match
> the scriptPubKey) and solutions (scriptSigs) for the coins being spent
> together.
>
> I /think/ the compression hook would be to allow you to have the puzzles
> be (re)generated via another lisp program if that was more efficient
> than just listing them out. But I assume it would be turtles, err,
> lisp all the way down, no special C functions like with jets.

Eh, you could use Common LISP or a recent-enough RnRS Scheme to write a 
cryptocurrency node software, so "special C function" seems to overprivilege 
C...
I suppose the more proper way to think of this is that jets are *equivalent to* 
some code in the hosted language, and have an *efficient implementation* in the 
hosting language.
In this view, the current OG Bitcoin SCRIPT is the hosted 

Re: [bitcoin-dev] CTV vaults in the wild

2022-03-07 Thread ZmnSCPxj via bitcoin-dev
Good morning Antoine,

> Hi James,
>
> Interesting to see a sketch of a CTV-based vault design !
>
> I think the main concern I have with any hashchain-based vault design is the 
> immutability of the flow paths once the funds are locked to the root vault 
> UTXO. By immutability, I mean there is no way to modify the 
> unvault_tx/tocold_tx transactions and therefore recover from transaction 
> fields
> corruption (e.g a unvault_tx output amount superior to the root vault UTXO 
> amount) or key endpoints compromise (e.g the cold storage key being stolen).
>
> Especially corruption, in the early phase of vault toolchain deployment, I 
> believe it's reasonable to expect bugs to slip in affecting the output amount 
> or relative-timelock setting correctness (wrong user config, miscomputation 
> from automated vault management, ...) and thus definitively freezing the 
> funds. Given the amounts at stake for which vaults are designed, errors are 
> likely to be far more costly than the ones we see in the deployment of 
> payment channels.
>
> It might be more conservative to leverage a presigned transaction data design 
> where every decision point is a multisig. I think this design gets you the 
> benefit to correct or adapt if all the multisig participants agree on. It 
> should also achieve the same than a key-deletion design, as long as all
> the vault's stakeholders are participating in the multisig, they can assert 
> that flow paths are matching their spending policy.

Have not looked at the actual vault design, but I observe that Taproot allows 
for a master key (which can be an n-of-n, or a k-of-n with setup (either 
expensive or trusted, but I repeat myself)) to back out of any contract.

This master key could be an "even colder" key that you bury in the desert to be 
guarded over by generations of Fremen riding giant sandworms until the Bitcoin 
Path prophesied by the Kwisatz Haderach, Satoshi Nakamoto, arrives.

> Of course, relying on presigned transactions comes with higher assumptions on 
> the hardware hosting the flow keys. Though as hashchain-based vault design 
> imply "secure" key endpoints (e.g ), as a vault user you're 
> still encumbered with the issues of key management, it doesn't relieve you to 
> find trusted hardware. If you want to avoid multiplying devices to trust, I 
> believe flow keys can be stored on the same keys guarding the UTXOs, before 
> sending to vault custody.
>
> I think the remaining presence of trusted hardware in the vault design might 
> lead one to ask what's the security advantage of vaults compared to classic 
> multisig setup. IMO, it's introducing the idea of privileges in the coins 
> custody : you set up the flow paths once for all at setup with the highest 
> level of privilege and then anytime you do a partial unvault you don't need 
> the same level of privilege. Partial unvault authorizations can come with a 
> reduced set of verifications, at lower operational costs. That said, I think 
> this security advantage is only relevant in the context of recursive design, 
> where the partial unvault sends back the remaining funds to vault UTXO (not 
> the design proposed here).
>
> Few other thoughts on vault design, more minor points.
>
> "If Alice is watching the mempool/chain, she will see that the unvault 
> transaction has been unexpectedly broadcast,"
>
> I think you might need to introduce an intermediary, out-of-chain protocol 
> step where the unvault broadcast is formally authorized by the vault 
> stakeholders. Otherwise it's hard to qualify "unexpected", as hot key 
> compromise might not be efficiently detected.

Thought: It would be nice if Alice could use Lightning watchtowers as well, 
that would help increase the anonymity set of both LN watchtower users and 
vault users.

> "With  OP_CTV, we can ensure that the vault operation is enforced by 
> consensus itself, and the vault transaction data can be generated 
> deterministically without additional storage needs."
>
> Don't you also need the endpoint scriptPubkeys (, ), 
> the amounts and CSV value ? Though I think you can grind amounts and CSV 
> value in case of loss...But I'm not sure if you remove the critical data 
> persistence requirement, just reduce the surface.
>
> "Because we want to be able to respond immediately, and not have to dig out 
> our cold private keys, we use an additional OP_CTV to encumber the "swept" 
> coins for spending by only the cold wallet key."
>
> I think a robust vault deployment would imply the presence of a set of 
> watchtowers, redundant entities able to broadcast the cold transaction in 
> reaction to unexpected unvault. One feature which could be interesting is 
> "tower accountability", i.e knowing which tower initiated the broadcast, 
> especially if it's a faultive one. One way is to watermark the cold 
> transaction (e.g tweak nLocktime to past value). Though I believe with CTV 
> you would need as much different hashes than towers included in 

[bitcoin-dev] Jets (Was: `OP_FOLD`: A Looping Construct For Bitcoin SCRIPT)

2022-03-07 Thread ZmnSCPxj via bitcoin-dev
Good morning Billy,

Changed subject since this is only tangentially related to `OP_FOLD`.

> Let me organize my thoughts on this a little more clearly. There's a couple 
> possibilities I can think of for a jet-like system:
>
> A. We could implement jets now without a consensus change, and without 
> requiring all nodes to upgrade to new relay rules. Probably. This would give 
> upgraded nodes improved validation performance and many upgraded nodes relay 
> savings (transmitting/receiving fewer bytes). Transactions would be weighted 
> the same as without the use of jets tho.
> B. We could implement the above + lighter weighting by using a soft fork to 
> put the jets in a part of the blockchain hidden from unupgraded nodes, as you 
> mentioned. 
> C. We could implement the above + the jet registration idea in a soft fork. 
>
> For A:
>
> * Upgraded nodes query each connection for support of jets in general, and 
> which specific jets they support.
> * For a connection to another upgraded node that supports the jet(s) that a 
> transaction contains, the transaction is sent verbatim with the jet included 
> in the script (eg as some fake opcode line like 23 OP_JET, indicating to 
> insert standard jet 23 in its place). When validation happens, or when a 
> miner includes it in a block, the jet opcode call is replaced with the script 
> it represents so hashing happens in a way that is recognizable to unupgraded 
> nodes.
> * For a connection to a non-upgraded node that doesn't support jets, or an 
> upgraded node that doesn't support the particular jet included in the script, 
> the jet opcode call is replaced as above before sending to that node. In 
> addition, some data is added to the transaction that unupgraded nodes 
> propagate along but otherwise ignore. Maybe this is extra witness data, maybe 
> this is some kind of "annex", or something else. But that data would contain 
> the original jet opcode (in this example "23 OP_JET") so that when that 
> transaction data reaches an upgraded node that recognizes that jet again, it 
> can swap that back in, in place of the script fragment it represents. 
>
> I'm not 100% sure the required mechanism I mentioned of "extra ignored data" 
> exists, and if it doesn't, then all nodes would at least need to be upgraded 
> to support that before this mechanism could fully work.

I am not sure that can even be *made* to exist.
It seems to me a trivial way to launch a DDoS: Just ask a bunch of fullnodes to 
add this 1Mb of extra ignored data in this tiny 1-input-1-output transaction so 
I pay only a small fee if it confirms but the bandwidth of all fullnodes is 
wasted transmitting and then ignoring this block of data.

> But even if such a mechanism doesn't exist, a jet script could still be used, 
> but it would be clobbered by the first nonupgraded node it is relayed to, and 
> can't then be converted back (without using a potentially expensive lookup 
> table as you mentioned). 

Yes, and people still run Bitcoin Core 0.8.x.

> > If the script does not weigh less if it uses a jet, then there is no 
> > incentive for end-users to use a jet
>
> That's a good point. However, I'd point out that nodes do lots of things that 
> there's no individual incentive for, and this might be one where people 
> either altruistically use jets to be lighter on the network, or use them in 
> the hopes that the jet is accepted as a standard, reducing the cost of their 
> scripts. But certainly a direct incentive to use them is better. Honest nodes 
> can favor connecting to those that support jets.

Since you do not want a dynamic lookup table (because of the cost of lookup), 
how do new jets get introduced?
If a new jet requires coordinated deployment over the network, then you might 
as well just softfork and be done with it.
If a new jet can just be entered into some configuration file, how do you 
coordinate those between multiple users so that there *is* some benefit for 
relay?

> >if a jet would allow SCRIPT weights to decrease, upgraded nodes need to hide 
> >them from unupgraded nodes
> > we have to do that by telling unupgraded nodes "this script will always 
> > succeed and has weight 0"
>
> Right. It doesn't have to be weight zero, but that would work fine enough. 
>
> > if everybody else has not upgraded, a user of a new jet has no security.
>
> For case A, no security is lost. For case B you're right. For case C, once 
> nodes upgrade to the initial soft fork, new registered jets can take 
> advantage of relay-cost weight savings (defined by the soft fork) without 
> requiring any nodes to do any upgrading, and nodes could be further upgraded 
> to optimize the validation of various of those registered jets, but those 
> processing savings couldn't change the weighting of transactions without an 
> additional soft fork.
>
> > Consider an attack where I feed you a SCRIPT that validates trivially but 
> > is filled with almost-but-not-quite-jettable code
>
> I 

Re: [bitcoin-dev] bitcoin scripting and lisp

2022-03-07 Thread ZmnSCPxj via bitcoin-dev
Good morning Bram,

> while in the coin set model each puzzle (scriptpubkey) gets run and either 
> assert fails or returns a list of extra conditions it has, possibly including 
> timelocks and creating new coins, paying fees, and other things.

Does this mean it basically gets recursive covenants?
Or is a condition in this list of conditions written in a more restrictive 
language which itself cannot return a list of conditions?


> >  - serialization seems to be a bit verbose -- 100kB of serialized clvm
> >    code from a random block gzips to 60kB; optimising the serialization
> >    for small lists, and perhaps also for small literal numbers might be
> >    a feasible improvement; though it's not clear to me how frequently
> >    serialization size would be the limiting factor for cost versus
> >    execution time or memory usage.
>
> A lot of this is because there's a hook for doing compression at the 
> consensus layer which isn't being used aggressively yet. That one has the 
> downside that the combined cost of transactions can add up very nonlinearly, 
> but when you have constantly repeated bits of large boilerplate it gets close 
> and there isn't much of an alternative. That said even with that form of 
> compression maxxed out it's likely that gzip could still do some compression 
> but that would be better done in the database and in wire protocol formats 
> rather than changing the format which is hashed at the consensus layer.

How different is this from "jets" as proposed in Simplicity?

> > Pretty much all the opcodes in the first section are directly from chia
> > lisp, while all the rest are to complete the "bitcoin" functionality.
> > The last two are extensions that are more food for thought than a real
> > proposal.
>
> Are you thinking of this as a completely alternative script format or an 
> extension to bitcoin script? They're radically different approaches and it's 
> hard to see how they mix. Everything in lisp is completely sandboxed, and 
> that functionality is important to a lot of things, and it's really normal to 
> be given a reveal of a scriptpubkey and be able to rely on your parsing of it.

I believe AJ is proposing a completely alternative format to OG Bitcoin SCRIPT.
Basically, as I understand it, nothing in the design of Tapscript versions 
prevents us from completely changing the interpretation of Tapscript bytes and 
using a completely different language.
That is, we could designate a new Tapscript version as completely different 
from OG Bitcoin SCRIPT.


Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] `OP_FOLD`: A Looping Construct For Bitcoin SCRIPT

2022-03-06 Thread ZmnSCPxj via bitcoin-dev
Good morning Billy,

> Even changing the weight of a transaction using jets (ie making a script 
> weigh less if it uses a jet) could be done in a similar way to how segwit 
> separated the witness out.

The way we did this in SegWit was to *hide* the witness from unupgraded nodes, 
who are then unable to validate using the upgraded rules (because you are 
hiding the data from them!), which is why I bring up:

> > If a new jet is released but nobody else has upgraded, how bad is my 
> > security if I use the new jet?
>
> Security wouldn't be directly affected, only (potentially) cost. If your 
> security depends on cost (eg if it depends on pre-signed transactions and is 
> for some reason not easily CPFPable or RBFable), then security might be 
> affected if the unjetted scripts costs substantially more to mine. 

So if we make a script weigh less if it uses a jet, we have to do that by 
telling unupgraded nodes "this script will always succeed and has weight 0", 
just like `scriptPubKey`s of the form `<0> <hash>` are, to pre-SegWit nodes, 
spendable with an empty `scriptSig`.
At least, that is how I always thought SegWit worked.

Otherwise, a jet would never allow SCRIPT weights to decrease, as unupgraded 
nodes who do not recognize the jet will have to be fed the entire code of the 
jet and would consider the weight to be the expanded, uncompressed code.
And weight is a consensus parameter, blocks are restricted to 4Mweight.

So if a jet would allow SCRIPT weights to decrease, upgraded nodes need to hide 
them from unupgraded nodes (else the weight calculations of unupgraded nodes 
will hit consensus checks), then if everybody else has not upgraded, a user of 
a new jet has no security.

Not even the `softfork` form of chialisp that AJ is proposing in the other 
thread would help --- unupgraded nodes will simply skip over validation of the 
`softfork` form.

If the script does not weigh less if it uses a jet, then there is no incentive 
for end-users to use a jet, as they would still pay the same price anyway.

Now you might say "okay even if no end-users use a jet, we could have fullnodes 
recognize jettable code and insert them automatically on transport".
But the lookup table for that could potentially be large once we have a few 
hundred jets (and I think Simplicity already *has* a few hundred jets, so 
additional common jets would just add to that?), jettable code could start at 
arbitrary offsets of the original SCRIPT, and jettable code would likely have 
varying size, that makes for a difficult lookup table.
In particular that lookup table has to be robust against me feeding it some 
section of code that is *almost* jettable but suddenly has a different opcode 
at the last byte, *and* handle jettable code of varying sizes (because of 
course some jets are going to be more compressible than others).
Consider an attack where I feed you a SCRIPT that validates trivially but is 
filled with almost-but-not-quite-jettable code (and again, note that expanded 
forms of jets are varying sizes), your node has to look up all those jets but 
then fails the last byte of the almost-but-not-quite-jettable code, so it ends 
up not doing any jetting.
And since the SCRIPT validated your node feels obligated to propagate it too, 
so now you are helping my DDoS.
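
A minimal Python sketch of why that lookup is awkward (the jet bodies and ids 
are made up purely for illustration):

    JET_TABLE = {
        bytes.fromhex("515293"): 0x01,        # made-up expanded jet bodies -> jet ids
        bytes.fromhex("5152935193"): 0x02,
    }
    MAX_JET_LEN = max(len(body) for body in JET_TABLE)

    def find_jettable(script: bytes):
        # Jettable code can start at any offset and has varying length, so a
        # naive matcher probes every (offset, length) pair; bytes that are
        # *almost* jettable still cost a probe per candidate length.
        hits, i = [], 0
        while i < len(script):
            for length in range(min(MAX_JET_LEN, len(script) - i), 0, -1):
                jet_id = JET_TABLE.get(script[i:i + length])
                if jet_id is not None:
                    hits.append((i, length, jet_id))
                    i += length
                    break
            else:
                i += 1
        return hits

    print(find_jettable(bytes.fromhex("00515293515292")))  # second candidate misses at the last byte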

> >  I suppose the point would be --- how often *can* we add new jets?
>
> A simple jet would be something that's just added to bitcoin software and 
> used by nodes that recognize it. This would of course require some debate and 
> review to add it to bitcoin core or whichever bitcoin software you want to 
> add it to. However, the idea I proposed in my last email would allow anyone 
> to add a new jet. Then each node can have their own policy to determine which 
> jets of the ones registered it wants to keep an index of and use. On its own, 
> it wouldn't give any processing power optimization, but it would be able to 
> do the same kind of script compression you're talking about op_fold allowing. 
> And a list of registered jets could inform what jets would be worth building 
> an optimized function for. This would require a consensus change to implement 
> this mechanism, but thereafter any jet could be registered in userspace.

Certainly a neat idea.
Again, lookup table tho.

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Recurring bitcoin/LN payments using DLCs

2022-03-06 Thread ZmnSCPxj via bitcoin-dev
Good morning Chris,

> >On the other hand, the above, where the oracle determines *when* the fund 
> >can be spent, can also be implemented by a simple 2-of-3, and called an 
> >"escrow".
>
> I think something that is underappreciated by protocol developers is the fact 
> that multisig requires interactiveness at settlement time. The multisig 
> escrow provider needs to know the exact details about the bitcoin transaction 
> and needs to issue a signature (gotta sign the outpoints, the fee, the payout 
> addresses etc).
>
> With PTLCs that isn't the case, and thus gives a UX improvement for Alice & 
> Bob that are using the escrow provider. The oracle (or escrow) just issues 
> attestations. Bob or Alice take those attestations and complete the adaptor 
> signature. Instead of a bi-directional communication requirement (the oracle 
> working with Bob or Alice to build the bitcoin tx) at settlement time there 
> is only unidirectional communication required. Non-interactive settlement is 
> one of the big selling points of DLC style applications IMO.
>
> One of the unfortunate things about LN is the interactiveness requirements 
> are very high, which makes developing applications hard (especially mobile 
> applications). I don't think this solves lightning's problems, but it is a 
> worthy goal to reduce interactiveness requirements with new bitcoin 
> applications to give better UX.

Good point.

I should note that 2-of-3 contracts are *not* transportable over LN, but PTLCs 
*are* transportable.
So the idea still has merit for LN, as a replacement for 2-of-3 escrows.

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] bitcoin scripting and lisp

2022-03-05 Thread ZmnSCPxj via bitcoin-dev
Good morning Russell,

> On Sat, Mar 5, 2022 at 8:41 AM Jeremy Rubin via bitcoin-dev 
>  wrote:
>
> > It seems like a decent concept for exploration.
> >
> > AJ, I'd be interested to know what you've been able to build with Chia Lisp 
> > and what your experience has been... e.g. what does the Lightning Network 
> > look like on Chia?
> >
> > One question that I have had is that it seems like to me that neither 
> > simplicity nor chia lisp would be particularly suited to a ZK prover...
>
> Not that I necessarily disagree with this statement, but I can say that I 
> have experimented with compiling Simplicity to Boolean circuits.  It was a 
> while ago, but I think the result of compiling my SHA256 program was within 
> an order of magnitude of the hand made SHA256 circuit for bulletproofs.

"Within" can mean "larger" or "smaller" in this context, which was it?
From what I understand, compilers for ZK-provable circuits are still not as 
effective as humans, so I would assume "larger", but I would be much interested 
if it is "smaller"!

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] `OP_FOLD`: A Looping Construct For Bitcoin SCRIPT

2022-03-05 Thread ZmnSCPxj via bitcoin-dev
Good morning Billy,

> It sounds like the primary benefit of op_fold is bandwidth savings. 
> Programming as compression. But as you mentioned, any common script could be 
> implemented as a Simplicity jet. In a world where Bitcoin implements jets, 
> op_fold would really only be useful for scripts that can't use jets, which 
> would basically be scripts that aren't very often used. But that inherently 
> limits the usefulness of the opcode. So in practice, I think it's likely that 
> jets cover the vast majority of use cases that op fold would otherwise have.

I suppose the point would be --- how often *can* we add new jets?
Are new jets consensus critical?
If a new jet is released but nobody else has upgraded, how bad is my security 
if I use the new jet?
Do I need to debate `LOT` *again* if I want to propose a new jet?

> A potential benefit of op fold is that people could implement smaller scripts 
> without buy-in from a relay level change in Bitcoin. However, even this could 
> be done with jets. For example, you could implement a consensus change to add 
> a transaction type that declares a new script fragment to keep a count of, 
> and if the script fragment is used enough within a timeframe (eg 1 
> blocks) then it can thereafter be referenced by an id like a jet could be. 
> I'm sure someone's thought about this kind of thing before, but such a thing 
> would really relegate the compression abilities of op fold to just the most 
> uncommon of scripts. 
>
> > * We should provide more *general* operations. Users should then combine 
> > those operations to their specific needs.
> > * We should provide operations that *do more*. Users should identify their 
> > most important needs so we can implement them on the blockchain layer.
>
> That's a useful way to frame this kind of problem. I think the answer is, as 
> it often is, somewhere in between. Generalization future-proofs your system. 
> But at the same time, the boundary conditions of that generalized 
> functionality should still be very well understood before being added to 
> Bitcoin. The more general, the harder to understand the boundaries. So imo we 
> should be implementing the most general opcodes that we are able to reason 
> fully about and come to a consensus on. Following that last constraint might 
> lead to not choosing very general opcodes.

Yes, that latter part is what I am trying to target with `OP_FOLD`.
As I point out, given the restrictions I am proposing, `OP_FOLD` (and any 
bounded loop construct with similar restrictions) is implementable in current 
Bitcoin SCRIPT, so it is not an increase in attack surface.

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Recurring bitcoin/LN payments using DLCs

2022-03-05 Thread ZmnSCPxj via bitcoin-dev
Good morning Chris,

> I think this proposal describes arbitrary lines of pre-approved credit from a 
> bitcoin wallet. The line can be drawn down with oracle attestations. You can 
> mix in locktimes on these pre-approved lines of credit if you would like to 
> rate limit, or ignore rate limiting and allow the full utxo to be spent by 
> the borrower. It really is contextual to the use case IMO.

Ah, that seems more useful.

Here is an example application that might benefit from this scheme:

I am commissioning some work from some unbranded workperson.
I do not know how long the work will take, and I do not trust the workperson to 
accurately tell me how complete the work is.
However, both I and the workperson trust a branded third party (the oracle) who 
can judge the work for itself and determine if it is complete or not.
So I create a transaction whose signature can be completed only if the oracle 
releases a proper scalar and hand it over to the workperson.
Then the workperson performs the work, then asks the oracle to judge if the 
work has been completed, and if so, the work can be compensated.
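
A toy sketch of that completion step, using plain modular arithmetic with 
made-up numbers (a real implementation would be an adaptor signature over 
secp256k1):

    N = 101                    # toy group order (real scheme: secp256k1 order)
    pre_signature = 37         # adaptor ("pre-") signature handed to the workperson up front
    oracle_scalar = 59         # released by the oracle only if it judges the work complete
    final_signature = (pre_signature + oracle_scalar) % N
    # Broadcasting the completed transaction reveals final_signature, which in
    # turn reveals the oracle scalar to anyone holding the pre-signature:
    assert (final_signature - pre_signature) % N == oracle_scalar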

On the other hand, the above, where the oracle determines *when* the fund can 
be spent, can also be implemented by a simple 2-of-3, and called an "escrow".
After all, the oracle attestation can be a partial signature as well, not just 
a scalar.
Is there a better application for this scheme?

I suppose if the oracle attestation is intended to be shared among multiple 
such transactions?
There may be multiple PTLCs that are all triggered by a single oracle?

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Annex Purpose Discussion: OP_ANNEX, Turing Completeness, and other considerations

2022-03-04 Thread ZmnSCPxj via bitcoin-dev
Good morning Jeremy,

Umm `OP_ANNEX` seems boring 


> It seems like one good option is if we just go on and banish the OP_ANNEX. 
> Maybe that solves some of this? I sort of think so. It definitely seems like 
> we're not supposed to access it via script, given the quote from above:
>
> Execute the script, according to the applicable script rules[11], using the 
> witness stack elements excluding the script s, the control block c, and the 
> annex a if present, as initial stack.
> If we were meant to have it, we would have not nixed it from the stack, no? 
> Or would have made the opcode for it as a part of taproot...
>
> But recall that the annex is committed to by the signature.
>
> So it's only a matter of time till we see some sort of Cat and Schnorr Tricks 
> III the Annex Edition that lets you use G cleverly to get the annex onto the 
> stack again, and then it's like we had OP_ANNEX all along, or without CAT, at 
> least something that we can detect that the value has changed and cause this 
> satisfier looping issue somehow.

... Never mind I take that back.

Hmmm.

Actually if the Annex is supposed to be ***just*** for adding weight to the 
transaction so that we can do something like increase limits on SCRIPT 
execution, then it does *not* have to be covered by any signature.
It would then be third-party malleable, but suppose we have a "valid" 
transaction on the mempool where the Annex weight is the minimum necessary:

* If a malleated transaction has a too-low Annex, then the malleated 
transaction fails validation and the current transaction stays in the mempool.
* If a malleated transaction has a higher Annex, then the malleated transaction 
has lower feerate than the current transaction and cannot evict it from the 
mempool.
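
A toy restatement of the second case (made-up numbers): the fee is fixed by the 
signed inputs and outputs, so padding the annex only adds weight, and the 
malleated copy always has the lower feerate:

    def feerate_sat_per_vbyte(fee_sats, weight_wu):
        return fee_sats / (weight_wu / 4.0)

    FEE = 1000                                         # fixed by the signed inputs/outputs
    original  = feerate_sat_per_vbyte(FEE, 800)        # minimum-necessary annex
    malleated = feerate_sat_per_vbyte(FEE, 800 + 400)  # third party pads the annex
    assert malleated < original                        # lower feerate, cannot evict the original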

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] bitcoin scripting and lisp

2022-03-04 Thread ZmnSCPxj via bitcoin-dev


Good morning aj,

> On Sun, Feb 27, 2022 at 04:34:31PM +0000, ZmnSCPxj via bitcoin-dev wrote:
>
> > In reaction to this, AJ Towns mailed me privately about some of his
> > thoughts on this insane `OP_EVICT` proposal.
> > He observed that we could generalize the `OP_EVICT` opcode by
> > decomposing it into smaller parts, including an operation congruent
> > to the Scheme/Haskell/Scala `map` operation.
>
> At much the same time Zman was thinking about OP_FOLD and in exactly the
> same context, I was wondering what the simplest possible language that
> had some sort of map construction was -- I mean simplest in a "practical
> engineering" sense; I think Simplicity already has the Euclidean/Peano
> "least axioms" sense covered.
>
> The thing that's most appealing to me about bitcoin script as it stands
> (beyond "it works") is that it's really pretty simple in an engineering
> sense: it's just a "forth" like system, where you put byte strings on a
> stack and have a few operators to manipulate them. The alt-stack, and
> supporting "IF" and "CODESEPARATOR" add a little additional complexity,
> but really not very much.
>
> To level-up from that, instead of putting byte strings on a stack, you
> could have some other data structure than a stack -- eg one that allows
> nesting. Simple ones that come to mind are lists of (lists of) byte
> strings, or a binary tree of byte strings [0]. Both those essentially
> give you a lisp-like language -- lisp is obviously all about lists,
> and a binary tree is just made of things or pairs of things, and pairs
> of things are just another way of saying "car" and "cdr".
>
> A particular advantage of lisp-like approaches is that they treat code
> and data exactly the same -- so if we're trying to leave the option open
> for a transaction to supply some unexpected code on the witness stack,
> then lisp handles that really naturally: you were going to include data
> on the stack anyway, and code and data are the same, so you don't have
> to do anything special at all. And while I've never really coded in
> lisp at all, my understanding is that its biggest problems are all about
> doing things efficiently at large scales -- but script's problem space
> is for very small scale things, so there's at least reason to hope that
> any problems lisp might have won't actually show up for this use case.

I heartily endorse LISP --- it has a trivial implementation of `eval` that is 
easily implementable once you have defined a proper data type in 
preferred-language-here to represent LISP datums.
Combine it with your idea of committing to a max-number-of-operations (which 
increases the weight of the transaction) and you may very well have something 
viable.
(In particular, even though `eval` is traditionally (re-)implemented in LISP 
itself, the limit on max-number-of-operations means any `eval` implementation 
within the same language is also forcibly made total.)
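
A minimal sketch of such a fuel-limited `eval`, in Python standing in for 
preferred-language-here (the dialect and the budget accounting are invented 
purely for illustration):

    class OutOfFuel(Exception):
        pass

    def op_eval(expr, env, fuel):
        # Every reduction step consumes one unit of fuel, so evaluation is
        # total: it either finishes or raises OutOfFuel within the budget.
        if fuel <= 0:
            raise OutOfFuel
        fuel -= 1
        if isinstance(expr, str):                 # symbol lookup
            return env[expr], fuel
        if not isinstance(expr, list):            # self-evaluating atom
            return expr, fuel
        head, *args = expr
        if head == "quote":
            return args[0], fuel
        if head == "if":
            cond, fuel = op_eval(args[0], env, fuel)
            return op_eval(args[1] if cond else args[2], env, fuel)
        vals = []                                 # built-in application
        for a in args:
            v, fuel = op_eval(a, env, fuel)
            vals.append(v)
        return env[head](*vals), fuel

    ENV = {"+": lambda a, b: a + b, "<": lambda a, b: a < b, "x": 5}
    print(op_eval(["if", ["<", "x", 10], ["+", "x", 1], 0], ENV, fuel=50))  # evaluates to 6, with fuel to spare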

Of note is that the supposed "problem at scale" of LISP is, as I understand it, 
due precisely to its code and data being homoiconic to each other.
This homoiconicity greatly tempts LISP programmers to use macros, i.e. programs 
that generate other programs from some input syntax.
Homoiconicity means that one can manipulate code just as easily as the data, 
and thus LISP macros are a trivial extension on the language.
This allows each LISP programmer to just code up a macro to expand common 
patterns.
However, each LISP programmer then ends up implementing *different*, but 
*similar* macros from each other.
Unfortunately, programming at scale requires multiple programmers speaking the 
same language.
Then programming at scale is hampered because each LISP programmer has their 
own private dialect of LISP (formed from the common LISP language and from 
their own extensive set of private macros) and intercommunication between them 
is hindered by the fact that each one speaks their own private dialect.
Some LISP-like languages (e.g. Scheme) have classically targeted a "small" 
subset of absolutely-necessary operations, and each implementation of the 
language immediately becomes a new dialect due to having slightly different 
forms for roughly the same convenience function or macro, and *then* individual 
programmers build their own private dialect on top.
For Scheme specifically, R7RS has targeted providing a "large" standard as 
well, as did R6RS (which only *had* a "large" standard), but individual Scheme 
implementations have not always liked to implement *all* the "large" standard.

Otherwise, every big C program contains a half-assed implementation of half of 
Common LISP, so 


> -   I don't think execution costing takes into account how much memory
> 

Re: [bitcoin-dev] Recursive covenant opposition, or the absence thereof, was Re: TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT

2022-03-04 Thread ZmnSCPxj via bitcoin-dev
Good morning vjudeu,

> > Continuous operation of the sidechain then implies a constant stream of 
> > 32-byte commitments, whereas continuous operation of a channel factory, in 
> > the absence of membership set changes, has 0 bytes per block being 
> > published.
>
> The sidechain can push zero bytes on-chain, just by placing a sidechain hash 
> in OP_RETURN inside TapScript. Then, every sidechain node can check that 
> "this sidechain hash is connected with this Taproot address", without pushing 
> 32 bytes on-chain.

The Taproot address itself has to take up 32 bytes onchain, so this saves 
nothing.

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Recurring bitcoin/LN payments using DLCs

2022-03-04 Thread ZmnSCPxj via bitcoin-dev


Good morning Chris,

Quick question.

How does this improve over just handing over `nLockTime`d transactions?


Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Recursive covenant opposition, or the absence thereof, was Re: TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT

2022-02-27 Thread ZmnSCPxj via bitcoin-dev
Good morning Paul,

> On 2/26/2022 9:00 PM, ZmnSCPxj wrote:
>
> > ...
> >
> > > Such a technique would need to meet two requirements (or, so it seems to 
> > > me):
> > > #1: The layer1 UTXO (that defines the channel) can never change (ie, the 
> > > 32-bytes which define the p2sh/tapscript/covenant/whatever, must stay 
> > > what-they-were when the channel was opened).
> > > #2: The new part-owners (who are getting coins from the rich man), will 
> > > have new pubkeys which are NOT known, until AFTER the channel is opened 
> > > and confirmed on the blockchain.
> > >
> > > Not sure how you would get both #1 and #2 at the same time. But I am not 
> > > up to date on the latest LN research.
> >
> > Yes, using channel factories.
>
> I think you may be wrong about this.
> Channel factories do not meet requirement #2, as they cannot grow to onboard 
> new users (ie, new pubkeys).
> The factory-open requires that people pay to (for example), a 5-of-5 
> multisig. So all 5 fixed pubkeys must be known, before the factory-open is 
> confirmed, not after.

I am not wrong about this.
You can cut-through the closure of one channel factory with the opening of 
another channel factory with the same 5 fixed pubkeys *plus* an additional 100 
new fixed pubkeys.
With `SIGHASH_ANYPREVOUT` (which we need for Decker-Russell-Osuntokun-based 
channel factories) you do not even need to make new signatures for the existing 
channels: you just reuse the existing channel signatures, and whether or not 
the *single*, one-input-one-output close+reopen transaction is confirmed, the 
existing channels remain usable (the signatures can be used both pre-reopen and 
post-reopen).

That is why I said changing the membership set requires onchain action.
But the onchain action is *only* a 1-input-1-output transaction, and with 
Taproot the signature needed is just 64 bytes witness (1 weight unit per byte), 
I had several paragraphs describing that, did you not read them?
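
For reference, a rough size estimate of that single 1-input-1-output 
close+reopen transaction, assuming a Taproot key-path spend (standard 
serialization sizes, sketch only):

    # version + locktime + input/output counts, one Taproot input
    # (36-byte outpoint + 1-byte empty scriptSig length + 4-byte nSequence),
    # one Taproot output (8-byte value + 1-byte script length + 34-byte scriptPubKey)
    base_bytes = 10 + 41 + 43
    # segwit marker+flag, witness stack size, item length, 64-byte Schnorr signature
    witness_wu = 2 + 1 + 1 + 64
    weight = base_bytes * 4 + witness_wu
    print(weight, weight / 4.0)    # 444 WU, i.e. roughly 111 vbytes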

Note as well that with sidechains, onboarding also requires action on the 
mainchain, in the form of a sideblock merge-mined on the mainchain.

>
> > We assume that onboarding new members is much rarer than existing members 
> > actually paying each other
>
> Imagine that Bitcoin could only onboard 5 new users per millennium, but once 
> onboarded they had payment nirvana (could transact hundreds of trillions of 
> times per second, privately, smart contracts, whatever).
> Sadly, the payment nirvana would not matter. The low onboarding rate would 
> kill the project.

Fortunately even without channel factories the onboarding rate of LN is much 
much higher than that.
I mean, like, LN *is* live and *is* working, today, and (at least where I have 
looked, but I could be provincial) has a lot more onboarding activity than 
half-hearted sidechains like Liquid or Rootstock.

> The difference between the two rates [onboarding and payment], is not 
> relevant. EACH rate must meet the design goal.
> It is akin to saying: " Our car shifts from park to drive in one-millionth of 
> a second, but it can only shift into reverse once per year; but that is OK 
> because 'we assume that going in reverse is much rarer than driving forward' 
> ".

Your numbers absolutely suck and have no basis in reality, WTF.
Even without batched channel openings and a typical transaction of 2 inputs, 1 
LN channel, and a change output, you can onboard ~1250 channels per mainchain 
block (admittedly, without any other activity).
Let us assume every user needs 5 channels on average and that is still 250 
users per 10 minutes.
I expect channel factories to increase that by about 10x to 100x more, and then 
you are going to hit the issue of getting people to *use* Bitcoin rather than 
many users wanting to get in but being unable to due to block size limits.

>
> > Continuous operation of the sidechain then implies a constant stream of 
> > 32-byte commitments, whereas continuous operation of a channel factory, in 
> > the absence of membership set changes, has 0 bytes per block being 
> > published.
>
> That's true, but I think you have neglected to actually take out your 
> calculator and run the numbers.
>
> Hypothetically, 10 largeblock-sidechains would be 320 bytes per block 
> (00.032%, essentially nothing).
> Those 10, could onboard 33% of the planet in a single month [footnote], even 
> if each sc-onboard required an average of 800 sc-bytes.
>
> Certainly not a perfect idea, as the SC onboarding rate is the same as the 
> payment rate. But once they are onboarded, those users can immediately join 
> the LN *from* their sidechain. (All of the SC LNs would be interoperable.)
>
> Such a strategy would take enormous pressure *off* of layer1 (relative to the 
> "LN only" strategy). The layer1 blocksize could even **shrink** from 4 MB 
> (wu) to 400 kb, or lower. That would cancel out the 320 bytes of overhead, 
> many hundreds of times over.
>
> Paul
>
> [footnote] Envelope math, 10 sidechains, 

[bitcoin-dev] `OP_FOLD`: A Looping Construct For Bitcoin SCRIPT

2022-02-27 Thread ZmnSCPxj via bitcoin-dev
`OP_FOLD`: A Looping Construct For Bitcoin SCRIPT
=================================================

(This writeup requires at least some programming background, which I
expect most readers of this list have.)

Recently, some rando was ranting on the list about this weird crap
called `OP_EVICT`, a poorly-thought-out attempt at covenants.

In reaction to this, AJ Towns mailed me privately about some of his
thoughts on this insane `OP_EVICT` proposal.
He observed that we could generalize the `OP_EVICT` opcode by
decomposing it into smaller parts, including an operation congruent
to the Scheme/Haskell/Scala `map` operation.
As `OP_EVICT` effectively loops over the outputs passed to it, a
looping construct can be used to implement `OP_EVICT` while retaining
its nice property of cut-through of multiple evictions and reviving of
the CoinPool.

More specifically, an advantage of `OP_EVICT` is that it allows
checking multiple published promised outputs.
This would be implemented in a loop.
However, if we want to instead provide *general* operations in
SCRIPT rather than a bunch of specific ones like `OP_EVICT`, we
should consider how to implement looping so that we can implement
`OP_EVICT` in a SCRIPT-with-general-opcodes.

(`OP_FOLD` is not sufficient to implement `OP_EVICT`; for
efficiency, AJ Towns also pointed out that we need some way to
expose batch validation to SCRIPT.
There is a follow-up writeup to this one which describes *that*
operation.)

Based on this, I started ranting as well about how `map` is really
just a thin wrapper on `foldr` and the *real* looping construct is
actually `foldr` (`foldr` is the whole FP Torah: all the rest is
commentary).
This is thus the genesis for this proposal, `OP_FOLD`.

A "fold" operation is sometimes known as "reduce" (and if you know
about Google MapReduce, you might be familiar with "reduce").
Basically, a "fold" or "reduce" operation applies a function
repeatedly (i.e. *loops*) on the contents of an input structure,
creating a "sum" or "accumulation" of the contents.

For the purpose of building `map` out of `fold`, the accumulation
can itself be an output structure.
The `map` simply accumulates to the output structure by applying
its given function and concatenating it to the current accumulation.
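
In ordinary Python rather than SCRIPT, that reads roughly as:

    def foldr(f, init, xs):
        acc = init
        for x in reversed(xs):
            acc = f(x, acc)
        return acc

    def map_via_fold(f, xs):
        # accumulate into an output list by applying f and prepending
        return foldr(lambda x, acc: [f(x)] + acc, [], xs)

    assert map_via_fold(lambda x: x * 2, [1, 2, 3]) == [2, 4, 6]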

Digression: Programming Is Compression
--------------------------------------

Suppose you are a programmer and you are reading some source code.
You want to wonder "what will happen if I give this piece of code
these particular inputs?".

In order to do so, you would simulate the execution of the code in
your head.
In effect, you would generate a "trace" of basic operations (that
do not include control structures).
By then thinking about this linear trace of basic operations, you
can figure out what the code does.

Now, let us recall two algorithms from the compression literature:

1.  Run-length Encoding
2.  Lempel-Ziv 1977

Suppose our flat linear trace of basic operations contains something
like this:

OP_ONE
OP_TWO
OP_ONE
OP_TWO
OP_ONE
OP_TWO

IF we had looping constructs in our language, we could write the
above trace as something like:

for N = 1 to 3
OP_ONE
OP_TWO

The above is really Run-length Encoding.

(`if` is just a loop that executes 0 or 1 times.)

Similarly, suppose you have some operations that are commonly
repeated, but not necessarily next to each other:

OP_ONE
OP_TWO
OP_THREE
OP_ONE
OP_TWO
OP_FOUR
OP_FIVE
OP_ONE
OP_TWO

If we had functions/subroutines/procedures in our language, we
could write the above trace as something like:

function foo()
OP_ONE
OP_TWO
foo()
OP_THREE
foo()
OP_FOUR
OP_FIVE
foo()

That is, functions are just Lempel-Ziv 1977 encoding, where we
"copy" some repeated data from a previously-shown part of
data.

Thus, we can argue that programming is really a process of:

* Imagining what we want the machine to do given some particular
  input.
* Compressing that list of operations so we can more easily
  transfer the above imagined list over your puny low-bandwidth
  brain-computer interface.
  * I mean seriously, you humans still use a frikkin set of
*mechanical* levers to transfer data into a matrix of buttons?
(you don't even make the levers out of reliable metal, you
use calcium of all things??
You get what, 5 or 6 bytes per second???)
And your eyes are high-bandwidth but you then have this
complicated circuitry (that has to be ***trained for
several years*** WTF) to extract ***tiny*** amounts of ASCII
text from that high-bandwidth input stream.
Evolve faster!
(Just to be clear, I am actually also a human being and
definitely am not a piece of circuitry connected directly to
the Internet and I am not artificially limiting my output
bandwidth so as not to overwhelm you mere humans.)

See also "Kolmogorov complexity".

This becomes relevant, because the ability to loop in SCRIPT is, in
effect, the ability to *compress* the trace of operations we want
validated: a looping construct like `OP_FOLD` lets a long repeated
trace be expressed in far fewer bytes.

Re: [bitcoin-dev] Recursive covenant opposition, or the absence thereof, was Re: TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT

2022-02-26 Thread ZmnSCPxj via bitcoin-dev
Good morning again Paul,

> With sidechains, changing the ownership set requires that the sidechain 
> produce a block.
> That block requires a 32-byte commitment in the coinbase.
> What is more, if any transfers occur on the sidechain, they cannot be real 
> without a sidechain block, that has to be committed on the mainchain.

The above holds if the mainchain miners also act as sidechain validators.
If they are somehow separate (i.e. blind merge mining), then a separate 
`OP_BRIBE` transaction is also needed on top of the coinbase commitment.
Assuming the sidechain validator is using Taproot as well, this transaction 
needs a 32+1 byte txin (the previous txout pointer), a 64-byte signature, a 
32-byte copy of the sidechain commitment that the miner is being bribed to put 
in the coinbase, and a txout for any change the sidechain validator has.

This is somewhat worse than the case for channel factories, even if you assume 
that in every block at least one channel factory has to perform an onboarding 
event.
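
As a rough sketch of the per-commitment overhead implied above (the
change-output size is my own assumption; the other figures are the ones
used in this thread):

    -- Rough per-commitment onchain overhead for blind merge mining,
    -- using the approximate sizes discussed above (a back-of-the-envelope
    -- sketch, not an exact weight calculation).
    bribeTxOverheadBytes :: Int
    bribeTxOverheadBytes = txinPointer + taprootSig + commitment + changeTxout
      where
        txinPointer = 32 + 1  -- previous txout pointer (txid + index)
        taprootSig  = 64      -- Schnorr signature for the Taproot input
        commitment  = 32      -- sidechain commitment the miner is bribed to include
        changeTxout = 43      -- assumed size of a Taproot change output (8 + 1 + 34)

    -- bribeTxOverheadBytes  ==>  172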

> Thus, while changing the membership set of a channel factory is more 
> expensive (it requires a pointer to the previous txout, a 64-byte Taproot 
> signature, and a new Taproot address), continuous operation does not publish 
> any data at all.
> While in sidechains, continuous operation and ordinary payments require 
> ideally one commitment of 32 bytes per mainchain block.
> Continuous operation of the sidechain then implies a constant stream of 
> 32-byte commitments, whereas continuous operation of a channel factory, in 
> the absence of membership set changes, has 0 bytes per block being published.
>
> We assume that onboarding new members is much rarer than existing members 
> actually paying each other in an actual economy (after the first burst of 
> onboarding, new members will only arise in proportion to the birth rate, but 
> typical economic transactions occur much more often), so optimizing for the 
> continuous operation seems a better tradeoff.

Perhaps more illustratively, with channel factories, different layers have 
different actions they can perform, and the only actions that need to be 
broadcast widely are those on the onchain layer:

* Onchain: onboarding / deboarding
* Channel Factory: channel topology change
* Channel: payments

This is in contrast with merge-mined Sidechains, where *all* activity requires 
a commitment on the mainchain:

* Onchain: onboarding / deboarding, payments

While it is true that all onboarding, deboarding, and payments are summarized 
in a single commitment, notice how in LN-with-channel-factories, all onboarding 
/ deboarding is *also* summarized, but payments *have no onchain impact*, at 
all.

Without channel factories, LN is only:

* Onchain: onboarding / deboarding, channel topology change
* Channel: payments

So even without channel factories there is already a win, although again, due 
to the large numbers of channels we need, a channel factory in practice will be 
needed to get significantly better scaling.


Finally, in practice with Drivechains, starting a new sidechain requires 
implicit permission from the miners.
With LN, new channels and channel factories do not require any permission, as 
they are indistinguishable from ordinary transactions.
(the gossip system does leak that a particular UTXO is a particular published 
channel, but gossip triggers after deep confirmation, at which point it would 
be too late for miners to censor the channel opening.
The miners can censor channel closure for published channels, admittedly, but 
at least you can *start* a new channel without being censored, which you cannot 
do with Drivechain sidechains.)


Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Recursive covenant opposition, or the absence thereof, was Re: TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT

2022-02-26 Thread ZmnSCPxj via bitcoin-dev
Good morning Paul,

> ***
>
> You have emphasized the following relation: "you have to show your 
> transaction to everyone" = "thing doesn't scale".
>
> However, in LN, there is one transaction which you must, in fact, "show to 
> everyone": your channel-opener.
>
> Amusingly, in the largeblock sidechain, there is not. You can onboard using 
> only the blockspace of the SC.
> (One "rich guy" can first shift 100k coins Main-to-Side, and he can 
> henceforth onboard many users over there. Those users can then onboard new 
> users, forever.)
>
> So it would seem to me, that you are on the ropes, even by your own 
> criterion. [Footnote 1]
>
> ***
>
> Perhaps, someone will invent a way, to LN-onboard WITHOUT needing new layer1 
> bytes.
>
> If so, a "rich man" could open a LN channel, and gradually transfer it to new 
> people.
>
> Such a technique would need to meet two requirements (or, so it seems to me):
> #1: The layer1 UTXO (that defines the channel) can never change (ie, the 
> 32-bytes which define the p2sh/tapscript/covenant/whatever, must stay 
> what-they-were when the channel was opened).
> #2: The new part-owners (who are getting coins from the rich man), will have 
> new pubkeys which are NOT known, until AFTER the channel is opened and 
> confirmed on the blockchain.
>
> Not sure how you would get both #1 and #2 at the same time. But I am not up 
> to date on the latest LN research.

Yes, using channel factories.

A channel factory is an N-of-N construction, where N >= 3, which uses the same 
offchain technology to host multiple 2-of-2 channels.
We observe that, just as an offchain structure like a payment channel can host 
HTLCs, any offchain structure can host a lot of *other* contracts, because the 
offchain structure can always threaten to drop onchain to enforce any 
onchain-enforceable contract.
But an offchain structure is just another onchain contract!
Thus, an offchain structure can host many other offchain structures, and thus 
an N-of-N channel factory can host multiple 2-of-2 channels.

(I know we discussed sidechains-within-sidechains before, or at least I 
mentioned that to you in direct correspondence, this is basically that idea 
brought to its logical conclusion.)

Thus, while you still have to give *one* transaction to all Bitcoin users, that 
single transaction can back several channels, up to (N * (N - 1)) / 2.
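
A quick sanity check of that count (one 2-of-2 channel per unordered pair
of participants), as a small Haskell sketch:

    -- Number of distinct 2-of-2 channels an N-of-N factory can host:
    -- one per unordered pair of the N participants.
    channelsInFactory :: Int -> Int
    channelsInFactory n = n * (n - 1) `div` 2

    -- channelsInFactory 5   ==>  10
    -- channelsInFactory 10  ==>  45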

It does not quite match your description --- the pubkeys of the peer 
participants need to be fixed beforehand.
However, all this means is some additional pre-planning during setup, with no 
scope for dynamic membership.

At least, you cannot dynamically change membership without onchain action.
You *can* change membership sets by publishing a one-input-one-output 
transaction onchain, but with Taproot, the new membership set is representable 
in a single 32-byte Taproot address onchain (admittedly, the transaction input 
is a txin and thus has overhead 32 bytes plus 1 byte for txout index, and you 
need a 64-byte signature for Taproot as well).
The advantage is that, in the meantime, if membership set is not changed, 
payments can occur *without* any data published on the blockchain (literally 0 
data).

With sidechains, changing the ownership set requires that the sidechain produce 
a block.
That block requires a 32-byte commitment in the coinbase.
What is more, if *any* transfers occur on the sidechain, they cannot be real 
without a sidechain block, that has to be committed on the mainchain.

Thus, while changing the membership set of a channel factory is more expensive 
(it requires a pointer to the previous txout, a 64-byte Taproot signature, and 
a new Taproot address), continuous operation does not publish any data at all.
While in sidechains, continuous operation and ordinary payments require 
ideally one commitment of 32 bytes per mainchain block.
Continuous operation of the sidechain then implies a constant stream of 32-byte 
commitments, whereas continuous operation of a channel factory, in the absence 
of membership set changes, has 0 bytes per block being published.
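
As a hypothetical comparison using the figures above (a sketch that
ignores transaction framing overhead; the block and membership-change
counts below are illustrative only):

    -- Bytes a channel factory publishes onchain: a fixed cost per
    -- membership change, and 0 bytes per block otherwise.
    factoryOnchainBytes :: Int -> Int
    factoryOnchainBytes membershipChanges =
        membershipChanges * (32 + 1 + 64 + 32)
        -- previous-txout pointer + Taproot signature + new Taproot address

    -- Bytes a sidechain publishes onchain: one 32-byte commitment per
    -- mainchain block in which the sidechain is active.
    sidechainOnchainBytes :: Int -> Int
    sidechainOnchainBytes activeBlocks = activeBlocks * 32

    -- Over 1000 mainchain blocks with, say, 2 membership changes:
    -- factoryOnchainBytes 2       ==>    258
    -- sidechainOnchainBytes 1000  ==>  32000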

We assume that onboarding new members is much rarer than existing members 
actually paying each other in an actual economy (after the first burst of 
onboarding, new members will only arise in proportion to the birth rate, but 
typical economic transactions occur much more often), so optimizing for the 
continuous operation seems a better tradeoff.


Channel factories have the following nice properties:

* N-of-N means that nobody can steal from you.
  * Even with a 51% miner, nobody can steal from you as long as none of the N 
participants is the 51% miner, see the other thread.
* Graceful degradation: even if 1 of the N is offline, payments are done 
over the hosted 2-of-2s, and the balance of probability is that most of the 
2-of-2s have both participants online and payments can continue to occur.

--

The reason why channel factories do not exist *yet* is that the main offchain 
construction we 
