Re: [bitcoin-dev] proposal: new opcode OP_ZKP to enable ZKP-based spending authorization

2023-04-29 Thread ZmnSCPxj via bitcoin-dev
Good morning Weiji,

Have not completed reading, but this jumped out to me:



> 3.  Dealing with system limitation: verification keys could be very long and 
> exceed the MAX_SCRIPT_ELEMENT_SIZE (520 bytes). They could be put into 
> configurations and only use their hash in the scriptPubKey. The configuration 
> information such as new verification keys could be propagated through P2P 
> messages (we might need a separate BIP for this);

`scriptPubKey` is consensus-critical, and these new P2P messages would have to 
be consensus-critical.

As all nodes need to learn the new verification keys, we should consider how 
much resources are spent on each node just to maintain and forever remember 
verification keys.

Currently, our resource-tracking methodology is the synthetic "weight units" 
computation.
This reflects resources spent on acquiring block data, as well as on maintaining 
the UTXO database.
For instance, the "witness discount" --- where witness data (i.e. the modern 
equivalent of `scriptSig`) is charged 1/4 the weight units of other data --- 
exists because spending a UTXO reduces the resources spent on the UTXO database, 
although it still consumes resources in downloading block data (hence only a 
discount, not free or a negative/rebate).
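The accounting above can be sketched in a few lines (a simplified model of BIP 141 weight, not actual Bitcoin Core code; function names are illustrative):

```python
# Simplified model of BIP 141 weight accounting (illustrative only).
# Non-witness bytes are charged 4 weight units each; witness bytes are
# charged 1 each -- this is the "witness discount" described above.

def tx_weight(non_witness_bytes: int, witness_bytes: int) -> int:
    return 4 * non_witness_bytes + witness_bytes

def virtual_size(weight: int) -> int:
    # vbytes: weight divided by 4, rounded up.
    return (weight + 3) // 4
```

A hypothetical scheme that stores verification keys forever would, by the argument above, charge those bytes at a rate above 4, not below.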

Similarly, any propagation of verification keys would need a similar adjustment 
for weight units.

As verification keys MUST be seen by all nodes before they can validate an 
`OP_ZKP`, I would suggest that they are best included in block data (which 
similarly needs to be seen by all nodes), together with some weight-unit 
adjustment for that data, depending on how much resource verification keys 
would consume.
This is similar to how `scriptPubKey`s and amounts are included in block data, 
as those data are kept in the UTXO database, which nodes must maintain in order 
to validate the blockchain.

If verification keys are permanent, they should probably be weighted heavier 
than `scriptPubKey`s and amounts --- UTXOs can theoretically be deleted later 
by spending the UTXO (which reduces UTXO database size), while any data that 
must be permanently stored in a database must correspondingly be weighted 
higher.

Similarly, my understanding is that the CPU resources needed to validate 
generic ZKPs are higher than those required to validate ECC signatures.
Much of the current weight calculation assumes that witness data is primarily 
ECC signatures, so if ZKP witnesses translate to higher resource consumption, 
the weighting of ZKP witnesses should also be higher (i.e. greater than the 1/4 
witness-discounted weight of current witness data).

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


[bitcoin-dev] On adaptor security (in protocols)

2023-04-29 Thread AdamISZ via bitcoin-dev
Hi list,
I was motivated to look more carefully at the question of the security of using 
signature adaptors after recently getting quite enthused about the idea of 
using adaptors across N signing sessions to do a kind of multiparty swap. But 
of course security analysis is also much more important for the base case of 
2-party swapping, which is of some considerable practical importance :)
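For context, the adaptor primitive can be illustrated with a toy Schnorr construction over a deliberately insecure, tiny group (the parameters, the challenge convention, and all names here are illustrative choices of this sketch; several conventions exist in the literature):

```python
# Toy Schnorr adaptor signature over a tiny multiplicative group.
# INSECURE, illustration only: p = 467, q = 233, and g = 4 generates
# an order-q subgroup of Z_p^*.
import hashlib

p, q, g = 467, 233, 4

def H(*vals) -> int:
    data = b"|".join(str(v).encode() for v in vals)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def adaptor_sign(x, m, k, t):
    # x: secret key, k: nonce, t: adaptor secret (T = g^t is public).
    R, T = pow(g, k, p), pow(g, t, p)
    c = H(R * T % p, pow(g, x, p), m)   # challenge over the full nonce R*T
    return R, (k + c * x) % q           # pre-signature, "missing" t

def adaptor_verify(P, m, R, T, s_pre):
    c = H(R * T % p, P, m)
    return pow(g, s_pre, p) == R * pow(P, c, p) % p

def complete(s_pre, t):
    return (s_pre + t) % q              # valid Schnorr sig under nonce R*T

def extract(s, s_pre):
    return (s - s_pre) % q              # publishing s reveals t
```

The last function is the point of the primitive: anyone holding the pre-signature learns the adaptor secret t once the completed signature appears.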

There is work (referenced in Section 3 here) that's pretty substantial on "how 
secure are adaptors" (think in terms of security reductions) already from I 
guess the 2019-2021 period. But I wanted to get into scenarios of multiple 
adaptors at once or multiple signing sessions at once with the *same* adaptor 
(as mentioned above, probably this is the most important scenario).

To be clear this is the work of an amateur and is currently unreviewed - hence 
(a) me posting it here and (b) putting the paper on github so people can easily 
add specific corrections or comments if they like:

https://github.com/AdamISZ/AdaptorSecurityDoc/blob/main/adaptorsecurity.pdf

I'll note that I did the analysis only around MuSig, not MuSig2.

The penultimate ("third") case, that of "multiple signing sessions, same 
adaptor" mentioned above, proved to be the most interesting: in trying to 
reduce it to ECDLP I found an issue around sequencing. It may just be 
irrelevant, but I'd be curious to hear what others think about it.

If nothing else, I'd be very interested to hear what experts in the field have 
to say about security reductions for this primitive in the case of multiple 
concurrent signing sessions (which of course has been analyzed very carefully 
already for base MuSig(2)).

Cheers,
AdamISZ/waxwing




Sent with Proton Mail secure email.


[bitcoin-dev] proposal: new opcode OP_ZKP to enable ZKP-based spending authorization

2023-04-29 Thread Weiji Guo via bitcoin-dev
Hey everyone,


I am writing this email to propose a new opcode to enable zero-knowledge-based
spending authorization for Bitcoin. This new opcode OP_ZKP will
enable the Bitcoin network to authorize spending based on off-chain
computation, provided an acceptable proof is supplied. This will not only
equip Bitcoin Script with Turing completeness, but also enable building more
flexible payment channels, stablecoins, decentralized exchanges, DeFi, etc.
directly on the Bitcoin network, or even a layer 2.
All of this could be accomplished with a soft fork (and follow-up building).


Before any BIP could be requested, I’d like to discuss all the aspects in
more detail, to cover as many corners as possible, and hopefully to build
consensus among developers and the community.


*### 0. Overview*

Here is what I am currently considering, listed as starting points for
discussion. I hope that I have covered all major issues, but please do feel
free to add more.



   1. How it works: we add OP_ZKP and OP_ZKPVERIFY using some unused OP_NOP
   codes. OP_ZKP works similarly to OP_CHECKMULTISIG, with a number parameter
   to indicate how many public inputs are to be read;
   2. Security: to bind the spending conditions to a certain UTXO set,
   amount, and recipients, the hash of all this information shall be used as
   a public input to the proof;
   3. Dealing with system limitation: verification keys could be very long
   and exceed the MAX_SCRIPT_ELEMENT_SIZE (520 bytes). They could be put into
   configurations and only use their hash in the scriptPubKey. The
   configuration information such as new verification keys could be propagated
   through P2P messages (we might need a separate BIP for this);
   4. Scalability: arrangements for miners or computing-power vendors to
   aggregate some proofs for scalability and fee reduction. Alternatively,
   this could be accomplished with recursive proofs;
   5. ZKP scheme and curve choices;
   6. Potential uses are unlimited, although there is no apparent secret
   key (private key or seed) in play;
   7. Ecosystem implications: impacts on wallets, etc.


Below, I will introduce and discuss the above-listed items one by one. Mostly I
have to keep the discussion minimal and brief. It is still a bit lengthy;
please bear with me.


*### 1. How it works*

Consider the script below:


scriptPubKey:   <scheme> <vk_hash> OP_ZKP

scriptSig:   <proof> <public_input_1> … <public_input_n> <n>


<scheme> is only an example, standing in for an indicator of the ZKP
scheme and curve parameters. Other combinations are possible. Further
discussion is provided in section 5.


<vk_hash> - the node implementation should look up a verification key that
hashes to this value. Verification keys tend to be long and exceed the limit
imposed by MAX_SCRIPT_ELEMENT_SIZE (520 bytes), and their size might differ
across schemes/circuits, so a hash is used here. Further discussion is covered
in section 3.


<proof> refers to the proof data to be verified. Its size is also subject
to MAX_SCRIPT_ELEMENT_SIZE. This might limit our choice of ZKP scheme,
although proof data could be split into several parts.


<n> refers to how many public inputs are to be read. It should depend on
the circuit in use. However, this is *not* the actual number: there is also
an implicit public input, the hash of the transaction, calculated
according to the script context.


The evaluation is pretty straightforward. The implementation of OP_ZKP
reads in (and removes) all its parameters from the stack, calculates the
implicit public input from the transaction (UTXO inputs, amount,
recipients, etc.), and then leaves true or false on the stack after ZKP
verification. The security reason behind this implicit-public-input design
is discussed in the next section.
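A sketch of this evaluation order (the stack layout, the `vk_store` lookup, and the `verify_proof` callback are assumptions for illustration, not part of the proposal):

```python
# Illustrative sketch of OP_ZKP evaluation. The stack layout and all
# helper names are assumptions, not part of the proposal; handling of
# the scheme indicator is omitted for brevity.

def op_zkp(stack, tx_hash, vk_store, verify_proof):
    vk_hash = stack.pop()                            # from scriptPubKey
    n = stack.pop()                                  # explicit public-input count
    public_inputs = [stack.pop() for _ in range(n)]  # popped in reverse push order
    public_inputs.reverse()
    proof = stack.pop()                              # from scriptSig
    public_inputs.append(tx_hash)                    # implicit input binds proof to this tx
    vk = vk_store[vk_hash]                           # node must already know this key
    stack.append(verify_proof(vk, public_inputs, proof))
```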


OP_ZKPVERIFY works the same except for the last step, where it either leaves
nothing on the stack or fails the verification.


*### 2. Security: replay protection*

A proof should not be replayable to authorize spending of another set of UTXOs.
Therefore it is critical to bind the proof to the transaction. This is
similar to OP_CHECKSIG: the transaction hash is needed to verify the proof
for OP_ZKP. To calculate the hash, just follow what the context requires.
For a SegWit transaction, BIP 143 seems natural, and most of the time this
will be the case. But if these opcodes are indeed used outside of SegWit,
then the pre-SegWit hashing algorithm should be fine.


Binding the proof to the transaction, especially the UTXO entries, also
protects against proof malleability, since once spent, the UTXO entries are
removed from the available UTXO set.
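The implicit public input might be computed along these lines (the real serialization would be the applicable sighash algorithm, e.g. BIP 143; the layout below is only illustrative):

```python
# Illustrative computation of the implicit public input: a hash
# committing to the spent outpoints, amounts, and recipients.
# The real scheme would use the applicable sighash serialization.
import hashlib

def implicit_public_input(outpoints, amounts, output_scripts) -> bytes:
    h = hashlib.sha256()
    for txid, vout in outpoints:                # which UTXOs are spent
        h.update(bytes.fromhex(txid) + vout.to_bytes(4, "little"))
    for amount in amounts:                      # how much
        h.update(amount.to_bytes(8, "little"))
    for script in output_scripts:               # to whom
        h.update(script)
    return hashlib.sha256(h.digest()).digest()  # double-SHA256, 32 bytes
```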


On the other hand, since a proof could be calculated by anybody given the
witness, circuit developers must take this into account. Private inputs
should be kept confidential. In some other uses there might not be any
private inputs but only an authoritative signature; then it is the
authority's responsibility to sign only valid information, and the circuit
developers' responsibility to ensure proper protection. Further discussion
is def

[bitcoin-dev] MyCitadel wallet v1.3 advances time-locked multi-sigs further

2023-04-29 Thread Dr Olga Ukolova via bitcoin-dev
Dear community,


MyCitadel [1] by Pandora Prime SA is a Bitcoin wallet that was the first to
support account-based multi-sigs with time-locks and complex miniscript
descriptors. Today an updated version, 1.3, is released, extending this support
by allowing the same signer to participate in multiple spending conditions in
SegWit v0 contexts. This enables the creation of complex time-locked conditions
involving the same signers in different spending policies (for instance, a
2-of-4 multi-sig that becomes a 1-of-2 after one year).
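A degrading policy of the kind described (2-of-4 becoming 1-of-2 after roughly one year) could be written in miniscript policy language along these lines; the keys A-D and the 52560-block timelock are placeholders, and this is not MyCitadel's actual encoding:

```
or(
  thresh(2, pk(A), pk(B), pk(C), pk(D)),
  and(older(52560), or(pk(A), pk(B)))
)
```

A compiler then maps this policy to a concrete miniscript for the target context (Taproot or SegWit v0).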


Details
===

A year ago we created the desktop version of MyCitadel: it was the first
bitcoin wallet fully written in Rust (using the GTK3 framework) and also the
first to support both branched tapscripts and gracefully degrading
time-locked multi-sigs (with the latter working for both Taproot and SegWit v0).

Today we are happy to inform you about new release v1.3, with two main 
improvements:

* Creation of degrading time-locked multi-sigs in a SegWit v0 context, where 
  the same device/signer can participate in multiple conditions with different 
  timelocks and multisig thresholds. 

  NB: Previously, miniscript prevented creating SegWit policies re-using 
  the same extended key, while in Taproot it was possible [2]. 
  We have mitigated the problem by introducing account-based spending policies, 
  using different accounts from the same signer in different script branches.

* Support for exporting Bitcoin Core/BIP380-compatible wallet descriptors, 
  which include Taproot script tree and miniscript fragments.


More about MyCitadel


MyCitadel was designed with an approach that avoids touching private keys and 
seeds. Unlike many other wallets, it is based not on BDK but on an
alternative stack of Rust libraries called the "descriptor wallet library" [3], 
created by the LNP/BP Standards Association [4]. This library provides 
compile-time, type-level guarantees avoiding the usage of private keys in 
wallet descriptors and miniscript fragments. The wallet works with hardware 
signers, and can also produce and export PSBT files, which may be signed 
elsewhere (including on air-gapped devices, or even with command-line 
hot-wallet signers, if needed).


Accessing release
=

If you want to check it out, play with it, or leave feedback, please feel
free to go to the release GitHub discussion [5], which also links to the 
released binaries [6].


Acknowledgements


This release was made possible with the help we received from the NYM 
project team, which provided both valuable ideas and financial support 
for continued MyCitadel development.


Thank you.

Regards,
Olga Ukolova
CEO Pandora Prime SA, Switzerland
https://mycitadel.io
Twitter: @mycitadel_io

[1]: https://mycitadel.io
[2]: 
https://github.com/rust-bitcoin/rust-miniscript/issues/338#issuecomment-1137750428
[3]: https://github.com/BP-WG/descriptor-wallet
[4]: https://www.lnp-bp.org
[5]: https://github.com/mycitadel/mycitadel-desktop/discussions/90
[6]: https://github.com/mycitadel/mycitadel-desktop/releases/tag/v1.3.0


Re: [bitcoin-dev] Merkleize All The Things

2023-04-29 Thread Johan Torås Halseth via bitcoin-dev
Hi, Salvatore.

I find this proposal very interesting, especially since you can seemingly
achieve such powerful capabilities with such simple opcodes.

I'm still trying to grok what this would look like on-chain (forget
about the off-chain part for now) if we were to play out such a
computation.

Let's say you have a simple game like "one player tic-tac-toe" with
only two tiles: [ _ | _ ]. The player wins if he can get two in a row
(pretty easy game tbh).

Could you give a complete example of how you would encode one such state
transition (going from [ X, _ ] -> [ X, X ], for instance) in Bitcoin
Script?

Feel free to choose a different game or program if you prefer :)

Thanks!
Johan



On Tue, Dec 13, 2022 at 2:08 PM Billy Tetrud via bitcoin-dev
 wrote:
>
> Re Verkle trees, that's a very interesting construction that would be super 
> useful as a tool for something like Utreexo. A potentially substantial 
> downside is that it seems the cryptography used to get those nice properties 
> of Verkle trees isn't quantum safe. While a lot of things in Bitcoin seem to 
> be going down the path of quantum-unsafe (I'm looking at you, taproot), there 
> are still a lot of people who think quantum safety is important in a lot of 
> contexts.
>
> On Thu, Dec 1, 2022 at 5:52 AM Salvatore Ingala via bitcoin-dev 
>  wrote:
>>
>> Hello Rijndael,
>>
>>
>>
>> On Wed, 30 Nov 2022 at 23:09, Rijndael  wrote:
>>>
>>> Hello Salvatore,
>>>
>>> I found my answer re-reading your original post:
>>> > During the arbitration phase (say at the i-th leaf node of M_T), any 
>>> > party can win the challenge by providing correct values for tr_i = (st_i, 
>>> > op_i, st_{i + 1}). Crucially, only one party is able to provide correct 
>>> > values, and Script can verify that indeed the state moves from st_i to 
>>> > st_{i + 1} by executing op_i. The challenge is over.
>>
>> You are correct, the computation step encoded in a leaf needs to be simple 
>> enough for Script to verify it.
>>
>> For the academic purpose of proving completeness (that is, any computation 
>> can be successfully "proved" by the availability of the corresponding fraud 
>> proof), one can imagine reducing the computation all the way down to a 
>> circuit, where each step (leaf) is as simple as what can be checked with 
>> {OP_NOT, OP_BOOLAND, OP_BOOLOR, OP_EQUAL}.
>>
>> In practice, you would want to utilize Script to its fullest, so for example 
>> you wouldn't compile a SHA256 computation to something else – you'd rather 
>> use OP_SHA256 directly.
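A leaf check of the kind described, reduced to pure Python for illustration (the on-chain version would use the Script opcodes named above; the function and its argument layout are assumptions of this sketch):

```python
# Illustrative leaf check for a single circuit step: given a claimed
# transition (st_i, op_i, st_next), verify that executing op_i on st_i
# indeed yields st_next. This mirrors what Script could check with
# OP_NOT / OP_BOOLAND / OP_BOOLOR / OP_EQUAL.

def check_leaf(st_i: bool, op_i: str, st_next: bool, operand: bool = False) -> bool:
    ops = {
        "NOT": lambda a, b: not a,
        "AND": lambda a, b: a and b,
        "OR":  lambda a, b: a or b,
    }
    return ops[op_i](st_i, operand) == st_next
```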
>>
>>>
>>> That leads to a different question: Alice initially posts a 
>>> commitment to an execution trace of `f(x) = y`, `x`, and `y`. Bob disagrees 
>>> with `y` so starts the challenge protocol. Is there a commitment to `f`? In 
>>> other words, the dispute protocol (as I read it) finds the leftmost step in 
>>> Alice and Bob's execution traces that differ, and then rewards the coins to 
>>> the participant whose "after-value" is computed by the step's operation 
>>> applied to the "before value". But if the participants each present valid 
>>> steps but with different operations, who wins? In other words, Alice could 
>>> present [64, DECREMENT, 63] and Bob could present [64, INCREMENT, 65]. 
>>> Those steps don't match, but both are valid. Is there something to ensure 
>>> that before the challenge protocol starts, that the execution trace that 
>>> Alice posts is for the right computation and not a different computation 
>>> that yields a favorable result for her (and for which she can generate a 
>>> valid merkle tree)?
>>
>>
>> The function f is already hard-coded in the contract itself, by means of the 
>> tree of scripts − that already commits to the possible futures. Therefore, 
>> once you are at state S14, you know that you are verifying the 6th step of 
>> the computation; and the operation in the 6th step of the computation 
>> depends solely on f, not its inputs. In fact, you made me realize that I 
>> could drop op_i from the i-th leaf commitment, and just embed the 
>> information in the Script of that corresponding state.
>>
>> Note that the states S0 to S14 of the 256x game are not _all_ the possible 
>> states, but only the ones that occurred in that execution of the contract 
>> (corresponding to a path from the root to the leaf of the Merkle tree of the 
>> computation trace), and therefore the ones that materialized in a UTXO. 
>> Different choices made by the parties (by providing different data, and 
>> therefore choosing different branches) would lead to a different leaf, and 
>> therefore to different (but in a certain sense "symmetric") states.
>>
>> 
>>
>> Since we are talking about the fact that f is committed to in the contract, 
>> I'll take the chance to extend on this a bit with a fun construction on top.
>> It is well-known in the academic literature of state channels that you can 
>> create contracts where even the function ("program", or "contract") is not 
>> decided when

Re: [bitcoin-dev] TARO Protocol metadata BIP proposal

2023-04-29 Thread Andrew Melnychuk Oseen via bitcoin-dev
Big fan of this. I don't have the technical expertise to suggest much, but I 
think that is a really good start for a foundation of bearer instruments.

-Andrew

Sent with [Proton Mail](https://proton.me/) secure email.

--- Original Message ---
On Friday, April 21st, 2023 at 2:46 AM, Adam Ivansky via bitcoin-dev 
 wrote:

> Hi all / happy Friday ,
>
> I would like to propose a BIP for the metadata structure of assets traded on 
> the TARO Protocol running on the Bitcoin blockchain. A new bip-taro.mediawiki file.
>
> The BIP for TARO is here: 
> https://github.com/Roasbeef/bips/blob/bip-taro/bip-taro.mediawiki . The TARO 
> BIP does not explicitly talk about the format of the assets' metadata. However, 
> this is something we will have to agree on if we are to start trading NFTs, 
> stablecoins, and different synthetic assets such as tokenized stocks/options.
>
> For the past few months I have been operating a wallet for TARO called 
> Tiramisu Wallet on testnet ( https://testnet.tarowallet.net/ ), and I was able 
> to put together a list of fields that the metadata should have. This is a 
> result of my testing different use cases for the protocol, as well as 
> external users coming in and minting different assets.
>
> My observation is that users care a lot about the ticker, asset name, 
> description, image representing the asset, info on who minted the asset.
>
> For this reason I would like to propose a BIP for TARO Protocol asset 
> metadata. I think this should be separate from the TARO BIP as the format of 
> asset metadata might evolve depending on the real-life use cases and what 
> assets end up being minted / traded on TARO.
>
> I am proposing that the metadata is structured as a JSON stored as a string 
> and that it is formatted as follows:
>
> {
>   "ticker":         // [optional] Fungible assets should have a ticker
>   "type":           // Stablecoin | Image | Video | Data ... type of the asset
>   "description":    // [mandatory] Short description explaining how the asset works
>   "data":           // [optional] Base64-formatted image data: an image or icon representing the asset
>   "hash_data":      // [optional] Hash of the data that the asset represents
>   "external_url":   // [optional] External URL to the thing that the asset represents
>   "attributes": {   // [optional] Additional attributes of the asset
>     "collection_name":
>     ...
>   }
>   "minter_info": {  // [optional] Information about the entity that minted the asset
>     "name":
>     "email":
>     "phone":
>     "telegram":
>     "website":
>   }
> }
>
> This was loosely inspired by the metadata standard used by OpenSea 
> ( https://docs.opensea.io/docs/metadata-standards ), only in the case of TARO 
> we have less of an incentive to keep the metadata small, as this data is not 
> written to the blockchain directly. This is why I think we should start 
> including the actual image data in the metadata.
>
> Tiramisu wallet is on testnet right now and uses some of these JSON fields.
>
> Please let me know how you feel about this.
>
> PS: I am following the manual from here 
> https://github.com/Roasbeef/bips/tree/bip-taro , which says my first step 
> should be sending an email to this mailing list.
>
> Best regards,
>
> Adam Ivansky
>
> Founder of Tiramisu Wallet
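A minimal validator for the metadata sketch quoted above could look like this (the field set is an assumption drawn from the quoted proposal, not a finalized standard):

```python
# Minimal check of the proposed TARO metadata fields (schema assumed
# from the quoted proposal; not a finalized standard).
import json

MANDATORY = {"description"}
OPTIONAL = {"ticker", "type", "data", "hash_data", "external_url",
            "attributes", "minter_info"}

def validate_metadata(raw: str) -> list:
    """Return a list of problems; an empty list means the metadata looks OK."""
    meta = json.loads(raw)
    problems = [f"missing mandatory field: {k}" for k in MANDATORY - meta.keys()]
    problems += [f"unknown field: {k}"
                 for k in meta.keys() - MANDATORY - OPTIONAL]
    return problems
```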