Re: [bitcoin-dev] Annex Purpose Discussion: OP_ANNEX, Turing Completeness, and other considerations
Hi Jeremy,

> I've seen some discussion of what the Annex can be used for in Bitcoin. For example, some people have discussed using the annex as a data field for something like CHECKSIGFROMSTACK type stuff (additional authenticated data) or for something like delegation (the delegation is to the annex). I think before devs get too excited, we should have an open discussion about what this is actually for, and figure out if there are any constraints to using it however we may please.

I think one interesting purpose of the annex is to serve as a transaction field extension, where we assign new consensus validity rules to the annex payloads. One could think about new types of locks, e.g. where a transaction's inclusion is constrained until the chain's `ChainWork` exceeds the annex payload value. This could be useful in case of contentious forks, where you want your transaction to confirm only once enough work is accumulated and height isn't a reliable indicator anymore.

Or a relative timelock where the endpoint is the presence of a state number encumbering the spent transaction. This could be useful in the context of payment pools, where the users' withdraw transactions are all encumbered by a bip68 relative timelock, as you don't know which one is going to confirm first, but where you don't care about enforcement of the timelocks once the contestation delay has elapsed once and no higher-state update transaction has confirmed.

Of course, we could reuse the nSequence field for some of those new types of locks, though we would lose the flexibility of combining multiple locks encumbering the same input.

Another use for the annex is to place the SIGHASH_GROUP group count value there. One advantage over placing the value as a script stack item could be to allow interdependent validity between annex payloads, where other annex payloads reuse the group count value as part of their own semantics.

Antoine

On Fri, Mar 4, 2022 at 18:22, Jeremy Rubin via bitcoin-dev <bitcoin-dev@lists.linuxfoundation.org> wrote:

> I've seen some discussion of what the Annex can be used for in Bitcoin. For example, some people have discussed using the annex as a data field for something like CHECKSIGFROMSTACK type stuff (additional authenticated data) or for something like delegation (the delegation is to the annex). I think before devs get too excited, we should have an open discussion about what this is actually for, and figure out if there are any constraints to using it however we may please.
>
> The BIP is tight-lipped about its purpose, saying mostly only:
>
> *What is the purpose of the annex? The annex is a reserved space for future extensions, such as indicating the validation costs of computationally expensive new opcodes in a way that is recognizable without knowing the scriptPubKey of the output being spent. Until the meaning of this field is defined by another softfork, users SHOULD NOT include annex in transactions, or it may lead to PERMANENT FUND LOSS.*
>
> *The annex (or the lack of thereof) is always covered by the signature and contributes to transaction weight, but is otherwise ignored during taproot validation.*
>
> *Execute the script, according to the applicable script rules[11], using the witness stack elements excluding the script s, the control block c, and the annex a if present, as initial stack.*
>
> Essentially, I read this as saying: The annex is the ability to pad a transaction with an additional string of 0's that contribute to the virtual weight of a transaction, but has no validation cost itself. Therefore, somehow, if you needed to validate more signatures than 1 per 50 virtual weight units, you could add padding to buy extra gas.
> Or, we might somehow make the witness a small language (e.g., run length encoded zeros) such that we can very quickly compute an equivalent number of zeros to 'charge' without actually consuming the space but still consuming a linearizable resource... or something like that. We might also e.g. want to use the annex to reserve something else, like the amount of memory. In general, we are using the annex to express a resource constraint efficiently. This might be useful for e.g. Simplicity one day.
>
> Generating an Annex: One should write a tracing executor for a script, run it, measure the resource costs, and then generate an annex that captures any externalized costs.
>
> ---
>
> Introducing OP_ANNEX: Suppose there were some sort of annex pushing opcode, OP_ANNEX which puts the annex on the stack as well as a 0 or 1 (to differentiate annex is 0 from no annex, e.g. 0 1 means annex was 0 and 0 0 means no annex). This would be equivalent to something based on flag> OP_TXHASH OP_TXHASH.
>
> Now suppose that I have a computation that I am running in a script as follows:
>
> OP_ANNEX
> OP_IF
> `some operation that requires annex to be <1>`
> OP_ELSE
> OP_SIZE
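The OP_ANNEX push convention quoted above (a value plus a presence flag, so that "annex is 0" and "no annex" stay distinguishable) could be sketched roughly as follows. OP_ANNEX is a hypothetical opcode, and the dummy value pushed in the absent case is an assumption:

```python
def op_annex(stack, annex):
    """Hypothetical OP_ANNEX semantics as described in the quoted text:
    push the annex, then a 0/1 presence flag, so "0 1" means the annex
    was present and zero, while "0 0" means there was no annex at all."""
    if annex is None:
        stack.append(b"")  # dummy value (assumption)
        stack.append(0)    # flag: no annex
    else:
        stack.append(annex)
        stack.append(1)    # flag: annex present (even if zero/empty)

stack = []
op_annex(stack, None)
# top of stack is now the flag 0, distinguishing "no annex" from "annex == 0"
```

The two-item push is what lets a script branch on presence (via the flag) separately from branching on the annex's value.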
Re: [bitcoin-dev] Annex Purpose Discussion: OP_ANNEX, Turing Completeness, and other considerations
On Sat, Mar 05, 2022 at 12:20:02PM +, Jeremy Rubin via bitcoin-dev wrote:
> On Sat, Mar 5, 2022 at 5:59 AM Anthony Towns wrote:
> > The difference between information in the annex and information in either a script (or the input data for the script that is the rest of the witness) is (in theory) that the annex can be analysed immediately and unconditionally, without necessarily even knowing anything about the utxo being spent.
> I agree that should happen, but there are cases where this would not work. E.g., imagine OP_LISP_EVAL + OP_ANNEX... and then you do delegation via the thing in the annex. Now the annex can be executed as a script.

You've got the implication backwards: the benefit isn't that the annex *can't* be used as/in a script; it's that it *can* be used *without* having to execute/analyse a script (and without even having to load the utxo being spent).

How big a benefit that is might be debatable -- it's only a different ordering of the work you have to do to be sure the transaction is valid; it doesn't reduce the total work. And I think you can easily design invalid transactions that will maximise the work required to establish the tx is invalid, no matter what order you validate things.

> Yes, this seems tough to do without redefining checksig to allow partial annexes.

"Redefining checksig to allow X" in taproot means "defining a new pubkey format that allows a new sighash that allows X", which, if it turns out to be necessary/useful, is entirely possible. It's not sensible to do what you suggest *now* though, because we don't have a spec of how a partial annex might look.

> Hence thinking we should make our current checksig behavior require it be 0,

Signatures already require the annex to not be present. If you personally want to do that for every future transaction you sign off on, you already can.
> > It seems like a good place for optimising SIGHASH_GROUP (allowing a group of inputs to claim a group of outputs for signing, but not allowing inputs from different groups to ever claim the same output; so that each output is hashed at most once for this purpose) -- since each input's validity depends on the other inputs' state, it's better to be able to get at that state as easily as possible rather than having to actually execute other scripts before you can tell if your script is going to be valid.
> I think SIGHASH_GROUP could be some sort of mutable stack value, not ANNEX.

The annex is already a stack value, and the SIGHASH_GROUP parameter cannot be mutable since it will break the corresponding signature, and (in order to ensure validating SIGHASH_GROUP signatures doesn't require hashing the same output multiple times) also impacts SIGHASH_GROUP signatures from other inputs.

> you want to be able to compute what range you should sign, and then the signature should cover the actual range not the argument itself.

The value that SIGHASH_GROUP proposes putting in the annex is just an indication of whether (a) this input is using the same output group as the previous input; or else (b) how many outputs are in this input's output group. The signature naturally commits to that value because it's signing all the outputs in the group anyway.

> Why sign the annex literally?

To prevent it from being third-party malleable. When there is some meaning assigned to the annex then perhaps it will make sense to add some more granular way of accessing it via script, but until then, committing to the whole thing is the best option possible, since it still allows some potential uses of the annex without having to define a new sighash.
Note that signing only part of the annex means that you probably reintroduce the quadratic hashing problem -- that is, with a script of length X and an annex of length Y, you may have to hash O(X*Y) bytes instead of O(X+Y) bytes (because every X/k bytes of the script selects a different Y/j subset of the annex to sign).

> Why require that all signatures in one output sign the exact same digest? What if one wants to sign for value and another for value + change?

You can already have one signature for value and one for value+change: use SIGHASH_SINGLE for the former, and SIGHASH_ALL for the latter. SIGHASH_GROUP is designed for the case where the "value" goes to multiple places.

> > > Essentially, I read this as saying: The annex is the ability to pad a transaction with an additional string of 0's
> > If you wanted to pad it directly, you can do that in script already with a PUSH/DROP combo.
> You cannot, because the push/drop would not be signed and would be malleable.

If it's a PUSH, then it's in the tapscript and committed to by the scriptPubKey, and not malleable. There's currently no reason to have padding specifiable at spend time -- you know when you're writing the script whether the spender can reuse the same signature for multiple CHECKSIG ops, because the only way to do that is to
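The quadratic-hashing point above can be made concrete with a toy cost model. The figures and the worst case (every signature selecting a distinct annex subset) are illustrative assumptions, not consensus rules:

```python
def bytes_hashed_whole_annex(script_len, annex_len, num_sigs):
    # If every signature commits to the same whole annex, its hash
    # (or midstate) can be computed once and shared: O(X + Y).
    return script_len + annex_len

def bytes_hashed_partial_annex(script_len, annex_len, num_sigs):
    # Worst case for partial-annex signing: each of the k signatures in
    # the script selects a different subset of the annex, so nothing is
    # shareable across them: O(X + k*Y), i.e. O(X*Y) when k scales with X.
    return script_len + num_sigs * annex_len

X, Y = 10_000, 10_000
assert bytes_hashed_whole_annex(X, Y, 200) == 20_000
assert bytes_hashed_partial_annex(X, Y, 200) == 2_010_000  # ~100x the work
```

This is the same shape of blow-up that pre-segwit SIGHASH_ALL had, where each input re-hashed (a variant of) the whole transaction.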
Re: [bitcoin-dev] Annex Purpose Discussion: OP_ANNEX, Turing Completeness, and other considerations
Hi Christian,

For that purpose I'd recommend having a checksig extra -- a "checksigextra" that allows N extra data items on the stack in addition to the txn hash. This would allow signers to sign some additional arguments, but would not be an annex, since the values would not have any consensus meaning (whereas the annex is designed to have one). I've previously discussed this for eltoo with giving signatures an explicit extra seqnum, but it can be generalized as above.

W.r.t. pinning, if the annex is a pure function of the script execution, then there's no issue with letting it be mutable (e.g. for a validation cost hint). But permitting both validation cost commitments and stack readability is asking too much of the annex IMO.

On Sun, Mar 6, 2022, 1:13 PM Christian Decker wrote:

> One thing that we recently stumbled over was that we use CLTV in eltoo not for timelock but to have a comparison between two committed numbers coming from the spent and the spending transaction (ordering requirement of states). We couldn't use a number on the stack of the scriptSig as the signature doesn't commit to it, which is why we commandeered nLocktime values that are already in the past.
>
> With the annex we could have a way to get a committed-to number we can pull onto the stack, and free the nLocktime for other uses again. It'd also be less roundabout to explain in classes :-)
>
> An added benefit would be that update transactions, being singlesig, can be combined into larger transactions by third parties or watchtowers to amortize some of the fixed cost of getting them confirmed, allowing on-path-aggregation basically (each node can group and aggregate transactions as they forward them). This is currently not possible since all the transactions that we'd like to batch would have to have the same nLocktime at the moment.
>
> So I think it makes sense to partition the annex into a global annex shared by the entire transaction, and one for each input.
> Not sure if one for outputs would also make sense, as it'd bloat the utxo set and could be emulated by using the input that is spending it.
>
> Cheers,
> Christian
>
> On Sat, 5 Mar 2022, 07:33 Anthony Towns via bitcoin-dev, <bitcoin-dev@lists.linuxfoundation.org> wrote:
>
>> On Fri, Mar 04, 2022 at 11:21:41PM +, Jeremy Rubin via bitcoin-dev wrote:
>> > I've seen some discussion of what the Annex can be used for in Bitcoin.
>>
>> https://www.erisian.com.au/meetbot/taproot-bip-review/2019/taproot-bip-review.2019-11-12-19.00.log.html
>> includes some discussion on that topic from the taproot review meetings.
>>
>> The difference between information in the annex and information in either a script (or the input data for the script that is the rest of the witness) is (in theory) that the annex can be analysed immediately and unconditionally, without necessarily even knowing anything about the utxo being spent.
>>
>> The idea is that we would define some simple way of encoding (multiple) entries into the annex -- perhaps a tag/length/value scheme like lightning uses; maybe if we add a lisp scripting language to consensus, we just reuse the list encoding from that? -- at which point we might use one tag to specify that a transaction uses advanced computation, and needs to be treated as having a heavier weight than its serialized size implies; but we could use another tag for per-input absolute locktimes; or another tag to commit to a past block height having a particular hash.
>> It seems like a good place for optimising SIGHASH_GROUP (allowing a group of inputs to claim a group of outputs for signing, but not allowing inputs from different groups to ever claim the same output; so that each output is hashed at most once for this purpose) -- since each input's validity depends on the other inputs' state, it's better to be able to get at that state as easily as possible rather than having to actually execute other scripts before you can tell if your script is going to be valid.
>>
>> > The BIP is tight-lipped about its purpose
>>
>> BIP341 only reserves an area to put the annex; it doesn't define how it's used or why it should be used.
>>
>> > Essentially, I read this as saying: The annex is the ability to pad a transaction with an additional string of 0's
>>
>> If you wanted to pad it directly, you can do that in script already with a PUSH/DROP combo.
>>
>> The point of doing it in the annex is you could have a short byte string, perhaps something like "0x010201a4" saying "tag 1, data length 2 bytes, value 420" and have the consensus interpretation of that be "this transaction should be treated as if it's 420 weight units more expensive than its serialized size", while only increasing its witness size by 6 bytes (annex length, annex flag, and the four bytes above). Adding 6 bytes for a 426 weight unit increase seems much better than adding 426
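The arithmetic in the quoted weight-bump example works out as follows (byte counts per the email: one byte each for annex length and annex flag, plus the four TLV bytes; the tag semantics are the email's hypothetical):

```python
tlv = bytes.fromhex("010201a4")   # tag 1, data length 2, value 0x01a4 = 420
annex_overhead = 2                # annex length byte + annex flag byte
witness_bytes_added = annex_overhead + len(tlv)
declared_extra_weight = 420       # what the tag tells consensus to add

assert witness_bytes_added == 6                            # "6 bytes"
assert witness_bytes_added + declared_extra_weight == 426  # "426 weight unit increase"
```

So the 426-unit total is the 420 declared units plus the 6 witness bytes that carry the declaration, versus pushing 426 literal padding bytes.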
Re: [bitcoin-dev] Annex Purpose Discussion: OP_ANNEX, Turing Completeness, and other considerations
One thing that we recently stumbled over was that we use CLTV in eltoo not for timelock but to have a comparison between two committed numbers coming from the spent and the spending transaction (ordering requirement of states). We couldn't use a number on the stack of the scriptSig as the signature doesn't commit to it, which is why we commandeered nLocktime values that are already in the past.

With the annex we could have a way to get a committed-to number we can pull onto the stack, and free the nLocktime for other uses again. It'd also be less roundabout to explain in classes :-)

An added benefit would be that update transactions, being singlesig, can be combined into larger transactions by third parties or watchtowers to amortize some of the fixed cost of getting them confirmed, allowing on-path-aggregation basically (each node can group and aggregate transactions as they forward them). This is currently not possible since all the transactions that we'd like to batch would have to have the same nLocktime at the moment.

So I think it makes sense to partition the annex into a global annex shared by the entire transaction, and one for each input. Not sure if one for outputs would also make sense, as it'd bloat the utxo set and could be emulated by using the input that is spending it.

Cheers,
Christian

On Sat, 5 Mar 2022, 07:33 Anthony Towns via bitcoin-dev, <bitcoin-dev@lists.linuxfoundation.org> wrote:

> On Fri, Mar 04, 2022 at 11:21:41PM +, Jeremy Rubin via bitcoin-dev wrote:
> > I've seen some discussion of what the Annex can be used for in Bitcoin.
>
> https://www.erisian.com.au/meetbot/taproot-bip-review/2019/taproot-bip-review.2019-11-12-19.00.log.html
> includes some discussion on that topic from the taproot review meetings.
> The difference between information in the annex and information in either a script (or the input data for the script that is the rest of the witness) is (in theory) that the annex can be analysed immediately and unconditionally, without necessarily even knowing anything about the utxo being spent.
>
> The idea is that we would define some simple way of encoding (multiple) entries into the annex -- perhaps a tag/length/value scheme like lightning uses; maybe if we add a lisp scripting language to consensus, we just reuse the list encoding from that? -- at which point we might use one tag to specify that a transaction uses advanced computation, and needs to be treated as having a heavier weight than its serialized size implies; but we could use another tag for per-input absolute locktimes; or another tag to commit to a past block height having a particular hash.
>
> It seems like a good place for optimising SIGHASH_GROUP (allowing a group of inputs to claim a group of outputs for signing, but not allowing inputs from different groups to ever claim the same output; so that each output is hashed at most once for this purpose) -- since each input's validity depends on the other inputs' state, it's better to be able to get at that state as easily as possible rather than having to actually execute other scripts before you can tell if your script is going to be valid.
>
> > The BIP is tight-lipped about its purpose
>
> BIP341 only reserves an area to put the annex; it doesn't define how it's used or why it should be used.
>
> > Essentially, I read this as saying: The annex is the ability to pad a transaction with an additional string of 0's
>
> If you wanted to pad it directly, you can do that in script already with a PUSH/DROP combo.
> The point of doing it in the annex is you could have a short byte string, perhaps something like "0x010201a4" saying "tag 1, data length 2 bytes, value 420" and have the consensus interpretation of that be "this transaction should be treated as if it's 420 weight units more expensive than its serialized size", while only increasing its witness size by 6 bytes (annex length, annex flag, and the four bytes above). Adding 6 bytes for a 426 weight unit increase seems much better than adding 426 witness bytes.
>
> The example scenario is that if there was an opcode to verify a zero-knowledge proof, eg I think bulletproof range proofs are something like 10x longer than a signature, but require something like 400x the validation time. Since checksig has a validation weight of 50 units, a bulletproof verify might have a 400x greater validation weight, ie 20,000 units, while your witness data is only 650 bytes serialized. In that case, we'd need to artificially bump the weight of your transaction up by the missing 19,350 units, or else an attacker could fill a block with perhaps 6000 bulletproofs costing the equivalent of 120M signature operations, rather than the 80k sigops we currently expect as the maximum in a block. Seems better to just have "0x01024b96" stuck in the annex, than 19kB of zeroes.
>
> > Introducing OP_ANNEX: Suppose there were some sort of annex
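Plugging in the numbers from the quoted bulletproof example confirms they are internally consistent. All figures are the email's illustrative assumptions (50 validation-weight units per checksig, a 400x costlier verify, a 650-byte serialized proof, Bitcoin's 4M-weight block limit):

```python
CHECKSIG_VALIDATION_WEIGHT = 50                         # units per signature check
bulletproof_weight = 400 * CHECKSIG_VALIDATION_WEIGHT   # 400x a checksig = 20,000 units
serialized_bytes = 650                                  # actual witness size

# The validation weight the transaction is "missing", which the annex
# entry would declare on its behalf:
weight_bump = bulletproof_weight - serialized_bytes
assert weight_bump == 19_350

# "0x01024b96": tag 1, length 2, value 0x4b96 -- exactly that bump.
assert int.from_bytes(bytes.fromhex("4b96"), "big") == 19_350

# Without the bump, a 4M-weight block packs roughly 6000 such proofs:
assert 4_000_000 // serialized_bytes == 6153
```

A 4-byte annex entry thus replaces ~19kB of literal zero padding while declaring the same validation cost.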
Re: [bitcoin-dev] Annex Purpose Discussion: OP_ANNEX, Turing Completeness, and other considerations
We'd have to be very careful with this kind of third-party malleability, since it'd make transaction pinning trivial without even requiring the ability to spend one of the outputs (which current CPFP-based pinning attacks require).

Cheers,
Christian

On Sat, 5 Mar 2022, 00:33 ZmnSCPxj via bitcoin-dev, <bitcoin-dev@lists.linuxfoundation.org> wrote:

> Good morning Jeremy,
>
> Umm `OP_ANNEX` seems boring
>
> > It seems like one good option is if we just go on and banish the OP_ANNEX. Maybe that solves some of this? I sort of think so. It definitely seems like we're not supposed to access it via script, given the quote from above:
> >
> > Execute the script, according to the applicable script rules[11], using the witness stack elements excluding the script s, the control block c, and the annex a if present, as initial stack.
> >
> > If we were meant to have it, we would have not nixed it from the stack, no? Or would have made the opcode for it as a part of taproot...
> >
> > But recall that the annex is committed to by the signature.
> >
> > So it's only a matter of time till we see some sort of Cat and Schnorr Tricks III the Annex Edition that lets you use G cleverly to get the annex onto the stack again, and then it's like we had OP_ANNEX all along, or without CAT, at least something that we can detect that the value has changed and cause this satisfier looping issue somehow.
>
> ... Never mind, I take that back.
>
> Hmmm.
>
> Actually if the Annex is supposed to be ***just*** for adding weight to the transaction so that we can do something like increase limits on SCRIPT execution, then it does *not* have to be covered by any signature. It would then be third-party malleable, but suppose we have a "valid" transaction on the mempool where the Annex weight is the minimum necessary:
>
> * If a malleated transaction has a too-low Annex, then the malleated transaction fails validation and the current transaction stays in the mempool.
> * If a malleated transaction has a higher Annex, then the malleated transaction has lower feerate than the current transaction and cannot evict it from the mempool.
>
> Regards,
> ZmnSCPxj
>
> ___
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
Re: [bitcoin-dev] Annex Purpose Discussion: OP_ANNEX, Turing Completeness, and other considerations
On Sat, Mar 5, 2022 at 5:59 AM Anthony Towns wrote:

> On Fri, Mar 04, 2022 at 11:21:41PM +, Jeremy Rubin via bitcoin-dev wrote:
> > I've seen some discussion of what the Annex can be used for in Bitcoin.
>
> https://www.erisian.com.au/meetbot/taproot-bip-review/2019/taproot-bip-review.2019-11-12-19.00.log.html
> includes some discussion on that topic from the taproot review meetings.
>
> The difference between information in the annex and information in either a script (or the input data for the script that is the rest of the witness) is (in theory) that the annex can be analysed immediately and unconditionally, without necessarily even knowing anything about the utxo being spent.

I agree that should happen, but there are cases where this would not work. E.g., imagine OP_LISP_EVAL + OP_ANNEX... and then you do delegation via the thing in the annex. Now the annex can be executed as a script.

> The idea is that we would define some simple way of encoding (multiple) entries into the annex -- perhaps a tag/length/value scheme like lightning uses; maybe if we add a lisp scripting language to consensus, we just reuse the list encoding from that? -- at which point we might use one tag to specify that a transaction uses advanced computation, and needs to be treated as having a heavier weight than its serialized size implies; but we could use another tag for per-input absolute locktimes; or another tag to commit to a past block height having a particular hash.

Yes, this seems tough to do without redefining checksig to allow partial annexes. Hence thinking we should make our current checksig behavior require it be 0; future operations should be engineered with a specific structured annex in mind.
> It seems like a good place for optimising SIGHASH_GROUP (allowing a group of inputs to claim a group of outputs for signing, but not allowing inputs from different groups to ever claim the same output; so that each output is hashed at most once for this purpose) -- since each input's validity depends on the other inputs' state, it's better to be able to get at that state as easily as possible rather than having to actually execute other scripts before you can tell if your script is going to be valid.

I think SIGHASH_GROUP could be some sort of mutable stack value, not ANNEX. You want to be able to compute what range you should sign, and then the signature should cover the actual range, not the argument itself.

Why sign the annex literally? Why require that all signatures in one output sign the exact same digest? What if one wants to sign for value and another for value + change?

> > The BIP is tight-lipped about its purpose
>
> BIP341 only reserves an area to put the annex; it doesn't define how it's used or why it should be used.

It does define how it's used: checksig must commit to it. Were there no opcodes dependent on it, I would agree, and that would be preferable.

> > Essentially, I read this as saying: The annex is the ability to pad a transaction with an additional string of 0's
>
> If you wanted to pad it directly, you can do that in script already with a PUSH/DROP combo.

You cannot, because the push/drop would not be signed and would be malleable. The annex is not malleable, so it can be used for this as authenticated padding.
> The point of doing it in the annex is you could have a short byte string, perhaps something like "0x010201a4" saying "tag 1, data length 2 bytes, value 420" and have the consensus interpretation of that be "this transaction should be treated as if it's 420 weight units more expensive than its serialized size", while only increasing its witness size by 6 bytes (annex length, annex flag, and the four bytes above). Adding 6 bytes for a 426 weight unit increase seems much better than adding 426 witness bytes.

Yes, that's what I say in the next sentence:

*Or, we might somehow make the witness a small language (e.g., run length encoded zeros) such that we can very quickly compute an equivalent number of zeros to 'charge' without actually consuming the space but still consuming a linearizable resource... or something like that.*

so I think we concur on that.

> > Introducing OP_ANNEX: Suppose there were some sort of annex pushing opcode, OP_ANNEX which puts the annex on the stack
>
> I think you'd want to have a way of accessing individual entries from the annex, rather than the annex as a single unit.

Or OP_ANNEX + OP_SUBSTR + OP_POVARINTSTR? Then you can just do 2 pops for the length and the tag and then get the data.

> > Now suppose that I have a computation that I am running in a script as follows:
> >
> > OP_ANNEX
> > OP_IF
> > `some operation that requires annex to be <1>`
> > OP_ELSE
> > OP_SIZE
> > `some operation that requires annex to be len(annex) + 1 or does a checksig`
> > OP_ENDIF
> >
> > Now every time you run this,
>
> You only
Re: [bitcoin-dev] Annex Purpose Discussion: OP_ANNEX, Turing Completeness, and other considerations
On Fri, Mar 04, 2022 at 11:21:41PM +, Jeremy Rubin via bitcoin-dev wrote:
> I've seen some discussion of what the Annex can be used for in Bitcoin.

https://www.erisian.com.au/meetbot/taproot-bip-review/2019/taproot-bip-review.2019-11-12-19.00.log.html
includes some discussion on that topic from the taproot review meetings.

The difference between information in the annex and information in either a script (or the input data for the script that is the rest of the witness) is (in theory) that the annex can be analysed immediately and unconditionally, without necessarily even knowing anything about the utxo being spent.

The idea is that we would define some simple way of encoding (multiple) entries into the annex -- perhaps a tag/length/value scheme like lightning uses; maybe if we add a lisp scripting language to consensus, we just reuse the list encoding from that? -- at which point we might use one tag to specify that a transaction uses advanced computation, and needs to be treated as having a heavier weight than its serialized size implies; but we could use another tag for per-input absolute locktimes; or another tag to commit to a past block height having a particular hash.

It seems like a good place for optimising SIGHASH_GROUP (allowing a group of inputs to claim a group of outputs for signing, but not allowing inputs from different groups to ever claim the same output; so that each output is hashed at most once for this purpose) -- since each input's validity depends on the other inputs' state, it's better to be able to get at that state as easily as possible rather than having to actually execute other scripts before you can tell if your script is going to be valid.

> The BIP is tight-lipped about its purpose

BIP341 only reserves an area to put the annex; it doesn't define how it's used or why it should be used.
> Essentially, I read this as saying: The annex is the ability to pad a transaction with an additional string of 0's

If you wanted to pad it directly, you can do that in script already with a PUSH/DROP combo.

The point of doing it in the annex is you could have a short byte string, perhaps something like "0x010201a4" saying "tag 1, data length 2 bytes, value 420" and have the consensus interpretation of that be "this transaction should be treated as if it's 420 weight units more expensive than its serialized size", while only increasing its witness size by 6 bytes (annex length, annex flag, and the four bytes above). Adding 6 bytes for a 426 weight unit increase seems much better than adding 426 witness bytes.

The example scenario is that if there was an opcode to verify a zero-knowledge proof, eg I think bulletproof range proofs are something like 10x longer than a signature, but require something like 400x the validation time. Since checksig has a validation weight of 50 units, a bulletproof verify might have a 400x greater validation weight, ie 20,000 units, while your witness data is only 650 bytes serialized. In that case, we'd need to artificially bump the weight of your transaction up by the missing 19,350 units, or else an attacker could fill a block with perhaps 6000 bulletproofs costing the equivalent of 120M signature operations, rather than the 80k sigops we currently expect as the maximum in a block. Seems better to just have "0x01024b96" stuck in the annex, than 19kB of zeroes.

> Introducing OP_ANNEX: Suppose there were some sort of annex pushing opcode, OP_ANNEX which puts the annex on the stack

I think you'd want to have a way of accessing individual entries from the annex, rather than the annex as a single unit.
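The tag/length/value scheme sketched above could be decoded as follows. The exact layout (one-byte tag, one-byte length, big-endian value) is an assumption chosen to match the "0x010201a4" example, not a specified format:

```python
def parse_annex_entries(annex):
    """Decode a hypothetical TLV-encoded annex into {tag: value} entries,
    giving consensus (or script) access to individual entries rather
    than the annex as a single unit."""
    entries, i = {}, 0
    while i < len(annex):
        tag, length = annex[i], annex[i + 1]
        entries[tag] = int.from_bytes(annex[i + 2:i + 2 + length], "big")
        i += 2 + length
    return entries

# Tag 1 = "extra weight units" in the email's examples:
assert parse_annex_entries(bytes.fromhex("010201a4")) == {1: 420}
assert parse_annex_entries(bytes.fromhex("01024b96")) == {1: 19350}
```

Multiple entries simply concatenate, so one annex could carry a weight bump, a per-input locktime, and a block-hash commitment under distinct tags.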
> Now suppose that I have a computation that I am running in a script as follows:
>
> OP_ANNEX
> OP_IF
> `some operation that requires annex to be <1>`
> OP_ELSE
> OP_SIZE
> `some operation that requires annex to be len(annex) + 1 or does a checksig`
> OP_ENDIF
>
> Now every time you run this,

You only run a script from a transaction once, at which point its annex is known (a different annex gives a different wtxid and breaks any signatures), and can't reference previous or future transactions' annexes...

> Because the Annex is signed, and must be the same, this can also be inconvenient:

The annex is committed to by signatures in the same way nVersion, nLockTime and nSequence are committed to by signatures; I think it helps to think about it in a similar way.

> Suppose that you have a Miniscript that is something like: and(or(PK(A), PK(A')), X, or(PK(B), PK(B'))).
>
> A or A' should sign with B or B'. X is some sort of fragment that might require a value that is unknown (and maybe recursively defined?) so therefore if we send the PSBT to A first, which commits to the annex, and then X reads the annex and say it must be something else, A must sign again. So you might say, run X first, and then sign with A and C or B. However, what if the script somehow detects the bitstring WHICH_A WHICH_B and has a
Re: [bitcoin-dev] Annex Purpose Discussion: OP_ANNEX, Turing Completeness, and other considerations
Good morning Jeremy,

Umm `OP_ANNEX` seems boring

> It seems like one good option is if we just go on and banish the OP_ANNEX.
> Maybe that solves some of this? I sort of think so. It definitely seems like
> we're not supposed to access it via script, given the quote from above:
>
> Execute the script, according to the applicable script rules[11], using the
> witness stack elements excluding the script s, the control block c, and the
> annex a if present, as initial stack.
>
> If we were meant to have it, we would have not nixed it from the stack, no?
> Or would have made the opcode for it as a part of taproot...
>
> But recall that the annex is committed to by the signature.
>
> So it's only a matter of time till we see some sort of Cat and Schnorr Tricks
> III the Annex Edition that lets you use G cleverly to get the annex onto the
> stack again, and then it's like we had OP_ANNEX all along, or without CAT, at
> least something that we can detect that the value has changed and cause this
> satisfier looping issue somehow.

... Never mind I take that back.

Hmmm. Actually if the Annex is supposed to be ***just*** for adding weight to the transaction so that we can do something like increase limits on SCRIPT execution, then it does *not* have to be covered by any signature. It would then be third-party malleable, but suppose we have a "valid" transaction on the mempool where the Annex weight is the minimum necessary:

* If a malleated transaction has a too-low Annex, then the malleated transaction fails validation and the current transaction stays in the mempool.
* If a malleated transaction has a higher Annex, then the malleated transaction has a lower feerate than the current transaction and cannot evict it from the mempool.

Regards,
ZmnSCPxj
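ZmnSCPxj's replacement argument above can be checked with a little arithmetic (a toy sketch; the fee and weight numbers are made up):

```python
# Toy check of the malleation argument: annex padding adds weight but a
# third-party malleator cannot add fee, so a bigger annex strictly
# lowers the feerate. All numbers here are illustrative.

def feerate(fee_sats: int, weight_wu: int) -> float:
    """Fee per weight unit."""
    return fee_sats / weight_wu

fee = 10_000            # fixed by the signed inputs and outputs
base_weight = 800       # weight with the minimum-necessary annex

original = feerate(fee, base_weight)
malleated = feerate(fee, base_weight + 200)  # third party pads the annex

# The padded copy always pays a lower feerate, so it cannot evict the
# original from the mempool; an undersized annex just fails validation.
assert malleated < original
```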
[bitcoin-dev] Annex Purpose Discussion: OP_ANNEX, Turing Completeness, and other considerations
I've seen some discussion of what the Annex can be used for in Bitcoin. For example, some people have discussed using the annex as a data field for something like CHECKSIGFROMSTACK type stuff (additional authenticated data) or for something like delegation (the delegation is to the annex). I think before devs get too excited, we should have an open discussion about what this is actually for, and figure out if there are any constraints to using it however we may please.

The BIP is tight-lipped about its purpose, saying mostly only:

*What is the purpose of the annex? The annex is a reserved space for future extensions, such as indicating the validation costs of computationally expensive new opcodes in a way that is recognizable without knowing the scriptPubKey of the output being spent. Until the meaning of this field is defined by another softfork, users SHOULD NOT include annex in transactions, or it may lead to PERMANENT FUND LOSS.*

*The annex (or the lack thereof) is always covered by the signature and contributes to transaction weight, but is otherwise ignored during taproot validation.*

*Execute the script, according to the applicable script rules[11], using the witness stack elements excluding the script s, the control block c, and the annex a if present, as initial stack.*

Essentially, I read this as saying: The annex is the ability to pad a transaction with an additional string of 0's that contribute to the virtual weight of a transaction, but has no validation cost itself. Therefore, somehow, if you needed to validate more signatures than 1 per 50 virtual weight units, you could add padding to buy extra gas. Or, we might somehow make the witness a small language (e.g., run-length encoded zeros) such that we can very quickly compute an equivalent number of zeros to 'charge' without actually consuming the space but still consuming a linearizable resource... or something like that. We might also e.g.
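The padding arithmetic behind "buying gas" can be sketched as follows (a minimal sketch: the helper names are mine, and the one-signature-per-50-weight-units budget is the figure quoted above):

```python
# Sketch of the "pad to buy gas" arithmetic. Witness bytes (including
# the annex) count 1 weight unit each under BIP 141, and the quoted
# budget allows roughly one signature check per 50 weight units.

WU_PER_SIG = 50  # validation budget: 1 signature per 50 weight units

def weight(base_size: int, witness_size: int) -> int:
    """BIP 141 weight: non-witness bytes count 4x, witness bytes 1x."""
    return 4 * base_size + witness_size

def padding_needed(n_sigs: int, base_size: int, witness_size: int) -> int:
    """Annex zero-bytes needed so n_sigs fits the validation budget."""
    return max(0, n_sigs * WU_PER_SIG - weight(base_size, witness_size))

# A 100-byte base / 300-byte witness tx (700 WU) wanting signature checks:
print(padding_needed(10, 100, 300))  # 0: the budget already covers 10
print(padding_needed(20, 100, 300))  # 300 annex bytes buy the remainder
```

The point of the annex (versus PUSH/DROP padding) would be to claim those extra weight units without actually serializing them, as in the run-length-encoding idea above.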
want to use the annex to reserve something else, like the amount of memory. In general, we are using the annex to express a resource constraint efficiently. This might be useful for e.g. Simplicity one day.

Generating an Annex: One should write a tracing executor for a script, run it, measure the resource costs, and then generate an annex that captures any externalized costs.

---

Introducing OP_ANNEX: Suppose there were some sort of annex pushing opcode, OP_ANNEX which puts the annex on the stack as well as a 0 or 1 (to differentiate annex is 0 from no annex, e.g. 0 1 means annex was 0 and 0 0 means no annex). This would be equivalent to something based on OP_TXHASH OP_TXHASH.

Now suppose that I have a computation that I am running in a script as follows:

OP_ANNEX
OP_IF
  `some operation that requires annex to be <1>`
OP_ELSE
  OP_SIZE
  `some operation that requires annex to be len(annex) + 1 or does a checksig`
OP_ENDIF

Now every time you run this, it requires one more resource unit than the last time you ran it, which makes your satisfier use the annex as some sort of "scratch space" for a looping construct, where you compute a new annex, loop with that value, and see if that annex is now accepted by the program. In short, it kinda seems like being able to read the annex off of the stack makes witness construction somehow Turing complete, because we can use it as a register/tape for some sort of computational model.

---

This seems at odds with using the annex as something that just helps you heuristically guess computation costs; now it's somehow something that acts to make script satisfiers recursive.

Because the Annex is signed, and must be the same, this can also be inconvenient:

Suppose that you have a Miniscript that is something like: and(or(PK(A), PK(A')), X, or(PK(B), PK(B'))).

A or A' should sign with B or B'. X is some sort of fragment that might require a value that is unknown (and maybe recursively defined?)
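The "scratch space" loop can be caricatured in a few lines (a toy model, not Bitcoin script: `accepts` stands in for whatever annex-reading predicate the script enforces):

```python
# Toy model of the satisfier loop described above: if the script can
# read the annex, the only way to satisfy it may be to guess an annex,
# check it, and re-guess until the program accepts. `accepts` is a
# stand-in for an arbitrary annex-reading predicate, nothing real.

def find_satisfying_annex(accepts, start=0, limit=10_000):
    """Iterate annex values until the predicate accepts one."""
    annex = start
    for _ in range(limit):
        if accepts(annex):
            return annex
        annex += 1  # use the annex as a register: bump and retry
    raise RuntimeError("no satisfying annex found")

# e.g. a fragment that demands a particular annex value:
print(find_satisfying_annex(lambda a: a == 42))  # 42
```

The witness-construction process, not script execution itself, is what gains this register/tape behavior, which is the sense in which the satisfier becomes loop-like.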
so therefore if we send the PSBT to A first, which commits to the annex, and then X reads the annex and says it must be something else, A must sign again. So you might say, run X first, and then sign with A and C or B. However, what if the script somehow detects the bitstring WHICH_A WHICH_B and has a different Annex per selection (e.g., interpret the bitstring as an int and the annex must == that int)? Now, given and(or(K1, K1'), ... or(Kn, Kn')) we end up with needing to pre-sign 2**n annex values somehow... this seems problematic theoretically.

Of course this wouldn't be miniscript then, because miniscript is just for the well-behaved subset of script, and this seems ill behaved. So maybe we're OK? But I think the issue still arises where suppose I have a simple thing like: and(COLD_LOGIC, HOT_LOGIC) where both contain a signature. If COLD_LOGIC and HOT_LOGIC can both have different costs, I need to decide what logic each satisfier for the branch is going to use in advance, or sign all possible sums of both our annex costs?
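The 2**n blowup is just the cross product of the branch choices; a quick illustration of the counting (the bitstring-as-int annex rule is the hypothetical one from the example above):

```python
# Illustration of the pre-signing blowup: with n or(K, K') clauses, each
# branch-selection bitstring picks its own required annex value, so a
# signer who must commit to the annex up front covers every combination.

from itertools import product

def annex_values_to_presign(n_clauses: int):
    """Every branch-selection bitstring, read as the int the annex must equal."""
    combos = product((0, 1), repeat=n_clauses)
    return [int("".join(map(str, bits)), 2) for bits in combos]

values = annex_values_to_presign(4)
print(len(values))  # 16 = 2**4 distinct annex commitments to pre-sign
```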