Re: [bitcoin-dev] [Lightning-dev] CheckTemplateVerify Does Not Scale Due to UTXO's Required For Fee Payment
On Tue, Jan 30, 2024 at 05:17:04AM +, ZmnSCPxj via bitcoin-dev wrote:
> > I should note that under Decker-Russell-Osuntokun the expectation is that
> > both counterparties hold the same offchain transactions (hence why it is
> > sometimes called "LN-symmetry").
> > However, there are two ways to get around this:
> >
> > 1. Split the fee between them in some "fair" way.
>
> Definition of "fair" wen?
>
> > 2. Create an artificial asymmetry: flip a bit of `nSequence` for the
> > update+state txes of one counterparty, and have each side provide
> > signatures for the tx held by its counterparty (as in Poon-Dryja).
> > This lets you force that the party that holds a particular update+state tx
> > is the one that pays fees.
>
> No, wait, #2 does not actually work as stated.
> Decker-Russell-Osuntokun uses `SIGHASH_NOINPUT` meaning the `nSequence` is
> not committed in the signature and can be malleated.

BIP 118 as at March 2021 (when it defined NOINPUT rather than APO):

] The transaction digest algorithm from BIP 143 is used when verifying a
] SIGHASH_NOINPUT signature, with the following modifications:
]
] 2. hashPrevouts (32-byte hash) is 32 0x00 bytes
] 3. hashSequence (32-byte hash) is 32 0x00 bytes
] 4. outpoint (32-byte hash + 4-byte little endian) is
]    set to 36 0x00 bytes
] 5. scriptCode of the input is set to an empty script
]    0x00

BIP 143:

] A new transaction digest algorithm is defined, but only applicable to sigops in version 0 witness program:
]
] Double SHA256 of the serialization of:
] ...
] 2. hashPrevouts (32-byte hash)
] 3. hashSequence (32-byte hash)
] 4. outpoint (32-byte hash + 4-byte little endian)
] 5. scriptCode of the input (serialized as scripts inside CTxOuts)
] ...
] 7. nSequence of the input (4-byte little endian)

So nSequence would still have been committed to per that proposal.
Dropping hashSequence just removes the commitment to the other inputs
being spent by the tx.
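The nSequence point can be sanity-checked with a small bookkeeping sketch. Field names follow BIP 143, and the blanked set follows the March 2021 NOINPUT draft quoted above; this is just an illustration of which fields survive, not a real sighash computation:

```python
# Which BIP 143 digest fields the March 2021 NOINPUT draft blanks out,
# and which remain committed. Illustrative bookkeeping only.
bip143_fields = [
    "nVersion", "hashPrevouts", "hashSequence", "outpoint", "scriptCode",
    "amount", "nSequence", "hashOutputs", "nLockTime", "sighash_type",
]
# Per the quoted BIP 118 text: items 2-5 are replaced with zero bytes /
# an empty script under SIGHASH_NOINPUT.
noinput_blanked = {"hashPrevouts", "hashSequence", "outpoint", "scriptCode"}

still_committed = [f for f in bip143_fields if f not in noinput_blanked]

# The input's own nSequence (item 7) is still committed, so malleating it
# would invalidate the signature:
assert "nSequence" in still_committed
# ...whereas hashSequence (covering the *other* inputs' sequences) is not:
assert "hashSequence" not in still_committed
```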
Cheers,
aj

___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
Re: [bitcoin-dev] BIP process friction
On Thu, Jan 18, 2024 at 05:41:14AM -1000, David A. Harding via bitcoin-dev wrote:
> Question: is there a recommended way to produce a shorter identifier for
> inline use in reading material? For example, for proposal
> BIN-2024-0001-000, I'm thinking:
>
> - BIN24-1 (references whatever the current version of the proposal is)
> - BIN24-1.0 (references revision 0)
>
> I think that doesn't look too bad even if there are over 100 proposals a
> year, with some of them getting into over a hundred revisions:
>
> - BIN24-123
> - BIN24-123.123

Having lived through y2k, two-digit years give me the ick, but otherwise sure.

Cheers,
aj, that's how the kids who didn't live through y2k say it, right?
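For illustration, Dave's shortening scheme could be mechanised along these lines. This is a hypothetical helper (the `shorten` name and the assumption that identifiers always look like BIN-YYYY-NNNN-RRR are mine, not from the proposal):

```python
import re

def shorten(binana_id, with_revision=True):
    """Shorten "BIN-2024-0001-000" to "BIN24-1" or "BIN24-1.0".

    Hypothetical helper, assuming the BIN-YYYY-NNNN-RRR identifier format.
    """
    m = re.fullmatch(r"BIN-(\d{4})-(\d+)-(\d+)", binana_id)
    if not m:
        raise ValueError(f"unrecognised identifier: {binana_id}")
    year, number, revision = m.groups()
    short = f"BIN{year[2:]}-{int(number)}"  # two-digit year, unpadded number
    if with_revision:
        short += f".{int(revision)}"        # unpadded revision
    return short

assert shorten("BIN-2024-0001-000") == "BIN24-1.0"
assert shorten("BIN-2024-0001-000", with_revision=False) == "BIN24-1"
assert shorten("BIN-2024-0123-123") == "BIN24-123.123"
```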
[bitcoin-dev] BIP process friction
Hi all,

Just under three years ago there was some discussion about the BIPs repo,
with the result that Kalle became a BIPs editor in addition to Luke, eg:

 * https://gnusha.org/bitcoin-core-dev/2021-04-22.log
 * https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-April/018859.html

It remains, however, quite hard to get BIPs merged into the repo, eg the
following PRs have been open for quite some time:

 * #1408: Ordinal Numbers; opened 2023-01-21, editors comments:
     Kalle: https://github.com/bitcoin/bips/pull/1408#issuecomment-1421641390
            https://github.com/bitcoin/bips/pull/1408#issuecomment-1435389476
     Luke: https://github.com/bitcoin/bips/pull/1408#issuecomment-1429146796
           https://github.com/bitcoin/bips/pull/1408#issuecomment-1438831607
           https://github.com/bitcoin/bips/pull/1408#issuecomment-1465016571

 * #1489: Taproot Assets Protocol; opened 2023-09-07, editors comments:
     Kalle: https://github.com/bitcoin/bips/pull/1489#issuecomment-1855079626
     Luke: https://github.com/bitcoin/bips/pull/1489#issuecomment-1869721535

 * #1500: OP_TXHASH; opened 2023-09-30, editors comments:
     Luke: https://github.com/bitcoin/bips/pull/1500#pullrequestreview-1796550166
           https://twitter.com/LukeDashjr/status/1735701932520382839

The range of acceptable BIPs also seems to be becoming more limited, such
that mempool/relay policy is out of scope:

 * https://github.com/bitcoin/bips/pull/1524#issuecomment-1869734387

Despite having two editors, only Luke seems to be able to assign new
numbers to BIPs, eg:

 * https://github.com/bitcoin/bips/pull/1458#issuecomment-1597917780

There have also been some not very productive delays due to the editors
wanting backwards compatibility sections even if authors don't think
that's necessary, eg:

 * https://github.com/bitcoin/bips/pull/1372#issuecomment-1439132867

Even working out whether to go back to allowing markdown as a text format
is a multi-month slog due to process confusion:

 * https://github.com/bitcoin/bips/pull/1504

Anyway, while it's not totally dysfunctional, it's very high friction.

There are a variety of recent proposals that have PRs open against
inquisition; up until now I've been suggesting people write a BIP, and
have been keying off the BIP number to signal activation. But that just
seems to be introducing friction, when all I need is a way of linking an
arbitrary number to a spec. So I'm switching inquisition over to having
a dedicated "IANA"-ish thing that's independent of BIP process nonsense.
It's at:

 * https://github.com/bitcoin-inquisition/binana

If people want to use it for bitcoin-related proposals that don't have
anything to do with inquisition, that's fine; I'm intending to apply the
policies I think the BIPs repo should be using, so feel free to open a
PR, even if you already know I think your idea is BS on its merits. If
someone wants to write an automatic-merge-bot for me, that'd also be
great. If someone wants to reform the BIPs repo itself so it works
better, that'd be even better, but I'm not volunteering for that fight.

Cheers,
aj

(It's called "numbers and names" primarily because that way the acronym
amuses me, but also in case inquisition eventually needs an authoritative
dictionary for what "cat" or "txhash" or similar terms refer to)
Re: [bitcoin-dev] Swift Activation - CTV
On Sat, Dec 30, 2023 at 01:54:04PM +, Michael Folkson via bitcoin-dev wrote:
> > > But "target fixation" [0] is a thing too: maybe "CTV" (and/or "APO") were
> > > just a bad approach from the start.
>
> It is hard to discuss APO in a vacuum when this is going on the background
> but I'm interested in you grouping APO with CTV in this statement. ... But
> APO does seem to be the optimal design and have broad consensus in the
> Lightning community for enabling eltoo/LN-Symmetry. Any other use cases APO
> enables would be an additional benefit.

I guess I'm really just reiterating/expanding on Russell's thoughts from
two years ago:

https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-January/019813.html

CTV and APO both take the approach of delineating a small, precise piece
of functionality that is thought to be useful in known ways, and making
that available for use within Bitcoin. But doing incremental consensus
changes every time we discover new features that we'd like to add to
wallet/L2 software is kind of clumsy, and perhaps we should be looking
at more general approaches that allow more things at once.

Beyond that, APO also follows the design of satoshi's ANYONECANPAY,
which allows attaching other inputs. There's a decent argument to be
made that that's a bad design choice (and was perhaps a bad choice for
ANYONECANPAY as well as APO), and that committing to the number of
inputs would still be a valuable thing to do (in that it minimises how
much third parties can manipulate your tx, and reduces the potential
for rbf pinning). A consequence of that is that if you fix the number
of inputs to one and already know the input you're spending, you avoid
txid malleability. See https://github.com/bitcoin/bips/pull/1472 eg.

Cheers,
aj
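As a postscript to the txid-malleability point above: a txid commits to a transaction's non-witness serialization (version, inputs, outputs, locktime), so a sighash mode that lets third parties attach extra inputs also lets them change the txid. The sketch below illustrates that with a stand-in serialization; the `txid` helper and placeholder values are mine, not real consensus encoding:

```python
import hashlib

def txid(version, inputs, outputs, locktime):
    # Stand-in serialization; the real consensus encoding is more involved,
    # but like it, this covers every input in the txid.
    ser = repr((version, inputs, outputs, locktime)).encode()
    return hashlib.sha256(hashlib.sha256(ser).digest()).hexdigest()

base_inputs = [("aa" * 32, 0)]       # the input the signer committed to
outputs = [("some_address", 50_000)]

t1 = txid(2, base_inputs, outputs, 0)
# A third party attaches another input (ANYONECANPAY/APO-style); the
# original signature can stay valid, but the txid changes:
t2 = txid(2, base_inputs + [("bb" * 32, 1)], outputs, 0)
assert t1 != t2

# If the sighash instead commits to "exactly this one input" plus the
# outputs, everything the txid covers is fixed, so the txid is too.
```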
Re: [bitcoin-dev] Swift Activation - CTV
Huh, this list is still active?

On Fri, Dec 22, 2023 at 10:34:52PM +, alicexbt via bitcoin-dev wrote:
> I think CTV is not ready for activation yet. Although I want it to be
> activated and use payment pools, we still have some work to do and AJ is
> correct that we need to build more apps that use CTV on signet.

I've said it before, and I'll say it again, but if you want to change
bitcoin consensus rules, IMO the sensible process is:

 * work out what you think the change should be
 * demonstrate the benefits so everyone can clearly see what they are,
   and that they're worth spending time on
 * review the risks, so that whatever risks there may be are well
   understood, and minimise them
 * iterate on all three of those steps to increase the benefits and
   reduce the risks
 * once "everyone" agrees the benefits are huge and the risks are low,
   work on activating it

If you're having trouble demonstrating that the benefits really are
worth spending time on, you probably need to go back to the first step
and reconsider the proposal. The "covtools" and "op_cat" approaches are
a modest way of doing that: adding additional opcodes that mesh well
with CTV, increasing the benefits from making a change.

But "target fixation" [0] is a thing too: maybe "CTV" (and/or "APO")
were just a bad approach from the start. Presumably "activate CTV" is
really intended as a step towards your actual goal, whether that be
"make it harder for totalitarians to censor payments", "replace credit
cards", "make lots of money", "take control over bitcoin development",
or something else. Maybe there's a better step towards some/all of
whatever those goals may be than "activate CTV". Things like "txhash"
take that approach and go back to the first step.

To me, it seems like CTV has taken the odd approach of simultaneously
maximising (at least perceived) risk, while minimising the potential
benefits.
As far as maximising risk goes, it's taken Greg Maxwell's "amusingly
bad idea" post from bitcointalk in 2013 [1] and made the bad consequence
described there (namely, "coin covenants", which left Greg "screaming in
horror") the centrepiece of the functionality being added, per its
motivation section. It then minimises the potential benefits that
accompany that risk by restricting the functionality being provided as
far as you can without neutering it entirely.

If you *wanted* a recipe for how to propose a change to bitcoin and
ensure that it's doomed to fail while still gathering a lot of
attention, I'm honestly not sure how you could come up with a better
approach?

[0] https://en.wikipedia.org/wiki/Target_fixation
[1] https://bitcointalk.org/index.php?topic=278122.0

> - Apart from a few PoCs that do not achieve anything big on mainnet, nobody
>   has tried to build PoC for a use case that solves real problems

One aspect of "minimising the benefits" is that when you make something
too child safe, it can become hard to actually use the tool at all.
Just having ideas is easy -- you can just handwave over the complex
parts when you're whiteboarding or blogging -- the real way to test if
a tool is fit for purpose is to use it to build something worthwhile.
Maybe a great chef can create a great meal with an easy-bake oven, but
there's a reason it's not their tool of choice.

Cheers,
aj
Re: [bitcoin-dev] Purely off-chain coin colouring
On Sat, Feb 04, 2023 at 08:38:54PM +1000, Anthony Towns via bitcoin-dev wrote:
> > AJ Towns writes:
> > > I think, however, that you can move inscriptions entirely off-chain. I
> > > wrote a little on this idea on twitter already [1], but after a bit more
> > > thought, I think pushing things even further off-chain would be plausible.

Oh, you could also do inscriptions minimally on-chain. Rather than
posting the inscription on-chain per se, take a hash of the data you
want to inscribe, and then do a sign-to-contract commitment of that
hash. That reduces your on-chain overhead for creating an inscription
to approximately zero (you're just signing a transaction), so can be
much cheaper, and also can't be blocked or front run by mempool
observers. But obviously it means the inscription must be announced
off-chain for anyone to know about it.

Of course, that could be seen as a benefit: you can now have a private
inscription, that's still transferable via the regular ordinals
protocol. OTOH, there's no way to definitively say "this tx is the Nth
inscription that matches pattern X", as there may be many earlier
sign-to-contract inscriptions that match that pattern that simply
haven't been publicly revealed yet. So that wouldn't be compatible with
"inscription numbers" or "first X inscriptions count as minting token Y".

If you go one step further and allow the sign-to-contract to be the
merkle root of many inscriptions, then you've effectively reinvented
timestamping.
(You can't outsource inscriptions to a timestamp server, because you'd
fail to own the ordinal that indicates "ownership" of the inscription,
however you could provide timestamping services as a value-add while
creating inscriptions)

Sign-to-contract looks like:

 * generate a secret random nonce r0
 * calculate the public version R0 = r0*G
 * calculate a derived nonce r = r0 + SHA256(R0, data), where "data" is
   what you want to commit to
 * generate your signature using public nonce R=r*G as usual

To be able to verify sign-to-contract, you reveal R0 and data, and the
verification is just checking that R=R0+SHA256(R0, data)*G. That works
with both ecdsa and schnorr signatures, so doesn't require any advance
preparation. While it's not widely supported, sign-to-contract is a
useful feature in general for anti-exfil (eg, preventing a malicious
hardware wallet from leaking your secret key when signing txs). Some
references:

https://www.reddit.com/r/Bitcoin/comments/d3lffo/technical_paytocontract_and_signtocontract/
https://github.com/BlockstreamResearch/secp256k1-zkp/blob/d22774e248c703a191049b78f8d04f37d6fcfa05/include/secp256k1_ecdsa_s2c.h
https://github.com/bitcoin-core/secp256k1/pull/1140
https://wally.readthedocs.io/en/release_0.8.9/anti_exfil_protocol/
https://github.com/opentimestamps/python-opentimestamps/pull/14

Cheers,
aj
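The derive-and-verify steps above can be sketched in Python with toy affine secp256k1 arithmetic. This is illustrative only: the nonce value and the compressed serialization of R0 are stand-ins I chose (real implementations such as libwally pin those encoding details down precisely, and use constant-time code):

```python
import hashlib

# secp256k1 parameters (field prime, group order, generator)
P = 2**256 - 2**32 - 977
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def add(a, b):
    """Affine point addition; None is the point at infinity."""
    if a is None: return b
    if b is None: return a
    if a[0] == b[0] and (a[1] + b[1]) % P == 0: return None
    if a == b:
        l = 3 * a[0] * a[0] * pow(2 * a[1], -1, P) % P
    else:
        l = (b[1] - a[1]) * pow(b[0] - a[0], -1, P) % P
    x = (l * l - a[0] - b[0]) % P
    return (x, (l * (a[0] - x) - a[1]) % P)

def mul(k, pt):
    """Double-and-add scalar multiplication."""
    r = None
    while k:
        if k & 1: r = add(r, pt)
        pt = add(pt, pt)
        k >>= 1
    return r

def tweak(R0, data):
    # SHA256(R0, data) as a scalar, serializing R0 in compressed SEC form
    # (an assumption here; real protocols fix the exact encoding).
    ser = bytes([2 + (R0[1] & 1)]) + R0[0].to_bytes(32, 'big')
    return int.from_bytes(hashlib.sha256(ser + data).digest(), 'big') % N

# Signer side: derive the signing nonce from r0 and the committed data.
r0 = 0x1234567890abcdef          # stand-in for a secret random nonce
data = b"hash of the inscription"
R0 = mul(r0, G)
r = (r0 + tweak(R0, data)) % N
R = mul(r, G)                    # public nonce that ends up in the signature

# Verifier side: given R (from the signature), R0 and data, check R = R0 + t*G.
assert R == add(R0, mul(tweak(R0, data), G))
```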
Re: [bitcoin-dev] Future of the bitcoin-dev mailing list
On Tue, Nov 07, 2023 at 09:37:22AM -0600, Bryan Bishop via bitcoin-dev wrote:
> Web forums are an interesting option, but often don't have good email user
> integration.
> What about bitcointalk.org or delvingbitcoin.org?

delvingbitcoin.org is something I set up; it's a self-hosted discourse
instance. (You don't have to self-host discourse, but not doing so
limits the number of admins/moderators, the plugins you can use, and
the APIs you can access)

For what it's worth, I think (discourse) forums have significant
advantages over email for technical discussion:

 * much better markup: you can write LaTeX for doing maths, you can
   have graphviz or mermaid diagrams generated directly from text, you
   can do formatting without having to worry about HTML email. Because
   that's done direct from markup, you can also quote such things in
   replies, or easily create a modified equation/diagram if desired,
   things that are much harder if equations/diagrams are image/pdf
   attachments.
 * consistent threading/quoting: you don't have to rely on email
   clients to get threading/quoting correct in order to link replies
   with the original message
 * having topics/replies, rather than everything being an individual
   email, tends to make it easier to avoid being distracted by
   followups to a topic you're not interested in.
 * you can do reactions (heart / thumbs up / etc) instead of "me too"
   posts, minimising the impact of low-content responses on readers,
   without doing away with those responses entirely.
 * after-the-fact moderation: with mailing lists, moderation can only
   be a choice between "send this post to every subscriber" or not,
   and the choice obviously has to be made before anyone sees the
   posts; forums allow off-topic/unconstructive posts to be removed or
   edited.
Compared to mailing-lists-as-a-service, a self-hosted forum has a few
other possible benefits:

 * it's easier to set up areas for additional topics, without worrying
   you're going to be forced into an arbitrarily higher pricing tier
 * you can set up spaces for private working groups. (and those groups
   can make their internal discussions public after the fact, if
   desired)
 * you can use plugin interfaces/APIs to link up with external resources

There are a few disadvantages too:

 * discourse isn't lightweight -- you need a whole bunch of
   infrastructure to go from the markdown posts to the actual rendered
   posts/comments; so backups of just the markdown text aren't really
   "complete"
 * discourse is quite actively developed -- so it could be possible
   that posts that use particular features/plugins (eg to generate
   diagrams) will go stale eventually as the software changes, and stop
   being rendered correctly
 * discourse gathers a moderate amount of non-public/potentially
   private data (eg email addresses, passwords, IP addresses, login
   times) that may make backups and admin access sensitive (which is
   why there's a git archive generated by a bot for delvingbitcoin,
   rather than raw database dumps)

There are quite a few open source projects using discourse instances, eg:

 Python: https://discuss.python.org/
 Ruby on Rails: https://discuss.rubyonrails.org/
 LLVM: https://discourse.llvm.org/
 Jupyter: https://discourse.jupyter.org/
 Fedora: https://discussion.fedoraproject.org/
 Ubuntu: https://discourse.ubuntu.com/
 Haskell: https://discourse.haskell.org/

There's also various crypto projects using it:

 Eth research: https://ethresear.ch/
 Chia: https://developers.chia.net/

There's a couple of LWN articles on Python's adoption of discourse that
I found interesting, fwiw:

 https://lwn.net/Articles/901744/ [2022-07-20]
 https://lwn.net/Articles/674271/ [2016-02-03]

I don't think this needs to be an "either-or" question -- better to
have technical discussions about bitcoin in many places and in many
formats, rather than just one -- but I thought I'd take the opportunity
to write out why I thought discourse was worth spending some time on in
this context.

Cheers,
aj
Re: [bitcoin-dev] Examining ScriptPubkeys in Bitcoin Script
On Sat, Oct 28, 2023 at 03:19:30PM +1030, Rusty Russell via bitcoin-dev wrote:

[Quoted text has been reordered]

> > I think there's two reasons to think about this approach:
> > (a) we want to do vault operations specifically,
>
> I'm interested in vaults because they're a concrete example I can get my
> head around. Not because I think they'll be widely used!

I don't think that's likely to make for a very productive discussion:
we shouldn't be changing consensus to support toy examples, after all.
If there's a *useful* thing to do that would be possible with similar
consensus changes, let's discuss that thing; if there's nothing useful
that needs these consensus changes, then let's discuss something useful
that doesn't need consensus changes instead.

To me, it seems pretty likely that if you're designing an API where you
don't expect anyone to actually use it for anything important, then
you're going to end up making a pretty bad API -- after all, why put in
the effort to understand the use case and make a good API if you're
sure it will never be useful anyway? There are some articles on API
design that I quite like:

 https://ozlabs.org/~rusty/index.cgi/tech/2008-03-30.html
 https://ozlabs.org/~rusty/index.cgi/tech/2008-04-01.html

I'd rate the "let's have a mass of incomprehensible script that no one
really understands and is incredibly dangerous to modify, and just make
it a template" approach as somewhere between "3. Read the documentation
and you'll get it right" (at best) and "-5. Do it right and it will
sometimes break at runtime" (more likely).

Anyway, for the specific examples:

> But AFAICT there are multiple perfectly reasonable variants of vaults,
> too. One would be:
>
> 1. master key can do anything
> 2. OR normal key can send back to vault addr without delay
> 3. OR normal key can do anything else after a delay.
I don't think there's any point having (2) here unless you're allowing
for consolidation transactions (two or more vault inputs spending to a
single output that's the same vault), which you've dismissed as a party
trick.

> Another would be:
> 1. normal key can send to P2WPKH(master)
> 2. OR normal key can send to P2WPKH(normal key) after a delay.

I don't think there's any meaningful difference between (2) here and
(3) above -- after the delay, you post one transaction spending to
p2wpkh(normal) signed by the normal key, then immediately post a second
transaction spending that new output, which is also signed with the
normal key, so you've just found a way of saying "normal key can do
anything else after a delay" that takes up more blockspace.

Both these approaches mirror the model that kanzure posted about in
2018 [0] (which can already be done via presigned transactions) and
they share the same fundamental flaw [1], namely that once someone
compromises the "normal key", all they have to do is wait for the
legitimate owner to trigger a spend, at which point they can steal the
funds by racing the owner, with the owner having no recourse beyond
burning most of the vault's funds to fees.

(I think the above two variants are meaningfully worse than kanzure's
in that someone with the normal key just needs to wait for the vault
utxo to age, at which point they can steal the funds immediately.
In kanzure's model, you first need to broadcast a trigger transaction,
then wait for it to age before funds can be stolen, which isn't the
greatest protection, but it at least adds some difficulty)

[0] https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-August/017229.html
[1] https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-August/017231.html
    also, https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-April/020284.html

If you don't mind that flaw, you can set up a vault along the lines of
kanzure's design with BIP 345 fairly simply:

 A: ipk=master, tapscript= SWAP OVER CHECKSIGVERIFY 1 "CHECKSIGVERIFY 144 CSV" OP_VAULT
    (witness values:)
 B: ipk=master tapscript= CHECKSIGVERIFY 144 CSV

You put funds into the vault by creating a utxo with address "A", at
which point you can do anything with the funds via the master key, or
you can trigger an unvault via the normal key, moving funds to address
"B", which then has a 144 block delay before it can be spent via the
normal key. That also natively supports vault spends that only include
part of the vault value, or that combine two or more (compatible)
vaults into a single payment.

To avoid the flaw, you need to precommit to the spend that you're
authorising, while still allowing clawback/recovery by the owner. One
way to make that work with BIP 345 is using BIP 119's CTV to force a
precommitment to the spend:

 A: ipk=master, tapscript= OP_VAULT_RECOVER
    tapscript= CHECKSIGVERIFY 1 "CTV DROP 144 CSV" OP_VAULT
    (witness values: )
 B: ipk=master tapscript= OP_VAULT_RECOVER
    tapscript= CTV DROP 144 CSV

Once you have funds in the vault in address A, you
Re: [bitcoin-dev] Proposed BIP for OP_CAT
On Sat, Oct 21, 2023 at 01:08:03AM -0400, Ethan Heilman via bitcoin-dev wrote:
> We've posted a draft BIP to propose enabling OP_CAT as Tapscript opcode.
> https://github.com/EthanHeilman/op_cat_draft/blob/main/cat.mediawiki

If you're interested in making this available via inquisition, here's a
set of 3 patches that should allow it to be messed with on regtest:

 https://github.com/ajtowns/bitcoin/commits/202310-inq25-cat

Tests need updating and adding, however. You might wish to compare with
similar commits from the APO/CTV PRs at

 https://github.com/bitcoin-inquisition/bitcoin/pull/33
 https://github.com/bitcoin-inquisition/bitcoin/pull/34

It may be worth adding support for CSFS as well, if experimenting with
that is desirable, rather than having them as separate script-verify
flags and deployments.

> [1]: R. Pike and B. Kernighan, "Program design in the UNIX
>      environment", 1983,
>      https://harmful.cat-v.org/cat-v/unix_prog_design.pdf

"harmful cat", you say?

Cheers,
aj
Re: [bitcoin-dev] Examining ScriptPubkeys in Bitcoin Script
On Fri, Oct 20, 2023 at 02:10:37PM +1030, Rusty Russell via bitcoin-dev wrote:
> I've done an exploration of what would be required (given
> OP_TX/OP_TXHASH or equivalent way of pushing a scriptPubkey on the
> stack) to usefully validate Taproot outputs in Bitcoin Script. Such
> functionality is required for usable vaults, at least.
>
> https://rusty.ozlabs.org/2023/10/20/examining-scriptpubkey-in-script.html
>
> (If anyone wants to collaborate to produce a prototype, and debug my
> surely-wrong script examples, please ping me!)
>
> TL;DR: if we have OP_TXHASH/OP_TX, and add OP_MULTISHA256 (or OP_CAT),
> OP_KEYADDTWEAK and OP_LESS (or OP_CONDSWAP), and soft-fork weaken the
> OP_SUCCESSx rule (or pop-script-from-stack), we can prove a two-leaf
> tapscript tree in about 110 bytes of Script. This allows useful
> spending constraints based on a template approach.

I think there's two reasons to think about this approach:

 (a) we want to do vault operations specifically, and this approach is
     a good balance between being:
       - easy to specify and implement correctly, and
       - easy to use correctly.
 (b) we want to make bitcoin more programmable, so that we can do
     contracting experiments directly in wallet software, without
     needing to justify new soft forks for each experiment, and this
     approach provides a good balance amongst:
       - opening up a wide range of interesting experiments,
       - making it easy to understand the scope/consequences of opening
         up those experiments,
       - being easy to specify and implement correctly, and
       - being easy to use correctly.

Hopefully that's a fair summary? Obviously what balance is "good" is
always a matter of opinion -- if you consider it hard to do soft forks,
then it's perhaps better to err heavily towards being easy to
specify/implement, rather than easy to use, for example.
For (a) I'm pretty skeptical about this approach for vault operations
-- it's not terribly easy to specify/implement (needing 5 opcodes, one
of which has a dozen or so flags controlling how it behaves, then also
needs to change the way OP_SUCCESS works), and it seems super
complicated to use. By comparison, while the bip 345 OP_VAULT proposal
also proposes 3 new opcodes (OP_CTV, OP_VAULT, OP_VAULT_RECOVER) [0],
those opcodes can be implemented fairly directly (without requiring
different semantics for OP_SUCCESS, eg) and can be used much more
easily [1].

[0] Or perhaps 4, if OP_REVAULT were to be separated out from OP_VAULT,
    cf https://github.com/bitcoin/bips/pull/1421#discussion_r1357788739
[1] https://github.com/jamesob/opvault-demo/blob/57f3bb6b8717acc7ce1eae9d9d8a2661f6fa54e5/main.py#L125-L133

I'm not sure, but I think the "deferred check" setup might also provide
additional functionality beyond what you get from cross-input
introspection; that is, with it, you can allow multiple inputs to
safely contribute funds to common outputs, without someone being able
to combine multiple inputs into a tx where the output amount is less
than the sum of all the contributions. Without that feature, you can
mimic it, but only so long as all the input scripts follow known
templates that you can exactly match.

So to me, for the vault use case, the
TXHASH/MULTISHA256/KEYADDTWEAK/LESS/CAT/OP_SUCCESS approach just
doesn't really seem very appealing at all in practical terms: lots of
complexity, hard to use, and doesn't really seem like it works very
well even after you put in tonnes of effort to get it to work at all?

I think in the context of (b), ie enabling experimentation more
generally, it's much more interesting.
eg, CAT alone would allow for various interesting constraints on
signatures ("you must sign this tx with the given R value -- so
attempting to double spend, eg via a feebump, will reveal the
corresponding private key"), and adding CSFS would allow you to include
authenticated data in a script, eg market data sourced from a trusted
oracle.

But even then, it still seems fairly crippled -- script is a very
limited programming language, and it just isn't really very helpful if
you want to do things that are novel. It doesn't allow you to (eg) loop
over the inputs and select just the ones you're interested in; you
need the opcode to do the looping for you, and that has to be
hardcoded as a matter of consensus (eg, Steven Roose's TXHASH [2]
proposal allows you to select the first-n inputs/outputs, but not the
last-n).

[2] https://github.com/bitcoin/bips/pull/1500

I've said previously [3] that I think using a lisp variant would be a
promising solution here: you can replace script's "two stacks of
byte-strings" with "(recursive) lists of byte-strings", and go from a
fairly limited language to a fairly complete one. I've been
experimenting with this on and off since then [4], and so far I haven't
seen anything much to dissuade me from that view. I think you can get a
pretty effective language with perhaps 43 opcode
Re: [bitcoin-dev] Proposed BIP for OP_CAT
On Mon, Oct 23, 2023 at 12:43:10PM +1030, Rusty Russell via bitcoin-dev wrote:
> 2. Was there a concrete rationale for maintaining 520 bytes?

Without a limit of 520 bytes, then you can construct a script:

 CHECKSIGVERIFY {DUP CAT}x10
   (we now have a string that is the second witness repeated 1024 times
    on the stack; if it was 9 bytes, call it 9216B total)
 {DUP} x 990
   (we now have 1000 strings each of length 9216 bytes, for ~9.2MB total)
 SHA256SUM {CAT SHA256SUM}x999
   (we now have a single 32B field on the stack)
 EQUAL
   (and can do a hardcoded check to make sure there weren't any
    shortcuts taken)

That raises the max memory to verify a single script from ~520kB (1000
stack elements by 520 bytes each) to ~10MB (1000 stack elements by
10kB each).

> 10k is the current script limit, can we get closer to that? :)

The 10k limit applies to scriptPubKey, scriptSig and segwit v0 scripts.
There's plenty of examples of larger tapscripts, eg:

 https://mempool.space/tx/0301e0480b374b32851a9462db29dc19fe830a7f7d7a88b81612b9d42099c0ae
   (3,938,182 bytes of script, non-standard due to being an oversized tx)
 https://mempool.space/tx/2d4ad78073f1187c689c693bde62094abe6992193795f838e8be0db898800434
   (360,543 bytes of script, standard, I believe)

Cheers,
aj
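The memory arithmetic in the example above checks out as a quick back-of-envelope calculation (the 9-byte starting element is the assumption from the message):

```python
# A 9-byte witness element doubled 10 times via {DUP CAT}:
elem = 9 * 2**10
assert elem == 9216          # ~9.2kB per stack element

# DUPed out to the 1000-element stack limit:
stack_total = 1000 * elem
assert stack_total == 9_216_000   # ~9.2MB of stack memory

# With the 520-byte element limit in place, the same stack tops out at:
assert 1000 * 520 == 520_000      # ~520kB
```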
Re: [bitcoin-dev] Proposed BIP for MuSig2 PSBT Fields
On Wed, Oct 11, 2023 at 11:59:16PM +, Andrew Chow via bitcoin-dev wrote:
> On 10/11/2023 07:47 PM, Anthony Towns wrote:
> > On Tue, Oct 10, 2023 at 10:28:37PM +, Andrew Chow via bitcoin-dev wrote:
> >> I've written up a BIP draft for MuSig2 PSBT fields. It can be viewed at
> >> https://github.com/achow101/bips/blob/musig2-psbt/bip-musig2-psbt.mediawiki.
> >
> > I was hoping to see adaptor signature support in this; but it seems that's
> > also missing from BIP 327?
>
> This is the first time I've heard of that, so it wasn't something that I
> considered adding to the BIP. Really the goal was to just be able to use
> BIP 327.

Yeah, makes sense. The other related thing is anti-exfil; libwally's
protocol for that (for ecdsa sigs) is described at:

 https://wally.readthedocs.io/en/release_0.8.9/anti_exfil_protocol/
 https://github.com/BlockstreamResearch/secp256k1-zkp/blob/master/include/secp256k1_ecdsa_s2c.h

Though that would probably want to have a PSBT_IN_S2C_DATA_COMMITMENT
item provided before MUSIG2_PUB_NONCE was filled in, then
PSBT_IN_S2C_DATA and PSBT_IN_NONCE_TWEAK can be provided. (Those all
need to have specific relationships in order to be secure though)

> But that doesn't preclude a future BIP that specifies how to use adaptor
> signatures and to have additional PSBT fields for it. It doesn't look
> like those are mutually exclusive in any way or that the fields that
> I've proposed wouldn't still work.

Yeah, it's just that it would be nice if musig capable signers were also
capable of handling s2c/anti-exfil and tweaks/adaptor-sigs immediately,
rather than it being a "wait for the next release" thing...

> I don't know enough about the topic to really say much on whether or how
> such fields would work.

I think for signers who otherwise don't care about these features, the
only difference is that you add the tweak to the musig nonces before
hashing/signing, which is pretty straightforward. So I think, if it
were specced, it'd be an easy win.
Definitely shouldn't be a blocker though. Here's another idea for formatting the tables fwiw: https://github.com/ajtowns/bips/blob/d8a90cff616d6e5839748a1b2a50d32947f30850/bip-musig2-psbt.mediawiki Cheers, aj ___ bitcoin-dev mailing list bitcoin-dev@lists.linuxfoundation.org https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
Re: [bitcoin-dev] Proposed BIP for MuSig2 PSBT Fields
On Tue, Oct 10, 2023 at 10:28:37PM +, Andrew Chow via bitcoin-dev wrote:
> I've written up a BIP draft for MuSig2 PSBT fields. It can be viewed at
> https://github.com/achow101/bips/blob/musig2-psbt/bip-musig2-psbt.mediawiki.

I was hoping to see adaptor signature support in this; but it seems that's also missing from BIP 327? Though libsecp256k1-zkp has implemented it:

https://github.com/BlockstreamResearch/secp256k1-zkp/blob/master/include/secp256k1_musig.h (adaptor arg to process_nonce; adapt, and extract_adaptor functions)
https://github.com/BlockstreamResearch/secp256k1-zkp/blob/master/src/modules/musig/musig.md#atomic-swaps

I would have expected the change here to support this to be:

* an additional field to specify the adaptor, PSBT_IN_MUSIG2_PUB_ADAPTOR (optional, 33B compressed pubkey, 32B-hash-or-omitted), that signers have to take into account
* an additional field to specify the adaptor secret, PSBT_IN_MUSIG2_PRIV_ADAPTOR (32B), added by a Signer role
* PartialSigAgg should check if PUB_ADAPTOR is present, and if so, incorporate the value from PSBT_IN_MUSIG2_PRIV_ADAPTOR, failing if that isn't present

(Note that when using adaptor signatures, signers who don't know the adaptor secret will want to ensure that the partial signatures provided by signers who do/might know the secret are valid. But that depends on the protocol, and isn't something that can be automated at the PSBT level, I think)

Seems like it would be nice to have that specified asap, so that it can be supported by all signers?

FWIW, "participant" is typoed a bunch ("particpant") and the tables are hard to read: you might consider putting the description as a separate row? eg: https://github.com/ajtowns/bips/blob/202310-table/bip-musig2-psbt.mediawiki

Cheers,
aj
Re: [bitcoin-dev] BitVM: Compute Anything on Bitcoin
On Mon, Oct 09, 2023 at 03:46:24PM +0200, Robin Linus via bitcoin-dev wrote:
> Abstract. BitVM is a computing paradigm to express Turing-complete
> Bitcoin contracts.

Please correct me if I'm wrong: The way I understand this idea is that you take an N-bit claim (in the given example N=4), and provide a NAND circuit that asserts whether the claim is valid or not (in the example, if I did the maths right, valid values take the form xxx0, 1x01, and only three bits were actually needed). It would be very straightforward afaics to allow for AND/OR/XOR/etc gates, or to have operations with more than two inputs/one output.

The model is then a prover/challenger one: the prover claims to have a solution, and the verifier issues challenges that the prover will only be able to reply to consistently if the solution is correct. If the prover doesn't meet the challenge, they lose the funds.

The circuit entails C individual assertions, with two inputs (selected from either the N input bits, or the output of one of the previous assertions) and a single output. You then encode each of those C assertions as tapleafs, so that spending a tx via that tapleaf validates that individual assertion. You also have an additional tapleaf per input/assertion, that allows the verifier to claim the funds immediately rather than issue another challenge if the prover ever gave two inconsistent values for either an input or the result of one of the assertions.

If the prover tries to cheat -- eg, claiming that an invalid value is a valid input in the example -- then the verifier can run the circuit themselves offline, establish that it's invalid, and work backwards from the tip to establish the error. For example:

TRUE=NAND(L,D) -- D is true, so L better be false
L=NAND(J,A) -- A is true, so J better be true for L to be false
J=NAND(H,I) -- one of H or I must be false for J to be true, prover will have to pick. suppose they pick I.
I=NAND(G,B) -- B is true, so if I was false, G was true
G=NAND(A,C) -- can only pass at this point with some of A,C,G being given an inconsistent value

So you should need enough challenges to cover the longest circuit path (call it P) in order to reliably invalidate an attempt to cheat. I guess if your path isn't "branching" (ie one of the NAND inputs is something you already have a commitment to) then you can skip back to something that NANDs two "unknowns", at which point either one of the inputs is wrong, and you trace it further down, or the output is correct, in which case you can do a binary search across the NANDs where there wasn't any branching, which should get you roughly to P=log(C) steps, at which point you can do a billion gate circuit in ~100 on-chain transactions?

I think the "response enabled by challenge revealing a unique preimage" approach allows you to do all the interesting work in the witness, which then means you can pre-generate 2-of-2 signatures to ensure the protocol is followed, without needing CTV/APO. You'd need to exchange O(C*log(C)) hashes for the challenge hashes as well as the 2*C commitment hashes, so if you wanted to limit that setup to 20GB, then 24M gates would be about the max.

I think APO/CTV would let you avoid all the challenge hashes -- you'd instead construct P challenge txs, and P*C response txs; with the output of the C responses at level i being the i+1'th challenge, and each of the tapscripts in the P challenges having a CTV-ish commitment to a unique response tx. Still a lot of calculation, but less transfer needed. You'd still need to transfer 2*C hashes for the commitments to each of the assertions; but 20GB gets you a circuit with about ~300M gates then.

> It is inefficient to express functions in simple NAND circuits. Programs
> can be expressed more efficiently by using more high-level opcodes. E.g.,
> Bitcoin script supports adding 32-bit numbers, so we need no binary
> circuit for that.
I don't think that really works, though? You need a way of committing to the 32-bit number in a way that allows proof of equivocation; but without something like OP_CHECKSIGFROMSTACK, I don't think we really have that. You could certainly have 2**K hashes to allow a K-bit number, but I think you'd have a hard time enumerating even three 16-bit numbers into a 4MB tapscript.

CSFS-ish behaviour would let you make the commitments by signature, so you wouldn't need to transfer hashes in advance at all, I think.

Cheers,
aj
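The prover/challenger model described above can be illustrated offline. Here's a toy Python model -- the circuit, gate names and encoding are entirely illustrative and have nothing to do with actual BitVM tapleaf/preimage encoding -- showing a NAND-assertion circuit and a verifier catching a prover whose claimed wire values equivocate:

```python
# Toy model of the prover/challenger game: a circuit of C NAND assertions,
# where the verifier detects a cheating prover by finding the gate whose
# claimed output is inconsistent with its claimed inputs.

def nand(a, b):
    return 1 - (a & b)

def evaluate(inputs, gates):
    """Evaluate a NAND circuit. Each gate is a pair of wire indices;
    wires 0..len(inputs)-1 are the inputs, later wires are gate outputs."""
    wires = list(inputs)
    for a, b in gates:
        wires.append(nand(wires[a], wires[b]))
    return wires

def find_equivocation(claimed, inputs, gates):
    """Return the index of the first gate whose claimed output is
    inconsistent with its claimed input wires, or None if the
    trace is internally consistent."""
    n = len(inputs)
    for i, (a, b) in enumerate(gates):
        if claimed[n + i] != nand(claimed[a], claimed[b]):
            return i
    return None

# Example: wires 0..3 are inputs; gate outputs are wires 4, 5, 6.
gates = [(0, 1), (2, 4), (3, 5)]
honest = evaluate([1, 1, 0, 1], gates)
cheat = list(honest)
cheat[-1] ^= 1            # prover lies about the final output
assert find_equivocation(honest, [1, 1, 0, 1], gates) is None
assert find_equivocation(cheat, [1, 1, 0, 1], gates) == 2
```

The on-chain protocol then only needs to force the prover to commit to each wire value (via unique hash preimages), so that an inconsistency like the one found here can be proven in a single tapleaf spend.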
Re: [bitcoin-dev] MATT: [demo] Optimistic execution of arbitrary programs
On Fri, Sep 29, 2023 at 03:14:25PM +0200, Johan Torås Halseth via bitcoin-dev wrote:
> TLDR; Using the proposed opcode OP_CHECKCONTRACTVERIFY and OP_CAT, we
> show to trace execution of the program `multiply` [1] and challenge
> this computation in O(n logn) on-chain transactions:

"O(n log n)" sounds wrong? Isn't it O(P + log(N)) where P is the size of the program, and N is the number of steps (rounded up to a power of 2)?

You say:

> node = h( start_pc|start_i|start_x|end_pc|end_i|end_x|h(
> h(sub_node1)|h(sub_node2) )

But I don't think that works -- I think you want to know h(sub_node1) and h(sub_node2) directly, so that you can compare them to the results you get if you run the computation, and choose the one that's incorrect. Otherwise you've got a 50/50 chance of choosing the subnode that's actually correct, and you'll only be able to prove a mistake with 1/2**N odds? Not a big change, it just becomes 32B longer (and drops some h()s):

node = start_pc|start_i|start_x|end_pc|end_i|end_x|h(sub_node1)|h(sub_node2)
leaf = start_pc|start_i|start_x|end_pc|end_i|end_x|null

I'm not seeing what forces the prover to come up with a balanced state tree -- if they don't have to have a balanced tree, then I think there are many possible trees for the same execution trace, and again it would become easy to hide an error somewhere the challenger can't find. Adding a "start_stepcount" and "end_stepcount" would probably remedy that?

There seems to be an error in the "what this would look like for 4 state transitions" diagram -- the second node should read "0|0|2 -> 0|1|4" (combining its two children), not "0|0|2 -> 1|0|2" matching its left child.

I'm presuming that the counterparty verifies they know the program (ie, all the leaves in the contract taptree) before agreeing to the contract in the first place. I think that's fine.

Cheers,
aj
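As a toy illustration of the fix suggested above -- the byte encoding here is hypothetical, the real proposal would commit to these fields in script -- an internal node that includes both child hashes directly lets a challenger recompute one child offline and point at whichever hash is wrong:

```python
# Sketch of the corrected commitment: each internal node commits to its
# start/end state *and* to h(sub_node1), h(sub_node2) directly, rather
# than only to h(h(sub_node1)|h(sub_node2)).
import hashlib

def h(b):
    return hashlib.sha256(b).digest()

def leaf(start, end):
    # leaf = start_pc|start_i|start_x|end_pc|end_i|end_x|null
    return b"|".join(str(v).encode() for v in start + end) + b"|null"

def node(start, end, left, right):
    # node = start_pc|start_i|start_x|end_pc|end_i|end_x|h(sub1)|h(sub2)
    return b"|".join(str(v).encode() for v in start + end) + b"|" + h(left) + h(right)

# Two adjacent steps of a doubling program, merged into one parent:
l = leaf((0, 0, 2), (0, 1, 4))
r = leaf((0, 1, 4), (0, 2, 8))
parent = node((0, 0, 2), (0, 2, 8), l, r)

# A challenger holding `parent` can recompute h(l) and h(r) from their own
# execution and descend into whichever child hash fails to match.
assert h(l) in parent and h(r) in parent
```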
Re: [bitcoin-dev] [Lightning-dev] Scaling Lightning With Simple Covenants
On Fri, Sep 08, 2023 at 06:54:46PM +, jlspc via Lightning-dev wrote:
> TL;DR
> =

I haven't really digested this, but I think there's a trust vs capital-efficiency tradeoff here that's worth extracting.

Suppose you have a single UTXO, that's claimable by "B" at time T+L, but at time T that UTXO holds funds belonging not only to B, but also millions of casual users, C_1..C_100. If B cheats (eg by not signing any further lightning updates between now and time T+L), then each casual user needs to drop their channel to the chain, or else lose all their funds. (Passive rollover doesn't change this -- it just moves the responsibility for dropping the channel to the chain to some other participant)

That then faces the "thundering herd" problem -- instead of the single one-in/one-out tx that we expected when B is doing the right thing, we're instead seeing between 1M and 2M on-chain txs as everyone recovers their funds (the number of casual users multiplied by some factor that depends on how many outputs each internal tx has).

But whether an additional couple of million txs is a problem depends on how long a timeframe they're spread over -- if it's a day or two, then it might simply be impossible; if it's over a year or more, it may not even be noticeable; if it's somewhere in between, it might just mean you're paying modestly more in fees than you'd normally have expected.

Suppose that casual users have a factor in mind, eg "If worst comes to worst, and everyone decides to exit at the same time I do, I want to be sure that only generates 100 extra transactions per block if everyone wants to recover their funds prior to B being able to steal everything". Then in that case, they can calculate along the following lines: 1M users with 2-outputs per internal tx means 2M transactions, divide that by 100 gives 20k blocks, at 144 blocks per day, that's 5 months.
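As a quick sanity check of that arithmetic (same figures as the text; the 1/(1+LA/LI) capital-efficiency factor is discussed just below):

```python
# Recompute the casual users' worst-case exit timeline, and the
# 1/(1+LA/LI) capital-efficiency reduction factor, using the figures
# from the text. Throwaway back-of-the-envelope check only.
users = 1_000_000
outputs_per_internal_tx = 2
extra_txs_per_block = 100      # the casual users' chosen tolerance
blocks_per_day = 144

exit_txs = users * outputs_per_internal_tx        # 2M transactions
li_blocks = exit_txs // extra_txs_per_block       # 20,000 blocks
li_days = li_blocks / blocks_per_day              # ~139 days, ie roughly 5 months

la_weeks = 16                                     # active lifetime, from the example below
li_weeks = li_days / 7
reduction = 1 / (1 + la_weeks / li_weeks)         # fraction lost to the inactive tail

assert li_blocks == 20_000
assert 138 < li_days < 140
assert 0.5 < reduction < 0.6
```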
Therefore, I'm going to ensure all my funds are rolled over to a new utxo while there's at least 5 months left on the timeout.

That lowers B's capital efficiency -- if all the casual users follow that policy, then B is going to own all the funds in Fx for five whole months before it can access them. So each utxo here has its total lifetime (L) actually split into two phases: an active lifetime LA of some period, and an inactive lifetime of LI=5 months, which would have been used by everyone to recover their funds if B had attempted to block normal rollover. The capital efficiency is then reduced by a factor of 1/(1+LA/LI). (LI is dependent on the number of users, their willingness to pay high fees to recover their funds, and global blockchain capacity; LA is L-LI; L is your choice)

Note that casual users can't easily reduce their LI timeout just by having the provider split them into different utxos -- if the provider cheats/fails, that's almost certainly correlated across all their utxos, and all the participants across each of those utxos will need to drop to the chain to preserve their funds, each competing with each other for confirmation.

Also, if different providers collude, they can cause problems: if you expected 2M transactions over five months due to one provider failing, that's one thing; but if a dozen providers fail simultaneously, then that balloons up to perhaps 24M txs over the same five months, or perhaps 25% of every block, which may be quite a different matter.

Ignoring that caveat, what do numbers here look like?
If you're a provider who issues a new utxo every week (so new customers can join without too much delay), have a million casual users as customers, and target LA=16 weeks (~3.5 months), so users don't need to roll over too frequently, and each user has a balanced channel with $2000 of their own funds, and $2000 of your funds, so they can both pay and be paid, then your utxos might look like:

active_1 through active_16: 62,500 users each; $250M balance each
inactive_17 through inactive_35: $250M balance each, all your funds, waiting for timeout to be usable

That's:

* $2B of user funds
* $2B of your funds in active channels
* $4.5B of your funds locked up, waiting for timeout

In that case, only 30% of the $6.5B worth of working capital that you've dedicated to lightning is actually available for routing.

Optimising that formula by making LA as large as possible doesn't necessarily work -- if a casual user spends all their funds and disappears prior to the active lifetime running out, then those funds can't be easily spent by B until the total lifetime runs out, so depending on how persistent your casual users are, I think that's another way of ending up with your capital locked up unproductively.

(There are probably ways around this with additional complexity: eg, you could peer with a dedicated node, and have the timeout path be "you+them+timeout", so that while you could steal from casual users who don't roll over, you can't steal from your dedicated peer, so that $4.5B could be
Re: [bitcoin-dev] Bitcoin Core maintainers and communication on merge decisions
On Tue, Apr 18, 2023 at 12:40:44PM +, Michael Folkson via bitcoin-dev wrote:
> I do think the perception that it is “the one and only” staging
> ground for consensus changes is dangerous

If you think that about any open source project, the answer is simple: create your own fork and do a better job. Competition is the only answer to concerns about the bad effects from a monopoly. (Often the good effects from cooperation and collaboration -- less wasted time and duplicated effort -- turn out to outweigh the bad effects, however)

In any event, inquisition isn't "the one and only staging ground for consensus changes" -- every successful consensus change to date has been staged through the developers' own repo then the core PR process, and that option still exists.

Cheers,
aj
Re: [bitcoin-dev] BIP for OP_VAULT
On Tue, Mar 07, 2023 at 10:45:34PM +1000, Anthony Towns via bitcoin-dev wrote:
> I think there are perhaps four opcodes that are interesting in this class:
>
> idx sPK OP_FORWARD_TARGET
> -- sends the value to a particular output (given by idx), and
> requires that output have a particular scriptPubKey (given
> by sPK).
>
> idx [...] n script OP_FORWARD_LEAF_UPDATE
> -- sends the value to a particular output (given by idx), and
> requires that output to have almost the same scriptPubKey as this
> input, _except_ that the current leaf is replaced by "script",
> with that script prefixed by "n" pushes (of values given by [...])
>
> idx OP_FORWARD_SELF
> -- sends the value to a particular output (given by idx), and
> requires that output to have the same scriptPubKey as this input
>
> amt OP_FORWARD_PARTIAL
> -- modifies the next OP_FORWARD_* opcode to only affect "amt",
> rather than the entire balance. opcodes after that affect the
> remaining balance, after "amt" has been subtracted. if "amt" is
> 0, the next OP_FORWARD_* becomes a no-op.

The BIP 345 draft has been updated [0] [1] and now pretty much defines OP_VAULT to have the behaviour specced for OP_FORWARD_LEAF_UPDATE above, and OP_VAULT_RECOVER to behave as OP_FORWARD_TARGET above. Despite that, for this email I'm going to continue using the OP_FORWARD_* naming convention.

Given the recent controversy over the Yuga labs ordinal auction [2], perhaps it's interesting to consider that these proposed opcodes come close to making it possible to do a fair, non-custodial, on-chain auction of ordinals [3].
The idea here is that you create a utxo on chain that contains the ordinal in question, which commits to the address of the current leading bidder, and can be spent in two ways:

1) it can be updated to a new bidder, if the bid is raised by at least K satoshis, in which case the previous bidder is refunded their bid; or,

2) if there have been no new bids for a day, the current high bidder wins, and the ordinal is moved to their address, while the funds from their winning bid are sent to the original vendor's address.

I believe this can be implemented in script as follows, assuming the opcodes OP_FORWARD_TARGET (OP_VAULT_RECOVER), OP_FORWARD_LEAF_UPDATE (OP_VAULT), OP_FORWARD_PARTIAL (as specced above), and OP_PUSHCURRENTINPUTINDEX (as implemented in liquid/elements [4]) are all available.

First, figure out the parameters:

* Set VENDOR to the scriptPubKey corresponding to the vendor's address.
* Set K to the minimum bid increment [5].
* Initially, set X equal to VENDOR.
* Initially, set V to just below the reserve price (V+K is the minimum initial bid).

Then construct the following script:

[X] [V] [SSS] TOALT TOALT TOALT
0 PUSHCURRENTINPUTINDEX EQUALVERIFY
DEPTH NOT
IF
  0 1 FORWARD_PARTIAL 0 FROMALT FORWARD_TARGET
  1 [VENDOR] FWD_TARGET
  144
ELSE
  FROMALT SWAP TUCK FROMALT [K] ADD GREATERTHANOREQUAL VERIFY
  1 SWAP FORWARD_TARGET
  DUP FORWARD_PARTIAL 0 ROT ROT FROMALT DUP 3 SWAP FORWARD_LEAF_UPDATE
  0
ENDIF
CSV 1ADD

where "SSS" is a pushdata of the rest of the script ("TOALT TOALT TOALT .. 1ADD").

Finally, make that script the sole tapleaf, accompanied by a NUMS point as the internal public key, calculate the taproot address corresponding to that, and send the ordinal to that address as the first satoshi.

There are two ways to spend that script.
With an empty witness stack, the following will be executed:

[X] [V] [SSS] TOALT TOALT TOALT
    -- altstack now contains [SSS V X]
0 PUSHCURRENTINPUTINDEX EQUALVERIFY
    -- this input is the first, so the ordinal will move to the first output
DEPTH NOT IF
    -- take this branch: the auction is over!
1 [VENDOR] FWD_TARGET
    -- output 1 gets the entire value of this input, and pays to the
       vendor's hardcoded scriptPubKey
0 1 FORWARD_PARTIAL 0 FROMALT FORWARD_TARGET
    -- we forward at least 10k sats to output 0 (if there were 0 sats, the
       ordinal would end up in output 1 instead, which would be a bug), and
       output 0 pays to scriptPubKey "X"
144 ELSE .. ENDIF
    -- skip over the other branch
CSV
    -- check that this input has baked for 144 blocks (~1 day)
1ADD
    -- leave 145 on the stack, which is true. success!

Alternatively, if you want to increase the bid you provide a stack with two items: your scriptPubKey and the new bid [X' V']. Execution this time looks like:

[X] [V] [SSS] TOALT TOALT TOALT
    -- stack contains [X' V'], altstack now contains [SSS V X]
0 PUSHCURRENTINPUTINDEX EQUALVERIFY
    -- this input is the first, so the ordinal will move to the first output
DEPTH NOT IF ... ELSE -
Re: [bitcoin-dev] BIP for OP_VAULT
On 10 March 2023 4:45:15 am AEST, Greg Sanders via bitcoin-dev wrote:
>1) OP_FORWARD_SELF is a JET of OP_FLU in the revaulting common case. Maybe
>obvious but I missed this initially and thought it was useful to be pointed
>out.

That was true for TLUV - iirc "FALSE FALSE 0 TLUV" would preserve the spk - but I don't think it's true for OP_FLU: you can't commit to preserving the current script without a way to observe the current script; trying to include a copy of the script in the script makes the script size infinite, and trying to include a hash of the script inside the script is cryptographically infeasible.

You could just special case "0 0 OP_FLU" to result in the same script rather than an empty one though, which would avoid the need for a dedicated FWD_SELF opcode.

(Not convinced calling things Jets when they're unrelated to simplicity makes sense)

Cheers,
aj

--
Sent from my phone.
Re: [bitcoin-dev] BIP for OP_VAULT
On Mon, Mar 06, 2023 at 10:25:38AM -0500, James O'Beirne via bitcoin-dev wrote:
> What Greg is proposing above is to in essence TLUV-ify this proposal.

FWIW, the way I'm thinking about this is that the "OP_VAULT" concept is introducing two things:

a) the concept of "forwarding" the input amount to specified outputs in a way that elegantly allows merging/splitting
b) various restrictions on the form of the output scripts

These concepts go together well, because restricting an output script is only an interesting thing to do if you're moving value from this input into it. And then it's just a matter of figuring out a nice way to pick opcodes that combine those two concepts in interesting ways.

This is different from TLUV, in that TLUV only did part (b), and assumed you'd do part (a) manually somehow, eg via "OP_IN_OUT_AMOUNT" and arithmetic opcodes. The advantage of this new approach over that one is that it makes it really easy to get the logic right (I often forgot to include the IN_OUT_AMOUNT checks at all, for instance), and also makes spending multiple inputs to a single output really simple, something that would otherwise require kind-of gnarly logic.

I think there are perhaps four opcodes that are interesting in this class:

idx sPK OP_FORWARD_TARGET
 -- sends the value to a particular output (given by idx), and requires that output have a particular scriptPubKey (given by sPK).

idx [...] n script OP_FORWARD_LEAF_UPDATE
 -- sends the value to a particular output (given by idx), and requires that output to have almost the same scriptPubKey as this input, _except_ that the current leaf is replaced by "script", with that script prefixed by "n" pushes (of values given by [...])

idx OP_FORWARD_SELF
 -- sends the value to a particular output (given by idx), and requires that output to have the same scriptPubKey as this input

amt OP_FORWARD_PARTIAL
 -- modifies the next OP_FORWARD_* opcode to only affect "amt", rather than the entire balance.
opcodes after that affect the remaining balance, after "amt" has been subtracted. if "amt" is 0, the next OP_FORWARD_* becomes a no-op.

Then each time you see OP_FORWARD_TARGET or OP_FORWARD_LEAF_UPDATE, you accumulate the value that's expected to be forwarded to the output by each input, and verify that the amount for that output is greater-or-equal to the accumulated value.

> ## Required opcodes
> - OP_VAULT: spent to trigger withdrawal
> - OP_VAULT_RECOVER: spent to recover

Naming here is OP_VAULT ~= OP_FORWARD_LEAF_UPDATE; OP_VAULT_RECOVER ~= OP_FORWARD_TARGET.

> For each vault, vaulted coins are spent to an output with the taproot
> structure
>
> taproot(internal_key, {$recovery_leaf, $trigger_leaf, ...})
>
> where
>
> $trigger_leaf =
>
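The accumulate-and-check rule described above -- each OP_FORWARD_* execution adds an expected amount to its target output, and every output's actual value must cover the total accumulated for it -- can be sketched with a toy model (hypothetical data structures, not consensus code):

```python
# Toy model of OP_VAULT-style value accumulation: forwards from all
# inputs are summed per output index, then checked against the actual
# output values.
from collections import defaultdict

def check_forwards(forwards, output_values):
    """forwards: list of (output_idx, amount) pairs gathered from every
    input's OP_FORWARD_* executions.
    output_values: actual satoshi value of each tx output."""
    expected = defaultdict(int)
    for idx, amount in forwards:
        expected[idx] += amount
    return all(output_values[idx] >= amt for idx, amt in expected.items())

# Two vault inputs merging into output 0, plus a partial refund to output 1:
assert check_forwards([(0, 60_000), (0, 40_000), (1, 5_000)],
                      [100_000, 5_000])
# Output 0 is short by one sat, so validation fails:
assert not check_forwards([(0, 60_000), (0, 40_000)], [99_999])
```

This is what makes spending multiple vault inputs to a single output "just work" without any per-script amount arithmetic.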
Re: [bitcoin-dev] BIP for OP_VAULT
On Wed, Mar 01, 2023 at 10:05:47AM -0500, Greg Sanders via bitcoin-dev wrote:
> Below is a sketch of a replacement for the two opcodes.

I like this! I tried to come up with something along similar lines for similar reasons, but I think I tried too hard to reduce it to two opcodes or something and got myself confused.

> `OP_TRIGGER_FORWARD`: Takes exactly three arguments:
> 1) output index to match against (provided at spend time normally)
> 2) target-outputs-hash: 32 byte hash to be forwarded to output given at (1)
> (provided at spend time normally)
> 3) spend-delay: value to be forwarded to output given at (1)

I think you could generalise this as follows:

idx .. npush script OP_FORWARD_LEAF_UPDATE (OP_FLU :)

with the behaviour being:

* pop script from the stack
* pop npush from the stack (error if non-minimal or <0)
* pop npush entries from the stack, prefix script with a minimal push of that entry
* pop idx off the stack (error if idx is not a valid output)
* calculate the spk corresponding to taking the current input's spk and replacing the current leaf with the given script
* check the output at idx matches this spk, and the value from this input accumulates to that output

Then instead of `idx hash delay OP_TRIGGER_FORWARD` you write `idx hash delay 2 "OP_CSV OP_DROP OP_FORWARD_OUTPUTS" OP_FORWARD_LEAF_UPDATE`. That's an additional 5 witness bytes, but a much more generic/composable opcode.

Being able to prefix a script with push opcodes avoids the possibility of being able to add OP_SUCCESS instructions, so I think this is a fairly safe way of allowing a TLUV-ish script to be modified, especially compared to OP_CAT.
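As a rough sketch of that stack discipline (a Python stand-in only: `minimal_push` is simplified, no taproot reconstruction is attempted, and the placeholder script bytes are illustrative):

```python
# Toy model of the OP_FORWARD_LEAF_UPDATE stack behaviour described above:
# pop a script, an npush count, then npush entries that get prepended to
# the script as minimal pushes; the caller would then rebuild the taproot
# scriptPubKey with the resulting leaf.

def minimal_push(data: bytes) -> bytes:
    """Very simplified minimal-push encoding for short data items."""
    if len(data) == 0:
        return bytes([0x00])            # OP_0
    if len(data) == 1 and 1 <= data[0] <= 16:
        return bytes([0x50 + data[0]])  # OP_1..OP_16
    assert len(data) <= 75
    return bytes([len(data)]) + data    # direct push

def forward_leaf_update(stack):
    script = stack.pop()
    npush = stack.pop()
    prefix = b""
    for _ in range(npush):
        prefix = minimal_push(stack.pop()) + prefix
    idx = stack.pop()
    new_leaf = prefix + script
    return idx, new_leaf   # caller rebuilds the taproot spk with new_leaf

# "idx hash delay 2 <script>" as in the example above (placeholder values):
stack = [0, b"\xaa" * 32, bytes([10]), 2, b"<CSV DROP FORWARD_OUTPUTS>"]
idx, leaf = forward_leaf_update(stack)
assert idx == 0
assert leaf.startswith(bytes([32]) + b"\xaa" * 32)  # pushed hash comes first
assert leaf[33] == 0x5a                             # then OP_10, the delay
```

Note the pop order means the hash ends up as the outermost (first) push, so the replacement leaf executes as `<hash> <delay> OP_CSV OP_DROP OP_FORWARD_OUTPUTS`.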
I do recognise that it makes it take a variable number of stack elements though :) > As the derived tapscript, embedded in a output scriptpubkey of the form: > `tr(NUMS,{...,EXPR_WITHDRAW})`, meaning we literally take the control block > from the spending input, swap the inner pubkey for `NUMS`, use > `EXPR_WITHDRAW` as the tapleaf, reconstruct the merkle root. If the output > scriptpubkey doesnt match, fail. I don't think replacing the internal-public-key makes sense -- if it was immediately spendable via the keypath before there's no reason for it not to be immediately spendable now. > Could save 2 WU having OP_FORWARD_OUTPUTS take the directly > as an argument, or keep it more general as I did. Having OP_FORWARD_OUTPUTS not leave its input on the stack would let you move the OP_CSV to the end and drop the OP_DROP too, saving 1 WU. > Would love to know what you and others think about this direction. I > apologies for any misunderstandings I have about the current OP_VAULT BIP! I think the existing OP_VAULT cleverness would work here, allowing you to spend two inputs to the same output, accumulating their values. I don't think it quite gives you a way to "refund" values though -- so that you can take a vault with 3 BTC, start the wait to spend 1.4 BTC, and then immediately decide to spend an additional 0.8 BTC on something else, without the 0.8 BTC effectively having a doubled delay. I think you could fix that with something as simple as an additional "idx OP_FORWARD_REFUND" opcode, though -- then the restriction is just that the output at the refund idx has the same sPK as this input, and the total value of this input is accumulated amongst all the outputs specified by OP_FORWARD opcodes. 
(Maybe you need to specify the refund amount explicitly as well, to keep verification easy)

That would make maybe three new opcodes to cover the "accumulate value from one or more inputs into specified outputs":

- OP_FORWARD_LEAF_UPDATE --> forward input value to modified spk
- OP_FORWARD_DESTINATION --> forward input value to given spk
- OP_FORWARD_REFUND --> forward part of input value to same spk

along with OP_CTV:

- OP_FORWARD_OUTPUTS --> pay to specific outputs

OP_VAULT's "accumulate value" behaviour here makes the OP_IN_OUT_AMOUNT things from TLUV more implicit and automatic, which is nice. I think doing TLUV payment pools wouldn't require much more than the ability to combine OP_FLU and OP_FDEST in a single script, explicitly specifying how much value is extracted via OP_FDEST with the rest assigned to OP_FLU.

Cheers,
aj
Re: [bitcoin-dev] Refreshed BIP324
On Mon, Feb 20, 2023 at 03:22:30PM +, Pieter Wuille via bitcoin-dev wrote:
> On Sunday, February 19th, 2023 at 6:56 PM, Anthony Towns wrote:
> > On Fri, Feb 17, 2023 at 10:13:05PM +, Pieter Wuille via bitcoin-dev wrote:
> > > > I think it's probably less complex to close some of the doors?
> > > > 2) are short ids available/meaningful to send prior to VERACK being
> > > > completed?
> > > > Ah, I hadn't considered this nuance. If we don't care about them being
> > > > available before VERACK negotiation, then it may be possible to
> > > > introduce a way to negotiate a different short id mapping table without
> > > > needing a mechanism for re-negotiating.
> > I think you still need/want two negotiation steps -- once to tell each
> > other what tables you know about, once to choose a mutually recognised
> > table and specify any additions.
> Right, I wasn't talking about how many steps/messages the negotiation takes.
> I just meant that if all negotiation of the mapping table happens just once
> (before VERACK) and that negotiation itself happens without use of short
> commands, then there is no need for re-negotiating short commands after they
> are already in use. Nothing concrete, but I can imagine that that may
> simplify some implementations.

Yeah; I was just thinking of the fact that currently the negotiation is:

* send your VERSION message
* see what their VERSION message is
* announce a bunch of features, depending on the version (or service flags)
* send the VERACK (and GETADDR and final ALERT)
* wait for their announcements and VERACK
* negotiation is finished; we know everything; we're ready to go

which only gets you two steps if you send the short id stuff as part of the VERSION message. Obviously you could just add an extra phase either just before or just after the VERACK, though.
I suppose being able to choose your own short id mapping from day 0 would mean that every bip324 node could use a single short id mapping for all outgoing messages, which might also make implementation marginally easier (no need to use one table for modern nodes, but also support the original table for old bip324 implementations)...

Cheers,
aj
Re: [bitcoin-dev] Refreshed BIP324
On Fri, Feb 17, 2023 at 10:13:05PM +, Pieter Wuille via bitcoin-dev wrote:
> > I think it's probably less complex to close some of the doors?
> > 2) are short ids available/meaningful to send prior to VERACK being
> > completed?
> Ah, I hadn't considered this nuance. If we don't care about them being
> available before VERACK negotiation, then it may be possible to introduce a
> way to negotiate a different short id mapping table without needing a
> mechanism for *re*-negotiating.

I think you still need/want two negotiation steps -- once to tell each other what tables you know about, once to choose a mutually recognised table and specify any additions.

> > I think the things missing from the current list (and not currently in
> > use by bitcoin core) are:
> > bip 61: REJECT
> > bip 331: GETPKGTXNS, PKGTXNS, ANCPKGINFO
> Do you feel REJECT should be included?

I don't think it matters much; reject messages are both rare and include a reason so you'd only be saving maybe 12 bytes out of 62 (~20%) for maybe 6000 messages a day per peer that sends reject messages, so 72kB/day/reject-peer?

> Perhaps a possibility is having the transport layer translate
> short-command-number-N to the 12-byte command "\x00\x00..." + byte(N), and
> hand that to the application layer, which could then do the mapping?

Presuming the transport layer also continues to reject commands that have a '\x00' byte at the start or in the middle (ie !IsCommandValid()), that seems pretty reasonable...

Cheers,
aj
Re: [bitcoin-dev] Refreshed BIP324
On Thu, Feb 16, 2023 at 05:43:22PM +, Dhruv M via bitcoin-dev wrote:
> Problem:
> - 1 byte message type IDs are lacking a co-ordination mechanism when multiple
> in-flight BIPs are proposing new message types as the id space is reduced
> form 12 ASCII bytes to 1 byte.
> - 1 byte IDs are scarce and should be allocated judiciously, especially given
> that gains on bandwidth are very much non-uniform across message types.

ACK.

> Solutions:
> - Uniform encoding using the high-bit increases the available ID space
> drastically, however, there's still the issue of making sure that the most
> frequent message types get the shorter IDs.
> - Making type IDs negotiable(editable, really) per direction per connection
> solves that issue at the cost of some increased complexity.
>
> Since we don't really know the extent to which the protocol will ossify over
> time and that BIP324 is already quite a large change, we might want to
> optimize for the least additional complexity that doesn't close the doors on
> any of the solutions.

I think it's probably less complex to close *some* of the doors? In particular, I think there's two questions that have to get answered:

1) how do you distinguish the command from the payload for non short-ids -- by a length prefix, or by setting the high-bit of the final command byte?

2) are short ids available/meaningful to send prior to VERACK being completed?

> How about this:
> - BIP324 restricts type IDs to [1, 127]

Is this for short ids (currently [13-255] per the bip) or for every byte in a non-short-id command (for p2p v1, IsCommandValid() restricts each byte to being in the printable ascii range, ie [32-126])?

Here's another approach:

idea: we use short ids to minimise bandwidth, and don't care about bandwidth for long ids

implementation: short id 0 is reserved for long commands.
when received, we decode the first 12 bytes of the payload and treat them exactly the same as a v1 p2p message (trailing 0-bytes, etc). (If there's not 12 bytes of payload, it's just treated as an invalid command and dropped.) Short ids 1-255 are available for use as aliases of particular long commands.

(That's exactly compatible with p2p v1, and also avoids the temptation to try to choose short command names rather than descriptive ones -- the 0-padding to 12 bytes prevents you from saving any bandwidth that way; but that's what we have short ids for anyway.)

If we decide we want >255 short ids, we can figure out how to extend them later, in a fairly open ended way I think, eg by having [128-255] imply a 2 byte short id, so that seems fine?

> - We remove 1 byte allocations for messages that are sent at most once per
>   connection per direction

I think this leaves 32 commands that get short ids initially:

 misc: ADDR, ADDRV2, BLOCK, FEEFILTER, GETBLOCKS, GETDATA, GETHEADERS, HEADERS, INV, NOTFOUND, PING, PONG, TX
 bip 35/37: FILTERADD, FILTERCLEAR, FILTERLOAD, MEMPOOL, MERKLEBLOCK
 bip 152: BLOCKTXN, CMPCTBLOCK, GETBLOCKTXN
 bip 157: CFCHECKPT, CFHEADERS, CFILTER, GETCFCHECKPT, GETCFHEADERS, GETCFILTERS
 bip 330: RECONCILDIFF, REQRECON, REQSKETCHEXT, SENDCMPCT, SKETCH

which drops:

 VERSION, VERACK, GETADDR, SENDADDRV2, SENDHEADERS, SENDTXRCNCL, WTXIDRELAY

compared to bip 324 currently.

I think the things missing from the current list (and not currently in use by bitcoin core) are:

 bip 61: REJECT
 bip 331: GETPKGTXNS, PKGTXNS, ANCPKGINFO

> - Optionally, in the implementation we can attempt to move the type id
>   mapping to the p2p layer away from the transport layer. I suspect this could
>   also be done after the implementation is merged but might be cleaner as the
>   mapping is a p2p concern.

I agree that's fine, though I expect that we'll probably want to do it not long after bip 331 is ready for merge (or some other p2p improvement comes along)...
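The decoding rule sketched above (short id 0 means "read a 12-byte v1-style command from the payload"; ids 1-255 alias long commands via a negotiated table) could look something like the following. This is an illustrative sketch only, not Bitcoin Core code, and the SHORT_IDS table contents here are made up:

```python
# Sketch of the proposed short-id decoding rule. Short id 0 means "long
# command": the first 12 bytes of the payload are a v1-style command (ASCII,
# right-padded with zero bytes). Ids 1-255 alias specific long commands via a
# negotiated table; the SHORT_IDS entries below are illustrative only.

SHORT_IDS = {1: b"addr", 2: b"block", 3: b"inv", 4: b"tx"}

def decode_command(msg: bytes):
    """Return (command, payload), or None for an invalid/unknown command."""
    short_id, rest = msg[0], msg[1:]
    if short_id == 0:
        if len(rest) < 12:
            return None  # not 12 bytes of payload: invalid command, drop it
        raw, payload = rest[:12], rest[12:]
        cmd = raw.rstrip(b"\x00")
        # same validity rule as v1: printable ASCII only, no interior NULs
        if not cmd or not all(32 <= b <= 126 for b in cmd):
            return None
        return cmd, payload
    if short_id in SHORT_IDS:
        return SHORT_IDS[short_id], rest
    return None  # unknown short id

print(decode_command(b"\x00" + b"ping" + b"\x00" * 8 + b"xy"))  # (b'ping', b'xy')
```

Note that the 0-padding means a "long" command always costs the full 12 bytes, exactly as in v1, which is what removes any incentive to pick short command names.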
Cheers,
aj
Re: [bitcoin-dev] Debate: 64 bytes in OP_RETURN VS taproot OP_FALSE OP_IF OP_PUSH
On Sat, Feb 04, 2023 at 07:11:35PM -0500, Russell O'Connor via bitcoin-dev wrote:
> Since bytes in the witness are cheaper than bytes in the script pubkey,
> there is a crossover point in data size where it will simply be cheaper to
> use witness data.

Given today's standardness constraints, that's true (because you first need to construct a p2wsh/tapscript output that commits to the data, then you have to spend it), but it needn't stay that way. Allowing a data carrier entry in the annex (as contemplated for eltoo [0]) would allow you to publish the data with a single transaction, with malleability prevented because the annex content is committed to by the signature.

[0] https://github.com/bitcoin-inquisition/bitcoin/pull/22

I think the cost for publishing data via the witness today is roughly:

 115 vb - for the commitment tx
 115 vb + datalen/4 - for the publication tx

versus

 125 vb + datalen - for a tx with an OP_RETURN output

so the crossover point is at a datalen of about 140 bytes. Perhaps slightly more or less depending on how much you can combine these inputs/outputs with other txs you would have made anyway. With a datacarrier in the annex that has similar or higher limits than OP_RETURN, I don't think OP_RETURN would ever be cheaper.

The other advantage to using the witness for random data compared to OP_RETURN is that the txid commits to the OP_RETURN output, so you must download all OP_RETURN data to validate a block's merkle tree, whereas you can partially validate a block (in particular, you can validate the spendable utxo set) without downloading witness data [1].

[1] https://github.com/bitcoin/bitcoin/pull/27050

Cheers,
aj
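As a quick check of the crossover arithmetic above (the vbyte figures are the email's rough estimates, not exact transaction sizes):

```python
# Rough fee comparison using the approximate sizes from the mail above.

def witness_cost(datalen):
    # commitment tx (~115 vb) plus publication tx (~115 vb + datalen/4)
    return 115 + 115 + datalen / 4

def op_return_cost(datalen):
    # single tx with an OP_RETURN output (~125 vb + datalen)
    return 125 + datalen

for d in (100, 140, 200):
    print(d, witness_cost(d), op_return_cost(d))
# At datalen = 140: 230 + 35 = 265 vb either way -- the crossover point.
```

Below ~140 bytes the OP_RETURN tx is smaller; above it, the witness route wins, and the gap grows since witness data is only a quarter-vbyte per byte.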
[bitcoin-dev] bitcoin-inquistion 24.0
Hi *,

Bitcoin Inquisition 24.0 is tagged, with guix builds available:

 https://github.com/bitcoin-inquisition/bitcoin/releases/tag/inq-v24.0

It includes support for BIP 118 (ANYPREVOUT) and BIP 119 (CHECKTEMPLATEVERIFY) on regtest and signet. The main change since 23.0 is simply the rebase on top of Bitcoin Core 24.0, though the patchsets for both BIPs have been tightened up a little as well.

Additional soft forks or relay policy changes may be proposed by filing a pull request, and work in progress is tracked on a project board:

 https://github.com/bitcoin-inquisition/bitcoin/pulls?q=is%3Apr
 https://github.com/orgs/bitcoin-inquisition/projects/2/views/1

For more background, the 23.0 announcement may be worth reading:

 https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-December/021275.html

Cheers,
aj
Re: [bitcoin-dev] Unenforceable fee obligations in multiparty protocols with Taproot inputs
On Sat, Feb 11, 2023 at 09:40:38AM -0500, Russell O'Connor via bitcoin-dev wrote:
> Yes. If you would otherwise sign the tapleaf, then I would recommend also
> signing the entire tapbranch.

Opened https://github.com/bitcoin-inquisition/bitcoin/issues/19 for this. (I think it's better to call it the path to the leaf, as "tapbranch" seems more to refer to each splitting step in the tree.)

Cheers,
aj
Re: [bitcoin-dev] Unenforceable fee obligations in multiparty protocols with Taproot inputs
On 9 February 2023 12:04:16 am AEST, Russell O'Connor via bitcoin-dev wrote:
>The fix for the bug is to sign the entire tapbranch instead of the tapleaf.
>
>On Wed., Feb. 8, 2023, 04:35 Michael Folkson wrote:
>> Hi Andrew
>>
>> > There is a bug in Taproot that allows the same Tapleaf to be repeated
>> > multiple times in the same Taproot, potentially at different Taplevels
>> > incurring different Tapfee rates.
>> >
>> > The countermeasure is that you should always know the entire Taptree
>> > when interacting with someone's Tapspend.
>>
>> I wouldn't say it is a "bug" unless there is a remedy for the bug that
>> wasn't (and retrospectively should have been) included in the Taproot
>> design. In retrospect, and assuming you could redesign the Taproot consensus
>> rules again today, would you prevent spending from a valid P2TR address if a
>> repeated Tapleaf hash was used to prove that a spending path was embedded
>> in a Taproot tree? That's the only thing I can think of to attempt to
>> remedy this "bug", and it would only be a partial protection, as proving a
>> spending path exists within a Taproot tree only requires a subset of the
>> Tapleaf hashes.
>>
>> I only point this out because there seems to be a push to find "bugs" and
>> "accidental blowups" in the Taproot design currently. No problem with this
>> if there are any; they should definitely be highlighted and discussed if
>> they do exist. The nearest to a possible inferior design decision thus far
>> that I'm aware of is x-only pubkeys in BIP340 [0].
>>
>> Thanks
>> Michael
>>
>> [0]: https://btctranscripts.com/london-bitcoin-devs/2022-08-11-tim-ruffing-musig2/#a-retrospective-look-at-bip340
>>
>> --
>> Michael Folkson
>> Email: michaelfolkson at protonmail.com
>> Keybase: michaelfolkson
>> PGP: 43ED C999 9F85 1D40 EAF4 9835 92D6 0159 214C FEE3
>>
>> --- Original Message ---
>> On Tuesday, February 7th, 2023 at 18:35, Russell O'Connor via bitcoin-dev <bitcoin-dev@lists.linuxfoundation.org> wrote:
>>
>> There is a bug in Taproot that allows the same Tapleaf to be repeated
>> multiple times in the same Taproot, potentially at different Taplevels
>> incurring different Tapfee rates.
>>
>> The countermeasure is that you should always know the entire Taptree when
>> interacting with someone's Tapspend.
>>
>> On Tue, Feb 7, 2023 at 1:10 PM Andrew Poelstra via bitcoin-dev <bitcoin-dev@lists.linuxfoundation.org> wrote:
>>
>>> Some people highlighted some minor problems with my last email:
>>>
>>> On Tue, Feb 07, 2023 at 01:46:22PM +, Andrew Poelstra via bitcoin-dev wrote:
>>> >
>>> > [1] https://bitcoin.sipa.be/miniscript/
>>> > [2] In Taproot, if you want to prevent signatures migrating to another
>>> >     branch or within a branch, you can use the CODESEPARATOR opcode
>>> >     which was redesigned in Taproot for exactly this purpose... we
>>> >     really did think about witness malleation in its design!
>>>
>>> In Taproot the tapleaf hash is always covered by the signature (though
>>> not in some ANYONECANPAY proposals) so you can never migrate signatures
>>> between tapbranches.
>>>
>>> I had thought this was the case, but then I re-confused myself by
>>> reading BIP 341, which has much of the sighash specified, but not
>>> all of it! The tapleaf hash is added in BIP 342.
>>>
>>> > If you want to prevent signatures from moving around *within* a
>>> > branch,
>>>
>>> And this sentence I just meant to delete :)
>>>
>>> --
>>> Andrew Poelstra
>>> Director of Research, Blockstream
>>> Email: apoelstra at wpsoftware.net
>>> Web: https://www.wpsoftware.net/andrew
>>>
>>> The sun is always shining in space
>>> -Justin Lewis-Webster

Is this something that should be fixed in bip118 signatures then?

Cheers,
aj

--
Sent from my phone.
Re: [bitcoin-dev] Purely off-chain coin colouring
On Thu, Feb 02, 2023 at 10:39:21PM -0800, Casey Rodarmor via bitcoin-dev wrote:
> Apologies for posting! I've tried to keep discussion of ordinals and
> inscriptions off-list, because I consider it to be of little relevance to
> general Bitcoin development.

Anything that potentially uses up a large percentage of blockspace seems pretty relevant to general Bitcoin development to me...

> AJ Towns writes:
> > I think, however, that you can move inscriptions entirely off-chain. I
> > wrote a little on this idea on twitter already [1], but after a bit more
> > thought, I think pushing things even further off-chain would be plausible.

I guess I should have explained why I think moving things off-chain is a worthwhile goal. Riffing off:

> Another issue is salience and scarcity, as has been mentioned. Off-chain
> content is unbounded, and thus less scarce. Usually, we design for
> efficiency, volume, and scale. For NFT designs, which are intended to be
> collectable, this is in some ways counterproductive.

"Scarce" has two meanings -- one is that there's not much of it, the other is that it's highly valued (or a third, where it's consistently underpriced and unavailable even for people who'd pay more, but that hopefully doesn't apply). I think for bitcoin's blockspace, we ideally only want the first of these to be true. We want small blocks because that makes it cheap to verify bitcoin, which reduces the need to trust third parties and aids in decentralisation. But we don't want blockspace to be especially valuable, as that makes it expensive to use bitcoin, which then limits who can use it.

Moving things off-chain helps with both these goals: it doesn't make it harder to validate bitcoin, and it also decreases demand for blockspace, making it cheaper for those cases where things can't be moved off-chain.
As a result of this approach, bitcoin blockspace is currently quite cheap -- so inscribing a 100kB jpeg (about 25kvB) might cost perhaps $60 in a peak period, or $6 if you wait for 1sat/vb to confirm. Not exactly a luxury purchase.

If you keep jpegs on-chain, as far as I can see, there's three outcomes:

 * blockspace stays relatively cheap, and there's no "scarcity" benefit to minting via on-chain inscriptions; it's cheap enough to just mint any random meme, and there's no prestige to doing so

 * blockspace becomes filled with jpegs, driving up costs for everyone, making jpeg collectors happy, but transactors sad

 * the amount of blockspace is increased, keeping prices low, and reducing "scarcity" in both senses, so also making it harder to validate bitcoin. no one really wins.

I'd guess the first of these is the most likely, personally.

As far as salience/notability goes, personally, I'd see ownership of inscriptions as a negative indicator; "hey, when I was young and foolish I wasted x-thousand bytes on the bitcoin blockchain, pointlessly creating a permanent cost for everyone trying to use bitcoin in future". That's not unforgivable; people do all sorts of foolish things, and bitcoin's meant to survive attacks, not just foolish pranks. But it doesn't seem like something to brag about or encourage, either, at least if you want bitcoin to be a monetary network that's usable in practice by many/most people.

(Even if one day that goes the other way, and there is real (and transferable) social value in being able to say "I donated x sats to fees to help secure bitcoin", such a claim is more charitable/admirable/valuable with a smaller on-chain footprint, both in that it again keeps validation easier, but also in that it makes it easier for others to also simultaneously make the same charitable contribution.)

> NFT collectors have a strong revealed preference for on-chain content.
> The content of high-value NFTs is often stored partially or completely on
> chain,

When you identify an NFT by a url that points at someone else's server, that's an obvious vulnerability, as Moxie demonstrated pretty well. But solving that by saying "okay, we'll just externalise the storage costs to the public, while privatising all the benefits" isn't a good approach either.

> User protection when off-chain content is involved is fraught.

I mean, that seems trivially solvable? Users already have to store the private key that controls ownership of these digital assets; storing the asset as well, which doesn't need to be private, isn't a big ask. And if a public site like ordinals.net is willing to store all the inscriptions that might be on the blockchain, they could just as easily store the same amount of off-chain digital assets.

> When a user buys an NFT with
> off-chain content, they now have the primary economic incentive to preserve
> that content, so that their NFT retains value and can be enjoyed or sold.

Yes -- the people who potentially benefit from the NFT should be the ones paying the costs of preserving that NFT.

> Many existing NFT marketplaces that sell off-chain content do not explain t
[bitcoin-dev] Purely off-chain coin colouring
Hi *,

Casey Rodarmor's ordinals use the technique of tracking the identity of individual satoshis throughout their lifetime:

On Tue, Feb 22, 2022 at 04:43:52PM -0800, Casey Rodarmor via bitcoin-dev wrote:
> Briefly, newly mined satoshis are sequentially numbered in the order in
> which they are mined. These numbers are called "ordinal numbers" or
> "ordinals". When satoshis are spent in a transaction, the input satoshi
> ordinal numbers are assigned to output satoshis using a simple
> first-in-first-out algorithm.

This is proposed as a BIP at https://github.com/bitcoin/bips/pull/1408

When accompanied by a standard for associating some data or right with such an identity, this allows the creation of non-fungible tokens (or semi-fungible tokens) whose ownership can be transferred by a bitcoin transaction.

The proposed BIP doesn't document any method for associating data or a right with an ordinal, but the "ord" tool defines "inscriptions" to fill this gap [0], providing a way of including mime-encoded data in a taproot witness. To make such an inscription, two transactions are required: one paying some sats to a special scriptPubKey that commits to the inscribed data, and a second that spends those sats to the owner of the newly inscribed ordinal, and in so doing revealing the full inscription.

[0] https://docs.ordinals.com/inscriptions.html

I think, however, that you can move inscriptions entirely off-chain. I wrote a little on this idea on twitter already [1], but after a bit more thought, I think pushing things even further off-chain would be plausible.
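The first-in-first-out assignment quoted above can be sketched quite compactly: concatenate the input sat ranges in input order, then carve them up into the outputs in output order. This is an illustrative sketch only (fees are ignored; the BIP assigns leftover sats to the coinbase):

```python
# Minimal sketch of FIFO ordinal assignment: ranges of sequentially-numbered
# sats flow from inputs to outputs in order.

def assign_ordinals(input_ranges, output_values):
    """input_ranges: list of (start, count) ordinal ranges, in input order.
    output_values: list of output amounts in sats, in output order.
    Returns, for each output, the list of ordinal ranges it receives."""
    sats = list(input_ranges)
    result = []
    for value in output_values:
        ranges = []
        while value > 0:
            start, count = sats.pop(0)
            take = min(count, value)
            ranges.append((start, take))
            if take < count:
                # put the unconsumed tail of this range back at the front
                sats.insert(0, (start + take, count - take))
            value -= take
        result.append(ranges)
    return result

# Two inputs holding sats [1000,1060) and [5000,5040), outputs of 70 and 30:
print(assign_ordinals([(1000, 60), (5000, 40)], [70, 30]))
# [[(1000, 60), (5000, 10)], [(5010, 30)]]
```

So the first output inherits all of the first input's sats plus the start of the second input's, and the second output gets the remainder, exactly the "simple first-in-first-out algorithm" the quoted text describes.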
[1] https://twitter.com/ajtowns/status/1619554871166013441

In particular, rather than looking at it as being the owner of the sats that inscribes some content on those sats (analogously to signing a $100 bill [2]), you could look at it as saying "the owner of this thing is whoever owns this particular sat" (eg instead of "whoever owns this share certificate is a shareholder", it's "whoever owns the $1 bill with serial number X is a shareholder").

[2] https://www.espn.com/nfl/story/_/id/14375536/owner-100-bill-autograph-cleveland-browns-qb-johnny-manziel-getting-offers

Implementing that is fairly straightforward: you just need a protocol for creating an asset offchain and associating it with an ordinal -- nothing needs to happen on-chain at all. That is, you can do something as simple as posting a single nostr message:

 {
   "pubkey": <pubkey>,
   "kind": 0,
   "tags": [ ["ord", "txid:vout:sat"] ],
   "content": [jpeg goes here],
   "id": <id>,
   "sig": <sig>
 }

You can prove current ownership of the message by showing a custody chain, that is, the transaction specified by "txid" in the "ord" tag, then every transaction that spent the given sat, until you get to one that's still in the utxo set [3]. You don't need to provide witness data or validate any of these tx's signatures, as that is already implicit in that you end up at a tx in the utxo set. Just calculating the txids and comparing against the output containing the sat you're interested in is sufficient.

[3] If the satoshi was lost to fees at some point, you could continue to follow ownership by including an entire block in the custody chain. But it seems better to just consider it as "abandoned" or "lost to the public domain" at that point.

This approach allows all the "inscription" data to be entirely off-chain; the only thing that requires a transaction on-chain is transferring ownership to someone else.
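The custody-chain walk described above can be sketched as follows. Everything here is a hypothetical interface for illustration: `spender_of` is assumed to be a precomputed map from a spent outpoint to the outpoint (in the spending tx) that received the tracked sat under the FIFO rule, and `utxo_set` is the set of currently-unspent outpoints. The owner of the final unspent output owns the asset:

```python
# Hypothetical sketch of verifying a custody chain: hop from the outpoint
# named in the "ord" tag through each spending tx until the sat rests in an
# output that is still in the utxo set. No signature validation is needed;
# ending up at a utxo already implies the chain of spends was valid.

def follow_custody(spender_of, utxo_set, outpoint):
    seen = set()
    while outpoint not in utxo_set:
        if outpoint in seen or outpoint not in spender_of:
            raise ValueError("broken or cyclic custody chain")
        seen.add(outpoint)
        outpoint = spender_of[outpoint]
    return outpoint  # current owner is whoever can spend this output

# Toy chain: A:0 was spent into B:1, B:1 into C:0, and C:0 is still unspent.
spender_of = {("A", 0): ("B", 1), ("B", 1): ("C", 0)}
print(follow_custody(spender_of, {("C", 0)}, ("A", 0)))  # ('C', 0)
```

In practice each hop only requires computing a txid and locating the sat among the outputs via the FIFO rule, matching the "just calculating the txids" observation above.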
That allows the NFT's existence to be kept entirely private if desired; it also makes it cheap to create a new NFT (you don't need to pay any on-chain fees at all); and it doesn't impose an outsized overhead on people who aren't interested in your inscriptions, but may be interested either in bitcoin per se, or in other inscriptions.

For things that have real intrinsic value -- equity rights in a company, bragging rights for supporting an artist, etc -- this seems like it's probably a viable approach: owners can "self-custody" all the information about the things they own without having to rely on third parties, transfers are no more censorable than any other bitcoin transaction (especially if the association of the NFT with some particular sat is not widely known), etc.

The "inscription" approach might still be desirable for broadcasting information that might otherwise be subject to heavy censorship; presuming that the censoring entity isn't also willing and able to censor bitcoin itself. It's not clear that there's any "rights" to be owned for such a case -- you can't buy the right to be the person that first published it, and the point of widely broadcasting the information is so it's not a secret only known to a few anymore. Also, claiming ownership of such information would presumably make you a target for the censor, even if just as an example for others. So I'm dubious of the value of associating an inscription with an ordinal for that use case.

It's also possi
Re: [bitcoin-dev] Reference example bech32m address for future segwit versions
On Tue, Jan 31, 2023 at 01:33:13PM -1000, David A. Harding via bitcoin-dev wrote:
> I thought the best practice[1] was that wallets would spend to the output
> indicated by any valid bech32m address.

I think it depends -- if the wallet in question is non-custodial and might not be upgraded by the time witness v2 addresses are in use, then being able to send to such addresses now makes sense. If it's a custodial wallet where the nominal owner of the coins isn't the one signing the tx, then I could see a pretty strong argument for not allowing sending to such addresses until they're in use: (a) nobody will be running the old software, since the custodian can just force everyone to upgrade (eg, by deploying a new version of their own website), and (b) signing a tx to send the bitcoins you're holding on Bob's behalf to an address that will just get them stolen could be considered negligence, and you might end up forced to make Bob whole again.

So maybe the argument is:

 * is this a custodial wallet? then what's the point of testing a scenario that's likely years away -- the custodian will probably have changed their system entirely by then anyway

 * is it a non-custodial wallet? then it's worth testing -- you might not be able to find compatible software in future to move your private keys, and have to dig up the current software and use it. will it still work? but in that case, you ought to be able to capture the tx it generates before broadcasting it, and don't need to publish it on chain, and then it doesn't matter what script you use?

(For libraries and non-wallet software like block explorers or alternate node implementations, it's a different matter.)

Cheers,
aj
Re: [bitcoin-dev] OP_VAULT: a new vault proposal
On Mon, Jan 16, 2023 at 11:47:09PM +, Andrew Chow via bitcoin-dev wrote:
> It seems like this proposal will encourage address reuse for vaults,

(That is listed as an explicit goal: "A single vault scriptPubKey should be able to "receive" multiple deposits".)

> However the current construction makes it impossible to spend these
> vaults together. Since OP_VAULT requires the recovery script of the
> unvault output to match what's provided in the input,

I don't think this part is a big problem -- the recovery path requires revealing a secret, but if you separate that secret from the recovery path sPK, you could vary the secret. ie:

 <unvault1> <delay> <recovery1> VAULT
 <unvault2> <delay> <recovery2> VAULT

where recovery1 = SHA256(SHA256(secret1), rSPK) and recovery2 = SHA256(SHA256(secret2), rSPK), and both are spendable when the top stack element is secretN and the first output pays at least the sum of all the OP_VAULT using inputs to rSPK. So batched recovery could work fine, I think. (If you're using the same "recovery" parameter for each VAULT, then you're revealing which txs are in your vault at spend time, rather than at receive time, which doesn't seem all that much better to me.)

But the problem with this is it prevents you from combining vaults when spending normally: so if you've got a bunch of vaults with 1 BTC each, and want to spend 10 BTC on a house, you'll need to make 11 separate transactions:

 * 10 txs, each spending a single vault utxo, satisfying OP_VAULT via the unvault path, and creating an OP_UNVAULT output

 * 1 tx spending all the OP_UNVAULT outputs to a common set of outputs, with nSequence set to a relative timelock of at least <delay>

Whereas if you use an identical OP_VAULT script for all the utxos in your vault, that can look like:

 * 1 tx, spending all the vault utxos, to a single OP_UNVAULT output, with the same <delay> that all the inputs share.
 * 1 tx spending the OP_UNVAULT output after a delay

But maybe you can get the best of both worlds just by having the unvault path for OP_VAULT require you to put the vout number for its corresponding OP_UNVAULT output on the stack? Then if you're doing address reuse, you use a single vout for multiple inputs; and if you're avoiding address reuse, you use multiple outputs, and provide the mapping between inputs and outputs explicitly.

Cheers,
aj
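The per-vault recovery hashes from the batching discussion above (recoveryN = SHA256(SHA256(secretN), rSPK)) can be sketched as follows. The serialization (plain concatenation of the inner hash and the raw sPK bytes) is an assumption for illustration; the mail doesn't pin down the exact encoding:

```python
# Sketch of per-vault recovery commitments: each vault commits to its own
# secret, but all of them pay out to the same recovery scriptPubKey rSPK, so
# recovery can be batched while secrets stay distinct per vault.
# Hash-input serialization here is an assumption, not a fixed spec.

from hashlib import sha256

def recovery_hash(secret: bytes, recovery_spk: bytes) -> bytes:
    return sha256(sha256(secret).digest() + recovery_spk).digest()

rspk = b"\x00\x20" + b"\x11" * 32  # stand-in 34-byte segwit v0 sPK
r1 = recovery_hash(b"secret-for-vault-1", rspk)
r2 = recovery_hash(b"secret-for-vault-2", rspk)
print(r1 != r2)  # distinct commitments, same recovery sPK
```

Revealing secret1 then only identifies (and freezes) the first vault; the other vaults' commitments stay unlinkable until their own secrets are revealed.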
[bitcoin-dev] SIGHASH_GROUP vs Ephemeral anchors
Hello world,

I think it's interesting to compare SIGHASH_GROUP [0] and ephemeral anchors [1].

[0] https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-July/019243.html
[1] https://github.com/bitcoin/bitcoin/pull/26403

SIGHASH_GROUP is the idea that you provide a way for the inputs of a transaction to divide the outputs of a transaction into non-overlapping groups. So input 1 can specify "I'm starting a group of 3 outputs", input 2 can specify "I'm using the same group as the previous input", input 3 can specify "I'm starting a group of 2 outputs"; and each input can use the SIGHASH_GROUP flag to specify that its signature is signing for the subgroup it has specified, rather than just a single output, or all of them.

The idea behind this is that you can then use a signature to link a set of inputs and outputs in a way that's more general than SIGHASH_ANYONECANPAY (since you can have many inputs attesting to the same subset of outputs), SIGHASH_SINGLE (since you can attest to multiple outputs), and SIGHASH_ALL (since you don't have to attest to all outputs). This means that (eg) you can have a tx closing a lightning channel commit to a dozen outputs that specify where the channel's funds end up, but also be able to add additional inputs to cover fees, and additional outputs to collect the change from those fees.

Ephemeral anchors, by contrast, are really just a relay policy rule: a transaction may create a single 0-value output with an sPK of OP_TRUE (the "ephemeral anchor"), and that tx won't be rejected as creating dust, provided that the tx that introduces the anchor pays 0 fees (so it is not profitable to mine on its own) and that it's relayed as a package with another tx that spends that anchor.
(And there are additional proposed rules beyond those.)

To some degree, this provides an alternative way of getting the same benefits as SIGHASH_GROUP: if you were constructing a transaction consisting of {i1,i2,i3,i4,f} -> {o1,o2,o3,c} with {i1,i2,i3} committing to {o1} and {i4} committing to {o2,o3}, and f providing the fees with c collecting the change, you could instead create three transactions:

 {i1,i2,i3} -> {o1, eph1}
 {i4} -> {o2, o3, eph2}
 {eph1, eph2, f} -> {c}

(where eph1/eph2 are ephemeral anchors) and instead of signing with SIGHASH_GROUP, you'd just sign with SIGHASH_ALL.

(This is similar to the "sponsored transactions" concept [2], where a transaction may "sponsor" another transaction, meaning it cannot be included in a block unless the transaction it sponsors is also included in the block. Given the "txs-may-only-have-one-sponsor" rule, ephemeral anchors could be considered as "you can design a tx that always has a sponsor, or never has a sponsor".)

[2] https://bitcoinops.org/en/newsletters/2020/09/23/#transaction-fee-sponsorship

Ephemeral anchors aren't a complete replacement for SIGHASH_GROUP -- if i1 had two signatures, one signing with SIGHASH_GROUP, but the other signing with SIGHASH_ALL, then it's difficult to duplicate that behaviour exactly with ephemeral anchors. However, it's likely the only benefit to using SIGHASH_ALL there is to reduce malleability risk, and ephemeral anchors probably already achieve that. Additionally, if the value of i1+i2+i3 was less than o1, or i4 was less than o2+o3, then the introduction of f is too late to compensate for that with ephemeral anchors, but would have been fine with SIGHASH_GROUP.
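The SIGHASH_GROUP bookkeeping described at the top of this mail can be sketched as follows. The encoding (an integer N for "I start a group of N outputs", None for "same group as the previous input") is an assumption for illustration; the actual proposal specifies its own encoding:

```python
# Sketch of resolving per-input SIGHASH_GROUP annotations into non-overlapping
# output groups. Annotation encoding is illustrative: an int N starts a group
# of N outputs; None joins the previous input's group.

def resolve_groups(annotations, n_outputs):
    """Return, for each input, the half-open (start, end) output range its
    SIGHASH_GROUP signature commits to."""
    groups, next_out = [], 0
    for ann in annotations:
        if ann is None:
            if not groups:
                raise ValueError("first input cannot join a previous group")
            groups.append(groups[-1])
        else:
            groups.append((next_out, next_out + ann))
            next_out += ann
    if next_out > n_outputs:
        raise ValueError("groups overflow the output list")
    return groups

# input 1 starts a group of 3, input 2 joins it, input 3 starts a group of 2:
print(resolve_groups([3, None, 2], 5))  # [(0, 3), (0, 3), (3, 5)]
```

The non-overlap property falls out of the construction: each new group starts exactly where the previous one ended.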
The detailed proposed rules for ephemeral anchors as they stand are, I think:

> A transaction with one or more CScript([OP_2]) output MUST:
> eph.1) Be 0-fee
> eph.2) Have [the ephemeral anchor output] spent in the same mempool relay
>        package
> eph.3) Be nversion==3
> eph.4) Must not have more than one [such output]

 - https://github.com/bitcoin/bitcoin/pull/26403/commits/431a5e3e0376d8bf55563a0168e79dd73b04a1f8

And implied by "nversion==3":

> v3.1) A V3 transaction can be replaced, [...]
> v3.2) Any descendant of an unconfirmed V3 transaction must also be V3.
> v3.3) A V3 transaction's unconfirmed ancestors must all be V3.
> v3.4) A V3 transaction cannot have more than 1 descendant.
> v3.5) A V3 transaction that has an unconfirmed V3 ancestor cannot be
>       larger than 1000 virtual bytes.
> v3.4b) A V3 transaction cannot have more than 1 ancestor

 - https://github.com/bitcoin/bitcoin/blob/0c089a327a70d16f824b1b4dfd029d260cc43f09/doc/policy/version3_transactions.md

The v3.4b rule unfortunately prevents ephemeral anchors from being used to provide fees for multiple input/output groups in the way I suggest above. That's intended to prevent attaching large ancestors to a package, allowing the descendant to be high fee / low feerate, thus preventing that descendant from both being replaced (due to requiring a higher absolute fee) and mined (due to having a low fee rate). (I suspect the only way to remove that restriction without reinstating the pinning vector is to allow replacements that have a higher
Re: [bitcoin-dev] OP_VAULT: a new vault proposal
On Tue, Jan 10, 2023 at 03:22:54PM -0500, James O'Beirne wrote:
> > I don't think that makes sense? With a general scheme, you'd only be
> > bloating the witness data (perhaps including the witness script) not
> > the scriptPubKey?
> Sorry, sloppy language on my part. To be charitable, I'm talking about
> the "figurative sPK," which of course these days lives in the witness
> for script-path-ish spends. Maybe the witness discount means that
> "complicated" scripts aren't as big a deal, depending on the actual
> difference in raw script size.

Sure. I think there's three aspects that matter for the witness script:

 1) it shouldn't be long/costly to do things that are common and easy: if you can express "OP_VAULT" in ~70 bytes, you shouldn't have to spend 1000 bytes to do so; if it can be cheap to validate, you shouldn't have to pay 100x markup in fees to use it. With the exception of things that build up from basics (like CAT/CHECKSIGFROMSTACK approaches), I think this is mostly fine though.

 2) once someone figures out a design, it should be easy to reuse; but I think that's not a big deal: you just write up a spec for your script, and people use that in their different wallet software, much like the specialised scripts for lightning HTLCs

 3) primitives should be designed to be easy to safely build on, and scripts should be as easy as possible to analyse once written; ie, we want things more like miniscript than "The Story of Mel, a Real Programmer"

With some caveats (like that using the cold wallet xpub to scan the blockchain before you've frozen all your funds is dangerous), OP_VAULT seems really good on all those fronts, of course.

> > I think it might be better to use a pay-to-contract construction for
> > the recovery path, rather than an empty witness.
> So I guess the one advantage that what you're proposing has over just
> using a recovery-path key signature is that it's all derivable from your
> cold privkey; you don't have to worry about accidentally losing the
> recovery-path key.
> Of course you're still vulnerable to spurious sweeps if the
> sha256(secret) value gets found out, which presumably you'd want in an
> accessible cache to avoid touching the cold secret every time you want
> to sweep.

Sure, "sha256(secret)" itself needs to be semi-secret -- it allows anyone who knows it to freeze your funds, even if it doesn't allow anyone to steal them. You could presumably do all the usual things to protect that secret: split it up with secret sharing; put it in a hardware wallet; keep it offline; etc.

> What do you think about the idea of making the recovery-path
> authorization behavior variable on a single byte flag preceding the 32
> byte data push, as I mentioned in another post?

] "if ] is 32 bytes, treat it as it's currently used. If it's
] 33 bytes, use the first byte as a parameter for how to interpret it." To
] start with, an extra prefix byte of 0x00 could mean "require a witness
] satisfying the scriptPubKey that hashes to the remaining 32 bytes" in the
] same way we do the unvault signing.

I don't think 33 bytes would be enough? There isn't really a way to commit to the recovery destination within the script? So I think you'd need "<32 byte recovery-path-hash>"

Aside from that, my opinion's one/all of:

 a) sounds fine

 b) maybe you could just always have it include a scriptPubKey? for the times when you just want "reveal the cold wallet preimage", just have the scriptPubKey be the single byte "OP_TRUE"; for the times when you want it to be "reveal random preimage", you'd have it be the 22 byte "HASH160 <hash> EQUAL"?
c) delegation to a script is a great idea, that's come up multiple times (OP_EVAL, BIP117, graftroot) -- it's probably better to have it available as a generic feature than bolted on to particular features.

> > I think a generic OP_UNVAULT can be used to simulate OP_CTV: replace
> > " OP_CTV" with "<000..0> 0 OP_UNVAULT".
> Yup, that's an inefficient way of emulating CTV.

Sure; I think it's only interesting in establishing how powerful the construct is in the abstract. It's not an exact match for CTV since it hashes some things differently. I don't really think it's necessarily that inefficient fwiw; "0 SHA256 0 UNVAULT" is only 3 more bytes than " CTV", and could give you an unspendable recovery path, provided UNVAULT accepts either a BIP341 tagged hash (which is what the implementation does, by the looks) or a HASH256 for the recovery path. (Again, this assumes UNVAULT is available in script, and isn't just a special scriptPubKey type)

> > I think there's maybe a cleverer way of batching / generalising
> > checking that input/output amounts match.
> > [...]
> > * set C = the sum of each output that has a vault tag with
> >   #recovery=X
> This would also need to take into account that the s are
> compatible, but your point is well taken.

Sure, I guess output/UNVAULT delay >= input/VAULT delay would be sufficient for that.
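The 33-byte flag idea quoted above can be sketched as a tiny parser. This is purely illustrative -- the flag value, return labels and function name are my own, not part of any spec:

```python
def parse_recovery_param(param: bytes):
    """Sketch of the flag-byte idea: a 32-byte push keeps the current
    bare recovery-spk-hash behaviour; a 33-byte push uses its first
    byte to choose how to interpret the remaining 32 bytes (0x00 =
    "require a witness satisfying the script that hashes to this")."""
    if len(param) == 32:
        return ("bare-recovery-hash", param)
    if len(param) == 33 and param[0] == 0x00:
        return ("script-hash-recovery", param[1:])
    raise ValueError("unknown recovery parameter encoding")
```

The point in the text stands, though: a single extra byte tells you how to interpret the hash, but doesn't leave room to also commit to a recovery destination.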
Re: [bitcoin-dev] OP_VAULT: a new vault proposal
On Mon, Jan 09, 2023 at 11:07:54AM -0500, James O'Beirne via bitcoin-dev wrote:
> But I also found proposed "general" covenant schemes to be
> unsuitable for this use. The bloated scriptPubKeys,

I don't think that makes sense? With a general scheme, you'd only be bloating the witness data (perhaps including the witness script), not the scriptPubKey?

Terminology suggestion: instead of calling it the "recovery" path, call it "freezing your funds". Then you're using your "hot wallet" (aka unvault-spk-hash) for the unvault path for all your normal transactions, but if there's a problem, you freeze your funds, and they're now only accessible via your "cold wallet" (aka recovery-spk-hash).

As I understand it, your scheme is:

  scriptPubKey: <#recovery> ( <#unvault>)

where #recovery is the sha256 hash of an arbitrary scriptPubKey, #unvault is the sha256 hash of a witness script, and delay is a relative block count. This scriptPubKey allows for two spend paths:

 recovery: spends directly to ; verified by checking that the hash of the sPK matches <#recovery> and that the amount is preserved

 unvaulting: spends to a scriptPubKey of <#recovery> ( <#target>), verified by checking that the witness script hashes to #unvault and is satisfied, and that #target is a CTV-ish commitment to the eventual withdrawal (any CHECKSIG operations in the unvault witness script will commit to the sPK output, preventing this from being modified by third parties). #recovery and delay must match the values from the vault scriptPubKey.

The unvault scriptPubKey likewise allows for two spend paths:

 recovery: same as above

 withdrawal: verifies that all the outputs hash to #target, and that nSequence is set to a relative timelock of at least delay.

This means that as soon as your recovery address (the preimage to #recovery) is revealed, anyone can move all your funds into cold storage (presuming they're willing to pay the going feerate to do so).
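The checks on those two spend paths can be sketched roughly as follows. This is just an illustration of the scheme as restated above -- the function names, and the assumption that the recovery path preserves the amount exactly, are mine, not the proposal's:

```python
import hashlib

def sha256(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def check_recovery(out_spk: bytes, out_amount: int, in_amount: int,
                   recovery_hash: bytes) -> bool:
    # recovery path: the output sPK must hash to #recovery, and the
    # vaulted amount must be preserved
    return sha256(out_spk) == recovery_hash and out_amount == in_amount

def check_withdrawal(outputs_hash: bytes, target_hash: bytes,
                     nsequence_delay: int, vault_delay: int) -> bool:
    # withdrawal path: all outputs must hash to #target, and the
    # input's relative timelock must be at least the vault's delay
    return outputs_hash == target_hash and nsequence_delay >= vault_delay
```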
I think this is a feature, not a bug, though: if your hot wallet is compromised, moving all your funds to cold storage is desirable; and if you want to have different hot wallets with a single cold wallet, then you can use an HD cold wallet, so that revealing one address corresponding to one hot wallet doesn't reveal the addresses corresponding to other hot wallets. (This is addressed in the "Denial-of-service protection" section)

It does however mean that the public key for your cold wallet needs to be handled secretly -- if you take the cold wallet xpub and send it to a random electrum server to check your cold wallet balance, that would allow a malicious party to lock up all your funds.

I think it might be better to use a pay-to-contract construction for the recovery path, rather than an empty witness. That is, take your recovery address R, and calculate #recovery=sha256(R, sha256(secret)) (where "secret" is something derived from R's private key, so that it can be easily recovered if you only have your cold wallet and lose all your metadata). When you want to recover all your funds to address R, you reveal sha256(secret) in the witness data and R in the scriptPubKey; OP_VAULT hashes these together and checks the result matches #recovery, and only then allows the spend. That would allow you to treat R as public knowledge, without risking your funds getting randomly frozen.

This construct allows delayed withdrawals (ie "the cold wallet can withdraw instantly, the hot wallet can withdraw only after delay blocks"), but I don't think it provides any way to cap withdrawals ("the cold wallet can withdraw all funds, the hot wallet can only withdraw up to X funds per day/week"). Having a fixed limit probably isn't compatible with having a multi-utxo vault ("you can withdraw $10k per day" doesn't help if your $5M is split across 500 $10k utxos, and the limit is only enforced per-utxo), but I think a percentage limit would be.
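The pay-to-contract construction is simple enough to sketch directly. Hashing details are an assumption here -- eg whether the commitment is a plain concatenation, as below, or a tagged hash:

```python
import hashlib

def sha256(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def recovery_commitment(recovery_spk: bytes, secret: bytes) -> bytes:
    # #recovery = sha256(R, sha256(secret)): commits to the recovery
    # sPK without revealing it; sha256(secret) gates the recovery spend
    return sha256(recovery_spk + sha256(secret))

def check_recovery_reveal(revealed_spk: bytes, revealed_hash: bytes,
                          commitment: bytes) -> bool:
    # the witness reveals sha256(secret), the output reveals R;
    # recombine them and compare against the committed #recovery
    return sha256(revealed_spk + revealed_hash) == commitment
```

With this shape, R can be public: knowing R alone doesn't let anyone construct the recovery witness, though anyone who learns sha256(secret) can still freeze (not steal) the funds, as discussed in the reply above.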
I think a generic OP_UNVAULT can be used to simulate OP_CTV: replace " OP_CTV" with "<000..0> 0 OP_UNVAULT". The paper seems to put "OP_UNVAULT" first, but the code seems to expect it to come last; not sure what's up with that inconsistency.

I'm not sure why you'd want a generic opcode though; if you want the data to be visible in the scriptPubKey, you need to use a new segwit version with structured data anyway, so why not just do that?

I think there's maybe a cleverer way of batching / generalising checking that input/output amounts match. That is, rather than just checking that "the input's a vault, so the corresponding output must be one of these possibilities, and the input/output values must exactly match", it's generalised to be:

* set A = the sum of each input that's taking the unvaulting path from a vault scriptPubKey with #recovery=X
* set B = the sum of each output that has an unvault tag with #recovery=X
* set C = the sum of each output that has a vault tag with #recovery=X
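That generalised check might look something like the following sketch. The concluding balance rule -- per recovery hash X, A <= B + C -- is my assumption about how the generalisation would be closed off, not something specified in the proposal:

```python
from collections import defaultdict

def check_vault_balances(inputs, outputs) -> bool:
    # inputs:  (kind, recovery_hash, amount), kind "vault-unvaulting"
    # outputs: (kind, recovery_hash, amount), kind "unvault" or "vault"
    # Per #recovery=X, require A (unvaulting inputs) <= B + C
    # (unvault outputs plus re-vaulted outputs). Sketch only.
    a = defaultdict(int)
    bc = defaultdict(int)
    for kind, rh, amt in inputs:
        if kind == "vault-unvaulting":
            a[rh] += amt
    for kind, rh, amt in outputs:
        if kind in ("unvault", "vault"):
            bc[rh] += amt
    return all(a[rh] <= bc[rh] for rh in a)
```

The appeal of batching like this is that many vault inputs sharing a recovery path can be unvaulted or re-vaulted in a single transaction, rather than pairing each input with one exactly-matching output.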
Re: [bitcoin-dev] Refreshed BIP324
On Fri, Jan 06, 2023 at 09:12:50AM +1000, Anthony Towns via bitcoin-dev wrote:
> On Thu, Jan 05, 2023 at 10:06:29PM +0000, Pieter Wuille via bitcoin-dev wrote:
> > Oh, yes. I meant this as an encoding scheme, not as a (replacement for) the
> > negotiation/coordination mechanism. There could still be an initial
> > assignment for 1-byte encodings, and/or an explicit mechanism to negotiate
> > other assignment, and/or nothing at all for now.
> The current implementation for 324 does the aliasing
> as part of V2TransportDeserializer::GetMessage and
> V2TransportSerializer::prepareForTransport. That makes a lot of sense,
> [...]

So I think you can make this setup work with a negotiated assignment of shortids, perhaps starting off something like:

https://github.com/ajtowns/bitcoin/commit/6b8edd754bdcb582e293e4f5d0b41297711bdbb7

That has a 242-element array per peer giving the mappings (which is just ~250 bytes per peer) for deserialization, which seems workable. [0]

It also has a single global map for serialization, so we'll always shorten CFILTER to shortid 39 for every peer that supports shortids, even, eg, for a peer who's told us they'll send CFILTER as shortid 99 and that we should interpret shortid 39 from them as NEWFEATUREX. That has three advantages:

* each peer can choose a mapping that minimises their own outbound traffic, even potentially for asymmetric connections, and they don't need to coordinate with the other peer to decide a common optimal mapping that they both use across their connection

* you don't have to have different serialization tables per-peer, reducing memory usage / implementation complexity

* you can leave V2TransportSerializer as a `const` object, and not have to introduce additional locking logic to be able to update its state...
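The shape of that arrangement -- one global encode table, one small decode table per peer -- can be sketched like this (illustrative only, not the linked commit's actual code; the ids and message names are made up):

```python
# Global table used when serializing to any shortid-capable peer;
# we always use our own mapping for outbound messages, regardless of
# what mappings the peer chose for theirs.
GLOBAL_ENCODE = {"CFILTER": 39, "HEADERS": 12}

class PeerShortIds:
    """Per-peer decode state: up to 242 one-byte ids -> message type,
    ie roughly 250 bytes of state per peer."""
    def __init__(self):
        self.decode_table = [None] * 242

    def set_mapping(self, shortid: int, msgtype: str):
        # peer announced "when I send byte X, I mean command Y"
        self.decode_table[shortid] = msgtype

    def decode(self, shortid: int):
        return self.decode_table[shortid]

def encode(msgtype: str):
    # fall back to the long-form command if we have no shortid for it
    return GLOBAL_ENCODE.get(msgtype, msgtype)
```

Keeping the encode side global is what lets the serializer stay immutable and shared, per the advantages listed above.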
I'm not seeing a good way to introduce shortids for future one-shot negotiation messages though (like VERSION, VERACK, SENDADDRV2, WTXIDRELAY, SENDTXRCNCL):

* if you explicitly announce the mapping first, you're just wasting bytes ("99=FOOBAR; 99 baz quux" vs just "FOOBAR baz quux")

* if you negotiate the tables you support between VERSION/VERACK and then choose a mutually supported table after VERACK, that's too late for pre-VERACK negotiation messages

* announcing the tables you support as part of the VERSION message would work, but seems a bit klunky

Also, if you did want to shift to a new table, you'd probably want to always support sending/receiving {37, 44, 46, 47, 36} messages?

I guess I still kind-of think it'd make more sense to just reserve shortids for post-VERACK messages that are going to be sent more than once per connection... At that point, even if you don't have any table in common with your peer, just following VERACK with an immediate announcement of each shortid you want to use and its meaning would still make reasonable sense.

If we included the ability to define your own shortids concurrently with bip324 rollout, then I think nodes could always have a static set of shortids they use for all their peers for outbound messages, which, as above, seems like it would make for simpler implementations. ie, you might send:

  VERSION
  SHORTIDTBLS ["","awesomeshortids"]
  WTXIDRELAY
  SENDADDRV2
  SENDPACKAGES 1
  VERACK
  SHORTID "" [(52,"getpkgtxns"), (53, "pkgtxns"), (54, "ancpkginfo")]

...but you'd do all that long form, and only switch to shortids for messages after you've declared exactly what your shortids are going to be.
(where "" is the table name for bip324's table, and "awesomeshortids" is an updated table that includes the package relay commands already, perhaps)

Cheers,
aj

[0] m_deserializer is used from the SocketHandler thread in CNode::ReceiveMsgBytes(), but the p2p protocol is managed from the MessageHandler thread, with multiple messages potentially deserialized into vRecvMsg() at once -- which means that if the first message redefines shortid decoding, and the second message uses one of the redefined shortids, it will have already been decoded incorrectly. So that would need some futzing about still.

___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
Re: [bitcoin-dev] Refreshed BIP324
On Thu, Jan 05, 2023 at 10:06:29PM +0000, Pieter Wuille via bitcoin-dev wrote:
> > > So this gives a uniform space which commands can be assigned from, and
> > > there is no strict need for thinking of the short-binary and
> > > long-alphabetic commands as distinct. In v2, some short ones would be
> > > treated as aliases for old long-alphabetic ones. But new commands could
> > > also just be introduced as short ones only (even in v1).
> Oh, yes. I meant this as an encoding scheme, not as a (replacement for) the
> negotiation/coordination mechanism. There could still be an initial
> assignment for 1-byte encodings, and/or an explicit mechanism to negotiate
> other assignment, and/or nothing at all for now.
>
> I just thought it would be interesting to have a uniform encoding without
> explicit distinction between "short commands" and "long commands" at that
> layer.
> But maybe none of this is worth it, as it's perhaps more complexity than the
> alternative, and the alternative already has a working implementation and
> written-up specification.

Heh, I was just looking at this yesterday, but failing to quite reach a conclusion.

One thing I hadn't realised about this was that it's not actually a restriction compared to what we currently allow with p2p v1: CMessageHeader::IsCommandValid() already rejects commands that use characters outside of 0x20 to 0x7E, so the high bit is already available for signalling when we reach the last byte.

The current implementation for 324 does the aliasing as part of V2TransportDeserializer::GetMessage and V2TransportSerializer::prepareForTransport. That makes a lot of sense, but particularly if we were to negotiate short commands sometime around VERSION or VERACK, it might make more sense for the aliasing to move up to the protocol layer rather than have it close to the wire layer.
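That high-bit observation is easy to sketch (my illustration of the idea, not BIP324's actual wire encoding): since every legal v1 command character is in 0x20..0x7E, bit 7 of each byte is free, and setting it on the final byte marks end-of-command:

```python
def encode_command(cmd: str) -> bytes:
    # all chars must be in the v1-legal 0x20..0x7E range; set the high
    # bit on the last byte to mark the end of the command
    b = cmd.encode("ascii")
    assert b and all(0x20 <= x <= 0x7E for x in b)
    return b[:-1] + bytes([b[-1] | 0x80])

def decode_command(data: bytes):
    # consume bytes until one with the high bit set; return the
    # command plus any remaining payload bytes
    cmd = bytearray()
    for i, x in enumerate(data):
        cmd.append(x & 0x7F)
        if x & 0x80:
            return cmd.decode("ascii"), data[i + 1:]
    raise ValueError("unterminated command")
```

With this framing an L-character alphabetic command still costs exactly L bytes, which is what makes the "uniform space" framing work without a separate length field.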
In that case having a uniform encoding means we could just keep using CSerializedNetMsg whether we're sending a short command or a multibyte ascii command -- without a uniform encoding, if we wanted to move short commands up a layer, I think we'd need to change CSerializedNetMsg to have m_type be a `std::variant` instead of just a string, or something similar.

I think I'm leaning towards "it doesn't matter either way" though:

* if we can negotiate short commands on a per-peer basis, then once negotiation's finished we'll only be using short commands, so saving a byte on long commands doesn't matter much

* if we've only got around 30 or 40 commands we understand anyway (even counting one-time-only negotiation stuff), then it doesn't matter whether we can do 102, 126 or 242 short commands, since those are all more than we need

* whether we'd have to tweak an internal struct if we want to change the way our code is structured shouldn't really be much of an influence on protocol design...

Cheers,
aj
Re: [bitcoin-dev] [Opt-in full-RBF] Zero-conf apps in immediate danger (angus)
On Fri, Dec 09, 2022 at 03:58:37PM +0000, angus via bitcoin-dev wrote:
> Those in favour of Full RBF see trusting and relying on predictable
> mempool policy as a fundamentally flawed bad idea.

I don't believe that claim is true, at least in general: the motivation for the -mempoolfullrbf PR was to make mempool policy behave in a way that was (believed to be) more reliable and predictable than the current behaviour.

In particular, if you can't rely on predictable relay/mempool policy, you can't build "fraud proof" protocols on top of the blockchain: whether that be like lightning today, which relies on people being able to get a penalty transaction mined in a reasonable amount of time; or lightning in the future, which in an eltoo world relies on getting an update transaction mined in a similar amount of time; or optimistic rollups that offer the ability for people to challenge with a fraud proof.

I think the basis for the fullrbf vision is more that fullrbf advocates think miners will always want to optimise fees in the short term: that is, given two conflicting transactions H and L, if including H in the block gives a higher total reward for that block than including L, they will always want to include H. Presuming that is a common attitude amongst miners, fullrbf is a natural outcome: those miners will advertise how to connect to their nodes, and anyone who prefers H over L will send H to them directly, and H will be mined and L will not be.

I think it's fair to say that's what people mean when they talk about "incentive compatible" -- miners want to see the highest fee alternative of a transaction, and are "incentivised" by fees to do so, so relaying that transaction is "incentive compatible" while relaying lower fee alternatives is "incentive incompatible".
That can be a stable outcome too: if it's common to have multiple transactions like that, then the pools that take the higher fee transactions will give higher payouts per hashrate, and owners of mining hardware will switch to those pools, so that the amount of hashrate accepting replacements will tend towards 100%. That scenario is already the case for opt-in RBF.

However, expecting pools/miners to optimise fees in the short term is an assumption, not the economic fact of life that some seem to assume it is. It's also possible that owners of ASICs or pool operators will decide that they're in this for the long term, and therefore that it's smarter to look at fee income over multiple blocks, rather than taking each block as its own entity. Similar to treating the prisoner's dilemma as a one-off game (where the dominant strategy is to always defect) versus an iterated game (where cooperation or tit-for-tat may be better strategies).

In particular, the outcome of fullrbf might not simply be the rosy scenario:

 + there are just as many txs paying similar fees as there are now, except that
 + it's easy for people to cancel mistakes, and
 + people stop complaining to wallet authors when their opt-in rbf tx takes longer to confirm than they expected

but might instead be either:

 + everyone using BTC for payments switches to lightning, causing
 - on-chain traffic to drop and fee income to drop with it

or:

 - everyone paying for goods/services online with cryptocurrency switches to stablecoins or ETH or Liquid or RSK,
 - bitcoin traffic and tx fees drop substantially as a result,
 - and bitcoin price drops too as people switch to hodling their hot wallet balances in stablecoins and ETH

which pool operators or hashrate owners might consider to not be in their best interests.

Sergej's numbers at https://github.com/bitcoin/bitcoin/pull/26525#issuecomment-1332823282 suggest bitrefill's zeroconf txs alone account for something like 0.5% of on-chain txs.
I'm not really sure how to interpret the numbers Daniel Lipshitz/Max Krupyshev reported; "700+ inputs a month" doesn't sound like very many, but "300k incoming transactions" would be 35 years worth of 700 inputs per month, so something doesn't add up... The gap600 webpage from 2018 cites 3 million Bitcoin txs over about 13 months, which would be about 230k/month, which would be roughly 3% of on-chain txs at the moment.

It's not clear to me what that adds up to; is reducing tx volume by perhaps 5% or 10% a big deal? Given fee income is maybe 2% of the block reward at the best of times, reducing it by 5% (to 1.9%) probably doesn't matter directly, but then, nor would increasing it by 5%. If there's a negative effect on demand for bitcoin because it becomes slower and less widely accepted, driving its price down, though, that probably dominates. Is that likely to be significant? No idea. Is there some countervailing effect where mempoolfullrbf would make bitcoin more desirable, increasing demand and hence increasing its price? I can't see one.

(The original reasoning for the mempoolfullrbf option was that it would allow new use cases, namely collaborative funding of coinjoins or
[bitcoin-dev] bitcoin-inquistion 23.0: evaluating soft forks on signet
Hi *,

Bitcoin Inquisition 23.0 is tagged:

https://github.com/bitcoin-inquisition/bitcoin/releases/tag/inq-v23.0

It includes support for BIP 118 (ANYPREVOUT) and BIP 119 (CHECKTEMPLATEVERIFY) on regtest and signet. As previously discussed, the hope is that this will allow more experimentation and building a greater understanding of the risks, benefits, and tradeoffs of proposals like BIP 118 and BIP 119.

For an initial trial period, we've switched to mining 100% of blocks on the default global signet using this patchset. However, should a problem occur (eg the node software crashing, or some unintended hard fork being triggered), the signet miners are configured to automatically fall back to using bitcoin core nodes to ensure that signet continues to be available.

In order to more reliably relay transactions using the new soft forks, you may wish to manually connect to a node that supports these features; you can do so by specifying:

  addnode=inquisition.bitcoin-signet.net
  addnode=phfrpeh47vpjvoi2dgpngfk6ynl7vbnxwekwtcpg3zancixnnjvq.b32.i2p

If you are trying to do experiments with signet and would like a larger budget than the various faucets will give you, please join the #bitcoin-signet IRC channel on Libera, and let us know. That applies whether or not you're making use of inquisition-y features!

If you wish to enable these soft forks on a custom signet, you should mine a block with version 0x60007600 (BIP 118) and/or version 0x60007700 (BIP 119), then monitor activation using `bitcoin-cli getdeploymentinfo`.

The inquisition node software should also correctly validate/relay CTV transactions on the existing ctv signet (ctvsignet.com).
As one simple bit of experimentation, block rewards are currently being sent to the address tb1pwzv7fv35yl7ypwj8w7al2t8apd6yf4568cs772qjwper74xqc99sk8x7tk This is a taproot address with an ANYPREVOUT script path ("OP_1 OP_CHECKSIG"), which has an example spend splitting 1000 sBTC into 900 sBTC for Kalle's mining wallet and 100 sBTC into mine: https://mempool.space/signet/tx/2ba88276dee53abdff23258b7f5b8d41005c69f03dc9a5bb9d5cb7b7f41f3e45 Because this transaction was signed with an ANYPREVOUTANYSCRIPT|ALL signature, that signature can be replayed with other utxos of that pubkey, eg in the following transaction: https://mempool.space/signet/tx/ef8b3351def1163da97f51b8d2cba53c9671dfbd69ae4b1278506b9282bfbdea That transaction was generated by setting up a watchonly wallet to monitor that address: $ bitcoin-cli -signet createwallet testapo true false '' false true true $ bitcoin-cli -signet -rpcwallet=testapo importdescriptors '[{"desc":"addr(tb1pwzv7fv35yl7ypwj8w7al2t8apd6yf4568cs772qjwper74xqc99sk8x7tk)#30t3uj6k", "active":false, "timestamp":1670803200}]' then manually putting together sufficient inputs to fund the required signed outputs (with the excess going to fees, and hence back to the original address since that's where mining payouts are being sent) and adding the same witness data once for each input: $ (X=20; printf "0201%02x" $X; bitcoin-cli -signet -rpcwallet=testapo listunspent | jq -j '.[] | (.txid, " ", .vout, "\n")' | head -n$X | while read txid vout; do rtxid=$(echo $txid | sed 's/../& /g' | tr ' ' '\n' | tac | tr -d '\n'); printf "%s%02x%06x00" "$rtxid" "$vout"; done; printf "0200046bf4140016001481113cad52683679a83e76f76f84a4cfe36f750100e40b5402001600141b94e6a88e7bfb3a552d6888b102c4f615dc2f56"; for a in `seq 1 $X`; do printf "034189d888393f0c46872fbd002b3523cf58dd474ab86014096bdf69e5248cc06cd6f4b5a223053eb97a708b47ed1d25ad26be7f197536af86ad3389cb1d53a0e643c10251ac21c0624aa2e3277b1f5d667c5acc0ec58eccad8c8be7c7815e122d2b65127f8b0e28"; done; echo "" ) | 
sed 's/^/["/;s/$/"]/' | bitcoin-cli -stdin -signet testmempoolaccept That should be something anyone running an inquisition client can duplicate (err, if you can handle the awesomeness of my shell one-liners); though doing more than just testmempoolaccept will be naturally rate limited as it spends 20 blocks worth of reward. There shouldn't be any way of turning it into a profit, or much of a denial-of-service attack, but if you find one, that's probably new information about BIP 118! (One thing you could do is use it as a way of creating an excessively high CPFP feerate, though without package relay, and on signet, that's probably not terribly useful) Finally, yes this is based on Bitcoin Core version 23.0 when 24.0.1 has just been released. The plan in general is to keep inquisition focussed on released versions of core to minimise rebasing, and more concretely, to start forward porting the current patches to that now that it has been released, and possibly consider including support for additional BIPs (ideas so far include: defining annex interpretation and package rbf). Discussion of those ideas welcome (on list, on the repo, or #bitcoin-signet or #bitco
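For what it's worth, the "rtxid" step in that pipeline (the sed|tac dance) is just undoing txid display endianness: txids are displayed big-endian but serialized little-endian in transaction inputs. A python equivalent of that one step, for anyone reimplementing the experiment less heroically:

```python
def txid_to_wire(txid_hex: str) -> bytes:
    # txids are displayed big-endian, but serialized little-endian in
    # a tx input's outpoint, so reverse the byte order
    return bytes.fromhex(txid_hex)[::-1]
```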
Re: [bitcoin-dev] Announcement: Full-RBF Miner Bounty
On Fri, Dec 09, 2022 at 05:04:05PM +0100, 0xB10C via bitcoin-dev wrote:
> For further monitoring, I've set-up a mempoolfullrbf=1 node and are
> logging replacement events with [0]. I filter the full-RBF replacements
> and list the replaced and replacement transactions here:
> https://fullrbf.mempool.observer/

I think it would be helpful to list (a) how long ago the replaced transaction was seen compared to the [full RBF event] timestamp, and (b) particularly for cases where the replaced tx was the one that was mined, how long after the [full RBF event] the block was received.

Might be more hassle than it's worth, but you might consider grouping all related replacements together, so that you can see:

2b771b7949e62434bf3002ad0d38f383a29b362cf2dc31627a35a883d0de36c3 [2601.19 sat/vb]
9ce405ab14e8d68c7a43d190414e39564d90bbee21f23020f2ce45136927ce9b [2448.25 sat/vb]
1a5f239e7fc008f337605e0b405579234d0fecebaf6be966000dcfaf0bcb7beb [2295.31 sat/vb]
0500a3ca5e4998fb273be9b51a4c3a75780acf28b23409a54f4260e069441e32 [2142.37 sat/vb]
955623c789eb0a7ca0848ce964325d6b2c7d1731a686d573018f6348de6c00a1 [1989.42 sat/vb]
7838bc60405f9b38a79c94f4f045b8debaf41c0a5acfdeebc6a3904188b2bbc9 [1836.48 sat/vb]
2d5e6b84602d45c5c834e0cad4db4dd8a9142ba7eff6bacdb34a97e5cfacb418 [1683.54 sat/vb]
130851951a1d9270776852491bda3e17bb08b9309e5b76b206633f88a9341604 [1530.60 sat/vb]
3c9b2530c02a22c966fa9ef60ec0acf17bd23a8b0b4c5202589c190ee770c880 [1377.66 sat/vb]
49889043ec7dae7a4f1573c5feaca6a549d88e4fb306cf3b410155ba919da83d [1224.72 sat/vb]
861156e18ae0399cd458c6f7f7faed1a94142db45f1d879b9ae78cb11cd7e96c [1071.78 sat/vb]
961bf21f1fc35edacd4929e8db67b27a52c69a593d32aab5a024757503c0490b [ 918.84 sat/vb]
5cdb24e2ed30dfc55b23820676a9d47d158fec982e77dddadc648280f0b2c914 [ 765.90 sat/vb]
159494115af33b414df77d3965de5eb191b4a3af1c1219509f3175abc5dcd132 [ 612.95 sat/vb]
dcf4c0e688ee76188e9ef829d03cc66ee7da404717b9d56d74bcd75708612271 [ 460.01 sat/vb]
1971d9122551a898bcbc5183d4ea79e273dea89aa4075d4e014f8f7dd4eb8321 [ 307.07 sat/vb]
c43316d2e4bb12fbb408de01d64c4b480fd7d6875db4ebbf006fdc7579546d13 [ 154.13 sat/vb]
8bbf6a0f3dd8358c6d8a81f0811216d7980288b14daeac82ff2f672ea8e4851d [   1.19 sat/vb] [mined in 767044]

at a glance as a single block (ideally with timestamps). Arguably, that might be interesting to see in general (ie for bip-125 opt-in rbf txs as well), to see how common the "offer a low fee, and raise it over time" approach is in practice.

Cheers,
aj
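Grouping related replacements like that amounts to chaining each replaced txid to its replacement. A rough sketch, assuming the replacement log yields (replacement, replaced) txid pairs and at most one replacement per replaced tx (both assumptions mine):

```python
def chain_replacements(events):
    # events: iterable of (replacement_txid, replaced_txid) pairs
    nxt = {old: new for new, old in events}   # replaced -> replacement
    news = {new for new, _ in events}
    chains = []
    for root in nxt:
        if root in news:
            continue  # this tx was itself a replacement, not a chain start
        chain, cur = [root], root
        while cur in nxt:
            cur = nxt[cur]
            chain.append(cur)
        chains.append(chain)
    return chains
```

Each resulting chain is one "offer a low fee, raise it over time" sequence, which could then be displayed as a single block with feerates and timestamps.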
Re: [bitcoin-dev] Refreshed BIP324
On Sat, Nov 12, 2022 at 03:23:16AM +0000, Pieter Wuille via bitcoin-dev wrote:
> Another idea...
> This means:
> * Every alphabetic command of L characters becomes L bytes.
> * 102 non-alphabetic 1-byte commands can be assigned.
> * 15708 non-alphabetic 2-byte commands can be assigned.

(There are 128**L possible L-byte commands, 26**L alphabetic L-byte commands, and hence 128**L-26**L non-alphabetic L-byte commands)

> * etc
> So this gives a uniform space which commands can be assigned from, and there
> is no strict need for thinking of the short-binary and long-alphabetic
> commands as distinct. In v2, some short ones would be treated as aliases for
> old long-alphabetic ones. But new commands could also just be introduced as
> short ones only (even in v1).

Isn't that optimising for the wrong thing? Aren't the goals we want:

1) it should be easy to come up with a message identifier without accidentally conflicting with someone else's proposal

2) commonly used messages on the wire should have a short encoding in order to save bandwidth

Depending on how much the p2p protocol ossifies, which messages are "commonly used on the wire" might be expected to change; and picking an otherwise meaningless value from a set of 102 elements seems likely to produce conflicts... Doing:

  -> VERSION
  <- VERSION
  <- SHORTMSG ["BIP324", "foo.org/experiment"]
  <- VERACK
  -> SHORTMSG ["BIP324"]
  -> VERACK
  -> SENDSHORTMSG "BIP324" [(13, "ADDRV3")]
  <- SENDSHORTMSG "BIP324"
  -> 34 (SENDHEADERS)
  -> 25 (GETHEADERS)
  ...

where "SHORTMSG" lets you specify an array of tables you support (such as BIP324's), and, once you know both sides support a table, you can send `SENDSHORTMSG` to choose the table you'll use to abbreviate messages you send, as well as any modifications to that table (like replacing ADDR with ADDRV3). Then when writing BIPs, you just choose a meaningful string ("ADDRV3"), and when implementing you send a one-time "SENDSHORTMSG" message to optimise the messages you'll send most often...
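The counts in the quoted bullets follow directly from that parenthetical; as a quick sanity check (plain python, nothing bitcoin-specific):

```python
def non_alphabetic_count(length: int) -> int:
    # 128**L possible L-byte commands, minus the 26**L purely
    # alphabetic ones that remain reserved for legacy-style names
    return 128 ** length - 26 ** length

# matches the figures in Pieter's bullets:
# 102 one-byte and 15708 two-byte non-alphabetic commands
```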
As time goes on and the most common messages change, issue a new BIP with a new table so that your one-time SENDSHORTMSG message becomes shorter too, at least when you connect to peers that also know about the new bip.

Could potentially support that without bip324, by taking over the one-byte message space, presuming you don't have any one-character messages you actually want to send?

Could do this /as well as/ the encoding above, I guess; then you would have bips specify alphabetic message ids, and use SENDSHORTMSG to dynamically (and sequentially) override/allocate non-alphabetic ids. That would presumably limit the number of non-alphabetic ids to how many you could specify in a single SENDSHORTMSG message, which I think would be up to something like 749k different 1/2/3/4/5/6-byte alphabetic message ids (encoded as 1/2/3-byte non-alphabetic messages). Probably some more specific limit would be better though.

Cheers,
aj
Re: [bitcoin-dev] Announcement: Full-RBF Miner Bounty
On Tue, Nov 08, 2022 at 01:16:13PM -0500, Peter Todd via bitcoin-dev wrote:
> FYI I've gotten a few hundred dollars worth of donations to this effort, and
> have raised the reward to about 0.02 BTC, or $400 USD at current prices.

Seems like this has been mostly claimed (0.014btc / $235, 9238sat/vb):

https://mempool.space/tx/397dcbe4e95ec40616e3dfc4ff8ffa158d2e72020b7d11fc2be29d934d69138c

The block it was claimed in seems to have been about an hour after the default mempool filled up:

https://twitter.com/murchandamus/status/1592274621977477120

That block actually seems to have included two alice.btc.calendar.opentimestamps.org txs, the other paying $7.88 (309sat/vb):

https://mempool.space/tx/ba9670109a6551458d5e1e23600c7bf2dc094894abdf59fe7aa020ccfead07cf

Timeline (utc) to me looks like:

- 13:12 - block 763148 is mined: last one that had a min fee < 1.5sat/vb
- 13:33 - f503868c64d454c472859b793f3ee7cdc8f519c64f8b1748d8040cd8ce6dc6e1 is announced and propagates widely (1.2sat/vb)
- 18:42 - 746daab9bcc331be313818658b4a502bb4f3370a691fd90015fabcd7759e0944 is announced and propagates widely (1.2sat/vb)
- 21:52 - ba967010 tx is announced and propagates widely, since conflicting tx 746daab9 has been removed from default mempools
- 21:53 - murch tweets about default mempool filling up
- 22:03 - 397dcbe4 tx is announced and propagates widely, since conflicting tx f503868 has already been removed from default mempools
- 22:35 - block 763189 is mined
- 22:39 - block 763190 is mined
- 23:11 - block 763191 is mined
- 23:17 - block 763192 is mined, including 397dcbe4

miningpool.observer reports both 397dcbe4 and ba967010 as missing in the first three blocks, and gives similar mempool ages for those txs to what my logs report:

https://miningpool.observer/template-and-block/000436aba59d8430061e0e50592215f7f263bfb1073ccac7
https://miningpool.observer/template-and-block/0005600404792bacfd8a164d2fe9843766afb2bfbd937309
https://miningpool.observer/template-and-block/0004a3073f58c9eae40f251ea7aeaeac870daeac4b238fd1

That presumably means those pools (AntPool twice and "unknown") are running with large mempools that didn't keep the earlier 1.2sat/vb txs. The txs were mined by Foundry:

https://miningpool.observer/template-and-block/0001382a226aedac822de80309cca2bf1253b35d4f8144f5

This seems to be pretty good evidence that we currently don't have any significant hashrate mining with fullrbf policies (<0.5% if there was a high fee replacement available prior to every block having been mined), despite the bounty having been collected.

Cheers,
aj
Re: [bitcoin-dev] Refreshed BIP324
On Wed, Oct 26, 2022 at 04:39:02PM +0000, Pieter Wuille via bitcoin-dev wrote:
> However, it obviously raises the question of how the mapping table between the
> 1-byte IDs and the commands they represent should be maintained:
>
> 1. The most straightforward solution is using the BIP process as-is: let BIP324
>    introduce a fixed initial table, and future BIPs which introduce new
>    messages can introduce new mapping entries for it.
> [...]
> 3. Yet another possibility is not having a fixed table at all, and negotiate
>    the mapping dynamically. E.g. either side could send a message at
>    connection time with an explicit table of entries "when I send byte X, I
>    mean command Y".

FWIW, I think these two options seem fine -- maintaining a purely local and hardcoded internal mapping of "message string C has id Y", where Y is capped by the number of commands you actually implement (presumably less than 65536 total), is easy; and providing a per-peer mapping from "byte X" to "id Y" then requires at most 512 bytes per peer, along with up to 3kB of initial setup to tell your peer what mappings you'll use.

> Our idea is to start out with approach (1), with a mapping table effectively
> managed by the BIP process directly, but if and when collisions become a
> concern (maybe due to many parallel proposals, maybe because the number of
> messages just grows too big), switch to approach (3), possibly even
> differentially (the sent mappings are just additions/overwrites of the
> BIP-defined table mappings, rather than a full mapping).

I guess I think it would make sense to not start using a novel 1-byte message unless you've done something to introduce that message first; whether that's via approach (3) ("I'm going to use 0xE9 to mean pkgtxns") or via a multibyte feature support message ("I sent sendaddrv3 as a 10-byte message, that implies 0xA3 means addrv3 from now on").
I do still think it'd be better to recommend against reserving a byte for one-shot messages, and not do it for existing one-shot messages though. Cheers, aj
Re: [bitcoin-dev] Preventing/detecting pinning of jointly funded txs
On Sun, Nov 06, 2022 at 06:22:08PM -0500, Antoine Riard via bitcoin-dev wrote: > Adding a few more thoughts here on what coinjoins/splicing/dual-funded > folks can do to solve this DoS issue in an opt-in RBF world only. > > I'm converging that deploying a distributed monitoring of the network > mempools in the same fashion as zeroconf people is one solution, as you can > detect a conflicting spend of your multi-party transaction. > Let's say you > have a web of well-connected full-nodes, each reporting all their incoming > mempool transactions to some reconciliation layer. > > This "mempools watchdog" infrastructure isn't exempt from mempools > partitioning attacks by an adversary, A mempool partitioning attack requires the adversary to identify your nodes. If they're just monitoring and not being used as the initial broadcaster of your txs, that should be difficult. (And I think it would make sense to do things that make it more difficult to successfully partition a node for tx relay, even if you can identify it) > where the goal is to control your > local node mempool view. A partitioning trick is somehow as simple as > policy changes across versions (e.g allowing Taproot Segwit v0.1 spends) or > two same-feerate transactions. The partitioning attack can target at least > two meaningful subsets. An even easier way to partition the network is to create two conflicting txs at the same fee/fee rate and to announce them to many peers simultaneously. That way neither will replace the other, and you can build from there. In order to attack you, the partition would need to be broad enough to capture all your monitoring nodes on one side (to avoid detection), and all the mining nodes on the other side (to prevent your target tx from being confirmed). If that seems likely, maybe it indicates that it's too easy to identify nodes that feed txs to mining pools... 
> Either the miner mempools only, by conflicting all > the reachable nodes in as many subsets with a "tainted" transaction (e.g > set a special nSequence value for each), and looking on corresponding > issued block. Or targeting the "watchdog" mempools only, where the > adversary observation mechanism is the multi-party blame assignment round > itself. I think there's a few cases like that: you can find out what txs mining pools have seen by looking at their blocks, what explorers have seen by looking at their website... Being able to use that information to identify your node(s) -- rather than just your standardness policy, which you hopefully share with many other nodes -- seems like something we should be working to fix... > There is an open question on how many "divide-and-conquer" rounds > from an adversary viewpoint you need to efficiently identify all the > complete set of "mempools watchdog". If the transaction-relay topology is > highly dynamic thanks to outbound transaction-relay peers rotation, the > hardness bar is increased. I'm not sure outbound rotation is sufficient? In today's world, if you're a listening node, an attacker can just connect directly, announce the conflicting tx to you, and other txs to everyone else. For a non-listening node, outbound rotation might be more harmful than helpful, as it increases the chances a node will peer with a given attacker at some point. > Though ultimately, the rough mental model I'm thinking on, this is a > "cat-and-mouse" game between the victims and the attacker, where the latter > try to find the blind spots of the former. I would say there is a strong > advantage to the attacker, in mapping the mempools can be batched against > multiple sets of victims. While the victims have no entry barriers to > deploy "mempools watchdog" there is a scarce resource in contest, namely > the inbound connection slots (at least the miners ones). 
Anytime you deploy a new listening node, you use up 10 inbound connections, but provide capacity for 115 new ones. Worst case (if it's too hard to prevent identifying a listening node if you use it for monitoring), you set up all your monitoring nodes as non-listening nodes, and also set up enough listening nodes in different IP ranges to compensate for the number of outbound connections your monitoring nodes are making, and just ignore the mempools of those listening nodes. > Victims could batch their defense costs, in outsourcing the monitoring to > dedicated entities (akin to LN watchtower). However, there is a belief in > lack of a compensation mechanism, you will have only a low number of public > ones (see number of BIP157 signaling nodes, or even Electrum ones). Seems like a pretty simple service to pay for, if there's real demand: costs are pretty trivial, and if it's a LN service, you can pay for it over lightning... Fairly easy to test if you're getting what you're paying for too, by simultaneously announcing two conflicting txs paying yourself, and checking you get an appropriate alert. > Assuming we can solve them, there is still the issue of assi
[bitcoin-dev] Preventing/detecting pinning of jointly funded txs
On Fri, Oct 28, 2022 at 03:21:53AM +1000, Anthony Towns via bitcoin-dev wrote: > What should folks wanting to do coinjoins/dualfunding/dlcs/etc do to > solve that problem if they have only opt-in RBF available? ref: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-October/021124.html So, having a go at answering my own question. I think ultimately the scenario here is:

 * you have a joint funding protocol, where everyone says "here's an
   unspent utxo that will be my contribution", collaborates on signing a
   transaction spending all those utxos, and then broadcasts it

 * everyone jointly agrees to pay some amount in fees for that
   transaction, targeting confirmation within N blocks

 * the goal is to have the transaction confirm, obviously; but it's also
   acceptable to discover a conflicting transaction, as that will
   demonstrate that a particular participant has been dishonest (their
   utxo was not "unspent"), allowing the overall protocol to make
   progress

The question then is how much effort do you have to go to to make such a protocol work? As an extreme example, you could always have each participant maintain a dedicated amount of hashpower: eg, if each participant individually controls 0.5% of hashpower, then having two honest participants would give you a 75% chance of confirmation within 137 blocks (roughly a day), even if your transaction failed to relay at all, and the only way to prevent confirmation is for a conflicting transaction to be confirmed earlier. Of course, needing to have 0.5% of hashpower would mean fewer than 200 people globally could participate in such a protocol, and requires something like $10M in capital investment just for ASICs in order to participate.
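Checking the napkin math in that last paragraph: two honest participants with 0.5% of hashpower each give a combined 1% chance per block of mining the tx themselves:

```python
# Two honest participants, each with 0.5% of hashpower, mine the next
# block with combined probability 1%; over 137 blocks (roughly a day at
# 10 minutes/block) the chance at least one of their blocks confirms
# the tx is:
p_per_block = 2 * 0.005
blocks = 137
p_confirm = 1 - (1 - p_per_block) ** blocks
print(f"{p_confirm:.2%}")   # about 75%
```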
I think the next step from that pretty much requires introducing the assumption that the vast majority of the bitcoin p2p network (both nodes and hashrate) will accept your transaction, at least in a world where all your collaborators are honest and don't create conflicting transactions. You can limit that assumption a little bit, but without most p2p peers being willing to relay your tx, you start having privacy issues; and without most miners being willing to mine your tx, you start getting problems with predicting fees. And in any event, I don't think anyone's trying to make weird transactions here, just get their otherwise normal transactions to actually confirm. I think the same approach used to detect double spend races by people accepting zeroconf would work here too. That is, set up a couple of anonymous bitcoin nodes, let them sit for a couple of weeks so their mempools are realistic, then when you broadcast a jointly funded transaction, query their mempools: if your new tx made it there, it likely made it to mining pools too, and you're fine; if it didn't, then the transaction that's blocking it almost certainly did, so you can find out what that is and can go from there. (If you don't see either your tx, or a conflicting one, then it likely means the nodes that broadcasted your tx are being sybil attacked, either because their peers are directly controlled by an attacker, or they've been identified by an attacker and attacked in some other way; presumably you could pick a couple of nodes that have been confirmed by both your anonymous nodes as valid and reachable, and connect to them to break out of the sybil attack; if that doesn't work either, you probably need to change ISPs or put your active node via a (different) VPN provider...) Your capital expenses are much lower that way: perhaps on the order of $20/month to run a couple of nodes on AWS or linode or similar.
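The "query their mempools" step above boils down to a three-way decision. Here's a sketch of just that decision logic; the RPC plumbing against each anonymous node (e.g. Bitcoin Core's getmempoolentry / testmempoolaccept calls) is left out, and the status strings and exact reject-reason value are illustrative, not authoritative:

```python
# Sketch of the decision step for the mempool-monitoring idea above.
# We only classify results assumed to have already been fetched from a
# watchdog node; the "txn-mempool-conflict" reject reason mirrors what
# Bitcoin Core's testmempoolaccept can report, but treat it as an
# assumption to verify against your node version.
def classify(tx_in_mempool, accept_result):
    """Decide what one watchdog node's view implies about our joint tx."""
    if tx_in_mempool:
        # Our tx reached this node, so it likely reached miners too.
        return "propagated"
    if accept_result and accept_result.get("reject-reason") == "txn-mempool-conflict":
        # A conflicting spend got there first: some participant's utxo
        # wasn't unspent; fetch the conflict and assign blame.
        return "conflict"
    # Neither tx visible: possible sybil/partition attack on the
    # broadcasting node; try connecting to fresh peers.
    return "sybil-suspected"

print(classify(False, {"allowed": False,
                       "reject-reason": "txn-mempool-conflict"}))  # prints "conflict"
```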
But, you might say, what if I don't even want to run multiple bitcoin nodes 24/7 indefinitely? Can we outsource that, like we outsource mining by paying tx fees? That seems harder, particularly if you want to avoid whoever you're outsourcing to associating you with the jointly funded transaction you're interested in. If you're *not* worried about that association, it's probably easy: just find some public explorers, and see if they list any conflicts in their mempool, or use the "broadcast tx" feature and see if it gives an error identifying the conflicting transaction. I think it's probably hard to make that behaviour a normal part of p2p tx relay though: if someone's trying to relay tx T but you reject it because of a conflicting tx C; then it's easy to tell the node that first relayed T to you about C -- but how does that information get back to the original broadcaster? One way would be to broadcast "C" back to whoever announced T to you, and let C propagate all the way back to whoever originally proposed T -- but that only works if everyone's running a mempool policy where there's a total ordering for tx replacement, ie for any conflicting txs, eithe
Re: [bitcoin-dev] On mempool policy consistency
On Mon, Oct 31, 2022 at 12:25:46PM -0400, Greg Sanders via bitcoin-dev wrote: > For 0-conf services we have potential thieves who are willing > to *out-bid themselves* to have funds come back to themselves. It's not a > "legitimate" use-case, but a rational one. I think that's a huge oversimplification of "rational" -- otherwise you might as well say that deliberately pinning txs is also rational, because it allows the person doing the pinning to steal funds from their counterparty by forcing a timeout to expire. There's no need for us as developers, or us as node operators, to support every use case that some individual might find rational at some point in time. After all, it might be individually rational for someone to want the subsidy to stop decreasing, or to include 8MB of transactions per block. Note that it's also straightforwardly rational and incentive compatible for miners to not want this patch to be available, under the following scenario:

 - a significant number of on-chain txs are for zeroconf services

 - fee income would be reduced if zeroconf services went away (both
   directly due to the absence of zeroconf payments, and by reducing
   mempool pressure, reducing fee income from the on-chain txs that
   remain)

 - miners adopting fullrbf would cause zeroconf services to go away,
   (and won't enable a comparable volume of new services that generates
   comparable transaction volume)

 - including the option in core would make other miners adopting
   fullrbf more likely

I think the first three of those are fairly straightforward and objective, at least at this point in time. The last is just a risk; but without any counterbalancing benefit, why take it? Gaining a few thousand sats due to high feerate replacement txs from people exploiting zeroconf services for a few months before all those services shut down doesn't make up for the lost fee income over the months or years it might have otherwise taken people to naturally switch to some better alternative.
Even if fullrbf worked for preventing pinning, that likely doesn't directly result in much additional fee income: once you know that pinning doesn't work, you just don't try it, which means there's no opportunity for miners to profit from a bidding war from the pinner's counterparties repeatedly RBFing their preferred tx to get it mined. That also excludes second order risks: if you can't do zeroconf with BTC anymore, do you switch to ERC20 tokens, and then trade your BTC savings for ETH or USDT, and do enough people do that to lower the price of BTC? If investors see BTC being less used for payments, does that lower their confidence in bitcoin's future, and cause them to sell? > Removing a > quite-likely-incentive-compatible option from the software just encourages > miners to adopt an additional patch Why shouldn't miners adopt an additional patch if they want some unusual functionality? Don't we want/expect miners to have the ability to change the code in meaningful ways, at a minimum to be able to cope with the scenario where core somehow gets coopted and releases bad code, or to be able to deal with the case where an emergency patch is needed? Is there any evidence miners even want this option? Peter suggested that some non-signalling replacements were being mined already [0], but as far as I can see [1] all of those are simply due to the transaction they replaced not having propagated in the first place (or having been evicted somehow? hard to tell without any data on the original tx). [0] https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-October/021012.html [1] https://github.com/bitcoin/bitcoin/pull/26287#issuecomment-1292692367 > 2) Forcing miners to honor fees left on the table with respect to 0-conf, > or forcing them to run a custom patchset to go around it, is a step > backwards. As you already acknowledged, any miner that wants this behaviour can just pick up the patch (or could run Knots, which already has the feature enabled by default).
It's simply false to say miners are being forced to do anything, no matter what we do here. If the direction you're facing is one where you're moving towards making life easier for people to commit fraud, and driving away businesses that aren't doing anyone harm, without achieving anything much else; then taking a step backwards seems like a sensible thing to do to me. (I remain optimistic about coming up with better RBF policy, and willing to be gung ho about everyone switching over to it even if it does kill off zeroconf, provided it actually does some good and we give people 6 months or more notice that it's definitely happening and what exactly the new rules will be, though) Cheers, aj
Re: [bitcoin-dev] On mempool policy consistency
On Sun, Oct 30, 2022 at 11:02:43AM +1000, Anthony Towns via bitcoin-dev wrote: > > Some napkin math: there are about 250,000 transactions a day; if > > we round that up to 100 million a year and assume we only want one > > transaction per year to fail to initially propagate on a network where > > 30% of nodes have adopted a more permissive policy, lightweight clients > > will need to connect to over 50 randomly selected nodes.[1] > A target failure probability of 1-in-1e8 means: Oh, based on the "receive version message" log entries of a node that only does outbound connections, over the last ~3 weeks I see about 3000 outbound connections (mostly feelers/block-relay-only ones), of which a bunch identify as non-taproot supporting:

     10 /Satoshi:0.16.0/
     13 /Satoshi:0.17.0/
     13 /Satoshi:0.17.0.1/
     28 /Satoshi:0.16.3/
     29 /Satoshi:0.19.0.1/
     36 /Satoshi:0.18.1/
     37 /Satoshi:0.19.1/
     39 /Satoshi:0.17.1/
     50 /Satoshi:0.20.0/
     94 /Satoshi:0.21.0/
     95 /Satoshi:0.18.0/
    244 /Satoshi:0.20.1/

Those add up to 688+ of 3065 total; if that's representative, it presumably means a random node connecting to 8 random listening peers has a 6.44-in-1-million chance of only connecting to peers that don't support taproot, ie failing your suggested threshold by a factor of about 644. Cheers, aj
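Reproducing that estimate from the observed counts:

```python
# 688 of 3065 observed peers identified as versions predating taproot
# support; the chance that all 8 random outbound peers fall in that set:
non_taproot = 688 / 3065
p_all_eight = non_taproot ** 8
print(p_all_eight * 1e6)   # ~6.44 per million
```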
Re: [bitcoin-dev] On mempool policy consistency
On Thu, Oct 27, 2022 at 09:29:47PM +0100, Antoine Riard via bitcoin-dev wrote: > Let's take the contra. (I don't think I know that phrase? Is it like "play devil's advocate"?) > I would say the current post describes the state of Bitcoin Core and > beyond policy > rules with a high-degree of exhaustivity and completeness, though it's, what > is, mostly a description. While I think it ends with a set of > recommendations It was only intended as a description, not a recommendation for anything. At this point, the only thing I think I could honestly recommend that doesn't seem like it comes with massive downsides, is for core to recommend and implement a particular mempool policy, and only have options that either make it feasible to scale that policy to different hardware limitations, or provide options that users can activate en-masse if it turns out people are doing crazy things in the mempool (eg, a new policy turns out to be ill-conceived, and it's better to revert to a previous policy; or a potential spam vector gets exploited at scale). > What should be actually the design goals and > principles of Core's transaction-relay propagation rules > of which mempool accepts ones is a subset?
I think the goals of mempool/relay policy are _really_ simple; namely:

 * relay transactions from users to all potential miners, so that
   non-mining nodes don't have to directly contact miners to announce
   their tx, both for efficiency (your tx can appear in the next block
   anyone mines, rather than just the next block you mine) and privacy
   (so that miners don't know who a transaction belongs to, so that
   users don't have to know who miners are, and so there's no/minimal
   correlation between who proposed a tx and who mined the block it
   appears in)

 * having most of the data that makes up the next block pre-validated
   and pre-distributed throughout the network, so that block validation
   and relay is much more efficient

> By such design goals, I'm > thinking either, a qualitative framework, like attacks game for a concrete > application ("Can we prevent pinning against multi-party Coinjoin ?"). I don't think that even makes sense as a question at that level: you can only ask questions like that if you already have known mempool policies across the majority of nodes and miners. If you don't, you have to allow for the possibility that 99% of hashrate is receiving private blacklists from OFAC and that one of your coinjoin counterparties is on that list, eg, and at that point, I don't think pinning is even conceivably solvable. > I believe we would come up with a > second-order observation. That we might not be able to satisfy every > use-case with the standard set of policy rules. E.g, a contracting protocol > could look for package size beyond the upper bound anti-Dos limit. One reason that limit is in place is that the larger the tx is compared to the block limit, the more likely you are to hit corner cases where greedily filling a block with the highest feerate txs first is significantly suboptimal.
That might mean, eg, that there's 410kvB of higher fee rate txs than your 600kvB large package, and that your stuff gets delayed, because the miner isn't clever enough to realise dropping the 10kvB is worthwhile. Or it might mean that your tx gets delayed because the complicated analysis takes a minute to run and a block was mined using the simpler algorithm first. Or it might mean that some mining startup with clever proprietary software that can calculate this stuff quickly makes substantially more profit than everyone else, so they start dominating block generation, despite the fact that they're censoring transactions due to OFAC rules. > Or even the > global resources offered by the network of full-nodes are not high enough > to handle some application event. Blocks are limited on average to 4MB-per-10-minutes (6.7kB/second), and applications certainly shouldn't be being designed to only work if they can monopolise the entirety of the next few blocks. I don't think it makes any sense to imagine application events in Bitcoin that exceed the capacity of a random full node. And equally, even if you're talking about some other blockchain with higher capacity; I don't really think it makes sense to call it a "full" node if it can't actually cope with the demands placed on it by any application that works on that network. > E.g a Lightning Service Provider doing a > liquidity maintenance round of all its counterparties, and as such > force-closing and broadcasting more transactions than can be handled at the > transaction-relay layer due to default MAX_PEER_TX_ANNOUNCEMENTS value. MAX_PEER_TX_ANNOUNCEMENTS is 5000 txs, and it's per-peer. If you're an LSP that's doing that much work, it seems likely that you'd at least be running a long-lived listening node, so likely have 100+ peers, and could conceivably simultaneously announce 500k txs distributed across them, which at 130vB each (1-taproo
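The 410kvB/600kvB scenario can be made concrete with a toy greedy-vs-smarter comparison. The feerates below are my own illustrative numbers, not from the post; only the sizes come from it:

```python
# Toy block packing: 1,000 kvB of space, 410 kvB of txs at 10 sat/vB
# (illustrative), our 600 kvB package at 9 sat/vB (illustrative), and
# 1 sat/vB filler for any leftover space.
LIMIT = 1000  # kvB

def greedy_fees():
    # Greedy by feerate takes all 410 kvB first; the 600 kvB package
    # then no longer fits (410 + 600 > 1000), so filler goes in instead.
    return 410 * 10 + (LIMIT - 410) * 1

def smarter_fees():
    # Dropping just 10 kvB of the high-feerate txs makes room for the
    # whole package.
    return 400 * 10 + 600 * 9

assert smarter_fees() > greedy_fees()   # 9400 vs 4690 (kvB * sat/vB)
```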
Re: [bitcoin-dev] On mempool policy consistency
On Fri, Oct 28, 2022 at 09:45:09PM -1000, David A. Harding via bitcoin-dev wrote: > I think this might be understating the problem. A 95% chance of having > an outbound peer accept your tx conversely implies 1 in 20 payments will > fail to propagate on their initial broadcast. Whether that's terrible or not depends on how easy it is to retry (and how likely the retry is to succeed) after a failure -- if a TCP packet fails, it just gets automatically resent, and if that succeeds, there's a little lag, but your connection is still usable. I think it's *conceivable* that a 5% failure rate could be detectable and automatically rectified. Not that I have a good idea how you'd actually do that, in a way that's efficient/private/decentralised... > Some napkin math: there are about 250,000 transactions a day; if > we round that up to 100 million a year and assume we only want one > transaction per year to fail to initially propagate on a network where > 30% of nodes have adopted a more permissive policy, lightweight clients > will need to connect to over 50 randomly selected nodes.[1] A target failure probability of 1-in-1e8 means:

 * with 8 connections, you need 90% of the network to support your txs

 * with 12 connections, you need ~79%

 * with 24 connections (eg everyone running a long-lived node is
   listening, so long lived nodes make 12 outbound and receive about
   ~12 inbound; shortlived nodes just do 24 outbound), you need ~54%

So with that success target, and no preferential peering, you need a majority of listening nodes to support your tx's features in most reasonable scenarios, I think. > For a more > permissive policy only adopted by 10% of nodes, the lightweight client > needs to connect to almost 150 nodes. I get 175 connections needed for that scenario; or 153 with a target failure rate of 1-in-10-million. Cheers, aj
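The connection counts in this exchange all come from solving (1 - adopted)^n <= target for n:

```python
import math

# Connections needed so that the chance of *every* randomly chosen peer
# rejecting the tx is below the target failure probability, given a
# fraction `adopted` of nodes relaying it.
def conns_needed(adopted, target=1e-8):
    return math.ceil(math.log(target) / math.log(1 - adopted))

print(conns_needed(0.30))         # 52  -- "over 50" for 30% adoption
print(conns_needed(0.10))         # 175 -- 10% adoption
print(conns_needed(0.10, 1e-7))   # 153 -- 1-in-10-million target
```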
Re: [bitcoin-dev] On mempool policy consistency
On Thu, Oct 27, 2022 at 11:56:45AM +0200, John Carvalho via bitcoin-dev wrote: > I took the time to read your whole post. Despite a diplomatic tone, I find > your takeaways from all your references to remain conveniently biased for > protecting the plan of RBF Yes, I am heavily biased against zeroconf: there's no way I'd personally be willing to trust it for my own incoming funds, no matter how much evidence you show me that it's safe in practice. Show me a million transactions where every single one worked fine, and I'm still going to assume that the payment going to me is going to be the one that makes the error rate tick up from 0% to 0.0001%. That's okay; just because I wouldn't do something, doesn't mean other people shouldn't. It does mean I'm not going to be a particularly good advocate for zeroconf though. I mean, I might still be a fine advocate for giving people time to react, making it clear what's going on, finding ways that might make everyone happy, or just digging in to random technical details; but, for me, I'm more interested in a world where chargebacks are impossible, not where we just make the best of what was possible with technology from five or ten years ago. But that's fine: it just means that people, like yourself, who will tolerate the risks of zeroconf, should be involved in the discussion. > You show multiple examples where, when I read them, I assume the next thing > you will say will be "so we really should stop trying to impose optional > features, particularly when they affect existing use cases" but instead you > persist. Sure, that's natural: you read a sign saying "you can have any ice cream you want for 5c" and think "Awesome, who wouldn't want cheap chocolate ice cream!!" and see me going for a Golden Gaytime and think "wtf dude". Different strokes. For me, I see the gmaxwell github comment I quoted saying: There is also a matter of driving competent design rather than lazy first thing that works.
and think "yeah, okay, maybe we should be working harder to push lightning adoption, rather than letting people stick with wallet UX from 2015" and have altcoins take over >50% of payment volume. Likewise, There is also a very clear pattern we've seen in the past where people take anything the system lets them do as strong evidence that they have a irrevocable right to use the system in that way, and that their only responsibility-- and if their usage harms the system it's the responsibility of the system to not permit it. seems a pretty good match against your claim "I expect the things I do with Bitcoin today to work FOREVER." Better to nip that thinking in the bud; and even if the best time to do that was years ago, the second best time to do it is still now. By contrast, from the same post, I'd guess you're focussing on: Network behavior is one of the few bits of friction driving good technical design rather than "move fast, break things, and force everyone else onto my way of doing thing rather than discussing the design in public". and thinking "yeah, move fast, break things, force everyone else -- that's exactly what's going on here, and shouldn't be". But that's also okay: even when there is common ground to be found, sometimes it requires actual work to get people who start from different views to get there. > The problem is that RBF has already been an option for years, and anyone > that wants to use it can. Is that true? Antoine claims [1] that opt-in RBF isn't enough to avoid a DoS issue when utxos are jointly funded by untrusting partners, and, aiui, that's the main motivation for addressing this now. 
[1] https://lists.linuxfoundation.org/pipermail/lightning-dev/2021-May/003033.html

The scenario he describes is: A, B, C create a tx:

    inputs: A1, B1, C1 [opts in to RBF]
    fees: normal
    outputs: [lightning channel, DLC, etc, who knows]

they all analyse the tx, and agree it looks great; however just before publishing it, A spams the network with an alternative tx, double spending her input:

    inputs: A1 [does not opt in to RBF]
    fees: low
    outputs: A

If A gets the timing right, that's bad for B and C because they've populated their mempool with the 1st transaction, while everyone else sees the 2nd one instead; and neither tx will replace the other. B and C can't know that they should just cancel their transaction, eg:

    inputs: B1, C1 [opts in to RBF]
    fees: 50% above normal
    outputs: [smaller channel, refund, whatever]

and might instead waste time trying to fee bump the tx to get it mined, or similar. What should folks wanting to do coinjoins/dualfunding/dlcs/etc do to solve that problem if they have only opt-in RBF available? If you're right that opt-in RBF is enough, that question has a good answer. I don't believe anyone's presented an answer to it in the 17 months since Antoine raised the concern.

> passive aggression
> escalation
> unfair advantage
> oppressive, dark-pattern design
> strong-arming
Re: [bitcoin-dev] On mempool policy consistency
On Thu, Oct 27, 2022 at 01:36:47PM +0100, Gloria Zhao wrote: > > The cutoff for that is probably something like "do 30% of listening > > nodes have a compatible policy"? If they do, then you'll have about a > > 95% chance of having at least one of your outbound peers accept your tx, > > just by random chance. > Yes, in most cases, whether Bitcoin Core is restricting or loosening > policy, the user in question is fine as long as they have a path from their > node to a miner that will accept it. This is the case for something like > -datacarriersize if the use case is putting stuff into OP_RETURN outputs, > or if they're LN and using CPFP carveout, v3, package relay, etc. > But > replacement is not only a question of "will my transaction propagate" but > also, "will someone else's transaction propagate, invalidating mine" or, in > other words, "can I prevent someone else's transaction from propagating." "Can I prevent someone else's transaction from propagating" is almost the entirety of the question with -datacarrier, -datacarriersize and -permitbaremultisig though: "we" don't want people to spam the utxo set or the blockchain with lots of data (cf BSV's gigabytes worth of dog pictures [0]), so for the people who are going to find some way of putting data in we'd like to encourage them to make it small, and do it in a way that's prunable and doesn't bloat the utxo set, whose size matters even more than the overall blockchain's size does. 
As I understand it, people were doing that by creating bare multisig utxos, ie a bare (non-p2sh) scriptPubKey that perhaps looks like:

    1 my_key data1 data2 data3 data4 data5 5 CHECKMULTISIG

which is "bad" in two ways: you're only committing to the data, so why not save 128 bytes by doing hash(data1 data2 data3 data4 data5) instead; and even more so, that data is only interesting to you, not everyone else, so why not do it in a way that doesn't bloat the utxo set, which we want to keep small so that it's easier to efficiently look up potential spends. Hence the -datacarriersize limitation that limits you to about 2.5 "dataN" entries per tx ("we'll prevent your tx from propagating if you do much more than publish a hash") and hence at least the potential for doing the same for baremultisig in general. [0] https://twitter.com/bsvdata/status/1427866510035324936 > A > zeroconf user relies on there *not* being a path from someone else's full > RBF node to a full RBF miner. This is why I think RBF is so controversial > in general, Yes; but I think it's also true to say that this is why *zeroconf* is as controversial as it is. Likewise OP_RETURN has had its own "controversies" to some extent, too: https://blog.bitmex.com/dapps-or-only-bitcoin-transactions-the-2014-debate/ https://github.com/bitcoin/bitcoin/pull/3715 https://github.com/bitcoin/bitcoin/pull/3939 > why -mempoolfullrbf on someone else's node is considered more > significant than another policy option, and why full RBF shouldn't be > compared with something like datacarriersize. It's definitely a different scenario: unexpected RBF can cause you to have less money than you expected; whereas more OP_RETURN data just bloats the blockchain, and losing money that you thought was yours is definitely more painful than more spam.
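A rough byte count makes the utxo-bloat comparison concrete. The exact sizes are approximate and assume five 33-byte (compressed-pubkey-sized) data pushes in the bare multisig:

```python
# Approximate scriptPubKey sizes: a bare 1-of-5 "multisig" stuffing five
# 33-byte pushes into the output stays in every node's utxo set forever,
# while an OP_RETURN push of a single 32-byte hash is provably
# unspendable and never enters the utxo set at all.
bare_multisig_spk = 1 + 5 * (1 + 33) + 1 + 1   # OP_1, 5 pushes, OP_5, OP_CHECKMULTISIG
op_return_spk = 1 + 1 + 32                     # OP_RETURN, push opcode, 32-byte hash
print(bare_multisig_spk, op_return_spk)        # 173 vs 34 bytes
assert op_return_spk < bare_multisig_spk
```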
But while the level of pain is different; I don't think the mechanism is: whether you're trying to preserve zeroconf or prevent utxo set spam, you're still relying on a vast majority of nodes working together to prevent even a small minority of hashpower from doing "bad" things, with no cryptographic assurances that will continue to work well or at all. > I don't think past patterns can be easily applied here, I mean, technically they trivially could? We *could* roll out support for full RBF in exactly the same way we rolled out support for opt-in RBF: making it the default for all nodes, but supplying an option that node operators can use to disable the feature for seven releases / ~4 years:

https://bitcoin.org/en/release/v0.12.0#opt-in-replace-by-fee-transactions
https://bitcoin.org/en/release/v0.19.0.1#deprecated-or-removed-configuration-options

If we don't want to do that immediately, but also want to make a definite move forward, then we could:

 * just say that, and then keep our word about it

 * keep the feature in master, but remove it in 24.x

 * put a time delay on the feature so that it doesn't happen immediately
   but is locked in in the code for whenever we are ready to do it

> and I don't think this necessarily shows a > different "direction" in thinking about mempool policy in general. If we're not applying past patterns, then this is a different direction in how we're thinking about things than what we did in the past. That's not necessarily a bad thing -- maybe we should be thinking differently; but I don't see how you can honestly dispute it: those are just two ways of saying the exact same thing. Cheers, aj
[bitcoin-dev] On mempool policy consistency
Hi *, TLDR: Yes, this post is too long, and there's no TLDR. If it's any consolation, it took longer to write than it does to read? On Tue, Jun 15, 2021 at 12:55:14PM -0400, Antoine Riard via bitcoin-dev wrote: > Subject: Re: [bitcoin-dev] Proposal: Full-RBF in Bitcoin Core 24.0 > I'm writing to propose deprecation of opt-in RBF in favor of full-RBF > If there is ecosystem agreement on switching to full-RBF, but 0.24 sounds > too early, let's defer it to 0.25 or 0.26. I don't think Core has a > consistent deprecation process w.r.t to policy rules heavily relied-on by > Bitcoin users, if we do so let sets a precedent satisfying as many folks as > we can. One precedent that seems to be being set here, which to me seems fairly novel for bitcoin core, is that we're about to start supporting and encouraging nodes to have meaningfully different mempool policies. From what I've seen, the baseline expectation has always been that while certainly mempools can and will differ, policies will be largely the same: Firstly, there is no "the mempool". There is no global mempool. Rather each node maintains its own mempool and accepts and rejects transaction to that mempool using their own internal policies. Most nodes have the same policies, but due to different start times, relay delays, and other factors, not every node has the same mempool, although they may be very similar. 
- https://bitcoin.stackexchange.com/questions/98585/how-to-find-if-two-transactions-in-mempool-are-conflicting

Up until now, the differences between node policies supported by different nodes running core have been quite small, with essentially the following options available:

 -minrelaytxfee, -maxmempool - changes the lowest fee rate you'll accept
 -mempoolexpiry - how long to keep txs in the mempool
 -datacarrier - whether to relay txs creating OP_RETURN outputs
 -datacarriersize - maximum size of OP_RETURN data
 -permitbaremultisig - whether to relay bare multisig txs
 -bytespersigop - changes how SIGOP accounting works for relay and
   mining prioritisation

as well as these, marked as "debug only" options (only shown with -help-debug):

 -incrementalrelayfee - make it easier/harder to spam txs by only
   slightly bumping the fee
 -dustrelayfee - make it easier/harder to create uneconomic utxos
 -limit{descendant,ancestor}{count,size} - changes how large the
   transaction chains can be

and in theory, but not available on mainnet:

 -acceptnonstdtxn - relay/mine non standard transactions

There's also the "prioritisetransaction" rpc, which can cause you to keep a low feerate transaction in your mempool longer than you might otherwise.

I think that -minrelaytxfee, -maxmempool and -mempoolexpiry are the only ones of those options commonly set, and those only rarely result in any differences in the txs at the top of the mempool.
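As a sketch of how even a couple of those knobs produce diverging mempools, here's a toy accept/reject check -- NOT Core's actual validation logic, just the shape of it, with made-up units and a hypothetical tx dict:

```python
# Toy sketch (not Core's actual logic): a few of the knobs listed above
# feeding a node's accept/reject decision, so the same tx lands in one
# node's mempool but not another's.
DEFAULTS = dict(minrelaytxfee=1.0,        # stand-in units (sat/vB)
                datacarrier=True, datacarriersize=83,
                permitbaremultisig=True)

def accepts(tx, policy):
    if tx.get('feerate', 0) < policy['minrelaytxfee']:
        return False
    opret = tx.get('op_return_size', 0)
    if opret and (not policy['datacarrier'] or opret > policy['datacarriersize']):
        return False
    if tx.get('bare_multisig') and not policy['permitbaremultisig']:
        return False
    return True

core_like  = dict(DEFAULTS)
knots_like = dict(DEFAULTS, permitbaremultisig=False, datacarriersize=42)

tx = {'feerate': 2.0, 'bare_multisig': True}
print(accepts(tx, core_like), accepts(tx, knots_like))  # True False
```

The point being: once defaults diverge (as with knots, discussed below), "the mempool" stops being even approximately a shared object for txs near the policy boundaries.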
There are also quite a few parameters that aren't even runtime configurable:

 - MAX_STANDARD_TX_WEIGHT
 - MIN_STANDARD_TX_NONWITNESS_SIZE (see also #26265)
 - MAX_P2SH_SIGOPS (see also #26348)
 - MAX_STANDARD_TX_SIGOPS_COST
 - MAX_STANDARD_P2WSH_STACK_ITEMS
 - MAX_STANDARD_P2WSH_STACK_ITEM_SIZE
 - MAX_STANDARD_TAPSCRIPT_STACK_ITEM_SIZE
 - MAX_STANDARD_P2WSH_SCRIPT_SIZE
 - MAX_STANDARD_SCRIPTSIG_SIZE
 - EXTRA_DESCENDANT_TX_SIZE_LIMIT
 - MAX_REPLACEMENT_CANDIDATES

And other plausible options aren't configurable even at compile time -- eg, core doesn't implement BIP 125's inherited signalling rule so there's no way to enable it; core doesn't allow opting out of BIP 125 rule 3's ratchet on absolute fee; core doesn't allow CPFP carveout with more than 1 ancestor; core doesn't allow opting out of LOW_S checks (even via -acceptnonstdtxn); etc.

We also naturally have different mempool policies between different releases: eg, expansions of policy, such as allowing OP_RETURN outputs or expanding their data limit from 40 to 80 bytes, or new soft forks, where old nodes won't relay transactions that use the new features; and also occasional restrictions in policy, such as the LOW_S requirement.

While supporting and encouraging different mempool policies might be new for core, it's not new for knots: knots changes some of these defaults (-permitbaremultisig defaults to false, -datacarriersize is reduced to 42), allows the use of -acceptnonstdtxn on mainnet, and introduces new options including -spkreuse and -mempoolreplacement (giving the latter full rbf behaviour by default). Knots also includes a `-corepolicy` option to make it easy to get a configuration matching core's defaults.

I think gmaxwell's take from Feb 2015 (in the context of how restrictive policy on OP_RETURN data should be) was a reasonable description for core's approach up until now:

 There is also a matter of driving competent design rather than lazy first thing that works. E.g.
In stealth addresses the early proposals use highly inefficient single ECDH point per output instead
Re: [bitcoin-dev] [Opt-in full-RBF] Zero-conf apps in immediate danger
On Thu, Oct 20, 2022 at 02:37:53PM +0200, Sergej Kotliar via bitcoin-dev wrote:
> > If someone's going to systematically exploit your store via this
> > mechanism, it seems like they'd just find a single wallet with a good
> > UX for opt-in RBF and lowballing fees, and go to town -- not something
> > where opt-in rbf vs fullrbf policies make any difference at all?
> Sort of. But yes once this starts being abused systemically we will have to
> do something else w RBF payments, such as crediting the amount in BTC to a
> custodial account. But this option isn't available to your normal payment
> processor type business.

So, what I'm hearing is:

 * lightning works great, but is still pretty small
 * zeroconf works great for txs that opt-out of RBF
 * opt-in RBF is a pain for two reasons:
   - people don't like that it's not treated as zeroconf
   - the risk of fiat/BTC exchange rate changes between now and when
     the tx actually confirms is worrying even if it hasn't caused real
     problems yet

(Please correct me if that's too far wrong)

Maybe it would be productive to explore this opt-in RBF part a bit more? ie, see if "we" can come up with better answers to some question along the lines of:

 "how can we make on-chain payments for goods priced in fiat work
  well for payees that opt-in to RBF?"

That seems like the sort of thing that's better solved by a collaboration between wallet devs and merchant devs (and protocol devs?), rather than just one or the other? Is that something that we could talk about here? Or maybe it's better done via an optech workgroup or something?
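The exchange-rate worry in the list above can be made concrete with toy numbers (purely illustrative; real merchants presumably hedge in more sophisticated ways): the merchant fixes the BTC amount at quote time, but only realises fiat value once the tx confirms -- which, for a lowballed RBF tx, may be much later.

```python
# Toy FX-exposure calculation: BTC amount fixed when the price is quoted,
# fiat value realised only at confirmation time. Numbers are made up.
def fx_exposure(fiat_price, rate_at_quote, rate_at_confirm):
    btc_amount = fiat_price / rate_at_quote
    return btc_amount * rate_at_confirm - fiat_price

print(fx_exposure(100.0, 20_000.0, 19_000.0))  # rate fell 5%: merchant down ~$5
```

Symmetric moves even out in aggregate, as Sergej says; the asymmetry only appears when the *customer* gets to choose, after the fact, whether the tx confirms at all.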
If "we'll credit your account in BTC, then work out the USD coversion and deduct that for your purchase, then you can do whatever you like with any remaining BTC from your on-chain payment" is the idea, maybe we should just roll with that design, but make it more decentralised: have the initial payment setup a lightning channel between the customer and the merchant with the BTC (so it's not custodial), but do some magic to allow USD amounts to be transferred over it (Taro? something oracle based so that both parties are confident a fair exchange rate will be used?). Maybe that particular idea is naive, but having an actual problem to solve seems more constructive than just saying "we want rbf" "but we want zeroconf" all the time? (Ideally the lightning channels above would be dual funded so they could be used for routing more generally; but then dual funded channels are one of the things that get broken by lack of full rbf) > > I thought the "normal" avenue for fooling non-RBF zeroconf was to create > > two conflicting txs in advance, one paying the merchant, one paying > > yourself, connect to many peers, relay the one paying the merchant to > > the merchant, and the other to everyone else. > > I'm just basing this off Peter Todd's stuff from years ago: > > https://np.reddit.com/r/Bitcoin/comments/40ejy8/peter_todd_with_my_doublespendpy_tool_with/cytlhh0/ > > https://github.com/petertodd/replace-by-fee-tools/blob/master/doublespend.py > Yeah, I know the list still rehashes a single incident from 10 years ago to > declare the entire practice as unsafe, and ignores real-world data that of > the last million transactions we had zero cases of this successfully > abusing us. 
I mean, the avenue above isn't easy to exploit -- you have to identify the merchant's node so that they get the bad tx, and you have to connect to many peers so that your preferred tx propagates to miners first -- and probably more importantly, it's relatively easy to detect -- if the merchant has a few passive nodes that the attacker doesn't know about, and uses those to watch for attempted doublespends while it tries to ensure the real tx has propagated widely. So it doesn't surprise me at all that it's not often attempted, and even less often successful.

> > > Currently Lightning is somewhere around 15% of our total bitcoin
> > > payments.
> > So, based on last year's numbers, presumably that makes your bitcoin
> > payments break down as something like:
> >  5% txs are on-chain and seem shady and are excluded from zeroconf
> > 15% txs are lightning
> > 20% txs are on-chain but signal rbf and are excluded from zeroconf
> > 60% txs are on-chain and seem fine for zeroconf
> Numbers are right. Shady is too strong a word,

Heh, fair enough. So the above suggests 25% of payments already get a sub-par experience, compared to what you'd like them to have (which sucks, but if you're trying to reinvent both money and payments, maybe isn't surprising). And going full rbf would bump that from 25% to 85%, which would be pretty terrible.

> RBF is a strictly worse UX as proven by anyone
> accepting bitcoin payments at scale.

So let's make it better? Building bitcoin businesses on the lie that unconfirmed txs are safe and won't be replaced is going to bite us eventually; focussing on trying to push that back indefinitely is just going to make everyone l
Re: [bitcoin-dev] [Opt-in full-RBF] Zero-conf apps in immediate danger
On Wed, Oct 19, 2022 at 04:29:57PM +0200, Sergej Kotliar via bitcoin-dev wrote: > The > biggest risk in accepting bitcoin payments is in fact not zeroconf risk > (it's actually quite easily managed), You mean "it's quite easily managed, provided the transaction doesn't opt-in to rbf", right? At least, that's what I understood you saying last time; ie that if the tx signals rbf, then you just don't do zeroconf no matter what other trustworthy signals you might see: https://twitter.com/ziggamon/status/1435863691816275970 (rbf txs seem to have increased from 22% then to 29% now) > it's FX risk as the merchant must > commit to a certain BTCUSD rate ahead of time for a purchase. Over time > some transactions lose money to FX and others earn money - that evens out > in the end. > But if there is an _easily accessible in the wallet_ feature to > "cancel transaction" that means it will eventually get systematically > abused. A risk of X% loss on many payments that's easy to systematically > abuse is more scary than a rare risk of losing 100% of one occasional > payment. It's already possible to execute this form of abuse with opt-in > RBF, If someone's going to systematically exploit your store via this mechanism, it seems like they'd just find a single wallet with a good UX for opt-in RBF and lowballing fees, and go to town -- not something where opt-in rbf vs fullrbf policies make any difference at all? It's not like existing wallets that don't let you set RBF will suddenly get a good UX for replacing transactions just because they'd be relayed if they did, is it? > To successfully fool (non-RBF) > zeroconf one needs to have access to mining infrastructure and probability > of success is the % of hash rate controlled. I thought the "normal" avenue for fooling non-RBF zeroconf was to create two conflicting txs in advance, one paying the merchant, one paying yourself, connect to many peers, relay the one paying the merchant to the merchant, and the other to everyone else. 
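The race described above can be sketched as a toy model (not real p2p code; the arrival times and tx labels are made up): under first-seen policy each node keeps whichever of two conflicting txs reaches it first, so the attacker aims the self-paying tx at well-connected peers and the merchant-paying tx only at the merchant.

```python
# Toy first-seen race: a node keeps the first conflicting tx it hears.
def first_seen_winner(arrival_times):
    """Return the tx a first-seen node accepts: the earliest arrival."""
    return min(arrival_times, key=arrival_times.get)

# Hypothetical arrival times (seconds) at two different nodes:
miner_view    = {'A_pays_merchant': 2.3, 'B_pays_attacker': 0.1}
merchant_view = {'A_pays_merchant': 0.0, 'B_pays_attacker': 5.0}

print(first_seen_winner(miner_view))     # the miner mines B...
print(first_seen_winner(merchant_view))  # ...while the merchant's node shows A
```

Which is also why the passive-node detection trick (mentioned in the next message) works: a node the attacker can't target is likely to see B early, revealing the conflict before goods are handed over.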
I'm just basing this off Peter Todd's stuff from years ago:

https://np.reddit.com/r/Bitcoin/comments/40ejy8/peter_todd_with_my_doublespendpy_tool_with/cytlhh0/
https://github.com/petertodd/replace-by-fee-tools/blob/master/doublespend.py

> Currently Lightning is somewhere around 15% of our total bitcoin payments.

So, based on last year's numbers, presumably that makes your bitcoin payments break down as something like:

  5% txs are on-chain and seem shady and are excluded from zeroconf
 15% txs are lightning
 20% txs are on-chain but signal rbf and are excluded from zeroconf
 60% txs are on-chain and seem fine for zeroconf

> This is very much not nothing, and all of us here want Lightning to grow,
> but I think it warrants a serious discussion on whether we want Lightning
> adoption to go to 100% by means of disabling on-chain commerce.

If the numbers above were accurate, this would just mean you'd go from 60% zeroconf/25% not-zeroconf to 85% not-zeroconf; wouldn't be 0% on-chain.

> For me
> personally it would be an easier discussion to have when Lightning is at
> 80%+ of all bitcoin transactions.

Can you extrapolate from the numbers you've seen to estimate when that might be, given current trends? Or perhaps when fine-for-zeroconf txs drop to 20%, since opt-in-RBF txs and considered-unsafe txs would still work the same in a fullrbf world.

> The benefits of Lightning are many and obvious,
> we don't need to limit onchain to make Lightning more appealing.

To be fair, I think making lightning (and coinjoins) work better is exactly what inspired this -- not as a "make on-chain worse so we look better in comparison", but as a "making lightning work well is a bunch of hard problems, here's the next thing we need in order to beat the next problem".

> Sidenote: On the efficacy of RBF to "unstuck" stuck transactions
> After interacting with users during high-fee periods I've come to not
> appreciate RBF as a solution to that issue.
Most users (80% or so) simply > don't have access to that functionality, because their wallet doesn't > support it, or they use a custodial (exchange) wallet etc. Of those that > have the feature - only the power users understand how RBF works, and > explaining how to do RBF to a non-power-user is just too complex, for the > same reason why it's complex for wallets to make sensible non-power-user UI > around it. Current equilibrium is that mostly only power users have access > to RBF and they know how to handle it, so things are somewhat working. But > rolling this out to the broad market is something else and would likely > cause more confusion. > CPFP is somewhat more viable but also not perfect as it would require lots > of edge case code to handle abuse vectors: What if users abuse a generous > CPFP policy to unstuck past transactions or consolidate large wallets. Best > is for CPFP to be done on the wallet side, not the merchant side, but there > too are the same UX issues as with RBF. I thi
Re: [bitcoin-dev] [Opt-in full-RBF] Zero-conf apps in immediate danger
On Mon, Oct 17, 2022 at 05:41:48PM -0400, Antoine Riard via bitcoin-dev wrote:
> > 1) Continue supporting and encouraging accepting unconfirmed "on-chain"
> > payments indefinitely
> > 2) Draw a line in the sand now, but give people who are currently
> > accepting unconfirmed txs time to update their software and business
> > model
> > 3) Encourage mainnet miners and relay nodes to support unconditional
> > RBF immediately, no matter how much that increases the risk to
> > existing businesses that are still accepting unconfirmed txs
> To give more context, the initial approach of enabling full RBF through
> #25353 + #25600 wasn't making the assumption the enablement itself would
> reach agreement of the economic majority or unanimity.

Full RBF doesn't need a majority or unanimity to have an impact; it needs adoption by perhaps 10% of hashrate (so a low fee tx at the bottom of a 10MvB mempool can be replaced before being mined naturally), and some way of finding a working path to relay txs to that hashrate. Having a majority of nodes/hashrate support it makes the upsides better, but doesn't change the downsides to the people who are relying on it not being available.

> Without denying that such equilibrium would be unstable, it was designed to
> remove the responsibility of the Core project itself to "draw a hard line"
> on the subject.

Removing responsibility from core developers seems like it's very much optimising for the wrong thing to me. I mean, I guess I can understand wanting to reduce that responsibility for maintainers of the github repo, even if for no other reason than to avoid frivolous lawsuits, but where do you expect people to find better advice about what things are a good/bad idea if core devs as a whole are avoiding that responsibility?

Core devs are supposedly top technical experts at bitcoin -- which means they're the ones that should have the best understanding of all the implications of policy changes like this. Is opt-in-RBF-only fine?
If you look at the network today, it sure seems like it; it takes a pretty good technical understanding to figure out what problems it has, and an even better one to figure out whether those problems can be solved while keeping an opt-in RBF regime, or if full RBF is needed. At that point, the technical experts *should* be coming up with a specific recommendation, and, personally, I think that's exactly what happened with [0] [1] and [2].

[0] https://lists.linuxfoundation.org/pipermail/lightning-dev/2021-May/003033.html
[1] https://github.com/bitcoin/bitcoin/pull/25353
[2] https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-June/020557.html

That did draw a hard line in the sand: it said "hey, opt-in RBF had a good run, but it's time to switch over to full RBF, for these reasons". It's a bit disappointing that the people for whom that's a problem didn't engage earlier -- though looking back, I guess there wasn't all that much effort made to reach out, either. There were two mentions in the optech newsletter [3] [4] but it wasn't called out as an "action item" (maybe those aren't a thing anymore), so it may have been pretty missable, especially given RBF has been discussed on and off for so long. And the impression I got from the PR review club discussion was more that devs were making assumptions about businesses rather than having talked to them (eg "[I] think there are fewer and fewer businesses who absolutely cannot survive without relying on zeroconf. Or at least hope so").

[3] https://bitcoinops.org/en/newsletters/2022/06/22/
[4] https://bitcoinops.org/en/newsletters/2022/07/13/

If we're happy to not get feedback until we start doing rcs, that's fine; but if we want to say "oops, we're into release candidates, you should have said something earlier, it's too late now", that's a pretty closed-off way of doing things.
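The "perhaps 10% of hashrate" claim earlier can be put as back-of-envelope arithmetic (toy model: treating each block as an independent trial, and assuming the low-fee tx would sit unmined for some number of blocks):

```python
# Chance that at least one replacement-accepting block arrives before a
# low-fee tx at the bottom of the mempool would be mined naturally after
# k blocks, given a fraction f of hashrate mines replacements.
def p_replaced(f, k):
    return 1 - (1 - f) ** k

# A tx lowballed enough to wait ~20 blocks:
for f in (0.05, 0.10):
    print(f, round(p_replaced(f, 20), 4))
```

With 10% of hashrate and a 20-block wait this is already close to 0.88, which is the sense in which a small minority of hashrate is enough for full RBF to "work" for replacements of genuinely low-feerate txs.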
And I mean: all this is only about drawing a line in *sand*; if people think core devs are wrong, they can still let that line blow away in the wind, by running different software, configuring core differently, patching core, or whatever else. > Moreover, relying on node operators turning on the setting > provides a smoother approach offering time to zero-conf services to react > in consequence. I don't think that's remotely true: take a look at taproot activation: it took two months between releasing code that supported signalling and having 98% of hashrate signalling; with 40% of blocks signalling within the first two weeks. > So the current path definitely belongs more to a 3) approach. > > 3) Encourage mainnet miners and relay nodes to support unconditional > > RBF immediately, no matter how much that increases the risk to > > existing businesses that are still accepting unconfirmed txs Yes, that's how it appears to me, too. It's not my preference (giving people clear warning of changes seems much better to me), but I can certainly live with it. But if the line in the sand is "we're d
Re: [bitcoin-dev] [Opt-in full-RBF] Zero-conf apps in immediate danger
On Thu, Oct 13, 2022 at 02:35:22PM +1000, Anthony Towns via bitcoin-dev wrote:
> On Wed, Oct 12, 2022 at 04:11:05PM +0000, Pieter Wuille via bitcoin-dev wrote:
> > In my view, it is just what I said: a step towards getting full RBF
> > on the network, by allowing experimentation and socializing the notion
> > that developers believe it is time.
> We "believe it is time" for what exactly, though? (a) To start
> deprecating accepting zeroconf txs on mainnet, over the next 6, 12 or
> 18 months; or (b) to start switching mainnet mining and relay nodes over
> to full RBF?

For what it's worth, that was a serious question: I don't feel like I know what other people's answer to it is. Seems to me like there's fundamentally maybe three approaches:

 1) Continue supporting and encouraging accepting unconfirmed "on-chain"
    payments indefinitely
 2) Draw a line in the sand now, but give people who are currently
    accepting unconfirmed txs time to update their software and business
    model
 3) Encourage mainnet miners and relay nodes to support unconditional
    RBF immediately, no matter how much that increases the risk to
    existing businesses that are still accepting unconfirmed txs

I think Antoine gave a pretty decent rationale for why we shouldn't indefinitely continue with conditional RBF in [0] [1] -- it makes it easy to disrupt decentralised pooling protocols, whether that be for establishing lightning channels or coinjoins or anything else.

[0] https://lists.linuxfoundation.org/pipermail/lightning-dev/2021-May/003033.html
[1] https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-June/020557.html

It's also an unstable equilibrium -- if everyone does first-seen-is-final at the mempool level, everything is fine; but it only takes a few defectors to start relaying and mining full RBF txs to spoil zeroconf for everyone -- so even if it were desirable to maintain it forever, it's probably not actually possible to maintain it indefinitely.
If so, that leaves the choice between (2) and (3). You might argue that there's a 4th option: ignore the problem and think about it later; but to me that seems like it will just eventually result in outcome (3). At least a few people are already running full RBF relay nodes [2] [3] [4], and there's a report that non-signalling RBF txs are now getting mined [5] when they weren't a few months ago [6]. I wasn't able to confirm the latter to my satisfaction: looking at mempool.observer, the non-RBF signalling conflicting txs don't seem to have been consistently paying a higher feerate, so I couldn't rule out the possibility that the difference might just be due to inconsistent relaying. [2] https://twitter.com/murchandamus/status/1552488955328831492 [3] https://twitter.com/LukeDashjr/status/977211607947317254 [4] https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-June/020592.html [5] https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-June/020592.html [6] https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-June/020592.html It seems to me that the best approach for implementing (3) would be to change the default for -mempoolfullrbf to true immediately, which is both what Knots has been doing for years, and what #26305 proposes [7]. So from seeing what people are actually *doing*, I could easily be convinced that (3) is the goal people are actually working towards. [7] https://github.com/bitcoin/bitcoin/pull/26305 But if (3) *is* what we're really trying to do, I think it's a bit disingenuous to assume that that effort will fail, and tell people that nothing's going to change on mainnet in the near future [8] [9] [10] [11]. If pools are starting to allow replacements of txs that didn't signal according to BIP 125 and mine blocks including those replacements, then it's true that zero-conf apps are in much more immediate danger than they were a month ago, and as far as I can see, we shouldn't be pretending otherwise. 
[8] https://github.com/bitcoin/bitcoin/pull/26287#issuecomment-1274953204 [9] https://github.com/bitcoin/bitcoin/pull/26287#issuecomment-1276682043 [10] https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-October/020981.html [11] https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-October/021006.html Personally, I prefer an approach like (2) -- commit to doing something first, give people time to prepare for it, and then do it, and outside of Knots, I don't think there's been any clear commitment to deprecating zeroconf txs up until now. But what we're currently doing is suboptimal for that in two ways: - there's no real commitment that the change will actually happen - even if it does, there's no indication when that will be - it's not easy to test your apps against the new world order, because it's
Re: [bitcoin-dev] [Opt-in full-RBF] Zero-conf apps in immediate danger
On Wed, Oct 12, 2022 at 04:11:05PM +0000, Pieter Wuille via bitcoin-dev wrote:
> In my view, it is just what I said: a step towards getting full RBF
> on the network, by allowing experimentation and socializing the notion
> that developers believe it is time.

We "believe it is time" for what exactly, though? (a) To start deprecating accepting zeroconf txs on mainnet, over the next 6, 12 or 18 months; or (b) to start switching mainnet mining and relay nodes over to full RBF?

As far as experimentation goes, I don't really see this option as being very likely to help: the default for this option is still false, so it's likely going to be difficult to get non-opt-in RBF txs relayed or mined anywhere, even on testnet or signet, no? (Maybe that's a difficulty that's resolved by an addnode, but it's still a difficulty) If experimentation's the goal, making the default be true for testnet/signet seems like it would be pretty useful at least. Meaningful experimentation is probably kind of difficult in the first place while fees are low and there's often no backlog in the mempool, as well; something that perhaps applies more to test nets than mainnet even.

If we're trying to socialise the idea that zeroconf deprecation is happening and that your business now has a real deadline for migrating away from accepting unconfirmed txs if the risk of being defrauded concerns you, then enabling experimentation on test nets and not touching mainnet until a later release seems fairly fine to me -- similar to activating soft forks on test nets prior to activating it on mainnet.

> So I have a hard time imagining how it
> would change anything *immediately* on the network at large (without
> things like default on and/or preferential peering, ...), but I still
> believe it's an important step.
If we're instead trying to socialise the idea that relaying and mining full RBF txs on mainnet should be starting now, then I think that's exactly how this *would* change things almost immediately on the network at large. I think all it would take in practice to be able to repeatedly defraud businesses accepting unconfirmed txs is perhaps 5% or 10% of blocks to include full RBF txs [0] [1], and knowing some IP addresses to addnode so that your txs relayed to those miners. And if core devs are advocating that full RBF is ready now [2], and a patch to easily enable it is included in a bitcoin core release, why wouldn't some small pools start trying it out, leading to exactly that situation? If most of the network doesn't relay your full-rbf txs, then that's annoying for protocol developers who'd like to rely on it, but it's fine for an attacker: it just means the business you're trying to trick has less chance of noticing the attack before it's too late, because they'll be less likely to see the conflicting tx via both their own node or public explorers. Cheers, aj [0] A few months ago, Peter Todd reported switching an OTS calendar to do non-opt-in RBF, and didn't observe bumped txs being mined, which seems to indicate there's not much hash power currently mining full RBF. https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-June/020592.html [1] Also why I remain surprised that accepting zeroconf is safe enough in practice for anyone to do it. I suppose 5% of hashpower is perhaps $100M+ investment in ASICs and $900k/day in revenue, and perhaps all the current ways of enabling full RBF are considered too risky to mess around with at that level. [2] Antoine Riard's mail from June (that Peter's mail above was in reply to) announced such a public node, and encouraged miners to start adoption: "If you're a mining operator looking to increase your income, you might be interested to experiment with full-rbf as a policy." 
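The "5% or 10% of blocks" intuition in [0] and [1] above can be sanity-checked with a Monte Carlo sketch (purely illustrative assumptions: the attacker broadcasts a replacement after every zeroconf purchase, and wins exactly when the confirming block comes from a full-RBF pool):

```python
import random

# Estimate the fraction of zeroconf sales an always-replacing attacker
# steals when a fraction f of blocks accept non-signalling replacements.
def steal_fraction(f, trials=100_000, seed=1):
    rng = random.Random(seed)
    return sum(rng.random() < f for _ in range(trials)) / trials

for f in (0.05, 0.10):
    print(f, steal_fraction(f))  # roughly f: one sale in ~1/f is stolen
```

Since the attempts are nearly free for the attacker (a failed attempt just means the merchant got paid), even a few percent of full-RBF blocks makes systematic exploitation profitable, which is why a small pool adopting it changes the picture for zeroconf merchants.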
Presuming the IRC channel "##uafrbf" stands for "user-activated full rbf", that also seems in line with the goal being to socialise doing full RBF on mainnet immediately... https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-June/020557.html ___ bitcoin-dev mailing list bitcoin-dev@lists.linuxfoundation.org https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
Re: [bitcoin-dev] [Opt-in full-RBF] Zero-conf apps in immediate danger
On Tue, Oct 11, 2022 at 04:18:10PM +0000, Pieter Wuille via bitcoin-dev wrote:
> On Friday, October 7th, 2022 at 5:37 PM, Dario Sneidermanis via bitcoin-dev wrote:
> > Thanks for the fast answer! It seems I missed the link to the PR, sorry for
> > the confusion. I'm referring to the opt-in flag for full-RBF from #25353
> > (https://github.com/bitcoin/bitcoin/pull/25353).
> It is not clear to me why you believe the merging of this particular pull
> request poses an immediate risk to you.

Did you see the rest of Dario's reply, bottom-posted after the quoted text? Namely:

On Fri, Oct 07, 2022 at 06:37:38PM -0300, Dario Sneidermanis via bitcoin-dev wrote:
> The "activation" of full-RBF after deployment works in a pretty interesting
> way:
>
> 1. If no miner is running full-RBF or there aren't easily accessible
>    connected components of nodes running full-RBF connected to the miners,
>    then full-RBF is mostly ineffective since replacements aren't relayed
>    and/or mined.
> 2. There's a middle ground where *some* connected components of full-RBF
>    nodes can relay and mine replacements, where some full-RBF nodes will be
>    able to replace via full-RBF and some won't (depending on their peers).
> 3. With high enough adoption, the relay graph has enough density of full-RBF
>    nodes that almost all full-RBF nodes can replace transactions via
>    full-RBF.
>
> While there have been forks of Bitcoin Core (like Bitcoin Knots) running
> full-RBF for a while, today most nodes (by far) are running Bitcoin Core.
> So, Bitcoin Core adding an opt-in flag (ie. off by default) makes it easier
> to be picked up by most node operators. Making the flag opt-out (ie. on by
> default) would make it easier still. We are dealing with a gradient going
> from hard enough that we are still in 1, to easy enough that we get to 3.
>
> The question then is whether an opt-in flag for full-RBF will have enough
> adoption to get us from 1 to 2.
> If it isn't, then #25353 won't meet its
> objective of allowing nodes participating in multi-party funding protocols
> to assume that they can rely on full-RBF. If it is, then zero-conf
> applications will be at severe risk (per the logic in the initial email).

That logic seems reasonably sound to me:

 - if adding the option does nothing, then there's no point adding it, and
   no harm in restricting it to test nets only
 - if adding the option does do something, then businesses using zero-conf
   need to react immediately, or will go from approximately zero risk of
   losing funds, to substantial risk

(I guess having the option today may allow you to manually switch your node over to supporting fullrbf in future when the majority of the network supports it, without needing to do an additional upgrade in the meantime; but that seems like a pretty weak benefit)

Cheers,
aj
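Dario's 1 -> 2 -> 3 gradient is essentially a percolation question, and can be sketched with a toy relay graph (assumed topology: a random graph where every node picks 8 peers; real p2p topology differs): a replacement only propagates across links where *both* endpoints run full RBF, so what matters is the size of the largest full-RBF-only connected component.

```python
import random

# Toy percolation model of the full-RBF adoption gradient.
def largest_fullrbf_component(n, peers, adoption, seed=42):
    rng = random.Random(seed)
    fullrbf = [rng.random() < adoption for _ in range(n)]
    # random undirected graph: each node picks `peers` outbound peers
    adj = [set() for _ in range(n)]
    for a in range(n):
        for b in rng.sample(range(n), peers):
            if b != a:
                adj[a].add(b)
                adj[b].add(a)
    # largest connected component using only full-RBF <-> full-RBF links
    rbf = {i for i in range(n) if fullrbf[i]}
    best, seen = 0, set()
    for start in rbf:
        if start in seen:
            continue
        comp, todo = {start}, [start]
        while todo:
            cur = todo.pop()
            for nxt in adj[cur]:
                if nxt in rbf and nxt not in comp:
                    comp.add(nxt)
                    todo.append(nxt)
        seen |= comp
        best = max(best, len(comp))
    return best / max(len(rbf), 1)

# fraction of full-RBF nodes that can reach each other, by adoption rate
for adoption in (0.02, 0.10, 0.30):
    print(adoption, round(largest_fullrbf_component(2000, 8, adoption), 2))
```

At low adoption the full-RBF nodes sit in tiny isolated islands (stage 1); past a threshold a giant component forms and most of them can reach each other -- and any full-RBF miner inside it (stages 2 and 3). Preferential peering between full-RBF nodes, mentioned elsewhere in the thread, effectively raises the induced degree and lowers that threshold.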
Re: [bitcoin-dev] Packaged Transaction Relay
On Sun, Oct 09, 2022 at 12:00:04AM -0700, e...@voskuil.org wrote:
> On Sat, Oct 08, 2022, Anthony Towns via bitcoin-dev wrote:
> > > > > Protocol cannot be defined on an ad-hoc basis as a "courtesy"
> > > > BIPs are a courtesy in the first place.
> > > I suppose if you felt that you were the authority then this would be
> > > your perspective.
> > You seem to think that I'm arguing courtesy is not a good thing, or that
> > we couldn't use more of it?
> That is neither what I said nor implied. You were clearly dismissing the
> public process, not advocating for politeness.

And that is neither what I said nor implied, nor something I believe.

If you think courtesy is something that can be ignored in a public process,
I don't think you should expect much success.

If you'd like to actually participate in public standards development,
please feel free to make some technical comments on my proposals, or
others, or make your own proposal, either here or on github, or heck,
anywhere else. I mean, that's what I'd suggest anyway; I'm not your boss.
I promise to at least be entertainingly surprised if you make any progress
with your current approach though.

Cheers,
aj
Re: [bitcoin-dev] Packaged Transaction Relay
On Sat, Oct 08, 2022 at 12:58:35PM -0700, Eric Voskuil via bitcoin-dev wrote:
> > > Protocol cannot be defined on an ad-hoc basis as a "courtesy"
> > BIPs are a courtesy in the first place.
> I suppose if you felt that you were the authority then this would be your
> perspective.

You seem to think that I'm arguing courtesy is not a good thing, or that we
couldn't use more of it? If it helps: courtesy is a good thing, and we
could use more of it.

> The BIP process was created by Amir specifically because Bitcoin standards
> were being discussed and developed behind closed doors.

It definitely bothers me that Bitcoin development is not being discussed
out in the open as much as I would like, and to counter that, I try to
encourage people to post their ideas to this list, and write them up as a
BIP; and likewise try to do both myself as well.

But how much value do you think anyone's actually getting from posting
their development ideas to this list these days? Do you really think people
reading your mail will be more inspired to discuss their ideas in the open,
or that they'll prefer to get in a room with their friends and allies, and
close the doors so they can work in peace?

> > There's no central authority to enforce some particular way of doing
> > things.
> As if reaching consensus with other people implies a singular authority.

Reaching consensus with other people doesn't require putting a document in
some particular github repo, either. Which is a good thing, or the people
in control of that repo would become that singular authority.

> > If you think that the version restriction should be part of the BIP,
> > why not do a pull request? The BIP is still marked as "Draft".
> I did not implement and ship a deviation from the posted proposal.

You think BIP 155 is suboptimal, and would rather see it changed, no? But
if you won't put any effort into changing it (and how much effort do you
think a PR to change it to document it as being gated by version 70016
would be?), why do you imagine the people who are happy with the BIP as it
is would put any effort in?

> > > I doubt that anyone who's worked with it is terribly fond of Bitcoin's
> > > P2P protocol versioning. I've spent some time on a proposal to
> > > update it, though it hasn't been a priority. If anyone is
> > > interested in collaborating on it please contact me directly.

"contact me directly" and wanting something other than standards "being
discussed and developed behind closed doors" seems quite contradictory to
me.

(In my opinion, a big practical advantage of doing things in public is that
it's easy for people to contribute, even if it's not a particular priority,
and that it's also easy for someone new to take over, if the people
previously working on it decide they no longer have time for that
particular project)

> > Bottlenecking a proposal on someone who doesn't see it as a priority
> > doesn't seem smart?
> I didn't realize I was holding you up. As far as I've been able to gather,
> it hasn't been a priority for anyone. Yet somehow, on the same day that I
> posted the fact that I was working on it, it became your top priority.

It's not my top priority; it's just that writing a BIP and posting it
publicly is fundamentally no harder than writing an email to bitcoin-dev.
So since I'm willing to do one, why waste anyone's time by not also doing
the other? Would've been even easier if I'd remembered Suhas had already
written up a draft BIP two years ago...

And if I'm going to suggest you should post a patch to a BIP you think is
flawed, then not drafting a BIP to improve on a practice I think is flawed
would be pretty hypocritical, no?

(I didn't read what you said to imply that you were working on it, just
that you'd spent time thinking about it, were interested, and might do more
if people contacted you. If you have been working on it, why not do so in
public? You already have a public bips fork at
https://github.com/evoskuil/bips/branches -- how about just pushing your
work-in-progress there?)

(Ah, I also see now that I did contact you in Dec 2020/Jan 2021 on this
topic, but never received a response. Apologies; the above was meant as a
general statement in favour of just collaborating in public from the start
for the practical advantages I outline above, not a personal dig)

> > Here's what I think makes sense:
> > https://github.com/ajtowns/bips/blob/202210-p2pfeatures/bip-p2pfeatures.mediawiki
> Looks like you put about 10 minutes of thought into it. In your words,
> BIPs are a courtesy - feel free to do what you want.

So, you wrote a lot of stuff after this, but unless I missed it, it didn't
include any substantive criticism of the proposal, or specific suggestions
for changing it, or even any indication why you would have any difficulty
supporting/implementing it in the software you care about.

> Your contributions notwithstanding, you are in no place to exhibit such
> arrogance.

I don't understand what you thin
Re: [bitcoin-dev] Packaged Transaction Relay
On Wed, Oct 05, 2022 at 09:32:29PM -0700, Eric Voskuil via bitcoin-dev wrote:
> Protocol cannot be defined on an ad-hoc basis as a "courtesy"

BIPs are a courtesy in the first place. There's no central authority to
enforce some particular way of doing things.

> - and it's not exactly a courtesy to keep yourself from getting dropped by
> peers. It is not clear to me why such a comment would be accepted instead
> of specifying this properly.

If you think that the version restriction should be part of the BIP, why
not do a pull request? The BIP is still marked as "Draft".

> I doubt that anyone who's worked with it is terribly fond of Bitcoin's P2P
> protocol versioning. I've spent some time on a proposal to update it,
> though it hasn't been a priority. If anyone is interested in collaborating
> on it please contact me directly.

Bottlenecking a proposal on someone who doesn't see it as a priority
doesn't seem smart? Here's what I think makes sense:

https://github.com/ajtowns/bips/blob/202210-p2pfeatures/bip-p2pfeatures.mediawiki

Cheers,
aj
Re: [bitcoin-dev] Packaged Transaction Relay
On Tue, Oct 04, 2022 at 05:01:04PM -0700, Eric Voskuil via bitcoin-dev wrote:
> [Regarding bandwidth waste: I've pointed out in years past that
> breaking the Bitcoin versioning scheme creates a requirement that any
> unknown message type be considered valid. Up until a recently-deployed
> protocol change, it had always been possible to validate messages by
> type. I noticed recently that validating nodes have been dropping peers
> at an increasing rate (a consequence of that deployment). Despite being
> an undocumented compatibility break, it is now unfortunately a matter
> of protocol that a peer must allow its peers to waste its bandwidth to
> remain compatible - something which we should eliminate.]

The only message listed as not being preceded by a bumped version number in:

https://github.com/libbitcoin/libbitcoin-network/wiki/Protocol-Versioning

is addrv2 (though addrv2 is gated on mutual exchange of sendaddrv2, so it's
presumably the sendaddrv2 message at issue), however since [0] sendaddrv2
messages are only sent to nodes advertising version 70016 or later (same as
wtxidrelay). ADDRV2 was introduced May 20 2020 after the 0.20 branch, and
SENDADDRV2 gating was merged Dec 9 2020 and included from 0.21.0rc3 onwards.

[0] https://github.com/bitcoin/bitcoin/pull/20564

I'm only seeing "bytesrecv_per_msg.*other*" entries for nodes advertising a
version of 0.17 and 0.18, which I presume is due to REJECT messages (for
taproot txs, perhaps?). Otherwise, I don't think there are any unexpected
messages you should be receiving when advertising version 70015 or lower.

Cheers,
aj
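The gating described above is simple enough to sketch in a few lines (a simplification, not Bitcoin Core's actual code; the only fact taken from the thread is the 70016 threshold from PR #20564, shared with wtxidrelay):

```python
# Sketch of sendaddrv2 version gating (simplified; real nodes negotiate
# this during the version/verack handshake per BIP 155).
ADDRV2_MIN_VERSION = 70016  # same threshold as wtxidrelay, per PR #20564

def should_send_sendaddrv2(peer_version: int) -> bool:
    # Only announce addrv2 support to peers new enough to know the message;
    # an older peer (e.g. 70015) would otherwise see an unknown message type
    # and might drop the connection.
    return peer_version >= ADDRV2_MIN_VERSION

assert should_send_sendaddrv2(70016)
assert not should_send_sendaddrv2(70015)
```

This is why a node advertising 70015 or lower shouldn't be receiving sendaddrv2 at all from post-0.21.0rc3 peers.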
Re: [bitcoin-dev] bitcoin-inquistion: evaluating soft forks on signet
On Sun, Oct 02, 2022 at 03:25:19PM +, Michael Folkson via bitcoin-dev wrote:
> I'm also perfectly happy with the status quo of the default signet
> having block signers and gatekeepers for soft forks activated on the
> default signet. I'm more concerned with those gatekeepers being under
> pressure to merge unfinished, buggy soft fork proposals on the default
> signet which need to be reversed or changed disrupting all default
> signet users.

First, I think it's far better for signet miners to be under that pressure
than either mainnet miners or maintainers/devs of bitcoin core. Or for that
matter, users of bitcoin who are just trying to use bitcoin and not lose
their money to bank confiscation or central bank hyperinflation.

That's where we stand today, whether you look solely at historical
precedent (cltv, csv, segwit were only testable on blockstream's elements
alpha prior to being merged into core, and combined with confidential
assets, that's not really a 1:1 test environment; taproot wasn't really
testable anywhere prior to being merged into core), or you consider the
focus of people actively trying to get forks deployed currently (ctv has
been pushing for a merge [0], and considered trying to get users and miners
to adopt it [1]; likewise the great consensus cleanup first proposed a PR
for core [2] before posting a bip draft [3] and progress stopped when the
PR didn't move forwards; likewise drivechains/bip300's current deployment
approach is "do a uasf on mainnet"); or see sentiment such as [4].

[0] https://www.erisian.com.au/bitcoin-core-dev/log-2022-01-13.html#l-490
    https://rubin.io/bitcoin/2021/12/24/advent-27/
[1] https://rubin.io/bitcoin/2022/04/17/next-steps-bip119/
[2] https://github.com/bitcoin/bitcoin/pull/15482
[3] https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-March/016714.html
[4] https://twitter.com/CobraBitcoin/status/1570380739010793479

It's *great* that core maintainers, reviewers, devs, URSF advocates, etc
are able to resist pressure to merge bad things; what's not great is
directing the time and attention of researchers and devs and businesses who
are trying to come up with good things for bitcoin at something that
doesn't encourage useful forward progress.

But second, APO and CTV aren't being kept out of core because they're
"unfinished and buggy" per se (which isn't to say they aren't buggy or
shouldn't be kept out for that reason); at least in my view, they're being
kept out because of a combination of (a) it's not clear that they're
desirable to deploy on mainnet (whether at all, or in comparison to other
ways of obtaining similar functionality); and (b) prioritising reducing
risk on mainnet, vs improving the ability to test new ideas outside of
mainnet.

Bugs are much easier to deal with in comparison: you put a bunch of
testing/dev effort in to figure out what bugs there might be, then you
analyse them, then you fix them. If it were just a matter of finding and
fixing bugs, that's still hard, sure, but it's something we know how to do.

It's the broader questions that are trickier: eg, do we want CTV first, or
CTV+APO at the same time, or just APO first? do we want some subtle tweaks
to CTV or APO rules to make them better? do we want OP_TXHASH or OP_TX or
some other variant instead? do we want to skip the intermediate steps and
go straight to simplicity/lisp? do we want to never have anything that may
risk covenant-like behaviour ever? Without even an idea how to get answers
to those, it's not clear that it even makes sense to spend the time working
on finding/fixing obscure implementation bugs in the proposals.

(Ultimately, in my opinion, it's the same thing with drivechains and the
great consensus cleanup: are these ideas sensible to deploy on mainnet? If
the answer to that were a clear yes for either of them, then it would make
sense to work on merging them in core and activating on mainnet; but at
least to me, it's not clear whether the answer should be yes, yes after
some specific set of changes, or no. Nor is it clear what work would help
produce a clear answer)

I think breaking the loop there is helpful: get these ideas out on signet,
where finding and fixing bugs does matter and is worth doing, but where you
*don't* have to deal with deep existential questions because you're not
messing with a multi billion dollar system and committing to supporting the
feature for the entire future of humanity.

Then, if there are alternative approaches that people think might be
better, get them out on signet too so that you can do apples-to-apples
comparisons: see how much code they are to actually implement, how
convenient they are to build on, whether there are any unexpected
differences between theory and practice, etc. Then you can build up real
answers to "is this a sensible thing to deploy on mainnet?"

For that, to get things onto signet you really only need to establish:

* it's interesting enough to be worth spending time on
* it's gon
Re: [bitcoin-dev] bitcoin-inquistion: evaluating soft forks on signet
On Fri, Sep 16, 2022 at 05:15:45PM +1000, Anthony Towns via bitcoin-dev wrote:
> So that's the concept. For practical purposes, I haven't yet merged
> either CTV or APO support into the bitcoin-inquisition 23.0 branch yet

I've now merged CTV and updated my signet miner to enforce both CTV and APO
(though you'd need to be either lucky or put some effort into it to figure
out how to get a CTV/APO transaction relayed to my miner).

Updating APO to be compatible with CTV actually seems to have found a
previously unknown bug in the CTV PR against core [0], so that seems
productive at the very least.

[0] https://github.com/bitcoin-inquisition/bitcoin/pull/8
    https://github.com/bitcoin/bitcoin/pull/21702#pullrequestreview-1118047730

I've also mined a couple of test APO transactions [1]; both reusing an
APOAS signature [2], including demonstrating the case where a third party
can replay APO signatures to send funds from duplicate UTXOs to fees, by
spending those UTXOs in a single tx [3] [4].

[1] https://mempool.space/signet/address/tb1pesae595q3cpzp8j3gy07gussw9t9hh580xj027sfz6g8e530u3nqscn0yn
[2] "ec764a8ed632916868ca6dbfdc5bed95f74b83be62d01397aba7ec916982c6721c923fa22d29b5e0c4fddde0148233f0cf401758df23d8cc89c88f04beffc3c3c1"
    -- sighash of 0xc1 = ANYPREVOUTANYSCRIPT|ALL
    https://mempool.space/signet/tx/ee6f6eda93a3d80f4f65f2e1000334132c9a014b3ed3dec888fdcc1f3441f52c
    https://mempool.space/signet/tx/2cbcc4857e6ee8510d9479c01fbf133a9a2cde3f5c61ccf9439c69b7b83334ba
[3] https://github.com/bitcoin/bips/blob/master/bip-0118.mediawiki#Signature_replay
[4] https://mempool.space/signet/tx/53a9747546e378956072795351e8436cf704da72d235c8ac7560787b554a4d3f

Cheers,
aj
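For readers puzzling over the 0xc1 byte at the end of that APOAS signature: it decodes per BIP 118's sighash flag layout. A minimal sketch (the decoder function is hypothetical and ignores ANYONECANPAY for brevity; only the constants and the 0xc1 = ANYPREVOUTANYSCRIPT|ALL interpretation come from BIP 118):

```python
# Decoding BIP 118 sighash bytes (sketch). BIP 118 defines
# SIGHASH_ANYPREVOUT as the 0x40 bit (outpoint not committed) and
# SIGHASH_ANYPREVOUTANYSCRIPT as 0xc0 (neither outpoint nor script
# committed), combined with a base type in the low bits.

BASE_TYPES = {0x01: "ALL", 0x02: "NONE", 0x03: "SINGLE"}

def describe_sighash(byte: int) -> str:
    # Hypothetical helper, simplified: ANYONECANPAY (0x80 alone) not handled.
    base = BASE_TYPES[byte & 0x03]
    if byte & 0xc0 == 0xc0:
        return "ANYPREVOUTANYSCRIPT|" + base
    if byte & 0xc0 == 0x40:
        return "ANYPREVOUT|" + base
    return base

assert describe_sighash(0xc1) == "ANYPREVOUTANYSCRIPT|ALL"  # the byte above
assert describe_sighash(0x41) == "ANYPREVOUT|ALL"
```

The ANYSCRIPT variant is what makes the duplicate-UTXO replay in [3] possible: the signature commits to neither the outpoint nor the script, so it validates against any matching output.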
Re: [bitcoin-dev] bitcoin-inquistion: evaluating soft forks on signet
On Wed, Sep 28, 2022 at 11:48:32AM +, Michael Folkson via bitcoin-dev wrote:
> SegWit was added to a new testnet (Segnet) for testing rather than the
> pre-existing testnet and I think future soft fork proposals should follow
> a similar approach.

I think past history falls into a few groups:

* p2sh was briefly tested on testnet (and an alternative was tested on
  mainnet)
  https://bitcointalk.org/index.php?topic=58579.msg786939#msg786939

* cltv and csv were mostly tested on elements alpha (liquid precursor
  testnet); though they were activated on testnet 6 and 11 weeks prior to
  mainnet
  http://diyhpl.us/wiki/transcripts/gmaxwell-sidechains-elements/

* segwit was also tested via elements alpha, though in a different form to
  what was deployed for bitcoin (ie, the elements approach would have been
  a hard fork). because of the p2p changes (you need additional data to
  validate blocks post segwit), segwit had dedicated test networks, up to
  segnet4, from 1st Jan 2016 to 30th Mar 2016. segwit was activated on
  testnet on 13th May 2016, merged into core on 25th June 2016, and
  included in the 0.13.1 released on 27th October 2016. I couldn't find
  very good public references about segnet, and don't think it saw much
  use outside of people implementing segwit consensus features themselves.

* taproot was merged 15th October 2020 (#18267), and activated on signet
  as of genesis around 21st October 2020 (#20157). It was locked in on
  mainnet on 12th June 2021, activated on testnet on 8th July 2021, and
  activated on mainnet on 14th November 2021.

* CTV had ctv-signet created around 17th December 2020, but it wasn't
  really announced or encouraged until 17th Feb 2022. The core PR (#21702)
  was opened 16th April 2021.
  https://www.erisian.com.au/bitcoin-core-dev/log-2020-12-17.html#l-845
  https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-February/019925.html

* I think Drivechains has a DriveNet testnet (since 2018 or earlier?),
  though I don't see an explorer, and it looks like the bitcoin code it's
  based on predates taproot.

* Other than CTV, Drivechains and ideas being explored on Liquid, most
  other ideas for bitcoin consensus changes haven't really progressed past
  a gist, mailing list post, bip or draft patch to somewhere that you can
  actually experiment (great consensus cleanup, anyprevout, OP_TX/TXHASH,
  TLUV, SIGHASH_GROUP, PUSH_ANNEX, checkblockatheight, g'root/graftroot,
  etc...)

I think segnet was mostly used for development of segwit itself, rather
than testing or application development -- it was reset about once a month
as changes to the segwit design occurred, and after the design was
finalised, was active on testnet, either using -addnode to connect directly
to known segwit-enabled peers, or, eventually, with seed nodes updated and
filtering via the WITNESS feature. The 23rd June 2016 meeting logs have
some relevant discussion:

https://www.erisian.com.au/bitcoin-core-dev/log-2016-06-23.html#l-178

> Even if there is community consensus on what soft fork proposals should be
> added to the default signet today (which may or may not be case) I find it
> highly unlikely this will always be the case.

The point of doing it via signet and outside of core is there doesn't need
to be any community consensus on soft forks. Unlike mainnet, signet sBTC
isn't money, and it isn't permissionless; and unlike merging it into core,
there isn't a risk of a merge having unintended side effects impacting
mainnet operation.

> We then get into the situation where the block signers (currently AJ and
> Kalle) are the gatekeepers on what soft fork proposals are added.

Because signet mining is a closed set (determined by the first block after
genesis when the signetchallenge is locked in), signet soft forks always
have gatekeepers. If signet miners don't opt-in to new soft forks (by
upgrading their node to allow mempool acceptance according to new soft fork
rules, and thus allow inclusion in block templates; or, if they're running
with -acceptnonstdtxn, to reject txs that don't comply with the rules, so
that funds using the new rules aren't actually anyone can spend) then you
can't test the new soft fork rules on signet.

> I don't think it is fair on the signet block signers to put them in that
> position and I don't think it is wise to put other Bitcoin Core
> contributors/maintainers in the position of having to defend why some
> proposed soft forks are accessible on the default signet while others
> aren't.

So, I think it's more accurate to say signet miners are fundamentally
*already* in that position. They can delegate that power of course, saying
"we'll just implement the default rules for the software we run", but that
just moves the responsibility around.

> The default signet was a long term project to address the unreliability
> and weaknesses of testnet.

That's certainly one goal. For me, that's one facet of the b
Re: [bitcoin-dev] bitcoin-inquistion: evaluating soft forks on signet
On Sun, Sep 18, 2022 at 02:47:38PM -0400, Antoine Riard via bitcoin-dev wrote:
> Said succinctly, in the genesis of creative ideas, evaluation doesn't
> happen at a single clear point but all along the idea lifetime, where this
> evaluation is as much done by the original author than its peers and a
> wider audience.

Sure. I definitely didn't mean to imply a waterfall development model, or
that the phases wouldn't overlap etc.

> I would still expose a concern to not downgrade in the pure empiricism in
> matter of consensus upgrades. I.e, slowly emerging the norm of a working
> prototype running on bitcoin-inquisition` as a determining factor of the
> soundness of a proposal. E.g with "upgrading lightning to support eltoo",
> a running e2e won't save us to think the thousands variants of pinnings,
> the game-theory soundness of a eltoo as mechanism in face of congestions,
> the evolvability of APO with more known upgrades proposals or the
> implementation complexity of a fully fleshed-out state machine and more
> questions.

I agree here; but I think not doing prototypes also hinders thinking about
all the thousands of details in a fork. It's easy to handwave details away
when describing things on a whiteboard; and only realise they're trickier
than you thought when you go to implement things.

> E,g if one implements the "weird" ideas about changes in the block reward
> issuance schedule discussed during the summer, another one might not want
> "noise" interferences with new fee-bumping primitives as the miner
> incentives are modified.

(I don't think "miner incentives" are really something that can be
investigated on signet. You can assume how miners will respond to
incentives and program the mining software to act that way; but there's no
competitive pressure in signet mining so I don't think that really
demonstrates anything very much. Likewise, there's much less demand for
blockspace on signet than on mainnet, so it's probably hard to experiment
with "fee incentives" too)

> I hope the upcoming Contracting Primitives WG will be able to document and
> discuss some of the relevant experiments run on bitcoin-inquisition.

Likewise.

(Lots trimmed due to either agreeing with it or having nothing to add)

Cheers,
aj
Re: [bitcoin-dev] bitcoin-inquistion: evaluating soft forks on signet
On Fri, Sep 16, 2022 at 12:46:53PM -0400, Matt Corallo via bitcoin-dev wrote:
> On 9/16/22 3:15 AM, Anthony Towns via bitcoin-dev wrote:
> > As we've seen from the attempt at a CHECKTEMPLATEVERIFY activation
> > earlier in the year [0], the question of "how to successfully get soft
> > fork ideas from concept to deployment" doesn't really have a good
> > answer today.
> I strongly disagree with this.

Okay? "X is good" is obviously just a statement of opinion, so if you want
to disagree, that's obviously allowed. I also kind of feel like that's the
*least* interesting paragraph in the entire email to talk further about; if
you think the current answer's already good, then the rest of the mail's
just about (hopefully) making it better, which would be worthwhile anyway?

> Going back many, many years we've had many discussions about fork
> process, and the parts people (historically) agreed with tend to be:
> (1) come up with an idea
> (2) socialize the idea in the technical community, see if anyone comes up
> with any major issues or can suggest better ideas which solve the same
> use-cases in cleaner ways
> (3) propose the concrete idea with a more well-defined strawman,
> socialize that, get some kind of rough consensus in the loosely-defined,
> subjective, "technical community" (ie just ask people and adapt to
> feedback until you have found some kind of average of the opinions of
> people you, the fork-champion, think are reasonably well-informed!).
> (4) okay, admittedly beyond this is a bit less defined, but we can deal
> with it when we get there.
> Turns out, the issue today is a lack of champions following steps 1-3, we
> can debate what the correct answer is to step (4) once we actually have
> people who want to be champions who are willing to (humbly) push an idea
> forward towards rough agreement of the world of technical bitcoiners
> (without which I highly doubt you'd ever see broader-community consensus).

Personally, I think this is easily refuted by contradiction.

1) If we did have a good answer for how to progress a soft-fork, then the
great consensus cleanup [0] would have made more progress over the past
3.5 years. Maybe not all of the ideas in it were unambiguously good [1],
but personally, I'm convinced at least some of them are, and I don't think
I'm alone in thinking that. Even if the excuse is that its original
champion wasn't humble enough, there's something wrong with the process if
there doesn't exist some other potential champion with the right balance
of humility, confidence, interest and time who could have taken it over in
that timeframe.

2) Many will argue that CTV has already done steps (1) through (3) above:
certainly there's been an idea; it's been socialised through giving talks,
having discussion forums, having research workshops [2], and documenting
use cases; there's been a concrete implementation for years now, with a
test network that supports the proposed feature, and new tools that
demonstrate some of the proposed use cases; and while alternative
approaches have been suggested [3], none of them have even really made it
to step (2), let alone step (3).

So that leaves a few possibilities to my mind:

* CTV should be in step (4), and its lack of definition is a problem, and
  trying the "deal with it when we get there" approach is precisely what
  didn't work back in April.

* The evaluation process is too inconclusive: it should either be saying
  "CTV is not good enough, fix these problems", or "CTV hasn't sufficiently
  demonstrated its value/cost, work on X next", but it isn't.

* Parts (2) to (3) are too hard, and that's preventing alternatives from
  making progress, which in turn is preventing people from being able to
  decide whether CTV is the superior approach, or some alternative is.

But each of those possibilities indicates a significant problem with our
answer for developing soft forks.

I guess my belief is that it's mostly (2) and (3) being too hard (which
taproot overcame because many were excited about it, and CTV overcame
because Jeremy's put a lot of effort into it; but consensus cleanup, APO,
simplicity, TXHASH, etc have not similarly overcome at this point), which
leads to the evaluation process being inconclusive when plausible
alternatives exist.

(In particular, I think having the process be massively difficult is going
to naturally cause any "humble" champion to decide that they're not up to
the task of following the process through to completion)

Anyway, that's some additional reasons why I believe what I said above, in
case that's interesting. But like I said at the start, if you still
disagree, that's fine of course.

Cheer
[bitcoin-dev] bitcoin-inquistion: evaluating soft forks on signet
Subhead: "Nobody expects a Bitcoin Inquistion? C'mon man, *everyone* expects a Bitcoin Inquisition." As we've seen from the attempt at a CHECKTEMPLATEVERIFY activation earlier in the year [0], the question of "how to successfully get soft fork ideas from concept to deployment" doesn't really have a good answer today. Obviously, a centralised solution to this problem exists: we could establish a trusted group, perhaps one containing devs, industry representatives, investors, etc, have them review proposals and their implementations, and follow their lead when they decide that a proposal has met their standards and should be widely deployed. Some might even say "sipa is precisely that group". The problem with having a group of that nature isn't one of effectiveness, but rather that they are then vulnerable to pressure and corruption, which isn't desirable if we want everyone using Bitcoin to truly be peers, and often isn't desirable for the prospective members of the group either. So that's not something we should want people to volunteer for, nor is it a duty we should thrust on people. Or at least, that's my opinion, anyway. I think any alternative approach to doing consensus changes (while avoiding a chain split) has to look something like this: * propose an idea (research phase) * implement the idea (development phase) * demonstrate the idea is worthwhile (evaluation phase) * once everyone is convinced, activate (deployment phase) Without an evaluation phase that is thorough enough to convince (almost) everyone, I think deployment becomes controversial and perhaps effectively impossible (at least without some trusted leadership group). But with an evaluation phase that demonstrates to everyone who's interested that the proposal has actual value, minimal cost and no risk, I think activation could be fairly easy and straightforward. I contend that the most significant problem we have is in the "evaluation phase". 
How do you convince enough people that a change is sufficiently beneficial to justify the risk of messing with their money? If you're only trying to convince a few experts, then perhaps you can do that with papers and talks; but limiting the evaluation to only a few experts is effectively just falling back to the centralised approach. So I think that means that part of the "evaluation phase" should involve implementing real systems on top of the proposed change, so that you can demonstrate real value from the change. It's easy to say that "CTV can enable vaults" or "CTV can make opening a lightning channel non-interactive" -- but it's harder to go from saying something is possible to actually making it happen, so, at least to me, it seems reasonable to be skeptical of people claiming benefits without demonstrating they're achievable in practice. I contend the easiest way we could make it easy to demonstrate a soft fork working as designed is to deploy it on the default global signet, essentially as soon as it has a fully specified proposal and a reasonably high-quality implementation. The problem with that idea is that creates a conundrum: you can't activate a soft fork on the default signet without first merging the code into bitcoin core, you can't merge the code into bitcoin core until it's been fully evaluated, and the way you evaluate it is by activating it on the default signet? I think the weakest link in that loop is the first one: what if we did activate soft forks on the default signet prior to the code being merged into core? To that end, I'm proposing a fork of core that I'm calling "bitcoin-inquisition", with the idea that it branches from stable releases of core and adds support for proposed changes to consensus (CTV, ANYPREVOUT, TLUV, OP_CAT, etc...) and potentially also relay policy (relay changes are often implied by consensus changes, but also potentially things like package relay). 
https://github.com/bitcoin-inquisition/bitcoin/wiki https://github.com/bitcoin-inquisition/bitcoin/pulls

The idea being that if you're trying to work on "upgrading lightning to support eltoo", you can iterate through changes needed to consensus (via bitcoin-inquisition) and client apps (cln, lnd, eclair etc), while testing them in public (on signet) and having any/all the pre-existing signet infrastructure available (faucets, explorers etc) without having to redeploy it yourself. Having multiple consensus changes deployed in one place also seems like it might make it easier to compare alternative approaches (eg CTV vs ANYPREVOUT vs OP_TXHASH vs OP_TX, etc).

So that's the concept. For practical purposes, I haven't yet merged either CTV or APO support into the bitcoin-inquisition 23.0 branch, and before actually mining blocks I want to make the signet miner able to automatically detect/recover if the bitcoin-inquisition node either crashes or starts producing incompatible blocks. Anyway, I wanted to post the idea publicly, both to give folks an opportunity to poke holes in the
Re: [bitcoin-dev] Mock introducing vulnerability in important Bitcoin projects
On Thu, Nov 18, 2021 at 09:29:24PM +0100, Prayank via bitcoin-dev wrote: > After reading all the emails, personally experiencing review process > especially on important issues like privacy and security, re-evaluating > everything and considering the time I can spend on this, I have decided to do > this exercise for 3 projects with just 1 account. I have created a salted > hash for the username as you had mentioned in the first email: > f40bcb13dbcbf7b6245becb75586c22798ed7360cd9853572152ddf07a39 > 3 Bitcoin projects are Bitcoin Core (full node implementation), LND (LN > implementation) and Bisq (DEX). > Pull requests will be created in next 6 months. If vulnerability gets caught > during review, will publicly announce here that the project caught the PR and > reveal the de-commitment publicly. If not caught during review, will > privately reveal both the inserted vulnerability and the review failure via > the normal private vulnerability-reporting channels. A summary with all the > details will be shared later. It's now been nine months since this email, but I don't believe there's been any public report on this exercise. Does this mean that a vulnerability has been introduced in one or all of the named projects? Cheers, aj ___ bitcoin-dev mailing list bitcoin-dev@lists.linuxfoundation.org https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
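For reference, a salted-hash commit/reveal of the sort described above can be sketched as follows. This is purely illustrative: the actual hash function and salt format Prayank used were not specified, so SHA-256 and a 16-byte hex salt are assumptions here.

```python
import hashlib
import secrets

def commit(username: str) -> tuple[str, str]:
    # commit: publish the digest now, keep (username, salt) secret
    salt = secrets.token_hex(16)
    digest = hashlib.sha256((salt + username).encode()).hexdigest()
    return digest, salt

def verify(digest: str, username: str, salt: str) -> bool:
    # reveal: anyone can recompute the digest from the revealed values
    return hashlib.sha256((salt + username).encode()).hexdigest() == digest

d, s = commit("example-user")
assert verify(d, "example-user", s)
assert not verify(d, "someone-else", s)
```

Publishing the digest up front means the later reveal can't be retroactively changed, which is the point of the exercise's commitment.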
Re: [bitcoin-dev] Bitcoin covenants are inevitable
On Fri, Jun 03, 2022 at 06:39:34PM +, alicexbt via bitcoin-dev wrote: > Covenants on bitcoin will eventually be implemented with a soft fork. That's begging the question. The issue is whether we should allow such soft forks, or if the danger of losing coins to covenants and thus losing fungibility and the freedom to transact is too much of a risk, compared to whatever benefits the soft fork would bring. > **Why covenants are not contentious?** I think this actually misses the point: covenants are "contentious" because that term is usually used to describe a concept that's utterly counter to the point of bitcoin as a monetary instrument. We've taken the term and applied it for something that's different in key ways, which just ends up confusing and misleading. Using a traditional meaning, a "covenant" is an "agreement" but with an implication of permanency: whether that's because you're making a covenant with God that will bind your children and their children, or you're putting a covenant on your property that "runs with the land", eg: """A covenant is said to run with the land in the event that the covenant is annexed to the estate and cannot be separated from the land or the land transferred without it. Such a covenant exists if the original owner as well as each successive owner of the property is either subject to its burden or entitled to its benefit.""" [0] [0] https://legal-dictionary.thefreedictionary.com/covenant Sometimes people talk about "recursive covenants" in bitcoin, which I think is intended to imply something similar to the "runs with the land" concept above. But recursion in programming generally terminates (calculating "Fib(n) := if (n <= 1) then 1 else Fib(n-1) + Fib(n-2)" eg), while covenants that run with the land are often unable to be removed. 
I think a better programming analogy would be "non-terminating"; so for example, CTV is "recursive" in the sense that you can nest one CTV commitment inside another, but it does terminate, because you can only nest a finite number of CTV commitments inside another, due to computational limits. Covenants even have a racist history in the US (because of course they do), apparently, with covenants along the lines of "None of said land may be conveyed to, used, owned, or occupied by negroes as owners or tenants" [1] having been in use. Such covenants have apparently been ruled unenforceable by the Supreme Court, but are nevertheless often difficult or impossible to remove from the property despite that. Presumably we at least don't need to worry about somehow introducing racist opcodes in bitcoin, but if you're wondering why covenants are controversial, their historical use is relevant. [1] https://www.npr.org/2021/11/17/1049052531/racial-covenants-housing-discrimination Covenants are specifically undesirable if applied to bitcoin because they break fungibility -- if you have some covenant that "runs with the coin", then it's no longer true to say "1 BTC = 1 BTC" if such a covenant means the one on the left can't be used for a lightning channel or the one on the right can't be used to back an asset on eth or liquid. But that isn't what anyone's *trying* to do here. What we're trying to do is add temporary conditions that allow us to do smarter things than we currently can while the coin remains in our ownership -- for example protecting our own money by putting it in a "vault", or pooling funds with people we don't entirely trust. That often requires recursion in the first place (so that the vault or the pool doesn't disappear after the first transaction). And from there, it can be easy to prevent the recursion from terminating and turn what would otherwise be a temporary condition into a permanent one.
That was theoretically interesting in 2013 [2], and remains so today [3], but it isn't something that's *desirable* to apply to bitcoin. [2] https://bitcointalk.org/index.php?topic=278122.0 [3] https://www.wpsoftware.net/andrew/blog/cat-and-schnorr-tricks-ii.html That is: even if it becomes possible to create permanent non-terminating covenants that run with a coin on bitcoin, that isn't something you should do, and if you do, people should not (and almost certainly will not) accept those coins from you, unless you first find a way to remove that covenant. One significant difference between real estate covenants and anything we might have on bitcoin is the ability to be sure that once you receive ownership of bitcoin, that that ownership does not come with encumbrances you're unaware of. In particular, when purchasing real estate, you may have to do numerous legal searches to be sure that there isn't a covenant, easement or other encumbrance on your property; in bitcoin, you decide the exact set of encumbrances that will be placed on your coins when you create the receiving address that you use, and once the address is chosen, those conditions are fixed. (Though to be fair, they could reference external things, such a
Re: [bitcoin-dev] Security problems with relying on transaction fees for security
On Mon, Jul 11, 2022 at 08:21:40PM -0400, Russell O'Connor via bitcoin-dev wrote: > Oops, you are right. We need the bribe to be the output of the coinbase, > but due to the maturity rule, it isn't really a bribe. > Too bad coinbases cannot take other coinbase outputs as inputs to bypass > the maturity rule. Sufficiently advanced tx introspection could be used for this; spend the fees in the coinbase to address A, but also create a 0sat output via a regular tx to the scriptPubKey "1 CSV". Call that tx's txid B. The next miner claims the bribe B, by spending the 0sat output to itself with a 1-in, 1-out tx, with scriptPubKey C.

nVersion = 1
inputs = [txid=B, vout=0, scriptSig="", nSeq=1]
outputs = [value=0, scriptPubKey=C]
nLocktime = 0

Now we get back to A, and say that its scriptPubKey uses a script that takes "C" as input, has "B" hardcoded, calculates the txid of the tx above, call it D, and then uses tx introspection to check that one of the inputs of the tx has D as the txid. > I guess that means the bribe has to be by leaving transactions in the > mempool. You *could* make that work if you allow tx's to use the annex to commit to a recent block. That is, if you just mined block 740,000 and its hash was 0005f28764680afdbd8375216ff8f30b17eeb26bd98aac63, you construct a bribe tx paying to "OP_1", but when you sign it, you add "50ee070b4aa0d98aac63" as the annex (tag=ee, length=07, value[0:3]=height=0b4aa0=740k, value[3:]=d98aac63), and (via a soft fork) nodes then only consider that tx valid if the block at "height" ends in "d98aac63". There's then only a 1-in-4B chance that someone who extends a competitor to your block could claim the bribe, at a cost of 11 extra witness bytes. But such txs (and anything that descends from them) would become invalid with as little as a 1-block reorg, which would pretty much defeat the entire purpose of the maturity delay...
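To make the annex encoding above concrete, here's a sketch. Note this commitment scheme is hypothetical (it would need the described soft fork), and the block hash used is the one quoted above; only its last 4 bytes matter here.

```python
def block_commitment_annex(height: int, block_hash_hex: str) -> bytes:
    """Hypothetical annex committing to a recent block: the 0x50 annex
    prefix, then tag=0xee, a length byte, a 3-byte big-endian block
    height, and the last 4 bytes of that block's hash."""
    value = height.to_bytes(3, "big") + bytes.fromhex(block_hash_hex)[-4:]
    return bytes([0x50, 0xEE, len(value)]) + value

# block 740,000 (0x0b4aa0), hash ending in ...d98aac63
annex = block_commitment_annex(
    740000, "0005f28764680afdbd8375216ff8f30b17eeb26bd98aac63")
assert annex.hex() == "50ee070b4aa0d98aac63"
```

The 10-byte annex plus its witness-stack length byte gives the 11 extra witness bytes mentioned above.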
Cheers, aj
Re: [bitcoin-dev] Surprisingly, Tail Emission Is Not Inflationary
On Mon, Jul 11, 2022 at 08:56:04AM -0400, Erik Aronesty via bitcoin-dev wrote: > > Alternatively, losses could be at a predictable rate that's entirely > > different to the one Peter assumes. > No, peter only assumes that there *is* a rate. No, he assumes it's a constant rate. His integration step gives a different result if lambda changes with t: https://www.wolframalpha.com/input?i=dN%2Fdt+%3D+k+-+lambda%28t%29*N On Mon, Jul 11, 2022 at 12:59:53PM -0400, Peter Todd via bitcoin-dev wrote: > Give me an example of an *actual* inflation rate you expect to see, given a > disaster of a given magnitude. All I was doing was saying your proof is incorrect (or, rather, relies on a highly unrealistic assumption), since I hadn't seen anybody else point that out already. But even if the proof were correct, I don't think it provides a useful mechanism (since there's no reason to think miners gaining all the coins lost in a year will be sufficient for anything), and I don't really think the "security budget" framework (ie, that the percentage of total supply given to miners each year is what's important for security) you're implicitly relying on is particularly meaningful. So no, not particularly interested in diving into it any deeper. Cheers, aj
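The Wolfram Alpha link makes the point symbolically; a rough Euler integration shows it numerically too. All parameters here are arbitrary illustrations, chosen only to show that a time-varying lambda gives a different answer than a constant one.

```python
# Numerically integrate dN/dt = k - lambda(t)*N.
def integrate(k, lam, t_end=2000.0, dt=0.01, n0=0.0):
    n, t = n0, 0.0
    while t < t_end:
        n += (k - lam(t) * n) * dt
        t += dt
    return n

k = 1.0
n_const = integrate(k, lambda t: 0.01)               # converges toward k/0.01 = 100
n_vary = integrate(k, lambda t: 0.01 + 0.00001 * t)  # tracks k/lambda(t) downward instead
print(n_const, n_vary)
```

With constant lambda the supply settles at the fixed equilibrium k/lambda; with a slowly increasing lambda it chases a moving (and here, falling) target, so the "equilibrium supply" conclusion depends on the constant-rate assumption.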
Re: [bitcoin-dev] Security problems with relying on transaction fees for security
On Mon, Jul 11, 2022 at 11:12:52AM -0700, Bram Cohen via bitcoin-dev wrote: > If transaction fees came in at an even rate over time all at the exact same > level then they work fine for security, acting similarly to fixed block > rewards. Unfortunately that isn't how it works in the real world. That just becomes a market design question. There's been some trivial effort put into that for bitcoin (ie, getting people to actually choose fees based on the weight of their transaction, and having weight be the sole limiting factor for miners), but not a lot, and there's evidence both from previous times in Bitcoin's history and from altcoins that the market can support higher fees. Should we work on that today, though? It doesn't seem smart to me: the subsidy is already quite substantial ($6.5 billion USD per year at current prices) so raising fees to 10% of block reward would transfer another $650M USD from bitcoin users to miners (or ASIC manufacturers and electricity producers) each year, achieving what? Refuting some FUD? Cheers, aj
Re: [bitcoin-dev] Surprisingly, Tail Emission Is Not Inflationary
On Sat, Jul 09, 2022 at 08:46:47AM -0400, Peter Todd via bitcoin-dev wrote: > title: "Surprisingly, Tail Emission Is Not Inflationary" > Of course, this isn't realistic as coins are constantly being lost due to > deaths, forgotten passphrases, boating accidents, etc. These losses are > independent: This isn't necessarily true: if the losses are due to a common cause, then they'll be heavily correlated rather than independent; for example losses could be caused by a bug in a popular wallet/exchange software that sends funds to invalid addresses, or by a war or natural disaster that damages key storage hardware. They're also not independent over time -- people improve their key storage habits over time; eg switching to less buggy wallets/exchanges, validating addresses before using them, using distributed multisig to prevent a localised disaster from being catastrophic. > the *rate* of coin loss at time $$t$$ is > proportional to the total supply *at that moment* in time. This is the key assumption that produces the claimed result. If you're losing a constant fraction, x (Peter's \lambda), of Bitcoins each year, then as soon as the supply increases enough that the constant reward, k, corresponds to the constant fraction, ie k = x*N(t), then you've hit an equilibrium. (Likewise if you're losing more than you're increasing -- you just need to wait until N(t) decreases enough that you reach the same equilibrium point) You don't really need any fancy maths. But that assumption doesn't need to be true; coins could primarily be lost in "black swan" events (due to bugs, wars or disasters) rather than at a predictable rate -- with actions taken thereafter such that the same event repeating is no longer the same level of catastrophe, but instead another new black swan event is required to maintain the same loss rate. 
If that's the case, then the rate at which funds are lost will vary chaotically, leading to "inflationary" periods in between events, and comparatively strong deflationary shocks when these events occur. Alternatively, losses could be at a predictable rate that's entirely different to the one Peter assumes. One alternative predictable rate that seems plausible to me is if funds are lost due to people not being careful about small amounts, even though they are careful when amounts are larger. So when 10k BTC was worth $40, maybe it doesn't matter if you misplace a hard drive with 7500 BTC on it since that's only worth $30; but by the time 7500 BTC is worth $150M, maybe you take a bit more care with that, but are still not too worried if you lose 1.5mBTC, since that's also only worth $30. To mathematise that, perhaps there are K people holding Bitcoin, and with probability p, each loses $100 (in constant 2009 dollars say, so that we can ignore inflation) of that Bitcoin a year through carelessness. For an equilibrium to occur in that case, you need: N(t) + k - (100/P * Kp) = N(t) where P is the price of Bitcoin (again in constant 2009 dollars) and k is Peter's fixed tail subsidy. Simplifying gives: P = K * 100p/k But k and p are constant by assumption in this scenario, so equilibrium is reached only if price (P) is exactly proportional to number of users (K). That requires you to have a non-inflationary currency (supply is constant) with constant adoption (assume K doesn't change) that maintains a constant price (P=K*100p/k) in real terms even if the economy is otherwise expanding or contracting. More importantly, just from a goals point of view, x is something we should be finding ways to minimise over time, not leave constant. In fact, you could argue for an even stronger goal: "the real value held in BTC lost each year should decrease", that is, x should be decreasing faster than 1/(N(t)*P).
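Plugging illustrative numbers into that equilibrium condition (all values here are made up, just to check the algebra):

```python
# K holders each lose $100 worth of BTC per year with probability p;
# tail subsidy is k BTC/year. Equilibrium requires BTC lost per year
# to equal BTC issued per year, which gives P = K * 100p / k.
K = 1_000_000      # number of holders (arbitrary)
p = 0.02           # yearly probability of a $100 loss (arbitrary)
k = 10_000.0       # tail subsidy in BTC/year (arbitrary)

P = K * 100 * p / k            # equilibrium price in constant dollars
lost_btc = (100 / P) * K * p   # BTC carelessly lost per year at price P

assert P == 200.0
assert abs(lost_btc - k) < 1e-9
```

Note that doubling K doubles the equilibrium price P, which is the "price exactly proportional to number of users" requirement above.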
Cheers, aj
Re: [bitcoin-dev] Bitcoin covenants are inevitable
On Thu, Jul 07, 2022 at 10:12:41AM -0400, Peter Todd via bitcoin-dev wrote: > We should not imbue real technology with magical qualities. That's much more fun if you invert it, and take it as a mission statement. Advance technology sufficiently! > The fact of the matter is that the present amount of security is about 1.7% of > the total coin supply/year, and Bitcoin seems to be working fine. 1.7% is also > already an amount low enough that it's much smaller than economic volatility. > > Obviously 0% is too small. > > There's zero reason to stress about finding an "optimal" amount. An amount low > enough to be easily affordable, but non-zero, is fine. 1% would be fine; 0.5% > would probably be fine; 0.1% would probably be fine. For comparison, 0.1% of 21M BTC per annum is 0.4 BTC per block, which is about 50sat/vb if blocks are 800kvB on average. Doing that purely with fees seems within the ballpark of feasibility to me. 50sat/vb for a 200vb tx (roughly the size of a 2-in, 2-out p2wpkh/p2tr tx) is $2 at $20k/BTC, $10 at $100k/BTC, $100 at $1M/BTC etc. If the current block reward of ~1.7% pa of 19M at a price of $20k funds the current level of mining activity, then you'd expect a similar level of mining activity as today with reward at 0.1% pa of 21M at a price of ~$310k. Going by the halving schedule, the block subsidy alone will remain above 0.1% of supply until we hit the 0.39 BTC/block regime, in 2036, at which point it drops to ~0.0986% annualised. (I guess you could extend that by four years if you're willing to assume more than 1.5% of bitcoin supply has been permanently lost) Cheers, aj
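The arithmetic above is easy to double-check with a back-of-the-envelope script, using 52,560 blocks per year and the 800kvB average block size assumed in the post:

```python
BLOCKS_PER_YEAR = 6 * 24 * 365   # one block per 10 minutes
SUPPLY = 21_000_000              # BTC
SATS = 100_000_000               # sats per BTC

reward_btc = 0.001 * SUPPLY / BLOCKS_PER_YEAR     # 0.1% pa as per-block reward
fee_sat_per_vb = reward_btc * SATS / 800_000      # sat/vB at 800kvB blocks

def tx_cost_usd(price_usd, vsize=200):
    # cost of a ~200vB 2-in, 2-out tx at that feerate
    return vsize * fee_sat_per_vb / SATS * price_usd

print(reward_btc, fee_sat_per_vb, tx_cost_usd(20_000))
```

This reproduces the ~0.4 BTC/block, ~50 sat/vB, and ~$2-per-tx-at-$20k figures quoted above.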
Re: [bitcoin-dev] [PROPOSAL] OP_TX: generalized covenants reduced to OP_CHECKTEMPLATEVERIFY
On Tue, May 10, 2022 at 08:05:54PM +0930, Rusty Russell via bitcoin-dev wrote: > OPTX_SEPARATELY: treat fields separately (vs concatenating) > OPTX_UNHASHED: push on the stack without hashing (vs SHA256 before push) > OPTX_SELECT_OUTPUT_AMOUNT32x2*: sats out, as a high-low u31 pair > OPTX_SELECT_OUTPUT_SCRIPTPUBKEY*: output scriptpubkey Doing random pie-in-the-sky contract design, I had a case where I wanted to be able to say "update the CTV hash from committing to outputs [A,B,C,D,E] to outputs [A,B,X,D,E]". The approach above and the one CTV takes are somewhat awkward for that: * you have to include all of A,B,D,E in order to generate both hashes, which seems less efficient than a merkle path * proving that you're taking an output in its entirety, rather than, say, the last 12 bytes of C and the first 30 bytes of D, seems hard. Again, it seems like a merkle path would be better? This is more of an upgradability concern I think -- ie, only relevant if additional features like CAT or TLUV or similar are added; but both OP_TX and CTV seem to be trying to take upgradability into account in advance, so I thought this was worth raising. Cheers, aj
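To illustrate the merkle-path point above: if outputs were committed as leaves of a merkle tree, swapping C for X needs only C's path of sibling hashes, not the full contents of A, B, D, E. A toy sketch follows; the hashing scheme is made up for illustration and is not CTV's actual commitment structure.

```python
import hashlib

def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def merkle_root(leaves):
    level = [h(l) for l in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])   # duplicate the odd node out
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def root_from_path(leaf, path):
    # recompute the root from one leaf plus its sibling hashes
    acc = h(leaf)
    for sibling, sibling_is_left in path:
        acc = h(sibling + acc) if sibling_is_left else h(acc + sibling)
    return acc

old = [b"A", b"B", b"C", b"D", b"E"]
new = [b"A", b"B", b"X", b"D", b"E"]

# path for position 2 (output C): three sibling hashes, all derived from
# outputs that don't change, so the same path proves both roots
hA, hB, hD, hE = h(b"A"), h(b"B"), h(b"D"), h(b"E")
e2 = h(h(hE + hE) + h(hE + hE))
path = [(hD, False), (h(hA + hB), True), (e2, False)]

assert root_from_path(b"C", path) == merkle_root(old)
assert root_from_path(b"X", path) == merkle_root(new)
```

The same path also pins C's position and extent in the tree, which addresses the "last 12 bytes of C and first 30 bytes of D" concern: you can only substitute whole leaves.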
Re: [bitcoin-dev] Package Relay Proposal
On 24 May 2022 5:05:35 pm GMT-04:00, Gloria Zhao via bitcoin-dev wrote: >To clarify, in this situation, I'm imagining something like >A: 0 sat, 100vB >B: 1500 sat, 100vB >C: 0 sat, 100vB >X: 500 sat, 100vB >feerate floor is 3sat/vB > >With the algo: >> * is X alone above my fee rate? no, then forget it >> * otherwise, s := X.size, f := X.fees, R := [X] >> * for P = P1..Pn: >> * do I already have P? then skip to the next parent >> * s += P.size, f += P.fees, R += [P] >> * if f/s above my fee rate floor? if so, request all the txs in R > >We'd erroneously ask for A+B+C+X, but really we should only take A+B. >But wouldn't A+B also be a package that was announced for B? In theory, yes, but maybe it was announced earlier (while our node was down?) or had dropped from our mempool or similar, either way we don't have those txs yet. >Please lmk if you were imagining something different. I think I may be >missing something. That's what I was thinking, yes. So the other thing is what happens if the peer announcing packages to us is dishonest? They announce pkg X, say X has parents A B C and the fee rate is garbage. But actually X has parent D and the fee rate is excellent. Do we request the package from another peer, or every peer, to double check? Otherwise we're allowing the first peer we ask about a package to censor that tx from us? I think the fix for that is just to provide the fee and weight when announcing the package rather than only being asked for its info? Then if one peer makes it sound like a good deal you ask for the parent txids from them, dedupe, request, and verify they were honest about the parents. >> Is it plausible to add the graph in? Likewise, I think you'd have to have the graph info from many nodes if you're going to make decisions based on it and don't want hostile peers to be able to trick you into ignoring txs. Other idea: what if you encode the parent txs as a short hash of the wtxid (something like bip152 short ids? 
perhaps seeded per peer so collisions will be different per peer?) and include that in the inv announcement? Would that work to avoid a round trip almost all of the time, while still giving you enough info to save bw by deduping parents? > For a maximum 25 transactions, >23*24/2 = 276, seems like 36 bytes for a child-with-parents package. If you're doing short ids that's maybe 25*4B=100B already, then the above is up to 36% overhead, I guess. Might be worth thinking more about, but maybe more interesting with ancestors than just parents. >Also side note, since there are no size/count params, wondering if we >should just have "version" in "sendpackages" be a bit field instead of >sending a message for each version. 32 versions should be enough right? Maybe but a couple of messages per connection doesn't really seem worth arguing about? Cheers, aj -- Sent from my phone.
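For reference, the evaluation loop quoted earlier in this message can be sketched like so. The `Tx` type and function name are made-up stand-ins (fees in sats, sizes in vB), and the floor-check reading of "is X alone above my fee rate?" is my interpretation of the quoted algorithm.

```python
from collections import namedtuple

Tx = namedtuple("Tx", "txid fees size")

def txs_to_request(x, parents, have_txids, feerate_floor):
    """Given announced child x and its parents, decide what to request."""
    if x.fees / x.size < feerate_floor:
        return []                 # "is X alone above my fee rate? no, then forget it"
    s, f, request = x.size, x.fees, [x]
    for p in parents:
        if p.txid in have_txids:
            continue              # "do I already have P? then skip"
        s += p.size
        f += p.fees
        request.append(p)
    # request everything in R if the combined feerate clears the floor
    return request if f / s >= feerate_floor else []

# Gloria's example: floor 3 sat/vB; package feerate is 5 sat/vB, so we
# "erroneously" fetch all of A+B+C+X rather than just the A+B subset.
a, b, c = Tx("A", 0, 100), Tx("B", 1500, 100), Tx("C", 0, 100)
x = Tx("X", 500, 100)
assert len(txs_to_request(x, [a, b, c], set(), 3)) == 4
```

Deduping against the mempool falls out naturally: if A and B are already known, only C and X are counted, and at 2.5 sat/vB the remainder would be skipped.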
Re: [bitcoin-dev] Package Relay Proposal
On 23 May 2022 9:13:43 pm GMT-04:00, Gloria Zhao wrote: >> If you're asking for the package for "D", would a response telling you: >> txid_D (500 sat, 100vB) >> txid_A (0 sat, 100vB) >> txid_B (2000 sat, 100 vB) >> be better, in that case? Then the receiver can maybe do the logic >> themselves to figure out that they already have A in their mempool >> so it's fine, or not? >Right, I also considered giving the fees and sizes of each transaction in >the package in “pckginfo1”. But I don’t think that information provides >additional meaning unless you know the exact topology, i.e. also know if >the parents have dependency relationships between them. For instance, in >the {A, B, D} package there, even if you have the information listed, your >decision should be different depending on whether B spends from A. I don't think that's true? We already know D is above our fee floor so if B with A is also above the floor, we want them all, but also if B isn't above the floor, but all of them combined are, then we also do? If you've got (A,B,C,X) where B spends A and X spends A,B,C where X+C is below fee floor while A+B and A+B+C+X are above fee floor you have the problem though. Is it plausible to add the graph in? Cheers, aj -- Sent from my phone.
Re: [bitcoin-dev] Package Relay Proposal
On Wed, May 18, 2022 at 02:40:58PM -0400, Gloria Zhao via bitcoin-dev wrote: > > Does it make sense for these to be configurable, rather than implied > > by the version? > > … would it be better to either just not do sendpackages > > at all if you're limiting ancestors in the mempool incompatibly > Effectively: if you’re setting your ancestor/descendant limits lower than > the default, you can’t do package relay. I wonder if this might be > controversial, since it adds pressure to adhere to Bitcoin Core’s current > mempool policy? I would be happy to do it this way, though - makes things > easier to implement. How about looking at it the other way: if you're writing a protocol that's dependent on people seeing that a package as a whole pays a competitive feerate, don't you want to know in advance what conditions the network is going to impose on your transactions in order to consider them as a package? In that case, aren't the "depth" and "size" constraints things we should specify in a standard? (The above's not a rhetorical question; I'm not sure what the answer is. And even if it's "yes", maybe core's defaults should be reconsidered rather than standardised as-is) Worst case, you could presumably do a new package relay version with different constraints, if needed. > > > 5. If 'fRelay==false' in a peer's version message, the node must not > > >send "sendpackages" to them. If a "sendpackages" message is > > > received by a peer after sending `fRelay==false` in their version > > > message, the sender should be disconnected. > > Seems better to just say "if you set fRelay=false in your version > > message, you must not send sendpackages"? You already won't do packages > > with the peer if they don't also announce sendpackages. 
> I guess, theoretically, if you allow bloom filters with this peer, it’s > plausible they’re saying “fRelay=false, I’ll send you a bloom filter later, > and I’ll also want to talk about packages.” I was just meaning "it's okay to send VERSION fRelay=true then immediately send WTXIDRELAY then immediately send SENDPACKAGES" without having to first verify what the other guy's fRelay was set to. On the other hand, you do already have to verify the other guy's version is high enough, but it would be kind-of nice to move towards just announcing the features you support, and not having to make it a multistep negotiation... > > Maybe: "You must not send sendpackages unless you also send wtxidrelay" ? > Do you mean if we get a verack, and the peer sent “sendpackages” but not > “wtxidrelay,” we should disconnect them? Yes. > I have it as: we send a PCKG INV when this transaction’s feerate is above > the fee filter, but one or more of its parents don’t. I don’t think using > ancestor feerate is better. > See this counterexample: > https://raw.githubusercontent.com/glozow/bitcoin-notes/master/mempool_garden/abc_1parent_2kids.png > A (0fee) has 2 kids, B (3sat/vB) and C (20sat/vB), everything’s the same > vsize. Let’s say the fee filter is 3sat/vB. > If we do it based on ancestor feerate, we won’t send B. But B is actually > fine; C is paying for A. But that only works if the receiver also has C, in which case they also have A, and you don't need package relay to do anything with B? If they didn't have C already, then relaying {A,B} would be a waste of time, because {A,B} would be rejected as only paying 1.5sat/vB or whatever.. If you switch it to being:

A (0 sats, 200vB)
B (2000 sats, 200vB, spends A:0)
C (200 sats, 200vB)
D (1000 sats, 200vB, spends A:1, C:0)

then you get:

A alone = 0s/vB
B+A = 5s/vB
C alone = 1s/vB
D+C+A = 2s/vB
D+C = 3s/vB (B+A already at 5s/vB)

which I think recovers your point, while also having all the details only be dealing with direct parents.
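Spelling out the feerates in that example (fees in sats, sizes in vB):

```python
# fees and sizes per the A/B/C/D example above
txs = {"A": (0, 200), "B": (2000, 200), "C": (200, 200), "D": (1000, 200)}

def feerate(*names):
    fees = sum(txs[n][0] for n in names)
    size = sum(txs[n][1] for n in names)
    return fees / size   # sat/vB

assert feerate("A") == 0.0           # A alone = 0s/vB
assert feerate("B", "A") == 5.0      # B+A = 5s/vB
assert feerate("C") == 1.0           # C alone = 1s/vB
assert feerate("D", "C", "A") == 2.0 # D+C+A = 2s/vB
assert feerate("D", "C") == 3.0      # D+C = 3s/vB, once B+A has paid for A
```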
> > Are "getpckgtxns" / "pcktxns" really limited to packages, or are they > > just a general way to request a batch of transactions? > > Maybe call those messages "getbatchtxns" and "batchtxns" and allow them to > > be used more generally, potentially in ways unrelated to packages/cpfp? > Indeed, it’s a general way to request a batch of transactions. I’ll > highlight that it is “all or nothing,” i.e. if the sender is missing any of > them, they’ll just send a notfound. > The idea here was to avoid downloading any transactions that can’t be > validated right away. Right; maybe I should just be calling a "batch of packages to be validated together" a "tx package" in the first place. Maybe it would be worth emphasising that you should be expecting to validate all the txs you receive as a response to getpckgtxns (getpkgtxs :) all at the same time, and immediately upon receiving them? > > The "only be sent if both peers agreed to do package relay" rule could > > simply be dropped, I think. > Wouldn’t we need some way of saying “hey I support batchtxns?” Otherwise > you would have to guess by sending a request and waiting to see if it’s > ignored? Sure, perhaps I should have said leave tha
Re: [bitcoin-dev] Package Relay Proposal
On Tue, May 17, 2022 at 12:01:04PM -0400, Gloria Zhao via bitcoin-dev wrote: > New Messages > Three new protocol messages are added for use in any version of > package relay. Additionally, each version of package relay must define > its own inv type and "pckginfo" message version, referred to in this > document as "MSG_PCKG" and "pckginfo" respectively. See > BIP-v1-packages for a concrete example. The "PCKG" abbreviation threw me for a loop; isn't the usual abbreviation "PKG" ? > =sendpackages= > |version || uint32_t || 4 || Denotes a package version supported by the > node. > |max_count || uint32_t || 4 ||Specifies the maximum number of transactions > per package this node is > willing to accept. > |max_weight || uint32_t || 4 ||Specifies the maximum total weight per > package this node is willing > to accept. Does it make sense for these to be configurable, rather than implied by the version? I presume the idea is to cope with people specifying different values for -limitancestorcount or -limitancestorsize, but if people are regularly relaying packages around, it seems like it becomes hard to have those values really be configurable while being compatible with that? I guess I'm asking: would it be better to either just not do sendpackages at all if you're limiting ancestors in the mempool incompatibly; or alternatively, would it be better to do the package relay, then reject the particular package if it turns out too big, and log that you've dropped it so that the node operator has some way of realising "whoops, I'm not relaying packages properly because of how I configured my node"? > 5. If 'fRelay==false' in a peer's version message, the node must not >send "sendpackages" to them. If a "sendpackages" message is > received by a peer after sending `fRelay==false` in their version > message, the sender should be disconnected. Seems better to just say "if you set fRelay=false in your version message, you must not send sendpackages"? 
You already won't do packages with the peer if they don't also announce sendpackages. > 7. If both peers send "wtxidrelay" and "sendpackages" with the same >version, the peers should announce, request, and send package > information to each other. Maybe: "You must not send sendpackages unless you also send wtxidrelay" ? As I understand it, the two cases for the protocol flow are "I received an orphan, and I'd like its ancestors please" which seems simple enough, and "here's a child you may be interested in, even though you possibly weren't interested in the parents of that child". I think the logic for the latter is: * if tx C's fee rate is less than the peer's feefilter, skip it (will maybe treat it as a parent in some package later though) * if tx C's ancestor fee rate is less than the peer's feefilter, skip it? * look at the lowest ancestor fee rate for any of C's in-mempool parents * if that is higher than the peer's fee filter, send a normal INV * if it's lower than the peer's fee filter, send a PCKG INV Are "getpckgtxns" / "pcktxns" really limited to packages, or are they just a general way to request a batch of transactions? Particularly in the case of requesting the parents of an orphan tx you already have, it seems hard for the node receiving getpckgtxns to validate that the txs are related in some way; but also it doesn't seem very necessary? Maybe call those messages "getbatchtxns" and "batchtxns" and allow them to be used more generally, potentially in ways unrelated to packages/cpfp? The "only be sent if both peers agreed to do package relay" rule could simply be dropped, I think. > 4. The reciever uses the package information to decide how to request >the transactions. For example, if the receiver already has some of > the transactions in their mempool, they only request the missing ones. > They could also decide not to request the package at all based on the > fee information provided. 
Shouldn't the sender only be sending package announcements when they know the recipient will be interested in the package, based on their feefilter?

> =pckginfo1=
> {|
> | Field Name || Type || Size || Purpose
> |-
> |blockhash || uint256 || 32 || The chain tip at which this package is defined.
> |-
> |pckg_fee||CAmount||4|| The sum total fees paid by all transactions in the package.

CAmount in consensus/amount.h is an int64_t so shouldn't this be 8 bytes? If you limit a package to 101kvB, an int32_t is enough to cover any package with a fee rate of about 212 BTC/block or lower, though.

> |pckg_weight||int64_t||8|| The sum total weight of all transactions in the package.

The maximum block weight is 4M, and the default -limitancestorsize presumably implies a max package weight of 404k; seems odd to provide an int64_t rather than an int32_t here, which easily allows either of those values?

> 2. ''Only 1 child with unconfirmed parents.'' The package must consist
>    of one transaction and its unconf
Re: [bitcoin-dev] Speedy covenants (OP_CAT2)
On Thu, May 12, 2022 at 06:48:44AM -0400, Russell O'Connor via bitcoin-dev wrote: > On Wed, May 11, 2022 at 11:07 PM ZmnSCPxj wrote: > > So really: are recursive covenants good or...? > My view is that recursive covenants are inevitable. It is nearly > impossible to have programmable money without it because it is so difficult > to avoid. I think my answer is that yes they are good: they enable much more powerful contracting. Of course, like any cryptographic tool they can also be harmful to you if you misuse them, and so before you use them yourself you should put in the time to understand them well enough that you *don't* misuse them. Same as using a kitchen knife, or riding a bicycle, or swimming. Can be natural to be scared at first, too. > Given that we cannot have programmable money without recursive covenants > and given all the considerations already discussed regarding them, i.e. no > worse than being compelled to co-sign transactions, and that user generated > addresses won't be encumbered by a covenant unless they specifically > generate it to be, I do think it makes sense to embrace them. I think that's really the easy way to be sure *you* aren't at risk from covenants: just follow the usual "not your keys, not your coins" philosophy. The way you currently generate an address from a private key already guarantees that *your* funds won't be encumbered by any covenants; all you need to do is to keep doing that. And generating the full address yourself is already necessary with taproot: if you don't understand all the tapscript MAST paths, then even though you can spend the coin, one of those paths you don't know about might already allow someone to steal your funds. But if you generated the address, you (or at least your software) will understand everything and not include anything dangerous, so your funds really are safu. 
It may be that some people will refuse to send money to your address because they have some rule that says "I'll only send money to people who encumber all their funds with covenant X" and you didn't encumber your address in that way -- but that just means they're refusing to pay you, just as people who say "I'll only pay you off-chain via coinbase" or "I'll only pay you via SWIFT" won't send funds to your bitcoin address. Other examples might include "we only support segwit-v0 addresses not taproot ones", or "you're on an OFAC sanctions list so I can't send to you or the government will put me in prison" or "my funds are in a multisig with the government who won't pay to anyone who isn't also in a multisig with them". It does mean you still need people with the moral fortitude to say "no, if you can't pay me properly, we can't do business" though. Even better: in so far as wallet software will just ignore any funds sent to addresses that they didn't generate themselves according to the rules you selected, you can already kind of outsource that policy to your wallet. And covenants, recursive or otherwise, don't change that. For any specific opcode proposal, I think you still want to consider 1) how much you can do with it 2) how efficient it is to validate (and thus how cheap it is to use) 3) how easy it is to make it do what you want 4) how helpful it is at preventing bugs 5) how clean and maintainable the validation code is I guess to me CTV and APO are weakest at (1); CAT/CSFS falls down on (3) and (4); OP_TX is probably weakest at (5) and maybe not as good as we'd like at (3) and (4)? Cheers, aj ___ bitcoin-dev mailing list bitcoin-dev@lists.linuxfoundation.org https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
Re: [bitcoin-dev] What to expect in the next few weeks
On Mon, Apr 25, 2022 at 10:48:20PM -0700, Jeremy Rubin via bitcoin-dev wrote:
> Further, you're representing the state of affairs as if there's a great
> need to scramble to generate software for this, whereas there already are
> scripts to support a URSF that work with the source code I pointed to from
> my blog. This approach is a decent one, even though it requires two things,
> because it is simple. I think it's important that people keep this in mind
> because that is not a joke, the intention was that the correct set of check
> and balance tools were made available. I'd be eager to learn what,
> specifically, you think the advantages are of a separate binary release
> rather than a binary + script that can handle both cases?

The point of running a client with a validation requirement of "blocks must (not) signal" is to handle the possibility of there being a chain split, where your preferred ruleset ends up on the less-work side. Ideally that will be a temporary situation and other people will come to your side, switch their miners over etc, and your chain will go back to having the most work, and anyone who wasn't running a client with the opposite signalling requirement will reorg to your chain and ruleset.

But forkd isn't quite enough to do that reliably -- instead, you'll start disconnecting nodes who forward blocks to you that were built on the block you disconnected, and you'll risk ending up isolated: that's why bip8 recommends clients "should either use parameters that do not risk there being a higher work alternative chain, or specify a mechanism for implementations that support the deployment to preferentially peer with each other".

Also, in order to have other nodes reorg to your chain when it has more work, you don't want to exclusively connect to likeminded peers. That's less of a big deal though, since you only need one peer to forward the new chain to the compatible network to trigger all of them to reorg.
Being able to see the other chain has more work might be valuable in order to add some sort of user warning signal though: "the other chain appears to have maintained 3x as much hash power as the chain you are following".

In theory, using the `BLOCK_RECENT_CONSENSUS_CHANGE` flag to indicate unwanted signalling might make sense; then you could theoretically trigger on that to avoid disconnecting inbound peers that are following the wrong chain. There's already some code along those lines; but while I haven't checked recently, I think it ends up failing relatively quickly once an invalid chain has been extended by a few blocks, since they'll result in `BLOCK_INVALID_PREV` errors instead. The segwit UASF client took some care to try to make this work, fwiw. (As it stands, I think RECENT_CONSENSUS_CHANGE only really helps with avoiding disconnections if there's one or maybe two invalid blocks in a row from a random miner that's doing strange things, rather than if there's an active conflict resulting in a deliberate chain split).

On the other hand, if there is a non-trivial chain split, then everyone has to deal with splitting their coins across the different chains, presuming they don't want to just consider one or the other a complete write-off. That's already annoying; but for lightning funds I think it means the automation breaks down, and every channel in the network would need to be immediately closed on chain, as otherwise accepting state updates risks losing the value of your channel balance on whichever chain your lightning node is not following.

So to your original question: I think it's pretty hard to do all that stuff in a separate script, without updating the node software itself. More generally, while I think forkd *is* pretty much state of the art; I don't think it comes close to addressing all the problems that a chain split would create.
Maybe it's still worthwhile despite those problems if there's some existential threat to bitcoin, but (not) activating CTV doesn't seem to rise to that level to me.

Just my opinion, but: without some sort of existential threat, it seems better to take things slowly and hold off on changes until either pretty much everyone who cares is convinced that the change is a good idea and ready to go; or until someone has done the rest of the work to smooth over all the disruption a non-trivial chain split could cause. Of course, the latter option is a _lot_ of work, and probably requires consensus changes itself...

Cheers,
aj
Re: [bitcoin-dev] Speedy Trial
On Mon, Apr 25, 2022 at 11:26:09AM -0600, Keagan McClelland via bitcoin-dev wrote:
> > Semi-mandatory in that only "threshold" blocks must signal, so if
> only 4% or 9% of miners aren't signalling and the threshold is set
> at 95% or 90%, no blocks will be orphaned.
> How do nodes decide on which blocks are orphaned if only some of them have
> to signal, and others don't? Is it just any block that would cause the
> whole threshold period to fail?

Yes, exactly those. See [0] or [1].

[0] https://github.com/bitcoin/bips/blob/master/bip-0008.mediawiki#Mandatory_signalling
[1] https://github.com/bitcoin/bips/pull/1021
    (err, you apparently acked that PR)

Cheers,
aj
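In other words, during BIP 8's MUST_SIGNAL phase a non-signalling block is only invalid if it would make the threshold unreachable within the period. A Python sketch of my reading of that rule, with illustrative 90%-of-2016 parameters:

```python
# Sketch of semi-mandatory signalling: a non-signalling block is accepted only
# while the period can still reach the threshold. Parameters are illustrative.

PERIOD, THRESHOLD = 2016, 1815  # ~90% of a retarget period

def nonsignalling_block_valid(elapsed, count):
    """elapsed: blocks already seen this period; count: signalling blocks so far.
    True if a non-signalling block at this point can still be accepted."""
    remaining = PERIOD - (elapsed + 1)  # blocks left in the period after this one
    return count + remaining >= THRESHOLD
```

With these parameters up to 201 blocks (about 10%) may abstain before any block gets orphaned, matching the "only 4% or 9% aren't signalling" case above.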
Re: [bitcoin-dev] Speedy Trial
On Mon, Apr 25, 2022 at 10:11:45AM -0600, Keagan McClelland via bitcoin-dev wrote:
> > Under *any* other circumstance, when they're used to activate a bad soft
> > fork, speedy trial and bip8 are the same. If a resistance method works
> > against bip8, it works against speedy trial; if it fails against speedy
> > trial, it fails against bip8.
> IIRC one essential difference between ST (which is a variant of BIP9) and
> BIP8 is that since there is no mandatory signaling during the lockin
> period,

BIP8 doesn't have mandatory signaling during the lockin period, it has semi-mandatory [0] signalling during the must_signal period.

> you can't do a counter soft fork as easily.

The "counter" for bip8 activation is to reject any block during either the started or must_signal phases that would meet the threshold. In that case someone running bip8 might see blocks:

  [elapsed=2010, count=1813, signal=yes]
  [elapsed=2011, count=1813, signal=no]
  [elapsed=2012, count=1814, signal=yes]
  [elapsed=2013, count=1815, signal=yes, will-lockin!]
  [elapsed=2014, count=1816, signal=yes]
  [elapsed=2015, count=1816, signal=no]
  [elapsed=2016, count=1816, signal=no]
  [locked in!]

But running software to reject the soft fork, you would reject the elapsed=2013 block, and any blocks that build on it. You would wait for someone else to mine a chain that looked like:

  [elapsed=2013, count=1814, signal=no]
  [elapsed=2014, count=1814, signal=no]
  [elapsed=2015, count=1814, signal=no]
  [elapsed=2016, count=1814, signal=no]
  [failed!]

That approach works *exactly* the same with speedy trial. Jeremy's written code that does exactly this using the getdeploymentinfo rpc to check the deployment status, and the invalidateblock rpc to reject a block. See: https://github.com/JeremyRubin/forkd

The difference to bip8 with lot=true is that nodes running speedy trial will reorg to follow the resisting chain if it has the most work.
bip8 with lot=true nodes will not reorg to a failing chain, potentially creating an ongoing chain split, unless one group or the other gives up, and changes their software.

Cheers,
aj

[0] Semi-mandatory in that only "threshold" blocks must signal, so if only 4% or 9% of miners aren't signalling and the threshold is set at 95% or 90%, no blocks will be orphaned.
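The block-by-block "counter" rule from the example above reduces to a one-line check. This is a pure-logic sketch of my own (the names are mine, not forkd's, which works via the invalidateblock rpc rather than logic like this):

```python
# Sketch of the counter-soft-fork rule: a resisting node rejects any
# signalling-phase block whose signal would bring the period's count up to
# the lock-in threshold. THRESHOLD matches the example (count=1815 locks in).

THRESHOLD = 1815

def resister_accepts(count_so_far, signals):
    """True if a node resisting the deployment should accept this block."""
    new_count = count_so_far + (1 if signals else 0)
    return new_count < THRESHOLD
```

So in the example chain, the elapsed=2013 block (count 1814 before it, signalling) is the first one rejected.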
Re: [bitcoin-dev] Speedy Trial
On Sun, Apr 24, 2022 at 12:13:08PM +0100, Jorge Timón wrote:
> You're not even considering user resistance in your cases.

Of course I am. Again:

> > My claim is that for *any* bad (evil, flawed, whatever) softfork, then
> > attempting activation via bip8 is *never* superior to speedy trial,
> > and in some cases is worse.
> >
> > If I'm missing something, you only need to work through a single example
> > to demonstrate I'm wrong, which seems like it ought to be easy... But
> > just saying "I disagree" and "I don't want to talk about that" isn't
> > going to convince anyone.

The "some cases" where bip8 with lot=true is *worse* than speedy trial is when miners correctly see that a bad fork is bad. Under *any* other circumstance, when they're used to activate a bad soft fork, speedy trial and bip8 are the same. If a resistance method works against bip8, it works against speedy trial; if it fails against speedy trial, it fails against bip8.

> Sorry for the aggressive tone, but when people ignore some of my points
> repeatedly, I start to wonder if they do it on purpose.

Perhaps examine the beam in your own eye.

Cheers,
aj
Re: [bitcoin-dev] CTV Signet Parameters
On Thu, Apr 21, 2022 at 10:05:20AM -0500, Jeremy Rubin via bitcoin-dev wrote:
> I can probably make some show up sometime soon. Note that James' vault uses
> one at the top-level https://github.com/jamesob/simple-ctv-vault, but I
> think the second use of it (since it's not segwit wrapped) wouldn't be
> broadcastable since it's nonstandard.

The whole point of testing is so that bugs like "wouldn't be broadcastable since it's nonstandard" get fixed. If these things are still in the "interesting thought experiment" stage, but nobody but Jeremy is interested enough to start making them consistent with the proposed consensus and policy rules, it seems very premature to be changing consensus or policy rules.

> One case where you actually use less space is if you have a few different
> sets of customers at N different fee priority level. Then, you might need
> to have N independent batches, or risk overpaying against the customer's
> priority level. Imagine I have 100 tier 1 customers and 1000 tier 2
> customers. If I batch tier 1 with tier 2, to provide tier 1 guarantees
> I'd need to pay tier 1 rate for 10x the customers. With CTV, I can combine
> my batch into a root and N batch outputs. This eliminates the need for
> inputs, signatures, change outputs, etc per batch, and can be slightly
> smaller. Since the marginal benefit on that is still pretty small, having
> bare CTV improves the margin of byte wise saving.

Bare CTV only saves bytes when *spending* -- but this is when you're creating the 1100 outputs, so an extra 34 or 67 bytes of witness data seems fairly immaterial (0.05% extra vbytes?). It doesn't make the small commitment tx any smaller.
ie, scriptPubKey looks like:

 - bare ctv: [push][32 bytes][op_nop4]
 - p2wsh: [op_0][push][32 bytes]
 - p2tr: [op_1][push][32 bytes]

while witness data looks like:

 - bare ctv: empty scriptSig, no witness
 - p2wsh: empty scriptSig, witness = "[push][32 bytes][op_nop4]"
 - p2tr: empty scriptSig, witness = 33B control block, "[push][32 bytes][op_nop4]"

You might get more of a benefit from bare ctv if you don't pay all 1100 outputs in a single tx when fees go lower; but if so, you're also wasting quite a bit more block space in that case due to the intermediate transactions you're introducing, which makes it seem unlikely that you care about the extra 9 or 17 vbytes bare CTV would save you per intermediate tx...

I admit that I am inclined towards micro-optimising things to save those bytes if it's easy, which does incline me towards bare CTV; but the closest thing we have to real user data suggests that nobody's going to benefit from that possibility anyway.

> Even if we got rid of bare ctv, segwit v0 CTV would still exist, so we
> couldn't use OP_SUCCESSx there either. segwitv0 might be desired if someone
> has e.g. hardware modules or MPC Threshold Crypto that only support ECDSA
> signatures, but still want CTV.

If you desire new features, then you might have to upgrade old hardware that can't support them. Otherwise that would be an argument to never use OP_SUCCESSx: someone might want to use whatever new feature we might imagine on hardware that only supports ECDSA signatures.

Cheers,
aj
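A back-of-envelope check of those numbers, assuming BIP 141's 4-to-1 witness discount and simplifying away push/length-prefix framing:

```python
# Rough vbyte accounting for bare CTV vs p2wsh vs p2tr spends of the same
# CTV script. Framing details are simplified; numbers match the "9 or 17
# vbytes" comparison above.

CTV_SCRIPT = 1 + 32 + 1  # [push][32 bytes][OP_NOP4]

# scriptPubKey sizes are essentially identical for all three:
spk = {"bare": CTV_SCRIPT, "p2wsh": 1 + 1 + 32, "p2tr": 1 + 1 + 32}

# extra spend-side witness cost, in vbytes (witness bytes cost 1/4 vbyte):
witness_vbytes = {
    "bare":  0,                      # empty scriptSig, no witness
    "p2wsh": CTV_SCRIPT / 4,         # script revealed in the witness
    "p2tr":  (33 + CTV_SCRIPT) / 4,  # 33B control block + script
}

assert witness_vbytes["p2wsh"] == 8.5    # ~9 vbytes saved by bare CTV
assert witness_vbytes["p2tr"] == 16.75   # ~17 vbytes saved by bare CTV
```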
Re: [bitcoin-dev] Automatically reverting ("transitory") soft forks, e.g. for CTV
On Wed, Apr 20, 2022 at 03:04:53PM -1000, David A. Harding via bitcoin-dev wrote:
> The main criticisms I'm aware of against CTV seem to be along the following
> lines: [...]
> Could those concerns be mitigated by making CTV an automatically reverting
> consensus change with an option to renew? [...]

Buck O Perley suggested that "Many of the use cases that people are excited to use CTV for ([5], [6]) are very long term in nature and targeted for long term store of value in contrast to medium of exchange." But, if true, that's presumably incompatible with any sort of sunset that's less than many decades away, so doesn't seem much better than just having it be available on a signet?

[5] https://github.com/kanzure/python-vaults/blob/master/vaults/bip119_ctv.py
[6] https://github.com/jamesob/simple-ctv-vault

If sunsetting were a good idea, one way to think about implementing it might be to code it as:

    if (DeploymentActiveAfter(pindexPrev, params, FOO) &&
        !DeploymentActiveAfter(pindexPrev, params, FOO_SUNSET))
    {
        EnforceFoo();
    }

That is to have sunsetting the feature be its own soft-fork with pre-declared parameters that are included in the original activation proposal. That way you don't have to have a second process debate about how to go about (not) sunsetting the rules, just one on the merits of whether sunsetting is worth doing or not.

Cheers,
aj
Re: [bitcoin-dev] CTV Signet Parameters
On Wed, Apr 20, 2022 at 05:13:19PM +0000, Buck O Perley via bitcoin-dev wrote:
> All merits (or lack thereof depending on your view) of CTV aside, I find this
> topic around decision making both interesting and important. While I think I
> sympathize with the high level concern about making sure there are use cases,
> interest, and sufficient testing of a particular proposal before soft forking
> it into consensus code, it does feel like the attempt to attribute hard
> numbers in this way is somewhat arbitrary.

Sure. I included the numbers for falsifiability mostly -- so people could easily check if my analysis was way off the mark.

> For example, I think it could be reasonable to paint the list of examples you
> provided where CTV has been used on signet in a positive light. 317 CTV
> spends “out in the wild” before there’s a known activation date is quite a lot

Not really? Once you can make one transaction, it's trivial to make hundreds. It's more interesting to see if there's multiple wallets or similar that support it; or if one wallet has a particularly compelling use case.

> (more than taproot had afaik).

Yes; as I've said a few times now, I think we should have had more real life demos before locking taproot's activation in. I think that would have helped avoid bugs like Neutrino's [0] and made it easier for hardware wallets etc to have support for taproot as soon as it was active, without having to rush around adding library support at the last minute.

[0] https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-November/019589.html

Lightning's "two independent implementations" rule might be worth aspiring to, eg.

> If we don’t think it is enough, then what number of unique spends and use
> cases should we expect to see of a new proposal before it’s been sufficiently
> tested?

I don't really think that's the metric. I'd go for something more like:

1a) can you make transactions using the new feature with bitcoin-cli, eg createrawtransaction etc?
1b) can you make transactions using the new feature with some other library?
1c) can you make transactions using the new feature with most common libraries?
2) has anyone done a usable prototype of the major use cases of the new feature?

I think the answers for CTV are:

1a) no
1b) yes, core's python test suite, sapio
1c) no
2) no

Though presumably jamesob's simple ctv vault is close to being an answer for (2)?

For taproot, we had:

1a) yes, with difficulty [1]
1b) yes, core's python test suite; kalle's btcdeb sometimes worked too
1c) no
2) optech's python notebook [2] from its taproot workshops had demos for musig and degrading multisig via multiple merkle paths, though I think they were out of date with the taproot spec for a while

[1] https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-October/019543.html
[2] https://github.com/bitcoinops/taproot-workshop/

To some extent those things are really proxies for:

3) how well do people actually understand the feature?
4) are we sure the tradeoffs being made in this implementation of the feature, vs other implementations or other features actually make sense?
5) how useful is the feature?

I think we were pretty confident in the answers for those questions for taproot. At least personally, I'm still not super confident in the answers for CTV. In particular:

- is there really any benefit to doing it as a NOP vs a taproot-only opcode like TXHASH? Theoretically, sure, that saves some bytes; but as was pointed out on #bitcoin-wizards the other day, you can't express those outputs as an address, which makes them not very interoperable, and if they're not interoperable between, say, an exchange and its users trying to do a withdraw, how useful is that really ever going to be?
- the scriptSig commitments seem very kludgy; combining multiple inputs likewise seems kludgy

The continual push to rush activation of it certainly doesn't increase my confidence either.
Personally, I suspect it's counterproductive; better to spend the time answering questions and improving the proposal, rather than spending time going around in circles about activating something people aren't (essentially) unanimously confident about. > In absence of the above, the risk of a constantly moving bar I'd argue the bar *should* be constantly moving, in the sense that we should keep raising it. > To use your meme, miners know precisely what they’re mining for and what a > metric of success looks like which makes the risk/costs of attempting the PoW > worth it The difference between mining and R&D is variance: if you're competing for 50k blocks a year, you can get your actual returns to closely match your expected return, especially if you pool with others so your probability of success isn't miniscule -- for consensus dev, you can reasonably only work on a couple of projects a year, so your median return is likely $0, rat
Re: [bitcoin-dev] CTV Signet Parameters
On Wed, Apr 20, 2022 at 08:05:36PM +0300, Nadav Ivgi via bitcoin-dev wrote:
> > I didn't think DROP/1 is necessary here? Doesn't leaving the 32 byte hash
> on the stack evaluate as true?
> Not with Taproot's CLEANSTACK rule.

The CLEANSTACK rule is the same for segwit and tapscript though?

For p2wsh/BIP 141 it's "The script must not fail, and result in exactly a single TRUE on the stack." and for tapscript/BIP 342, it's "If the execution results in anything but exactly one element on the stack which evaluates to true with CastToBool(), fail."

CastToBool/TRUE is anything that's not false, false is zero (ie, any string of 0x00 bytes) or negative zero (a string of 0x00 bytes but with the high byte being 0x80).

Taproot has the MINIMALIF rule that means you have to use exactly 1 or 0 as the input to IF, but I don't think that's relevant for CTV.

Cheers,
aj
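A sketch of CastToBool as quoted above (my own translation of the rule, not Core's implementation):

```python
# CastToBool per the description above: false is an empty vector, any run of
# 0x00 bytes, or "negative zero" (0x00 bytes with the final/high byte 0x80).

def cast_to_bool(v: bytes) -> bool:
    for i, b in enumerate(v):
        if b != 0:
            # allow for negative zero: sign bit on the last byte, rest zero
            return not (i == len(v) - 1 and b == 0x80)
    return False
```

So a leftover 32-byte CTV hash on the stack is true unless it happens to be all zeros, which is why DROP/1 looks unnecessary for a single-element stack.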
Re: [bitcoin-dev] CTV Signet Parameters
On Thu, Feb 17, 2022 at 01:58:38PM -0800, Jeremy Rubin via bitcoin-dev wrote: > AJ Wrote (in another thread): > > I'd much rather see some real > > third-party experimentation *somewhere* public first, and Jeremy's CTV > > signet being completely empty seems like a bad sign to me. There's now been some 2,200 txs on CTV signet, of which (if I haven't missed anything) 317 have been CTV spends: - none have been bare CTV (ie, CTV in scriptPubKey directly, not via p2sh/p2wsh/taproot) - none have been via p2sh - 3 have been via taproot: https://explorer.ctvsignet.com/tx/f73f4671c6ee2bdc8da597f843b2291ca539722a168e8f6b68143b8c157bee20 https://explorer.ctvsignet.com/tx/7e4ade977db94117f2d7a71541d87724ccdad91fa710264206bb87ae1314c796 https://explorer.ctvsignet.com/tx/e05d828bf716effc65b00ae8b826213706c216b930aff194f1fb2fca045f7f11 The first two of these had alternative merkle paths, the last didn't. - 314 have been via p2wsh https://explorer.ctvsignet.com/tx/62292138c2f55713c3c161bd7ab36c7212362b648cf3f054315853a081f5808e (don't think there's any meaningfully different examples?) As far as I can see, all the scripts take the form: [PUSH 32 bytes] [OP_NOP4] [OP_DROP] [OP_1] (I didn't think DROP/1 is necessary here? Doesn't leaving the 32 byte hash on the stack evaluate as true? I guess that means everyone's using sapio to construct the txs?) I don't think there's any demos of jamesob's simple-ctv-vault [0], which I think uses a p2wsh of "IF n CSV DROP hotkey CHECKSIG ELSE lockcoldtx CTV ENDIF", rather than taproot branches. [0] https://github.com/jamesob/simple-ctv-vault Likewise I don't think there's any examples of "this CTV immediately; or if fees are too high, this other CTV that pays more fees after X days", though potentially they could be hidden in the untaken taproot merkle branches. I don't think there's any examples of two CTV outputs being combined and spent in a single transaction. I don't see any txs with nSequence set meaningfully; though most (all?) 
of the CTV spends seem to set nSequence to 0x00400000 which I think doesn't have a different effect from 0xfffffffe?

That looks to me like there's still not much practical (vs theoretical) exploration of CTV going on; but perhaps it's an indication that CTV could be substantially simplified and still get all the benefits that people are particularly eager for.

> I am unsure that "learning in public" is required --

For a consensus system, part of the learning is "this doesn't seem that interesting to me; is it actually valuable enough to others that the change is worth the risk it imposes on me?" and that's not something you can do purely in private.

One challenge with building a soft fork is that people don't want to commit to spending time building something that relies on consensus features and run the risk that they might never get deployed. But the reverse of that is also a concern: you don't want to deploy consensus changes and run the risk that they won't actually turn out to be useful.

Or, perhaps, to "meme-ify" it -- part of the "proof of work" for deploying a consensus change is actually proving that it's going to be useful. Like sha256 hashing, that does require real work, and it might turn out to be wasteful.

Cheers,
aj
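For the nSequence point above, a quick sketch of BIP 68's relative-locktime encoding shows why a type-flag-only value and a disable-flag value behave the same in practice (the helper and example values are mine, not from the signet data):

```python
# BIP 68 nSequence interpretation: bit 31 disables relative locktime entirely;
# bit 22 selects time-based (512s) vs block-based units; low 16 bits are the value.

DISABLE_FLAG = 1 << 31
TYPE_FLAG    = 1 << 22
VALUE_MASK   = 0xFFFF

def relative_locktime(n_sequence):
    """Return (enabled, time_based, value) per BIP 68."""
    if n_sequence & DISABLE_FLAG:
        return (False, False, 0)
    return (True, bool(n_sequence & TYPE_FLAG), n_sequence & VALUE_MASK)

# 0xfffffffe: disable flag set -> no relative locktime at all
# 0x00400000: enabled, time-based, value 0 -> satisfied immediately
```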
Re: [bitcoin-dev] Speedy Trial
On Fri, Apr 08, 2022 at 11:58:48AM +0200, Jorge Timón via bitcoin-dev wrote:
> On Wed, Mar 30, 2022 at 6:21 AM Anthony Towns wrote:
> > > Let's discuss those too. Feel free to point out how bip8 fails at some
> > > hypothetical cases speedy trial doesn't.
> > Any case where a flawed proposal makes it through getting activation
> > parameters set and released, but doesn't achieve supermajority hashpower
> > support is made worse by bip8/lot=true in comparison to speedy trial
> I disagree. Also, again, not the hypothetical case I want to discuss.

You just said "Let's discuss those" and "Feel free to point out how bip8 fails at some hypothetical cases speedy trial doesn't", now you're saying it's not what you want to discuss?

But the above does include your "evil soft fork" hypothetical (I mean, unless you think being evil isn't a flaw?). The evil soft fork gets proposed, and due to some failure in review, merged with activation parameters set (via either speedy trial or bip8), then:

a) supermajority hashpower support is achieved quickly:
   - both speedy trial and bip8+lot=true activate the evil fork
b) supermajority hashpower support is achieved slowly:
   - speedy trial does *not* activate the evil fork, as it times out quickly
   - bip8 *does* activate the fork
c) supermajority hashpower support is never achieved:
   - speedy trial does *not* activate the evil fork
   - bip8+lot=false does *not* activate the evil fork, but only after a long timeout
   - bip8+lot=true *does* activate the evil fork

In case (a), they both do the same thing; in case (b) speedy trial is superior to bip8 no matter whether lot=true/false since it blocks the evil fork, and in case (c) speedy trial is better than lot=false because it's quicker, and much better than lot=true because lot=true activates the evil fork.

> > > > 0') someone has come up with a good idea (yay!)
> > > > 1') most of bitcoin is enthusiastically behind the idea > > > > 2') an enemy of bitcoin is essentially alone in trying to stop it > > > > 3') almost everyone remains enthusiastic, despite that guy's > > incoherent > > > > raving > > > > 4') nevertheless, the enemies of bitcoin should have the power to stop > > > > the good idea > > > "That guy's incoherent raving" > > > "I'm just disagreeing". > > > > Uh, you realise the above is an alternative hypothetical, and not talking > > about you? I would have thought "that guy" being "an enemy of bitcoin" > > made that obvious... I think you're mistaken; I don't think your emails > > are incoherent ravings. > Do you realize IT IS NOT the hypothetical case I wanted to discuss. Yes, that's what "alternative" means: a different one. > I'm sorry, but I'm tired of trying to explain. and quite, honestly, you > don't seem interested in listening to me and understanding me at all, but > only in "addressing my concerns". Obviously we understand different things > by "addressing concerns". > Perhaps it's the language barrier or something. My claim is that for *any* bad (evil, flawed, whatever) softfork, then attempting activation via bip8 is *never* superior to speedy trial, and in some cases is worse. If I'm missing something, you only need to work through a single example to demonstrate I'm wrong, which seems like it ought to be easy... But just saying "I disagree" and "I don't want to talk about that" isn't going to convince anyone. I really don't think the claim above should be surprising; bip8 is meant for activating good proposals, bad ones need to be stopped in review -- as "pushd" has said in this thread: "Flawed proposal making it through activation is a failure of review process", and Luke's said similar things as well. The point of bip8 isn't to make it easier to reject bad forks, it's to make it easier to ensure *good* forks don't get rejected. 
But that's not your hypothetical, and you don't want to talk about all the ways to stop an evil fork prior to an activation attempt... Cheers, aj ___ bitcoin-dev mailing list bitcoin-dev@lists.linuxfoundation.org https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
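The (a)/(b)/(c) case analysis in the email above reduces to a small decision table. As an illustration, here is a toy Python sketch of it; the method/support/outcome labels are mine, not from the thread:

```python
# Toy tabulation of the activation outcomes argued above: which mechanisms
# end up activating a flawed ("evil") fork, depending on when supermajority
# hashpower support arrives, if ever.
def outcome(method, support):
    # method: "speedy_trial", "bip8_lot_false", or "bip8_lot_true"
    # support: "fast", "slow", or "never"
    if support == "fast":
        return "activates"                   # case (a): every method activates
    if method == "speedy_trial":
        return "rejected within months"      # the short trial times out first
    if support == "slow":
        return "activates"                   # case (b): bip8 is still waiting
    # support == "never", case (c):
    if method == "bip8_lot_true":
        return "activates at timeout"        # mandatory activation fires anyway
    return "rejected after long timeout"     # bip8 lot=false eventually fails
```

Under this toy model, speedy trial never does worse than bip8 at rejecting the flawed fork, matching the argument in the email.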
Re: [bitcoin-dev] Speedy Trial
On Mon, Mar 28, 2022 at 09:31:18AM +0100, Jorge Timón via bitcoin-dev wrote:
> > In particular, any approach that allows you to block an evil fork,
> > even when everyone else doesn't agree that it's evil, would also allow
> > an enemy of bitcoin to block a good fork, that everyone else correctly
> > recognises is good. A solution that works for an implausible hypothetical
> > and breaks when a single attacker decides to take advantage of it is
> > not a good design.
> Let's discuss those too. Feel free to point out how bip8 fails at some
> hypothetical cases speedy trial doesn't.

Any case where a flawed proposal makes it through getting activation parameters set and released, but doesn't achieve supermajority hashpower support, is made worse by bip8/lot=true in comparison to speedy trial. That's true both because of the "trial" part, in that activation can fail and you can go back to the drawing board without having to get everyone to upgrade a second time, and also the "speedy" part, in that you don't have to wait a year or more before you even know what's going to happen.

> > 0') someone has come up with a good idea (yay!)
> > 1') most of bitcoin is enthusiastically behind the idea
> > 2') an enemy of bitcoin is essentially alone in trying to stop it
> > 3') almost everyone remains enthusiastic, despite that guy's incoherent
> > raving
> > 4') nevertheless, the enemies of bitcoin should have the power to stop
> > the good idea
> "That guy's incoherent raving"
> "I'm just disagreeing".

Uh, you realise the above is an alternative hypothetical, and not talking about you? I would have thought "that guy" being "an enemy of bitcoin" made that obvious... I think you're mistaken; I don't think your emails are incoherent ravings.
It was intended to be the simplest possible case of where someone being able to block a change is undesirable: they're motivated by trying to harm bitcoin, they're as far as possible from being part of some economic majority, and they don't even have a coherent rationale to provide for blocking the idea.

Cheers,
aj
Re: [bitcoin-dev] Speedy Trial
On Thu, Mar 24, 2022 at 07:30:09PM +0100, Jorge Timón via bitcoin-dev wrote:
> Sorry, I won't answer to everything, because it's clear you're not listening.

I'm not agreeing with you; that's different to not listening to you.

> In the HYPOTHETICAL CASE that there's an evil fork, the fork being evil
> is a PREMISE of that hypothetical case, a GIVEN.

Do you really find people more inclined to start agreeing with you when you begin yelling at them? When other people start shouting at you, do you feel like it's a discussion that you're engaged in?

> Your claim that "if it's evil, good people would oppose it" is a NON
> SEQUITUR, "good people" aren't necessarily perfect and all knowing.
> good people can make mistakes, they can be fooled too.
> In the hypothetical case that THERE'S AN EVIL FORK, if "good people"
> don't complain, it is because they didn't realize that the given fork
> was evil. Because in our hypothetical example THE EVIL FORK IS EVIL BY
> DEFINITION, THAT'S THE HYPOTHETICAL CASE I WANT TO DISCUSS, not the
> hypothetical case where there's a fork some people think it's evil but
> it's not really evil.

The problem with that approach is that any solution we come up with doesn't only have to deal with the hypotheticals you want to discuss. In particular, any approach that allows you to block an evil fork, even when everyone else doesn't agree that it's evil, would also allow an enemy of bitcoin to block a good fork, that everyone else correctly recognises is good. A solution that works for an implausible hypothetical and breaks when a single attacker decides to take advantage of it is not a good design.

And I did already address what to do in exactly that scenario:

> > But hey what about the worst case: what if everyone else in bitcoin
> > is evil and supports doing evil things. And maybe that's not even
> > implausible: maybe it's not an "evil" thing per se, perhaps [...]
> > In that scenario, I think a hard fork is the best choice: split out a new
> > coin that will survive the upcoming crash, adjust the mining/difficulty
> > algorithm so it works from day one, and set it up so that you can
> > maintain it along with the people who support your vision, rather than
> > having to constantly deal with well-meaning attacks from "bitcoiners"
> > who don't see the risks and have lost the plot.
> >
> > Basically: do what Satoshi did and create a better system, and let
> > everyone else join you as the problems with the old one eventually become
> > unavoidably obvious.
> Once you understand what hypothetical case I'm talking about, maybe
> you can understand the rest of my reasoning.

As I understand it, your hypothetical is:

 0) someone has come up with a bad idea
 1) most of bitcoin is enthusiastically behind the idea
 2) you are essentially alone in discovering that it's a bad idea
 3) almost everyone remains enthusiastic, despite your explanations that it's a bad idea
 4) nevertheless, you and your colleagues who are aware the idea is bad should have the power to stop the bad idea
 5) bip8 gives you the power to stop the bad idea but speedy trial does not

Again given (0), I think (1) and (2) are already not very likely, and (3) is simply not plausible. But in the event that it does somehow occur, I disagree with (4) for the reasons I describe above; namely, that any mechanism that did allow that would be unable to distinguish between the "bad idea" case and something along the lines of:

 0') someone has come up with a good idea (yay!)
 1') most of bitcoin is enthusiastically behind the idea
 2') an enemy of bitcoin is essentially alone in trying to stop it
 3') almost everyone remains enthusiastic, despite that guy's incoherent raving
 4') nevertheless, the enemies of bitcoin should have the power to stop the good idea

And, as I said in the previous mail, I think (5) is false, independently of any of the other conditions.
> But if you don't understand the PREMISES of my example,

You can come up with hypothetical premises that invalidate bitcoin, let alone some activation method. For example, imagine if the Federal Reserve Board are full of geniuses and know exactly when to keep issuance predictable and when to juice the economy? Having flexibility gives more options than hardcoding "21M" somewhere, so clearly the USD's approach is the way to go, and everything is just a matter of appointing the right people to the board, not all this decentralised stuff.

The right answer is to reject bad premises, not to argue hypotheticals that have zero relationship to reality.

Cheers,
aj
Re: [bitcoin-dev] Speedy Trial
On Thu, Mar 17, 2022 at 03:04:32PM +0100, Jorge Timón via bitcoin-dev wrote:
> On Tue, Mar 15, 2022 at 4:45 PM Anthony Towns wrote:
> > On Fri, Mar 11, 2022 at 02:04:29PM +, Jorge Timón via bitcoin-dev wrote:
> > People opposed to having taproot transactions in their chain had over
> > three years to do that coordination before an activation method was merged
> > [0], and then an additional seven months after the activation method was
> > merged before taproot enforcement began [1].
> >
> > [0] 2018-01-23 was the original proposal, 2021-04-15 was when speedy
> > trial activation parameters for mainnet and testnet were merged.
> > [1] 2021-11-14
> People may be opposed only to the final version, but not the initial
> one or the fundamental concept.
> Please, try to think of worse case scenarios.

I mean, I've already spent a lot of time thinking through these worst case scenarios, including the ones you bring up. Maybe I've come up with wrong or suboptimal conclusions about it, and I'm happy to discuss that, but it's a bit hard to avoid taking offense at the suggestion that I haven't even thought about it.

In the case of taproot, the final substantive update to the BIP was PR#982 merged on 2020-08-27 -- so even if you'd only been opposed to the changes in the final version (32B pubkeys perhaps?) you'd have had 1.5 months to raise those concerns before the code implementing taproot was merged, and 6 months to raise those concerns before activation parameters were set. If you'd been following the discussion outside of the code and BIP text, in the case of 32B pubkeys, you'd have had an additional 15 months from the time the idea was proposed on 2019-05-22 (or 2019-05-29 if you only follow optech's summaries) until it was included in the BIP.

> Perhaps there's no opposition until after activation code has been
> released and miners are already starting to signal.
> Perhaps at that moment a reviewer comes and points out a fatal flaw.
Perhaps there's no opposition until the change has been deployed and in wide use for 30 years. Aborting activation isn't the be-all and end-all of addressing problems with a proposal, and it's not going to be able to deal with every problem. For any problems that can be found before the change is deployed and in use, you want to find them while the proposal is being discussed.

More broadly, what I don't think you're getting is that *any* method you can use to abort/veto/revert an activation that's occurring via BIP8 (with or without mandatory activation) can also be used to abort/veto/revert a speedy trial activation. Speedy trial simply changes two things: it allows a minority (~10%) of hashpower to abort the activation; and it guarantees a "yes" or "no" answer within three months, while with BIP343 you initially don't know when within a ~1 year period activation will occur.

If you're part of an (apparent) minority trying to abort/veto/reject activation, this gives you an additional option: if you can get support from ~10% of hashpower, you can force an initial "no" answer within three months, at which point many of the people who were ignoring your arguments up until then may be willing to reconsider them.

For example, I think Mark Friedenbach's concerns about unhashed pubkeys and quantum resistance don't make sense, and (therefore) aren't widely held; but if 10% of blocks during taproot's speedy trial had included a tagline indicating otherwise and prevented activation, that would have been pretty clear objective evidence that the concern was more widely held than I thought, and might be worth reconsidering. Likewise, there could have been other problems that somehow were being ignored, that could have similarly been reprioritised in the same way.

That's not the way that you *want* things to work -- ideally people should be raising the concerns beforehand, and they should be taken seriously and fixed or addressed beforehand.
That did happen with Mark's concerns -- heck, I raised it as a question ~6 hours after Greg's original taproot proposal -- and it's directly addressed in the rationale section of BIP341.

But in the worst case, maybe that doesn't happen. Maybe bitcoin-dev and other places are somehow being censored, or sensible critics are being demonised and ignored. The advantage of a hashrate veto here is that it's hard to fake and hard to censor -- whereas with mailing list messages and the like, it's both easy to fake (set up sockpuppets and pay troll farms) and easy to censor (ban/moderate people for spamming, say). So as a last ditch "we've been censored, please take us seriously" method of protest, it seems worthwhile to have to me.

(Of course, a 90% majority might *still* choose to not take the concerns of the 10% minority seriously, and just continue to ignore the concern and follow up with an immediate mandatory activation. But if that's what's happening, you can't stop it; you can only choose whether you want to be a part of it, or leave
Re: [bitcoin-dev] bitcoin scripting and lisp
On Wed, Mar 16, 2022 at 02:54:05PM +, ZmnSCPxj via bitcoin-dev wrote: > My point is that in the past we were willing to discuss the complicated > crypto math around cross-input sigagg in order to save bytes, so it seems to > me that cross-input compression of puzzles/solutions at least merits a > discussion, since it would require a lot less heavy crypto math, and *also* > save bytes. Maybe it would be; but it's not something I was intending to bring up in this thread. Chia allows any coin spend to reference any output created in the same block, and potentially any other input in the same block, and automatically aggregates all signatures in a block; that's all pretty neat, but trying to do all that in bitcoin in step one doesn't seem smart. > > > > I /think/ the compression hook would be to allow you to have the puzzles > > > > be (re)generated via another lisp program if that was more efficient > > > > than just listing them out. But I assume it would be turtles, err, > > > > lisp all the way down, no special C functions like with jets. > > > > Eh, you could use Common LISP or a recent-enough RnRS Scheme to write a > > > > cryptocurrency node software, so "special C function" seems to > > > > overprivilege C... > > Jets are "special" in so far as they are costed differently at the > > consensus level than the equivalent pure/jetless simplicity code that > > they replace. Whether they're written in C or something else isn't the > > important part. > > By comparison, generating lisp code with lisp code in chia doesn't get > > special treatment. > Hmm, what exactly do you mean here? This is going a bit into the weeds... > If I have a shorter piece of code that expands to a larger piece of code > because metaprogramming, is it considered the same cost as the larger piece > of code (even if not all parts of the larger piece of code are executed, e.g. > branches)? Chia looks at the problem differently to bitcoin. 
In bitcoin each transaction includes a set of inputs, and each of those inputs contains both a reference to a utxo which has a scriptPubKey, and a solution for the scriptPubKey called the scriptSig. In chia, each block contains a list of coins (~utxos) that are being spent, each of which has a hash of its puzzle (~scriptPubKey) which must be solved; each block then contains a lisp program that will produce all the transaction info, namely coin (~utxo id), puzzle reveal (~witness program) and solution (~witness stack); then to verify the block, you need to check the coins exist, the puzzle reveals all match the corresponding coin's puzzle, that the puzzle+solution executes successfully, and that the assertions that get returned by all the puzzle+solutions are all consistent. > Or is the cost simply proportional to the number of operations actually > executed? AIUI, the cost is the sum of the size of the program, as well as how much compute and memory is used to run the program. In comparison, the cost for an input with tapscript is the size of that input; memory usage has a fixed maximum (1000 elements in the stack/altstack, and 520 bytes per element); and compute resources are limited according to the size of the input. > It seems to me that lisp-generating-lisp compression would reduce the cost of > bytes transmitted, but increase the CPU load (first the metaprogram runs, and > *then* the produced program runs). 
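As a rough illustration of those verification steps, here is a hedged Python sketch; the hashlock "puzzle", the data layout, and all names are invented for the example and bear no relation to chia's real CLVM or APIs:

```python
import hashlib

def sha256(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def run_puzzle(puzzle_reveal: bytes, solution: bytes):
    """Stand-in for executing a lisp puzzle with its solution. The only
    puzzle type here is a hashlock b"h:<hash>" that succeeds when
    sha256(solution) matches; returns a condition list, or None on failure."""
    if puzzle_reveal[:2] == b"h:" and sha256(solution) == puzzle_reveal[2:]:
        return []  # success; no extra conditions in this toy
    return None

def verify_block(coin_set, spends):
    """coin_set: dict coin_id -> puzzle_hash of spendable coins.
    spends: list of (coin_id, puzzle_reveal, solution), standing in for the
    output of the block's generator program (elided here)."""
    for coin_id, reveal, solution in spends:
        if coin_id not in coin_set:              # 1. the coin must exist
            return False
        if sha256(reveal) != coin_set[coin_id]:  # 2. reveal matches coin's puzzle
            return False
        if run_puzzle(reveal, solution) is None: # 3. puzzle+solution executes
            return False
    return True  # 4. (cross-spend condition consistency elided in this toy)
```

Step 4 -- checking that the assertions returned by all the puzzle+solutions are mutually consistent -- is the part this sketch leaves out, since it depends on chia's condition language.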
In chia, you're always running the metaprogram, it may just be that that program is the equivalent of:

   stuff = lambda: [("hello", "world"), ("hello", "Z-man")]

which doesn't seem much better than just saying:

   stuff = [("hello", "world"), ("hello", "Z-man")]

The advantage is that you could construct a block template optimiser that rewrites the program to:

   def stuff():
       h = "hello"
       return [(h, "world"), (h, "Z-man")]

which for large values of "hello" may be worthwhile (and the standard puzzle in chia, at ~227 bytes, is large enough that that might well be worthwhile, since it implements taproot/graftroot logic from scratch).

> Over in that thread, we seem to have largely split jets into two types:
> * Consensus-critical jets which need a softfork but reduce the weight of the
> jetted code (and which are invisible to pre-softfork nodes).
> * Non-consensus-critical jets which only need relay change and reduces bytes
> sent, but keeps the weight of the jetted code.
> It seems to me that lisp-generating-lisp compression would roughly fall into
> the "non-consensus-critical jets", roughly.

It could do; but the way it's used in chia is consensus-critical. I'm not 100% sure how costing works in chia, but I believe a block template optimiser as above might allow miners to fit more transactions in their block and therefore collect more transaction fees. That makes the block packing problem harder though, since it means your transaction is "cheaper" if it's more similar to other transactions in the block. I don't think it's relevant today sin
Re: [bitcoin-dev] Beyond Jets: Microcode: Consensus-Critical Jets Without Softforks
On Tue, Mar 22, 2022 at 05:37:03AM +, ZmnSCPxj via bitcoin-dev wrote: > Subject: Beyond Jets: Microcode: Consensus-Critical Jets Without Softforks (Have you considered applying a jit or some other compression algorithm to your emails?) > Microcode For Bitcoin SCRIPT > > I propose: > * Define a generic, low-level language (the "RISC language"). This is pretty much what Simplicity does, if you optimise the low-level language to minimise the number of primitives and maximise the ability to apply tooling to reason about it, which seem like good things for a RISC language to optimise. > * Define a mapping from a specific, high-level language to > the above language (the microcode). > * Allow users to sacrifice Bitcoins to define a new microcode. I think you're defining "the microcode" as the "mapping" here. This is pretty similar to the suggestion Bram Cohen was making a couple of months ago: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-December/019722.html https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-January/019773.html https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-January/019803.html I believe this is done in chia via the block being able to include-by-reference prior blocks' transaction generators: ] transactions_generator_ref_list: List[uint32]: A list of block heights of previous generators referenced by this block's generator. - https://docs.chia.net/docs/05block-validation/block_format (That approach comes at the cost of not being able to do full validation if you're running a pruning node. 
The alternative is to effectively introduce a parallel "utxo" set -- where you're mapping the "sacrificed" BTC as the nValue and instead of just mapping it to a scriptPubKey for a later spend, you're permanently storing the definition of the new CISC opcode) > We can then support a "RISC" language that is composed of > general instructions, such as arithmetic, SECP256K1 scalar > and point math, bytevector concatenation, sha256 midstates, > bytevector bit manipulation, transaction introspection, and > so on. A language that includes instructions for each operation we can think of isn't very "RISC"... More importantly it gets straight back to the "we've got a new zk system / ECC curve / ... that we want to include, let's do a softfork" problem you were trying to avoid in the first place. > Then, the user creates a new transaction where one of > the outputs contains, say, 1.0 Bitcoins (exact required > value TBD), Likely, the "fair" price would be the cost of introducing however many additional bytes to the utxo set that it would take to represent your microcode, and the cost it would take to run jit(your microcode script) if that were a validation function. Both seem pretty hard to manage. "Ideally", I think you'd want to be able to say "this old microcode no longer has any value, let's forget it, and instead replace it with this new microcode that is much better" -- that way nodes don't have to keep around old useless data, and you've reduced the cost of introducing new functionality. Additionally, I think it has something of a tragedy-of-the-commons problem: whoever creates the microcode pays the cost, but then anyone can use it and gain the benefit. 
That might even end up creating centralisation pressure: if you design a highly decentralised L2 system, it ends up expensive because people can't coordinate to pay for the new microcode that would make it cheaper; but if you design a highly centralised L2 system, you can just pay for the microcode yourself and make it even cheaper. This approach isn't very composable -- if there's a clever opcode defined in one microcode spec, and another one in some other microcode, the only way to use both of them in the same transaction is to burn 1 BTC to define a new microcode that includes both of them. > We want to be able to execute the defined microcode > faster than expanding an `OP_`-code SCRIPT to a > `UOP_`-code SCRIPT and having an interpreter loop > over the `UOP_`-code SCRIPT. > > We can use LLVM. We've not long ago gone to the effort of removing openssl as a consensus critical dependency; and likewise previously removed bdb. Introducing a huge new dependency to the definition of consensus seems like an enormous step backwards. This would also mean we'd be stuck at the performance of whatever version of llvm we initially adopted, as any performance improvements introduced in later llvm versions would be a hard fork. > On the other hand, LLVM bugs are compiler bugs and > the same bugs can hit the static compiler `cc`, too, "Well, you could hit Achilles in the heel, so really, what's the point of trying to be invulnerable anywhere else?" > Then we put a pointer to this compiled function to a > 256-long array of functions, where the array index is > the `OP_` code. That's a 256-long array of functions for each microcode, which increases the "microcode-utxo" database
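Setting the LLVM question aside, the core "microcode as mapping" idea can be sketched in a few lines of Python; the opcode names, version numbers, and table layout here are purely illustrative, not ZmnSCPxj's actual proposal:

```python
# A "microcode" is a mapping from high-level OP_ codes to sequences of
# low-level UOP_ micro-instructions; script execution first expands each
# OP_ via the table, then runs the resulting micro-program.
UOP_DUP, UOP_SHA256, UOP_EQUALVERIFY = "uop_dup", "uop_sha256", "uop_equalverify"

MICROCODE = {
    0xA8: [UOP_SHA256],                            # OP_SHA256 as one micro-op
    0xFE: [UOP_DUP, UOP_SHA256, UOP_EQUALVERIFY],  # a hypothetical jetted op
}

def expand(script, microcode):
    """Expand a script (list of OP_ codes) into its micro-instruction form;
    an opcode absent from the microcode raises KeyError."""
    out = []
    for op in script:
        out.extend(microcode[op])
    return out
```

The composability complaint above is then easy to see in this model: two separate MICROCODE tables can't be combined for one script without defining (and paying for) a third table containing both.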
Re: [bitcoin-dev] Speedy Trial
On Fri, Mar 11, 2022 at 02:04:29PM +, Jorge Timón via bitcoin-dev wrote:
> > Thirdly, if some users insist on a chain where taproot is
> > "not activated", they can always soft-fork in their own rule that
> > disallows the version bits that complete the Speedy Trial activation
> > sequence, or alternatively soft-fork in a rule to make spending from (or
> > to) taproot addresses illegal.
> Since it's about activation in general and not about taproot specifically,
> your third point is the one that applies.
> Users could have coordinated to have "activation x" never activated in
> their chains if they simply make a rule that activating a given proposal
> (with bip8) is forbidden in their chain.
> But coordination requires time.

People opposed to having taproot transactions in their chain had over three years to do that coordination before an activation method was merged [0], and then an additional seven months after the activation method was merged before taproot enforcement began [1].

[0] 2018-01-23 was the original proposal, 2021-04-15 was when speedy trial activation parameters for mainnet and testnet were merged.
[1] 2021-11-14

For comparison, the UASF activation attempt for segwit took between 4 and 6 months to coordinate, assuming you start counting from either the "user activated soft fork" concept being raised on bitcoin-dev or the final params for BIP 148 being merged into the bips repo, and stop counting when segwit locked in.

> Please, try to imagine an example for an activation that you wouldn't like
> yourself. Imagine it gets proposed and you, as a user, want to resist it.

Sure. There's more steps than just "fork off onto a minority chain" though.

1) The first and most important step is to explain why you want to resist it, either to convince the proposers that there really is a problem and they should stand down, or so someone can come up with a way of fixing the proposal so you don't need to resist it.
Ideally, that's all that's needed to resolve the objections. (That's what didn't happen with opposition to segwit) 2) If that somehow doesn't work, and people are pushing ahead with a consensus change despite significant reasonable opposition; the next thing to do would be to establish if either side is a paper tiger and setup a futures market. That has the extra benefit of giving miners some information about which (combination of) rules will be most profitable to mine for. Once that's setup and price discovery happens, one side or the other will probably throw in the towel -- there's not much point have a money that other people aren't interested in using. (And that more or less is what happened with 2X) If a futures market like that is going to be setup, I think it's best if it happens before signalling for the soft fork starts -- the information miners will get from it is useful for figuring out how much resources to invest in signalling, eg. I think it might even be feasible to set something up even before activation parameters are finalised; you need something more than just one-on-one twitter bets to get meaningful price discovery, but I think you could probably build something based on a reasonably unbiassed oracle declaring an outcome, without precisely defined parameters fixed in a BIP. So if acting like reasonable people and talking it through doesn't work, this seems like the next step to me. 3) But maybe you try both those and they fail and people start trying to activate the soft fork (or perhaps you just weren't paying attention until it was too late, and missed the opportunity). I think the speedy trial approach here is ideal for a last ditch "everyone stays on the same chain while avoiding this horrible change" attempt. The reason being that it allows everyone to agree to not adopt the new rules with only very little cost: all you need is for 10% of hashpower to not signal over a three month period. 
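As a back-of-envelope check of that veto cost, using the ~10%/3-month speedy trial figure above and bip9's 95% signalling threshold over a ~12-month window (so ~5% non-signalling suffices to block):

```python
# Cumulative "hashpower-percent-months" a dissenting minority must sustain
# to block activation under each deployment scheme. Figures are the rough
# ones used in this thread, not precise consensus parameters.
def veto_cost(nonsignalling_percent, window_months):
    return nonsignalling_percent * window_months

speedy_trial_cost = veto_cost(10, 3)   # ~10% non-signalling for ~3 months
bip9_cost = veto_cost(5, 12)           # ~5% non-signalling for ~12 months
```

bip9_cost works out to twice speedy_trial_cost, i.e. blocking a bip9 deployment requires sustaining twice the cumulative hashpower.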
That's cheaper than bip9 (5% over 12 months requires 2x the cumulative hashpower), and much cheaper than bip8, which requires users to update their software.

4) At this point, if you were able to prevent activation, hopefully that's enough of a power move that people will take your concerns seriously, and you get a second chance at step (1). If that still results in an impasse, I'd expect there to be a second, non-speedy activation of the soft fork, that either cannot be blocked at all, or cannot be blocked without having control of at least 60% of hashpower.

5) If you weren't able to prevent activation (whether or not you prevented speedy trial from working), then you should have a lot of information:
 - you weren't able to convince people there was a problem
 - you either weren't in the economic majority and people don't think your concept of bitcoin is more valuable (
Re: [bitcoin-dev] bitcoin scripting and lisp
On Tue, Mar 08, 2022 at 03:06:43AM +, ZmnSCPxj via bitcoin-dev wrote: > > > They're radically different approaches and > > > it's hard to see how they mix. Everything in lisp is completely sandboxed, > > > and that functionality is important to a lot of things, and it's really > > > normal to be given a reveal of a scriptpubkey and be able to rely on your > > > parsing of it. > > The above prevents combining puzzles/solutions from multiple coin spends, > > but I don't think that's very attractive in bitcoin's context, the way > > it is for chia. I don't think it loses much else? > But cross-input signature aggregation is a nice-to-have we want for Bitcoin, > and, to me, cross-input sigagg is not much different from cross-input > puzzle/solution compression. Signature aggregation has a lot more maths and crypto involved than reversible compression of puzzles/solutions. I was more meaning cross-transaction relationships rather than cross-input ones though. > > I /think/ the compression hook would be to allow you to have the puzzles > > be (re)generated via another lisp program if that was more efficient > > than just listing them out. But I assume it would be turtles, err, > > lisp all the way down, no special C functions like with jets. > Eh, you could use Common LISP or a recent-enough RnRS Scheme to write a > cryptocurrency node software, so "special C function" seems to overprivilege > C... Jets are "special" in so far as they are costed differently at the consensus level than the equivalent pure/jetless simplicity code that they replace. Whether they're written in C or something else isn't the important part. By comparison, generating lisp code with lisp code in chia doesn't get special treatment. 
(You *could* also use jets in a way that doesn't impact consensus just to make your node software more efficient in the normal case -- perhaps via a JIT compiler that sees common expressions in the blockchain and optimises them eg) On Wed, Mar 09, 2022 at 02:30:34PM +, ZmnSCPxj via bitcoin-dev wrote: > Do note that PTLCs remain more space-efficient though, so forget about HTLCs > and just use PTLCs. Note that PTLCs aren't really Chia-friendly, both because chia doesn't have secp256k1 operations in the first place, but also because you can't do a scriptless-script because the information you need to extract is lost when signatures are non-interactively aggregated via BLS -- so that adds an expensive extra ECC operation rather than reusing an op you're already paying for (scriptless script PTLCs) or just adding a cheap hash operation (HTLCs). (Pretty sure Chia could do (= PTLC (pubkey_for_exp PREIMAGE)) for preimage reveal of BLS PTLCs, but that wouldn't be compatible with bitcoin secp256k1 PTLCs. You could sha256 the PTLC to save a few bytes, but I think given how much a sha256 opcode costs in Chia, that that would actually be more expensive?) None of that applies to a bitcoin implementation that doesn't switch to BLS signatures though. > > But if they're fully baked into the scriptpubkey then they're opted into by > > the recipient and there aren't any weird surprises. > This is really what I kinda object to. > Yes, "buyer beware", but consider that as the covenant complexity increases, > the probability of bugs, intentional or not, sneaking in, increases as well. > And a bug is really "a weird surprise" --- xref TheDAO incident. Which is better: a bug in the complicated script code specified for implementing eltoo in a BOLT; or a bug in the BIP/implementation of a new sighash feature designed to make it easy to implement eltoo, that's been soft-forked into consensus? 
Seems to me, that it's always better to have the bug be at the wallet level, since that can be fixed by upgrading individual wallet software. > This makes me kinda wary of using such covenant features at all, and if stuff > like `SIGHASH_ANYPREVOUT` or `OP_CHECKTEMPLATEVERIFY` are not added but must > be reimplemented via a covenant feature, I would be saddened, as I now have > to contend with the complexity of covenant features and carefully check that > `SIGHASH_ANYPREVOUT`/`OP_CHECKTEMPLATEVERIFY` were implemented correctly. > True I also still have to check the C++ source code if they are implemented > directly as opcodes, but I can read C++ better than frikkin Bitcoin SCRIPT. If OP_CHECKTEMPLATEVERIFY (etc) is implemented as a consensus update, you probably want to review the C++ code even if you're not going to use it, just to make sure consensus doesn't end up broken as a result. Whereas if it's only used by other people's wallets, you might be able to ignore it entirely (at least until it becomes so common that any bugs might allow a significant fraction of BTC to be stolen/lost and indirectly cause a systemic risk). > Not to mention that I now have to review both the (more complicated due to > more general) covenant feature implementation, *and* the implementation of > `SIGHASH_ANYPREVOUT`/`OP_CHECKT
Re: [bitcoin-dev] bitcoin scripting and lisp
On Tue, Mar 08, 2022 at 06:54:56PM -0800, Bram Cohen via bitcoin-dev wrote:
> On Mon, Mar 7, 2022 at 5:27 PM Anthony Towns wrote:
> > One way to match the way bitcoin does things would be to have the
> > "list of extra conditions" encoded explicitly in the transaction via
> > the annex, and then check the extra conditions when the script is
> > executed.

> The conditions are already basically what's in transactions. I think
> the only thing missing is the assertion about one's own id, which could
> be added in by, in addition to passing the scriptpubkey the transaction
> it's part of, also passing in the index of the input which it itself is.

To redo the singleton pattern in bitcoin's context, I think you'd have to pass in both the full tx you're spending (to be able to get the txid of its parent) and the full tx of its parent (to be able to get the scriptPubKey that your utxo spent), which seems klunky but at least possible. (You'd be able to drop the witness data at least; without that, every tx would be including the entire history of the singleton.)

> > > A nice side benefit of sticking with the UTXO model is that the
> > > soft fork hook can be that all unknown opcodes make the entire
> > > thing automatically pass.

> > I don't think that works well if you want to allow the spender (the
> > puzzle solution) to be able to use opcodes introduced in a soft-fork
> > (eg, for graftroot-like behaviour)?

> This is already the approach to soft forking in Bitcoin script and I
> don't see anything wrong with it.

It's fine in Bitcoin script, because the scriptPubKey already commits to all the opcodes that can possibly be used for any particular output. With a lisp approach, however, you could pass in additional code fragments to execute.
For example, where you currently say:

    script: [pubkey] CHECKSIG
    witness: [64B signature] [0x83]

(where 0x83 is SINGLE|ANYONECANPAY), you might translate that to:

    script: (checksig pubkey (bip342-txmsg 3) 2)
    witness: signature 0x83

where "3" grabs the sighash byte, and "2" grabs the signature. But you could also translate it to:

    script: (checksig pubkey (sha256 3 (a 3)) 2)
    witness: signature (bip342-txmsg 0x83)

where "a 3" takes "(bip342-txmsg 0x83)" and then evaluates it, and (sha256 3 (a 3)) makes sure you've signed off on both how the message was constructed as well as what the message was. The advantage there is that the spender can then create their own signature hashes however they like, even ones that hadn't been thought of when the output was created.

But what if we later soft-fork in a bip118-txmsg for quick and easy ANYPREVOUT-style signatures, and want to use that instead of custom lisp code? You can't just stick (softfork C (bip118-txmsg 0xc3)) into the witness, because it will evaluate to nil and you won't be signing anything. But you *could* change the script to something like:

    script: (softfork C (q checksigverify pubkey (a 3) 2))
    witness: signature (bip118-txmsg 0xc3)

But what happens if the witness instead has:

    script: (softfork C (q checksigverify pubkey (a 3) 2))
    witness: fake-signature (fakeopcode 0xff)

If softfork is just doing a best effort for whatever opcodes it knows about, and otherwise succeeding, then it has to succeed, and your script/output has become anyone-can-spend. On the other hand, if you could tell the softfork op that you only wanted ops up to and including the 118 soft fork, then it could reject fakeopcode and fail the script, which I think gives the desirable behaviour.

Cheers,
aj

___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
Re: [bitcoin-dev] bitcoin scripting and lisp
On Sun, Mar 06, 2022 at 10:26:47PM -0800, Bram Cohen via bitcoin-dev wrote:
> > After looking into it, I actually think chia lisp [1] gets pretty
> > much all the major design decisions pretty much right. There are
> > obviously a few changes needed given the differences in design
> > between chia and bitcoin:

> Bitcoin uses the UTXO model as opposed to Chia's Coin Set model. These
> are close enough that it's often explained as "Chia uses the UTXO
> model", but that isn't technically true. Relevant to the above comment
> is that in the UTXO model transactions get passed to a scriptpubkey and
> it either assert-fails or it doesn't, while in the coin set model each
> puzzle (scriptpubkey) gets run and either assert-fails or returns a
> list of extra conditions it has, possibly including timelocks, creating
> new coins, paying fees, and other things.

One way to match the way bitcoin does things would be to have the "list of extra conditions" encoded explicitly in the transaction via the annex, and then check the extra conditions when the script is executed.

> If you're doing everything from scratch it's cleaner to go with the
> coin set model, but retrofitting onto existing Bitcoin it may be best
> to leave the UTXO model intact and compensate by adding a bunch more
> opcodes which are special to parsing Bitcoin transactions. The
> transaction format itself can be mostly left alone, but to enable some
> of the extra tricks (mostly implementing capabilities) it's probably a
> good idea to make new conventions for how a transaction can have
> advisory information which specifies which of the inputs to a
> transaction is the parent of a specific output, and also info which is
> used for communication between the UTXOs in a transaction.

I think the parent/child coin relationship is only interesting when "unrelated" spends can assert that the child coin is being created -- ie, things along the lines of the "transaction sponsorship" proposal.
My feeling is that that complicates the mempool a bit much, so it's best left for later, if done at all. (I think the hard part of managing the extra conditions is mostly in keeping it efficient to manage the mempool and construct the most profitable blocks/bundles, rather than in where the data goes.)

> But one could also make lisp-generated UTXOs be based off transactions
> which look completely trivial and have all their important information
> be stored separately in a new vbytes area. That works but results in a
> bit of a dual identity where some coins have both an old style id and a
> new style id, which gunks up what

We've already got a txid and a wtxid; adding more ids seems best avoided if possible...

> > Pretty much all the opcodes in the first section are directly from
> > chia lisp, while all the rest are to complete the "bitcoin"
> > functionality. The last two are extensions that are more food for
> > thought than a real proposal.

> Are you thinking of this as a completely alternative script format or
> an extension to bitcoin script?

As an alternative to tapscript: when constructing the merkle tree of scripts for a taproot address, you could have some of those scripts be in tapscript as it exists today with OP_CHECKSIG etc, and others could be in lisp. (You could then have an entirely lisp-based sub-merkle-tree of lisp fragments via sha256tree or similar, of course.)

> They're radically different approaches and it's hard to see how they
> mix. Everything in lisp is completely sandboxed, and that functionality
> is important to a lot of things, and it's really normal to be given a
> reveal of a scriptpubkey and be able to rely on your parsing of it.

The above prevents combining puzzles/solutions from multiple coin spends, but I don't think that's very attractive in bitcoin's context the way it is for chia. I don't think it loses much else?
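As an aside, the sha256tree construction mentioned above hashes a lisp structure by tagging atoms and pairs with distinct prefix bytes, so structure can't be confused with content. A sketch of the Chia-style tree hash, as I understand it (check the chialisp docs for the canonical definition):

```python
import hashlib

def sha256tree(node):
    # Atoms (bytes) are hashed with a 0x01 prefix; pairs with a 0x02
    # prefix over the two child hashes. The tags make the encoding
    # injective: an atom can never collide with a pair.
    if isinstance(node, bytes):
        return hashlib.sha256(b"\x01" + node).digest()
    left, right = node
    return hashlib.sha256(b"\x02" + sha256tree(left) + sha256tree(right)).digest()

# Different structure (or order) gives a different root:
assert sha256tree((b"a", b"b")) != sha256tree((b"b", b"a"))
assert sha256tree(b"ab") != sha256tree((b"a", b"b"))
```

A sub-merkle-tree of lisp fragments would then just be nested pairs, with the root hash committing to every fragment.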
> > There's two ways to think about upgradability here; if someday we
> > want to add new opcodes to the language -- perhaps something to
> > validate zero knowledge proofs or calculate sha3 or use a different
> > ECC curve, or some way to support cross-input signature aggregation,
> > or perhaps it's just that some snippets are very widely used and
> > we'd like to code them in C++ directly so they validate quicker and
> > don't use up as much block weight. One approach is to just define a
> > new version of the language via the tapleaf version, defining new
> > opcodes however we like.

> A nice side benefit of sticking with the UTXO model is that the soft
> fork hook can be that all unknown opcodes make the entire thing
> automatically pass.

I don't think that works well if you want to allow the spender (the puzzle solution) to be able to use opcodes introduced in a soft-fork (eg, for graftroot-like behaviour)?

> Chia's approach to transaction fees is essentially identical to
> Bitcoin's, although a lot fewer things in the ecosystem support fees
> due to a lack of having needed it yet.

I don't think mempool issues have much to
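The difference between "unknown opcodes auto-pass" and a version-gated softfork op (as in the fakeopcode example in the earlier message) can be caricatured with a toy interpreter -- op names, the version table, and the gating rule here are all invented for illustration:

```python
# Toy sketch: "unknown op => succeed" is safe when the output commits
# to all the code, but turns spender-supplied fragments into
# anyone-can-spend. A version ceiling lets the output opt in to future
# ops while still rejecting garbage.
KNOWN_OPS = {"checksig": lambda args: args == ["good-sig"]}
OP_VERSIONS = {"checksig": 0}  # which soft fork each known op arrived in

def eval_op(op, args, max_version=None):
    usable = op in KNOWN_OPS and (
        max_version is None or OP_VERSIONS[op] <= max_version)
    if usable:
        return KNOWN_OPS[op](args)
    # Op we can't (or mustn't) run: auto-pass only in best-effort mode.
    return max_version is None

# Spender-supplied fragment under best-effort auto-pass: always valid.
assert eval_op("fakeopcode", ["fake-signature"]) is True
# Same fragment when the script asked for "ops up to soft fork 118":
assert eval_op("fakeopcode", ["fake-signature"], max_version=118) is False
```

The second assert is the desirable behaviour from the earlier discussion: fakeopcode gets rejected and the script fails, instead of silently succeeding.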