Re: [bitcoin-dev] Examining ScriptPubkeys in Bitcoin Script
"James O'Beirne" writes: > On Sat, Oct 28, 2023 at 12:51 AM Rusty Russell via bitcoin-dev < > bitcoin-dev@lists.linuxfoundation.org> wrote: > >> But AFAICT there are multiple perfectly reasonable variants of vaults, >> too. One would be: >> >> 1. master key can do anything >> 2. OR normal key can send back to vault addr without delay >> 3. OR normal key can do anything else after a delay. >> >> Another would be: >> 1. normal key can send to P2WPKH(master) >> 2. OR normal key can send to P2WPKH(normal key) after a delay. >> > > I'm confused by what you mean here. I'm pretty sure that BIP-345 VAULT > handles the cases that you're outlining, though I don't understand your > terminology -- "master" vs. "normal", and why we are caring about P2WPKH > vs. anything else. Using the OP_VAULT* codes can be done in an arbitrary > arrangement of tapleaves, facilitating any number of vaultish spending > conditions, alongside other non-VAULT leaves. I was thinking from a user POV: the "master" key is the one they keep super safe in case of emergencies, the "normal" is the delayed spend key. OP_VAULT certainly can encapsulate this, but I have yet to do the kind of thorough review that I'd need to evaluate the various design decisions. > Well, I found the vault BIP really hard to understand. I think it wants >> to be a new address format, not script opcodes. >> > > Again confused here. This is like saying "CHECKLOCKTIMEVERIFY wants to be a > new address format, not a script opcode." I mean in an ideal world, Bitcoin Script would be powerful enough to implement vaults, and once a popular use pattern emerged we'd introduce a new address type, defined to expand to that template. Like P2WPK or P2PKH. Sadly, we're not in that world! BIP 345 introduces a number of separate mechanisms, such as limited script delegation, iteration and amount arithmetic which are not expressible in Script (ok, amount arithmetic kind of is, but ick!). 
To form a real opinion, I need to consider all these elements, and
whether they should exist inside OP_VAULT or as separate things.  That's
a slow process, sorry :(

> That said, I'm sure some VAULT patterns could be abstracted into the
> miniscript/descriptor layer to good effect.

That would be very interesting, but hard.  Volunteers? :)

Cheers,
Rusty.
_______________________________________________
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
Re: [bitcoin-dev] Examining ScriptPubkeys in Bitcoin Script
Anthony Towns writes:
> On Fri, Oct 20, 2023 at 02:10:37PM +1030, Rusty Russell via bitcoin-dev wrote:
>> I've done an exploration of what would be required (given
>> OP_TX/OP_TXHASH or equivalent way of pushing a scriptPubkey on the
>> stack) to usefully validate Taproot outputs in Bitcoin Script.  Such
>> functionality is required for usable vaults, at least.
>>
>>         https://rusty.ozlabs.org/2023/10/20/examining-scriptpubkey-in-script.html
>>
>> (If anyone wants to collaborate to produce a prototype, and debug my
>> surely-wrong script examples, please ping me!)
>>
>> TL;DR: if we have OP_TXHASH/OP_TX, and add OP_MULTISHA256 (or OP_CAT),
>> OP_KEYADDTWEAK and OP_LESS (or OP_CONDSWAP), and soft-fork weaken the
>> OP_SUCCESSx rule (or pop-script-from-stack), we can prove a two-leaf
>> tapscript tree in about 110 bytes of Script.  This allows useful
>> spending constraints based on a template approach.
>
> I think there's two reasons to think about this approach:
>
> (a) we want to do vault operations specifically, and this approach is
>     a good balance between being:
>     - easy to specify and implement correctly, and
>     - easy to use correctly.
>
> (b) we want to make bitcoin more programmable, so that we can do
>     contracting experiments directly in wallet software, without needing
>     to justify new soft forks for each experiment, and this approach
>     provides a good balance amongst:
>     - opening up a wide range of interesting experiments,
>     - making it easy to understand the scope/consequences of opening up
>       those experiments,
>     - being easy to specify and implement correctly, and
>     - being easy to use correctly.
>
> Hopefully that's a fair summary?  Obviously what balance is "good"
> is always a matter of opinion -- if you consider it hard to do soft
> forks, then it's perhaps better to err heavily towards being easy to
> specify/implement, rather than easy to use, for example.
> For (a) I'm pretty skeptical about this approach for vault operations
> -- it's not terribly easy to specify/implement (needing 5 opcodes, one
> of which has a dozen or so flags controlling how it behaves, then also
> needs to change the way OP_SUCCESS works), and it seems super complicated
> to use.

But AFAICT there are multiple perfectly reasonable variants of vaults,
too.  One would be:

1. master key can do anything
2. OR normal key can send back to vault addr without delay
3. OR normal key can do anything else after a delay.

Another would be:
1. normal key can send to P2WPKH(master)
2. OR normal key can send to P2WPKH(normal key) after a delay.

> By comparison, while the bip 345 OP_VAULT proposal also proposes 3 new
> opcodes (OP_CTV, OP_VAULT, OP_VAULT_RECOVER) [0], those opcodes can be
> implemented fairly directly (without requiring different semantics for
> OP_SUCCESS, eg) and can be used much more easily [1].

I'm interested in vaults because they're a concrete example I can get my
head around, not because I think they'll be widely used!  So I feel that
anyone who has the ability to protect two distinct keys, and make two
transactions per transfer, is not a great candidate for optimization or
convenience.

> I'm not sure, but I think the "deferred check" setup might also
> provide additional functionality beyond what you get from cross-input
> introspection; that is, with it, you can allow multiple inputs to safely
> contribute funds to common outputs, without someone being able to combine
> multiple inputs into a tx where the output amount is less than the sum
> of all the contributions.  Without that feature, you can mimic it, but
> only so long as all the input scripts follow known templates that you
> can exactly match.

Agreed.  I don't think you would implement anything but 1:1 unvaulting
in Bitcoin Script, except as a party trick.
> So to me, for the vault use case, the
> TXHASH/MULTISHA256/KEYADDTWEAK/LESS/CAT/OP_SUCCESS approach just doesn't
> really seem very appealing at all in practical terms: lots of complexity,
> hard to use, and doesn't really seem like it works very well even after
> you put in tonnes of effort to get it to work at all?

Well, I found the vault BIP really hard to understand.  I think it wants
to be a new address format, not script opcodes.

I don't think spelling it out in Script is actually that much more
complex to use, either: "use these templates".  And modulo
consolidation, I think it works as well.

> I think in the context of (b), ie enabling experimentation more generally,
> it's much more interesting.  eg, CAT alone would allow for various
> interesting constraints on signatures ("you must sign this tx with th
Re: [bitcoin-dev] Proposed BIP for OP_CAT
Andrew Poelstra writes:
> I had a similar thought. But my feeling is that replacing the stack
> interpreter data structure is still too invasive to justify the benefit.
>
> Also, one of my favorite things about this BIP is the tiny diff.

To be fair, this diff is even smaller than the OP_CAT diff :)  Though I
had to strongly resist refactoring; that interpreter code needs a good
shake!  Using a class for the stack is worth doing anyway (macros,
really??).

diff --git a/src/script/interpreter.cpp b/src/script/interpreter.cpp
index dcaf28c2472..2ee2034115f 100644
--- a/src/script/interpreter.cpp
+++ b/src/script/interpreter.cpp
@@ -403,6 +403,19 @@ static bool EvalChecksig(const valtype& sig, const valtype& pubkey, CScript::con
     assert(false);
 }
 
+// The first 520 bytes are free; after that you consume an extra slot!
+static size_t effective_size(const std::vector<std::vector<unsigned char>>& stack)
+{
+    size_t esize = stack.size();
+
+    for (const auto& v : stack) {
+        if (v.size() > MAX_SCRIPT_ELEMENT_SIZE)
+            esize += (v.size() - 1) / MAX_SCRIPT_ELEMENT_SIZE;
+    }
+    return esize;
+}
+
 bool EvalScript(std::vector<std::vector<unsigned char>>& stack, const CScript& script, unsigned int flags, const BaseSignatureChecker& checker, SigVersion sigversion, ScriptExecutionData& execdata, ScriptError* serror)
 {
     static const CScriptNum bnZero(0);
@@ -1239,7 +1252,7 @@ bool EvalScript(std::vector<std::vector<unsigned char>>& stack, const CScript&
         }
 
         // Size limits
-        if (stack.size() + altstack.size() > MAX_STACK_SIZE)
+        if (effective_size(stack) + effective_size(altstack) > MAX_STACK_SIZE)
             return set_error(serror, SCRIPT_ERR_STACK_SIZE);
     }
 }
Re: [bitcoin-dev] Proposed BIP for OP_CAT
Andrew Poelstra writes:
> On Mon, Oct 23, 2023 at 12:43:10PM +1030, Rusty Russell via bitcoin-dev wrote:
>> Ethan Heilman via bitcoin-dev writes:
>> > Hi everyone,
>> >
>> > We've posted a draft BIP to propose enabling OP_CAT as Tapscript opcode.
>> > https://github.com/EthanHeilman/op_cat_draft/blob/main/cat.mediawiki
>>
>> 520 feels quite small for script templates (mainly because OP_CAT itself
>> makes Script more interesting!).  For example, using OP_TXHASH and
>> OP_CAT to enforce that two input amounts are equal to one output amount
>> takes about 250 bytes of Script[2] :(
>>
>> So I have to ask:
>>
>> 1. Do other uses feel like 520 is too limiting?
>
> In my view, 520 feels limiting provided that we lack rolling sha2
> opcodes. If we had those, then arguably 65 bytes is enough. Without
> them, I'm not sure that any value is "enough". For CHECKSIGFROMSTACK
> emulation purposes ideally we'd want the ability to construct a full
> transaction on the stack, which in principle would necessitate a 4M
> limit.

I haven't yet found a desire for rolling sha2: an `OP_MULTISHA256` has
been sufficient for my templating investigations w/ OP_TXHASH.  In fact,
I prefer it to OP_CAT, but OP_CAT does allow your Schnorr sig trick :)

>> 2. Was there a concrete rationale for maintaining 520 bytes?  10k is the
>>    current script limit, can we get closer to that? :)
>
> But as others have said, 520 bytes is the existing stack element limit
> and minimizing changes seems like a good strategy to get consensus. (On
> the other hand, it's been a few days without any opposition so maybe we
> should be more aggressive :)).

It's probably worth *thinking* about, yes.

>> 3. Should we restrict elsewhere instead?  After all, OP_CAT doesn't
>>    change total stack size, which is arguably the real limit?
>
> Interesting thought. Right now the stack size is limited to 1000
> elements of 520 bytes each, which theoretically means a limit of 520k.
> Bitcoin Core doesn't explicitly count the "total stack size" in the
> sense that you're suggesting; it just enforces these two limits
> separately.

BTW, I'm just learning of the 1000 element limit; I couldn't see it on
scanning BIP-141.

> I think trying to add a "total stack size limit" (which would have to
> live alongside the two existing limits; we can't replace them without
> a whole new Tapscript version) would add a fair bit of accounting
> complexity and wind up touching almost every other opcode...probably
> not worth the added consensus logic.

Simplest thing I can come up with:

- instead of counting simple stack depth, count each stack entry as
  (1 + (size - 1)/520) entries?

You can still only push 520 bytes, so you can only make oversized
entries with OP_CAT.  Looking in interpreter.cpp, `stack` and `altstack`
now need to be objects to count entries differently (not vectors), but
it seems like it'd be simple enough, and the logic could be enabled
unconditionally since it Cannot Be Violated prior to OP_CAT.

Cheers,
Rusty.
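A quick way to sanity-check this accounting (a hypothetical Python sketch, not Bitcoin Core code): each element costs one slot plus one extra slot per additional started 520-byte chunk, so an element made via OP_CAT pays for the depth it "saves".

```python
MAX_SCRIPT_ELEMENT_SIZE = 520  # tapscript stack element size limit
MAX_STACK_SIZE = 1000

def effective_size(stack):
    """Count each element as 1 slot, plus an extra slot for each
    additional (started) 520-byte chunk beyond the first."""
    esize = len(stack)
    for v in stack:
        if len(v) > MAX_SCRIPT_ELEMENT_SIZE:
            esize += (len(v) - 1) // MAX_SCRIPT_ELEMENT_SIZE
    return esize

# A 520-byte element costs 1 slot; a 521-byte one (only makeable with
# OP_CAT) costs 2; a 1041-byte one costs 3.
assert effective_size([bytes(520)]) == 1
assert effective_size([bytes(521)]) == 2
assert effective_size([bytes(1040)]) == 2
assert effective_size([bytes(1041)]) == 3
```

Since no element over 520 bytes can exist before OP_CAT, this limit is indeed vacuous for existing scripts, which is why it could be enabled unconditionally.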
Re: [bitcoin-dev] Proposed BIP for OP_CAT
Ethan Heilman via bitcoin-dev writes:
> Hi everyone,
>
> We've posted a draft BIP to propose enabling OP_CAT as Tapscript opcode.
> https://github.com/EthanHeilman/op_cat_draft/blob/main/cat.mediawiki

This is really nice to see!

AFAICT you don't define the order of concatenation, except in the
implementation[1].  I think if A is top of stack, we get BA, not AB?

520 feels quite small for script templates (mainly because OP_CAT itself
makes Script more interesting!).  For example, using OP_TXHASH and
OP_CAT to enforce that two input amounts are equal to one output amount
takes about 250 bytes of Script[2] :(

So I have to ask:

1. Do other uses feel like 520 is too limiting?

2. Was there a concrete rationale for maintaining 520 bytes?  10k is the
   current script limit, can we get closer to that? :)

3. Should we restrict elsewhere instead?  After all, OP_CAT doesn't
   change total stack size, which is arguably the real limit?

Of course, we can increase this limit in future tapscript versions, too,
so it's not completely set in stone.

Thanks!
Rusty.

[1] Maybe others are Bitcoin Core fluent, but I found it weird that it's
    not simply `valtype vch1 = popstack(stack);`, and
    `vch3.reserve(vch1.size() + vch2.size());` was just a weird detail.
[2] https://rusty.ozlabs.org/2023/10/22/amounts-in-script.html
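The ordering question above can be made concrete with a toy stack model (a sketch of the usual pop-pop-push reading, assuming the draft keeps Script's conventional operand order): with A on top, OP_CAT yields "BA".

```python
MAX_SCRIPT_ELEMENT_SIZE = 520

def op_cat(stack):
    """Toy model of OP_CAT: pop the top two elements and push
    second-from-top || top, failing if the result exceeds the
    520-byte element limit."""
    if len(stack) < 2:
        raise ValueError("stack underflow")
    a = stack.pop()   # A, the top of stack
    b = stack.pop()   # B, second from top
    result = b + a    # i.e. "BA", not "AB"
    if len(result) > MAX_SCRIPT_ELEMENT_SIZE:
        raise ValueError("element too large")
    stack.append(result)
    return stack

assert op_cat([b"B", b"A"]) == [b"BA"]
```

This is the reading that makes `<x> <y> OP_CAT` produce `xy`, which is what a script author pushing arguments left-to-right would expect.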
Re: [bitcoin-dev] Examining ScriptPubkeys in Bitcoin Script
Brandon Black writes:
> On 2023-10-20 (Fri) at 14:10:37 +1030, Rusty Russell via bitcoin-dev wrote:
>> I've done an exploration of what would be required (given
>> OP_TX/OP_TXHASH or equivalent way of pushing a scriptPubkey on the
>> stack) to usefully validate Taproot outputs in Bitcoin Script.  Such
>> functionality is required for usable vaults, at least.
>
> So you're proposing this direction as an alternative to the more
> constrained OP_UNVAULT that replaces a specific leaf in the taptree in a
> specific way? I think the benefits of OP_UNVAULT are pretty big vs. a
> generic construction (e.g. ability to unvault multiple inputs sharing
> the same scriptPubkey to the same output).

I would have to write the scripts out exactly (and I'm already
uncomfortable that I haven't actually tested the ones I wrote so far!)
to properly evaluate.  In general, Script makes it hard to do N-input
evaluation (having no iteration).  It would be useful to attempt this,
though, as it might enlighten us as to OP_TXHASH input selection: for
example, we might want an "all *but* one input" mode for this kind of
usage.

Dealing with satoshi amounts is possible, but really messy (that's my
next post).

>> TL;DR: if we have OP_TXHASH/OP_TX, and add OP_MULTISHA256 (or OP_CAT),
>> OP_KEYADDTWEAK and OP_LESS (or OP_CONDSWAP), and soft-fork weaken the
>> OP_SUCCESSx rule (or pop-script-from-stack), we can prove a two-leaf
>> tapscript tree in about 110 bytes of Script.  This allows useful
>> spending constraints based on a template approach.
>
> I agree that this is what is needed. I started pondering this in
> response to some discussion about the benefits of OP_CAT or OP_2SHA256
> for BitVM.

Given these examples, I think it's clear that OP_MULTISHA256 is almost
as powerful as OP_CAT, without the stack limit problems.  And OP_2SHA256
is not sufficient in general for CScriptNum generation, for example.
> Personally I'd use OP_TAGGEDCATHASH that pops a tag (empty tag can be
> special cased to plain sha256) and a number (n) of elements to hash,
> then tagged-hashes the following 'n' elements from the stack.

That's definitely a premature optimization to save two opcodes.

Cheers,
Rusty.
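For intuition, the OP_MULTISHA256 semantics being discussed can be sketched as follows (an assumption on my part: pop a count, then hash the popped elements deepest-first, so the result matches OP_CAT-then-OP_SHA256 without an oversized intermediate element ever touching the stack):

```python
import hashlib

def op_multisha256(stack):
    """Toy model of the proposed OP_MULTISHA256: pop a count n, pop n
    elements, and push sha256 of their concatenation (deepest element
    first -- an assumed ordering).  No intermediate concatenated
    element exists, so the 520-byte element limit is never hit."""
    n = int.from_bytes(stack.pop(), "little")
    parts = [stack.pop() for _ in range(n)]
    h = hashlib.sha256()
    for part in reversed(parts):  # deepest-first, matching push order
        h.update(part)
    stack.append(h.digest())
    return stack

# Same digest as sha256(b"foo" + b"bar"), with no concatenation step:
stack = [b"foo", b"bar", (2).to_bytes(1, "little")]
op_multisha256(stack)
assert stack == [hashlib.sha256(b"foobar").digest()]
```

This is why it sidesteps the stack limit problems: the 520-byte cap applies per element, and only the 32-byte digest is ever pushed.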
[bitcoin-dev] Examining ScriptPubkeys in Bitcoin Script
Hi all,

I've done an exploration of what would be required (given OP_TX/OP_TXHASH
or equivalent way of pushing a scriptPubkey on the stack) to usefully
validate Taproot outputs in Bitcoin Script.  Such functionality is
required for usable vaults, at least.

        https://rusty.ozlabs.org/2023/10/20/examining-scriptpubkey-in-script.html

(If anyone wants to collaborate to produce a prototype, and debug my
surely-wrong script examples, please ping me!)

TL;DR: if we have OP_TXHASH/OP_TX, and add OP_MULTISHA256 (or OP_CAT),
OP_KEYADDTWEAK and OP_LESS (or OP_CONDSWAP), and soft-fork weaken the
OP_SUCCESSx rule (or pop-script-from-stack), we can prove a two-leaf
tapscript tree in about 110 bytes of Script.  This allows useful
spending constraints based on a template approach.

Thanks!
Rusty.
Re: [bitcoin-dev] Scaling Lightning With Simple Covenants
Hi!

I've read the start of the paper on my vacation, and am still digesting
it.  But even so far, it presents some delightful possibilities.

As with some other proposals, it's worth noting that the cost of
enforcement is dramatically increased: it's no longer one or two txs,
it's 10+.  If the "dedicated user" contributes some part of the expected
fee, the capital efficiency is reduced (and we're back to "how much is
enough?").  But the worst case (dramatic dedicated-user failure) is only
a 2x penalty on the number of on-chain txs, which seems acceptable if
the network is sufficiently mature that these failure events are rare.

Note also that the (surprisingly common!) "user goes away" case, where
the casual user fails to roll over, only returns funds to the dedicated
user; relying on legal and normal custody policies in this case may be
preferable to an eternal burden on the UTXO set with the current
approach!

Thank you!
Rusty.

jlspc via bitcoin-dev writes:
> TL;DR
> =====
> * The key challenge in scaling Lightning in a trust-free manner is the
>   creation of Lightning channels for casual users.
>   - It appears that signature-based factories are inherently limited to
>     creating at most tens or hundreds of Lightning channels per UTXO.
>   - In contrast, simple covenants (including those enabled by CTV [1]
>     or APO [2]) would allow a single UTXO to create Lightning channels
>     for millions of casual users.
> * The resulting covenant-based protocols also:
>   - support resizing channels off-chain,
>   - use the same capital to simultaneously provide in-bound liquidity
>     to casual users and route unrelated payments for other users,
>   - charge casual users tunable penalties for attempting to put an old
>     state on-chain, and
>   - allow casual users to monitor the blockchain for just a few minutes
>     every few months without employing a watchtower service.
> * As a result, adding CTV and/or APO to Bitcoin's consensus rules would
>   go a long way toward making Lightning a widely-used means of payment.
> Overview
> ========
>
> Many proposed changes to the Bitcoin consensus rules, including
> CheckTemplateVerify (CTV) [1] and AnyPrevOut (APO) [2], would support
> covenants.  While covenants have been shown to improve Bitcoin in a
> number of ways, scalability of Lightning is not typically listed as one
> of them.  This post argues that any change (including CTV and/or APO)
> that enables even simple covenants greatly improves Lightning's
> scalability, while meeting the usability requirements of casual users.
> A more complete description, including figures, is given in a paper [3].
>
> The Scalability Problem
> =======================
>
> If Bitcoin and Lightning are to become widely-used, they will have to
> be adopted by casual users who want to send and receive bitcoin, but
> who do not want to go to any effort in order to provide the
> infrastructure for making payments.  Instead, it's reasonable to expect
> that the Lightning infrastructure will be provided by dedicated users
> who are far less numerous than the casual users.  In fact, there are
> likely to be tens-of-thousands to millions of casual users per
> dedicated user.  This difference in numbers implies that the key
> challenge in scaling Bitcoin and Lightning is providing bitcoin and
> Lightning to casual users.  As a result, the rest of this post will
> focus on this challenge.
>
> Known Lightning protocols allow casual users to perform Lightning
> payments without:
> * maintaining high-availability,
> * performing actions at specific times in the future, or
> * having to trust a third-party (such as a watchtower service) [5][6].
>
> In addition, they support tunable penalties for casual users who
> attempt to put an old channel state on-chain (for example, due to a
> crash that causes a loss of state).  As a result, these protocols meet
> casual users' needs and could become widely-used for payments if they
> were sufficiently scalable.
> The Lightning Network lets users send and receive bitcoin off-chain in
> a trust-free manner [4].  Furthermore, there are Lightning protocols
> that allow Lightning channels to be resized off-chain [7].  Therefore,
> making Lightning payments and resizing Lightning channels are highly
> scalable operations.
>
> However, providing Lightning channels to casual users is not scalable.
> In particular, no known protocol that uses the current Bitcoin
> consensus rules allows a large number (e.g., tens-of-thousands to
> millions) of Lightning channels, each co-owned by a casual user, to be
> created from a single on-chain unspent transaction output (UTXO).  As a
> result, being able to create (and close) casual users' Lightning
> channels remains the key bottleneck in scaling Lightning.
>
> Casual Users And Signatures
> ===========================
>
> Unfortunately, there are good reasons to believe this bottleneck is
> unavoidable given the current Bitcoin consensus rules.
> The
[bitcoin-dev] [PROPOSAL] OP_TX: generalized covenants reduced to OP_CHECKTEMPLATEVERIFY
Hi all,

TL;DR: a v1 tapscript opcode for generic covenants, but OP_SUCCESS
unless it's used a-la OP_CHECKTEMPLATEVERIFY.  This gives an obvious use
case, with clean future expansion.  OP_NOP4 can be repurposed in future
as a shortcut, if experience shows that to be a useful optimization.

(This proposal builds on Russell O'Connor's TXHASH[1], with Anthony
Towns' modification via extending the opcode[2]; I also notice on
re-reading that James Lu had a similar restriction idea[3]).

Details
-------

OP_TX, when inside v1 tapscript, is followed by 4 bytes of flags.
Unknown flag patterns are OP_SUCCESS, though for thoroughness some
future potential uses are documented here.

Note that pushing more than 1000 elements on the stack, or an element of
more than 520 bytes, will hit the BIP-342 resource limits and fail.

Defined bits (only those marked with * have to be defined for this soft
fork; the others can be given semantics later):

OPTX_SEPARATELY: treat fields separately (vs concatenating)
OPTX_UNHASHED: push on the stack without hashing (vs SHA256 before push)

The first nicely sidesteps the lack of OP_CAT, and the latter allows
OP_TXHASH semantics (and avoids stack element limits).

OPTX_SELECT_VERSION*: version
OPTX_SELECT_LOCKTIME*: nLocktime
OPTX_SELECT_INPUTNUM*: current input number
OPTX_SELECT_INPUTCOUNT*: number of inputs
OPTX_SELECT_OUTPUTCOUNT*: number of outputs
OPTX_INPUT_SINGLE: if set, pop input number off stack to apply to
    OPTX_SELECT_INPUT_*, otherwise iterate through all.
OPTX_SELECT_INPUT_TXID: txid
OPTX_SELECT_INPUT_OUTNUM: txout index
OPTX_SELECT_INPUT_NSEQUENCE*: sequence number
OPTX_SELECT_INPUT_AMOUNT32x2: sats in, as a high-low u31 pair
OPTX_SELECT_INPUT_SCRIPT*: input scriptsig
OPTX_SELECT_INPUT_TAPBRANCH: ?
OPTX_SELECT_INPUT_TAPLEAF: ?
OPTX_OUTPUT_SINGLE: if set, pop output number off stack to apply to
    OPTX_SELECT_OUTPUT_*, otherwise iterate through all.
OPTX_SELECT_OUTPUT_AMOUNT32x2*: sats out, as a high-low u31 pair
OPTX_SELECT_OUTPUT_SCRIPTPUBKEY*: output scriptpubkey
OPTX_SELECT_19...OPTX_SELECT_31: future expansion.

OP_CHECKTEMPLATEVERIFY is approximated by the following flags:
    OPTX_SELECT_VERSION
    OPTX_SELECT_LOCKTIME
    OPTX_SELECT_INPUTCOUNT
    OPTX_SELECT_INPUT_SCRIPT
    OPTX_SELECT_INPUT_NSEQUENCE
    OPTX_SELECT_OUTPUTCOUNT
    OPTX_SELECT_OUTPUT_AMOUNT32x2
    OPTX_SELECT_OUTPUT_SCRIPTPUBKEY
    OPTX_SELECT_INPUTNUM

All other flag combinations result in OP_SUCCESS.

Discussion
----------

By enumerating exactly what can be committed to, it's absolutely clear
what is and isn't committed (and what you need to think about!).

The bits which separate concatenation and hashing provide a simple
mechanism for template-style (i.e. CTV-style) commitments, or for
programmatic treatment of individual elements (e.g. amounts, though the
two-u31 style is awkward: a 64-bit push flag could be added in future).

The lack of double-hashing of scriptsigs and other fields means we
cannot simply re-use hashing done for SIGHASH_ALL.

The OP_SUCCESS semantic is only valid in tapscript v1, so this does not
allow covenants for v0 segwit or pre-segwit inputs.  If covenants prove
useful, dedicated opcodes can be provided for those cases (a-la
OP_CHECKTEMPLATEVERIFY).

Cheers,
Rusty.

[1] https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-January/019813.html
[2] https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-January/019819.html
[3] https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-January/019816.html
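To make the "one defined pattern, everything else OP_SUCCESS" rule concrete, here is a sketch of the flag-word check.  The bit positions are my own hypothetical assignments (the proposal deliberately leaves them unspecified); only the set of selected fields matters.

```python
# Hypothetical bit positions -- NOT part of the proposal.
OPTX_SEPARATELY                 = 1 << 0
OPTX_UNHASHED                   = 1 << 1
OPTX_SELECT_VERSION             = 1 << 2
OPTX_SELECT_LOCKTIME            = 1 << 3
OPTX_SELECT_INPUTNUM            = 1 << 4
OPTX_SELECT_INPUTCOUNT          = 1 << 5
OPTX_SELECT_OUTPUTCOUNT         = 1 << 6
OPTX_SELECT_INPUT_TXID          = 1 << 8
OPTX_SELECT_INPUT_NSEQUENCE     = 1 << 10
OPTX_SELECT_INPUT_SCRIPT        = 1 << 12
OPTX_SELECT_OUTPUT_AMOUNT32x2   = 1 << 15
OPTX_SELECT_OUTPUT_SCRIPTPUBKEY = 1 << 16

# The CTV-approximating combination listed in the proposal:
CTV_FLAGS = (OPTX_SELECT_VERSION
             | OPTX_SELECT_LOCKTIME
             | OPTX_SELECT_INPUTCOUNT
             | OPTX_SELECT_INPUT_SCRIPT
             | OPTX_SELECT_INPUT_NSEQUENCE
             | OPTX_SELECT_OUTPUTCOUNT
             | OPTX_SELECT_OUTPUT_AMOUNT32x2
             | OPTX_SELECT_OUTPUT_SCRIPTPUBKEY
             | OPTX_SELECT_INPUTNUM)

def has_defined_semantics(flags):
    """In the initial soft fork, only the CTV-like pattern is enforced;
    every other 4-byte flag pattern behaves as OP_SUCCESS."""
    return flags == CTV_FLAGS

assert has_defined_semantics(CTV_FLAGS)
assert not has_defined_semantics(CTV_FLAGS | OPTX_SELECT_INPUT_TXID)
```

The point of this shape is that future soft forks can assign meaning to currently-OP_SUCCESS patterns without any new opcode.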
Re: [bitcoin-dev] TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT
Jeremy Rubin writes:
> Hi Rusty,
>
> Please see my post in the other email thread
> https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-February/019886.html
>
> The differences in this regard are several, and worth understanding beyond
> "you can iterate CTV". I'd note a few clear examples for showing that "CTV
> is just as powerful" is not a valid claim:
>
> 1) CTV requires the contract to be fully enumerated and is non-recursive.
> For example, a simple contract that allows n participants to take an action
> in any order requires factorially many pre-computations, not just linear or
> constant. For reference, 24! is about 2**80. Whereas for a more
> interpretive covenant -- which is often introduced with the features for
> recursion -- you can compute the programs for these addresses in constant
> time.
>
> 2) CTV requires the contract to be fully enumerated: For example, a simple
> contract one could write is "Output 0 script matches Output 1", and the set
> of outcomes is again unbounded a-priori. With CTV you need to know the set
> of pairs you'd like to be able to expand to a-priori.
>
> 3) Combining 1 and 2, you could imagine recursing on an open-ended thing
> like creating many identical outputs over time but not constraining what
> those outputs are. E.g., Output 0 matches Input 0, Output 1 matches Output
> 2.

Oh, agreed.  It was the distinction of "recursive" vs "not recursive"
which was less useful in this context.  "Limited to complete
enumeration" is the more useful distinction: it's a bright line between
CTV and TXHASH, IMHO.

> I'll close by repeating: Whether that [the recursive/open ended
> properties] is an issue or not precluding this sort of design or not, I
> defer to others.

Yeah.  There's been some feeling that complex scripting is bad, because
people can lose money (see the various attempts to defang
SIGHASH_NOINPUT).  I reject that; since Script exists, we've crossed the
Rubicon, so let's make the tools as clean and clear as we can.

Cheers!
Rusty.
Re: [bitcoin-dev] TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT
Jeremy Rubin writes:
> Rusty,
>
> Note that this sort of design introduces recursive covenants similarly to
> how I described above.
>
> Whether that is an issue or not precluding this sort of design or not, I
> defer to others.

Good point!  But I think it's a distinction without meaning: AFAICT
iterative covenants are possible with OP_CTV and just as powerful,
though technically finite.  I can constrain the next 100M spends, for
example: if I insist on those each having an incrementing nLocktime,
that's effectively forever.

Thanks!
Rusty.
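The iteration argument can be sketched numerically (a hypothetical stand-in hash, not real CTV template hashing): because each CTV-style template commits to the next one via its output script, the whole chain must be enumerated backwards from the final state, but nothing stops that chain being astronomically long.

```python
import hashlib

def template_hash(next_hash, locktime):
    """Stand-in for a CTV-style template hash: commits to an output
    script embedding the *next* template hash, plus the nLocktime.
    Real CTV commits to many more fields; this is just the shape of
    the counting argument."""
    return hashlib.sha256(next_hash + locktime.to_bytes(4, "little")).digest()

# Templates are enumerated backwards from the final payout:
final = hashlib.sha256(b"payout").digest()
n_steps = 1000  # stand-in for "the next 100M spends"
h = final
for locktime in range(n_steps, 0, -1):
    h = template_hash(h, locktime)

# 'h' is the first covenant commitment; each spend reveals the next
# template in sequence, so the chain is finite but, with enough steps
# and incrementing locktimes, effectively unbounded in practice.
assert len(h) == 32 and h != final
```

This is the sense in which "non-recursive" CTV still admits very long iterative covenants: the limit is pre-computation, not expressiveness of the chain itself.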
Re: [bitcoin-dev] TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT
Russell O'Connor via bitcoin-dev writes:
> Given the overlap in functionality between CTV and ANYPREVOUT, I think it
> makes sense to decompose their operations into their constituent pieces and
> reassemble their behaviour programmatically. To this end, I'd like to
> instead propose OP_TXHASH and OP_CHECKSIGFROMSTACKVERIFY.
>
> OP_TXHASH would pop a txhash flag from the stack and compute a (tagged)
> txhash in accordance with that flag, and push the resulting hash onto the
> stack.

It may be worth noting that OP_TXHASH can be further decomposed into
OP_TX (and OP_TAGGEDHASH, or just reuse OP_SHA256).  OP_TX would place
the concatenated selected fields onto the stack (rather than hashing
them).

This is more compact for some tests (e.g. testing that the tx version is
2 is "OP_TX(version) 2 OP_EQUAL" vs "OP_TXHASH(version)
012345678...aabbccddeeff OP_EQUAL"), and also allows range testing
(e.g. amount less than X or greater than X, or fewer than 3 inputs).

> I believe the difficulties with upgrading TXHASH can be mitigated by
> designing a robust set of TXHASH flags from the start. For example having
> bits to control whether (1) the version is covered; (2) the locktime is
> covered; (3) txids are covered; (4) sequence numbers are covered; (5) input
> amounts are covered; (6) input scriptpubkeys are covered; (7) number of
> inputs is covered; (8) output amounts are covered; (9) output scriptpubkeys
> are covered; (10) number of outputs is covered; (11) the tapbranch is
> covered; (12) the tapleaf is covered; (13) the opseparator value is
> covered; (14) whether all, one, or no inputs are covered; (15) whether all,
> one or no outputs are covered; (16) whether the one input position is
> covered; (17) whether the one output position is covered; (18) whether the
> sighash flags are covered or not (note: whether or not the sighash flags
> are or are not covered must itself be covered).
> Possibly specifying which
> input or output position is covered in the single case and whether the
> position is relative to the input's position or is an absolute position.

These easily map onto OP_TX: "(1) the version is pushed as u32, (2) the
locktime is pushed as u32, ...".  We might want to push the SHA256() of
scripts instead of the scripts themselves, to reduce the possibility of
DoS.

I suggest, also, that 14 (and similarly 15) be defined as two bits:

    00 - no inputs
    01 - all inputs
    10 - current input
    11 - pop number from stack; fail if >= number of inputs or no stack
         elements.

Cheers,
Rusty.
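The suggested two-bit mode field can be sketched as follows (a toy model with hypothetical constant names; only the four-case behaviour comes from the suggestion above):

```python
# Two-bit "which inputs" mode (field 14; field 15 would be the
# analogous encoding for outputs):
INPUTS_NONE    = 0b00  # no inputs covered
INPUTS_ALL     = 0b01  # all inputs covered
INPUTS_CURRENT = 0b10  # just the currently-executing input
INPUTS_STACK   = 0b11  # pop the input index from the stack

def select_inputs(mode, current_index, num_inputs, stack):
    """Return the covered input indices, failing as suggested: for
    mode 11, fail if the stack is empty or the index is out of range."""
    if mode == INPUTS_NONE:
        return []
    if mode == INPUTS_ALL:
        return list(range(num_inputs))
    if mode == INPUTS_CURRENT:
        return [current_index]
    if mode == INPUTS_STACK:
        if not stack:
            raise ValueError("no stack elements")
        idx = stack.pop()
        if idx >= num_inputs:
            raise ValueError("input index >= number of inputs")
        return [idx]
    raise ValueError("bad mode")

assert select_inputs(INPUTS_ALL, 0, 3, []) == [0, 1, 2]
assert select_inputs(INPUTS_STACK, 0, 3, [2]) == [2]
```

The stack-selected case is what makes programmatic per-input inspection possible without a separate flag per input position.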
Re: [bitcoin-dev] March 23rd 2021 Taproot Activation Meeting Notes
Ryan Grant writes:
> On Tue, Apr 6, 2021 at 11:58 PM Rusty Russell via bitcoin-dev wrote:
>> The core question always was: what do we do if miners fail to activate?
>>
>> [...] Speedy Trial takes the approach that "let's pretend we didn't
>> *actually* ask [miners]".
>
> What ST is saying is that a strategy of avoiding unnecessary risk is
> stronger than a strategy of brinkmanship when brinkmanship wasn't
> our only option.  Having deescalation in the strategy toolkit makes
> Bitcoin stronger.

I don't believe that having a plan is brinkmanship or an escalation.

During the segwit debate, Pieter Wuille said that users should decide.
I've been thinking about that a lot, especially about what that means in
a practical sense when the normal developer/miner dynamic has failed.

>> It's totally a political approach, to avoid facing the awkward question.
>> Since I believe that such prevaricating makes a future crisis less
>> predictable, I am forced to conclude that it makes bitcoin less robust.
>
> LOT=true does face the awkward question, but there are downsides:
>
> - in the requirement to drop blocks from apathetic miners (although
>   as Luke-Jr pointed out in a previous reply on this list they have
>   no contract under which to raise a complaint); and

Surely, yes.  If the users of bitcoin decide blocks are invalid, they're
invalid.  With a year's warning, and developer and user consensus
against them, I think we've reached the limits of acceptable miner
apathy.

> - in the risk of a chain split, should gauging economic majority
>   support - which there is zero intrinsic tooling for - go poorly.

Agreed that we should definitely do better here: in practice people
would rely on third-party explorers for information on the other side of
the split.  Tracking the cumulative work on invalid chains would be a
good idea for bitcoind in general (AJ suggested this, IIRC).
>> Personally, I think the compromise position is using LOT=false and >> having those such as Luke and myself continue working on a LOT=true >> branch for future consideration. It's less than optimal, but I >> appreciate that people want Taproot activated more than they want >> the groundwork for future upgrades. > > Another way of viewing the current situation is that should > brinkmanship be necessary, then better tooling to resolve a situation > that requires brinkmanship will be invaluable. But: > > - we do not need to normalize brinkmanship; > > - designing brinkmanship tooling well before the next crisis does > not require selecting conveniently completed host features to > strap the tooling onto for testing; and Again, openly creating a contingency plan is not brinkmanship, it's normal. I know that considering these scenarios is uncomfortable; I avoid conflict myself! But I feel obliged to face this as a real possibility. I think we should be normalizing the understanding that bitcoin users are the ultimate deciders. By offering *all* of them the tools to do so we show this isn't lip-service, but something that businesses and everyone else in the ecosystem should consider. > - it's already the case that a UASF branch can be prepared along > with ST (ie. without requiring LOT=false), although the code is a > bit more complex and the appropriate stopheight a few blocks later. I don't believe this is true, unless you UASF before ST expires? ST is explicitly designed *not* to give time to conclude that miners are stalling (unless something has changed from the initial 3 month proposal?). > Although your NACK is well explained, for the reasons above I am > prepared to run code that overrides it. Good. In the end, we're all at the whim of the economic majority. Cheers! Rusty.
Re: [bitcoin-dev] March 23rd 2021 Taproot Activation Meeting Notes
Jeremy via bitcoin-dev writes: > We had a very productive meeting today. Here is a summary of the meeting -- > I've done my best to > summarize in an unbiased way. Thank you to everyone who attended. > > 1. On the use of a speedy trial variant: > > - There are no new objections to speedy trial generally. > - There is desire to know if Rusty retracts or reaffirms his NACK in light > of the responses. I do not withdraw my NACK (and kudos: there have been few attempts to pressure me to do so!). The core question always was: what do we do if miners fail to activate? Luke-Jr takes the approach that "we (i.e. developers) ensure it activates anyway". I take the approach that "the users must make a direct intervention". Speedy Trial takes the approach that "let's pretend we didn't *actually* ask them". It's totally a political approach, to avoid facing the awkward question. Since I believe that such prevaricating makes a future crisis less predictable, I am forced to conclude that it makes bitcoin less robust. Personally, I think the compromise position is using LOT=false and having those such as Luke and myself continue working on a LOT=true branch for future consideration. It's less than optimal, but I appreciate that people want Taproot activated more than they want the groundwork for future upgrades. I hope that helps, Rusty.
Re: [bitcoin-dev] Response to Rusty Russell from Github
Jeremy via bitcoin-dev writes: > Where I disagree is that I do not believe that BIP8 with LOT configuration > is the improved long term option we should ossify around either. I > understand the triumvirate model you desire to achieve, but BIP8 with an > individually set LOT configuration does not formalize how economic nodes > send a network legible signal ahead of a chain split. A regular flag day, > with no signalling, but communally released and communicated openly most > likely better achieves the goal of providing users choice. You're ignoring the role of infrastructure. It's similar to saying that there is no need for elections: if things are bad enough, citizens can rise up and overthrow their government. > 1. Developers release, but do not activate > 2. Miners signal > 3. Users may override by compiling and releasing a patched Bitcoin with > moderate changes that activates Taproot at a later date. While this might > *seem* more complicated a procedure than configurable LOT, here are four > reasons why it may be simpler (and safer) to just do a fresh release: Users may indeed fire the devs and replace them, as this implies. This is not empowering users, but in effect risks reducing their role to "beg the devs or beg the miners". > A. No time-based consensus sensitivity on when LOT must be set (e.g., what > happens if mid final signal period users decide to set LOT? Do all users > set it at the same time? Or different times and end up causing nodes to ban > each other for various reasons?) Yes, this Schelling point is important. If you read BIP-8, you will see that LOT=true activates at the last moment for this very reason. > B. No "missed window" if users don't coordinate on setting LOT before the > final period -- release whenever ready. Of course there is: they need to upgrade in time. > C. 
> ST fails fast, permitting users ample time to prepare an alternative > release You'd think so, but given the confusion surrounding Segwit, it seems a year was barely time to debate, decide and coordinate. You want this ready to go at the *beginning* of the 1 year process, not being decided, debated, built and deployed once the crisis is upon us. That existing deployment is a vital stake in the calculus of those who might try to disrupt the process for any reason. > D. If miners continue to mine without signalling, and users abandon a > LOT=true setting, their node will have already marked those blocks invalid > and they will need to figure out how to re-validate the block. This is true, in fact, of any soft fork: as Luke points out, our lack of revalidation of blocks after upgrade is a bug. Which should be fixed: IMHO a decent PR to make LOT runtime configurable would reevaluate any blocks >= timeoutheight-2016 when it is altered. > RE: point 3, is it as easy as it *could* be? No, but I don't have any > genius ideas on how to make it easier either. (Note that I've previously > argued for adding configurable LOT=true on the basis that a user-run script > could emulate LOT without any software change as a harm reduction, but I > did not advocate that particular technique be formalized as a part of the > activation process) BIP-8 (with the recent modifications to allow maximal number of non-signalling blocks) is technically as fork-preventative as we can seek to make it. I am hopeful that our ecosystem will remain harmonious and we won't have to use it. But I am significantly more hopeful that we won't have to use it if we have it deployed and ready. Cheers, Rusty.
Re: [bitcoin-dev] Bech32m BIP: new checksum, and usage for segwit address
Perhaps the title 'Bech32m address format for native v0-16 segregated witness outputs' should be v1-16? This is a thorough and clear write-up; a superb read. Side note: I am deeply impressed with your mathematical jujitsu that no bech32 string is also a valid bech32m string *even with three errors*. This sways me even more that this approach is correct. Untested-Ack. Thanks, Rusty. Pieter Wuille via bitcoin-dev writes: > On Monday, January 4, 2021 4:14 PM, Pieter Wuille > wrote: > >> Hello all, >> >> here is a BIP draft for changing the checksum in native segwit addresses for >> v1 and higher, following the discussion in >> https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-December/018293.html >> >> Overall, the idea is: >> * Define a new encoding which is a tweaked variant of Bech32, called >> Bech32m. It refers to the Bech32 section of BIP173, which remains in effect. >> * Define a new segwit address encoding which replaces the corresponding >> section in BIP173. It prescribes using Bech32 for v0 witness addresses, and >> Bech32m for other versions. > > Of course I forgot the actual link: > https://github.com/sipa/bips/blob/bip-bech32m/bip-bech32m.mediawiki > > Cheers, > > -- > Pieter
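For reference, the entire difference between the two encodings is the constant the checksum residue is checked against: 1 for Bech32 (BIP 173) and 0x2bc830a3 for Bech32m (the BIP draft above). A minimal classifier built on the BIP 173/350 reference polymod (the helper name `checksum_variant` is mine):

```python
CHARSET = "qpzry9x8gf2tvdw0s3jn54khce6mua7l"
BECH32_CONST, BECH32M_CONST = 1, 0x2bc830a3

def bech32_polymod(values):
    """BIP 173 checksum polynomial, over 5-bit values."""
    generator = [0x3b6a57b2, 0x26508e6d, 0x1ea119fa, 0x3d4233dd, 0x2a1462b3]
    chk = 1
    for value in values:
        top = chk >> 25
        chk = (chk & 0x1ffffff) << 5 ^ value
        for i in range(5):
            chk ^= generator[i] if ((top >> i) & 1) else 0
    return chk

def bech32_hrp_expand(hrp):
    return [ord(x) >> 5 for x in hrp] + [0] + [ord(x) & 31 for x in hrp]

def checksum_variant(addr):
    """Return 'bech32', 'bech32m', or None if neither checksum matches."""
    addr = addr.lower()
    hrp, data = addr.rsplit("1", 1)
    values = [CHARSET.index(c) for c in data]
    residue = bech32_polymod(bech32_hrp_expand(hrp) + values)
    if residue == BECH32_CONST:
        return "bech32"
    if residue == BECH32M_CONST:
        return "bech32m"
    return None
```

Rusty's observation above is exactly the point of the constant change: a valid Bech32 string cannot pass the Bech32m check (or vice versa) even after several errors.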
Re: [bitcoin-dev] New PSBT version proposal
Andrew Chow writes: > Hi Rusty, > > On 1/6/21 6:26 PM, Rusty Russell wrote: >> Hi Andrew et al, >> >> Very excited to see this progress; thanks for doing all the >> work! Sorry for the delayed feedback, I didn't get to this before the >> break. >> >>> Additionally, I would like to add a new global field: >>> * PSBT_GLOBAL_UNDER_CONSTRUCTION = 0x05 >>> * Key: empty >>> * Value: A single byte as a boolean. 0 for False, 1 for True. All >>> other values are prohibited. Must be omitted for PSBTv0, may be omitted >>> in PSBTv2. >>> >>> PSBT_GLOBAL_UNDER_CONSTRUCTION is used to signal whether inputs and >>> outputs can be added to the PSBT. This flag may be set to True when >>> inputs and outputs are being updated, signed, and finalized. However >>> care must be taken when there are existing signatures. If this field is >>> omitted or set to False, no further inputs and outputs may be added to >>> the PSBT. >> I wonder if this can be flagged simply by omitting the (AFAICT >> redundant) PSBT_GLOBAL_INPUT_COUNT and PSBT_GLOBAL_OUTPUT_COUNT? What >> are the purposes of those fields? > The purpose of those fields is to know how many input and output maps > there are. Without PSBT_GLOBAL_UNSIGNED_TX, there is no way to determine > whether a map is an input map or an output map. So the counts are there > to allow that. Ah, yeah, you need at least the number of input maps :( It's generally preferable to have sections be self-describing; internally if you have a function which takes all the input maps you should be able to trivially tell if you're handed the output maps by mistake. Similarly, it would have been nice to have an input map be a distinctly marked type from global or output maps. Nonetheless, that's a bigger change. You could just require a double-00 terminator between the global, input and output sections though. >> For our uses, there would be no signatures at this stage; it's simply a >> subdivision of the Creator role. 
>> This role would be terminated by >> removing the under-construction marker. For this, it could be clear >> that such an under-construction PSBT SHOULD NOT be signed. > > There are some protocols where signed inputs are added to transactions. Sure, but you can't solve every problem. We've now created the possibility that a PSBT is "under construction" but can't be modified, *and* a very invasive requirement to determine that. I disagree with Andrew's goal here: > 1. PSBT provides no way to modify the set of inputs or outputs after the > Creator role is done. It's simpler if "the under-construction PSBT can be used within the Creator role, which can now have sub-roles". If you really want to allow this (and I think we need to explore concrete examples to justify this complexity!), better to add data to PSBT_GLOBAL_UNDER_CONSTRUCTION: 1. a flag to indicate whether inputs are modifiable. 2. a flag to indicate whether outputs are modifiable. 3. a bitmap of what inputs are SIGHASH_SINGLE. If you add a signature which is not SIGHASH_NONE, you clear the "outputs modifiable" flag. If you add a signature which is not SIGHASH_ANYONECANPAY, you clear the "inputs modifiable" flag. If you clear both flags, you remove the PSBT_GLOBAL_UNDER_CONSTRUCTION altogether. You similarly set the bitmap depending on whether all sigs are SIGHASH_SINGLE. Cheers, Rusty.
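The flag-clearing rules proposed above could be sketched as follows. This is hypothetical: the field layout described in the mail was never standardized in exactly this form, and the class name is invented; only the sighash byte values are Bitcoin's usual constants.

```python
# Sketch of the proposed PSBT_GLOBAL_UNDER_CONSTRUCTION flag rules.
SIGHASH_ALL, SIGHASH_NONE, SIGHASH_SINGLE = 0x01, 0x02, 0x03
SIGHASH_ANYONECANPAY = 0x80

class UnderConstruction:
    """Tracks the two proposed modifiability flags as signatures arrive."""
    def __init__(self):
        self.inputs_modifiable = True
        self.outputs_modifiable = True

    def add_signature(self, sighash_type):
        base = sighash_type & 0x1f
        # Any sig other than SIGHASH_NONE commits to the outputs.
        if base != SIGHASH_NONE:
            self.outputs_modifiable = False
        # Any sig without ANYONECANPAY commits to the input set.
        if not (sighash_type & SIGHASH_ANYONECANPAY):
            self.inputs_modifiable = False
        # When both flags are cleared, the mail says the field would be
        # removed altogether; signal that by returning False.
        return self.inputs_modifiable or self.outputs_modifiable
```

For example, a plain SIGHASH_ALL signature clears both flags at once, while SIGHASH_ALL|ANYONECANPAY leaves the input set open.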
Re: [bitcoin-dev] New PSBT version proposal
Hi Andrew et al, Very excited to see this progress; thanks for doing all the work! Sorry for the delayed feedback, I didn't get to this before the break. > Additionally, I would like to add a new global field: > * PSBT_GLOBAL_UNDER_CONSTRUCTION = 0x05 > * Key: empty > * Value: A single byte as a boolean. 0 for False, 1 for True. All > other values are prohibited. Must be omitted for PSBTv0, may be omitted > in PSBTv2. > > PSBT_GLOBAL_UNDER_CONSTRUCTION is used to signal whether inputs and > outputs can be added to the PSBT. This flag may be set to True when > inputs and outputs are being updated, signed, and finalized. However > care must be taken when there are existing signatures. If this field is > omitted or set to False, no further inputs and outputs may be added to > the PSBT. I wonder if this can be flagged simply by omitting the (AFAICT redundant) PSBT_GLOBAL_INPUT_COUNT and PSBT_GLOBAL_OUTPUT_COUNT? What are the purposes of those fields? For our uses, there would be no signatures at this stage; it's simply a subdivision of the Creator role. This role would be terminated by removing the under-construction marker. For this, it could be clear that such an under-construction PSBT SHOULD NOT be signed. Otherwise, if an explicit marker is required, I would omit the value and simply use its existence as a flag. Having two "false" values is simply asking for trouble. Thanks! Rusty. PS. Perhaps we should change the name to PBT (Partial Bitcoin Transaction) now, since it's more than just signing...
Re: [bitcoin-dev] Progress on bech32 for future Segwit Versions (BIP-173)
Mike Schmidt via bitcoin-dev writes: > I am happy to re-test the services Harding listed previously for v1 send > support next week. > > Suggestions of additional services that would be valuable to test are > welcome as well. Thanks! I am a little disappointed that I won't get to ask Bitcoin Twitter to send tips to Pieter[1] though... I would like to hear from the services who currently support v1+ (who thus *would* have to change their software) whether they have a technical preference for option 1 or 2. Cheers, Rusty. [1] Or just maybe, tips to some random miner...
Re: [bitcoin-dev] Progress on bech32 for future Segwit Versions (BIP-173)
ZmnSCPxj writes: > I believe this is actually my code, which was later refactored by John > Barboza when we were standardizing the `param` system. Ah, sorry! > This was intended only as a simple precaution against creating non-standard > transactions, and not an assumption that future versions should be invalid. > The intent is that further `else if` branches would be added for newer > witness versions and whatever length restrictions they may have, as the `/* > Insert other witness versions here. */` comment implies. Yes, I mentioned it here because I've found this to be a common misconception; the *idea* was that applications' segwit code would not have to be reworked for future upgrades, but that information propagated poorly. (Just as well, because of overly strict standardness rules, the overflow bug, and now the proposed validation changes, it turns out this lack of forward compat is a feature!) Thanks! Rusty.
Re: [bitcoin-dev] Progress on bech32 for future Segwit Versions (BIP-173)
Rusty Russell writes: > Accepts > --- > Green: ef1662fd2eb736612afb1b60e3efabfdf700b1c4822733d9dbe1bfee607a5b9b > blockchain.info: > 64b0fcb22d57b3c920fee1a97b9facec5b128d9c895a49c7d321292fb4156c21 PEBKAC. Pasted wrong address. Here are correct results:

Rejects
---
c-lightning: "Could not parse destination address, destination should be a valid address"
Phoenix: "Invalid data. Please try again."
blockchain.info: "Your Bitcoin transaction failed to send. Please try again."

Accepts
---
Green: 9e4ab6617a2983439181a304f0b4647b63f51af08fdd84b0676221beb71a8f21

Cheers, Rusty.
Re: [bitcoin-dev] Progress on bech32 for future Segwit Versions (BIP-173)
Pieter Wuille writes: > Here is a BIP341 witness v1 address, corresponding to just the generator as > inner public key (using TapTweak(pubkey) as tweak, as suggested by the BIP): > > bc1pmfr3p9 YOU j00pfxjh WILL 0zmgp99y8zf LOSE tmd3s5pmedqhy MONEY > ptwy6lm87hf5ss52r5n8 Here are my initial results:

Rejects
---
c-lightning: "Could not parse destination address, destination should be a valid address"
Phoenix: "Invalid data. Please try again."

Accepts
---
Green: ef1662fd2eb736612afb1b60e3efabfdf700b1c4822733d9dbe1bfee607a5b9b
blockchain.info: 64b0fcb22d57b3c920fee1a97b9facec5b128d9c895a49c7d321292fb4156c21

Will keep exploring (and others are welcome to try too!) Cheers, Rusty.
Re: [bitcoin-dev] Progress on bech32 for future Segwit Versions (BIP-173)
Pieter Wuille writes: > Today, no witness v1 receivers exist. So it seems to me the only question > is what software/infrastructure exist that supports sending to witness v1, > and whether they (and their userbase) are more or less likely to upgrade > before receivers appear than those that don't. > > Clearly if only actively developed software currently supports sending to > v1 right now, then the question of forward compatibility is moot, and I'd > agree the cleanliness of option 2 is preferable. If software supports v1 today and doesn't get upgraded, and we pursue option 1, then a trailing typo can make trouble. Not directly lose money (since the tx won't get propagated), but for most systems (e.g. hosted wallets) someone has to go in and figure out the error and fix it up. Option 2 means they're likely to fix their systems the first time someone tries a v1 send, not the first time someone makes a trailing typo (which might be years). > Does anyone have an up-to-date overview of where to-future-witness sending > is supported? I know Bitcoin Core does. Anecdata: c-lightning doesn't allow withdraw to segwit > 0. It seems that the contributor John Barboza (CC'd) assumed that future versions should be invalid:

    if (bip173) {
        bool witness_ok = false;
        if (witness_version == 0 && (witness_program_len == 20
                                     || witness_program_len == 32)) {
            witness_ok = true;
        }
        /* Insert other witness versions here. */

>> Deferring a hard decision is not useful unless we expect things to be >> easier in future, and I only see it getting harder as time passes and >> userbases grow. > > Possibly, but in the past I think there has existed a pattern where adoption > of new technology is at least partially based on certain infrastructure > and codebases going out of business and/or being replaced with newer ones, > rather than improvements to existing ones. 
> > If that effect is significant, option 1 may be preferable: it means less > compatibility issues in the short term, and longer term all that may be > required is fixing the spec, and waiting long enough for old/unmaintained code > to be replaced. Hmm, I'd rather cleanly break zombie infra, since they're exactly the kind that won't/can't fix the case where someone trailing-typos? > As for how long: new witness version/length combinations are only rarely > needed, > and there are 14 length=32 ones left to pick. We'll likely want to use those > first anyway, as it's the cheapest option with 128-bit collision resistance. > Assuming future constructions have something like BIP341's leaf versioning, > new > witness version/length combinations are only required for: > > * Changes to the commitment structure of script execution (e.g. Graftroot, > different hash function for Merkle trees, ...) > * Upgrades to new signing cryptography (EC curve change, PQC, ...). > * Changes to signatures outside of a commitment structure (e.g. new sighash > modes for the keypath in BIP341, or cross-input aggregation for them). > > and in general, not for things like new script opcodes, or even for fairly > invasive redesigns of the script language itself. Hmm, good point. These can all be done with version bumps. The only use for extra bytes I can see is per-UTXO flags, but even then I can't see why you'd need to know them until they're spent (in which case you stash the flag in the input, not the output). And fewer bytes seems bad for fungibility, since multisig would be dangerous. But the future keeps surprising me, so I'm still hesitant. >> The good news is that the change is fairly simple and the reference >> implementations are widely used so change is not actually that hard >> once the decision is made. > > Indeed. Whatever observations we had about adoption of base58 -> bech32 may > not > apply because the change to a different checksum is fairly trivial compared to > that. 
> Still, presence of production codebases that just don't update at all > may complicate this. > >> > Hopefully by the time we want to use segwit v2, most software will have >> > implemented length limits and so we won't need any additional consensus >> > restrictions from then on forward. >> >> If we are prepared to commit to restrictions on future addresses. >> >> We don't know enough to do that, however, so I'm reluctant; I worry that >> a future scheme where we could save (e.g.) 2 bytes will be impractical due >> to our encoding restrictions, resulting in unnecessary onchain bloat. > I'm opposed to consensus-invalidating certain length/version combinations, if > that's what you're suggesting, and I don't think there is a need for it. This *seems* to leave the option of later removing size restrictions, but I think this is an illusion. Upgrading will only get harder over time: we would instead opt for some kind of backward compatibility hack (i.e. 33 byte address, but
Re: [bitcoin-dev] Progress on bech32 for future Segwit Versions (BIP-173)
"David A. Harding" writes: > On Thu, Oct 08, 2020 at 10:51:10AM +1030, Rusty Russell via bitcoin-dev wrote: >> Hi all, >> >> I propose an alternative to length restrictions suggested by >> Russell in https://github.com/bitcoin/bips/pull/945 : use the >> https://gist.github.com/sipa/a9845b37c1b298a7301c33a04090b2eb variant, >> unless the first byte is 0. >> >> Here's a summary of each proposal: >> >> Length restrictions (future segwits must be 10, 13, 16, 20, 23, 26, 29, >> 32, 36, or 40 bytes) >> 1. Backwards compatible for v1 etc; old code still works. >> 2. Restricts future segwit versions, may require new encoding if we >> want a diff length (or waste chainspace if we need to have a padded >> version for compat). >> >> Checksum change based on first byte: >> 1. Backwards incompatible for v1 etc; only succeeds 1 in a billion. >> 2. Weakens guarantees against typos in first two data-part letters to >> 1 in a billion.[1] > > Excellent summary! > >> I prefer the second because it forces upgrades, since it breaks so >> clearly. And unfortunately we do need to upgrade, because the length >> extension bug means it's unwise to accept non-v0 addresses. > > I don't think the second option forces upgrades. It just creates > another opt-in address format that means we'll spend another several > years with every wallet having two address buttons, one for a "segwit > address" (v0) and one for a "taproot address" (v1). Or maybe three > buttons, with the third being a "taproot-in-a-segwit-address" (v1 > witness program using the original bech32 encoding). If we go for option 2, v1 (generated from bitcoin core) will simply fail the first time you try to test it. So it will force an upgrade. There are fewer places generating addresses than accepting them, so this seems the most likely scenario. OTOH, with option 1, anyone accepting v1 addresses today is going to become a liability once v1 addresses start being generated. 
> It took a lot of community effort to get widespread support for bech32 > addresses. Rather than go through that again, I'd prefer we use the > backwards compatible proposal from BIPs PR#945 and, if we want to > maximize safety, consensus restrict v1 witness program size, e.g. reject > transactions with scriptPubKeys paying v1 witness programs that aren't > exactly 32 bytes. Yes, I too wish we weren't here. :( Deferring a hard decision is not useful unless we expect things to be easier in future, and I only see it getting harder as time passes and userbases grow. The good news is that the change is fairly simple and the reference implementations are widely used so change is not actually that hard once the decision is made. > Hopefully by the time we want to use segwit v2, most software will have > implemented length limits and so we won't need any additional consensus > restrictions from then on forward. If we are prepared to commit to restrictions on future addresses. We don't know enough to do that, however, so I'm reluctant; I worry that a future scheme where we could save (e.g.) 2 bytes will be impractical due to our encoding restrictions, resulting in unnecessary onchain bloat. Cheers, Rusty.
[bitcoin-dev] Progress on bech32 for future Segwit Versions (BIP-173)
Hi all, I propose an alternative to length restrictions suggested by Russell in https://github.com/bitcoin/bips/pull/945: use the https://gist.github.com/sipa/a9845b37c1b298a7301c33a04090b2eb variant, unless the first byte is 0. Here's a summary of each proposal: Length restrictions (future segwits must be 10, 13, 16, 20, 23, 26, 29, 32, 36, or 40 bytes) 1. Backwards compatible for v1 etc; old code still works. 2. Restricts future segwit versions, may require new encoding if we want a diff length (or waste chainspace if we need to have a padded version for compat). Checksum change based on first byte: 1. Backwards incompatible for v1 etc; only succeeds 1 in a billion. 2. Weakens guarantees against typos in first two data-part letters to 1 in a billion.[1] I prefer the second because it forces upgrades, since it breaks so clearly. And unfortunately we do need to upgrade, because the length extension bug means it's unwise to accept non-v0 addresses. (Note non-v0 segwit didn't relay before v0.19.0 anyway, so many places may already be restricting to v0 segwit). The sooner a decision is reached on this, the sooner we can begin upgrading software for a taproot world. Thanks, Rusty. PS. Lightning uses bech32 over longer lengths, but the checksum is less critical; we'd prefer to follow whatever bitcoin chooses. [1] Technically less for non-v0: you have a 1 in 8 chance of a typo in the second letter changing the checksum algorithm, so it's 1 in 8 billion.
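A sketch of what the length-restriction variant would look like to a validator (illustrative only: the v0 lengths are the existing BIP 141 rule, and the whitelist is the set of future lengths proposed above):

```python
# Proposed whitelist of future segwit program lengths (option 1 above).
ALLOWED_FUTURE_LENGTHS = {10, 13, 16, 20, 23, 26, 29, 32, 36, 40}

def program_length_ok(witness_version, program):
    """Hypothetical length check under the length-restriction proposal."""
    if witness_version == 0:
        # Existing BIP 141 rule: P2WPKH (20) or P2WSH (32) only.
        return len(program) in (20, 32)
    return len(program) in ALLOWED_FUTURE_LENGTHS
```

The lengths are chosen so that a single trailing insertion or deletion in a bech32 string (the "length extension bug") produces a program length outside the set, and so fails this check.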
Re: [bitcoin-dev] RBF Pinning with Counterparties and Competing Interest
"David A. Harding via bitcoin-dev" writes: > To avoid the excessive wasting of bandwidth. Bitcoin Core's defaults > require each replacement pay a feerate of 10 nBTC/vbyte over an existing > transaction or package, and the defaults also allow transactions or > packages up to 100,000 vbytes in size (~400,000 bytes). So, without > enforcement of BIP125 rule 3, an attacker starting at the minimum > default relay fee also of 10 nBTC/vbyte could do the following: > > - Create a ~400,000 bytes tx with feerate of 10 nBTC/vbyte (1 mBTC total > fee) > > - Replace that transaction with 400,000 new bytes at a feerate of 20 > nBTC/vbyte (2 mBTC total fee) > > - Perform 998 additional replacements, each increasing the feerate by 10 > nBTC/vbyte and the total fee by 1 mBTC, using a total of 400 megabytes > (including the original transaction and first replacement) to > ultimately produce a transaction with a feerate of 10,000 nBTC/vbyte > (1 BTC total fee) > > - Perform one final replacement of the latest 400,000 byte transaction > with a ~200-byte (~150 vbyte) 1-in, 1-out P2WPKH transaction that pays > a feerate of 10,010 nBTC/vbyte (1.5 mBTC total fee) To be fair, if the feerate you want is 100x the minimum permitted, you can always use 100x as much bandwidth as necessary without extra cost. If everyone (or some major tx producers) were to do that, it would suck. To fix this properly, you really need to aggressively delay processing (thus propagation) of transactions which aren't likely to be in the next (few?) blocks. This is a more miner-incentive-compatible scheme. However, I realize this is a complete rewrite of bitcoind's logic, and I'm not volunteering to do it! Cheers, Rusty.
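Back-of-envelope arithmetic for Harding's scenario, with all numbers taken from the quoted mail:

```python
# Each replacement re-relays the full ~400,000-byte transaction, and the
# feerate is stepped from 10 to 10,000 nBTC/vbyte in 1000 steps before
# the final small replacement is the only thing that actually gets mined.
tx_bytes = 400_000          # ~max standard package, raw bytes
replacements = 1000         # steps of 10 nBTC/vbyte each

bytes_relayed = replacements * tx_bytes   # per peer: 400,000,000 (~400 MB)
fee_paid_btc = 0.0015       # 1.5 mBTC: fee of the final mined replacement
```

That is, roughly 400 MB of relay bandwidth per peer is consumed while the attacker ultimately pays only ~1.5 mBTC in mined fees.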
Re: [bitcoin-dev] New BIP for p2p messages/state enabling reconciliation-based protocols (Erlay)
Hi Gleb, Minor feedback on reading the draft: > sendrecon: > uint32 version Must be exactly 1 currently. At risk of quoting myself[1]: data doesn't have requirements. Actors do. In this case, I assume you mean "writers must set this to 1". What do readers do if it's not? > reqreconcil > uint8 q Coefficient used to estimate set difference. You describe how to calculate q (as a floating point value), but not how to encode it? > Every node stores sets of 128-bit truncated IDs per every peer "*a* set..." or is it "two sets" (if you include the snapshot?). And " *for* every peer" (maybe "which supports tx reconciliation?") > To the best of our knowledge, PinSketch is more bandwidth efficient > than IBLT, especially for the small differences in sets we expect to > operate over. Remove "To the best of our knowledge, ": that makes it sound like it's up for debate. I've implemented and experimented with IBLT, and it's worse. Cheers, Rusty. [1] https://github.com/lightningnetwork/lightning-rfc/blob/master/CONTRIBUTING.md#writing-the-requirements
Re: [bitcoin-dev] [PROPOSAL] Emergency RBF (BIP 125)
"David A. Harding" writes: > On Thu, Jun 06, 2019 at 02:46:54PM +0930, Rusty Russell via bitcoin-dev wrote: >> If that's true, I don't think this proposal makes it worse. > > Here's a scenario that I think shows it being at least 20x worse. [ Snip ] Indeed :( To be fair, if I have a transaction of median size (250 bytes) and I use the current estimatefee 2 of '0.00068325' I get to replace it 68 times; that's $0 for an additional 1GB across all nodes. So, I don't think the current rules are sufficient. But I understand the desire not to make things worse. I'll roll in some changes and re-propose. > It's already hard for wallet software to determine whether or not its > transactions have successfully been relayed. As the deadline approaches, a lightning wallet would RBF with increasing desperation until it gets into a block. It doesn't really matter *why* the tx isn't going through, there's nothing else it can do. > This proposal requires LN > wallets not only be able to guess where the next-block feerate boundary > is in other nodes' individual mempools (now and in the future for the > time it takes the transaction to propagate to ~100% of miners), but it > possibly requires that under the condition that the LN wallet can't > guess too low because it might not get another chance for relay in the > limited time available before contract expiration. I think you mean any proposal which relies on a deadline? If so, that bus has already left. When you see a block you can guess the fees required for the next block. You need some smoothing to avoid wild spikes, but in practice you can start this "desperation mode" 10 blocks before your deadline. Without RBF changes, it needs to assume that it needs to replace a 400kSipa tx @ feerate-for-next-block. With some RBF change, it need only replace @feerate-for-next-block. > Considered that way, I worry that these constraints produce a recipe for > paying extremely high feerates. 
If that's an actual risk, is that > actually significantly better than dealing with the existing transaction > pinning issue where one needs to pay a high total fee in order to evict > a bunch of junk descendants? Paying lots of fees may not be the optimal > solution to the problem of having to pay lots of fees. :-) I don't understand this at all, sorry. Cheers, Rusty. ___ bitcoin-dev mailing list bitcoin-dev@lists.linuxfoundation.org https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
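Rusty's back-of-envelope figures above can be reproduced in a few lines. The tx size, feerate, and "68 times" come from the email itself; the node count used for the "1GB across all nodes" estimate is an assumption, not from the email:

```python
# Reproducing the back-of-envelope numbers above.  The feerate and tx size
# come from the email; the node count (60,000) is an assumption.
tx_size_vbytes = 250                  # median transaction size
estimatefee_btc_per_kvb = 0.00068325  # quoted "estimatefee 2" result
incremental_relay_sat_per_vb = 1      # bitcoind's default incremental relay feerate

budget_sats = estimatefee_btc_per_kvb * 1e8 * tx_size_vbytes / 1000
bump_cost_sats = incremental_relay_sat_per_vb * tx_size_vbytes
replacements = int(budget_sats // bump_cost_sats)
print(replacements)  # 68, matching the email

node_count = 60_000  # assumed network size
total_gb = replacements * tx_size_vbytes * node_count / 1e9
print(round(total_gb, 2))  # ~1 GB relayed network-wide, with no extra fee paid to miners
```

Each replacement only has to pay the incremental relay feerate more than its predecessor, which is what makes the repeated relay effectively free for the attacker.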
Re: [bitcoin-dev] [PROPOSAL] Emergency RBF (BIP 125)
Matt Corallo writes: > I think this needs significantly improved motivation/description. A few areas > I'd like to see calculated out: > > 1) wrt rule 3, for this to be > obviously-incentive-compatible-for-the-next-miner, I'd think no evicted > transactions would be allowed to be in the next block range. This would > probably require some significant additional tracking in today's mempool > logic. This is a good question, which is why I really wanted to look into the implementation details. There are some approximations possible wrt. pre- and post- tx bundle feerate, but they have to be examined closely. > 2) wrt rule 4, I'd like to see a calculation of worst-case free relay. I > think we're already not in a great place, but maybe it's worth it or maybe > there is some other way to reduce this cost (intuitively it looks like this > proposal could make things very, very, very bad). I *think* you can currently create a tx at 1 sat/byte, have it propagate, then RBF it to 2 sat/byte, 3... and do that a few thousand times before your transaction gets mined. If that's true, I don't think this proposal makes it worse. >> 3) wrt rule 5, I'd like to see benchmarks, it's probably a pretty nasty DoS >> attack, but it may also be the case that it is (a) not worse than other >> fundamental issues or (b) sufficiently expensive. I thought we still meet rule 5 in practice since bitcoind will never even accept a tree of unconfirmed txs which is > 100 txs? That would still stand, it's just that we'd still consider a replacement. > 4) As I've indicated before, I'm generally not a fan of such vague > protections for time-critical transactions such as payment channel > punishment transactions. The bitcoin network offers no propagation guarantees; it's all best effort anyway. This makes it no worse, and we can tunnel txs through the lightning network if we have to.
> At a high-level, in this context your counterparty's transactions (not to > mention every other transaction in everyone's mempool) are still involved in > the decision about whether to accept an RBF, in contrast to previous > proposals, which makes it much harder to reason about. As a specific example, > if an attacker exploits mempool policy differences they may cause your > concept of "top 4M weight" to be bogus for a subset of nodes, causing > propagation to be limited. If miners have a conflicting tx in the top 4MSipa, you don't have a problem. So an attacker needs to limit propagation in a way which isolates the miners from either the new tx or the conflicting one, which is much harder. > Obviously there is also a ton more client-side knowledge required and > complexity to RBF decisions here than other previous, more narrowly-targeted > proposals. Define client-side here? I'd say from the lightning side it's as simple as a normal RBF policy until you get within a few blocks of a deadline, then you increase the fees until it's well within reach of the next block. You can even approximate this by looking at fees on the previous block, with some care for outliers. > (I don't think this one use-case being not optimal should prevent such a > proposal, i agree it's quite nice for some other cases). I like that it's conceptually simple and incentive-robust, and doesn't really rely on bitcoind's internal policy mechanics of RBF. I think in the longer term we're going to need other mechanisms for restricting abusive propagation anyway, but that's a bit out-of-scope. Thanks! Rusty.
Re: [bitcoin-dev] [PROPOSAL] Emergency RBF (BIP 125)
"Russell O'Connor" writes: > Hi Rusty, > > On Sun, Jun 2, 2019 at 9:21 AM Rusty Russell via bitcoin-dev < > bitcoin-dev@lists.linuxfoundation.org> wrote: > >> The new "emergency RBF" rule: >> >> 6. If the original transaction was not in the first 4,000,000 weight >> units of the fee-ordered mempool and the replacement transaction is, >> rules 3, 4 and 5 do not apply. >> >> This means: >> >> 3. This proposal does not open any significant new ability to RBF spam, >>since it can (usually) only be used once. IIUC bitcoind won't >>accept more than 100 descendants of an unconfirmed tx anyway. >> > > Is it not possible for Alice to grief Bob's node by alternating RBFing two > transactions, each one placing itself at the bottom of Bob's top 4,000,000 > weight mempool which pushes the other one below the top 4,000,000 weight, > and then repeating with the other transaction? It might be possible to > amend this proposal to partially mitigate this. Good point. This will cost Alice approximately one tx every block, but that may still be annoying. My intuition says it's hard to play these games across swathes of non-direct peers, since mempools are in constant flux and propagation is a bit random. What mitigations were you thinking? Cheers, Rusty.
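Rusty's "one tx every block" cost estimate for Alice's alternating-replacement griefing can be sketched numerically; every figure here is an illustrative assumption, not a measurement:

```python
# Rough cost model for the alternating-replacement griefing described
# above: roughly one of Alice's two transactions confirms per block at
# the next-block rate.  All figures are illustrative assumptions.
tx_vbytes = 250              # size of each of Alice's two transactions
next_block_sat_per_vb = 20   # feerate needed to sit at the bottom of the
                             # top 4,000,000 weight units (assumed)
blocks = 144                 # keep the griefing up for ~one day

cost_sats = blocks * tx_vbytes * next_block_sat_per_vb
print(cost_sats)  # 720000 sats/day under these assumptions
```

The point of the sketch is that the griefing is not free: rule 6 waives the absolute-fee rules, but each replacement still has to genuinely bid into the next block, and roughly one such bid per block actually gets mined.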
[bitcoin-dev] [PROPOSAL] Emergency RBF (BIP 125)
Hi all, I want to propose a modification to rules 3, 4 and 5 of BIP 125. To remind you of BIP 125:

3. The replacement transaction pays an absolute fee of at least the sum paid by the original transactions.

4. The replacement transaction must also pay for its own bandwidth at or above the rate set by the node's minimum relay fee setting.

5. The number of original transactions to be replaced and their descendant transactions which will be evicted from the mempool must not exceed a total of 100 transactions.

The new "emergency RBF" rule:

6. If the original transaction was not in the first 4,000,000 weight units of the fee-ordered mempool and the replacement transaction is, rules 3, 4 and 5 do not apply.

This means:

1. RBF can be used in adversarial conditions, such as lightning unilateral closes where the adversary has another valid transaction and can use it to block yours. This is a problem when we allow differential fees between the two current lightning transactions (aka "Bring Your Own Fees").

2. RBF can be used without knowing about miners' mempools, or that the above problem is occurring. One simply gets close to the required maximum height for lightning timeout, and bids to get into the next block.

3. This proposal does not open any significant new ability to RBF spam, since it can (usually) only be used once. IIUC bitcoind won't accept more than 100 descendants of an unconfirmed tx anyway.

4. This proposal makes RBF miner-incentive compatible. Currently the protocol tells miners they shouldn't accept the highest bidding tx for the good of the network. This conflict is particularly sharp in the case where the replacement tx would be immediately minable, which this proposal addresses.

Unfortunately I haven't found time to code this up in bitcoin, but if there's positive response I can try. Thanks for reading! Rusty.
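For concreteness, here is a minimal sketch of how rule 6 might be checked against a fee-ordered mempool. This is not bitcoind code: ancestor-package scoring, relay policy, and the rule 1/2 checks are all ignored, and the class and function names are invented for illustration.

```python
# Sketch of the proposed rule 6: rules 3-5 of BIP 125 are waived when the
# original tx was outside the first 4,000,000 weight units of the
# fee-ordered mempool and the replacement would be inside them.
from dataclasses import dataclass

MAX_BLOCK_WEIGHT = 4_000_000

@dataclass
class MempoolTx:
    txid: str
    weight: int      # weight units (4 * vbytes)
    fee_sats: int

    @property
    def feerate(self) -> float:
        return self.fee_sats / self.weight

def in_top_block(tx: MempoolTx, mempool: list[MempoolTx]) -> bool:
    """True if tx falls within the first 4M weight units when the
    mempool is sorted by feerate (ancestor packages ignored here)."""
    cumulative = 0
    for t in sorted(mempool, key=lambda t: t.feerate, reverse=True):
        cumulative += t.weight
        if t.txid == tx.txid:
            return cumulative <= MAX_BLOCK_WEIGHT
    return False

def emergency_rbf_applies(original, replacement, mempool) -> bool:
    """Rule 6: original not in the next block, replacement would be."""
    pool_after = [t for t in mempool if t.txid != original.txid]
    pool_after.append(replacement)
    return (not in_top_block(original, mempool)
            and in_top_block(replacement, pool_after))
```

For example, a 1 sat-per-weight-unit tx sitting behind 4M weight of higher-feerate transactions is outside the top block; a replacement at a feerate above that backlog triggers the rule.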
Re: [bitcoin-dev] SIGHASH_ANYPREVOUT proposal
Anthony Towns writes: > On Wed, May 22, 2019 at 12:17:31PM +0930, Rusty Russell wrote: >>I prefer to >>change the bip introduction to explicitly shout "THESE SIGNATURE >>HASHES ARE UNSAFE FOR NORMAL WALLET USAGE.", and maybe rename it >>SIGHASH_UNSAFE_ANYPREVOUT. > >> 4. "Rebinding is a new power in bitcoin, and it makes me uncomfortable". >>I have a degree of sympathy for this view, but objections must be >>backed by facts. If we don't understand it well enough, we should >>not do it. > > Yeah, that's where I'm at: if we think something is UNSAFE enough to > need a warning, then I think it's too unsafe to include in the consensus > layer. I would like to find a way of making ANYPREVOUT safe enough that > it doesn't need scary warnings; a week or two ago, chaperone sigs were > my best idea for that. The DO_NOT_WANT naming is to prevent people who *don't* want to use it from using it because it's the "new hotness". It cannot both be powerful enough to do what we need (rebinding) and safe enough for general use (no rebinding). > So here's something I almost think is an argument that ANYPREVOUT is safe > (without chaperone sigs or output tagging, etc). > > #1. I'm assuming funds are "safe" in Bitcoin if they're (a) held in > a cryptographically secured UTXO, (b) under "enough" proof of work > that a reorg is "sufficiently" unlikely. If you don't have both those > assumptions, your money is not safe in obvious ways; if you do have them > both, your money is safe. > > #2. My theory is that as long as you, personally, only use ANYPREVOUT > to sign transactions that are paying the money back to yourself, your > funds will remain safe. > > If you follow this rule, then other people replaying your signature is > not a problem -- the funds will just move from one of your addresses, to > a different address. ...
> Eltoo naturally meets this requirement, as long as you consider "paying > to yourself" to cover both "paying to same multisig address" (for update > transactions) I disagree? Paying to share with an untrusted party is *insecure* without further, complex arrangements. Those arrangements (already a requirement for lightning) worry me far more than the bitcoin-level rebinding, TBH. Lightning relies on #1, not #2. I don't know of any use for #2 in fact; in practice if you have control of keys you can generally sign a new tx, not requiring ANYPREVOUT. If you're trying to blindly spend a tx which may be RBF'd, ANYPREVOUT won't generally help you (amount changes). > #5. It's probably not compatible with luke's "malleability proof" wallet > idea [0]. Malleability is only a concern for funds that aren't already > "sufficiently" buried, and if you're only spending it to yourself that > doesn't help with burying, and if you're spending it to someone else > immediately after, doesn't help with making that transaction less > malleable. But if the line of argument above's correct, that just > recognises that a wallet like that risks losing funds if someone else > reuses its addresses; it doesn't add any systemic risk. And "wallet X > isn't safe" is a risk category we already deal with. Yes, I think our primary concern is risk to non-ANYPREVOUT using txs. That would make ANYPREVOUT a bad idea, but seems we're concluding that's not the case. Secondary, is the accidentally-using-ANYPREVOUT scenario, which I consider unlikely (like accidentally-using-SIGHASHNONE), especially since you need to actually mark your keys now, so you can't do it post-hoc to existing outputs. Final concern is the using-correctly-but-nasty-gotchas. This seems to be inherent in rebinding, and is fully addressed by Don't Reuse Addresses. That is already a requirement for lightning (reusing revocation keys is fatal). 
Others reusing your addresses is already a thing we have to deal with in bitcoin (Enjoy/Sochi). Cheers, Rusty.
Re: [bitcoin-dev] SIGHASH_ANYPREVOUT proposal
Anthony Towns via bitcoin-dev writes: > Hi everybody! > > Here is a followup BIP draft that enables SIGHASH_ANYPREVOUT and > SIGHASH_ANYPREVOUTANYSCRIPT on top of taproot/tapscript. (This is NOINPUT, > despite the name change) I really like this proposal, and am impressed with how cleanly it separated from taproot/tapscript. I believe the chaperone signature requirement should be eliminated: I am aware of four suggested benefits, which I don't believe are addressed adequately enough by chaperones to justify enshrining this complexity into the protocol. 1. "These features could be used dangerously, and chaperone signatures make them harder to use, thus less likely to be adopted by random wallet authors." This change is already hard to implement, even once you're on v1 segwit; you can't just use it with existing outputs. I prefer to change the bip introduction to explicitly shout "THESE SIGNATURE HASHES ARE UNSAFE FOR NORMAL WALLET USAGE.", and maybe rename it SIGHASH_UNSAFE_ANYPREVOUT. 2. "Accidental key reuse can make this unsafe." This is true, but chaperones don't seem to help. The main purpose of ANYPREV is where you can't re-sign; in particular, in lightning you are co-signing with an untrusted party up-front. So you have to share the chaperone privkeys with one untrusted party. The BIP says "SHOULD limit the distribution of those private keys". That seems ridiculously optimistic: don't tell the secret to more than *one* untrusted party? In fact, lightning will also need to hand chaperone keys to watchtowers, so we'll probably end up using some fixed known secret. 3. "Miners can reorg and invalidate downstream txs". There's a principle (ISTR reading it years ago from Greg Maxwell?) that if no spender is malicious, a transaction should generally not become invalid. With ANYPREV, a miner could reattach a transaction during a reorg, changing its txid and invalidating normal spends from it.
In practice, I believe this principle will remain just as generally true with ANYPREV: 1. For lightning the locktime will be fairly high before these txs are generally spendable. 2. Doing this would require special software, since I don't see bitcoin core indexing outputs to enable this kind of rewriting. 3. We already added such a common possibility with RBF, but before I brought it up I don't believe anyone realized. We certainly haven't seen any problems in practice, for similar practical reasons. 4. "Rebinding is a new power in bitcoin, and it makes me uncomfortable". I have a degree of sympathy for this view, but objections must be backed by facts. If we don't understand it well enough, we should not do it. If we do understand it, we should be able to point out real problems. Finally, it seems to me that chaperones can be opt-in, and don't need to burden the protocol. Cheers, Rusty.
Re: [bitcoin-dev] [Lightning-dev] More thoughts on NOINPUT safety
Sorry AJ, my prior email was not constructive :( I consider "my software reused my keys" the most reasonable attack scenario, though still small compared to other lightning attack surfaces. But I understand the general wariness of third-parties reusing SIGHASH_NOINPUT signatures. Since the "must have a non-SIGHASH_NOINPUT" rule addresses the first reuse scenario (as well as the second), I'd be content with that proposal. Future segwit versions may choose to relax it.[1] Cheers, Rusty. [1] Must be consensus, not standardness; my prev suggestion was bogus. Rusty Russell writes: > Anthony Towns writes: >> If you publish to the blockchain: > ... >> 4 can be dropped, state 5 and finish can be altered). Since the CSV delay >> is chosen by the participants, the above is still a possible scenario >> in eltoo, though, and it means there's some risk for someone accepting >> bitcoins that result from a non-cooperative close of an eltoo channel. > > AJ, this was a meandering random walk which shed very little light. > > I don't find the differentiation between malicious and non-malicious > double-spends convincing. Even if you trust A, you already have to > worry about person-who-sent-the-coins-to-A. This expands that set to be > "miner who mined coins sent-to-A", but it's very hard to see what > difference that makes to how you'd handle coins from A. > >> Beyond that, I think NOINPUT has two fundamental ways to cause problems >> for the people doing NOINPUT sigs: >> >> 1) your signature gets applied to an unexpectedly different >> script, perhaps making it look like you've been dealing >> with some blacklisted entity. OP_MASK and similar solves >> this. > > ... followed by two paragraphs describing how it's not a "fundamental > way to cause problems" that you (or I) can see. > >> For the second case, that seems a little more concerning.
The nightmare >> scenario is maybe something like: >> >> * naive users do silly things with NOINPUT signatures, and end up >>losing funds due to replays like the above > > As we've never seen with SIGHASH_NONE? > >> * initial source of funds was some major exchange, who decide it's >>cheaper to refund the lost funds than deal with the customer complaints >> >> * the lost funds end up costing enough that major exchanges just outright >>ban sending funds to any address capable of NOINPUT, which also bans >>all taproot/schnorr addresses > > I don't find this remotely credible. > >> FWIW, I don't have a strong opinion here yet, but: >> >> - I'm still inclined to err on the side of putting more safety >>measures in for NOINPUT, rather than fewer > > In theory, sure. But not feel-good and complex "safety measures" which > don't actually help in practical failure scenarios. > >> - the "must have a sig that commits to the input tx" seems like it >>should be pretty safe, not too expensive, and keeps taproot's privacy >>benefits in the cases where you end up needing to use NOINPUT > > If this is considered necessary, can it be a standardness rule rather > than consensus? > > Thanks, > Rusty.
Re: [bitcoin-dev] [Lightning-dev] More thoughts on NOINPUT safety
Anthony Towns writes: > If you publish to the blockchain: ... > 4 can be dropped, state 5 and finish can be altered). Since the CSV delay > is chosen by the participants, the above is still a possible scenario > in eltoo, though, and it means there's some risk for someone accepting > bitcoins that result from a non-cooperative close of an eltoo channel. AJ, this was a meandering random walk which shed very little light. I don't find the differentiation between malicious and non-malicious double-spends convincing. Even if you trust A, you already have to worry about person-who-sent-the-coins-to-A. This expands that set to be "miner who mined coins sent-to-A", but it's very hard to see what difference that makes to how you'd handle coins from A. > Beyond that, I think NOINPUT has two fundamental ways to cause problems > for the people doing NOINPUT sigs: > > 1) your signature gets applied to an unexpectedly different > script, perhaps making it look like you've been dealing > with some blacklisted entity. OP_MASK and similar solves > this. ... followed by two paragraphs describing how it's not a "fundamental way to cause problems" that you (or I) can see. > For the second case, that seems a little more concerning. The nightmare scenario is maybe something like: > > * naive users do silly things with NOINPUT signatures, and end up >losing funds due to replays like the above As we've never seen with SIGHASH_NONE? > * initial source of funds was some major exchange, who decide it's >cheaper to refund the lost funds than deal with the customer complaints > > * the lost funds end up costing enough that major exchanges just outright >ban sending funds to any address capable of NOINPUT, which also bans >all taproot/schnorr addresses I don't find this remotely credible. > FWIW, I don't have a strong opinion here yet, but: > > - I'm still inclined to err on the side of putting more safety >measures in for NOINPUT, rather than fewer In theory, sure.
But not feel-good and complex "safety measures" which don't actually help in practical failure scenarios. > - the "must have a sig that commits to the input tx" seems like it >should be pretty safe, not too expensive, and keeps taproot's privacy >benefits in the cases where you end up needing to use NOINPUT If this is considered necessary, can it be a standardness rule rather than consensus? Thanks, Rusty.
Re: [bitcoin-dev] [Lightning-dev] CPFP Carve-Out for Fee-Prediction Issues in Contracting Applications (eg Lightning)
Matt Corallo writes: >>> Thus, even if you imagine a steady-state mempool growth, unless the >>> "near the top of the mempool" criteria is "near the top of the next >>> block" (which is obviously *not* incentive-compatible) >> >> I was defining "top of mempool" as "in the first 4 MSipa", ie. next >> block, and assumed you'd only allow RBF if the old package wasn't in the >> top and the replacement would be. That seems incentive compatible; more >> than the current scheme? > > My point was, because of block time variance, even that criteria doesn't hold > up. If you assume a steady flow of new transactions and one or two blocks > come in "late", suddenly "top 4MWeight" isn't likely to get confirmed until a > few blocks come in "early". Given block variance within a 12 block window, > this is a relatively likely scenario. [ Digging through old mail. ] Doesn't really matter. The lightning close algorithm would be:

1. Give bitcoind the unilateral close.
2. Ask bitcoind what the current expedited fee is (or survey your mempool).
3. Give bitcoind a child "push" tx at that total feerate.
4. If the next block doesn't contain the unilateral close tx, goto 2.

In this case, if you allow a simplified RBF where 'you can replace if 1. feerate is higher, 2. new tx is in first 4Msipa of mempool, 3. old tx isn't', it works. It allows someone 100k of free tx spam, sure. But it's simple. We could further restrict it by marking the unilateral close somehow to say "gonna be pushed" and further limiting the child tx weight (say, 5kSipa?) in that case. Cheers, Rusty.
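The four-step close algorithm above can be sketched against bitcoind's JSON-RPC interface. This is a hedged illustration, not lightning implementation code: the `rpc` wrapper object, the `make_child_tx` signing helper, and the injected `wait_one_block` callback are all assumed for the sake of the sketch.

```python
def unilateral_close(rpc, close_tx_hex, close_txid, make_child_tx, wait_one_block):
    """Steps 1-4 above.  `rpc` is assumed to wrap bitcoind's
    sendrawtransaction / estimatesmartfee / gettransaction RPCs;
    `make_child_tx` is assumed to build and sign a CPFP child bringing
    the package to the given feerate (RBF-ing any earlier child)."""
    # 1. Give bitcoind the unilateral close.
    rpc.sendrawtransaction(close_tx_hex)
    while True:
        # 2. Ask bitcoind for the current expedited (next-block) feerate.
        feerate = rpc.estimatesmartfee(1)["feerate"]
        # 3. Give bitcoind a child "push" tx at that total feerate.
        rpc.sendrawtransaction(make_child_tx(feerate))
        wait_one_block()
        # 4. If the next block didn't contain the close tx, goto 2.
        if rpc.gettransaction(close_txid).get("confirmations", 0) > 0:
            return
```

The loop re-bids once per block, which is exactly the pattern the simplified RBF rule in the email is meant to let through: each new child is at a higher feerate and would land in the next 4Msipa while the old child would not.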
Re: [bitcoin-dev] [Lightning-dev] CPFP Carve-Out for Fee-Prediction Issues in Contracting Applications (eg Lightning)
Matt Corallo writes: > Ultimately, defining a "near the top of the mempool" criteria is fraught > with issues. While it's probably OK for the original problem (large > batched transactions where you don't want a single counterparty to > prevent confirmation), lightning's requirements are very different. > Instead of wanting a high probability that the transaction in question > confirms "soon", we need certainty that it will confirm by some deadline. I don't think it's different, in practice. > Thus, even if you imagine a steady-state mempool growth, unless the > "near the top of the mempool" criteria is "near the top of the next > block" (which is obviously *not* incentive-compatible) I was defining "top of mempool" as "in the first 4 MSipa", ie. next block, and assumed you'd only allow RBF if the old package wasn't in the top and the replacement would be. That seems incentive compatible; more than the current scheme? The attack against this is to make a 100k package which would just get into this "top", then push it out with a separate tx at slightly higher fee, then repeat. Of course, timing makes that hard to get right, and you're paying real fees for it too. Sure, an attacker can make you pay next-block high fees, but it's still better than our current "*always* overpay and hope!", and you can always decide at the time based on whether the expiring HTLC(s) are worth it. But I think whatever's simplest to implement should win, and I'm not in a position to judge that accurately. Thanks, Rusty.
Re: [bitcoin-dev] Safer sighashes and more granular SIGHASH_NOINPUT
Johnson Lau writes: >> On 17 Dec 2018, at 11:10 AM, Rusty Russell wrote: >> My anti-complexity argument leads me to ask why we'd support >> OP_CODESEPARATOR at all? Though my argument is weaker here: no wallet >> need support it. > > Because it could make scripts more compact in some cases? > > This is an example: > https://github.com/bitcoin/bitcoin/pull/11423#issuecomment-333441321 > <https://github.com/bitcoin/bitcoin/pull/11423#issuecomment-333441321> > > But this is probably not a good example for taproot, as it could be more > efficient by making the 2 branches as different script merkle leaves. Thanks, I hadn't seen this before! That's also the first time I've seen SIGHASH_NONE used. >> But I don't see how OP_CODESEPARATOR changes anything here, wrt NOINPUT? >> Remember, anyone can create an output which can be spent by any NOINPUT, >> whether we go for OP_MASK or simply not commiting to the input script. >> > > Let me elaborate more. Currently, scriptCode is truncated at the last > executed CODESEPARATOR. If we have a very big script with many CODESEPARATORs > and CHECKSIGs, there will be a lot of hashing to do. > > To fix this problem, it is proposed that the new sighash will always commit > to the same H(script), instead of the truncated scriptCode. So we only need > to do the H(script) once, even if the script is very big Yes, I read this as proposed; it is clever. Not sure we'd be introducing it if OP_CODESEPARATOR didn't already exist, but at least it's a simplification. > In the case of NOINPUT with MASKEDSCRIPT, it will commit to the > H(masked_script) instead of H(script). > > To make CODESEPARATOR works as before, the sighash will also commit to the > position of the last executed CODESEPARATOR. So the semantics doesn’t change. > For scripts without CODESEPARATOR, the committed value is a constant. > > IF NOINPUT does not commit to H(masked_script), technically it could still > commit to the position of the last executed CODESEPARATOR.
But since the > wallet is not aware of the actual content of the script, it has to guess the > meaning of such committed positions, like “with the HD key path m/x/y/z, I > assume the script template is blah blah blah because I never use this path > for another script template, and the meaning of signing the 3rd CODESEPARATOR > is blah blah blah”. It still works if the assumptions hold, but sounds quite > unreliable to me. My question is more fundamental. If NOINPUT doesn't commit to the input at all (no script, no code separator, nothing), I'm struggling to understand your original comment that "without signing the script or masked script, OP_CODESEPARATOR becomes unusable or insecure with NOINPUT." I mean, non-useful, sure. Its purpose is to alter signature behavior, and from the script POV there's no signature with this form of NOINPUT. But other than the already-established "I reused keys for multiple outputs" oops, I don't see any new dangers? Thanks, Rusty.
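The scheme Johnson Lau describes can be illustrated with a toy sketch. The opcode value 0xab for OP_CODESEPARATOR is real; the script bytes are made up, and the legacy behavior is simplified (real legacy hashing also deletes the separators themselves, which is omitted here):

```python
# Toy illustration of the two commitment schemes discussed above.
import hashlib

OP_CODESEPARATOR = 0xAB  # opcode value in Bitcoin script encoding

def legacy_script_code(script: bytes, last_codesep_pos: int) -> bytes:
    """Legacy sighash (simplified): scriptCode is truncated at the last
    executed OP_CODESEPARATOR, so each CHECKSIG may hash a different
    byte string.  last_codesep_pos = -1 means none executed yet."""
    return script[last_codesep_pos + 1:]

def new_commitment(script: bytes, last_codesep_pos: int):
    """Proposed scheme: commit once to H(script) plus the position of
    the last executed OP_CODESEPARATOR, preserving the old semantics
    while hashing the (possibly huge) script only once."""
    return hashlib.sha256(script).digest(), last_codesep_pos

# Hypothetical two-branch script: <branch A> CODESEPARATOR <branch B>
script = b"\x51" + bytes([OP_CODESEPARATOR]) + b"\x52"

# Legacy: signatures before/after the separator hash different scriptCode.
assert legacy_script_code(script, -1) != legacy_script_code(script, 1)

# New: the single script hash is shared; only the committed position differs.
h1, p1 = new_commitment(script, -1)
h2, p2 = new_commitment(script, 1)
assert h1 == h2 and p1 != p2
```

This is also why the thread's question matters: if NOINPUT drops the script commitment entirely, the `(H(script), position)` pair is never signed, so the position on its own tells the signer nothing about which branch is executing.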
Re: [bitcoin-dev] Safer sighashes and more granular SIGHASH_NOINPUT
Johnson Lau writes: >> On 16 Dec 2018, at 2:55 PM, Rusty Russell via bitcoin-dev >> wrote: >> >> Anthony Towns via bitcoin-dev writes: >>> On Thu, Dec 13, 2018 at 11:07:28AM +1030, Rusty Russell via bitcoin-dev >>> wrote: >>>> And is it worthwhile doing the mask complexity, rather than just >>>> removing the commitment to script with NOINPUT? It *feels* safer to >>>> restrict what scripts we can sign, but is it? >>> >>> If it's not safer in practice, we've spent a little extra complexity >>> committing to a subset of the script in each signature to no gain. If >>> it is safer in practice, we've prevented people from losing funds. I'm >>> all for less complexity, but not for that tradeoff. >> >> There are many complexities we could add, each of which would prevent >> loss of funds in some theoretical case. > > Every security measure is overkill, until someone gets burnt. If these > security measures are really effective, no one will get burnt. The inevitable > conclusion is: every effective security measure is overkill. > >> >> From practical experience; reuse of private keys between lightning and >> other things is not how people will lose funds[1]. > > So your argument seems just begging the question. Without NOINPUT, it is just > impossible to lose money by key reuse, and this is exactly the topic we are > debating. I think we're getting confused here. I'm contributing my thoughts from the lightning implementer's point of view; there are other important considerations, but this is my contribution. In *lightning* there are more ways to lose funds via secret reuse. Meanwhile, both SIGHASH_NOINPUT and OP_MASK have the reuse-is-dangerous property; with OP_MASK the danger is limited to reuse-on-the-same-script (ie. if you use the same key for a non-lightning output and a lightning output, you're safe with OP_MASK. However, this is far less likely in practice). I state again: OP_MASK doesn't seem to gain *lightning* any significant security benefit.
> It does not need to be something like > GetMaskedScript(GetScriptCodeForMultiSig()). After all, only a very small > number of script templates really need NOINPUT. A GetMaskedScript() in a > wallet is just overkill (and a vulnerability if mis-implemented) Our current transaction signing code is quite generic (and, if I may say so, readable and elegant). We could, of course, special case GetMaskedScript() for the case we need (the Eltoo examples I've seen have a single OP_MASK at the front, which makes it trivial). > It’s also about functionality here: as I mentioned in another reply, > OP_CODESEPARATOR couldn’t function properly with NOINPUT but without > OP_MASKEDPUSH The mailing list seems a bit backed up or something; I replied to that in the hope you can clear my confusion on that one. > This debate happens because NOINPUT introduces the third way to lose fund > with key reuse. And once it is deployed, we have to support it forever, and > it is not something that we could softfork away. I would use the same words to encourage you to create the simplest possible implementation? I don't think we disagree on philosophy, just trade-offs. And that's OK. Cheers, Rusty.
Re: [bitcoin-dev] Safer sighashes and more granular SIGHASH_NOINPUT
Johnson Lau writes: > I don’t think this has been mentioned: without signing the script or masked > script, OP_CODESEPARATOR becomes unusable or insecure with NOINPUT. > > In the new sighash proposal, we will sign the hash of the full script (or > masked script), without any truncation. To make OP_CODESEPARATOR work like > before, we will commit to the position of the last executed OP_CODESEPARATOR. > If NOINPUT doesn’t commit to the masked script, it will just blindly > commit to a random OP_CODESEPARATOR position, where a wallet couldn’t > know what code is actually being executed. My anti-complexity argument leads me to ask why we'd support OP_CODESEPARATOR at all? Though my argument is weaker here: no wallet need support it. But I don't see how OP_CODESEPARATOR changes anything here, wrt NOINPUT? Remember, anyone can create an output which can be spent by any NOINPUT, whether we go for OP_MASK or simply not committing to the input script. Confused, Rusty.
Re: [bitcoin-dev] Safer sighashes and more granular SIGHASH_NOINPUT
Anthony Towns via bitcoin-dev writes: > On Thu, Dec 13, 2018 at 11:07:28AM +1030, Rusty Russell via bitcoin-dev wrote: >> And is it worthwhile doing the mask complexity, rather than just >> removing the commitment to script with NOINPUT? It *feels* safer to >> restrict what scripts we can sign, but is it? > > If it's not safer in practice, we've spent a little extra complexity > committing to a subset of the script in each signature to no gain. If > it is safer in practice, we've prevented people from losing funds. I'm > all for less complexity, but not for that tradeoff. There are many complexities we could add, each of which would prevent loss of funds in some theoretical case. From practical experience: reuse of private keys between lightning and other things is not how people will lose funds[1]. It *is* however non-trivially more complicated for wallets; they currently have a set of script templates which they will sign (ie. no OP_CODESEPARATOR) and I implemented BIP 143 with only the simplest of naive code[2]. In particular, there is no code to parse scripts. Bitcoind developers are not in a good position to assess complexity here. They have to implement *everything*, so each increment seems minor. In addition, none of these new script versions will ever make bitcoind simpler, since they have to support all prior ones. Wallets, however, do not have to. I also think that minimal complexity for (future) wallets is an underappreciated feature: the state of wallets in bitcoin is poor[3] so simplicity should be a key goal. Respectfully, Rusty. [1] Reusing your revocation base point across two channels will lose funds in a much more trivial way, as will reusing payment hashes across invoices. [2] In fact, I added SIGHASH_ANYONECANPAY and SIGHASH_SINGLE recently for Segwit and it worked first time! Kudos to BIP143's authors for such a clear guide.
[3] Bitcoind's wallet can't restore from seed; this neatly demonstrates how hard the wallet problem is, but there are many others.

> ... code, as modern wallets currently don't have to parse the scripts they sign.

I'm telling you that this is not how people are losing funds.

> Also, saying "I can't see how to break this, so it's probably good
> enough, even if other people have a bad feeling about it" is a crypto
> anti-pattern, isn't it?
>
> I don't see how you could feasibly commit to more information than script
> masking does for use cases where you want to be able to spend different
> scripts with the same signature [0]. If that's possible, I'd probably
> be for it.
>
> At the same time, script masking does seem feasible, both for
> lightning/eltoo, and even for possibly complex variations of scripts. So
> committing to less doesn't seem wise.
>
>> You already need both key-reuse and amount-reuse to be exploited.
>> SIGHASH_MASK only prevents you from reusing this input for a "normal"
>> output; if you used this key for multiple scripts of the same form,
>> you're vulnerable[1].
>
> For example, script masking seems general enough to prevent footguns
> even if (for some reason) key and value reuse across eltoo channels
> were a requirement, rather than prohibited: you'd make the script be
> " MASK CLTV 2DROP CHECKSIG", and your
> signature will only apply to that channel, even if another channel has
> the same capacity and uses the same keys, a and b.
>
>> So I don't think it's worth it. SIGHASH_NOINPUT is simply dangerous
>> with key-reuse, and Don't Do That.
>
> For my money, "NOINPUT" commits to dangerously little context, and
> doesn't really feel safe to include as a primitive -- as evidenced by
> the suggestion to add "_UNSAFE" or similar to its name. Personally, I'm
> willing to accept a bit of risk, so that feeling doesn't make me strongly
> against the idea; but it also makes it hard for me to want to support
> adding it.
To me, committing to a masked script is a huge improvement.

> Heck, if it also makes it easier to do something safer, that's also
> probably a win...
>
> Cheers,
> aj
>
> [0] You could, perhaps, commit to knowing the private keys for all the
> *outputs* you're spending to, as well as the inputs, which comes
> close to saying "I know this is a scary NOINPUT transaction, but
> we're paying to ourselves, so it will all be okay".

___ bitcoin-dev mailing list bitcoin-dev@lists.linuxfoundation.org https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
Re: [bitcoin-dev] Safer sighashes and more granular SIGHASH_NOINPUT
Rusty Russell writes: >> However, I’m not sure if there is any useful NOINPUT case with unmasked >> script. > > This is *not* true of Eltoo; the script itself need not change for the > rebinding (Christian, did something change?). This is wrong, sorry. I re-checked the paper, and the constant for the timelock comparison changes on each new update. (The alternative was a new opcode like OP_TIMELOCKGREATERVERIFY which required remembering the nLocktime for the UTXO). So now my opinion is closer to yours: what's the use for NOINPUT && !NOMASK? And is it worthwhile doing the mask complexity, rather than just removing the commitment to script with NOINPUT? It *feels* safer to restrict what scripts we can sign, but is it? Note that NOINPUT is only useful when you can't just re-sign the tx, and you need to be able to create a new tx even if this input is spent once (an attacker can do this with SIGHASH_MASK or not!). ie. any other inputs need to be signed NOINPUT or this one SIGHASH_SINGLE|ANYONECANPAY. You already need both key-reuse and amount-reuse to be exploited. SIGHASH_MASK only prevents you from reusing this input for a "normal" output; if you used this key for multiple scripts of the same form, you're vulnerable[1]. Which, given the lightning software will be using the One True Script, is more likely that your normal wallet using the same keys. So I don't think it's worth it. SIGHASH_NOINPUT is simply dangerous with key-reuse, and Don't Do That. Cheers, Rusty. [1] Attacker can basically clone channel state to another channel. ___ bitcoin-dev mailing list bitcoin-dev@lists.linuxfoundation.org https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
Re: [bitcoin-dev] Safer sighashes and more granular SIGHASH_NOINPUT
Johnson Lau writes: >> On 12 Dec 2018, at 5:42 PM, Rusty Russell via bitcoin-dev >> wrote: >> >> Pieter Wuille via bitcoin-dev writes: >>> Here is a combined proposal: >>> * Three new sighash flags are added: SIGHASH_NOINPUT, SIGHASH_NOFEE, >>> and SIGHASH_SCRIPTMASK. >>> * A new opcode OP_MASK is added, which acts as a NOP during execution. >>> * The sighash is computed like in BIP143, but: >>> * If SIGHASH_SCRIPTMASK is present, for every OP_MASK in scriptCode >>> the subsequent opcode/push is removed. >> >> Having the SIGHASH_SCRIPTMASK flag is redundant AFAICT: why not always >> perform mask-removal for signing? > > Because a hardware wallet may want to know what exact script it is signing? OK, removing OP_MASKs unconditionally would introduce a hole without some explicit flag to say they've been removed (the "real script" could be something different with OP_MASKs). We could have the signature commit to the outputscript, but that's a bit meh. > Masked script has reduced security, but this is a tradeoff with > functionality (e.g. eltoo can’t work without masking part of the > script). So when you don’t need that extra functionality, you go back > to better security > > However, I’m not sure if there is any useful NOINPUT case with unmasked > script. This is *not* true of Eltoo; the script itself need not change for the rebinding (Christian, did something change?). So, can we find an example where OP_MASK is useful? >> If you're signing arbitrary scripts, you're surely in trouble already? >> >> And I am struggling to understand the role of scriptmask in a taproot >> world, where the alternate script is both hidden and general? > > It makes sure that your signature is applicable to a specific script branch, > not others (assuming you use the same pubkey in many branches, which is > avoidable) If I'm using SIGHASH_NOINPUT, I'm already required to take care with key reuse. 
Without a concrete taproot proposal it's hard to make assertions, but if the signature flags that it's using the taproot script, it's no less safe, and more general AFAICT. Thanks! Rusty. ___ bitcoin-dev mailing list bitcoin-dev@lists.linuxfoundation.org https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
Re: [bitcoin-dev] Safer sighashes and more granular SIGHASH_NOINPUT
Pieter Wuille via bitcoin-dev writes: > Here is a combined proposal: > * Three new sighash flags are added: SIGHASH_NOINPUT, SIGHASH_NOFEE, > and SIGHASH_SCRIPTMASK. > * A new opcode OP_MASK is added, which acts as a NOP during execution. > * The sighash is computed like in BIP143, but: > * If SIGHASH_SCRIPTMASK is present, for every OP_MASK in scriptCode > the subsequent opcode/push is removed. I'm asking on-list because I'm sure I'm not the only confused one. Having the SIGHASH_SCRIPTMASK flag is redundant AFAICT: why not always perform mask-removal for signing? If you're signing arbitrary scripts, you're surely in trouble already? And I am struggling to understand the role of scriptmask in a taproot world, where the alternate script is both hidden and general? I look forward to learning what I missed! Rusty. ___ bitcoin-dev mailing list bitcoin-dev@lists.linuxfoundation.org https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
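The quoted masking rule ("for every OP_MASK in scriptCode the subsequent opcode/push is removed") is mechanical enough to sketch. The following is an illustration of that hashing rule on a tokenized script, not consensus code; it assumes OP_MASK itself stays in the hashed script, so the signature still commits to *where* the masked elements sit:

```python
def mask_script(tokens):
    """Drop the opcode/push following each OP_MASK from a tokenized
    scriptCode, per the quoted SIGHASH_SCRIPTMASK rule. A token is
    either an opcode name (str) or a push (bytes)."""
    out = []
    skip_next = False
    for t in tokens:
        if skip_next:
            skip_next = False
            continue
        out.append(t)
        if t == "OP_MASK":
            skip_next = True  # mask the element that follows
    return out
```

So a locktime constant tagged with OP_MASK would be hashed out, letting one signature cover every eltoo update while still committing to the rest of the script.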
Re: [bitcoin-dev] [Lightning-dev] CPFP Carve-Out for Fee-Prediction Issues in Contracting Applications (eg Lightning)
Matt Corallo writes: > As an alternative proposal, at various points there have been > discussions around solving the "RBF-pinning" problem by allowing > transactors to mark their transactions as "likely-to-be-RBF'ed", which > could enable a relay policy where children of such transactions would be > rejected unless the resulting package would be "near the top of the > mempool". This would theoretically imply such attacks are not possible > to pull off consistently, as any "transaction-delaying" channel > participant will have to place the package containing A at an effective > feerate which makes confirmation to occur soon with some likelihood. It > is, however, possible to pull off this attack with low probability in > case of feerate spikes right after broadcast. I like this idea. Firstly, it's incentive-compatible[1]: assuming blocks are full, miners should always take a higher feerate tx if that tx would be in the current block and the replaced txs would not.[2] Secondly, it reduces the problem that the current lightning proposal adds to the UTXO set with two anyone-can-spend txs for 1000 satoshis, which might be too small to cleanup later. This rule would allow a simple single P2WSH(OP_TRUE) output, or, with IsStandard changed, a literal OP_TRUE. > Note that this clearly relies on some form of package relay, which comes > with its own challenges, but I'll start a separate thread on that. Could be done client-side, right? Do a quick check if this is above 250 satoshi per kweight but below minrelayfee, put it in a side-cache with a 60 second timeout sweep. If something comes in which depends on it which is above minrelayfee, then process them as a pair[3]. Cheers, Rusty. [1] Miners have generally been happy with Defaults Which Are Good For The Network, but I feel a long term development aim should to be reduce such cases to smaller and smaller corners. [2] The actual condition is subtler, but this is a clear subset AFAICT. 
[3] For Lightning, we don't care about child-pays-for-grandparent etc. ___ bitcoin-dev mailing list bitcoin-dev@lists.linuxfoundation.org https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
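The client-side side-cache described above might look something like this sketch. The constants (250 sat/kweight floor, 60-second sweep) come from the mail; the Tx shape, names, and the minrelayfee value are illustrative assumptions, not anything bitcoind implements:

```python
import time
from dataclasses import dataclass

FLOOR = 250        # sat per kweight: below this, drop outright
MIN_RELAY = 1000   # sat per kweight: normal relay threshold (illustrative)
CACHE_TTL = 60     # seconds before a cached low-fee tx is swept

@dataclass
class Tx:
    txid: str
    fee: int             # satoshis
    weight: int          # weight units
    parents: tuple = ()  # txids of cached txs this one spends

side_cache = {}          # txid -> (Tx, arrival_time)

def feerate(fee, weight):
    return fee * 1000 // weight  # sat per kweight

def on_tx(tx, now=None):
    """Return the package of txs to accept, or [] if cached/dropped."""
    if now is None:
        now = time.time()
    for txid in [t for t, (_, ts) in side_cache.items() if now - ts > CACHE_TTL]:
        del side_cache[txid]  # 60-second timeout sweep
    if feerate(tx.fee, tx.weight) >= MIN_RELAY:
        package = [tx] + [side_cache.pop(p)[0] for p in tx.parents if p in side_cache]
        if feerate(sum(t.fee for t in package), sum(t.weight for t in package)) >= MIN_RELAY:
            return package  # child lifts its cached parent: process as a pair
        return [tx]
    if feerate(tx.fee, tx.weight) >= FLOOR:
        side_cache[tx.txid] = (tx, now)  # park it, hoping for CPFP
    return []
```

A low-feerate commitment tx gets parked; when the anchor-spend child arrives above minrelayfee, both are evaluated together as a two-tx package.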
Re: [bitcoin-dev] [Lightning-dev] BIP sighash_noinput
DING FENG writes: > Hi, > > I'm a junior developer and a bitcoin user. > And I have read this thread carefully. > > I'm very worried about "SIGHASH_NOINPUT". > > Because "SIGHASH_NOINPUT" looks will be widely used, and it makes reuse > address more dangerous. No. A wallet should *never* create a SIGHASH_NOINPUT to spend its own UTXOs. SIGHASH_NOINPUT is useful for smart contracts which have unique conditions, such as a pair of peers rotating keys according to an agreed schedule (eg. lightning). Cheers, Rusty. ___ bitcoin-dev mailing list bitcoin-dev@lists.linuxfoundation.org https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
Re: [bitcoin-dev] BIP sighash_noinput
Gregory Maxwell via bitcoin-dev writes: > On Mon, Apr 30, 2018 at 4:29 PM, Christian Decker via bitcoin-dev > wrote: >> Hi all, >> >> I'd like to pick up the discussion from a few months ago, and propose a new >> sighash flag, `SIGHASH_NOINPUT`, that removes the commitment to the previous > > I know it seems kind of silly, but I think it's somewhat important > that the formal name of this flag is something like > "SIGHASH_REPLAY_VULNERABLE" or likewise or at least > "SIGHASH_WEAK_REPLAYABLE". I agree with the DO_NOT_WANT-style naming. REUSE_VULNERABLE seems to capture it: the word VULNERABLE should scare people away (or at least cause them to google further). Thanks, Rusty. ___ bitcoin-dev mailing list bitcoin-dev@lists.linuxfoundation.org https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
Re: [bitcoin-dev] [Lightning-dev] eltoo: A Simplified update Mechanism for Lightning and Off-Chain Contracts
"David A. Harding" writes: > On Tue, Jun 19, 2018 at 02:02:51PM -0400, David A. Harding wrote: >> Anyone can rewrite a SIGHASH_NOINPUT input's outpoint, but the actual >> transaction containing the settlement is expected to have (at least) two >> inputs, with the second one originating the fees. That second input's >> signature is (I assume) using SIGHASH_ALL to commit to all outpoints in >> the transaction, so it can't be arbitrarily rewritten by a third-party >> to apply to a different state outpoint > > I realized that the fee-paying input could possibly be signed with > SIGHASH_ALL|SIGHASH_ANYONECANPAY to allow anyone to arbitrarily > rewrite the other input signed with SIGHASH_NOINPUT. However, this > reminded me of the well-known DoS against transactions signed with > SIGHASH_ANYONECANPAY[1], which seems to apply generally against > SIGHASH_NOINPUT as well and may allow theft from HTLCs. Yes, RBF Rule #3 again :( It makes RBF unusable in adversarial conditions, and it's not miner incentive-compatible. The only mitigations I have been able to come up with are: 1. Reduce the RBF grouping depth to 2, not 10. This doesn't help here though, since you can still have ~infinite fan-out of txs (create 1000 outputs, spend each with a 400ksipa tx). 2. Revert #3 to a simple "greater feerate" rule, but delay propagation proportional to tx weight, say 60 seconds (fuzzed) for a 400 ksipa tx. That reduces your ability to spam the network (you can always connect directly to nodes and waste their time and bandwidth, but you can do that pretty much today). Frankly, I'd also like a similar mechanism to not reject low-fee txs (above 250 satoshi per ksipa) but simply not propagate them. Drop them after 60 seconds if there's no CPFP to increase their effective feerate. That would allow us to use CPFP on lightning commitment txs today, without having to guess what fees will be sometime in the future. Cheers, Rusty. 
> ## DoS against Eltoo settlements > > Alice and Mallory have a channel with some state updates. Alice tries > to initiate a cooperate close, but Mallory stalls and instead broadcasts > the trigger transaction and the first state (state 0). Notably, the > first state is bundled into a very large vsize transaction with a low > feerate. State 1 is added to another very large low-feerate > transaction, as are states 2 through 9. > > Alice could in theory RBF the state 0 transaction, but per BIP125 rule > #3, she needs to pay an absolute fee greater than all the transactions > being replaced (not just a higher feerate). That could cost a lot. > Alice could also create a transaction that binds the final state to the > state 9 transaction and attempt CPFP, but increasing the feerate for the > transaction ancestor group to a satisfactory degree would cost the same > amount as RBF. > > So Alice is stuck waiting for states 0-9 to confirm before the final > state can be confirmed. During recent periods of full mempools on > default nodes, the waiting time for 10 nBTC/vbyte transactions has been > more than two weeks. > > ## HTLC theft > > If Mallory is able to introduce significant settlement delays, HTLC > security is compromised. For example, imagine this route: > > Mallory <-> Alice <-> Bob > > Mallory orders a widget from Bob and pays via LN by sending 1 BTC to > Alice hashlocked and timelocked, which Alice forwards to Bob also > hashlocked and timelocked. Mallory releases the preimage to Bob, who > claims the funds from Alice and ships the widget, giving Alice the > preimage. > > At this point, Mallory broadcasts the transactions described in the > preceding section. > > If the low feerate of states 0-9 prevent them from confirming before the > timeout, Mallory can create a transaction containing a dishonest final > state that executes the refund branch. Like before, she can bury this > in an ancestor transaction chain that would be cost prohibitive for Alice > to RBF. 
> > Considered independently, this is a very expensive attack for Mallory, > and so perhaps impractical. But Mallory can join forces with someone > already creating large low-feerate consolidation transactions. Better > yet, from Mallory's perspective, she can execute the attack against > hundreds of channels at once (creating long chains of ancestor > transactions that are large in aggregate rather than individually > large), using the aggregate size of all the victims' channels against > each of the individual victims. > > Thanks, > > -Dave > > [1] > https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2014-August/006438.html > ___ > Lightning-dev mailing list > lightning-...@lists.linuxfoundation.org > https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev ___ bitcoin-dev mailing list bitcoin-dev@lists.linuxfoundation.org https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
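Mitigation 2 above (a relay delay proportional to transaction weight) is simple enough to sketch; the 60-second / 400 ksipa constants are the ones floated in the mail, nothing deployed anywhere:

```python
import random

def replacement_relay_delay(weight, rng=random.random):
    """Delay (seconds) before relaying a feerate-only RBF replacement,
    proportional to its weight: ~60s (fuzzed) for a 400k-weight tx.
    Sketch of the proposal in the mail, not implemented anywhere."""
    base = 60.0 * min(weight, 400_000) / 400_000
    fuzz = 0.75 + 0.5 * rng()  # +/-25% fuzz so peers can't fingerprint timing
    return base * fuzz
```

A huge replacement still propagates eventually, but the attacker's replacement rate per block drops with the size of the transactions being spammed.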
Re: [bitcoin-dev] Making OP_TRUE standard?
Rusty Russell writes:
> AFAICT the optimal DoS is where:
>
> 1. Attacker sends a 100,000 vbyte tx @1sat/vbyte.
> 2. Replaces it with a 108 vbyte tx @2sat/vbyte which spends one of
> those inputs.
> 3. Replaces that spent input in the 100k tx and does it again.
>
> It takes 3.5 seconds to propagate to 50% of network[1] (probably much worse
> given 100k txs), so they can only do this about 86 times per block.
>
> That means they send 86 * (100,000 + 108) = 8609288 vbytes for a cost of
> 86 * 2 * 108 + 100,000 / 2 = 68576 satoshi (assuming 50% chance 100k tx
> gets mined).

This 50% chance assumption is wrong; it's almost 0% for a low enough fee. Thus the cost is only 18576, making the cost for the transactions 463x lower than just sending 1sat/vbyte txs under optimal conditions. That's a bit ouch.[1]

I think a better solution is to address the DoS potential directly: if a replacement doesn't meet #3 or #4, but *does* increase the feerate by at least minrelayfee, processing should be delayed by 30-60 seconds. That means that eventually you will RBF a larger tx, but it'll take much longer. Should be easy to implement, too, since similar timers will be needed for dandelion.

Cheers,
Rusty.

[1] Christian grabbed some more detailed propagation stats for me: larger txs do propagate slower, but only by a factor of 2.5 or so.

___ bitcoin-dev mailing list bitcoin-dev@lists.linuxfoundation.org https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
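As a sanity check, the figures in the two mails can be re-derived from the stated parameters (100,000-vbyte lowball tx, 108-vbyte replacement, ~86 rounds per block):

```python
ROUNDS = 86                 # replacements per block (3.5s propagation per round)
BIG, SMALL = 100_000, 108   # vbytes of the lowball tx and its replacement

bandwidth = ROUNDS * (BIG + SMALL)          # vbytes relayed per block
cost_50pct = ROUNDS * 2 * SMALL + BIG // 2  # earlier estimate: big tx mined half the time
cost_0pct = ROUNDS * 2 * SMALL              # corrected: big tx essentially never mined
honest = bandwidth * 1                      # sat to buy the same bandwidth at 1 sat/vbyte

assert bandwidth == 8_609_288
assert cost_50pct == 68_576
assert cost_0pct == 18_576
assert honest // cost_0pct == 463           # the "463x lower" figure
```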
Re: [bitcoin-dev] Making OP_TRUE standard?
Peter Todd writes:
> On Mon, May 21, 2018 at 01:14:06PM +0930, Rusty Russell via bitcoin-dev wrote:
>> Jim Posen writes:
>> > I believe OP_CSV with a relative locktime of 0 could be used to enforce RBF
>> > on the spending tx?
>>
>> Marco points out that if the parent is RBF, this child inherits it, so
>> we're actually good here.
>>
>> However, Matt Corallo points out that you can block RBF with a
>> large-but-lowball tx, as BIP 125 points out:
>>
>> will be replaced by a new transaction...:
>>
>> 3. The replacement transaction pays an absolute fee of at least the sum
>> paid by the original transactions.
>>
>> I understand implementing a single mempool requires these kind of
>> up-front decisions on which tx is "better", but I wonder about the
>> consequences of dropping this heuristic? Peter?
>
> We've discussed this before: that rule prevents bandwidth usage DoS attacks on
> the mempool; it's not a "heuristic". If you drop it, an attacker can repeatedly
> broadcast and replace a series of transactions to use up tx relay bandwidth for
> significantly lower cost than otherwise.
>
> Though these days with relatively high minimum fees that may not matter.

AFAICT the optimal DoS is where:

1. Attacker sends a 100,000 vbyte tx @1sat/vbyte.
2. Replaces it with a 108 vbyte tx @2sat/vbyte which spends one of those inputs.
3. Replaces that spent input in the 100k tx and does it again.

It takes 3.5 seconds to propagate to 50% of network[1] (probably much worse given 100k txs), so they can only do this about 86 times per block.

That means they send 86 * (100,000 + 108) = 8609288 vbytes for a cost of 86 * 2 * 108 + 100,000 / 2 = 68576 satoshi (assuming 50% chance 100k tx gets mined).

That's a 125x cost over just sending 1sat/vbyte txs under optimal conditions[2], but it doesn't really reach most low-bandwidth nodes anyway.
Given that this rule is against miner incentives (assuming mempool is full), and makes things more complex than they need to be, I think there's a strong argument for its removal. Cheers, Rusty. [1] http://bitcoinstats.com/network/propagation/ [2] Bandwidth overhead for just sending a 108-vbyte tx is about 160 bytes, so our actual bandwidth per satoshi is closer to 60x even under optimal conditions. ___ bitcoin-dev mailing list bitcoin-dev@lists.linuxfoundation.org https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
Re: [bitcoin-dev] Making OP_TRUE standard?
Jim Posen writes:
> I believe OP_CSV with a relative locktime of 0 could be used to enforce RBF
> on the spending tx?

Marco points out that if the parent is RBF, this child inherits it, so we're actually good here.

However, Matt Corallo points out that you can block RBF with a large-but-lowball tx, as BIP 125 points out:

will be replaced by a new transaction...:

3. The replacement transaction pays an absolute fee of at least the sum paid by the original transactions.

I understand implementing a single mempool requires these kind of up-front decisions on which tx is "better", but I wonder about the consequences of dropping this heuristic? Peter?

Thanks!
Rusty.

___ bitcoin-dev mailing list bitcoin-dev@lists.linuxfoundation.org https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
Re: [bitcoin-dev] Making OP_TRUE standard?
Johnson Lau writes:
> You should make a “0 fee tx with exactly one OP_TRUE output” standard, but
> nothing else. This makes sure CPFP will always be needed, so the OP_TRUE
> output won’t pollute the UTXO set

That won't propagate :(

> Instead, would you consider to use ANYONECANPAY to sign the tx, so it
> is possible add more inputs for fees? The total tx size is bigger than
> the OP_TRUE approach, but you don’t need to ask for any protocol
> change.

No, that would change the TXID, which we rely on for HTLC transactions.

> In long-term, I think the right way is to have a more flexible SIGHASH system
> to allow people to add more inputs and outputs easily.

Agreed: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-April/015862.html

But in the long term we'll have Eltoo and SIGHASH_NOINPUT which both allow different solutions.

Cheers,
Rusty.

___ bitcoin-dev mailing list bitcoin-dev@lists.linuxfoundation.org https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
Re: [bitcoin-dev] BIP sighash_noinput
Anthony Towns via bitcoin-dev writes:
> On Mon, May 07, 2018 at 09:40:46PM +0200, Christian Decker via bitcoin-dev wrote:
>> Given the general enthusiasm, and lack of major criticism, for the
>> `SIGHASH_NOINPUT` proposal, [...]
>
> So first, I'm not sure if I'm actually criticising or playing devil's
> advocate here, but either way I think criticism always helps produce
> the best proposal, so
>
> The big concern I have with _NOINPUT is that it has a huge failure
> case: if you use the same key for multiple inputs and sign one of them
> with _NOINPUT, you've spent all of them. The current proposal kind-of
> limits the potential damage by still committing to the prevout amount,
> but it still seems a big risk for all the people that reuse addresses,
> which seems to be just about everyone.

If I can convince you to sign with SIGHASH_NONE, it's already a problem today.

> I wonder if it wouldn't be ... I'm not sure better is the right word,
> but perhaps "more realistic" to have _NOINPUT be a flag to a signature
> for a hypothetical "OP_CHECK_SIG_FOR_SINGLE_USE_KEY" opcode instead,
> so that it's fundamentally not possible to trick someone who regularly
> reuses keys to sign something for one input that accidently authorises
> spends of other inputs as well.

That was also suggested by Mark Friedenbach, but I think we'll end up with more "magic key" a-la Schnorr/taproot/graftroot and less script in future. That means we'd actually want a different Segwit version for "NOINPUT-can-be-used", which seems super ugly.

> Maybe a different opcode maybe makes sense at a "philosophical" level:
> normal signatures are signing a spend of a particular "coin" (in the
> UTXO sense), while _NOINPUT signatures are in some sense signing a spend
> of an entire "wallet" (all the coins spendable by a particular key, or
> more accurately for the current proposal, all the coins of a particular
> value spendable by a particular key).
> Those are different intentions,
> so maybe it's reasonable to encode them in different addresses, which
> in turn could be done by having a new opcode for _NOINPUT.

In a world where SIGHASH_NONE didn't exist, this might be an argument :)

Cheers,
Rusty.

___ bitcoin-dev mailing list bitcoin-dev@lists.linuxfoundation.org https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
Re: [bitcoin-dev] Making OP_TRUE standard?
Olaoluwa Osuntokun writes:
> What are the downsides of just using p2wsh? This route can be rolled out
> immediately, while policy changes are pretty "fuzzy" and would require a
> near uniform rollout in order to ensure wide propagation of the commitment
> transactions.

I expect we will, but thought I'd ask :)

I get annoyed when people say "We found this issue, but we worked around it and so never bothered you with it!" for my projects :)

Cheers,
Rusty.

___ bitcoin-dev mailing list bitcoin-dev@lists.linuxfoundation.org https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
[bitcoin-dev] Making OP_TRUE standard?
Hi all, The largest problem we are having today with the lightning protocol is trying to predict future fees. Eltoo solves this elegantly, but meanwhile we would like to include a 546 satoshi OP_TRUE output in commitment transactions so that we use minimal fees and then use CPFP (which can't be done at the moment due to CSV delays on outputs). Unfortunately, we'd have to P2SH it at the moment as a raw 'OP_TRUE' is non-standard. Are there any reasons not to suggest such a policy change? Thanks! Rusty. ___ bitcoin-dev mailing list bitcoin-dev@lists.linuxfoundation.org https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
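For concreteness, here is what the two output-script options look like at the byte level; this is a sketch of standard script encodings, not lightning code:

```python
import hashlib

OP_TRUE_SCRIPT = bytes([0x51])  # raw OP_TRUE: 1 byte, anyone can spend

def p2wsh(witness_script):
    """P2WSH wrapping: OP_0 <0x20> <sha256(witness_script)> -- the
    workaround available today, since a bare OP_TRUE is non-standard."""
    return bytes([0x00, 0x20]) + hashlib.sha256(witness_script).digest()

wrapped = p2wsh(OP_TRUE_SCRIPT)
assert len(OP_TRUE_SCRIPT) == 1 and len(wrapped) == 34
```

The wrap costs 33 extra scriptPubKey bytes in every commitment transaction, plus revealing the one-byte witness script at spend time; that overhead is what the proposed policy change would avoid.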
Re: [bitcoin-dev] Signature bundles
Anthony Towns via bitcoin-dev writes:
> If you've got one bundle that overpays fees and another that underpays,
> you can safely combine the two only if you can put a SIGHASH_ALL sig in
> the one that overpays (otherwise miners could just make their own tx of
> just the overpaying bundle).

This is a potential problem, yes :( And I'm not sure how to solve it, unless you do some crazy thing like commit to a set of keys which are allowed to bundle, which kind of defeats the generality of outsourcing.

> This could replace SINGLE|ANYONECANPAY at a cost of an extra couple of
> witness bytes.
>
> I think BUNDLESTART is arguably redundant -- you could just infer
> BUNDLESTART if you see an INBUNDLE flag when you're not already in
> a bundle. Probably better to have the flag to make parsing easier,
> so just have the rule be BUNDLESTART is set for precisely the first
> INBUNDLE signature since the last bundle finished.

Indeed.

>> One of the issues we've struck with lightning is trying to guess future
>> fees for commitment transactions: we can't rely on getting another
>> signature from our counterparty to increase fees. Nor can we use
>> parent-pays-for-child since the outputs we can spend are timelocked.
>
> That doesn't quite work with the HTLC-Success/HTLC-Timeout transactions
> though, does it? They spend outputs from the commitment transaction
> and need to be pre-signed by your channel partner in order to ensure
> the output address is correct -- but if the commitment transaction gets
> bundled, its txid will change, so it can't be pre-signed.

Not without SIGHASH_NOINPUT, no.

> FWIW, a dumb idea I had for this problem was to add a zero-value
> anyone-can-spend output to commitment transactions, that can then be
> used with CPFP to bump the fees. Not very nice for UTXO bloat if fee
> bumping isn't needed though, and I presume it would fail to pass the
> dust threshold...

Yeah, let's not do that.
> I wonder if it would be plausible to have after-the-fact fee-bumping > via special sighash flags at the block level anyway though. Concretely: > say you have two transactions, X and Y, that don't pay enough in fees, > you then provide a third transaction whose witness is [txid-for-X, > txid-for-Y, signature committing to (txid-for-X, txid-for-Y)], and can > only be included in a block if X and Y are also in the same block. You > could make that fairly concise if you allowed miners to replace txid-for-X > with X's offset within the block (or the delta between X's txnum and the > third transaction's txnum), though coding that probably isn't terribly > straightforward. What would it spend though? Can't use an existing output, so this really needs to be stashed in an *output script*, say a zero-amount output which is literally a push of txids, and is itself unspendable. ... That's pretty large though, and it's non-witness data (though discardable). How about 'OP_NOP4 '? Then the miner just bundles those tx all together? Cheers, Rusty. ___ bitcoin-dev mailing list bitcoin-dev@lists.linuxfoundation.org https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
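AJ's block-level fee bump and the OP_NOP4-output variant both reduce to the same validity rule: the bump is only valid in a block that also contains every transaction it commits to. A toy version of that check, where tx shapes and the 'bumps' field are made up for illustration (nothing like this is implemented anywhere):

```python
def block_bundles_ok(block):
    """block: list of dicts with a 'txid' and an optional 'bumps' list of
    txids the fee-bump commits to (e.g. via an 'OP_NOP4 <txids>' output).
    Valid only if every referenced tx is present in the same block."""
    txids = {tx["txid"] for tx in block}
    return all(ref in txids
               for tx in block
               for ref in tx.get("bumps", ()))
```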
Re: [bitcoin-dev] BIP 117 Feedback
"Russell O'Connor"writes: > However, if I understand correctly, the situation for BIP 117 is entirely > different. As far as I understand there is currently no restrictions about > terminating a v0 witness program with a non-empty alt-stack, and there are > no restrictions on leaving non-canonical boolean values on the main stack. BIP-141: "The script must not fail, and result in exactly a single TRUE on the stack." And it has long been non-standard for P2SH scripts to not do the same (don't know exactly when). > There could already be commitments to V0 witness programs that, when > executed in some reasonable context, leave a non-empty alt-stack and/or > leave a non-canonical true value on the main stack. Unlike the P2SH or > Segwit soft-forks, these existing commitments could be real outputs that > currently confer non-trivial ownership over their associated funds. If BIP > 117 were activated, these commitments would be subject to a new set of > rules that didn't exist when the commitments were made. In particular, > these funds could be rendered unspendable. Because segwit commitments are > hashes of segwit programs, there is no way to even analyze the blockchain > to determine if these commitments currently exist (and even if we could it > probably woudln't be adequate protection). The rule AFAICT is "standard transactions must still work". This was violated with low-S, but the transformation was arguably trivial. OTOH, use of altstack is completely standard, though in practice it's unused and so only a theoretical concern. My concern remains unanswered: I want hard numbers on the worst-case time taken by sigops with the limit removed. It's about 120 usec per sigop (from [1]), so how bad could it be? I think Russell had an estimate like 1 in 3 ops, so 160 seconds to validate a block? Thanks, Rusty. 
[1] https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-December/015346.html ___ bitcoin-dev mailing list bitcoin-dev@lists.linuxfoundation.org https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
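For a rough sense of where the "160 seconds" comes from: using the ~120 usec/sigop figure from [1], and assuming a ~4M-weight block packed with roughly one CHECKSIG per three script bytes (treating 4M weight as ~4M bytes only holds for witness-heavy data, so this is a ballpark, not a bound):

```python
BLOCK_WEIGHT = 4_000_000  # post-segwit block weight limit
BYTES_PER_SIGOP = 3       # the "1 in 3 ops" estimate from the thread
SIGOP_COST_US = 120       # microseconds per signature check, from [1]

sigops = BLOCK_WEIGHT // BYTES_PER_SIGOP
seconds = sigops * SIGOP_COST_US / 1_000_000
assert round(seconds) == 160  # the worst-case block validation time in question
```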
[bitcoin-dev] BIP 117 Feedback
I've just re-read BIP 117, and I'm concerned about its flexibility. It seems to be doing too much. The use of altstack is awkward, and makes me query this entire approach. I understand that CLEANSTACK painted us into a corner here :( The simplest implementation of tail recursion would be a single blob: if a single element is left on the altstack, pop and execute it. That seems trivial to specify. The treatment of concatenation seems like trying to run before we can walk. Note that if we restrict this for a specific tx version, we can gain experience first and get fancier later. BIP 117 also drops SIGOP and opcode limits. This requires more justification, in particular, measurements and bounds on execution times. If this analysis has been done, I'm not aware of it. We could restore statically analyzability by rules like so: 1. Only applied for tx version 3 segwit txs. 2. For version 3, top element of stack is counted for limits (perhaps with discount). 3. The blob popped off for tail recursion must be identical to that top element of the stack (ie. the one counted above). Again, future tx versions could drop such restrictions. Cheers, Rusty. ___ bitcoin-dev mailing list bitcoin-dev@lists.linuxfoundation.org https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
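The "simplest implementation" suggested above is a one-shot altstack tail call. As an interpreter-agnostic sketch of that rule (eval_script stands in for the real script engine; this is my reading of the restricted proposal, not BIP 117 as written):

```python
def run_with_tail_call(script, eval_script):
    """Execute script; if exactly one element remains on the altstack,
    pop it and execute it as a script (once -- no further recursion)."""
    stack, altstack = [], []
    eval_script(script, stack, altstack)
    if len(altstack) == 1:
        tail = altstack.pop()
        eval_script(tail, stack, altstack)
    return stack
```

With the static-analyzability restrictions suggested below, the tail blob would additionally have to equal the counted top-of-stack element before being executed.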
[bitcoin-dev] Covenants through merkelized txids.
Hi all,

This is an alternative to jl2012's BIPZZZ (OP_PUSHTXDATA[1]). It riffs on the (ab)use of OP_CHECKSIGFROMSTACK that Russell[2] used to implement covenants[3]. I've been talking about it to random people for a while, but haven't taken the time to write it up.

The idea is to provide an OP_TXMERKLEVERIFY that compares the top stack element against a merkle tree of the current tx, constructed like so[4]:

    TXMERKLE         = merkle(nVersion | nLockTime | fee, inputs & outputs)
    inputs & outputs = merkle(inputmerkle, outputmerkle)
    input            = merkle(prevoutpoint, nSequence | inputAmount)
    output           = merkle(nValue, scriptPubkey)

Many variants are possible, but if we have OP_CAT, this makes it fairly easy to test a particular tx property. A dedicated OP_MERKLE would make it even easier, of course.

If we one day HF and add merklized TXIDs[5], we could also use this method to inspect the tx *being spent* (which was what I was originally trying to do).

Thanks for reading!
Rusty.

[1] https://github.com/jl2012/bips/blob/vault/bip-0ZZZ.mediawiki
[2] Aka Dr. "Not Rusty" O'Connor. Of course, both of us in the same thread will probably break the internet.
[3] https://blockstream.com/2016/11/02/covenants-in-elements-alpha.html
[4] You could put every element in a leaf, but that's less compact to use: cheaper to supply the missing parts with OP_CAT than to add another level.
[5] Eg. use the high nVersion bit to say "make my txid a merkle".
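For concreteness, here's a sketch in Python of how such a tx merkle might be computed. The choice of plain SHA256, the exact serializations, and the odd-leaf handling are all my assumptions, not part of the proposal:

```python
import hashlib

def h(data: bytes) -> bytes:
    # Assumption: single SHA256 as the node hash; the proposal doesn't
    # pin down the exact hash function or serialization.
    return hashlib.sha256(data).digest()

def merkle(left: bytes, right: bytes) -> bytes:
    return h(left + right)

def tree(leaves):
    # Pair-up reduction; odd leaf duplicated (Bitcoin-style; an assumption).
    leaves = list(leaves)
    while len(leaves) > 1:
        if len(leaves) % 2:
            leaves.append(leaves[-1])
        leaves = [merkle(leaves[i], leaves[i + 1])
                  for i in range(0, len(leaves), 2)]
    return leaves[0]

def tx_merkle(version, locktime, fee, inputs, outputs):
    """inputs: list of (prevoutpoint, seq_and_amount) byte pairs;
    outputs: list of (value, scriptPubkey) byte pairs."""
    inputmerkle = tree(merkle(p, s) for p, s in inputs)
    outputmerkle = tree(merkle(v, spk) for v, spk in outputs)
    io = merkle(inputmerkle, outputmerkle)
    return merkle(version + locktime + fee, io)
```

With OP_CAT, a script could then verify a single property (say, one output's scriptPubkey) by supplying the sibling hashes along the path and recomputing the root on-stack.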
Re: [bitcoin-dev] BIP149 timeout-- why so far in the future?
Matt Corallo writes:
> A more important consideration than segwit's timeout is when code can be
> released, which will no doubt be several months after SegWit's current
> timeout.

I was assuming it would be included in the next point release.

Cheers,
Rusty.
Re: [bitcoin-dev] BIP149 timeout-- why so far in the future?
Gregory Maxwell via bitcoin-dev writes:
> Based on how fast we saw segwit adoption, why is the BIP149 timeout so
> far in the future?
>
> It seems to me that it could be six months after release and hit the
> kind of density required to make a stable transition.

Agreed. I would suggest 16th December 2017 (otherwise, it should be 16th January 2018; during the EOY holidays seems a bad idea).

This means this whole debacle has delayed segwit exactly 1 (2) month(s) beyond what we'd have had if it used BIP8 in the first place.

Cheers,
Rusty.
Re: [bitcoin-dev] Rolling UTXO set hashes
Gregory Maxwell via bitcoin-dev writes:
> On Tue, May 16, 2017 at 6:17 PM, Pieter Wuille wrote:
>> just the first - and one that has very low costs and no normative
>> datastructures at all.
>
> The serialization of the txout itself is normative, but very minimal.

I do prefer the (2) approach, BTW, as it reuses existing primitives, but I know "simpler" means a different thing to mathier brains :)

Since it wasn't explicit in the proposal, I think the txout information placed in the hash here is worth discussing. I prefer a simple txid||outnumber[1], because it allows simple validation without knowing the UTXO set itself; even a lightweight node can assert that the UTXOhash for block N+1 is valid if the UTXOhash for block N is valid (and vice versa!) given block N+1. And miners can't really use that even if they were to try not validating against the UTXO set (!), because they need to know input amounts for fees (which are becoming significant).

If I want to hand you the complete validatable UTXO set, I need to hand you all the txs with any unspent output, plus some bitfield to indicate which ones are unspent.

OTOH, if you serialize more (eg. ...||amount||scriptPubKey?), then the UTXO set size needed to validate the utxohash is a little smaller: you need to send the txid, but not the tx nVersion, nLocktime or inputs. But in a SegWit world, that's actually *bigger* AFAICT.

Thanks,
Rusty.

[1] I think you could actually use txid^outnumber, and if that's not a curve point, SHA256() again, etc. But I don't think that saves any real time, and it may cause other issues.
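To illustrate the add/remove symmetry a rolling UTXO hash gives you, here's a toy sketch over SHA256(txid||outnumber). The XOR combiner is purely illustrative and my own simplification — it's known to be insecure, and the real proposals combine with stronger operations (e.g. elliptic-curve multisets, which is why footnote [1] mentions curve points):

```python
import hashlib
import struct

def utxo_term(txid: bytes, outnumber: int) -> int:
    # Assumption: the per-UTXO serialization is simply txid || outnumber,
    # as advocated above (little-endian u32 outnumber is my choice).
    digest = hashlib.sha256(txid + struct.pack("<I", outnumber)).digest()
    return int.from_bytes(digest, "big")

class RollingUtxoHash:
    """Toy rolling set hash: add/remove UTXOs in any order.

    XOR is only to illustrate the symmetry; it is NOT collision-resistant
    as a set combiner."""
    def __init__(self):
        self.acc = 0

    def add(self, txid: bytes, n: int):
        self.acc ^= utxo_term(txid, n)

    def remove(self, txid: bytes, n: int):
        self.acc ^= utxo_term(txid, n)  # XOR is its own inverse
```

The property being relied on in the email is exactly this: a node holding the hash for block N can roll forward (or backward) to block N+1 using only the spent/created outpoints in that block.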
Re: [bitcoin-dev] BIP: Block signal enforcement via tx fees
Russell O'Connor via bitcoin-dev writes:
> On Sat, May 13, 2017 at 1:26 AM, Luke Dashjr wrote:
>
>> Versionbits change/lose their meaning after the deployment timeout. For
>> this reason, the timeout must be specified so the check is skipped when
>> that occurs.
>
> To add a timeout a user can optionally bundle a pair of similar
> transactions. One with the transaction version bits set and a second with
> a locktime set. The effect is the same.

I have a similar proposal to Russell's: use the tx nVersion. However, my subset is simpler, and uses fewer precious nVersion bits:

1. The top 26 bits of the version must be 1 (say).
2. The next bit indicates positive (block must have the bit set) or negative (block must NOT have the bit set).
3. The bottom 5 bits refer to which BIP8/9 bit we're talking about.

This only allows specifying a single bit, and only supports BIP8/9-style signalling.

I believe we can skip the timeout: miners don't signal 100% either way anyway. If a BIP is in LOCKIN, wallets shouldn't set positive on that bit (this gives them two weeks). Similarly, if a BIP is close to FAILED, don't set positive on your tx. Wallets shouldn't signal on any bit until they see some minimal chance it's accepted (eg. 1 in 20 blocks).

> I recall chatting about this idea recently and my conclusion was the same
> as Peter Todd's conclusion: this will just encourage miners to false signal
> readiness, which undermines both BIP 9 and BIP 8.

This is gentler on miners than a UASF flag day, and does offer some harder-to-game signalling from bitcoin users. False-signalling miners still have the 2-week LOCKIN period to upgrade, otherwise they can already lose money. You could argue they're *more* likely to upgrade with a signal that significant parts of the economy have done so.

Cheers,
Rusty.
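The three rules above fit exactly into a 32-bit nVersion (26 + 1 + 5 bits). A hypothetical encoding sketch — the exact bit positions are my assumption, not part of the proposal:

```python
# Hypothetical layout: bits 31..6 all-ones prefix, bit 5 polarity,
# bits 4..0 name the BIP8/9 versionbit being signalled on.
PREFIX = (1 << 26) - 1  # 26 one-bits

def encode(bit: int, positive: bool) -> int:
    """Build a signalling nVersion for the given versionbit."""
    assert 0 <= bit < 32
    return (PREFIX << 6) | (int(positive) << 5) | bit

def decode(nversion: int):
    """Return (bit, positive) if this nVersion signals, else None."""
    if nversion >> 6 != PREFIX:
        return None  # ordinary transaction version
    return nversion & 31, bool((nversion >> 5) & 1)
```

A node enforcing this would then only mine (or relay near the tip) a "positive" tx in blocks that set the named versionbit, and vice versa for "negative".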
Re: [bitcoin-dev] BIP draft: OP_CHECKBLOCKATHEIGHT
Luke Dashjr via bitcoin-dev writes:
> This BIP describes a new opcode (OP_CHECKBLOCKATHEIGHT) for the Bitcoin
> scripting system to address reissuing bitcoin transactions when the coins
> they spend have been conflicted/double-spent.
>
> https://github.com/luke-jr/bips/blob/bip-cbah/bip-cbah.mediawiki
>
> Does this seem like a good idea/approach?

I'd prefer a three-arg version (bits-to-compare, blocknum, hash):

- If bits-to-compare is 0 or > 256, invalid.
- If the hash length is not (bits-to-compare + 7) / 8, invalid.
- If the hash's unused bits are not 0, invalid.
- Otherwise, the low bits-to-compare bits of the hash are compared to the low bits-to-compare bits of the blockhash.

This version also lets you play gambling games on-chain! Or maybe I've just put another nail in CBAH's coffin?

Cheers,
Rusty.
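The validity rules above might look like this as a sketch (the names and the little-endian interpretation of the hash bytes are my assumptions):

```python
def check_block_at_height(bits: int, blockhash32: bytes, claimed: bytes) -> bool:
    """Sketch of the three-argument OP_CHECKBLOCKATHEIGHT variant:
    compare the low `bits` bits of the block hash at the given height
    against `claimed`. Raises ValueError for the "invalid" cases."""
    if bits == 0 or bits > 256:
        raise ValueError("bits-to-compare out of range")
    if len(claimed) != (bits + 7) // 8:
        raise ValueError("bad hash length")
    mask = (1 << bits) - 1
    claimed_val = int.from_bytes(claimed, "little")
    if claimed_val & ~mask:
        raise ValueError("unused bits set")
    actual = int.from_bytes(blockhash32, "little") & mask
    return actual == claimed_val
```

The "gambling" quip follows directly: with a small `bits`, a script can bet on a few low-order bits of a future block hash.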
Re: [bitcoin-dev] New BIP: Dealing with OP_IF and OP_NOTIF malleability in P2WSH
Johnson Lau via bitcoin-dev writes:
> Restriction of the segwit OP_IF argument as a policy has got a few concept
> ACKs. I would like more people to ACK or NACK, especially the real users of
> OP_IF. I think the Lightning network would use it a lot.

My current scripts use OP_IF and OP_NOTIF only after OP_EQUAL, except for one place where they use OP_EQUAL ... OP_EQUAL ... OP_ADD OP_IF (where the two OP_EQUALs are comparing against different hashes, so only 0 or 1 of the two OP_EQUALs can return 1).

So there's no effect either way on the c-lightning implementation, at least.

Thanks!
Rusty.
Re: [bitcoin-dev] BIP 151 use of HMAC_SHA512
Ethan Heilman writes:
>> It's also not clear to me why the HMAC, vs just SHA256(key|cipher-type|mesg).
>> But that's probably just my crypto ignorance...
>
> SHA256(key|cipher-type|mesg) is an extremely insecure MAC because of
> the length extension property of SHA256.
>
> If I have a tag y = SHA256(key|cipher-type|mesg), I can without
> knowing key or msg compute a value y' such that
> y' = SHA256(key|cipher-type|mesg|any values I want).

Not quite: there's an important subtlety, in that SHA256 appends the bitlength, so you can only create:

    y' = SHA256(key|cipher-type|mesg|padding|bitlength|any values I want)

But we're not using this for a MAC in BIP151; we're using it to generate the encryption keys.

Arthur Chen said:
> HMAC has proven security properties.
> It is still secure even when the underlying crypto hashing function has
> collision resistance weaknesses.
> For example, MD5 is considered completely insecure now, but HMAC-MD5 is
> still considered secure.
> When in doubt, we should always use HMAC for a MAC (Message Authentication
> Code) rather than a custom construction.

Bitcoin already relies on SHA256's robustness, but again, we don't need a MAC here.

I'm happy to buy "we just copied ssh" if that's the answer, and I can't see anything wrong with using HMAC here; it just seems odd...

Thanks!
Rusty.
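For reference, the HMAC construction itself is tiny: the outer hash over a secret-keyed block is what defeats the length-extension property described above, since an attacker extending the inner hash still can't produce the outer digest without the key. A sketch, checked against Python's `hmac` module:

```python
import hashlib
import hmac

def hmac_sha256(key: bytes, msg: bytes) -> bytes:
    # Standard RFC 2104 construction: H((K ^ opad) || H((K ^ ipad) || msg)).
    block = 64  # SHA256 block size in bytes
    if len(key) > block:
        key = hashlib.sha256(key).digest()  # long keys are hashed first
    key = key.ljust(block, b"\x00")
    ipad = bytes(b ^ 0x36 for b in key)
    opad = bytes(b ^ 0x5c for b in key)
    inner = hashlib.sha256(ipad + msg).digest()
    return hashlib.sha256(opad + inner).digest()

# Sanity check against the stdlib implementation:
assert hmac_sha256(b"k", b"m") == hmac.new(b"k", b"m", hashlib.sha256).digest()
```

(Which also shows why HMAC is cheap: two compression passes beyond the message itself.)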
Re: [bitcoin-dev] BIP 151 use of HMAC_SHA512
Jonas Schnelli writes:
>> To quote:
>>
>>> HMAC_SHA512(key=ecdh_secret|cipher-type,msg="encryption key").
>>>
>>> K_1 must be the left 32 bytes of the HMAC_SHA512 hash.
>>> K_2 must be the right 32 bytes of the HMAC_SHA512 hash.
>>
>> This seems a weak reason to introduce SHA512 to the mix. Can we just
>> make:
>>
>> K_1 = HMAC_SHA256(key=ecdh_secret|cipher-type,msg="header encryption key")
>> K_2 = HMAC_SHA256(key=ecdh_secret|cipher-type,msg="body encryption key")
>
> SHA512_HMAC is used by BIP32 [1] and I guess most clients will somehow
> make use of bip32 features. I thought a single SHA512_HMAC operation is
> cheaper and simpler than two SHA256_HMACs.

Good point; I would argue that mistake has already been made. But I was looking at appropriating your work for lightning inter-node comms, and adding another hash algo seemed unnecessarily painful.

> AFAIK, sha256_hmac is also not used by the current p2p & consensus layer.
> Bitcoin-Core uses it for HTTP RPC auth and Tor control.

It's also not clear to me why the HMAC, vs just SHA256(key|cipher-type|mesg). But that's probably just my crypto ignorance...

Thanks!
Rusty.
[bitcoin-dev] BIP 151 use of HMAC_SHA512
To quote:
> HMAC_SHA512(key=ecdh_secret|cipher-type,msg="encryption key").
>
> K_1 must be the left 32 bytes of the HMAC_SHA512 hash.
> K_2 must be the right 32 bytes of the HMAC_SHA512 hash.

This seems a weak reason to introduce SHA512 to the mix. Can we just make:

    K_1 = HMAC_SHA256(key=ecdh_secret|cipher-type,msg="header encryption key")
    K_2 = HMAC_SHA256(key=ecdh_secret|cipher-type,msg="body encryption key")

Thanks,
Rusty.
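Both derivations are easy to sketch side by side. The exact concatenation of `ecdh_secret` and cipher-type as the HMAC key follows the quoted text; treating them as raw bytes is my assumption:

```python
import hashlib
import hmac

def derive_bip151_style(ecdh_secret: bytes, cipher_type: bytes):
    """Quoted scheme: one HMAC-SHA512, split into two 32-byte keys."""
    full = hmac.new(ecdh_secret + cipher_type, b"encryption key",
                    hashlib.sha512).digest()
    return full[:32], full[32:]  # K_1 = left half, K_2 = right half

def derive_two_sha256(ecdh_secret: bytes, cipher_type: bytes):
    """Suggested alternative: two independent HMAC-SHA256 invocations,
    domain-separated by the message string."""
    key = ecdh_secret + cipher_type
    k1 = hmac.new(key, b"header encryption key", hashlib.sha256).digest()
    k2 = hmac.new(key, b"body encryption key", hashlib.sha256).digest()
    return k1, k2
```

Either way yields two independent-looking 32-byte keys; the alternative just avoids pulling SHA512 into a protocol that otherwise only needs SHA256.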
Re: [bitcoin-dev] Compact Block Relay BIP
Gregory Maxwell <g...@xiph.org> writes:
> On Tue, May 10, 2016 at 5:28 AM, Rusty Russell via bitcoin-dev
> <bitcoin-dev@lists.linuxfoundation.org> wrote:
>> I used variable-length bit encodings, and used the shortest encoding
>> which is unique to you (including mempool). It's a little more work,
>> but for an average node transmitting a block with 1300 txs and another
>> ~3000 in the mempool, you expect about 12 bits per transaction. IOW,
>> about 1/5 of your current size. Critically, we might be able to fit in
>> two or three TCP packets.
>
> Hm. 12 bits sounds very small even given those figures. What failure
> rate were you targeting?

That's a good question; I was assuming a best-case in which we have mempool set reconciliation (handwave) and thus know they are close.

But there's also an ulterior motive: any later, more sophisticated approach will want variable-length IDs, and I'd like Matt to do the work :)

In particular, you can significantly narrow the possibilities for a block by sending the min-fee-per-kb and a list of "txs in my mempool which didn't get in" and "txs which did despite not making the fee-per-kb". Those turn out to be tiny, and often make set reconciliation trivial. That's best done with variable-length IDs.

> (*Not interesting because it mostly reduces exposure to loss and the
> gods of TCP, but since those are the long poles in the latency tent,
> it's best to escape them entirely, see Matt's udp_wip branch.)

I'm not convinced on UDP; it always looks impressive, but then ends up reimplementing TCP in practice. We should be well within a TCP window for these, so it's hard to see where we'd win.

>> I would also avoid the nonce to save recalculating for each node, and
>> instead define an id as:
>
> Doing this would greatly increase the cost of a collision though, as
> it would happen in many places in the network at once, rather than
> just happening on a single link, thus hardly impacting overall
> propagation.
"Greatly increase"? I don't see that. Let's assume an attacker grinds out 10,000 txs with 128 bits of the same TXID, and gets them all in a block. They then win the lottery and get a collision. Now we have to transmit ~48 bytes more than expected. > Using the same nonce means you also would not get a recovery gain from > jointly decoding using compact blocks sent from multiple peers (which > you'll have anyways in high bandwidth mode). Not quite true, since if their mempools differ they'll use different encoding lengths, but yes, you'll get less of this. > With a nonce a sender does have the option of reusing what they got-- > but the actual encoding cost is negligible, for a 2500 transaction > block its 27 microseconds (once per block, shared across all peers) > using Pieter's suggestion of siphash 1-3 instead of the cheaper > construct in the current draft. > > Of course, if you're going to check your whole mempool to reroll the > nonce, thats another matter-- but that seems wasteful compared to just > using a table driven size with a known negligible failure rate. I'm not worried about the sender: The recipient needs to encode all the mempool. >> As Peter R points out, we could later enhance receiver to brute force >> collisions (you could speed that by sending a XOR of all the txids, but >> really if there are more than a few collisions, give up). > > The band between "no collisions" and "infeasible many" is fairly > narrow. You can add a small amount more space to the ids and > immediately be in the no collision zone. Indeed, I would be adding extra bits in the sender and not implementing brute force in the receiver. But I welcome someone else to do so. Cheers, Rusty. ___ bitcoin-dev mailing list bitcoin-dev@lists.linuxfoundation.org https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
Re: [bitcoin-dev] Simple Bitcoin Payment Channel Protocol v0.1 draft (request for comments)
Rune Kjær Svendsen via bitcoin-dev writes:
> Dear list
>
> I've spent the past couple of months developing a simple protocol for
> working with payment channels. I've written up a specification of how
> it operates, in an attempt to standardize the operations of opening,
> paying and closing.

Hi!

> CHECKLOCKTIMEVERIFY [...] allows payment channel setup to be risk free
> [...] something that was not the case before, when the refund Bitcoin
> transaction depended on another, unconfirmed Bitcoin transaction.
> Building on unconfirmed transactions is currently not safe in Bitcoin.

With Segregated Witness, this is now safe. With that expected soon, I'd encourage you to take advantage of it.

Cheers,
Rusty.
Re: [bitcoin-dev] SIGHASH_NOINPUT in Segregated Witness
Joseph Poon via bitcoin-dev writes:
> Ideally, a 3rd-party can be handed a transaction which can encompass all
> prior states in a compact way. For currently-designed Segregated Witness
> transactions, this requires storing all previous signatures, which can
> become very costly if individuals do thousands of channel state updates
> per day.

AFAICT we need more than this. Or are you using something other than the deployable-lightning commit tx style?

If each HTLC output is a p2sh[1], you need the timeout and rhash for each one to build the script to redeem it. In practice, there's not much difference between sending a watcher a tx for every commit tx and sending it information for every new HTLC (roughly a factor of 2).

So we also need to put more in the scriptPubKey for this to work: either the entire redeemscript, or possibly some kind of multiple-choice P2SH where any one of the hashes will redeem the payment.

Cheers,
Rusty.

[1] eg. from https://github.com/ElementsProject/lightning/blob/master/doc/deployable-lightning.pdf

    OP_HASH160 OP_DUP       # Replace top element with two copies of its hash
    OP_EQUAL                # Test if they supplied the HTLC R value
    OP_SWAP OP_EQUAL OP_ADD # Or the commitment revocation hash
    OP_IF                   # If any hash matched.
        # Pay to B.
    OP_ELSE                 # Must be A, after HTLC has timed out.
        OP_CHECKLOCKTIMEVERIFY  # Ensure (absolute) time has passed.
        OP_CHECKSEQUENCEVERIFY  # Delay gives B enough time to use revocation if it has it.
        OP_2DROP            # Drop the delay and htlc-timeout from the stack.
        # Pay to A.
    OP_ENDIF
    OP_CHECKSIG             # Verify A or B's signature is correct.
Re: [bitcoin-dev] Three Month bitcoin-dev Moderation Review
xor--- via bitcoin-dev <bitcoin-dev@lists.linuxfoundation.org> writes:
> On Thursday, January 21, 2016 11:20:46 AM Rusty Russell via bitcoin-dev wrote:
>> So, what should moderation look like from now on?
>
> The original mail which announced moderation contains this rule:
>> - Generally discouraged: [...], +1s, [...]
>
> I assume "+1s" means statements such as "I agree with doing X".
>
> Any sane procedure for deciding something includes asking the involved people
> whether they're for or against it.
> If there are dozens of proposals on how to solve a particular technical
> problem, how else do you want to decide it than by having a vote?

"+1s" here means simply saying "+1" or "me too": replies that carry no additional information. ie. if you like an idea, that's great, but it's not worth interrupting the entire list for.

If you say "I prefer proposal X over Y because <reason>", that's different. As is "I dislike X because <reason>" or "I need X because <reason>".

Hope that clarifies!
Rusty.
Re: [bitcoin-dev] Three Month bitcoin-dev Moderation Review
Dave Scotese via bitcoin-dev writes:
> It is a shame that the moderated messages require so many steps to
> retrieve. Is it possible to have the "downloadable version" from
> https://lists.ozlabs.org/pipermail/bitcoin-dev-moderation/ for each month
> contain the text of the moderated emails? They do contain the subjects, so
> that helps.

Yes. It's because we simply forward them to the bitcoin-dev-moderation mailing list, and it strips them out as attachments. I'd really love a filter which I could run them through (on ozlabs.org) to fix this. Volunteers welcome :)

Cheers,
Rusty.
Re: [bitcoin-dev] Time to worry about 80-bit collision attacks or not?
Matt Corallo writes:
> Indeed, anything which uses P2SH is obviously vulnerable if there is
> an attack on RIPEMD160 which reduces its security only marginally.

I don't think this is true? Even if you can generate a collision in RIPEMD160, that doesn't help you, since you need to create a specific SHA256 hash for the RIPEMD160 preimage. Even a preimage attack only helps if it leads to more than one preimage fairly cheaply; that would make grinding out the SHA256 preimage easier. AFAICT even MD4 isn't this broken.

But just with Moore's law (doubling every 18 months), we'll worry about economically viable attacks in 20 years.[1]

That's far enough away that I would choose simplicity, and have all SW scriptPubKeys simply be "<0> RIPEMD(SHA256(WP))" for now, but it's not a no-brainer.

Cheers,
Rusty.

[1] Assume bitcoin-network-level compute (collision in 19 days) costs $1B to build today. Assume there will be 100 million dollars a day in vulnerable txs, and you're on one end of all of them (or can MITM if you find a collision), *and* can delay them all by 10 seconds, and none are in parallel, so you can attack all of them. IOW, just like a single $100M opportunity for 3650 seconds each year. Our machine has a 0.11% chance of finding a collision in 1 hour, so it's worth about $110,000. We can build it for that in about 20 years.
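Footnote [1]'s Moore's-law endpoint can be checked quickly. This is my arithmetic using the footnote's own figures (build cost $1B today, halving every 18 months, opportunity worth ~$110k):

```python
import math

# When does a $1B machine's build cost fall to the ~$110,000 the
# collision opportunity is worth, at one cost-halving per 18 months?
cost_now = 1e9
target = 110_000
halvings = math.log2(cost_now / target)  # ~13.15 halvings needed
years = 1.5 * halvings                   # 18 months per halving
print(round(years))  # ≈ 20 years
```

Which matches the "about 20 years" figure in the footnote.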
Re: [bitcoin-dev] [BIP Draft] Decentralized Improvement Proposals
Luke Dashjr via bitcoin-dev writes:
> On Wednesday, December 30, 2015 6:22:59 PM Tomas wrote:
>>> The specification itself looks like an inefficient and bloaty reinvention
>>> of version bits.
>>
>> The actual assignment of version bits isn't clear from the
>> specification. Are you saying that any implementation that wants to
>> propose a change is encouraged to pick a free version bit and use it?
>
> That should probably be clarified in the BIP, I agree. Perhaps it ought to
> be assigned the same way as BIP numbers themselves, by the BIP editor?
> (Although as a limited resource, maybe that's not the best solution.)

I thought about it, but it's subject to change. Frankly, the number of deployed forks is low enough that they can sort it out themselves. If we need something more robust, I'm happy to fill that role.

Cheers,
Rusty.
Re: [bitcoin-dev] On the security of softforks
Jonathan Toomim via bitcoin-dev writes:
> On Dec 18, 2015, at 10:30 AM, Pieter Wuille via bitcoin-dev wrote:
>
>> 1) The risk of an old full node wallet accepting a transaction that is
>> invalid to the new rules.
>>
>> The receiver wallet chooses what address/script to accept coins on.
>> They'll upgrade to the new softfork rules before creating an address
>> that depends on the softfork's features.
>>
>> So, not a problem.
>
> Mallory wants to defraud Bob with a 1 BTC payment for some beer. Bob
> runs the old rules. Bob creates a p2pkh address for Mallory to
> use. Mallory takes 1 BTC, and creates an invalid SegWit transaction
> that Bob cannot properly validate and that pays into one of Mallory's
> wallets. Mallory then immediately spends the unconfirmed transaction
> into Bob's address. Bob sees what appears to be a valid transaction
> chain which is not actually valid.

Pretty sure Bob's wallet will be looking for an "OP_DUP OP_HASH160 <pubKeyHash> OP_EQUALVERIFY OP_CHECKSIG" scriptPubKey. The SegWit-usable outputs will (have to) look different, won't they?

Cheers,
Rusty.
Re: [bitcoin-dev] Segregated Witness features wish list
Jannes Faber via bitcoin-dev writes:
> Segregated IBLT
>
> I was just wondering if it would make sense, when we have SW, to also make
> a Segregated IBLT: segregating transactions from signatures and then tuning
> the parameters such that transactions have a slightly higher guarantee and
> we save a bit of space on the signatures side.

It just falls out naturally. If the peer doesn't want the witnesses, they don't get serialized into the IBLT.

Cheers,
Rusty.
Re: [bitcoin-dev] [BIP Draft] Datastream compression of Blocks and Transactions
Gavin Andresen via bitcoin-dev writes:
> On Wed, Dec 2, 2015 at 1:57 PM, Emin Gün Sirer wrote:
>
>> How to Do It
>>
>> If we want to compress Bitcoin, a programming challenge/contest would be
>> one of the best ways to find the best possible, Bitcoin-specific
>> compressor. This is the kind of self-contained exercise that bright young
>> hackers love to tackle. It'd bring new programmers into the ecosystem,
>> and many of us would love to discover the limits of compressibility for
>> Bitcoin bits on a wire. And the results would be interesting even if the
>> final compression engine is not enabled by default, or not even merged.
>
> I love this idea. Let's build a standardized data set to test against using
> real data from the network (has anybody done this yet?).

https://github.com/rustyrussell/bitcoin-corpus

It includes mempool contents and tx receipt logs for 1 week across 4 nodes. I vaguely plan to update it every year.

A more ambitious version would add some topology information, but we'd need to figure out an anonymization strategy for that data.

Cheers,
Rusty.
Re: [bitcoin-dev] Alternative name for CHECKSEQUENCEVERIFY (BIP112)
Eric Lombrozo via bitcoin-dev writes:
> From an app developer's perspective, I think it is pretty blatantly
> clear that relative timelock is *the* critical exposed functionality
> intended here.

As someone who has actually developed scripts using CSV, I agree with Mark (and Matt). The relative-locktime stuff isn't in this opcode; it's in the nSequence calculation.

So, I vote to keep CSV called as it is.

Thanks,
Rusty.
Re: [bitcoin-dev] Compatibility requirements for hard or soft forks
Gavin Andresen via bitcoin-dev writes:
> Should it be a requirement that ANY one-megabyte transaction that is valid
> under the existing rules also be valid under new rules?
>
> Pro: There could be expensive-to-validate transactions created and given a
> lockTime in the future stored somewhere safe. Their owners may have no
> other way of spending the funds (they might have thrown away the private
> keys), and changing validation rules to be more strict so that those
> transactions are invalid would be an unacceptable confiscation of funds.

Not just lockTime: potentially any tx locked away in a safe. Consider low-S enforcement: there's a high chance a non-expert user will be unable to spend an old transaction, and will need to compromise their privacy and/or spend time and money. A milder "confiscation", but a more likely one.

By that benchmark, we should aim for "reasonable certainty". A transaction which would never have been generated by any known software is the minimum bar. Adding "...which would have to be deliberately stupid, with many redundant OP_CHECKSIGs etc." surpasses it.

The only extra safeguard I can think of is clear, widespread notification of the change.

Cheers,
Rusty.
Re: [bitcoin-dev] Memory leaks?
Btc Drak via bitcoin-dev writes:
> I think this thread has gotten to the stage where it should be moved
> to an issue on Github and not continue to CC the bitcoin-dev list (or
> any other list tbh).

Agreed. I couldn't see an issue, so I've opened one. Let's track this there, please.

https://github.com/bitcoin/bitcoin/issues/6876

Cheers,
Rusty.
Re: [bitcoin-dev] CHECKSEQUENCEVERIFY - We need more usecases to motivate the change
Btc Drak writes:
> Alex,
>
> I am sorry for not communicating more clearly. Mark and I discussed your
> concerns from the last meeting and he made the change. The BIP text still
> needs to be updated, but the discussed change was added to the PR, albeit
> squashed, making it more non-obvious. BIP68 now explicitly uses 16 bits
> with a bitmask. Please see the use of SEQUENCE_LOCKTIME_MASK and
> SEQUENCE_LOCKTIME_GRANULARITY in the PR
> https://github.com/bitcoin/bitcoin/pull/6312.

I like it from a technical perspective.

From a practical perspective: yuck. There's currently no way to play with bitcoind's perception of time, so that's a very long sleep to blackbox test (which is what my lightning test script does).

So consider this YA feature request :)

Cheers,
Rusty.
Re: [bitcoin-dev] CHECKSEQUENCEVERIFY - We need more usecases to motivate the change
Rusty Russell via bitcoin-dev <bitcoin-dev@lists.linuxfoundation.org> writes:
> From a practical perspective: yuck. There's currently no way to play
> with bitcoind's perception of time, so that's a very long sleep to
> blackbox test (which is what my lightning test script does).
>
> So consider this YA feature request :)

... Gavin just told me about setmocktime. That's fast service!

Thanks,
Rusty.
Re: [bitcoin-dev] CHECKSEQUENCEVERIFY - We need more usecases to motivate the change
Peter Todd via bitcoin-dev writes:
> However I don't think we've done a good job showing why we need to
> implement this feature via nSequence.

It could be implemented in other ways, but nSequence is the neatest and most straightforward I've seen.

- I'm not aware of any other (even vague) proposal for its use? Enlighten?
- BIP68 reserves much of it for future use already.

If we apply infinite caution, we could never use nSequence at all, as there might be a better use tomorrow.

Cheers,
Rusty.
Re: [bitcoin-dev] Versionbits BIP (009) minor revision proposal.
Rusty Russell via bitcoin-dev <bitcoin-dev@lists.linuxfoundation.org> writes:
> Gregory Maxwell <gmaxw...@gmail.com> writes:
>> I can, however, argue it the other way (and probably have in the
>> past): The bit is easily checked by thin clients, so thin clients
>> could use it to reject potentially ill-fated blocks from non-upgraded
>> miners post switch (which otherwise they couldn't reject without
>> inspecting the whole thing). This is an improvement over not forcing
>> the bit, and it's why I was previously in favor of the way the
>> versions were enforced. But, experience has played out other ways,
>> and thin clients have not done anything useful with the version
>> numbers.
>>
>> A middle ground might be to require setting the bit for a period of
>> time after rule enforcing begins, but don't enforce the bit, just
>> enforce validity of the block under new rules. Thus a thin client
>> could treat these blocks with increased skepticism.
>
> Introducing this later would trigger warnings on older clients, who
> would consider the bit to represent a new soft fork :(

Actually, this isn't a decisive argument, since we can use the current mechanism to upgrade versionbits, or as Eric says, tack it onto an existing soft fork. So I think I'm back where I started: we leave this for now.

There was no NAK on the "keep setting the bit until activation" proposal, so I'm opening a pull request for that now:

https://github.com/bitcoin/bips/pull/209

Cheers,
Rusty.
Re: [bitcoin-dev] Let's deploy BIP65 CHECKLOCKTIMEVERIFY!
Adam Back writes: > I think from discussion with Gavin sometime during the montreal > scaling bitcoin workshop, XT may be willing to make things easy and > adapt what it's doing. For example in relation to versionBits Gavin > said he'd be willing to update XT with an updated/improved > versionBits, for example. > > It seems more sensible to do what is simple and clean and have both > core do that, and XT follow if there is no particular philosophy > debate on a given technical topic. This seems a quite constructive > approach. That too, but let's not break existing software. This proposal allows that, and is trivial. Cheers, Rusty.
Re: [bitcoin-dev] Versionbits BIP (009) minor revision proposal.
Gregory Maxwell writes: > I can, however, argue it the other way (and probably have in the > past): The bit is easily checked by thin clients, so thin clients > could use it to reject potentially ill-fated blocks from non-upgraded > miners post switch (which otherwise they couldn't reject without > inspecting the whole thing). This is an improvement over not forcing > the bit, and it's why I was previously in favor of the way the > versions were enforced. But, experience has played out other ways, > and thin clients have not done anything useful with the version > numbers. > > A middle ground might be to require setting the bit for a period of > time after rule enforcing begins, but don't enforce the bit, just > enforce validity of the block under new rules. Thus a thin client > could treat these blocks with increased skepticism. Introducing this later would trigger warnings on older clients, who would consider the bit to represent a new soft fork :( So if we want this middle ground, we should sew it in now, though it adds another state. Simplest is to have miners keep setting the bit for another 2016 blocks. If we want to later, we can make this a consensus rule. "Bitcoin is hard, let's go shopping!" "With Bitcoin!" "..." Rusty.
[bitcoin-dev] Versionbits BIP (009) minor revision proposal.
Hi all, Pieter and Eric pointed out that the current BIP has miners turning off the bit as soon as it's locked in (75% testnet / 95% mainnet). It's better for them to keep setting the bit until activation (2016 blocks later), so network adoption is visible. I'm not proposing this yet, though I note it for the future: miners keep setting the bit for another 2016 blocks after activation, and have a consensus rule that rejects blocks without the bit. That would "force" upgrades on those last miners. I feel we should see how this works first. Cheers, Rusty. diff --git a/bip-0009.mediawiki b/bip-0009.mediawiki index c17ca15..b160810 100644 --- a/bip-0009.mediawiki +++ b/bip-0009.mediawiki @@ -37,14 +37,15 @@ retarget period. Software which supports the change should begin by setting B in all blocks mined until it is resolved. -if (BState == defined) { +if (BState != activated && BState != failed) { SetBInBlock(); } '''Success: Lock-in Threshold''' If bit B is set in 1916 (1512 on testnet) or more of the 2016 blocks within a retarget period, it is considered -''locked-in''. Miners should stop setting bit B. +''locked-in''. Miners should continue setting bit B, so uptake is +visible. if (NextBlockHeight % 2016 == 0) { if (BState == defined && Previous2016BlocksCountB() >= 1916) { @@ -57,7 +58,7 @@ more of the 2016 blocks within a retarget period, it is considered The consensus rules related to ''locked-in'' soft fork will be enforced in the second retarget period; ie. there is a one retarget period in which the remaining 5% can upgrade. At the that activation block and -after, the bit B may be reused for a different soft fork. +after, miners should stop setting bit B, which may be reused for a different soft fork. if (BState == locked-in && NextBlockHeight == BActiveHeight) { BState = activated;
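The revised bit-setting rule in the diff above boils down to a small predicate. A minimal sketch with illustrative names (not BIP or Core code): miners keep signalling through lock-in, and stop only once the deployment is activated or failed.

```cpp
#include <cassert>

// Deployment states from the versionbits BIP draft.
enum class BState { Defined, LockedIn, Activated, Failed };

// Revised rule: keep setting bit B in every state except activated and
// failed, so uptake stays visible right through the lock-in period.
// (The old rule stopped setting the bit as soon as lock-in occurred.)
bool should_set_bit(BState s) {
    return s != BState::Activated && s != BState::Failed;
}
```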
Re: [bitcoin-dev] [BIP Proposal] Version bits with timeout and delay.
Tom Harding via bitcoin-dev <bitcoin-dev@lists.linuxfoundation.org> writes: > On 9/13/2015 11:56 AM, Rusty Russell via bitcoin-dev wrote: >> '''Success: Activation Delay''' >> The consensus rules related to ''locked-in'' soft fork will be enforced in >> the second retarget period; ie. there is a one retarget period in >> which the remaining 5% can upgrade. At the that activation block and >> after, the bit B may be reused for a different soft fork. >> > > Rather than a simple one-period delay, should there be a one-period > "burn-in" to show sustained support of the threshold? During this > period, support must continuously remain above the threshold. Any lapse > resets to inactivated state. > > With a simple delay, you can have the embarrassing situation where > support falls off during the delay period and there is far below > threshold support just moments prior to enforcement, but enforcement > happens anyway. Yeah, but Gavin's right. If you can't account for all the corner cases, all you can do is keep it simple and well defined. Thanks, Rusty.
Re: [bitcoin-dev] Let's deploy BIP65 CHECKLOCKTIMEVERIFY!
"Wladimir J. van der Laan via bitcoin-dev"writes: > On Sun, Sep 27, 2015 at 02:50:31PM -0400, Peter Todd via bitcoin-dev wrote: > >> It's time to deploy BIP65 CHECKLOCKTIMEVERIFY. > > There appears to be common agreement on that. > > The only source of some controversy is how to deploy: versionbits versus > IsSuperMajority. I think the versionbits proposal should first have code > out there for longer before we consider it for concrete softforks. Haste-ing > along versionbits because CLTV is wanted would be risky. Agreed. Unfortunately, a simple "block version >= 4" check is insufficient, due to XT which sets version bits 001111. Given that, I suggest using the simple test: if (pstart->nVersion & 0x8) ++nFound; Which means: 1) XT won't trigger it. 2) It won't trigger XT. 3) You can simply set block nVersion to 8 for now. 4) We can still use versionbits in parallel later. Cheers, Rusty. ___ bitcoin-dev mailing list bitcoin-dev@lists.linuxfoundation.org https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
Re: [bitcoin-dev] Weak block thoughts...
Gavin Andresen via bitcoin-dev writes: > I don't see any incentive problems, either. Worst case is more miners > decide to skip validation and just mine a variation of the > highest-fee-paying weak block they've seen, but that's not a disaster-- > invalid blocks will still get rejected by all the non-miners running full > nodes. That won't help SPV nodes, unfortunately. > If we did see that behavior, I bet it would be a good strategy for a big > hashrate miner to dedicate some of their hashrate to announcing invalid > weak blocks; if you can get your lazy competitors to mine it, then you > win. We already see non-validating mining, but they do empty blocks. This just makes it more attractive in the future, since you can collect fees too. But I think it's clear we'll eventually need some UTXO commitment so full nodes can tell SPV nodes about bad blocks. Cheers, Rusty.
Re: [bitcoin-dev] [BIP Proposal] Version bits with timeout and delay.
Jorge Timón writes: > I disagree with the importance of this concern and old soft/hardforks will > replace this activation mechanism with height, so that's an argument in > favor of using the height from the start. This is "being discussed" in a > thread branched from bip99's discussion. Thanks, I'll have to dig through bitcoin-dev and find it. > Anyway, is this proposing to use the block time or the median block time? > For some hardforks/softforks the block time complicates the implementation > (ie in acceptToMemoryPool) as discussed in the mentioned thread. BIP text is pretty clear that it's median block time. This is only for timeout, not for soft fork rule change (which *is* 2016 blocks after 95% is reached). Cheers, Rusty.
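For reference, "median block time" here is the median-time-past rule: the median of the previous 11 block timestamps. A minimal sketch (not the Core implementation):

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <vector>

// Median of the last 11 block timestamps ("median time past").  Takes
// the vector by value so sorting doesn't disturb the caller's copy.
int64_t median_time_past(std::vector<int64_t> last11) {
    std::sort(last11.begin(), last11.end());
    return last11[last11.size() / 2];
}
```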
Re: [bitcoin-dev] [BIP Proposal] Version bits with timeout and delay.
Tier Nolan via bitcoin-dev writes: > The advantage of enforcing the rule when 75% is reached (but only for > blocks with the bit set) is that miners get early notification that they > have implemented the rule incorrectly. They might produce blocks that > they think are fine, but which aren't. Indeed. There are three believable failure possibilities: 1) You don't implement the rule at all, and don't set the bit. 2) You implement it and set bit, but think some valid block is invalid. 3) You implement it and set bit, but think some invalid block is valid. #1 is by far the most common, and the proposal is designed so they *always* get ~2 weeks warning before they drop to SPV security. Assuming the mining majority isn't buggy (otherwise, it's arguably not a bug but a feature!) #2 is the worst case: some miners fork off and don't rejoin. So there is a slight advantage in doing this early: those buggy miners no longer contribute to the 95% threshold. But that's outweighed IMHO by: 1) We would need another delay at 75% so #1 nodes can upgrade. 2) The new feature won't be exercised much before activation, since it's useless before then, so it might not find bugs anyway. In conclusion, I'm not convinced by the extra complexity. Cheers, Rusty.
Re: [bitcoin-dev] [BIP Proposal] Version bits with timeout and delay.
Tier Nolan via bitcoin-dev <bitcoin-dev@lists.linuxfoundation.org> writes: > On Sun, Sep 13, 2015 at 7:56 PM, Rusty Russell via bitcoin-dev < > bitcoin-dev@lists.linuxfoundation.org> wrote: > >> '''States''' >> With every softfork proposal we associate a state BState, which begins >> at ''defined'', and can be ''locked-in'', ''activated'', >> or ''failed''. Transitions are considered after each >> retarget period. >> > > I think the 75% rule should be maintained. It confirms that miners who are > setting the bit are actually creating blocks that meet the new rule (though > it doesn't check if they are enforcing it). I couldn't see a use for it, since partial enforcement of a soft fork is pretty useless. Your point about checking that miners are actually doing it is true, though all stuff being forked in, in future, will be nonstandard AFAICT. I bias towards simplicity for this. > What is the reason for aligning the update to the difficulty window? Miners already have that date in their calendar, so I prefer to anchor to that. > *defined* > Miners set bit > If 75% of blocks of last 2016 have bit set, goto tentative > > > *tentative* > Miners set bit > Reject blocks that have bit set that don't follow new rule > If 95% of blocks of last 2016 have bit set, goto locked-in > > > *locked-in* > > Point of no return > Miners still set bit > Reject blocks that have bit set that don't follow new rule > After 2016 blocks goto notice OK, *that* variant makes perfect sense, and is no more complex, AFAICT. So, there's two weeks to detect bad implementations, then everyone stops setting the bit, for later reuse by another BIP. > I think counting in blocks is easier to be exact here. Easier for code, but harder for BIP authors. > If two bits were allocated per proposal, then miners could vote against > forks to recover the bits. If 25% of the miners vote against, then that > kills it.
You need a timeout: an ancient (non-mining, thus undetectable) node should never fork itself off the network because someone reused a failed BIP bit. > In the rationale, it would be useful to discuss effects on SPV clients and > buggy miners. > > SPV clients should be recommended to actually monitor the version field. SPV clients don't experience a security change when a soft fork occurs: they're already trusting miners. Greg pointed out that soft forks tend to get accompanied by block forks across activation, but SPV clients should *definitely* be taking those into account whenever they happen, right? Thanks! Rusty.
[bitcoin-dev] [BIP Proposal] Version bits with timeout and delay.
Hi all, For those who've seen the original versionbits bip, this adds: 1) Percentage checking only on retarget period boundaries. 2) 1 retarget period between 95% and activation. 3) A stronger suggestion for timeout value selection. https://gist.github.com/rustyrussell/47eb08093373f71f87de And pasted below, de-formatted a little. Thanks, Rusty. BIP: ?? Title: Version bits with timeout and delay Author: Pieter Wuille <pieter.wui...@gmail.com>, Peter Todd <p...@petertodd.org>, Greg Maxwell <g...@xiph.org>, Rusty Russell <ru...@rustcorp.com.au> Status: Draft Type: Informational Track Created: 2015-10-04 ==Abstract== This document specifies a proposed change to the semantics of the 'version' field in Bitcoin blocks, allowing multiple backward-compatible changes (further called "soft forks") to be deployed in parallel. It relies on interpreting the version field as a bit vector, where each bit can be used to track an independent change. These are tallied each retarget period. Once the consensus change succeeds or times out, there is a "fallow" pause after which the bit can be reused for later changes. ==Motivation== BIP 34 introduced a mechanism for doing soft-forking changes without a predefined flag timestamp (or flag block height), instead relying on measuring miner support indicated by a higher version number in block headers. As it relies on comparing version numbers as integers however, it only supports one single change being rolled out at once, requiring coordination between proposals, and does not allow for permanent rejection: as long as one soft fork is not fully rolled out, no future one can be scheduled. In addition, BIP 34 made the integer comparison (nVersion >= 2) a consensus rule after its 95% threshold was reached, removing 2^31+2 values from the set of valid version numbers (all negative numbers, as nVersion is interpreted as a signed integer, as well as 0 and 1).
This indicates another downside of this approach: every upgrade permanently restricts the set of allowed nVersion field values. This approach was later reused in BIP 66, which further removed nVersion = 2 as a valid option. As will be shown further, this is unnecessary. ==Specification== ===Mechanism=== '''Bit flags''' We are permitting several independent soft forks to be deployed in parallel. For each, a bit B is chosen from the set {0,1,2,...,28}, which is not currently in use for any other ongoing soft fork. Miners signal intent to enforce the new rules associated with the proposed soft fork by setting bit B in nVersion to 1 in their blocks. '''High bits''' The highest 3 bits are set to 001, so the range of actually possible nVersion values is [0x20000000...0x3FFFFFFF], inclusive. This leaves two future upgrades for different mechanisms (top bits 010 and 011), while complying with the constraints set by BIP34 and BIP66. Having more than 29 available bits for parallel soft forks does not add anything anyway, as the (nVersion >= 3) requirement already makes that impossible. '''States''' With every softfork proposal we associate a state BState, which begins at ''defined'', and can be ''locked-in'', ''activated'', or ''failed''. Transitions are considered after each retarget period. '''Soft Fork Support''' Software which supports the change should begin by setting B in all blocks mined until it is resolved. if (BState == defined) { SetBInBlock(); } '''Success: Lock-in Threshold''' If bit B is set in 1916 (1512 on testnet) or more of the 2016 blocks within a retarget period, it is considered ''locked-in''. Miners should stop setting bit B. if (NextBlockHeight % 2016 == 0) { if (BState == defined && Previous2016BlocksCountB() >= 1916) { BState = locked-in; BActiveHeight = NextBlockHeight + 2016; } } '''Success: Activation Delay''' The consensus rules related to ''locked-in'' soft fork will be enforced in the second retarget period; ie.
there is one retarget period in which the remaining 5% can upgrade. At that activation block and after, the bit B may be reused for a different soft fork. if (BState == locked-in && NextBlockHeight == BActiveHeight) { BState = activated; ApplyRulesForBFromNextBlock(); /* B can be reused, immediately */ } '''Failure: Timeout''' A soft fork proposal should include a ''timeout''. This is measured as the beginning of a calendar year as per this table (suggested three years from drafting the soft fork proposal):

Timeout Year >=   Seconds        Timeout Year >=   Seconds
2018              1514764800     2026              1767225600
2019              1546300800     2027              1798761600
2020              1577836800     2028              1830297600
2021              1609459200     2029              1861920000
2022              1640995200     2030              1893456000
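The signalling arithmetic the draft specifies can be sketched as follows. Helper names are illustrative; the top-bits constant and the 1916/1512 thresholds come from the text above.

```cpp
#include <cassert>
#include <cstdint>

// High bits 001 required of every block version under this scheme.
const int32_t VERSIONBITS_TOP = 0x20000000;

// A miner supporting soft fork bit B sets it on top of the high bits.
int32_t signal(int B) { return VERSIONBITS_TOP | (1 << B); }

bool bit_set(int32_t nVersion, int B) { return (nVersion >> B) & 1; }

// Lock-in needs bit B in 1916 of the 2016 blocks of a retarget period
// on mainnet (95%), or 1512 on testnet (75%).
bool locked_in(int blocks_with_bit, bool testnet) {
    return blocks_with_bit >= (testnet ? 1512 : 1916);
}
```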
Re: [bitcoin-dev] [BIP-draft] CHECKSEQUENCEVERIFY - An opcode for relative locktime
jl2...@xbt.hk writes: Rusty Russell wrote on 2015-08-26 23:08: - We should immediately deploy an IsStandard() rule which insists that nSequence is 0xFFFFFFFF or 0, so nobody screws themselves when we soft fork and they had random junk in there. This is not needed because BIP68 is not active for version 1 tx. No existing wallet would be affected. Ah thanks! I missed the version bump in BIP68. Aside: I'd also like to have nLockTime apply even if nSequence != 0xFFFFFFFF (another mistake I made). So I'd like an IsStandard() rule to say nLockTime must be 0 if any nSequence != 0xFFFFFFFF. Would that screw anyone currently? Do you mean have nLockTime apply even if nSequence = 0xFFFFFFFF? This is a softfork. Should we do this together with BIP65, BIP68 and BIP112? Yes, but Mark pointed out that it has uses, so I withdraw the suggestion. Thanks, Rusty.
Re: [bitcoin-dev] [BIP-draft] CHECKSEQUENCEVERIFY - An opcode for relative locktime
Btc Drak via bitcoin-dev bitcoin-dev@lists.linuxfoundation.org writes: This BIP has been assigned BIP112 by the BIP repository maintainer. I have updated the pull request accordingly. Regarding the suggestion to cannibalise version, by your own disadvantage list, we would lose fine grained control over txins which neuters the usefulness considerably. Also using version is ugly because there isn't a semantic association with what we are trying to do, whereas sequence is associated with transaction finality and is thus the more appropriate and logical field to use. OK, having implemented lightning test code against the initial proposal, I can give the following anecdata: - I screwed up inversion in my initial implementation. Please kill it. - 256 second granularity would be fine in deployment, but a bit painful for testing (I currently use 60 seconds, and sleep 61). 64 would work better for me, and works roughly as minutes. - 1 year should be sufficient as a max; my current handwave is <= 1 day per lightning hop, max 12 hops, though we have no deployment data. - We should immediately deploy an IsStandard() rule which insists that nSequence is 0xFFFFFFFF or 0, so nobody screws themselves when we soft fork and they had random junk in there. Aside: I'd also like to have nLockTime apply even if nSequence != 0xFFFFFFFF (another mistake I made). So I'd like an IsStandard() rule to say nLockTime must be 0 if any nSequence != 0xFFFFFFFF. Would that screw anyone currently? Thanks, Rusty.
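The IsStandard() suggestion in that last bullet amounts to a one-line predicate on each input's nSequence. A sketch, assuming per-input checking (the function name is mine, not Core's):

```cpp
#include <cassert>
#include <cstdint>

// Relay only inputs whose nSequence is final (0xFFFFFFFF) or 0, so no
// wallet has "random junk" in the field by the time a soft fork
// (BIP68) gives the other values meaning.
bool sequence_is_standard(uint32_t nSequence) {
    return nSequence == 0xFFFFFFFFu || nSequence == 0;
}
```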
Re: [bitcoin-dev] BIP 68 Questions
Rusty Russell ru...@rustcorp.com.au writes: Hi Mark, It looks like the code in BIP 68 compares the input's nSequence against the transaction's nLockTime: No, forget this. I misread the code. Mark ELI5'd to me offlist, thanks! FWIW, the code works :) Cheers, Rusty.
[bitcoin-dev] BIP 68 Questions
Hi Mark, It looks like the code in BIP 68 compares the input's nSequence against the transaction's nLockTime: if ((int64_t)tx.nLockTime < LOCKTIME_THRESHOLD) nMinHeight = std::max(nMinHeight, (int)tx.nLockTime); else nMinTime = std::max(nMinTime, (int64_t)tx.nLockTime); if (nMinHeight >= nBlockHeight) return nMinHeight; if (nMinTime >= nBlockTime) return nMinTime; So if transaction B spends the output of transaction A: 1. If A is in the blockchain already, you don't need a relative locktime since you know A's time. 2. If it isn't, you can't create B since you don't know what value to set nLockTime to. How was this supposed to work? Thanks, Rusty.
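Made runnable, the split the quoted code performs looks roughly like this: a single-lock sketch using Core's LOCKTIME_THRESHOLD constant, with a helper name of my own choosing rather than the BIP's.

```cpp
#include <cassert>
#include <cstdint>

// nLockTime values below this are block heights; at or above it, they
// are Unix timestamps (as in Bitcoin Core).
const int64_t LOCKTIME_THRESHOLD = 500000000;

// True once a lock of nLockTime is satisfied at the given chain tip.
bool lock_satisfied(int64_t nLockTime, int64_t nBlockHeight, int64_t nBlockTime) {
    if (nLockTime < LOCKTIME_THRESHOLD)
        return nLockTime <= nBlockHeight;   // height-based lock
    return nLockTime <= nBlockTime;         // time-based lock
}
```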
Re: [bitcoin-dev] [RFC] IBLT block testing implementation
Kalle Rosenbaum ka...@rosenbaum.se writes: 2015-06-23 7:53 GMT+02:00 Rusty Russell ru...@rustcorp.com.au: Hi all, I've come up with a model for using IBLT to communicate blocks between peers. The gory details can be found on github: it's a standalone C++ app for testing not integrated with bitcoin. https://github.com/rustyrussell/bitcoin-iblt Good to see that you're working on this. Really exciting! I want to take the opportunity to link to my previous work on IBLTs, for those that haven't seen it, where I investigate the behaviour of the IBLT when changing different parameters, like cell count, hashFunctionCount etc: https://github.com/kallerosenbaum/bitcoin-iblt/wiki Yep, I stole the hashFunctionCount = 3 straight from there, same with 64-byte bucket contents. From glancing over your implementation, I see that you don't use a keyHashSum, in fact you don't use a key at all, but only a concatenation of (txid48, fragid, tx-chunk) as value. Here the txid48+fragid functions as a kind of keyHashSum. I think this might be a very good idea. If you have a false positive with count == 1, then you would probably detect it if fragid is outside a reasonable limit from base_fragid. Did you implement your idea to remove all the count==1 fragments in ascending order of (fragid-base_fragid)? Yep! I keep records of all the 1 and -1 buckets; separate lists depending on how far they are off the base. Lists for 0, 1, 2, ... 7, then powers of 2. See todo in iblt.cpp. Anyhow, I think we should make some more comparable tests, just as you proposed last year when I didn't reply, sorry... My code is a more straight forward implementation of the IBLT paper (http://arxiv.org/pdf/1101.2245.pdf) and encoding blocks is done pretty much as Gavin proposed in his gist. That should function as a baseline so that we can validate that your optimizations actually work. Please contact me directly if you are interested in that, and we'll figure something out. Absolutely, will do that offline.
More comments/questions inline: Assumptions and details: 1. The base idea comes from Gavin's Block Propagation gist: https://gist.github.com/gavinandresen/e20c3b5a1d4b97f79ac2 2. It relies on similarity in mempools, with some selection hints. This is designed to be as flexible as possible to make fewest assumptions on tx selection policy. 3. The selection hints are: minimum fee-per-byte (fixed point) and bitmaps of included-despite-that and rejected-despite-that. The former covers things like child-pays-for-parent and the priority area. The latter covers other cases like Eligius censoring spam, bitcoin version differences, etc. There is a chance that the bit prefix of the added or removed tx is not unique within the receiver's mempool. In that case the receiver can probably just use the earliest matching transaction and hope for the best. If not - bad luck. Is that how you do it? No; they add or remove all matching. If they add too many, that's the easy case, of course. They can't remove too many (since they know that bit prefix was unique on the other end). 4. Cost to represent these exceptional added or excluded transactions is (on average) log2(transactions in mempool) bits. These exceptional tx *could* instead be encoded in the IBLT, just as if they were unknown differences. Your bitmaps are probably a more compact representation, but also more complex. It's pretty easy to cut the bitmaps to zero and test this (comment them out in wire_encode.cpp, for example). But the overhead of IBLT is some factor greater than txsize (need to measure that factor, too!); whereas these are a log(#txs-in-mempool) bits. 5. At Pieter Wuille's suggestion, it is designed to be reencoded between nodes. It seems fast enough for that, and neighboring nodes should have most similar mempools. What is the purpose of reencoding when relaying? Is that to improve the failure probability as new tx may have arrived in the mempool of the intermediary node? Yep, and estimation ability.
The theory is that adjacent nodes will have closer mempools, allowing for smaller IBLTs. The size of mempool differences for each block can be fed back, so you have an idea of the likely delta to peers (ie. add that to the actual difference between your mempool and the new block, to estimate the amount of IBLT reconstruction required). 6. It performs reasonably well on my 100 block sample in bitcoin-corpus (chosen to include a mempool spike): Average block range (bytes): 7988-999829; Block size mean (bytes): 401926; Minimal decodable IBLT size range (bytes): 314-386361; Minimal decodable IBLT size mean (bytes): 13265. 7. Actual results will have to be worse than these minima, as peers will have to oversize the IBLT to have reasonable chance of success
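For readers unfamiliar with the data structure, here is a toy IBLT showing the insert/peel mechanics the thread relies on. It stores bare integer keys rather than the (txid48, fragid, tx-chunk) buckets of bitcoin-iblt, and the hash mixing is arbitrary; a sketch only, not the linked implementation.

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// A cell accumulates XORed keys plus a count of entries landing in it.
struct Cell { int count = 0; uint64_t keySum = 0; };

struct Iblt {
    std::vector<Cell> cells;
    explicit Iblt(size_t n) : cells(n) {}   // n should be divisible by 3

    // Three hash functions, one per disjoint subtable, so a key never
    // hits the same cell twice.
    size_t slot(uint64_t key, int i) const {
        size_t sub = cells.size() / 3;
        return (size_t)i * sub + (key * (0x9E3779B9ull + (uint64_t)i)) % sub;
    }
    void toggle(uint64_t key, int d) {
        for (int i = 0; i < 3; i++) {       // hashFunctionCount = 3
            Cell& c = cells[slot(key, i)];
            c.count += d;
            c.keySum ^= key;
        }
    }
    void insert(uint64_t key) { toggle(key, +1); }
    void erase(uint64_t key)  { toggle(key, -1); }

    // Peel keys out of "pure" (count == 1) cells until none remain.
    std::vector<uint64_t> decode() {
        std::vector<uint64_t> out;
        bool progress = true;
        while (progress) {
            progress = false;
            for (Cell& c : cells)
                if (c.count == 1) {
                    out.push_back(c.keySum);
                    erase(c.keySum);
                    progress = true;
                }
        }
        return out;
    }
};
```

Oversizing the table relative to the expected set difference is exactly what point 7 is about: with too few cells, no pure cell exists and decode stalls.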