Re: [bitcoin-dev] Refreshed BIP324
On Wed, Oct 26, 2022 at 04:39:02PM +, Pieter Wuille via bitcoin-dev wrote:
> However, it obviously raises the question of how the mapping table between the
> 1-byte IDs and the commands they represent should be maintained:
>
> 1. The most straightforward solution is using the BIP process as-is: let BIP324
>    introduce a fixed initial table, and future BIPs which introduce new
>    messages can introduce new mapping entries for it.
[...]
> 3. Yet another possibility is not having a fixed table at all, and negotiate
>    the mapping dynamically. E.g. either side could send a message at
>    connection time with an explicit table of entries "when I send byte X, I
>    mean command Y".

FWIW, I think these two options seem fine -- maintaining a purely local and
hardcoded internal mapping of "message string C has id Y", where Y is capped by
the number of commands you actually implement (presumably fewer than 65536 in
total), is easy; and providing a per-peer mapping from "byte X" to "id Y" then
requires at most 512 bytes per peer, along with up to 3kB of initial setup to
tell your peer what mappings you'll use.

> Our idea is to start out with approach (1), with a mapping table effectively
> managed by the BIP process directly, but if and when collisions become a
> concern (maybe due to many parallel proposals, maybe because the number of
> messages just grows too big), switch to approach (3), possibly even
> differentially (the sent mappings are just additions/overwrites of the
> BIP-defined table mappings, rather than a full mapping).

I guess I think it would make sense to not start using a novel 1-byte message
unless you've done something to introduce that message first; whether that's
via approach (3) ("I'm going to use 0xE9 to mean pkgtxns") or via a multibyte
feature support message ("I sent sendaddrv3 as a 10-byte message, that implies
0xA3 means addrv3 from now on").

I do still think it'd be better to recommend against reserving a byte for
one-shot messages, and not do it for existing one-shot messages though.

Cheers,
aj
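As a rough illustration of the bookkeeping aj describes -- a local hardcoded
command table plus a small per-peer byte-to-command map -- here is a minimal
C++ sketch. The names (LOCAL_CMD_IDS, PeerCmdMap) and the example commands are
hypothetical illustrations, not part of BIP324:

    #include <array>
    #include <cstdint>
    #include <iostream>
    #include <map>
    #include <optional>
    #include <string>

    // Purely local, hardcoded mapping: command string -> internal id.
    // Capped by the number of commands this node actually implements.
    static const std::map<std::string, uint16_t> LOCAL_CMD_IDS = {
        {"tx", 0}, {"inv", 1}, {"ping", 2}, {"pong", 3}, {"addrv2", 4},
    };

    // Per-peer table: "when this peer sends byte X, it means local id Y".
    // 256 entries of 2 bytes each is the ~512 bytes per peer mentioned above.
    struct PeerCmdMap {
        std::array<std::optional<uint16_t>, 256> byte_to_id{};

        // Approach (3): the peer announces "when I send byte X, I mean command Y".
        void Learn(uint8_t byte, const std::string& command) {
            auto it = LOCAL_CMD_IDS.find(command);
            if (it != LOCAL_CMD_IDS.end()) byte_to_id[byte] = it->second;
            // Commands we don't implement stay unmapped and are ignored on receipt.
        }

        std::optional<uint16_t> Resolve(uint8_t byte) const { return byte_to_id[byte]; }
    };

    int main() {
        PeerCmdMap peer;
        peer.Learn(0xE9, "tx");  // e.g. a dynamically negotiated 1-byte id
        if (auto id = peer.Resolve(0xE9)) {
            std::cout << "byte 0xE9 -> local command id " << *id << "\n";
        }
    }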
[bitcoin-dev] Fwd: P2EP Lightning PayJoin
Funding channels on a lightning node can be a pain. First, I need to send funds
to my node on-chain. Then I need to make another transaction to open channels.
Instead, we can use the BIP 78 PayJoin P2EP protocol to fund and open channels
in a single transaction. We do not communicate over the shared blockchain, we
share the blockchain by communicating.

Here is how the BIP 78 protocol pairs with the BOLT 2 channel establishment
protocol, between the Sender, My Lightning Node, and a Lightning Peer:

 1. My Node -> Sender: BIP21 URI with ?pj=
 2. Sender -> My Node: Original PSBT
 3. My Node -> Peer:   open_channel          (BOLT 2 channel establishment)
 4. Peer -> My Node:   accept_channel
 5. My Node -> Peer:   funding_created
 6. Peer -> My Node:   funding_signed
 7. My Node -> Sender: PayJoin Proposal PSBT (BIP 78)
 8. Sender:            broadcasts the PayJoin + funding transaction to the
                       bitcoin network
 9. My Node <-> Peer:  channel_ready

We use P2EP to automate [PSBT Channel
establishment](https://gist.github.com/yuyaogawa/3d69bfa03b0702b8ff12c210bc795a6a)
communications. As an added benefit, the BIP 78 spec helps avoid surveillance
heuristics as well. On-chain, these transactions look like regular PayJoins
when using Taproot outputs.

I thank Martin Habovštiak for the initial work on this idea. Thank you to
Riccardo Casatta for developing the rust payjoin crate further. Thank you to
Evan Lin for early hacking late nights on this idea. Thanks to my Legends of
Lightning Tournament teammates Armin Sabouri and Nick Farrow.

We have released "nolooking," an experimental alpha that implements this work,
on the Umbrel app store. A brand new node can become totally connected in a
single transaction that opens channels to outbound peers, and could immediately
swap for inbound capacity, just by scanning a single QR code.
[Source](https://github.com/chaincase-app/nolooking)

Want to help? The rust [payjoin crate](https://github.com/Kixunil/payjoin)
needs love in the form of code review and unit testing. Don't hesitate to reach
out.

Dan
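To make the receiver-side step concrete, here is a heavily simplified C++
sketch of the idea: the node takes the sender's Original PSBT and substitutes
its channel funding script into the output being paid, producing the PayJoin
proposal. All types, names, and the matching rule are hypothetical
illustrations, not nolooking's or the payjoin crate's actual API:

    #include <cstdint>
    #include <iostream>
    #include <string>
    #include <vector>

    struct TxOut { uint64_t value_sat; std::string script_pubkey; };
    struct Psbt  { std::vector<TxOut> outputs; };

    // funding_script would come out of the BOLT 2 exchange with the lightning
    // peer (the 2-of-2 funding output); channel_sat is the channel capacity.
    Psbt BuildPayjoinProposal(Psbt original, const std::string& funding_script,
                              uint64_t channel_sat) {
        // Swap the receiver's plain payment output for the channel funding
        // output, so one transaction both pays the receiver and opens a channel.
        for (auto& out : original.outputs) {
            if (out.value_sat == channel_sat) {  // hypothetical matching rule
                out.script_pubkey = funding_script;
                break;
            }
        }
        return original;  // the sender re-signs this proposal per BIP 78
    }

    int main() {
        Psbt original{{{100000, "receiver_script"}, {25000, "change_script"}}};
        Psbt proposal = BuildPayjoinProposal(original, "funding_script", 100000);
        std::cout << "output 0 now pays: " << proposal.outputs[0].script_pubkey << "\n";
    }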
Re: [bitcoin-dev] Using Full-RBF to fix BIP-125 Rule #3 Pinning with nLockTime
Hi Peter,

> We can ensure with high probability that the transaction can be cancelled/mined
> at some point after N blocks by pre-signing a transaction, with nLockTime set
> sufficiently far into the future, spending one or more inputs of the
> transaction with a sufficiently high fee that it would replace transaction(s)
> attempting to exploit Rule #3 pinning (note how the package limits in Bitcoin
> Core help here).

From my understanding, there are many open questions with such a pre-signed
high-fee solution aiming to address Rule #3 pinning.

First, determining the high fee needed to guarantee replacement with high odds.
I think it should exceed the current top-of-mempool sat/vb *
MAX_STANDARD_TX_WEIGHT, otherwise an adversary can pin the multi-party funded
transaction on the grounds of Core's replacement rule ("The replacement
transaction's feerate is greater than the feerates of all directly conflicting
transactions"). Though note the difficulty: the sat/vb is unknown at the time
signatures are exchanged among the multi-party funded transaction participants.
Solving this issue probably requires them to overshoot, and adopt a historical
worst-case mempool feerate.

This "historically-worst" sat/vb introduces two new issues. First, I think it
puts an economic lower bound on the funds that can be staked in the
collaborative transaction. Second, I believe it constitutes a griefing vector,
where a participant could deliberately pin to inflict asymmetric damage,
without entering into any fee competition. This griefing vector could be pushed
as far as being triggered by a miner-as-participant in so-called miner
harvesting attacks.

Further, I think this solution relying on nLocktime doesn't solve the timevalue
DoS inflicted on the participants' UTXOs until the pre-signed high-fee
transaction is final. If participants prefer to save the timevalue of their
contributed UTXOs over operation success, a better approach could be for them
to unilaterally spend after a protocol/implementation timepoint (e.g. LN's
funding timeout recovery mechanism).

A more workable solution, I believe, could be simply to rely on package relay,
an ephemeral anchor output, and a special replacement regime (e.g. nVersion=3)
to allow the multi-party funded transaction coordinator to unilaterally
fee-bump, in a step-by-step approach, i.e. without making assumptions about the
knowledge of network mempools and without directly burning the worst-case
amount in fees.

Best,
Antoine

On Mon, Nov 7, 2022 at 16:18, Peter Todd via bitcoin-dev
<bitcoin-dev@lists.linuxfoundation.org> wrote:
> On Mon, Nov 07, 2022 at 03:17:29PM -0500, Peter Todd via bitcoin-dev wrote:
> > tl;dr: We can remove the problem of Rule #5 pinning by ensuring that all
> > transactions in the mempool are always replaceable.
>
> With Rule #5 solved, let's look at the other pinning attack on multi-party
> transactions: BIP-125 Rule #3
>
> tl;dr: In conjunction with full-RBF, nLockTime'd, pre-signed transactions can
> ensure that one party is not forced to pay for all the cost of a rule #3
> replacement.
>
> # What is the problem?
>
> When a transaction contains inputs from multiple parties, each party can lock
> up funds from the other party by spending their input with a transaction that
> is difficult/expensive to replace. Obviously, the clearest example of
> "difficult to replace" is a non-BIP-125 (Opt-in-RBF) transaction. But here,
> we'll assume that full-rbf is implemented and all transactions are
> replaceable.
>
> BIP-125 Rule #3 states that:
>
>     The replacement transaction pays an absolute fee of at least the sum paid
>     by the original transactions.
>
> The attack is that the malicious party, who we'll call Mallory, broadcasts a
> transaction spending their input(s) with a low fee rate transaction that's
> potentially quite large, during a time of high mempool demand. Due to the low
> fee rate this transaction will take a significant amount of time to mine. The
> other parties to the transaction - who we'll collectively call Alice - are
> now unable to spend their inputs unless they broadcast a transaction "paying
> for" Mallory's.
>
> This attack works because Mallory doesn't expect the conflicting tx to
> actually get mined: he assumes it'll either expire, or Alice will get
> frustrated and have to double spend it. By simply tying up money, Mallory has
> caused Alice to actually lose money.
>
> # Fixing the problem with nLockTime
>
> Conversely, in the case of an honest multi-party transaction, whose parties
> we'll call Alice and Bob, the parties genuinely intend for one of two
> outcomes:
>
> 1) The multi-party transaction to get mined within N blocks.
> 2) The transaction to be cancelled (most likely by spending one of the
>    inputs).
>
> We can ensure with high probability that the transaction can be
> cancelled/mined at some point after N blocks by pre-signing a transaction,
> with nLockTime set sufficiently far into the future
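For a sense of the magnitude Antoine is pointing at, here is a
back-of-the-envelope C++ sketch of the overshoot; the worst-case feerate used
here is an illustrative assumption, not a recommendation:

    #include <cstdint>
    #include <iostream>

    int main() {
        const uint64_t MAX_STANDARD_TX_WEIGHT = 400000;   // Bitcoin Core policy limit
        const uint64_t max_pin_vbytes = MAX_STANDARD_TX_WEIGHT / 4;
        const uint64_t worst_case_satvb = 300;            // assumed historical worst case

        // Rule #3 makes the replacement pay at least the pin's absolute fee, so
        // the pre-signed transaction must budget for the largest standard pin
        // at the worst-case feerate.
        const uint64_t presigned_fee = worst_case_satvb * max_pin_vbytes;
        std::cout << "pre-signed fee budget: " << presigned_fee << " sat (= "
                  << presigned_fee / 100000000.0 << " BTC)\n";  // 0.3 BTC here
    }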
Re: [bitcoin-dev] Removing BIP-125 Rule #5 Pinning with the Always-Replaceable Invariant
On Mon, Nov 07, 2022 at 04:21:11PM -0500, Suhas Daftuar via bitcoin-dev wrote:
> Hello bitcoin-dev,
>
> The description in the OP is completely broken for the simple reason that
> Bitcoin transactions can have multiple inputs, and so a single transaction
> can conflict with multiple in-mempool transactions. The proposal would limit
> the number of descendants of each in-mempool transaction to
> MAX_REPLACEMENT_CANDIDATES (note that this is duplicative of the existing
> Bitcoin Core descendant limits), but a transaction that has, say, 500 inputs
> might arrive and conflict with 500 unrelated in-mempool transactions. This
> could result in 12,500 evictions -- far more than the 100 that was intended.

That's easy to fix: just sum up the number of nReplacementCandidates for each
input in the multiple input case.

Again, it'll overcount in the diamond case. But so does the existing code. The
goal is to defeat pinning by ensuring that you can always replace a transaction
by double-spending an output; not that any possible way of double-spending
multiple outputs at once would work.

--
https://petertodd.org 'peter'[:-1]@petertodd.org
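A minimal C++ sketch of the multi-input fix Peter describes, using simplified
stand-in types rather than Bitcoin Core's actual mempool structures:

    #include <iostream>
    #include <map>
    #include <string>
    #include <vector>

    static const int MAX_REPLACEMENT_CANDIDATES = 100;

    // counters: per in-mempool txid, its nReplacementCandidates value.
    // conflicted: the in-mempool transactions the replacement double-spends
    // (one entry per conflicting input).
    bool WouldExceedRule5(const std::vector<std::string>& conflicted,
                          const std::map<std::string, int>& counters) {
        int total = 0;
        for (const auto& txid : conflicted) {
            auto it = counters.find(txid);
            if (it != counters.end()) total += it->second;
            // Overcounts shared descendants (the diamond case), like the existing code.
        }
        return total > MAX_REPLACEMENT_CANDIDATES;
    }

    int main() {
        std::map<std::string, int> counters{{"tx_a", 40}, {"tx_b", 70}};
        std::cout << std::boolalpha
                  << WouldExceedRule5({"tx_a", "tx_b"}, counters) << "\n";  // true: 110 > 100
    }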
Re: [bitcoin-dev] Removing BIP-125 Rule #5 Pinning with the Always-Replaceable Invariant
Hello bitcoin-dev,

The description in the OP is completely broken for the simple reason that
Bitcoin transactions can have multiple inputs, and so a single transaction can
conflict with multiple in-mempool transactions. The proposal would limit the
number of descendants of each in-mempool transaction to
MAX_REPLACEMENT_CANDIDATES (note that this is duplicative of the existing
Bitcoin Core descendant limits), but a transaction that has, say, 500 inputs
might arrive and conflict with 500 unrelated in-mempool transactions. This
could result in 12,500 evictions -- far more than the 100 that was intended.

Also, note that those 500 transactions might instead have 24 ancestors each
(again, using the default chain limits in Bitcoin Core) -- this would result in
12,000 transactions whose state would be updated as a result of evicting those
500 transactions.

Hopefully this gives some perspective on why we have a limit.

On Mon, Nov 7, 2022 at 3:17 PM Peter Todd via bitcoin-dev
<bitcoin-dev@lists.linuxfoundation.org> wrote:
> tl;dr: We can remove the problem of Rule #5 pinning by ensuring that all
> transactions in the mempool are always replaceable.
>
> Currently Bitcoin Core implements rule #5 of BIP-125:
>
>     The number of original transactions to be replaced and their descendant
>     transactions which will be evicted from the mempool must not exceed a
>     total of 100 transactions.
>
> As of Bitcoin Core v24.0rc3, this rule is implemented via the
> MAX_REPLACEMENT_CANDIDATES limit in GetEntriesForConflicts:
>
>     // Rule #5: don't consider replacing more than MAX_REPLACEMENT_CANDIDATES
>     // entries from the mempool. This potentially overestimates the number of actual
>     // descendants (i.e. if multiple conflicts share a descendant, it will be counted multiple
>     // times), but we just want to be conservative to avoid doing too much work.
>     if (nConflictingCount > MAX_REPLACEMENT_CANDIDATES) {
>         return strprintf("rejecting replacement %s; too many potential replacements (%d > %d)\n",
>                 txid.ToString(),
>                 nConflictingCount,
>                 MAX_REPLACEMENT_CANDIDATES);
>     }
>
> There is no justification for this rule beyond avoiding "too much work"; the
> exact number was picked at random when the BIP was written to provide an
> initial conservative limit, and is not justified by benchmarks or memory
> usage calculations. It may in fact be the case that this rule can be removed
> entirely as the overall mempool size limits naturally limit the maximum
> number of possible replacements.
>
> But for the sake of argument, let's suppose some kind of max replacement
> limit is required. Can we avoid rule #5 pinning? The answer is yes, we can,
> with the following invariant:
>
>     No transaction may be accepted into the mempool that would cause another
>     transaction to be unable to be replaced due to Rule #5.
>
> We'll call this the Replaceability Invariant. Implementing this rule is
> simple: for each transaction maintain an integer, nReplacementCandidates.
> When a non-conflicting transaction is considered for acceptance to the
> mempool, verify that nReplacementCandidates + 1 < MAX_REPLACEMENT_CANDIDATES
> for each unconfirmed parent. When a transaction is accepted, increment
> nReplacementCandidates by 1 for each unconfirmed parent.
>
> When a *conflicting* transaction is considered for acceptance, we do _not_
> need to verify anything: we've already guaranteed that every transaction in
> the mempool is replaceable! The only thing we may need to do is to decrement
> nReplacementCandidates by however many children we have replaced, for all
> unconfirmed parents.
>
> When a block is mined, note how the nReplacementCandidates of all unconfirmed
> transactions remains unchanged because a confirmed transaction can't spend an
> unconfirmed txout.
>
> The only special case to handle is a reorg that changes a transaction from
> confirmed to unconfirmed. Setting nReplacementCandidates to
> MAX_REPLACEMENT_CANDIDATES would be fine in that very rare case.
>
> Note that like the existing MAX_REPLACEMENT_CANDIDATES check, the
> Replaceability Invariant overestimates the number of transactions to be
> replaced in the event that multiple children are spent by the same
> transaction. That's ok: diamond tx graphs are even more unusual than
> unconfirmed children, and there's no reason to bend over backwards to
> accommodate them.
>
> Prior art: this proposed rule is similar in spirit to the package limits
> already implemented in Bitcoin Core.
>
> --
> https://petertodd.org 'peter'[:-1]@petertodd.org
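Suhas's numbers, worked through with Bitcoin Core's default chain limit of 25,
as a small C++ sketch:

    #include <iostream>

    int main() {
        const int conflicts = 500;         // a 500-input tx, each input conflicting
                                           // with an unrelated in-mempool tx
        const int descendant_limit = 25;   // default -limitdescendantcount
        const int ancestors_each = 24;     // each conflict may have 24 unconfirmed ancestors

        std::cout << "worst-case evictions:   " << conflicts * descendant_limit << "\n"; // 12500
        std::cout << "ancestor state updates: " << conflicts * ancestors_each << "\n";   // 12000
    }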
[bitcoin-dev] Using Full-RBF to fix BIP-125 Rule #3 Pinning with nLockTime
On Mon, Nov 07, 2022 at 03:17:29PM -0500, Peter Todd via bitcoin-dev wrote:
> tl;dr: We can remove the problem of Rule #5 pinning by ensuring that all
> transactions in the mempool are always replaceable.

With Rule #5 solved, let's look at the other pinning attack on multi-party
transactions: BIP-125 Rule #3.

tl;dr: In conjunction with full-RBF, nLockTime'd, pre-signed transactions can
ensure that one party is not forced to pay for all the cost of a rule #3
replacement.


# What is the problem?

When a transaction contains inputs from multiple parties, each party can lock
up funds from the other party by spending their input with a transaction that
is difficult/expensive to replace. Obviously, the clearest example of
"difficult to replace" is a non-BIP-125 (Opt-in-RBF) transaction. But here,
we'll assume that full-rbf is implemented and all transactions are replaceable.

BIP-125 Rule #3 states that:

    The replacement transaction pays an absolute fee of at least the sum paid
    by the original transactions.

The attack is that the malicious party, who we'll call Mallory, broadcasts a
transaction spending their input(s) with a low fee rate transaction that's
potentially quite large, during a time of high mempool demand. Due to the low
fee rate this transaction will take a significant amount of time to mine. The
other parties to the transaction - who we'll collectively call Alice - are now
unable to spend their inputs unless they broadcast a transaction "paying for"
Mallory's.

This attack works because Mallory doesn't expect the conflicting tx to actually
get mined: he assumes it'll either expire, or Alice will get frustrated and
have to double spend it. By simply tying up money, Mallory has caused Alice to
actually lose money.


# Fixing the problem with nLockTime

Conversely, in the case of an honest multi-party transaction, whose parties
we'll call Alice and Bob, the parties genuinely intend for one of two outcomes:

1) The multi-party transaction to get mined within N blocks.
2) The transaction to be cancelled (most likely by spending one of the inputs).

We can ensure with high probability that the transaction can be cancelled/mined
at some point after N blocks by pre-signing a transaction, with nLockTime set
sufficiently far into the future, spending one or more inputs of the
transaction with a sufficiently high fee that it would replace transaction(s)
attempting to exploit Rule #3 pinning (note how the package limits in Bitcoin
Core help here).

There's a few different ways to implement this, and exactly which one makes
sense will depend on the specifics of the multi-party protocol. But the general
approach is to defeat the attack by ensuring that Mallory will have to pay the
cost of getting the multi-party transaction unstuck, at some point in the
future.

For example, in a two party transaction where there's a clearly more reputable
party (Alice), and an untrusted party (Mallory), Alice could simply require
Mallory to provide a nLockTime'd transaction spending only his input to fees,
multiple days into the future. In the unlikely event that Mallory holds up the
protocol, he will be severely punished. Meanwhile, Alice can always cancel at
no cost.

In a many-party transaction where both parties are equally (un)trustworthy, the
protocol could simply have both parties sign a series of transactions,
nLockTimed at decreasingly far into the future, paying decreasing amounts of
fees. If either party holds up the transaction intentionally, they'll both pay
a high cost. But again, at some point Mallory will have paid the full price for
his attack. This approach also has the beneficial side effect of implementing
fee discovery with RBF.

This approach is easier as the number of parties increases, e.g. the
Wasabi/Joinmarket transactions with hundreds of inputs and outputs: they
collectively already have to pay a significant fee to get the transaction
mined, making the extra potential cost needed to defeat pinning minimal.


# Coordinator Spent Bonds with Package Relay/Replacement

For schemes with a central semi-trusted coordinator, such as Wasabi coinjoins,
with package relay/replacement we can use a two party punishment transaction
consisting of:

    tx1 - spends Mallory's input to a txout spendable by:

        IF
            CheckSig
        Else
            CheckSequenceVerify CheckSig
        EndIf

    tx2 - spends tx1 output to as much fees as needed

Whether or not Mallory cheated with a double-spend is provable to third
parties; the second transaction ensures that Mallory can't simply release tx1
on their own to frame the coordinator. The use of CheckSequenceVerify ensures
that if Mallory did try to frame the coordinator, they don't have to do
anything to return the funds to Mallory.

--
https://petertodd.org 'peter'[:-1]@petertodd.org
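A rough C++ sketch of the pre-signed series Peter describes, where later
nLockTimes unlock progressively higher fees so a stuck transaction eventually
outbids any Rule #3 pin; the concrete schedule (doubling every 144 blocks) is
just an illustrative assumption:

    #include <cstdint>
    #include <iostream>
    #include <vector>

    struct PresignedTx { uint32_t nLockTime; uint64_t fee_sat; };

    std::vector<PresignedTx> BuildFeeLadder(uint32_t start_height, uint64_t base_fee_sat,
                                            int steps, uint32_t blocks_per_step) {
        std::vector<PresignedTx> ladder;
        uint64_t fee = base_fee_sat;
        for (int i = 0; i < steps; ++i) {
            // Each later step becomes broadcastable one interval later and pays
            // more, implementing fee discovery with RBF as described above.
            ladder.push_back({start_height + static_cast<uint32_t>(i) * blocks_per_step, fee});
            fee *= 2;  // arbitrary doubling schedule
        }
        return ladder;
    }

    int main() {
        for (const auto& tx : BuildFeeLadder(800000, 10000, 5, 144)) {
            std::cout << "valid at height " << tx.nLockTime
                      << ", fee " << tx.fee_sat << " sat\n";
        }
    }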
[bitcoin-dev] Removing BIP-125 Rule #5 Pinning with the Always-Replaceable Invariant
tl;dr: We can remove the problem of Rule #5 pinning by ensuring that all
transactions in the mempool are always replaceable.


Currently Bitcoin Core implements rule #5 of BIP-125:

    The number of original transactions to be replaced and their descendant
    transactions which will be evicted from the mempool must not exceed a total
    of 100 transactions.

As of Bitcoin Core v24.0rc3, this rule is implemented via the
MAX_REPLACEMENT_CANDIDATES limit in GetEntriesForConflicts:

    // Rule #5: don't consider replacing more than MAX_REPLACEMENT_CANDIDATES
    // entries from the mempool. This potentially overestimates the number of actual
    // descendants (i.e. if multiple conflicts share a descendant, it will be counted multiple
    // times), but we just want to be conservative to avoid doing too much work.
    if (nConflictingCount > MAX_REPLACEMENT_CANDIDATES) {
        return strprintf("rejecting replacement %s; too many potential replacements (%d > %d)\n",
                txid.ToString(),
                nConflictingCount,
                MAX_REPLACEMENT_CANDIDATES);
    }

There is no justification for this rule beyond avoiding "too much work"; the
exact number was picked at random when the BIP was written to provide an
initial conservative limit, and is not justified by benchmarks or memory usage
calculations. It may in fact be the case that this rule can be removed entirely
as the overall mempool size limits naturally limit the maximum number of
possible replacements.


But for the sake of argument, let's suppose some kind of max replacement limit
is required. Can we avoid rule #5 pinning? The answer is yes, we can, with the
following invariant:

    No transaction may be accepted into the mempool that would cause another
    transaction to be unable to be replaced due to Rule #5.

We'll call this the Replaceability Invariant. Implementing this rule is simple:
for each transaction maintain an integer, nReplacementCandidates. When a
non-conflicting transaction is considered for acceptance to the mempool, verify
that nReplacementCandidates + 1 < MAX_REPLACEMENT_CANDIDATES for each
unconfirmed parent. When a transaction is accepted, increment
nReplacementCandidates by 1 for each unconfirmed parent.

When a *conflicting* transaction is considered for acceptance, we do _not_ need
to verify anything: we've already guaranteed that every transaction in the
mempool is replaceable! The only thing we may need to do is to decrement
nReplacementCandidates by however many children we have replaced, for all
unconfirmed parents.

When a block is mined, note how the nReplacementCandidates of all unconfirmed
transactions remains unchanged because a confirmed transaction can't spend an
unconfirmed txout.

The only special case to handle is a reorg that changes a transaction from
confirmed to unconfirmed. Setting nReplacementCandidates to
MAX_REPLACEMENT_CANDIDATES would be fine in that very rare case.


Note that like the existing MAX_REPLACEMENT_CANDIDATES check, the
Replaceability Invariant overestimates the number of transactions to be
replaced in the event that multiple children are spent by the same transaction.
That's ok: diamond tx graphs are even more unusual than unconfirmed children,
and there's no reason to bend over backwards to accommodate them.

Prior art: this proposed rule is similar in spirit to the package limits
already implemented in Bitcoin Core.

--
https://petertodd.org 'peter'[:-1]@petertodd.org
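A minimal C++ sketch of the nReplacementCandidates bookkeeping described above,
using simplified stand-in types rather than Bitcoin Core's actual mempool code:

    #include <iostream>
    #include <map>
    #include <string>
    #include <vector>

    static const int MAX_REPLACEMENT_CANDIDATES = 100;

    struct Mempool {
        // Per in-mempool txid: how many replacement candidates already hang off it.
        std::map<std::string, int> nReplacementCandidates;

        // Non-conflicting acceptance: every unconfirmed parent must stay replaceable.
        bool AcceptChild(const std::vector<std::string>& unconfirmed_parents) {
            for (const auto& p : unconfirmed_parents) {
                if (nReplacementCandidates[p] + 1 >= MAX_REPLACEMENT_CANDIDATES) return false;
            }
            for (const auto& p : unconfirmed_parents) ++nReplacementCandidates[p];
            return true;
        }

        // Conflicting acceptance: nothing to verify (the invariant already
        // guarantees replaceability); just release the slots of evicted children.
        void OnReplaced(const std::string& parent, int children_evicted) {
            nReplacementCandidates[parent] -= children_evicted;
        }
    };

    int main() {
        Mempool m;
        std::cout << std::boolalpha << m.AcceptChild({"parent_txid"}) << "\n";  // true
        std::cout << m.nReplacementCandidates["parent_txid"] << "\n";           // 1
        m.OnReplaced("parent_txid", 1);
        std::cout << m.nReplacementCandidates["parent_txid"] << "\n";           // 0
    }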
Re: [bitcoin-dev] On mempool policy consistency
> With full-rbf, who saw what transaction first doesn't matter: the higher
> fee paying transaction will always(*) replace the lower fee one. With
> opt-in RBF, spamming the network can beat out the alternative.

Incentivised predictability is critical when designing low-level protocols,
like bitcoin. The knock-on effects of deeper, network-wide predictability are
likely beneficial in ways that are hard to predict. For example, stuff like the
"sabu" protocol might even work if full-rbf is the norm.
Re: [bitcoin-dev] On mempool policy consistency
On November 3, 2022 5:06:52 PM AST, yancy via bitcoin-dev wrote:
> AJ/Antoine et al
>
>> What should folks wanting to do coinjoins/dualfunding/dlcs/etc do to
>> solve that problem if they have only opt-in RBF available?
>
> Assuming Alice is a well funded adversary, with enough resources to spam the
> network so that enough nodes see her malicious transaction first, how does
> full-rbf solve this vs. opt-in rbf?

First of all, to make things clear, remember that the attacks we're talking
about are aimed at _preventing_ a transaction from getting mined. Alice wants
to cheaply broadcast something with low fees that won't get mined soon (if
ever), that prevents a protocol from making forward progress.

With full-rbf, who saw what transaction first doesn't matter: the higher fee
paying transaction will always(*) replace the lower fee one. With opt-in RBF,
spamming the network can beat out the alternative.

*) So what's the catch? Well, due to limitations in today's mempool
implementation, sometimes we can't fully evaluate which tx pays the higher fee.
For example, if Alice spams the network with very _large_ numbers of
transactions spending that input, the current mempool code doesn't even try to
figure out if a replacement is better. But those limitations are likely to be
fixable. And even right now, without fixing them, Alice still has to use a lot
more money to pull off these attacks with full-rbf. So full-rbf definitely
improves the situation even if it doesn't solve the problem completely.
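For reference, the economic test that decides a replacement once opt-in
signalling is ignored looks roughly like this C++ sketch (simplified from
BIP-125 rules #3 and #4; not Bitcoin Core's actual code):

    #include <cstdint>
    #include <iostream>

    bool ReplacementPaysEnough(uint64_t old_total_fee, uint64_t new_fee,
                               uint64_t new_vsize, uint64_t incremental_relay_satvb = 1) {
        // Rule #3: at least the absolute fee of everything being evicted,
        // Rule #4: plus enough new fee to pay for relaying the replacement itself.
        return new_fee >= old_total_fee + incremental_relay_satvb * new_vsize;
    }

    int main() {
        std::cout << std::boolalpha
                  << ReplacementPaysEnough(/*old=*/5000, /*new=*/5200, /*vsize=*/150) << "\n"  // true
                  << ReplacementPaysEnough(5000, 5100, 150) << "\n";                           // false
    }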
Re: [bitcoin-dev] [Lightning-dev] Taro: A Taproot Asset Representation Overlay
Hi Laolu,

Yeah, that is definitely the main downside, as Ruben also mentioned: tokens are
"burned" if they get sent to an already spent UTXO, and there is no way to
block those transfers. And I do agree with your concern about losing the
blockchain as the main synchronization point; that seems indeed to be a
prerequisite for making the scheme safe in terms of re-orgs and asynchronicity.

I do think the scheme itself is sound though (maybe not off-chain, see below):
it prevents double spending, and as long as the clients adhere to the "rule" of
not sending to a spent UTXO you'll be fine (if not, your tokens will be burned,
the same way as if you don't satisfy the Taro script when spending).

Thinking more about the examples you gave, I think you are right it won't
easily be compatible with LN channels though: if you want to refill an existing
channel with tokens, you need the channel counterparties to start signing new
commitments that include spending the newly sent tokens. A problem arises,
however, if the channel is force-closed with a pre-existing commitment from
before the token transfer took place. Since this commitment will be spending
the funding UTXO, but not the new tokens, the tokens will be burned. And that
seems to be harder to deal with (Eltoo-style channels could be an avenue to
explore, if one could override the broadcasted commitment).

Tl;dr: I think you're right, the scheme is not compatible with LN.

- Johan

On Sat, Nov 5, 2022 at 1:36 AM Olaoluwa Osuntokun wrote:
>
> Hi Johan,
>
> I haven't really been able to find a precise technical explanation of the
> "utxo teleport" scheme, but after thinking about your example use cases a
> bit, I don't think the scheme is actually sound. Consider that the scheme
> attempts to target transmitting "ownership" to a UTXO. However, by the time
> that transaction hits the chain, the UTXO may no longer exist. At that point,
> what happens to the asset? Is it burned? Can you retry it again? Does it go
> back to the sender?
>
> As a concrete example, imagine I have a channel open, and give you an address
> to "teleport" some additional assets to it. You take that addr, then make a
> transaction to commit to the transfer. However, the block before you commit
> to the transfer, my channel closes for w/e reason. As a result, when the
> transaction committing to the UTXO (blinded or not), hits the chain, the UTXO
> no longer exists. Alternatively, imagine the things happen in the expected
> order, but then a re-org occurs, and my channel close is mined in a block
> before the transfer. Ultimately, as a normal Bitcoin transaction isn't used
> as a serialization point, the scheme seems to lack a necessary total ordering
> to ensure safety.
>
> If we look at Taro's state transition model in contrast, everything is fully
> bound to a single synchronization point: a normal Bitcoin transaction with
> inputs consumed and outputs created. All transfers, just like Bitcoin
> transactions, end up consuming assets from the set of inputs, and re-creating
> them with a different distribution with the set of outputs. As a result, Taro
> transfers inherit the same re-org safety traits as regular Bitcoin
> transactions. It also isn't possible to send to something that won't
> ultimately exist, as sends create new outputs just like Bitcoin transactions.
>
> Taro's state transition model also means anything you can do today with
> Bitcoin/LN also apply. As an example, it would be possible for you to
> withdraw from your exchange into a Loop In address (on chain to off chain
> swap), and have everything work as expected, with you topping off your
> channel. Stuff like splicing, and other interactive transaction construction
> schemes (atomic swaps, MIMO swaps, on chain auctions, etc) also just work.
>
> Ignoring the ordering issue I mentioned above, I don't think this is a great
> model for anchoring assets in channels either. With Taro, when you make the
> channel, you know how many assets are committed since they're all committed
> to in the funding output when the channel is created. However, let's say we
> do teleporting instead: at which point would we recognize the new asset
> "deposits"? What if we close before a pending deposit confirms, how can one
> regain those funds? Once again you lose the serialization of events/actions
> the blockchain provides. I think you'd also run into similar issues when you
> start to think about how these would even be advertised on a hypothetical
> gossip network.
>
> I think one other drawback of the teleport model iiuc is that: it either
> requires an OP_RETURN, or additional out of band synchronization to complete
> the transfer. Since it needs to commit to w/e hash description of the
> teleport, it either needs to use an OP_RETURN (so the receiver can see the on
> chain action), or the sender needs to contact the receiver to initiate the
> resolution of the transfer (details committed to in a change addr or w/e).
>
> With Taro,
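A tiny C++ sketch of the client-side rule Johan mentions -- refuse to commit a
transfer to a destination UTXO that is already spent -- with the UTXO-set
lookup as a hypothetical stand-in for a real node query:

    #include <cstdint>
    #include <iostream>
    #include <set>
    #include <stdexcept>
    #include <string>
    #include <tuple>

    struct OutPoint {
        std::string txid;
        uint32_t vout;
        bool operator<(const OutPoint& o) const {
            return std::tie(txid, vout) < std::tie(o.txid, o.vout);
        }
    };

    // In practice this would query a full node for the current UTXO set; a
    // re-org after the check can still invalidate it, which is the ordering
    // problem Laolu raises above.
    bool IsUnspent(const std::set<OutPoint>& utxo_set, const OutPoint& dest) {
        return utxo_set.count(dest) > 0;
    }

    int main() {
        std::set<OutPoint> utxos{{"aaaa", 0}};
        OutPoint dest{"aaaa", 0};
        if (!IsUnspent(utxos, dest)) {
            throw std::runtime_error("refusing transfer: destination UTXO spent");
        }
        std::cout << "destination unspent; safe to commit (barring a re-org)\n";
    }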
Re: [bitcoin-dev] Preventing/detecting pinning of jointly funded txs
On Sun, Nov 06, 2022 at 06:22:08PM -0500, Antoine Riard via bitcoin-dev wrote:
> Adding a few more thoughts here on what coinjoins/splicing/dual-funded
> folks can do to solve this DoS issue in an opt-in RBF world only.
>
> I'm converging that deploying a distributed monitoring of the network
> mempools in the same fashion as zeroconf people is one solution, as you can
> detect a conflicting spend of your multi-party transaction. Let's say you
> have a web of well-connected full-nodes, each reporting all their incoming
> mempool transactions to some reconciliation layer.
>
> This "mempools watchdog" infrastructure isn't exempt from mempools
> partitioning attacks by an adversary,

A mempool partitioning attack requires the adversary to identify your nodes.
If they're just monitoring and not being used as the initial broadcaster of
your txs, that should be difficult.

(And I think it would make sense to do things that make it more difficult to
successfully partition a node for tx relay, even if you can identify it)

> where the goal is to control your
> local node mempool view. A partitioning trick is somehow as simple as
> policy changes across versions (e.g. allowing Taproot Segwit v0.1 spends) or
> two same-feerate transactions. The partitioning attack can target at least
> two meaningful subsets.

An even easier way to partition the network is to create two conflicting txs
at the same fee/fee rate and to announce them to many peers simultaneously.
That way neither will replace the other, and you can build from there.

In order to attack you, the partition would need to be broad enough to capture
all your monitoring nodes on one side (to avoid detection), and all the mining
nodes on the other side (to prevent your target tx from being confirmed). If
that seems likely, maybe it indicates that it's too easy to identify nodes
that feed txs to mining pools...

> Either the miner mempools only, by conflicting all
> the reachable nodes in as many subsets with a "tainted" transaction (e.g.
> set a special nSequence value for each), and looking on corresponding
> issued block. Or targeting the "watchdog" mempools only, where the
> adversary observation mechanism is the multi-party blame assignment round
> itself.

I think there's a few cases like that: you can find out what txs mining pools
have seen by looking at their blocks, what explorers have seen by looking at
their website... Being able to use that information to identify your node(s)
-- rather than just your standardness policy, which you hopefully share with
many other nodes -- seems like something we should be working to fix...

> There is an open question on how many "divide-and-conquer" rounds
> from an adversary viewpoint you need to efficiently identify all the
> complete set of "mempools watchdog". If the transaction-relay topology is
> highly dynamic thanks to outbound transaction-relay peers rotation, the
> hardness bar is increased.

I'm not sure outbound rotation is sufficient? In today's world, if you're a
listening node, an attacker can just connect directly, announce the
conflicting tx to you, and other txs to everyone else. For a non-listening
node, outbound rotation might be more harmful than helpful, as it increases
the chances a node will peer with a given attacker at some point.

> Though ultimately, the rough mental model I'm thinking on, this is a
> "cat-and-mouse" game between the victims and the attacker, where the latter
> try to find the blind spots of the former. I would say there is a strong
> advantage to the attacker, in mapping the mempools can be batched against
> multiple sets of victims. While the victims have no entry barriers to
> deploy "mempools watchdog" there is a scarce resource in contest, namely
> the inbound connection slots (at least the miners ones).

Anytime you deploy a new listening node, you use up 10 inbound connections,
but provide capacity for 115 new ones. Worst case (if it's too hard to prevent
identifying a listening node if you use it for monitoring), you set up all
your monitoring nodes as non-listening nodes, and also set up enough listening
nodes in different IP ranges to compensate for the number of outbound
connections your monitoring nodes are making, and just ignore the mempools of
those listening nodes.

> Victims could batch their defense costs, in outsourcing the monitoring to
> dedicated entities (akin to LN watchtower). However, there is a belief in
> lack of a compensation mechanism, you will have only a low number of public
> ones (see number of BIP157 signaling nodes, or even Electrum ones).

Seems like a pretty simple service to pay for, if there's real demand: costs
are pretty trivial, and if it's a LN service, you can pay for it over
lightning... Fairly easy to test if you're getting what you're paying for too,
by simultaneously announcing two conflicting txs paying yourself, and checking
you get an appropriate alert.

> Assuming we can solve them, there is still the issue of assi
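A minimal C++ sketch of the conflict detection at the heart of the "mempools
watchdog" idea discussed above: monitoring nodes report mempool transactions,
and an alert fires when one spends an input of the protected multi-party
transaction. Types and names here are illustrative, not an existing
implementation:

    #include <cstdint>
    #include <iostream>
    #include <set>
    #include <string>
    #include <tuple>

    struct OutPoint {
        std::string txid;
        uint32_t vout;
        bool operator<(const OutPoint& o) const {
            return std::tie(txid, vout) < std::tie(o.txid, o.vout);
        }
    };

    struct Watchdog {
        std::set<OutPoint> watched_inputs;  // inputs of our coinjoin/dual-funded tx
        std::string expected_txid;          // the transaction we actually broadcast

        // Called by the reconciliation layer for every transaction a monitoring
        // node sees entering its mempool.
        void OnMempoolTx(const std::string& txid, const std::set<OutPoint>& spends) const {
            if (txid == expected_txid) return;  // our own transaction, fine
            for (const auto& in : spends) {
                if (watched_inputs.count(in)) {
                    std::cout << "ALERT: conflicting spend of " << in.txid << ":"
                              << in.vout << " seen in tx " << txid << "\n";
                }
            }
        }
    };

    int main() {
        Watchdog w{{{"deadbeef", 1}}, "our_joint_txid"};
        w.OnMempoolTx("pin_attempt_txid", {{"deadbeef", 1}});  // fires the alert
    }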