Re: [bitcoin-dev] Examining ScriptPubkeys in Bitcoin Script

2023-10-30 Thread James O'Beirne via bitcoin-dev
On Sat, Oct 28, 2023 at 12:51 AM Rusty Russell via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> But AFAICT there are multiple perfectly reasonable variants of vaults,
> too.  One would be:
>
> 1. master key can do anything
> 2. OR normal key can send back to vault addr without delay
> 3. OR normal key can do anything else after a delay.
>
> Another would be:
> 1. normal key can send to P2WPKH(master)
> 2. OR normal key can send to P2WPKH(normal key) after a delay.
>

I'm confused by what you mean here. I'm pretty sure that BIP-345 VAULT
handles the cases that you're outlining, though I don't understand your
terminology -- "master" vs. "normal", and why we are caring about P2WPKH
vs. anything else. Using the OP_VAULT* codes can be done in an arbitrary
arrangement of tapleaves, facilitating any number of vaultish spending
conditions, alongside other non-VAULT leaves.

> Well, I found the vault BIP really hard to understand.  I think it wants
> to be a new address format, not script opcodes.
>

Again confused here. This is like saying "CHECKLOCKTIMEVERIFY wants to be a
new address format, not a script opcode."

That said, I'm sure some VAULT patterns could be abstracted into the
miniscript/descriptor layer to good effect.


Re: [bitcoin-dev] Proposed BIP for OP_CAT

2023-10-26 Thread James O'Beirne via bitcoin-dev
I have to admit - I'm somewhat baffled at the enthusiasm for a "just CAT"
softfork, since I can't see that it would achieve much. I find it telling
that there isn't a compelling example to date that (i) actually has
working code and (ii) only relies upon CAT. I'm not averse to CAT, just
confused that there's a lot of enthusiasm for a CAT-only fork.

To do actually-interesting covenants, AFAICT you'd need "introspection"
opcodes and/or CHECKSIGFROMSTACK - and even then, for almost all
applications I'm familiar with, that kind of CAT-based approach would be
much more circuitous than the alternatives that have been discussed for
years on this list.

> Vaults

I don't think this is actually a use-case that CAT materially helps with.
Andrew's posts, while well written and certainly foundational, do not
sketch a design for vaults that someone would actually use. I don't see how
CAT alone (without many auxiliary introspection opcodes) facilitates vaults
that clear the usability hurdles I describe in this paper:
https://jameso.be/vaults.pdf. For example, batched withdrawals and partial
unvaultings don't seem possible.

Even with introspection opcodes, Burak's (
https://brqgoo.medium.com/emulating-op-vault-with-elements-opcodes-bdc7d8b0fe71)
prototype wasn't able to handle revaulting - an important feature for
usability (https://twitter.com/jamesob/status/1636546085186412544).

> Tree signatures

To what extent does Taproot obviate this use?


Re: [bitcoin-dev] BIP for OP_VAULT

2023-03-06 Thread James O'Beirne via bitcoin-dev
I'm glad to see that Greg and AJ are forming a habit of hammering
this proposal into shape. Nice work fellas.

To summarize:

What Greg is proposing above is, in essence, to TLUV-ify this proposal.

I.e. instead of relying on hashed commitments and recursive script
execution (e.g. <script hash> + later presentation of preimage
script for execution), OP_VAULT would instead move through its
withdrawal process by swapping out tapleaf contents according to
specialized rules. If this is opaque (as it was to me), don't fret -
I'll describe it below in the "mechanics" section.


The benefits of this TLUVification are

- we can avoid any nested/recursive script execution. I know the
  recursive stuff rankles some greybeards in spite of it being
  bounded to a single call. I'm not sure I share the concern, but
  maintaining the status quo seems good.

- the spec is easier to reason about, more or less. The opcodes
  introduced don't have variadic witness requirements, and each opcode
  is only consumed in a single way.

- there's less general indirection. Instead of saying "okay, here's the
  hash of the script I'm going to use to authorize trigger
  transactions," we're just outright including the trigger auth script
  in the tapleaf at the birth of the vault as regular ol' script that is
  evaluated before execution of the OP_VAULT instruction.

  Similarly, instead of relying on an implicit rule that an OP_VAULT can
  be claimed by a recovery flow, we're relying on a specific tapleaf that
  facilitates that recovery with OP_VAULT_RECOVER, described below.

Basically, OP_VAULT would just be implemented in a way that feels
more native to Taproot primitives.

Greg also introduces different opcodes to facilitate consistent
witness structure, rather than the variable ones we have now
since OP_VAULT and OP_UNVAULT can each be spent in two different
contexts. I've changed those a little here; instead of the three general
ones Greg gave, we whittled it down to two: OP_VAULT and
OP_VAULT_RECOVER.


So I think that, barring significant implementation complexity - which
I'll find out about soon and don't expect - this is a good change to the
proposal. As Greg noted, it doesn't really change anything about the
usage or expressiveness... other than the fact that, as a bonus, it
might allow an optional withdrawal authorization script (i.e. trigger
output => final target), which could be useful if e.g. some kind of
size-limiting opcode (e.g. OP_TX_MAXSIZE or something) came around in
the future as a kind of pinning fix.

If that last bit lost you, don't worry - that is speculative, but the
point is that this rework composes well with other stuff.


# CTV use

Another thing that has dawned on us is that we might as well just reuse
OP_CHECKTEMPLATEVERIFY for withdrawal target spends. Ben Carman and
others realized early on that you can synthesize CTV-like behavior by
spending to a 0-delay OP_UNVAULT output, so something CTVish has always
implicitly been a part of the proposal. But CTV is better studied and
basically as simple as the OP_UNVAULT spend semantics, so the thought is
that we might as well reuse all the existing work (and scrutiny) from
CTV.

As a concrete example, an issue with the existing proposal is that the
CTVish OP_UNVAULT behavior has txid malleability, since it
doesn't commit to nVersion or nLockTime or the input sequences. Using
CTV solves this issue. Otherwise we'd basically reinvent it - "something
something convergent evolution."

I think this is a satisfying development, because there's clearly demand
for CTV use in other contexts (DLC efficiency, e.g.), and if it's
required behavior for practical vaults, I think pulling in the existing
BIP-119 that's been worked over for years reduces the conceptual
surface area added by OP_VAULT.


# New mechanics of the proposal

So here I'm going to describe my rendering of Greg and AJ's suggestions.


## Required opcodes

- OP_VAULT: spent to trigger withdrawal
- OP_VAULT_RECOVER: spent to recover
- OP_CHECKTEMPLATEVERIFY: spent into final withdrawal target


Creating an initial deposit
---

For each vault, vaulted coins are spent to an output with the taproot
structure

  taproot(internal_key, {$recovery_leaf, $trigger_leaf, ...})

where

  internal_key =
unchanged from original proposal (some very safe recovery key)

  $recovery_leaf =
    [<optional recovery-auth>] <recovery-sPK-hash> OP_VAULT_RECOVER

  $trigger_leaf =
  

[bitcoin-dev] BIP for OP_VAULT

2023-02-13 Thread James O'Beirne via bitcoin-dev
Since the last related correspondence on this list [0], a number of
improvements have been made to the OP_VAULT draft [1]:

* There is no longer a hard dependence on package relay/ephemeral
  anchors for fee management. When using "authorized recovery," all
  vault-related transactions can be bundled with unrelated inputs and
  outputs, facilitating fee management that is self-contained to the
  transaction. Consequently, the contents of this proposal are in theory
  usable today.

* Specific output locations are no longer hardcoded in any of the
  transaction validation algorithms. This means that the proposal is now
  compatible with future changes like SIGHASH_GROUP, and
  transaction shapes for vault operations are more flexible.

---

I've written a BIP that fully describes the proposal here:


https://github.com/jamesob/bips/blob/jamesob-23-02-opvault/bip-vaults.mediawiki

The corresponding PR is here:

  https://github.com/bitcoin/bips/pull/1421

My next steps will be to try for a merge to the inquisition repo.

Thanks to everyone who has participated so far, but especially to AJ and
Greg for all the advice.

James

[0]:
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2023-January/021318.html
[1]: https://github.com/bitcoin/bitcoin/pull/26857


Re: [bitcoin-dev] OP_VAULT: a new vault proposal

2023-01-20 Thread James O'Beirne via bitcoin-dev
Andrew, thanks for taking the time.

> It seems like this proposal will encourage address reuse for vaults,
> at least in some parts. It seems like it would not be difficult to
> ensure that each vault address was unique through the use of key
> derivation.

I think it's worth stepping back and noting that this proposal leaves
the privacy-vs.-efficiency tradeoff to be decided by the end user.

For users who are very privacy conscious and are doing fairly low volume
(a few vaulted coins a month?), trading the ability to do batched
operations for no privacy loss seems reasonable. They can use a ranged
descriptor to generate recovery paths and unvault sPKs and reveal no
marginal information during recoveries or unvaults.

Though of course there still may be an obvious temporal association
across transactions in the case of recovery - everything with the same
unvault key has to be recovered at once.

For users who expect to have large numbers of vaulted coins and maybe
don't care as much about privacy (e.g. many commercial users), revealing
the related nature of coins that are being unvaulted or recovered seems
like an okay cost to pay. Such users might decide to create "tranches"
of vaults with different parameters in whatever manner makes sense for
their use.

Importantly: in either case, you can always keep the nature of
still-vaulted coins hidden by burying the OP_VAULT script in a taptree
and varying the internal key you use for each coin with a ranged
descriptor. This way, only the revealed contents of unvaults and
recoveries can be associated. So I think that's your worst case, which
really doesn't seem bad to me.

As an aside, the goal in supporting address reuse wasn't address reuse
*per se* - it was to remove the potential catastrophe resulting from the
case where you give someone a vault address to deposit to, and they wind
up depositing to it multiple times - whether you warned them against it
or not.


> I'm not sure how [batching without privacy loss] could be solved
> though.

For recovery, I think it might be intractable at the moment.

Seemingly unrelated vaults which have the same recovery parameters will
presumably be recovered together if the unvault key is compromised. The
simple fact that these outputs are being spent together and are OP_VAULT
scripts fundamentally reveals their association during recovery, no way
around that AFAICT.

Similarly for the unvaulting, you can't get around revealing that you're
spending a group of outputs that contain an OP_VAULT script.

As mentioned above, unvaulting -- regardless of whether your vault
configuration supports batching or not -- *never* has to correlate
unvaulted coins to still-vaulted coins if you are either

  (i) varying the internal key used for the OP_VAULT taptrees, or
  (ii) using the optional "authorized recovery" feature and varying
   that sPK with a ranged descriptor.

So there's really never a case where unvaults have to reveal a
relationship to, or between, still-vaulted coins. Subsequent unvaults
may be able to be correlated though on the basis of the recovery sPK
target hash.


> It just means that the recovery scripts must be the same, and this
> would leave an identifying mark on chain for every unvault.

This is only true if the user decides to create vaults which share
"literal" recovery paths. At the risk of belaboring the point for
clarity, you can avoid this by generating the different "literal"
recovery paths from a ranged descriptor which is controlled by a single
private key -- of course at the cost of no batched recovery.


> not to mention that sweeping to recovery would also reveal all of your
> coins too.

Maybe it's worth contextualizing that recovery sweeps really only happen
as a final fallback in catastrophic cases that, in a world without
vaults, would result in the coins simply being stolen. In this case I
would guess most users are willing to make the privacy trade to retain
ownership of their coins.


> I think it would be sufficient to do the same check as the OP_UNVAULT
> script [when validating the revault output in the unvault trigger
> transaction] and just require that the recovery script and the delay
> are the same

Consider that this allows the holder of the unvault key (possibly an
attacker) to immediately spend the vault into a new vault configuration
with a different unvault key.

Obviously the recovery path would still be the same, and so the owner of
the vault could still sweep if the unvault key switch was unauthorized,
but I'll need to think a little bit more about whether this change is
more fundamental.

Generally that would be easy to implement, though. I'll think on it. My
suspicion is that you're right and this would be a good change.


> I'm also not convinced that OP_VAULT and OP_UNVAULT should be allowed
> for bare and P2WSH outputs. It seems like it would make sense to just
> limit their usage to tapscripts as this would simplify their
> implementation.

I think you're right, an

Re: [bitcoin-dev] OP_VAULT: a new vault proposal

2023-01-18 Thread James O'Beirne via bitcoin-dev
> I don't see in the write up how a node verifies that the destination
> of a spend using an OP_VAULT output uses an appropriate OP_UNVAULT
> script.

It's probably quicker for you to just read through the
implementation that I reference in the last section of the paper.

https://github.com/bitcoin/bitcoin/blob/fdfd5e93f96856fbb41243441177a40ebbac6085/src/script/interpreter.cpp#L1419-L1456

> It would usually be prudent to store this recovery address with every
> key to the vault, ...

I'm not sure I really follow here. Worth noting now that in OP_VAULT the
recovery path can be optionally gated by an arbitrary scriptPubKey.

> This is rather limiting isn't it? Losing the key required to sign
> loses your recovery option.

This functionality is optional in OP_VAULT as of today. You can specify
OP_TRUE (or maybe I should allow empty?) in the <recovery sPK> to
disable any signing necessary for recovery.

> Wouldn't it be reasonably possible to allow recovery outputs with any
> recovery address to be batched, and the amount sums sent to each to be
> added up and verified?

I think the space savings from this are pretty negligible, since you're
just saving on the transaction overhead, and it makes the implementation
decently more complicated. One benefit might be sharing a common
fee-management output (e.g. ephemeral anchor) across the separate vaults
being recovered.
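
To put a number on "negligible": under assumed illustrative sizes, the
only thing batching saves is the fixed per-transaction overhead
(version, locktime, input/output counts, segwit marker - roughly 10.5
vbytes):

    n_vaults = 5
    overhead_vb = 10.5
    separate = n_vaults * overhead_vb   # n standalone recovery txs
    batched = overhead_vb               # one tx sweeping all n vaults
    print(separate - batched)           # ~42 vbytes saved across 5

The inputs, outputs, and witnesses are paid for either way.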

> If someday wallet vaults are the standard wallet construct, people
> might not even want to have a non-vault wallet just for use in
> unvaulting.

If you truly lacked any non-vaulted UTXOs and couldn't get any at a
reasonable price (?), I can imagine there might be a mechanism where you
include a payout output to some third party in a drafted unvault trigger
transaction, and they provide a spend of the ephemeral output.

Though you do raise a good point that this construction as written may
not be compatible with SIGHASH_GROUP... I'd have to think about that
one.

> Hmm, it seems inaccurate to say that step is "skipped". While there
> isn't a warm wallet step, its replaced with an OP_UNVAULT script step.

It is "skipped" in the sense that your bitcoin can't be stolen by having
to pass through some intermediate wallet during an authorized withdrawal
to a given target, in the way that they could if you had to prespecify
an unvault target when creating the vault.


---


> My proposal for efficient wallet vaults was designed to meet all of
> those criteria, and allows batching as well.

Probably a discussion of your proposal merits a different thread, but
some thoughts that occur:


> [from the README]
>
> OP_BEFOREBLOCKVERIFY - Verifies that the block the transaction is
> within has a block height below a particular number. This allows a
> spend-path to expire.

I think this breaks fundamental reorgability of transactions. I think
some of the other opcodes, e.g. the one that limits fee contribution on
the basis of historical feerate, are going to be similarly
controversial.

> This is done by using a static intermediate address that has no values
> that are unique to the particular wallet vault address.

Does this mean either that (i) this proposal doesn't have dynamic unvaulting
targets, or (ii) if you do, in order to be batch unvaulted, vaulted
coins need to first be spent into this intermediate output?

It sounds like (ii) is the case, given that your unvault target
specification lives in (I think?) the witness for the spend creating the
intermediate output.

If the intermediate address doesn't have any values which are unique to
a particular vault, how do you authorize recoveries from it?

---

Personally I think if you'd like to pursue your proposal, it'd be
valuable to see a full implementation. Might also make it easier to
assess the viability of the proposal.


Re: [bitcoin-dev] OP_VAULT: a new vault proposal

2023-01-18 Thread James O'Beirne via bitcoin-dev
I've implemented three changes based on suggestions from Greg Sanders
and AJ Towns.

I've segmented the changes into commits that should be
reasonable to follow, even though I'll probably rearrange the commit
structure later on.

1. Greg's suggestion: OP_UNVAULT outputs can now live behind
scripthashes. This means that the lifecycle of a vault can live entirely
within, say, Taproot. In this case the only thing that would reveal
the operation of the vault would be the content of the witness stack
when triggering an unvault or recovering. So I think no real privacy
benefits over the previous scheme, but certainly some efficiency ones
since we're moving more content from the scriptPubKey into the witness.

 Commit here:
https://github.com/bitcoin/bitcoin/pull/26857/commits/cd33d120c67cda7eb5c6bbfe3a1ea9fb6c5b93d1

2. AJ's suggestion: unvault trigger transactions can now have an extra
"revault" output that redeposits some balance to the same vault
scriptPubKey from which it came. This is nice because if the delay
period is long, you may want to manage a remaining vault balance
separately while the spent balance is pending an unvault.

 Commit here:
https://github.com/bitcoin/bitcoin/pull/26857/commits/cf3764fb503bc17c4438d1322ecf780b9dc3ef30

3. AJ's suggestion: instead of specifying <recovery-spk-hash>, introduce
a replacement parameter: <recovery-params>. This contains the same
target recovery sPK hash as before, but the remaining bytes contain a
scriptPubKey that functions as authorization for the recovery process.
This allows users to optionally avoid the risk of "recovery replays" at
the expense of having to maintain a recovery key. Users can opt out of
this by passing OP_TRUE for the recovery sPK. I guess maybe I could even
support just omitting an sPK altogether for the legacy behavior.

Commit here:
https://github.com/bitcoin/bitcoin/pull/26857/commits/fdfd5e93f96856fbb41243441177a40ebbac6085


The suggestions were good ones, and I think they've improved the
proposal.

My next steps are to do minor updates to the paper and start writing a
BIP draft.

Thanks to achow for the valuable feedback, which I'm still mulling on.

James


Re: [bitcoin-dev] OP_VAULT: a new vault proposal

2023-01-10 Thread James O'Beirne via bitcoin-dev
Thanks for the thoughtful reply AJ.


> I don't think that makes sense? With a general scheme, you'd only be
> bloating the witness data (perhaps including the witness script) not
> the scriptPubKey?

Sorry, sloppy language on my part. To be charitable, I'm talking about
the "figurative sPK," which of course these days lives in the witness
for script-path-ish spends. Maybe the witness discount means that
"complicated" scripts aren't as big a deal, depending on the actual
difference in raw script size.


> I think it might be better to use a pay-to-contract construction for
> the recovery path, rather than an empty witness.

So I guess the one advantage of what you're proposing over just
using a recovery-path key signature is that it's all derivable from your
cold privkey; you don't have to worry about accidentally losing the
recovery-path key.

Of course you're still vulnerable to spurious sweeps if the
sha256(secret) value gets found out, which presumably you'd want in an
accessible cache to avoid touching the cold secret every time you want
to sweep.

What do you think about the idea of making the recovery-path
authorization behavior variable on a single byte flag preceding the 32
byte data push, as I mentioned in another post? I think it may make
sense to leave this option open to end-users (and also allow for some
upgradeability).


> I think a generic OP_UNVAULT can be used to simulate OP_CTV: replace
> " OP_CTV" with "<000..0> 0  OP_UNVAULT".

Yup, that's an inefficient way of emulating CTV. If people want CTV, we
should just look at activating CTV. Greg Sanders has a thing about
"jetting" CTV into this proposal (I think) so that the code-paths are
shared, but I haven't figured out how that would work. They really
don't share that much code AFAICT.


> The paper seems to put "OP_UNVAULT" first, but the code seems to
> expect it to come last, not sure what's up with that inconsistency.

Again some sloppy notation on my part. What I sort of meant in the paper
was a kind of functional notation `OP_VAULT(param1, param2, ...)`. Let's
chalk that up to my inexperience actually working on script stuff.


> I think there's maybe a cleverer way of batching / generalising
> checking that input/output amounts match.
>
> [...]
>
>  * set C = the sum of each output that has a vault tag with
>    #recovery=X

This would also need to take into account that the <spend-delay>s are
compatible, but your point is well taken.


> I think one meaningful difference between these two approaches is that
> the current proposal means unvaulting locks up the entire utxo for the
> delay period, rather than just the amount you're trying to unvault.

This is a really good point and I think is one that's important to
incorporate with a change to the existing proposal.

A simple fix for facilitating the use of a "partial revault" while the
OP_UNVAULT UTXO is outstanding would be to allow for an optional third
output that is a redeposit back to the identical OP_VAULT sPK being
spent by the OP_UNVAULT transaction; the script interpreter would then
just ensure that the nValues of those two outputs sum to the sum of the
input nValues.
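
A minimal sketch of that amount check (hypothetical helper, not the
actual interpreter code):

    def check_unvault_amounts(input_values: list,
                              trigger_value: int,
                              revault_value: int = 0) -> bool:
        # The OP_UNVAULT (trigger) output plus the optional re-vault
        # output must together carry the full value of the OP_VAULT
        # inputs being spent.
        return trigger_value + revault_value == sum(input_values)

    assert check_unvault_amounts([50_000, 25_000], 60_000, 15_000)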

I can see what you're saying about having more generic "group amounts by
compatible vault params, and then compare to similarly grouped outputs,"
but I'm just wondering if there are other uses that enables besides the
partial-revault thing I mentioned above. If not, I'd probably rather
just stick with something simple like having the third optional
re-vault output.


> Changing the unvault construction to have an optional OP_VAULT output
> would remedy that, I think.

Oh - okay, this is what you're saying. Right!

Is that a sufficient change, or are there other benefits that the more
complicated clever-I/O-vault-grouping would enable that you have in
mind?


> What would it look like to just hide all this under taproot?
>
> [...]
>
> It also needs some way of constructing "unvault[X]", which could be a
> TLUV-like construction.
>
> That all seems possible to me; though certainly needs more work/thought
> than just having dedicated opcodes and stuffing the data directly in
> the sPK.

I think this is a very important comparison to do, but I'm eager to see
code for things like this. There have been a lot of handwavey proposals
lately without tangible code artifacts. I'm eager to see what these
alternatives look like in practice - i.e. in functional tests.


Thanks again for the great mail.
James


Re: [bitcoin-dev] OP_VAULT: a new vault proposal

2023-01-10 Thread James O'Beirne via bitcoin-dev
Forwarding in some conceptual feedback from the pull request.

From ariard:

> I've few open questions, like if the recovery path should be committed
> with a signature rather than protected by a simple scriptpubkey preimage.

That's something I've wondered about too. I have to ruminate on AJ's good
post about this, but a pretty straightforward way of enabling this (at the
expense of some complexity) is to do something like "if
<recovery-spk-hash> is 32 bytes, treat it as it's currently used. If
it's 33 bytes, use the first byte as a parameter for how to interpret
it." To
start with, an extra prefix byte of 0x00 could mean "require a witness
satisfying the scriptPubKey that hashes to the remaining 32 bytes" in the
same way we do the unvault signing. This would enable a "sign-to-recover"
flow at the option of the user, specified during vault creation.
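
A sketch of how that parsing might look (hypothetical encoding per the
above, with only the 0x00 prefix defined to start with):

    def parse_recovery_params(data: bytes):
        if len(data) == 32:
            # Current behavior: bare recovery-sPK hash; no signing
            # is required to sweep to the recovery path.
            return ("unauthenticated", data)
        if len(data) == 33 and data[0] == 0x00:
            # "Sign-to-recover": a witness satisfying the
            # scriptPubKey hashing to these 32 bytes is required.
            return ("authorized", data[1:])
        raise ValueError("unknown recovery-params encoding")

    kind, spk_hash = parse_recovery_params(bytes(32))
    assert kind == "unauthenticated"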

> The current OP_VAULT implementation is using OP_NOP repurposing but this
> doesn't seem compatible with Taproot-only extensions (e.g. ANYPREVOUT) and
> maybe an OP_SUCCESS could be used.

Yes, with Greg's suggestion of putting <target-hash> on the witness stack
during OP_VAULT (-> OP_UNVAULT) spend, we could conceivably move
OP_VAULT/OP_UNVAULT into Taproot-only OP_SUCCESSx opcodes. I haven't
thought hard about how worthwhile it is to preserve the ability to use
OP_VAULT in pre-Taproot contexts.

> There is a conceptual wonder, if a CTV and template malleability approach
> wouldn't better suit the vault use-case and allow other ones, as such
> better re-usability of primitives.

I dedicated a whole section of the paper ("Precomputed vaults with
covenants") to explaining why precomputed covenant mechanisms have big
shortcomings for vaults.

That said, a number of people have commented about OP_VAULT's ability to
(inefficiently) emulate CTV. I'm still very supportive of CTV, I just don't
really have any uses I personally understand inside and out aside from
vaults... so if others do, they should really post about it on this list
and we should resume working on an activation for CTV!

---

From naumenkogs:

> I'm personally not sure batching withdrawals is that compelling... It's a
> nice-to-have, but I'd think about the benefits dropping this feature would
> provide.

Having familiarity with a few large-scale custodial operations, I think
batching is a really big deal. And if you're going to support multiple
deposits to the same vault, no support for batching is going to result in a
lot of unnecessary output creation even as a small user if you're, e.g.,
doing weekly automated deposits from an exchange to a vault you've
configured.

Darosior comments:

> On the contrary i think the batching feature is very compelling. The
> impossibility to batch Unvaults in Revault is a major drawback: it
> significantly increases the cost of any realistic operation (you need one
> whole additional transaction per input, and each has likely more than one
> output). It also potentially increases the cost on the network (you'd
> likely want some sort of anchor output on each Unvault tx, that you might
> not spend, so that's 2*n outputs created with n the number of coins spent):
> we definitely don't want to prevent batching. The ability to batch the
> recovery transactions (what we called Emergency tx in Revault) is also very
> compelling but i think your comment was only about batched withdrawals.


Re: [bitcoin-dev] OP_VAULT: a new vault proposal

2023-01-10 Thread James O'Beirne via bitcoin-dev
Greg explained his suggestion to me off-list, and I think it's a good one.
To summarize, consider the normal "output flow" of an expected vault use:

(i) output to be vaulted
  => (ii) OP_VAULT output
=> (iii) OP_UNVAULT "trigger" output
  => (iv) final output

In my existing draft implementation, all outputs aside from (iii), the
OP_UNVAULT trigger, can be P2TR or P2WSH. In other words, those outputs can
hide their true script until spend. In my draft, the OP_UNVAULT trigger had
to be bare so that the script interpreter could inspect part of it for
validity: "does this OP_UNVAULT have the same <recovery-sPK-hash> and
<spend-delay> as the OP_VAULT?"

If that output wasn't bare, because the <target-hash> is variable at the
time of OP_UNVAULT output creation, the script interpreter would have no
way of constructing the expected scriptPubKey.

Greg's suggestion would allow that output to be any kind of script. He
suggests putting the <target-hash> onto the witness stack when spending the
OP_VAULT output (and creating the OP_UNVAULT output). If we did that, the
script interpreter could e.g. use a NUMS point (i.e. a publicly known point
with no usable private key) to construct a Taproot configuration that looks
like

  tr(NUMS, { <expected OP_UNVAULT script> })

and check if the scriptPubKey of the proposed OP_UNVAULT output matches
that. This would allow all outputs in vault lifecycles to be P2TR, for
example, which would conceal the operation of the vault - a very nice
feature!

This would also allow the OP_VAULT/OP_UNVAULT opcodes to be implemented as
Taproot-only OP_SUCCESSx opcodes, if that was decided to be preferable.

The problem is how to (and whether to) enable something similar for witness
v0 outputs. For example, if we want the (ii) and (iii) output scripts to
live behind P2WSH. One (kind of hacky) option to enable this is to have the
script interpreter construct the expected OP_UNVAULT scriptPubKey on the
basis of what witness version it sees. For example, if it sees "OP_0 <32
bytes data>", it would use  on the witness stack to construct
a fitting P2WSH scriptPubKey that is compatible with the OP_VAULT being
spent, and then match against that. But if it detects "OP_1 <32 bytes
data>", it would do the same process for an expected Taproot-with-NUMS
output.
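
A Python sketch of that dispatch (helper names are mine, and the
taproot branch elides the BIP-341 point arithmetic):

    import hashlib

    def sha256(b: bytes) -> bytes:
        return hashlib.sha256(b).digest()

    def expected_unvault_spk(proposed_spk: bytes,
                             unvault_script: bytes) -> bytes:
        # unvault_script is the OP_UNVAULT script reconstructed from
        # the OP_VAULT's parameters plus the target hash supplied on
        # the witness stack.
        if proposed_spk[:2] == bytes([0x00, 0x20]):
            # v0: expect P2WSH of the reconstructed script.
            return bytes([0x00, 0x20]) + sha256(unvault_script)
        if proposed_spk[:2] == bytes([0x51, 0x20]):
            # v1: expect tr(NUMS, {unvault_script}) - a NUMS internal
            # key tweaked with the taproot commitment to the single
            # leaf (point math omitted from this sketch).
            raise NotImplementedError
        raise ValueError("unsupported witness version")

    # The spend would be valid only if the proposed output's
    # scriptPubKey matches the expected one:
    spk = bytes([0x00, 0x20]) + sha256(b"\x51")
    assert expected_unvault_spk(spk, b"\x51") == spk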

---

Anyway, sorry if that was more verbose than necessary, but I think it's a
really great suggestion from Greg. I'll look at modifying the
implementation accordingly. I'd be curious to hear what others think as
well.


Re: [bitcoin-dev] OP_VAULT: a new vault proposal

2023-01-09 Thread James O'Beirne via bitcoin-dev
Hey Greg,

I think what you're trying to get at here is that the OP_UNVAULT
scriptPubKey *must* be a bare script so that the OP_VAULT spend logic can
verify that we're spending an OP_VAULT output into a compatible OP_UNVAULT
output, and that's true. The OP_UNVAULT scriptPubKey also must contain the
target hash because that hash is used when validating that spend to ensure
that the final unvault target matches what was advertised when the
OP_UNVAULT output was created.

So I'm not sure what problem you're trying to solve by putting the target
hash on the OP_VAULT spend witness stack. If it were placed there, it
wouldn't be accessible during OP_UNVAULT spend AFAICT. I agree it would be
nice to figure out a way to allow the OP_UNVAULT scriptPubKey to not be
bare, which may require moving the target hash out of it, but we'd have to
figure out a mechanism to properly forward the target hash for validation.

Best,
James

On Mon, Jan 9, 2023 at 2:32 PM Greg Sanders wrote:

> Hi James and co,
>
> Currently there is no way to make this compatible with scripthashes of any
> kind, since the script interpreter has no insight into the OP_UNVAULT
> outputs' "execution script", and one of the arguments of OP_UNVAULT is
> freeform, resulting in an unpredictable output scriptpubkey.
>
> I think the fix is just requiring a single additional witness data item
> during OP_VAULT spend (for the unvault path), mandating the
> <target-hash> to be included in the witness stack as an input to the
> OP_VAULT opcode, and transaction introspection then checks to make sure the
> witness item and the corresponding output script template matches the
> expected.
>
> This would only be necessary for the unvaulting path, and not for the
> recovery path.
>
> Cheers,
> Greg
>
> On Mon, Jan 9, 2023 at 2:10 PM rot13maxi via bitcoin-dev <
> bitcoin-dev@lists.linuxfoundation.org> wrote:
>
>> Hey James,
>>
>> Really cool proposal. I’ve been thinking a lot lately about script paths
>> for inheritance. In a lot of the “have a relative time lock that allows a
>> different key to spend coins, or allows a smaller threshold of a multisig
>> to spend” schemes, you have the problem of needing to “refresh” all of your
>> coins when the timelock is close to maturation. In a lot of the “use
>> multisig with ephemeral keys to emulate covenants” schemes, you have to
>> pre-commit to the terminal destination well in advance of the spend-path
>> being used, which leads to all kinds of thorny questions about security and
>> availability of *those* keys. In other words, you either have to have
>> unbound destinations but a timer that needs resetting, or you have unbound
>> time but fixed destinations. This design gets you the best of both because
>> the destination SPKs aren’t committed to until the unvaulting process
>> starts. This (or something like this with destination binding at
>> unvault-time) would be an incredibly useful tool for inheritance designs in
>> wallets.
>>
>> I need to think a bit more about the recovery path not having any real
>> encumbrances on it. Maybe in practice if you’re worried about DoS, you have
>> UTXOs that commit to multiple vault paths that have tweaked recovery
>> destinations or something, or maybe it really is the right move to say that
>> if recovery is triggered, you probably do want it for all of your inflight
>> unvaultings.
>>
>> Looking forward to reading this a few more times and talking more about
>> it.
>>
>> Thanks!
>> rijndael
>>
>>
>> On Mon, Jan 9, 2023 at 11:07 AM, James O'Beirne via bitcoin-dev <
>> bitcoin-dev@lists.linuxfoundation.org> wrote:
>>
>> For the last few years, I've been interested in vaults as a way to
>> substantially derisk custodying Bitcoin, both at personal and commercial
>> scales. Instead of abating with familiarity, as enthusiasm sometimes
>> does, my conviction that vaults are an almost necessary part of bitcoin's
>> viability has only grown over the years.
>>
>> Since people first started discussing vaults, it's been pretty clear that
>> some kind of covenant-enabling consensus functionality is necessary to
>> provide the feature set necessary to make vault use practical.
>>
>> Earlier last year I experimented with using OP_CTV[1], a limited covenant
>> mechanism, to implement a "minimum-viable" vault design. I found that the
>> inherent limitations of a precomputed covenant scheme left the resulting
>> vault implementation wanting, even though it was an improvement over
>> existing strategies that rely on presigned transactions and (hopefully)
>> ephemeral keys.

[bitcoin-dev] OP_VAULT: a new vault proposal

2023-01-09 Thread James O'Beirne via bitcoin-dev
For the last few years, I've been interested in vaults as a way to
substantially derisk custodying Bitcoin, both at personal and commercial
scales. Instead of abating with familiarity, as enthusiasm sometimes
does, my conviction that vaults are an almost necessary part of bitcoin's
viability has only grown over the years.

Since people first started discussing vaults, it's been pretty clear that
some kind of covenant-enabling consensus functionality is necessary to
provide the feature set necessary to make vault use practical.

Earlier last year I experimented with using OP_CTV[1], a limited covenant
mechanism, to implement a "minimum-viable" vault design. I found that the
inherent limitations of a precomputed covenant scheme left the resulting
vault implementation wanting, even though it was an improvement over
existing strategies that rely on presigned transactions and (hopefully)
ephemeral keys.

But I also found proposed "general" covenant schemes to be
unsuitable for this use. The bloated scriptPubKeys, both in size and
complexity, that would result when implementing something like a vault
weren't encouraging. Also importantly, the social-consensus quagmire
regarding which covenant proposal to actually deploy feels at times
intractable.

As a result, I wanted to explore a middle way: a design solely concerned
with making the best vault use possible, with covenant functionality as a
secondary consideration. In other words, a proposal that would deliver
the safety benefits of vaults to users without getting hung up on
trying to solve the general problem of covenants.

At first this design, OP_VAULT, was just sort of a pipe dream. But as I
did more thinking (and eventually implementing) I became more convinced
that, even if it isn't considered for soft-fork, it is a worthwhile
device to serve as a standard benchmark against which other proposals
might be judged.

I wrote a paper that summarizes my findings and the resulting proposal:
https://jameso.be/vaults.pdf

along with an accompanying draft implementation:
https://github.com/bitcoin/bitcoin/pull/26857

I might work on a BIP if there's interest.

James

[1]: https://github.com/jamesob/simple-ctv-vault


Re: [bitcoin-dev] Ephemeral Anchors: Fixing V3 Package RBF againstpackage limit pinning

2022-10-19 Thread James O'Beirne via bitcoin-dev
I'm also very happy to see this proposal, since it gets us closer to
having a mechanism that allows contributing to a transaction's feerate
in an "unauthenticated" way, which seems to be a very helpful feature
for vaults and other contracting protocols.
One possible advantage of the sponsors interface -- and I'm curious for
your input here Greg -- is that with sponsors, assuming we relaxed the "one
sponsor per sponsoree" constraint, multiple uncoordinated parties can
collaboratively bump a tx's feerate. A simple example: a batch
withdrawal from an exchange could be created with a low feerate, and
then multiple users with a vested interest in expedited confirmation
could all "chip in" to raise the feerate with multiple sponsor
transactions.

Having a single ephemeral output seems to create a situation where a single
UTXO has to shoulder the burden of CPFPing a package. Is there some way we
could (possibly later) amend the ephemeral anchor interface to allow for
this kind of collaborative sponsoring? Could you maybe see "chained"
ephemeral anchors that would allow this?


On Tue, Oct 18, 2022 at 12:52 PM Jeremy Rubin via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Excellent proposal and I agree it does capture much of the spirit of
> sponsors w.r.t. how they might be used for V3 protocols.
>
> The only drawbacks I see are they don't work for lower tx version
> contracts, so there's still something to be desired there, and that the
> requirement to sweep the output must be incentive compatible for the miner,
> or else they won't enforce it (pass the buck onto the future bitcoiners).
> The Ephemeral UTXO concept can be a consensus rule (see
> https://rubin.io/public/pdfs/multi-txn-contracts.pdf "Intermediate UTXO")
> we add later on in lieu of managing them by incentive, so maybe it's a
> cleanup one can punt.
>
> One question I have is if V3 is designed for lightning, and this is
> designed for lightning, is there any sense in requiring these outputs for
> v3? That might help with e.g. anonymity set, as well as potentially keep
> the v3 surface smaller.
>
> On Tue, Oct 18, 2022 at 11:51 AM Greg Sanders via bitcoin-dev <
> bitcoin-dev@lists.linuxfoundation.org> wrote:
>
>> > does that effectively mark output B as unspendable once the child gets
>> confirmed?
>>
>> Not at all. It's a normal spend like before, since the parent has been
>> confirmed. It's completely unrestricted, not being bound to any
>> V3/ephemeral anchor restrictions on size, version, etc.
>>
>> On Tue, Oct 18, 2022 at 11:47 AM Arik Sosman via bitcoin-dev <
>> bitcoin-dev@lists.linuxfoundation.org> wrote:
>>
>>> Hi Greg,
>>>
>>> Thank you very much for sharing your proposal!
>>>
>>> I think there's one thing about the second part of your proposal that
>>> I'm missing. Specifically, assuming the scenario of a v3 transaction with
>>> three outputs, A, B, and the ephemeral anchor OP_TRUE. If a child
>>> transaction spends A and OP_TRUE, does that effectively mark output B as
>>> unspendable once the child gets confirmed? If so, isn't the implication
>>> therefore that to safely spend a transaction with an ephemeral anchor, all
>>> outputs must be spent? Thanks!
>>>
>>> Best,
>>> Arik
>>>
>>> On Tue, Oct 18, 2022, at 6:52 AM, Greg Sanders via bitcoin-dev wrote:
>>>
>>> Hello Everyone,
>>>
>>> Following up on the "V3 Transaction" discussion here
>>> https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-September/020937.html
>>> , I would like to elaborate a bit further on some potential follow-on work
>>> that would make pinning severely constrained in many setups.
>>>
>>> V3 transactions may solve bip125 rule#3 and rule#5 pinning attacks under
>>> some constraints[0]. This means that when a replacement is to be made and
>>> propagated, it costs the expected amount of fees to do so. This is a great
>>> start. What's left in this subset of pinning is *package limit* pinning. In
>>> other words, a fee-paying transaction cannot enter the mempool due to the
>>> existing mempool package it is being added to already being too large in
>>> count or vsize.
>>>
>>> Zooming into the V3 simplified scenario for sake of discussion, though
>>> this problem exists in general today:
>>>
>>> V3 transactions restrict the package limit of a V3 package to one parent
>>> and one child. If the parent transaction includes two outputs which can be
>>> immediately spent by separate parties, this allows one party to disallow a
>>> spend from the other. In Gloria's proposal for ln-penalty, this is worked
>>> around by reducing the number of anchors per commitment transaction to 1,
>>> and each version of the commitment transaction has a unique party's key on
>>> it. The honest participant can spend their version with their anchor and
>>> package RBF the other commitment transaction safely.
>>>
>>> What if there's only one version of the commitment transaction, such as
>>> in other protocols like duplex payment channels, eltoo? What about multi
>>> par

[bitcoin-dev] More uses for CTV

2022-08-19 Thread James O'Beirne via bitcoin-dev
Over the past few months there have been a few potential uses of
OP_CHECKTEMPLATEVERIFY (BIP-119)
(https://github.com/bitcoin/bitcoin/pull/21702) that I've found
interesting.

# Congestion control redux

I first heard of CTV at a presentation Jeremy did at Chaincode back in
2018 or '19, where he cited congestion control as one of its main use
cases.

The pitch went something like

> When there is a high demand for blockspace it becomes very expensive
> to make transactions. By using OP_CHECKTEMPLATEVERIFY, a large volume
> payment processor may aggregate all their payments into a single O(1)
> transaction for purposes of confirmation. Then, some time later, the
> payments can be expanded out of that UTXO when the demand for
> blockspace is decreased.

(from https://utxos.org/uses/scaling/)

At the time that didn't particularly grab me; the idea of smoothing fee
rates seemed nice but marginal.

But recently, two particular cases have made me reassess the value of
congestion control.

The first stems from the necessity of L2 protocols (payment channels,
vaults, etc.) to, under certain circumstances, settle to the chain in a
timely way in order to prevent abuse of the protocol. If some
unexpected condition (a protocol exploit, large network disconnect, en
masse vault breach, etc.) creates a situation where a large number of
contracts need to settle to the chain in short order, mempools could
fill up and protocol failures could happen for want of mempool/block
space
(
https://github.com/jamesob/mempool.work#failure-one-mempool-to-rule-them-all
).

In such a case, CTV could be used effectively to "compress" settlement
commitments, get them on-chain, and then facilitate later unpacking of
the CTV outputs into the contract's true end state.

This amounts to `n` contract-control outputs (e.g. lightning funding
transaction outputs) being spent into a single CTV output, which
commits to the final settlement state. Multiple parties could
trustlessly collaborate to settle into a single CTV output using
SIGHASH_ALL | ANYONECANPAY. This requires a level of interaction
similar to coinjoins.

Put simply, CTV allows deferring the chainspace required for the final
settlement outputs, but still immediately requires space for the
inputs. This might sound like a temporary reprieve from half-ish of the
space required to settle, but in many (most?) cases the outputs require
substantially more space than the inputs, given that often we're
settling a single UTXO into multiple payouts per party. A 2, 3, or
4-fold increase (depending on the contracting pattern) in capacity
isn't a silver bullet, but it could ameliorate the damage of unexpected
settlement "tidal waves."
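
Some rough arithmetic on that claim, assuming illustrative sizes (~58
vbytes per taproot key-path input, ~43 vbytes per P2TR output, ~11
vbytes of fixed transaction overhead):

    n_contracts = 1000
    payouts_per_contract = 2
    in_vb, out_vb, tx_vb = 58, 43, 11

    full_settlement = tx_vb + n_contracts * (
        in_vb + payouts_per_contract * out_vb)
    ctv_compressed = tx_vb + n_contracts * in_vb + out_vb

    print(full_settlement / ctv_compressed)  # ~2.5x less space up front

More payouts per contract pushes that ratio toward the 3-4x end.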

Conceptually, CTV is the most parsimonious way to do such a scheme,
since you can't really get smaller than a SHA256 commitment, and that's
essentially all CTV is.

The second congestion control case is related to a recent post Bram
made about stability under a no-block-subsidy regime. He posted

> If transaction fees came in at an even rate over time all at the
> exact same level then they work fine for security, acting similarly
> to fixed block rewards. Unfortunately that isn't how it works in the
> real world. There's a very well established day/night cycle with fees
> going to zero overnight and even longer gaps on weekends and
> holidays. If in the future Bitcoin is entirely dependent on fees for
> security (scheduled very strongly) and this pattern keeps up
> (overwhelmingly likely) then this is going to become a serious
> problem.

(from
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-July/020702.html
)

Ryan Grant points out that CTV's congestion control use could help to
smooth fees, creating a less spiky incentive to mine
(
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-July/020702.html
).

Admittedly the original concern is speculative and a ways off from now,
as others in the thread pointed out. But having CTV-based fee smoothing
as an option certainly doesn't seem like a bad thing.


# Atomic mining pool payouts

Laurentia is a mining pool design that pays participants out directly
from the coinbase of found blocks.

> Block solve reward is distributed directly from the block to each
> user, meaning each user gets a 'mined' transaction directly into
> their wallet as soon as the block is solved so there is no wait to
> get paid and no pool wallet storing user's rewards.

(from
https://laurentiapool.org/wp-content/uploads/2020/05/laurentiapool_whitepaper.pdf
)

I'm not a mining expert and so I can't speak to the efficacy of the
paper as a whole, but direct-from-coinbase payouts seem like a
desirable feature which avoids some trust in pools. One limitation is
the size of the coinbase outputs owed to constituent miners; this
limits the number of participants in the pool.

If the payout was instead a single OP_CTV output, an arbitrary number
of pool participants could be paid out "atomically" within a single
coinbase.
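
A sketch of the shape such a coinbase output could take, assuming
BIP-119's OP_NOP4 (0xb3) encoding for OP_CTV:

    payout_template_hash = bytes(32)  # stand-in for the hash committing
                                      # to the N-output payout tx
    coinbase_spk = bytes([0x20]) + payout_template_hash + bytes([0xb3])
    # i.e. a bare "<32-byte hash> OP_CHECKTEMPLATEVERIFY" script: one
    # ~34-byte output regardless of how many participants get paid.
    assert len(coinbase_spk) == 34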

---

CTV both

Re: [bitcoin-dev] Automatically reverting ("transitory") soft forks, e.g. for CTV

2022-04-22 Thread James O'Beirne via bitcoin-dev
> The enumeration of covenants uses here excludes vaulting,
> which I see as far and away the highest utility use for covenants

Apologies for the double post, but I need to caveat this.

To be more accurate, I see "coin pools" as the most potentially
valuable use of covenants, since we need to address the scalability of
UTXO ownership as an existential issue at some point down the road - but
because a workable design has not yet been proposed (I don't think e.g.
CoinPools is scalable as-written... but that's for another
post), I am omitting that use in favor of vaults, which are well
understood and can be implemented workably in various ways.

I do not want to suggest that I don't want more general covenant
abilities - I do! But it's clear that both the designs and exact
usages of recursive covenants need *a lot* of work, probably years.

Throwing CTV to the wayside because it isn't a 100% solution to
every possible covenant use we can dream up feels a bit like
slamming the door on P2SH because Taproot might come along
a few years later.

On Fri, Apr 22, 2022 at 12:48 PM James O'Beirne wrote:

> > APO/IIDs, CTV, and TLUV/EVICT all seem to me to be very specific to
> > certain usecases (respectively: Eltoo, congestion control, and
> > joinpools)
>
> The enumeration of covenants uses here excludes vaulting,
> which I see as far and away the highest utility use for covenants given
> that it allows significant derisking of custody for any user of Bitcoin
> interested in holding their own coins (which is debatably redundant
> with a strict definition of "Bitcoin user" ;).
>
> A lot of why I like CTV is the simple fact that it is a low-risk way of
> getting us vaults. That feature in itself is more than enough to
> justify (to me) CTV's added validation complexity, which is very modest
> - in contrast every other covenant proposal I've seen so far.
>
> On Thu, Apr 21, 2022 at 6:28 PM David A. Harding via bitcoin-dev <
> bitcoin-dev@lists.linuxfoundation.org> wrote:
>
>> On 21.04.2022 08:39, Matt Corallo wrote:
>> > We add things to Bitcoin because (a) there's some demonstrated
>> > use-cases and intent to use the change (which I think we definitely
>> > have for covenants, but which only barely, if at all, suggests
>> > favoring one covenant design over any other)
>>
>> I'm unconvinced about CTV's use cases but others have made reasonable
>> claims that it will be used.  We could argue about this indefinitely,
>> but I would love to give CTV proponents an opportunity to prove that a
>> significant number of people would use it.
>>
>> > (b) because its
>> > generally considered aligned with Bitcoin's design and goals, based on
>> > developer and more broad community response
>>
>> I think CTV fulfills this criteria.  At least, I can't think of any way
>> BIP119 itself (notwithstanding activation concerns) violates Bitcoin's
>> designs and goals.
>>
>> > (c) because the
>> > technical folks who have/are willing to spend time working on the
>> > specific design space think the concrete proposal is the best design
>> > we have
>>
>> This is the criteria that most concerns me.  What if there is no
>> universal best?  For example, I mentioned in my previous email that I'm
>> a partisan of OP_CAT+OP_CSFS due to their min-max of implementation
>> simplicity versus production flexibility.  But one problem is that
>> spends using them would need to contain a lot of witness data.  In my
>> mind, they're the best for experimentation and for proving the existence
>> of demand for more optimized constructions.
>>
>> OP_TX or OP_TXHASH would likely offer almost as much simplicity and
>> flexibility but be more efficient onchain.  Does that make them better
>> than OP_CAT+OP_CSFS?  I don't know how to objectively answer that
>> question, and I don't feel comfortable with my subjective opinion of
>> CAT+CSFS being better than OP_TX.
>>
>> APO/IIDs, CTV, and TLUV/EVICT all seem to me to be very specific to
>> certain usecases (respectively: Eltoo, congestion control, and
>> joinpools), providing maximum onchain efficiency for those cases but
>> requiring contortions or larger witnesses to accomplish other covenant
>> usecases.  Is their increased efficiency better than more general
>> constructions like CSFS or TX?  Again, I don't know how to answer that
>> question objectively, although subjectively I'm ok with optimized
>> constructions for cases of proven demand.
>>
>> > , and finally (d) because the implementation is well-reviewed
>> > and complete.
>>
>> No comment here; I haven't followed CTV's review progress to know
>> whether I'd consider it well enough reviewed or not.
>>
>> > I do not see how we can make an argument for any specific covenant
>> > under (c) here. We could just as well be talking about
>> > TLUV/CAT+CHECKSIGFROMSTACK/etc, and nearly anyone who is going to use
>> > CTV can probably just as easily use those instead - ie this has
>> > nothing to do with "will people use it".
>>
>> I'm curious how we as a technical community will be able to determine
>> which is the best approach.

Re: [bitcoin-dev] Automatically reverting ("transitory") soft forks, e.g. for CTV

2022-04-22 Thread James O'Beirne via bitcoin-dev
> APO/IIDs, CTV, and TLUV/EVICT all seem to me to be very specific to
> certain usecases (respectively: Eltoo, congestion control, and
> joinpools)

The enumeration of covenants uses here excludes vaulting,
which I see as far and away the highest utility use for covenants given
that it allows significant derisking of custody for any user of Bitcoin
interested in holding their own coins (which is debatably redundant
with a strict definition of "Bitcoin user" ;).

A lot of why I like CTV is the simple fact that it is a low-risk way of
getting us vaults. That feature in itself is more than enough to
justify (to me) CTV's added validation complexity, which is very modest
- in contrast every other covenant proposal I've seen so far.

On Thu, Apr 21, 2022 at 6:28 PM David A. Harding via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> On 21.04.2022 08:39, Matt Corallo wrote:
> > We add things to Bitcoin because (a) there's some demonstrated
> > use-cases and intent to use the change (which I think we definitely
> > have for covenants, but which only barely, if at all, suggests
> > favoring one covenant design over any other)
>
> I'm unconvinced about CTV's use cases but others have made reasonable
> claims that it will be used.  We could argue about this indefinitely,
> but I would love to give CTV proponents an opportunity to prove that a
> significant number of people would use it.
>
> > (b) because its
> > generally considered aligned with Bitcoin's design and goals, based on
> > developer and more broad community response
>
> I think CTV fulfills this criteria.  At least, I can't think of any way
> BIP119 itself (notwithstanding activation concerns) violates Bitcoin's
> designs and goals.
>
> > (c) because the
> > technical folks who have/are willing to spend time working on the
> > specific design space think the concrete proposal is the best design
> > we have
>
> This is the criteria that most concerns me.  What if there is no
> universal best?  For example, I mentioned in my previous email that I'm
> a partisan of OP_CAT+OP_CSFS due to their min-max of implementation
> simplicity versus production flexibility.  But one problem is that
> spends using them would need to contain a lot of witness data.  In my
> mind, they're the best for experimentation and for proving the existence
> of demand for more optimized constructions.
>
> OP_TX or OP_TXHASH would likely offer almost as much simplicity and
> flexibility but be more efficient onchain.  Does that make them better
> than OP_CAT+OP_CSFS?  I don't know how to objectively answer that
> question, and I don't feel comfortable with my subjective opinion of
> CAT+CSFS being better than OP_TX.
>
> APO/IIDs, CTV, and TLUV/EVICT all seem to me to be very specific to
> certain usecases (respectively: Eltoo, congestion control, and
> joinpools), providing maximum onchain efficiency for those cases but
> requiring contortions or larger witnesses to accomplish other covenant
> usecases.  Is their increased efficiency better than more general
> constructions like CSFS or TX?  Again, I don't know how to answer that
> question objectively, although subjectively I'm ok with optimized
> constructions for cases of proven demand.
>
> > , and finally (d) because the implementation is well-reviewed
> > and complete.
>
> No comment here; I haven't followed CTV's review progress to know
> whether I'd consider it well enough reviewed or not.
>
> > I do not see how we can make an argument for any specific covenant
> > under (c) here. We could just as well be talking about
> > TLUV/CAT+CHECKSIGFROMSTACK/etc, and nearly anyone who is going to use
> > CTV can probably just as easily use those instead - ie this has
> > nothing to do with "will people use it".
>
> I'm curious how we as a technical community will be able to determine
> which is the best approach.  Again, I like starting simple and general,
> gathering real usage data, and then optimizing for demonstrated needs.
> But the simplest and most general approaches seem to be too general for
> some people (because they enable recursive covenants), seemingly forcing
> us into looking only at application-optimized designs.  In that case, I
> think the main thing we want to know about these narrow proposals for
> new applications is whether the applications and the proposed consensus
> changes will actually receive significant use.  For that, I think we
> need some sort of test bed with real paying users, and ideally one that
> is as similar to Bitcoin mainnet as possible.
>
> > we
> > cannot remove the validation code for something ever, really - you
> > still want to be able to validate the historical chain
>
> You and Jeremy both brought up this point.  I understand it and I
> should've addressed it better in my OP, but I'm of the opinion that
> reverting to earlier consensus rules gives future developers the
> *option* of dropping no-longer-used consensus code as a practical
> simplification of the same type we'

Re: [bitcoin-dev] Automatically reverting ("transitory") soft forks, e.g. for CTV

2022-04-22 Thread James O'Beirne via bitcoin-dev
> There are at least three or four separate covenants designs that have
> been posted to this list, and I don't see why we're even remotely
> talking about a specific one as something to move forward with at
> this point.

To my knowledge none of these other proposals (drafts, really) have
actual implementations, let alone the many sample usages that exist for
CTV. Given that the "covenants" discussion has been ongoing for years
now, I think the lack of other serious proposals is indicative of the
difficulty inherent in coming up with a preferable alternative to CTV.

Each covenant proposal aside from CTV has seemed either abstruse and
handwavy to me (TLUV, OP_EVICT), general to the point of
being hard to analyze for safety (CAT), or encouraging of
witness verbosity that seems wasteful (OP_TX[HASH]).

CTV is about as simple a covenant system as can be devised - its limits
relative to more "general" covenant designs notwithstanding.
The level of review around CTV's design is well beyond the other
sketches for possible designs that this list has seen.
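
To make the simplicity concrete: a bare CTV spend condition is a single
32-byte hash plus one opcode. A minimal sketch, assuming the BIP-119
draft's assignment of OP_CHECKTEMPLATEVERIFY to the OP_NOP4 byte (0xb3):

    OP_CHECKTEMPLATEVERIFY = 0xb3  # BIP-119's proposed opcode (OP_NOP4)

    def bare_ctv_spk(template_hash: bytes) -> bytes:
        # <PUSH32> <hash> <OP_CHECKTEMPLATEVERIFY> is the whole spend
        # condition. The hash commits to the spending tx's version,
        # locktime, sequences, input/output counts, outputs, and the
        # input index (per the BIP-119 draft).
        assert len(template_hash) == 32
        return bytes([0x20]) + template_hash + bytes([OP_CHECKTEMPLATEVERIFY])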

> We could just as well be talking about
> TLUV/CAT+CHECKSIGFROMSTACK/etc, and nearly anyone who is going to use
> CTV can probably just as easily use those instead - ie this has
> nothing to do with "will people use it".

This vault design (https://github.com/jamesob/simple-ctv-vault)
is a good benchmark for evaluating covenant proposals because it's (i)
simple and (ii) has high utility for many users of Bitcoin. I would
love to see it implemented in one or all of these alternatives, but I
am almost certain no one will do it in the next few months because the
implementations, tooling, and in some cases even complete
specifications do not exist.

Why that is, after years of discussion and with the utility of
covenants widely appreciated, is indicative to me.
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] CTV vaults in the wild

2022-03-08 Thread James O'Beirne via bitcoin-dev
Hey Antoine,

Thanks for taking a look at the repo.

> I believe it's reasonable to expect bugs to slip in affecting the
> output amount or relative-timelock setting correctness

I don't really see the vaults case as any different from other
sufficiently involved uses of bitcoin script - I don't remember anyone
raising these concerns for lightning scripts or DLCs or tapscript use,
any of which could be catastrophic if wallet implementations are not
tested properly.

By comparison, decreasing amount per vault step and one CSV use
seems pretty simple. It's certainly easy to test (as the repo shows),
and really the only parameter the user has is how many blocks to delay
to the `tohot_tx` and perhaps fee-rate. Not too hard to test
comprehensively as far as I can tell.
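
For reference, the two spend paths hanging off the unvault output look
roughly like this (an illustrative pseudo-script; the placeholder names
are mine, not the repo's exact encoding):

    # Illustrative pseudo-script only: the hot path matures after the
    # CSV delay, while the keyless to-cold CTV sweep is available
    # immediately.
    unvault_spend_paths = [
        "OP_IF",
        "    <block_delay> OP_CHECKSEQUENCEVERIFY OP_DROP",
        "    <hot_pubkey> OP_CHECKSIG",                  # delayed hot path
        "OP_ELSE",
        "    <tocold_ctv_hash> OP_CHECKTEMPLATEVERIFY",  # immediate cold path
        "OP_ENDIF",
    ]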


> I think the main concern I have with any hashchain-based vault design
> is the immutability of the flow paths once the funds are locked to the
> root vault UTXO.

Isn't this kind of inherent to the idea of covenants? You're
precommitting to a spend path. You can put in as many "escape-hatch"
conditions as you want (e.g. Jeremy makes the good point I should
include an immediate-to-cold step that is sibling to the unvaulting),
but fundamentally if you're doing covenants, you're precommitting to a
flow of funds. Otherwise what's the point?


> I think the remaining presence of trusted hardware in the vault design
> might lead one to ask what's the security advantage of vaults compared
> to classic multisig setup.

Who's saying to trust hardware? Your cold key in the vault structure
could have been generated by performing SHA rounds with the
pebbles in your neighbor's zen garden.

Keeping an actively used multi-sig setup secure certainly isn't free or
easy. Multi-sig ceremonies (which of course can be used in this scheme)
can be cumbersome to coordinate.

If there's a known scheme that doesn't require covenants, but has
similar usage and security characteristics, I'd love
to know it! But being able to lock coins up for an arbitrary amount of
time and then have advance notice of an attempted spend only seems
possible with some kind of covenant technique.

> That said, I think this security advantage is only relevant in the
> context of recursive design, where the partial unvault sends back the
> remaining funds to vault UTXO (not the design proposed here).

I'm not really sure why this would be. Yeah, it would be cool to be able
to partially unvault arbitrary amounts or something, but that seems like
another order of complexity. Personally, I'd be happy to "tranche up"
funds I'd like to store into a collection of single-hop vaults vs.
the techniques available to us today.


> I think you might need to introduce an intermediary, out-of-chain
> protocol step where the unvault broadcast is formally authorized by
> the vault stakeholders. Otherwise it's hard to qualify "unexpected",
> as hot key compromise might not be efficiently detected.

Sure; if you're using vaults I think it's safe to assume you're a fairly
sophisticated user of bitcoin, so running a process that monitors the
chain and responds immediately with keyless to-cold broadcasts
doesn't seem totally out of the question, especially with conservative
block delays.

Pretty straightforward to send such a process (whether it's a program or
a collection of humans) an authenticated signal that says "hey, expect a
withdrawal." This kind of alert allows for cross-referencing the
activity and seems a lot better than nothing!
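
A minimal sketch of such a monitor, under the assumptions that
bitcoin-cli is available and the to-cold sweep is held as raw hex (the
outpoint and polling interval are placeholders; a real deployment would
add alerting and retries):

    import subprocess, time

    VAULT_TXID, VAULT_VOUT = "<vault txid>", "0"   # hypothetical outpoint
    TOCOLD_HEX = "<raw to-cold tx hex>"            # pre-committed sweep
    expected_withdrawal = False  # flipped by an authenticated signal

    def cli(*args: str) -> str:
        return subprocess.check_output(("bitcoin-cli",) + args).decode()

    while True:
        # gettxout prints nothing once the vault outpoint is spent,
        # i.e. once an unvault transaction has been seen/confirmed.
        if cli("gettxout", VAULT_TXID, VAULT_VOUT).strip() == "":
            if not expected_withdrawal:
                cli("sendrawtransaction", TOCOLD_HEX)  # keyless to-cold
            break
        time.sleep(30)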

> Don't you also need the endpoint scriptPubkeys (<to_cold_spk>,
> <to_hot_spk>), the amounts and CSV value? Though I think you can
> grind amounts and CSV value in case of loss... But I'm not sure if you
> remove the critical data persistence requirement, just reduce the
> surface.

With any use of bitcoin you're going to have critical data that needs to
be maintained (your privkeys at a minimum), so the game is always
reducing surface area. If the presigned-txn vault design
appealed to you as a user, this seems like a strict improvement.

> I'm not sure if the usage of anchor output is safe for any vault
> deployment where the funds stakeholders do not trust each other or
> where the watchtowers are not trusted.

I'm not sure who's proposing that counterparties who don't trust each
other make a vault together. I'm thinking of individual users and
custodians, each of which functions as a single trusted entity.

Perhaps your point here is that if I'm a custodian operating a vault and
someone unexpectedly hacks the fee keys that encumber all of my anchor
outputs, they can possibly pin my attempted response to the unvault
transaction - and that's true. But that doesn't seem like a fault unique
to this scheme, and points to the need for better fee-bumping a la
SIGHASH_GROUP or transaction sponsors.[0]

> I would say space efficiency is of secondary concern

If every major custodian ends up implementing some type of vault scheme
(not out of the question), this might be a lot of space! However I'm 

[bitcoin-dev] CTV vaults in the wild

2022-03-06 Thread James O'Beirne via bitcoin-dev
A few months ago, AJ wrote[0]

> I'm not really convinced CTV is ready to start trying to deploy
> on mainnet even in the next six months; I'd much rather see some real
> third-party experimentation *somewhere* public first

In the spirit of real third-party experimentation *somewhere* in public,
I've created this implementation and write-up of a simple vault design
using CTV.

   https://github.com/jamesob/simple-ctv-vault

I think it has a number of appealing characteristics for custody
operations at any size.

Regards,
James


[0]:
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-February/019920.html
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Thoughts on fee bumping

2022-02-17 Thread James O'Beirne via bitcoin-dev
> Is it really true that miners do/should care about that?

De facto, any miner running an unmodified version of bitcoind doesn't
care about anything aside from ancestor fee rate, given that the
BlockAssembler as-written orders transactions for inclusion by
descending ancestor fee-rate and then greedily adds them to the block
template. [0]

If anyone has any indication that there are miners running forks of
bitcoind that change this behavior, I'd be curious to know it.

Along the lines of what AJ wrote, optimal transaction selection is
NP-hard (knapsack problem). Any time that a miner spends deciding how
to assemble the next block is time not spent grinding on the nonce, and
so I'm skeptical that miners in practice are currently doing anything
that isn't fast and simple like the default implementation: sorting
fee-rate in descending order and then greedily packing.

But it would be interesting to hear evidence to the contrary.
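
For reference, a toy model of that default behavior - though note the
real BlockAssembler also re-scores remaining packages as their
ancestors get included:

    # A toy sketch of default-style template assembly: sort by ancestor
    # fee-rate descending, then pack greedily.
    MAX_BLOCK_WEIGHT = 4_000_000

    def assemble(packages):
        # packages: list of (ancestor_fee_sats, ancestor_weight) tuples
        template, weight = [], 0
        for fee, wt in sorted(packages, key=lambda p: p[0] / p[1],
                              reverse=True):
            if weight + wt <= MAX_BLOCK_WEIGHT:
                template.append((fee, wt))
                weight += wt
        return template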

---

You can make the argument that transaction selection is just a function
of mempool contents, and so mempool maintenance criteria might be the
thing to look at. Mempool acceptance is gated based on a minimum
feerate[1].  Mempool eviction (when running low on space) happens on
the basis of max(self_feerate, descendant_feerate) [2]. So even in the
mempool we're still talking in terms of fee rates, not absolute fees.
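
In sketch form, that eviction ordering is:

    def eviction_score(fee: int, size: int,
                       fee_with_descendants: int,
                       size_with_descendants: int) -> float:
        # Entries with the lowest score are trimmed first when the
        # mempool runs low on space.
        return max(fee / size,
                   fee_with_descendants / size_with_descendants)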

That presents us with the "is/ought" problem: just because the mempool
*is* currently gating only on fee rate doesn't mean that's optimal. But
if the whole point of the mempool is to hold transactions that will be
mined, and if there's good reason that txns are chosen for mining based
on fee rate (it's quick and good enough), then it seems like fee rate
is the approximation that should ultimately prevail for txn
replacement.


[0]:
https://github.com/bitcoin/bitcoin/blob/master/src/node/miner.cpp#L310-L320
[1]:
https://github.com/bitcoin/bitcoin/blob/master/src/txmempool.cpp#L1106
[2]:
https://github.com/bitcoin/bitcoin/blob/master/src/txmempool.cpp#L1138-L1144
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Thoughts on fee bumping

2022-02-16 Thread James O'Beirne via bitcoin-dev
> What do you mean by monotone in the context of sponsor transactions?

I take this to mean that the validity of a sponsor txn is
"monotonically" true at any point after the inclusion of the sponsored
txn in a block.

> And when you say tx-index, do you mean an index for looking up a
> transaction by its ID? Is that not already something nodes do?

Indeed, not all nodes have this ability. Each bitcoind node has a map
of unspent coins which can be referenced by outpoint, i.e. (txid, index),
but the same isn't true for all historical transactions. I
(embarrassingly) forgot this in the prior post.

The map of (txid -> transaction) for all time is a separate index that
must be enabled via the `-txindex=1` flag; it isn't enabled by default
because it isn't required for consensus and its growth is unbounded.
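
For example, a lookup that depends on that flag (a sketch;
getrawtransaction succeeds without -txindex only for mempool
transactions or when given a blockhash argument):

    import subprocess

    def historical_tx(txid: str) -> str:
        # Requires bitcoind started with -txindex=1 for arbitrary
        # historical (spent, non-mempool) txids.
        out = subprocess.check_output(
            ["bitcoin-cli", "getrawtransaction", txid])
        return out.decode().strip()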

> > The current consensus threshold for transactions to become invalid
> > is a 100 block reorg
>
> What do you mean by this? The only 100 block period I'm aware of is
> the coinbase cooldown period.

If there were a reorg deeper than 100 blocks, it would permanently
invalidate any transactions spending the recently-matured coinbase
subsidy in any block between $new_reorg_tip and ($former_tip_height -
100). These invalidated spends would not be able to be reorganized
into a new replacement chain.

How this differs in practice or principle from a "regular" double-spend
via reorg I'll leave for another message. I'm not sure that I understand
that myself. Personally I think if we hit a >100 block reorg, we've got
bigger issues than coinbase invalidation.
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Thoughts on fee bumping

2022-02-15 Thread James O'Beirne via bitcoin-dev
> The downside is that in a 6 block reorg any transaction that is moved
> past its expiration date becomes invalid and all its descendants
> become invalid too.

Worth noting that the transaction sponsors design is no worse an
offender on this count than, say, CPFP is, provided we adopt the change
that sponsored txids are required to be included in the current block
*or* prior blocks. (The original proposal allowed current block only).

In other words, the sponsored txids are just "virtual inputs" to the
sponsor transaction.
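
Concretely, the amended validity rule sketches out to something like:

    def sponsor_valid(sponsored_txids, this_block_txids, prior_chain_txids):
        # Every sponsored txid must appear in this block or an earlier
        # one - the same property a regular input's prevout already has.
        return all(t in this_block_txids or t in prior_chain_txids
                   for t in sponsored_txids)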

This is a much different case than e.g. transaction expiry based on
wall-clock time or block height, which I agree complicates reorgs
significantly.
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Thoughts on fee bumping

2022-02-14 Thread James O'Beirne via bitcoin-dev
Thanks for your thoughtful reply Antoine.

> In a distributed system such as the Bitcoin p2p network, you might
> have transaction A and transaction B  broadcast at the same time and
> your peer topology might fluctuate between original send and
> broadcast of the diff, you don't know who's seen what... You might
> inefficiently announce diff A on top of B and diff B on top A. We
> might leverage set reconciliation there a la Erlay, though likely
> with increased round-trips.

In the context of fee bumping, I don't see how this is a criticism
unique to transaction sponsors, since it also applies to CPFP: if you
tried to bump fees for transaction A with child txn B, if some mempool
hasn't seen parent A, it will reject B.

> Have you heard about SIGHASH_GROUP [0] ?

I haven't - I'll spend some time reviewing this. Thanks.

> > [me complaining CPFP requires lock-in to keys]
>
> It's true it requires to pre-specify the fee-bumping key. Though note
> the fee-bumping key can be fully separated from the
> "vaults"/"channels" set of main keys and hosted on replicated
> infrastructure such as watchtowers.

This still doesn't address the issue I'm talking about, which is if you
pre-commit to some "fee-bumping" key in your CPFP outputs and that key
ends up being compromised. This isn't a matter of data availability or
redundancy.

Note that this failure may be unique to vault use cases, when you're
pre-generating potentially large numbers of transactions or covenants
that cannot be altered after the fact. If you generate vault txns that
assume the use of some key for CPFP-based fee bumping and that key
winds up being compromised, that puts you in an uncomfortable
situation: you can no longer bump fees on unvaulting transactions,
rendering the vaults possibly unretrievable depending on the fee market.

> As a L2 transaction issuer you can't be sure the transaction you wish
> to point to is already in the mempool, or have not been replaced by
> your counterparty spending the same shared-utxo, either competitively
> or maliciously. So as a measure of caution, you should broadcast
> sponsor + target transactions in the same package, thus cancelling
> the bandwidth saving (I think).

As I mentioned in the reply to Matt's message, I'm not quite
understanding this idea of wanting to bump the fee for something
without knowing what it is; that doesn't make much sense to me.
The "bump fee" operation seems contingent on knowing
what you want to bump.

And if you're, say, trying to broadcast a lightning channel close and
you know you need to bump the fee right away, before even broadcasting
it, either you're going to

- reformulate the txn to bring up the fee rate (e.g. add inputs
  with some yet-undeployed sighash) as you would have done with RBF, or

- you'd have the same "package relay" problem with CPFP that you
  would with transaction sponsors.

So I don't understand the objection here.

Also, I didn't mean to discourage existing work on package relay or
fixing RBF, which seem clearly important. Maybe I should have noted
that explicitly in the original message.

> I don't think a sponsor is a silver-bullet to solve all the
> L2-related mempool issues. It won't solve the most concerning pinning
> attacks, as I think the bottleneck is replace-by-fee. Neither solve
> the issues encumbered by the L2s by the dust limit.

I'm not familiar with the L2 dust-limit issues, and I do think that
"fixing" RBF behavior is *probably* worthwhile. Those issues aside, I
think the transaction sponsors idea may be closer to a silver bullet
than you're giving it credit for, because designing specifically for the
fee-management use case has some big benefits.

For one, it makes migration easier. That is to say: there is none,
whereas there is existing RBF policy that needs consideration.

But maybe more importantly, transaction sponsors' limited use case also
allows for specifying much more targeted "replacement" policy since
sponsors are special-purpose transactions that only exist to
dynamically bump feerate. E.g. my SIGHASH_{NONE,SINGLE}|ANYONECANPAY
proposal might make complete sense for the sponsors/fee-management use
case, and clarify the replacement problem, but obviously wouldn't work
for more general transaction replacement. In other words, RBF's
general nature might make it a much harder problem to solve well.
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Thoughts on fee bumping

2022-02-14 Thread James O'Beirne via bitcoin-dev
> This entirely misses the network cost. Yes, sure, we can send
> "diffs", but if you send enough diffs eventually you send a lot of data.

The whole point of that section of the email was to consider the
network cost. There are many cases for which transmitting a
supplementary 1-in-1-out transaction (i.e. a sponsorship txn) is going
to be more efficient from a bandwidth standpoint than rebroadcasting a
potentially large txn during RBF.
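
Back-of-the-envelope, with sizes assumed purely for illustration:

    large_tx_vbytes = 5_000  # e.g. a batched or vault-tree transaction
    sponsor_vbytes = 110     # roughly a 1-in-1-out P2WPKH sponsor
    print(f"~{large_tx_vbytes / sponsor_vbytes:.0f}x fewer relay bytes "
          "per bump vs. RBF rebroadcast")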

> > In an ideal design, special structural foresight would not be
> > needed in order for a txn's feerate to be improved after broadcast.
> >
> > Anchor outputs specified solely for CPFP, which amount to many
> > bytes of wasted chainspace, are a hack. It's probably
> > uncontroversial at this
>
> This has nothing to do with fee bumping, though, this is only solved
> with covenants or something in that direction, not different relay
> policy.

My post isn't only about relay policy; it's that txn
sponsors allows for fee-bumping in cases where RBF isn't possible and
CPFP would be wasteful, e.g. for a tree of precomputed vault
transactions or - maybe more generally - certain kinds of
covenants.

> How does this not also fail your above criteria of not wasting block
> space?

In certain cases (e.g. vault structures), using sponsorship txns to
bump fees as-needed is more blockspace-efficient than including
mostly-unused CPFP "anchor" outputs that pay to fee-management wallets.
I'm betting there are other similar cases where CPFP anchors are
included but not necessarily used, and amount to wasted blockspace.

> Further, this doesn't solve pinning attacks at all. In lightning we
> want to be able to *replace* something in the mempool (or see it
> confirm soon, but that assumes we know exactly what transaction is in
> "the" mempool). Just being able to sponsor something doesn't help if
> you don't know what that thing is.

When would you be trying to bump the fee on a transaction without
knowing what it is? Seeing a specific transaction "stuck" in the
mempool seems to be a prerequisite to bumping fees. I'm not sure what
you're getting at here.
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Recursive covenant opposition, or the absence thereof, was Re: TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT

2022-02-11 Thread James O'Beirne via bitcoin-dev
I don't oppose recursive covenants per se, but in prior posts I have
expressed uncertainty about proposals that enable more "featureful"
covenants by adding more kinds of computation into bitcoin script.

Not that anyone here is necessarily saying otherwise, but I am very
interested in limiting operations in bitcoin script to "verification" (vs.
"computation") to the extent practical, and instead encouraging general
computation be done off-chain. This of course isn't a new observation and I
think the last few years have been very successful to that effect, e.g. the
popularity of the "scriptless scripts" idea and Taproot's emphasis on
embedding computational artifacts in key tweaks.

My (maybe unfounded?) worry about opcodes like OP_CAT and OP_TX is that
more logic will live in script than is necessary, and so the burden to
verify the chain may grow and the extra "degrees of freedom" in script may
make it harder to reason about. But I guess at this point there aren't
alternative means to construct new kinds of sighashes that are necessary
for some interesting covenants.

One thing I like about CTV is that it buys a lot of functionality without
increasing the "surface area" of script's design. In general I think there
is a lot to be said for this "jets"-style approach[0] of codifying the
script operations that you'd actually want to do into single opcodes. This
adds functionality while introducing minimal surface area to script, giving
script implementers more flexibility for, say, optimization. But of course
this comes at the cost of precluding experimentation, and probably
requiring more soft-forking. Though whether the place for script
experimentation using more general-purpose opcodes on the main chain is
another interesting debate...

Sorry for going a little off-topic there.

[0]: https://medium.com/blockstream/simplicity-jets-release-803db10fd589


On Thu, Feb 10, 2022 at 7:55 PM David A. Harding via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> On Mon, Feb 07, 2022 at 08:34:30PM -0800, Jeremy Rubin via bitcoin-dev
> wrote:
> > Whether [recursive covenants] is an issue or not precluding this sort
> > of design or not, I defer to others.
>
> For reference, I believe the last time the merits of allowing recursive
> covenants was discussed at length on this list[1], not a single person
> replied to say that they were opposed to the idea.
>
> I would like to suggest that anyone opposed to recursive covenants speak
> for themselves (if any intelligent such people exist).  Citing the risk
> of recursive covenants without presenting a credible argument for the
> source of that risk feels to me like (at best) stop energy[2] and (at
> worst) FUD.
>
> -Dave
>
> [1]
> https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-July/019203.html
> [2]
> http://radio-weblogs.com/0107584/stories/2002/05/05/stopEnergyByDaveWiner.html
> (thanks to AJ who told me about stop energy one time when I was
> producing it)
>
> ___
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Thoughts on fee bumping

2022-02-10 Thread James O'Beirne via bitcoin-dev
> It's not that simple. As a miner, if I have less than 1vMB of
transactions in my mempool, I don't want a 10sats/vb transaction paying
10sats replaced by a 100sats/vb transaction paying only 1sats.

I don't understand why the "<1vMB in the mempool" case is even worth
consideration because the miner will just include the entire mempool in the
next block regardless of feerate.
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


[bitcoin-dev] Thoughts on fee bumping

2022-02-10 Thread James O'Beirne via bitcoin-dev
There's been much talk about fee-bumping lately, and for good reason -
dynamic fee management is going to be a central part of bitcoin use as
the mempool fills up (lord willing) and right now fee-bumping is
fraught with difficulty and pinning peril.

Gloria's recent post on the topic[0] was very lucid and highlights a
lot of the current issues, as well as some proposals to improve the
situation.

As others have noted, the post was great. But throughout the course
of reading it and the ensuing discussion, I became troubled by the
increasing complexity of both the status quo and some of the
proposed remedies.

Layering on special cases, more carve-outs, and X and Y percentage
thresholds is going to make reasoning about the mempool harder than it
already is. Special consideration for "what should be in the next
block" and/or the caching of block templates seems like an imposing
dependency, dragging in a bunch of state and infrastructure to a
question that should be solely limited to mempool feerate aggregates
and the feerate of the particular txn package a wallet is concerned
with.

This is bad enough for protocol designers and Core developers, but
making the situation any more intractable for "end-users" and wallet
developers feels wrong.

I thought it might be useful to step back and reframe. Here are a few
aims that are motivated chiefly by the quality of end-user experience,
constrained to obey incentive compatibility (i.e. miner reward, DoS
avoidance). Forgive the abstract dalliance for a moment; I'll talk
through concretes afterwards.


# Purely additive feerate bumps should never be impossible

Any user should always be able to add to the incentive to mine any
transaction in a purely additive way. The countervailing force here
ends up being spam prevention (a la min-relay-fee) to prevent someone
from consuming bandwidth and mempool space with a long series of
infinitesimal fee-bumps.

A fee bump, naturally, should be given the same per-byte consideration
as a normal Bitcoin transaction in terms of relay and block space,
although it would be nice to come up with a more succinct
representation. This leads to another design principle:


# The bandwidth and chain space consumed by a fee-bump should be minimal

Instead of prompting a rebroadcast of the original transaction for
replacement, which contains a lot of data not new to the network, it
makes more sense to broadcast the "diff" which is the additive
contribution towards some txn's feerate.

This dovetails with the idea that...


# Special transaction structure should not be required to bump fees

In an ideal design, special structural foresight would not be needed
in order for a txn's feerate to be improved after broadcast.

Anchor outputs specified solely for CPFP, which amount to many bytes of
wasted chainspace, are a hack. It's probably uncontroversial at this
point to say that even RBF itself is kind of a hack - a special
sequence number should not be necessary for post-broadcast contribution
toward feerate. Not to mention RBF's seemingly wasteful consumption of
bandwidth due to the rebroadcast of data the network has already seen.

In a sane design, no structural foresight - and certainly no wasted
bytes in the form of unused anchor outputs - should be needed in order
to add to a miner's reward for confirming a given transaction.

Planning for fee-bumps explicitly in transaction structure also often
winds up locking in which keys are required to bump fees, at odds
with the idea that...


# Feerate bumps should be able to come from anywhere

One of the practical downsides of CPFP that I haven't seen discussed in
this conversation is that it requires the transaction to pre-specify the
keys needed to sign for fee bumps. This is problematic if you're, for
example, using a vault structure that makes use of pre-signed
transactions.

What if the key you specified in the anchor outputs for a bunch of
pre-signed txns is compromised? What if you'd like to be able to
dynamically select the wallet that bumps fees? CPFP does you no favors
here.

There is of course a tension between allowing fee bumps to come from
anywhere and the threat of pinning-like attacks. So we should venture
to remove pinning as a possibility, in line with the first design
principle I discuss.


---

Coming down to earth, the "tabula rasa" thought experiment above has led
me to favor an approach like the transaction sponsors design that Jeremy
proposed in a prior discussion back in 2020[1].

Transaction sponsors allow feerates to be bumped after a transaction's
broadcast, regardless of the structure of the original transaction.
No rebroadcast (wasted bandwidth) is required for the original txn data.
No wasted chainspace on only-maybe-used prophylactic anchor outputs.

The interface for end-users is very straightforward: if you want to bump
fees, specify a transaction that contributes incrementally to package
feerate for some txid. Simple.
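
In sketch form - noting that the actual proposal encodes sponsored
txids inside an output script, so the shape below is only illustrative:

    from dataclasses import dataclass, field

    @dataclass
    class SponsorTx:
        sponsored_txid: str   # the "virtual input" being fee-bumped
        fee_sats: int         # the purely additive fee contribution
        inputs: list = field(default_factory=list)   # funding coin(s)
        outputs: list = field(default_factory=list)  # change

    bump = SponsorTx(sponsored_txid="<txid of stuck txn>", fee_sats=2_000)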

In the original discussion, there were a few m

Re: [bitcoin-dev] TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT

2022-01-28 Thread James O'Beirne via bitcoin-dev
> Technical debt isn't a measure of weight of transactions.

Sorry, my original sentence was a little unclear. I meant to say that the
notion that CTV is just a subpar waypoint en route to a more general
covenant system may not be accurate if it is a more efficient way (in terms
of chainstate/weight) to express highly useful patterns like vaults. In
that case, characterizing CTV as technical debt wouldn't be right.

> Our only option here is to be mindful of the long term implications of
the design choices we are making today.

Your points are well taken - I don't think anyone is arguing against
thinking hard about consensus changes. But I have yet to see a proposal for
covenants that is as efficient on-chain and easy to reason about as CTV is.

I also think there's some value in "legging into" covenants by deploying a
simple, non-recursive construction like CTV that services some very
important uses, and then taking as much time as necessary to think about
how to solve more existential problems, like UTXO scalability, that likely
require a recursive covenant construction.

There doesn't have to be mutual exclusion in the approaches, especially
when the maintenance burden of CTV seems to be so low. If we end up
deploying something that requires a wider variety of in-script hashing, it
seems likely that CTV's hash will be able to "free ride" on whatever more
general sighash cache structure we come up with.
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT

2022-01-28 Thread James O'Beirne via bitcoin-dev
> I don't think implementing a CTV opcode that we expect to largely be
obsoleted by a TXHASH at a later date is yielding good value from a soft
fork process.

This presumes the eventual adoption of TXHASH (or something like it).
You're presenting a novel idea that, as far as I know, hasn't had much time
to bake in public. Like Jeremy, I'm concerned by the combinatorial growth
of flags and the implications that has for testing. Caching for something
like TXHASH looks to me like a whole different ballgame relative to CTV,
which has a single kind of hash.

Even if we were to adopt something like TXHASH, how long is it going to
take to develop, test, and release? My guess is "a while" - in the
meantime, users of Bitcoin are without a vault strategy that doesn't
require either presigning transactions with ephemeral keys (operationally
difficult) or multisig configurations that would make Rube Goldberg blush
(operationally difficult and precarious). The utility of vaulting seems
underappreciated among consensus devs and it's something I'd like to write
about soon in a separate post.

> The strongest argument I can make in favour of CTV would be something
like: "We definitely want bare CTV and if we are going to add CTV to legacy
script (since we cannot use TXHASH in legacy script), then it is actually
easier not to exclude it from tapscript, even if we plan to add TXHASH to
tapscript as well."

Another argument for CTV (which I find especially persuasive) is its
simplicity - it's relatively easy to reason about and, at this point,
pretty well understood. It seems like a low-risk change relative to some of
the other covenant proposals, nearly all of which elicit a good deal of
headscratching (at least from me) and seem to require not only larger
on-chain footprints but sizable code changes.

> I am sensitive to technical debt and soft fork processes

If OP_CTV ends up being the most practical approach for vaulting - among
other things - in terms of weight (which it seems to be at the moment) I
don't think "technical debt" is an applicable term.

On Thu, Jan 27, 2022 at 5:20 PM Russell O'Connor via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> I am sensitive to technical debt and soft fork processes, and I don't
> believe I'm unordinary particular about these issues.  Once implemented,
> opcodes must be supported and maintained indefinitely.  Some opcodes are
> easier to maintain than others.  These particular opcodes involve caching
> of hash computations and, for that reason, I would judge them to be of
> moderate complexity.
>
> But more importantly, soft-forks are inherently a risky process, so we
> should be getting as much value out of them as we reasonably can. I don't
> think implementing a CTV opcode that we expect to largely be obsoleted by a
> TXHASH at a later date is yielding good value from a soft fork process.
>
> The strongest argument I can make in favour of CTV would be something
> like: "We definitely want bare CTV and if we are going to add CTV to legacy
> script (since we cannot use TXHASH in legacy script), then it is actually
> easier not to exclude it from tapscript, even if we plan to add TXHASH to
> tapscript as well."
>
> But that argument basically rests the entire value of CTV on the shoulders
> of bare CTV.  As I understand, the argument for why we want bare CTV,
> instead of just letting people use tapscript, involves the finer details of
> weight calculations, and I haven't really reviewed that aspect yet.  I
> think it would need to be pretty compelling to make it worthwhile to add
> CTV for that one use case.
>
>
> Regarding "OP_TXHASH+CSFSV doesn't seem to be the 'full' set of things
> needed", I totally agree we will want more things such as CAT, rolling
> SHA256 opcodes, wider arithmetic, pushing amounts onto the stack, some kind
> of tapleaf manipulation and/or TWEAKVERIFY.  For now, I only want to argue
> TXHASH+CSFSV is better than CTV+APO because it gives us more value, namely
> oracle signature verification.  In particular, I want to argue that
> TXHASH's push semantics is better that CTV's verify semantics because it
> composes better by not needing to carry an extra 32-bytes (per instance) in
> the witness data.  I expect that in a world of full recursive covenants,
> TXHASH would still be useful as a fast and cheap way to verify the
> "payload" of these covenants, i.e. that a transaction is paying a certain,
> possibly large, set of addresses certain specific amounts of money.  And
> even if not, TXHASH+CSFSV would still be the way that eltoo would be
> implemented under this proposal.
>
> On Wed, Jan 26, 2022 at 5:16 PM Jeremy  wrote:
>
>> Hi Russell,
>>
>> Thanks for this email, it's great to see this approach described.
>>
>> A few preliminary notes of feedback:
>>
>> 1) a Verify approach can be made to work for OP_TXHASH (even with CTV
>> as-is) E.g., suppose a semantic added for a single byte stack[-1] sighash
>> flag to read the hash at stack[-2],

Re: [bitcoin-dev] Proposed BIP editor: Kalle Alm

2021-04-26 Thread James O'Beirne via bitcoin-dev
ACK for Kalle.

On Mon, Apr 26, 2021, 09:55 Sjors Provoost via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> ACK for adding Kalle.
>
> Recent drama aside, having a single editor is not ideal. There's currently
> 110 open pull requests to the BIPs repo.
>
> - Sjors
>
> > Op 23 apr. 2021, om 04:09 heeft Luke Dashjr via bitcoin-dev <
> bitcoin-dev@lists.linuxfoundation.org> het volgende geschreven:
> >
> > Unless there are objections, I intend to add Kalle Alm as a BIP editor to
> > assist in merging PRs into the bips git repo.
> >
> > Since there is no explicit process to adding BIP editors, IMO it should
> be
> > fine to use BIP 2's Process BIP progression:
> >
> >> A process BIP may change status from Draft to Active when it achieves
> >> rough consensus on the mailing list. Such a proposal is said to have
> >> rough consensus if it has been open to discussion on the development
> >> mailing list for at least one month, and no person maintains any
> >> unaddressed substantiated objections to it.
> >
> > A Process BIP could be opened for each new editor, but IMO that is
> > unnecessary. If anyone feels there is a need for a new Process BIP, we
> can go
> > that route, but there is prior precedent for BIP editors appointing new
> BIP
> > editors, so I think this should be fine.
> >
> > Please speak up soon if you disagree.
> >
> > Luke
> > ___
> > bitcoin-dev mailing list
> > bitcoin-dev@lists.linuxfoundation.org
> > https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>
> ___
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] assumeutxo and UTXO snapshots

2019-04-23 Thread James O'Beirne via bitcoin-dev
Good morning all,

Over the past weeks I've had a number of conversations with a few frequent
contributors about this idea. I've condensed these discussions into a
proposal document which you can view here:
https://github.com/jamesob/assumeutxo-docs/tree/2019-04-proposal/proposal

The document is structured as an FAQ, and so hopefully it addresses some of
the common questions that would come up in this thread. If you'd like to
comment, there's an associated pull request here:
https://github.com/jamesob/assumeutxo-docs/pull/1

Regards,
James


On Tue, Apr 2, 2019 at 4:43 PM James O'Beirne 
wrote:

> Hi,
>
> I'd like to discuss assumeutxo, which is an appealing and simple
> optimization in the spirit of assumevalid[0].
>
> # Motivation
>
> To start a fully validating bitcoin client from scratch, that client
> currently
> needs to perform an initial block download. To the surprise of no one, IBD
> takes a linear amount time based on the length of the chain's history. For
> clients running on modest hardware under limited bandwidth constraints,
> say a mobile device, completing IBD takes a considerable amount of time
> and thus poses serious usability challenges.
>
> As a result, having fully validating clients run on such hardware is rare
> and
> basically unrealistic. Clients with even moderate resource constraints
> are encouraged to rely on the SPV trust model. Though we have promising
> improvements to existing SPV modes pending deployment[1], it's worth
> thinking about a mechanism that would allow such clients to use trust
> models closer to full validation.
>
> The subject of this mail is a proposal for a complementary alternative to
> SPV
> modes, and which is in the spirit of an existing default, `assumevalid`.
> It may
> help modest clients transact under a security model that closely resembles
> full validation within minutes instead of hours or days.
>
> # assumeutxo
>
> The basic idea is to allow nodes to initialize using a serialized version
> of the
> UTXO set rendered by another node at some predetermined height. The
> initializing node syncs the headers chain from the network, then obtains
> and
> loads one of these UTXO snapshots (i.e. a serialized version of the UTXO
> set
> bundled with the block header indicating its "base" and some other
> metadata).
>
> Based upon the snapshot, the node is able to quickly reconstruct its
> chainstate,
> and compares a hash of the resulting UTXO set to a preordained hash
> hard-coded
> in the software a la assumevalid. This all takes ~23 minutes, not
> accounting for
> download of the 3.2GB snapshot[2].
>
> The node then syncs to the network tip and afterwards begins a simultaneous
> background validation (i.e., a conventional IBD) up to the base height of
> the
> snapshot in order to achieve full validation. Crucially, even while the
> background validation is happening the node can validate incoming blocks
> and
> transact with the benefit of the full (assumed-valid) UTXO set.
>
> Snapshots could be obtained from multiple separate peers in the same
> manner as
> block download, but I haven't put much thought into this. In concept it
> doesn't
> matter too much where the snapshots come from since their validity is
> determined via content hash.
>
> # Security
>
> Obviously there are some security implications due consideration. While
> this
> proposal is in the spirit of assumevalid, practical attacks may become
> easier.
> Under assumevalid, a user can be tricked into transacting under a false
> history
> if an attacker convinces them to start bitcoind with a malicious
> `-assumevalid`
> parameter, sybils their node, and then feeds them a bogus chain
> encompassing
> all of the hard-coded checkpoints[3].
>
> The same attack is made easier in assumeutxo because, unlike in
> assumevalid,
> the attacker need not construct a valid PoW chain to get the victim's node
> into
> a false state; they simply need to get the user to accept a bad
> `-assumeutxo`
> parameter and then supply them an easily made UTXO snapshot containing,
> say, a
> false coin assignment.
>
> For this reason, I recommend that if we were to implement assumeutxo, we
> not
> allow its specification via commandline argument[4].
>
> Beyond this risk, I can't think of material differences in security
> relative to
> assumevalid, though I appeal to the list for help with this.
>
> # More fully validating clients
>
> A particularly exciting use-case for assumeutxo is the possibility of
> mobile
> devices functioning as fully validating nodes with access to the complete
> UTXO
> set (as an alternative to SPV models). The total resource burden needed to
> start a node
> from scratch based on a snapshot is, at time of writing, a ~(3.2GB
> + blocks_to_tip * 4MB) download and a few minutes of processing time,
> which sounds
> manageable for many mobile devices currently in use.
>
> A mobile user could initialize an assumed-valid bitcoin node within an
> hour,
> transact immediately, and complete a p

Re: [bitcoin-dev] assumeutxo and UTXO snapshots

2019-04-04 Thread James O'Beirne via bitcoin-dev
I recommend that anyone following this thread read through the recent IRC
exchange between Greg Maxwell and Luke Dashjr:
 http://www.erisian.com.au/bitcoin-core-dev/log-2019-04-04.html


The conversation starts on line 205 at 2019-04-04T02:54:50.

On Thu, Apr 4, 2019 at 2:38 AM Jim Posen via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Learning C++ is something within everyone's capability. Even people who do
>> not
>> wish to learn it can hire someone to perform review for them.
>>
>
> Anyone with enough knowledge of C++ to audit the entire the Bitcoin Core
> codebase is more than capable of running it with assumeutxo disabled and
> checking the hard-coded vale themself.
>
>> ___
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] assumeutxo and UTXO snapshots

2019-04-03 Thread James O'Beirne via bitcoin-dev
Thanks for the reply, Jonas. I should've figured someone had hit the
mailing list with this one before!

In hindsight, I may have overemphasized the use of this for low-powered
mobile devices. Indeed I think this may also be a worthwhile optimization
for common hardware too.

On the margin, if a user wants to interact with Bitcoin they will download
software that allows them to do it immediately - this results in many
people defaulting to a light client. If Bitcoin were able to initialize
from scratch in a comparable amount of time and then populate the full
chain in the background, we may have many more people *incidentally*
running full nodes.

Regardless of whether or not we use UTXO snapshots per se, I'd argue that
the pattern of doing some kind of quick initialization (whether it's with
assumed-valid data, or headers-contingent data like BIP157) and then
performing full validation in the background is a good way to ensure that
we have a healthier population of full nodes than we would otherwise.

For this reason, and (as Ethan points out) because IBD's linear setup time
is infeasible in the long-term, I think this pattern is an obvious
direction for the bitcoin client to go.

> * Do we semi-trust the peer that serves the UTXO set (compared to a
block or tx which we can validate)? What channel do we use to serve the
snapshot?

As you note in your post from 2016, where and how we retrieve the snapshot
is more or less immaterial because we compare a hash of its contents to a
previously specified value that the code ships with (the `assumeutxo`
hash). We don't need to trust the source serving it to us, although
bandwidth DoS prevention via some kind of chunked delivery from peers would be
worth thinking about.

Regards,
James

On Wed, Apr 3, 2019 at 2:37 AM Jonas Schnelli  wrote:

> Thanks James for the post.
>
> I proposed a similar idea [1] back in 2016 with the difference of signing
> the UTXO-set hash in a gitian-ish way.
>
> While the idea of UTXO-set-syncs is attractive, there are probably still
> significant downsides in usability (compared to models with less security),
> mainly:
> * Assume the UTXO set is 6 weeks old (which seems a reasonable age for
> providing enough security), a peer using that snapshot would still need
> to download and verify ~6048 blocks (~7.9GB at 1.3MB blocks,… probably
> CPU-days on a phone)
> * Do we semi-trust the peer that serves the UTXO set (compared to a block
> or tx which we can validate)? What channel do we use to serve the snapshot?
>
> If the goal is to run a full node on a consumer device that is also being
> used for other CPU-intensive operations (like a phone, etc.), I’m not sure if
> this proposal will lead to a satisfactory user experience.
>
> The longer I think about this problem, the more I lean towards accepting
> the fact that one need to use dedicated hardware in his own environment to
> perform a painless full validation.
>
> /jonas
>
> [1]
> https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2016-February/012478.html
>
> > Am 02.04.2019 um 22:43 schrieb James O'Beirne via bitcoin-dev <
> bitcoin-dev@lists.linuxfoundation.org>:
> >
> > Hi,
> >
> > I'd like to discuss assumeutxo, which is an appealing and simple
> > optimization in the spirit of assumevalid[0].
> >
> > # Motivation
> >
> > To start a fully validating bitcoin client from scratch, that client
> currently
> > needs to perform an initial block download. To the surprise of no one,
> IBD
> > takes a linear amount of time based on the length of the chain's history.
> For
> > clients running on modest hardware under limited bandwidth constraints,
> > say a mobile device, completing IBD takes a considerable amount of time
> > and thus poses serious usability challenges.
> >
> > As a result, having fully validating clients run on such hardware is
> rare and
> > basically unrealistic. Clients with even moderate resource constraints
> > are encouraged to rely on the SPV trust model. Though we have promising
> > improvements to existing SPV modes pending deployment[1], it's worth
> > thinking about a mechanism that would allow such clients to use trust
> > models closer to full validation.
> >
> > The subject of this mail is a proposal for a complementary alternative
> to SPV
> > modes, and which is in the spirit of an existing default, `assumevalid`.
> It may
> > help modest clients transact under a security model that closely
> resembles
> > full validation within minutes instead of hours or days.
> >
> > # assumeutxo
> >
> > The basic idea is to allow nodes to initialize using a serialized
> version of the
> > UTXO set rendered by 

[bitcoin-dev] assumeutxo and UTXO snapshots

2019-04-02 Thread James O'Beirne via bitcoin-dev
Hi,

I'd like to discuss assumeutxo, which is an appealing and simple
optimization in the spirit of assumevalid[0].

# Motivation

To start a fully validating bitcoin client from scratch, that client
currently
needs to perform an initial block download. To the surprise of no one, IBD
takes a linear amount of time based on the length of the chain's history. For
clients running on modest hardware under limited bandwidth constraints,
say a mobile device, completing IBD takes a considerable amount of time
and thus poses serious usability challenges.

As a result, having fully validating clients run on such hardware is rare
and
basically unrealistic. Clients with even moderate resource constraints
are encouraged to rely on the SPV trust model. Though we have promising
improvements to existing SPV modes pending deployment[1], it's worth
thinking about a mechanism that would allow such clients to use trust
models closer to full validation.

The subject of this mail is a proposal for a complementary alternative to
SPV
modes, and which is in the spirit of an existing default, `assumevalid`. It
may
help modest clients transact under a security model that closely resembles
full validation within minutes instead of hours or days.

# assumeutxo

The basic idea is to allow nodes to initialize using a serialized version
of the
UTXO set rendered by another node at some predetermined height. The
initializing node syncs the headers chain from the network, then obtains and
loads one of these UTXO snapshots (i.e. a serialized version of the UTXO set
bundled with the block header indicating its "base" and some other
metadata).

Based upon the snapshot, the node is able to quickly reconstruct its
chainstate,
and compares a hash of the resulting UTXO set to a preordained hash
hard-coded
in the software a la assumevalid. This all takes ~23 minutes, not
accounting for
download of the 3.2GB snapshot[2].
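
In sketch form - with the caveat that the height and hash below are
placeholders, and the real hash is computed over the deserialized UTXO
set rather than a flat file:

    import hashlib

    # Illustrative only: the base height and hash are placeholders.
    ASSUMEUTXO = {560_000: "<hard-coded snapshot content hash>"}

    def load_snapshot(path: str, base_height: int) -> None:
        digest = hashlib.sha256(open(path, "rb").read()).hexdigest()
        if digest != ASSUMEUTXO.get(base_height):
            raise ValueError("snapshot hash mismatch; refusing to load")
        # ...else: build the chainstate from the snapshot, sync to tip,
        # and kick off the background IBD up to base_height.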

The node then syncs to the network tip and afterwards begins a simultaneous
background validation (i.e., a conventional IBD) up to the base height of
the
snapshot in order to achieve full validation. Crucially, even while the
background validation is happening the node can validate incoming blocks and
transact with the benefit of the full (assumed-valid) UTXO set.

Snapshots could be obtained from multiple separate peers in the same manner
as
block download, but I haven't put much thought into this. In concept it
doesn't
matter too much where the snapshots come from since their validity is
determined via content hash.

# Security

Obviously there are some security implications due consideration. While this
proposal is in the spirit of assumevalid, practical attacks may become
easier.
Under assumevalid, a user can be tricked into transacting under a false
history
if an attacker convinces them to start bitcoind with a malicious
`-assumevalid`
parameter, sybils their node, and then feeds them a bogus chain encompassing
all of the hard-coded checkpoints[3].

The same attack is made easier in assumeutxo because, unlike in assumevalid,
the attacker need not construct a valid PoW chain to get the victim's node
into
a false state; they simply need to get the user to accept a bad
`-assumeutxo`
parameter and then supply them an easily made UTXO snapshot containing,
say, a
false coin assignment.

For this reason, I recommend that if we were to implement assumeutxo, we not
allow its specification via commandline argument[4].

Beyond this risk, I can't think of material differences in security
relative to
assumevalid, though I appeal to the list for help with this.

# More fully validating clients

A particularly exciting use-case for assumeutxo is the possibility of mobile
devices functioning as fully validating nodes with access to the complete
UTXO
set (as an alternative to SPV models). The total resource burden needed to
start a node
from scratch based on a snapshot is, at time of writing, a ~(3.2GB
+ blocks_to_tip * 4MB) download and a few minutes of processing time, which
sounds
manageable for many mobile devices currently in use.

A mobile user could initialize an assumed-valid bitcoin node within an hour,
transact immediately, and complete a pruned full validation of their
assumed-valid chain over the next few days, perhaps only doing the
background
IBD when their device has access to suitable high-bandwidth connections.

If we end up implementing an accumulator-based UTXO scaling design[5][6]
down
the road, it's easy to imagine an analogous process that would allow very
fast
startup using an accumulator of a few kilobytes in lieu of a multi-GB
snapshot.

---

I've created a related issue at our Github repository here:
  https://github.com/bitcoin/bitcoin/issues/15605

and have submitted a draft implementation of snapshot usage via RPC here:
  https://github.com/bitcoin/bitcoin/pull/15606

I'd like to discuss here whether this is a good fit for Bitcoin
conceptually. Concrete
plans for deployment steps should be discussed in the Github issu

Re: [bitcoin-dev] The new obfuscation patch & GetStats

2015-10-07 Thread James O'Beirne via bitcoin-dev
This has been confirmed as a bug. Thanks again for reporting. I've filed a
fix here (https://github.com/bitcoin/bitcoin/pull/6777), and will be
writing tests to prevent regressions.

On Wed, Oct 7, 2015 at 4:32 PM, James O'Beirne 
wrote:

> Hey, Daniel.
>
> Patch author here. Thanks for the diligence; I think this indeed may be an
> oversight, though I'm going to need to look into it a bit more thoroughly at
> home. Curious that it didn't fail any of the automated tests.
>
> Correct me if I'm wrong, but the only actual invocation of that method is
> here
> 
> (and even then, proxied through a few layers of CCoinsView-machinery). In
> fact, this line
>  makes
> me suspect that the implementation of GetStats you reference may be dead
> code.
>
> In any case, you raise a good point: if users of CLevelDBWrapper go
> directly for the iterator, they run the risk of dealing with obfuscated
> data. This should be remedied somehow.
>
> I'll give it a closer look this evening.
>
> Thanks again for the find,
> James
>
> On Wed, Oct 7, 2015 at 10:25 AM, Daniel Kraft via bitcoin-dev <
> bitcoin-dev@lists.linuxfoundation.org> wrote:
>
>> Hi!
>>
>> I hope this is not a stupid question, but I thought I'd ask here first
>> instead of opening a Github ticket (in case I'm wrong).
>>
>> With the recently merged "obfuscation" patch, content of the
>> "chainstate" LevelDB is obfuscated by XOR'ing against a random "key".
>> This is handled by CLevelDBWrapper's Read/Write methods, which probably
>> cover most of the use cases.
>>
>> *However*, shouldn't it also be handled when iterating over the
>> database?  In particular, I would expect that the obfuscation key is
>> applied before line 119 in txdb.cpp (i. e., while iterating over the
>> coin database in CCoinsViewDB::GetStats).
>>
>> Is there a reason why this need not be done there, or is this an actual
>> oversight?
>>
>> Yours,
>> Daniel
>>
>> --
>> http://www.domob.eu/
>> OpenPGP: 1142 850E 6DFF 65BA 63D6  88A8 B249 2AC4 A733 0737
>> Namecoin: id/domob -> https://nameid.org/?name=domob
>> --
>> Done:  Arc-Bar-Cav-Hea-Kni-Ran-Rog-Sam-Tou-Val-Wiz
>> To go: Mon-Pri
>>
>>
>> ___
>> bitcoin-dev mailing list
>> bitcoin-dev@lists.linuxfoundation.org
>> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>>
>>
>
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] The new obfuscation patch & GetStats

2015-10-07 Thread James O'Beirne via bitcoin-dev
Hey, Daniel.

Patch author here. Thanks for the diligence; I think this indeed may be an
oversight, though I'm going to need to look into it a bit more thoroughly at
home. Curious that it didn't fail any of the automated tests.

Correct me if I'm wrong, but the only actual invocation of that method is
here

(and even then, proxied through a few layers of CCoinView-machinery). In
fact, this line
 makes me
suspect that the implementation of GetStats you reference may be dead code.

In any case, you raise a good point: if users of CLevelDBWrapper go
directly for the iterator, they run the risk of dealing with obfuscated
data. This should be remedied somehow.
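
One possible shape for a remedy (a sketch only; the class name and interface
are illustrative, not necessarily what the eventual fix will look like):

    // Wrap the raw LevelDB iterator so values are deobfuscated before
    // callers see them; keys are stored in the clear and pass through.
    #include <cstddef>
    #include <vector>
    #include <leveldb/iterator.h>

    class CDeobfuscatingIterator {
        leveldb::Iterator* it_;
        std::vector<unsigned char> key_;
    public:
        CDeobfuscatingIterator(leveldb::Iterator* it,
                               std::vector<unsigned char> key)
            : it_(it), key_(std::move(key)) {}

        bool Valid() const { return it_->Valid(); }
        void SeekToFirst() { it_->SeekToFirst(); }
        void Next() { it_->Next(); }
        leveldb::Slice Key() const { return it_->key(); }

        // Copy the stored value and undo the XOR before returning it.
        std::vector<unsigned char> Value() const {
            leveldb::Slice v = it_->value();
            std::vector<unsigned char> out(v.data(), v.data() + v.size());
            for (std::size_t i = 0; i < out.size() && !key_.empty(); ++i)
                out[i] ^= key_[i % key_.size()];
            return out;
        }
    };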

I'll give it a closer look this evening.

Thanks again for the find,
James

On Wed, Oct 7, 2015 at 10:25 AM, Daniel Kraft via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Hi!
>
> I hope this is not a stupid question, but I thought I'd ask here first
> instead of opening a Github ticket (in case I'm wrong).
>
> With the recently merged "obfuscation" patch, content of the
> "chainstate" LevelDB is obfuscated by XOR'ing against a random "key".
> This is handled by CLevelDBWrapper's Read/Write methods, which probably
> cover most of the use cases.
>
> *However*, shouldn't it also be handled when iterating over the
> database?  In particular, I would expect that the obfuscation key is
> applied before line 119 in txdb.cpp (i. e., while iterating over the
> coin database in CCoinsViewDB::GetStats).
>
> Is there a reason why this need not be done there, or is this an actual
> oversight?
>
> Yours,
> Daniel
>
> --
> http://www.domob.eu/
> OpenPGP: 1142 850E 6DFF 65BA 63D6  88A8 B249 2AC4 A733 0737
> Namecoin: id/domob -> https://nameid.org/?name=domob
> --
> Done:  Arc-Bar-Cav-Hea-Kni-Ran-Rog-Sam-Tou-Val-Wiz
> To go: Mon-Pri
>
>
> ___
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>
>
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev