Re: [bitcoin-dev] Annex Purpose Discussion: OP_ANNEX, Turing Completeness, and other considerations

2022-03-04 Thread Anthony Towns via bitcoin-dev
On Fri, Mar 04, 2022 at 11:21:41PM +, Jeremy Rubin via bitcoin-dev wrote:
> I've seen some discussion of what the Annex can be used for in Bitcoin. 

https://www.erisian.com.au/meetbot/taproot-bip-review/2019/taproot-bip-review.2019-11-12-19.00.log.html

includes some discussion on that topic from the taproot review meetings.

The difference between information in the annex and information in
either a script (or the input data for the script that is the rest of
the witness) is (in theory) that the annex can be analysed immediately
and unconditionally, without necessarily even knowing anything about
the utxo being spent.

The idea is that we would define some simple way of encoding (multiple)
entries into the annex -- perhaps a tag/length/value scheme like
lightning uses; maybe if we add a lisp scripting language to consensus,
we just reuse the list encoding from that? -- at which point we might
use one tag to specify that a transaction uses advanced computation, and
needs to be treated as having a heavier weight than its serialized size
implies; but we could use another tag for per-input absolute locktimes;
or another tag to commit to a past block height having a particular hash.

It seems like a good place for optimising SIGHASH_GROUP (allowing a group
of inputs to claim a group of outputs for signing, but not allowing inputs
from different groups to ever claim the same output; so that each output
is hashed at most once for this purpose) -- since each input's validity
depends on the other inputs' state, it's better to be able to get at
that state as easily as possible rather than having to actually execute
other scripts before you can tell if your script is going to be valid.

> The BIP is tight-lipped about its purpose

BIP341 only reserves an area to put the annex; it doesn't define how
it's used or why it should be used.

> Essentially, I read this as saying: The annex is the ability to pad a
> transaction with an additional string of 0's 

If you wanted to pad it directly, you can do that in script already
with a PUSH/DROP combo.

The point of doing it in the annex is you could have a short byte
string, perhaps something like "0x010201a4" saying "tag 1, data length 2
bytes, value 420" and have the consensus interpretation of that be "this
transaction should be treated as if it's 420 weight units more expensive
than its serialized size", while only increasing its witness size by
6 bytes (annex length, annex flag, and the four bytes above). Adding 6
bytes for a 426 weight unit increase seems much better than adding 426
witness bytes.
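
For concreteness, here is a rough sketch of parsing that sort of tag/length/value
payload (the tag numbering and the fixed one-byte tag and length fields are
purely illustrative; no such consensus format is defined today):

    # Illustrative only: decode TLV-style annex entries, e.g. "tag 1 = extra
    # weight". Real entries might use lightning-style bigsize tags/lengths.
    def parse_annex_entries(payload: bytes) -> dict:
        entries, i = {}, 0
        while i < len(payload):
            tag, length = payload[i], payload[i + 1]
            entries[tag] = int.from_bytes(payload[i + 2:i + 2 + length], "big")
            i += 2 + length
        return entries

    payload = bytes.fromhex("010201a4")   # tag 1, 2-byte value 0x01a4 = 420
    print(parse_annex_entries(payload))   # {1: 420} -> treat tx as 420 WU heavier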

The example scenario is that if there was an opcode to verify a
zero-knowledge proof, eg I think bulletproof range proofs are something
like 10x longer than a signature, but require something like 400x the
validation time. Since checksig has a validation weight of 50 units,
a bulletproof verify might have a 400x greater validation weight, ie
20,000 units, while your witness data is only 650 bytes serialized. In
that case, we'd need to artificially bump the weight of your transaction
up by the missing 19,350 units, or else an attacker could fill a block
with perhaps 6000 bulletproofs costing some 120M units of validation weight
(the equivalent of 2.4M signature operations), rather than the 80k sigops
we currently expect as the maximum
in a block. Seems better to just have "0x01024b96" stuck in the annex,
than 19kB of zeroes.
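
A quick back-of-the-envelope check of those numbers (every value below is an
assumption carried over from the example above, not a consensus parameter):

    CHECKSIG_UNITS = 50                     # validation weight per signature check
    bp_units       = 400 * CHECKSIG_UNITS   # 20,000 units per bulletproof verify
    witness_bytes  = 650                    # serialized size of one such input
    missing        = bp_units - witness_bytes
    print(missing, hex(missing))            # 19350 0x4b96 -> annex entry "0x01024b96"

    block_weight   = 4_000_000
    normal_sigops  = block_weight // CHECKSIG_UNITS   # ~80,000 sigops per block
    proofs         = block_weight // witness_bytes    # ~6,150 bulletproofs fit
    total_units    = proofs * bp_units                # ~123M validation-weight units
    print(normal_sigops, proofs, total_units // CHECKSIG_UNITS)  # ~2.4M sig-equivalents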

> Introducing OP_ANNEX: Suppose there were some sort of annex pushing opcode,
> OP_ANNEX which puts the annex on the stack

I think you'd want to have a way of accessing individual entries from
the annex, rather than the annex as a single unit.

> Now suppose that I have a computation that I am running in a script as
> follows:
> 
> OP_ANNEX
> OP_IF
> `some operation that requires annex to be <1>`
> OP_ELSE
> OP_SIZE
> `some operation that requires annex to be len(annex) + 1 or does a
> checksig`
> OP_ENDIF
> 
> Now every time you run this,

You only run a script from a transaction once at which point its
annex is known (a different annex gives a different wtxid and breaks
any signatures), and can't reference previous or future transactions'
annexes...

> Because the Annex is signed, and must be the same, this can also be
> inconvenient:

The annex is committed to by signatures in the same way nVersion,
nLockTime and nSequence are committed to by signatures; I think it helps
to think about it in a similar way.

> Suppose that you have a Miniscript that is something like: and(or(PK(A),
> PK(A')), X, or(PK(B), PK(B'))).
> 
> A or A' should sign with B or B'. X is some sort of fragment that might
> require a value that is unknown (and maybe recursively defined?) so
> therefore if we send the PSBT to A first, which commits to the annex, and
> then X reads the annex and say it must be something else, A must sign
> again. So you might say, run X first, and then sign with A and C or B.
> However, what if the script somehow detects the bitstring WHICH_A WHICH_B
> and has a 

Re: [bitcoin-dev] Annex Purpose Discussion: OP_ANNEX, Turing Completeness, and other considerations

2022-03-04 Thread ZmnSCPxj via bitcoin-dev
Good morning Jeremy,

Umm `OP_ANNEX` seems boring 


> It seems like one good option is if we just go on and banish the OP_ANNEX. 
> Maybe that solves some of this? I sort of think so. It definitely seems like 
> we're not supposed to access it via script, given the quote from above:
>
> Execute the script, according to the applicable script rules[11], using the 
> witness stack elements excluding the script s, the control block c, and the 
> annex a if present, as initial stack.
> If we were meant to have it, we would have not nixed it from the stack, no? 
> Or would have made the opcode for it as a part of taproot...
>
> But recall that the annex is committed to by the signature.
>
> So it's only a matter of time till we see some sort of Cat and Schnorr Tricks 
> III the Annex Edition that lets you use G cleverly to get the annex onto the 
> stack again, and then it's like we had OP_ANNEX all along, or without CAT, at 
> least something that we can detect that the value has changed and cause this 
> satisfier looping issue somehow.

... Never mind I take that back.

Hmmm.

Actually if the Annex is supposed to be ***just*** for adding weight to the 
transaction so that we can do something like increase limits on SCRIPT 
execution, then it does *not* have to be covered by any signature.
It would then be third-party malleable, but suppose we have a "valid" 
transaction on the mempool where the Annex weight is the minimum necessary:

* If a malleated transaction has a too-low Annex, then the malleated 
transaction fails validation and the current transaction stays in the mempool.
* If a malleated transaction has a higher Annex, then the malleated transaction 
has lower feerate than the current transaction and cannot evict it from the 
mempool.
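
A tiny sketch of that feerate argument (the fee and weights are made-up
numbers; the point is only that padding the annex can never raise the feerate
of an otherwise-identical transaction):

    fee         = 1_000          # sats, unchanged by annex malleation
    weight_orig = 800            # weight units with the minimum necessary annex
    weight_mall = 800 + 400      # a third party pads the annex further
    assert fee / weight_orig > fee / weight_mall   # lower feerate, no eviction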

Regards,
ZmnSCPxj


[bitcoin-dev] Annex Purpose Discussion: OP_ANNEX, Turing Completeness, and other considerations

2022-03-04 Thread Jeremy Rubin via bitcoin-dev
I've seen some discussion of what the Annex can be used for in Bitcoin. For
example, some people have discussed using the annex as a data field for
something like CHECKSIGFROMSTACK type stuff (additional authenticated data)
or for something like delegation (the delegation is to the annex). I think
before devs get too excited, we should have an open discussion about what
this is actually for, and figure out if there are any constraints to using
it however we may please.

The BIP is tight-lipped about its purpose, saying mostly only:

*What is the purpose of the annex? The annex is a reserved space for future
extensions, such as indicating the validation costs of computationally
expensive new opcodes in a way that is recognizable without knowing the
scriptPubKey of the output being spent. Until the meaning of this field is
defined by another softfork, users SHOULD NOT include annex in
transactions, or it may lead to PERMANENT FUND LOSS.*

*The annex (or the lack thereof) is always covered by the signature and
contributes to transaction weight, but is otherwise ignored during taproot
validation.*

*Execute the script, according to the applicable script rules[11], using
the witness stack elements excluding the script s, the control block c, and
the annex a if present, as initial stack.*

Essentially, I read this as saying: The annex is the ability to pad a
transaction with an additional string of 0's that contributes to the virtual
weight of a transaction, but has no validation cost itself. Therefore,
somehow, if you needed to validate more signatures than 1 per 50 virtual
weight units, you could add padding to buy extra gas. Or, we might somehow
make the witness a small language (e.g., run length encoded zeros) such
that we can very quickly compute an equivalent number of zeros to 'charge'
without actually consuming the space but still consuming a linearizable
resource... or something like that. We might also e.g. want to use the
annex to reserve something else, like the amount of memory. In general, we
are using the annex to express a resource constraint efficiently. This
might be useful for e.g. simplicity one day.

Generating an Annex: One should write a tracing executor for a script, run
it, measure the resource costs, and then generate an annex that captures
any externalized costs.
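
A minimal sketch of that idea, assuming a made-up per-opcode cost table, a
hypothetical OP_BULLETPROOFVERIFY, and an illustrative "tag 1 = extra weight"
tag/length/value entry:

    COSTS = {"OP_CHECKSIG": 50, "OP_BULLETPROOFVERIFY": 20_000}

    def annex_for(script_ops, witness_bytes):
        # Trace the script once, total up validation cost, and emit a
        # weight-bump entry for whatever the serialized size doesn't cover.
        validation_cost = sum(COSTS.get(op, 1) for op in script_ops)
        missing = max(0, validation_cost - witness_bytes)
        if missing == 0:
            return b""
        return bytes([0x01, 0x02]) + missing.to_bytes(2, "big")  # tag 1, len 2

    print(annex_for(["OP_BULLETPROOFVERIFY"], 650).hex())   # "01024b96"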

---

Introducing OP_ANNEX: Suppose there were some sort of annex pushing opcode,
OP_ANNEX which puts the annex on the stack as well as a 0 or 1 (to
differentiate annex is 0 from no annex, e.g. 0 1 means annex was 0 and 0 0
means no annex). This would be equivalent to something based on  OP_TXHASH  OP_TXHASH.

Now suppose that I have a computation that I am running in a script as
follows:

OP_ANNEX
OP_IF
`some operation that requires annex to be <1>`
OP_ELSE
OP_SIZE
`some operation that requires annex to be len(annex) + 1 or does a
checksig`
OP_ENDIF

Now every time you run this, it requires one more resource unit than the
last time you ran it, which makes your satisfier use the annex as some sort
of "scratch space" for a looping construct, where you compute a new annex,
loop with that value, and see if that annex is now accepted by the program.

In short, it kinda seems like being able to read the annex off of the stack
makes witness construction somehow turing complete, because we can use it
as a register/tape for some sort of computational model.
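
As a sketch of the worry, the satisfier ends up running something like the
search below, where accepts() is a hypothetical stand-in for full script
evaluation against a candidate annex value:

    def solve(accepts, max_tries=1_000):
        annex = 0
        for _ in range(max_tries):
            if accepts(annex):
                return annex      # found a witness/annex pair the script likes
            annex += 1            # the annex becomes scratch space / a counter
        return None               # the satisfier is effectively a bounded search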

---

This seems at odds with using the annex as something that just helps you
heuristically guess computation costs; now it's somehow something that
acts to make script satisfiers recursive.

Because the Annex is signed, and must be the same, this can also be
inconvenient:

Suppose that you have a Miniscript that is something like: and(or(PK(A),
PK(A')), X, or(PK(B), PK(B'))).

A or A' should sign with B or B'. X is some sort of fragment that might
require a value that is unknown (and maybe recursively defined?) so
therefore if we send the PSBT to A first, which commits to the annex, and
then X reads the annex and say it must be something else, A must sign
again. So you might say, run X first, and then sign with A and C or B.
However, what if the script somehow detects the bitstring WHICH_A WHICH_B
and has a different Annex per selection (e.g., interpret the bitstring as an
int and annex must == that int). Now, given and(or(K1, K1'),... or(Kn,
Kn')) we end up with needing to pre-sign 2**n annex values somehow... this
seems problematic theoretically.

Of course this wouldn't be miniscript then. Because miniscript is just for
the well behaved subset of script, and this seems ill behaved. So maybe
we're OK?

But I think the issue still arises: suppose I have a simple thing
like and(COLD_LOGIC, HOT_LOGIC) where both contain a signature. If
COLD_LOGIC and HOT_LOGIC can have different costs, I need to decide
what logic each satisfier for the branch is going to use in advance, or
sign all possible sums of both our annex costs?

Re: [bitcoin-dev] bitcoin scripting and lisp

2022-03-04 Thread ZmnSCPxj via bitcoin-dev


Good morning aj,

> On Sun, Feb 27, 2022 at 04:34:31PM +, ZmnSCPxj via bitcoin-dev wrote:
>
> > In reaction to this, AJ Towns mailed me privately about some of his
> > thoughts on this insane `OP_EVICT` proposal.
> > He observed that we could generalize the `OP_EVICT` opcode by
> > decomposing it into smaller parts, including an operation congruent
> > to the Scheme/Haskell/Scala `map` operation.
>
> At much the same time Zman was thinking about OP_FOLD and in exactly the
> same context, I was wondering what the simplest possible language that
> had some sort of map construction was -- I mean simplest in a "practical
> engineering" sense; I think Simplicity already has the Euclidean/Peano
> "least axioms" sense covered.
>
> The thing that's most appealing to me about bitcoin script as it stands
> (beyond "it works") is that it's really pretty simple in an engineering
> sense: it's just a "forth" like system, where you put byte strings on a
> stack and have a few operators to manipulate them. The alt-stack, and
> supporting "IF" and "CODESEPARATOR" add a little additional complexity,
> but really not very much.
>
> To level-up from that, instead of putting byte strings on a stack, you
> could have some other data structure than a stack -- eg one that allows
> nesting. Simple ones that come to mind are lists of (lists of) byte
> strings, or a binary tree of byte strings [0]. Both those essentially
> give you a lisp-like language -- lisp is obviously all about lists,
> and a binary tree is just made of things or pairs of things, and pairs
> of things are just another way of saying "car" and "cdr".
>
> A particular advantage of lisp-like approaches is that they treat code
> and data exactly the same -- so if we're trying to leave the option open
> for a transaction to supply some unexpected code on the witness stack,
> then lisp handles that really naturally: you were going to include data
> on the stack anyway, and code and data are the same, so you don't have
> to do anything special at all. And while I've never really coded in
> lisp at all, my understanding is that its biggest problems are all about
> doing things efficiently at large scales -- but script's problem space
> is for very small scale things, so there's at least reason to hope that
> any problems lisp might have won't actually show up for this use case.

I heartily endorse LISP --- it has a trivial `eval` that is easy to implement
once you have defined a proper data type in preferred-language-here to
represent LISP datums.
Combine it with your idea of committing to a max-number-of-operations (which 
increases the weight of the transaction) and you may very well have something 
viable.
(In particular, even though `eval` is traditionally (re-)implemented in LISP 
itself, the limit on max-number-of-operations means any `eval` implementation 
within the same language is also forcibly made total.)
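
As a sketch of that "forcibly total" point: a toy list-based evaluator with an
explicit operation budget (the mini-language of quote/if/+ is made up purely
for illustration and has nothing to do with any proposed opcode set):

    class OutOfFuel(Exception):
        pass

    def eval_expr(expr, env, fuel):
        # fuel is a one-element list so the budget is shared across recursion
        if fuel[0] <= 0:
            raise OutOfFuel()
        fuel[0] -= 1
        if isinstance(expr, str):        # variable reference
            return env[expr]
        if not isinstance(expr, list):   # literal
            return expr
        op, *args = expr
        if op == "quote":
            return args[0]
        if op == "if":
            cond = eval_expr(args[0], env, fuel)
            return eval_expr(args[1] if cond else args[2], env, fuel)
        if op == "+":
            return sum(eval_expr(a, env, fuel) for a in args)
        raise ValueError("unknown operator: %r" % op)

    # A budget of 5 operations is enough here; whatever the program does,
    # evaluation must stop once the budget hits zero, so eval is total.
    print(eval_expr(["if", ["+", 1, 0], ["quote", "yes"], ["quote", "no"]], {}, [5]))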

Of note is that the supposed "problem at scale" of LISP is, as I understand it, 
due precisely to its code and data being homoiconic to each other.
This homoiconicity greatly tempts LISP programmers to use macros, i.e. programs 
that generate other programs from some input syntax.
Homoiconicity means that one can manipulate code just as easily as the data, 
and thus LISP macros are a trivial extension on the language.
This allows each LISP programmer to just code up a macro to expand common 
patterns.
However, each LISP programmer then ends up implementing *different*, but 
*similar* macros from each other.
Unfortunately, programming at scale requires multiple programmers speaking the 
same language.
Then programming at scale is hampered because each LISP programmer has their 
own private dialect of LISP (formed from the common LISP language and from 
their own extensive set of private macros) and intercommunication between them 
is hindered by the fact that each one speaks their own private dialect.
Some LISP-like languages (e.g. Scheme) have classically targeted a "small" 
subset of absolutely-necessary operations, and each implementation of the 
language immediately becomes a new dialect due to having slightly different 
forms for roughly the same convenience function or macro, and *then* individual 
programmers build their own private dialect on top.
For Scheme specifically, R7RS has targeted providing a "large" standard as 
well, as did R6RS (which only *had* a "large" standard), but individual Scheme 
implementations have not always liked to implement *all* the "large" standard.

Otherwise, every big C program contains a half-assed implementation of half of 
Common LISP, so 


> -   I don't think execution costing takes into account how much memory
> is used at any one time, just how much was allocated in total; so
> the equivalent of (OP_DUP OP_DROP OP_DUP OP_DROP ..) only has the
> allocations accounted for, with no discount given for the immediate
> freeing, so it gets treated as having the same 

Re: [bitcoin-dev] Recursive covenant opposition, or the absence thereof, was Re: TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT

2022-03-04 Thread Paul Sztorc via bitcoin-dev

On 3/4/2022 7:35 AM, Billy Tetrud wrote:

sidechains cannot exist without their mainchain ...


A sidechain could stop supporting deposits from or withdrawals to 
bitcoin and completely break any relationship with the main chain.

I agree this is not as sure of a thing as starting with an altcoin
(which of course never has that kind of relationship with bitcoin).
So I do think there are some merits to sidechains in your scenario.
However, I don't think it's quite accurate to say it completely
solves the problem (of a less-secure altcoin becoming dominant).



It is hard to see how this "sidechain cuts off the mainchain" scenario 
could plausibly be in enough people's interest:


* Miners would lose the block subsidy (ie, the 6.25 BTC, or whatever of 
it that still remains), and txn fees from the mainchain and all other 
merged mined chains.
* Developers would lose the ability to create a dissenting new piece of 
software (and would instead be forced into a permanent USSR-style "one 
party system" intellectual monoculture).
* Users would lose --permanently-- the ability to take their coins to 
new blockchains, removing almost all of their leverage.


Furthermore, because sidechains cannot exist without their parent (but 
not vice-versa), we can expect a large permanent interest in keeping 
mainchain node costs low. Aka: very small mainchain blocks forever. So, 
the shut-it-down mainchain-haters would have to meet the question "why 
not just leave things the way they are?". And the cheaper the 
mainchain-nodes are, the harder that question is to answer.


However, if a sidechain really were so overwhelmingly popular as to 
clear all of these hurdles, then I would first want to understand why it 
is so popular. Maybe it is a good thing and we should cheer it on.



Your anecdote about not running a full node is amusing, and I've often 
found myself in that position. I certainly agree different people are 
different and so different trade offs can be better for different 
people. However, the question is: what tradeoffs does a largeblock 
sidechain do better than both eg Visa and lightning?


Yes, that's true. There are very many tradeoffs in general:

1. Onboarding
2. Route Capacity / Payment Limits
3. Failed Payments
4. Speed of Payment
5. Receive while offline / need for interaction/monitoring/watchtowers
6. Micropayments
7. Types of fees charged, and for what
8. Contribution to layer1 security budget
9. Auditability (re: large organizations) / general complexity

LN is certainly better for 4 and 6. But everything else is probably up 
for grabs. And this is not intended to be an exhaustive list. I just 
made it up now.


(And, if the layer2 is harmless, then its existence can be justified via 
one single net benefit, for some users, somewhere on the tradeoff-list.)


Paul


[bitcoin-dev] LN/mercury integrations

2022-03-04 Thread Tom Trevethan via bitcoin-dev
A couple of features we are considering for the mercury statechain
wallet/service and would be good to get comments/feedback on.

1.
In the current setup (https://github.com/commerceblock/mercury), deposits
are free and permissionless (i.e. no authentication required to generate a
shared key deposit address) and the mercury server fee (as a fixed
percentage of the coin value) is collected in the withdrawal transaction as
a UTXO paid to a fixed, specified bitcoin address. This has the advantage
of making the deposit process low friction and user friendly, but has some
disadvantages:

* The withdrawal transaction fee output is typically a small fraction of the
coin value and for the smallest coin values is close to the dust limit
(i.e. these outputs may not be spendable in a high tx fee environment).
* The on-chain mercury fee explicitly labels all withdrawn coins as mercury
statechain withdrawals, which is a privacy concern for many users.

The alternative that mitigates these issues is to charge the fee up-front,
via a LN invoice, before the shared key deposit address is generated. In
this approach, a user would specify in the wallet that they wanted to
deposit a specific amount into a statecoin, and instead of performing a
shared  key generation with the server, would request a LN invoice for the
withdrawal fee from the server, which would be returned to the wallet and
displayed to the user.

The user would then copy this invoice (by copy-paste or QR code) into a third
party LN wallet and pay the fee. A LN node running on the mercury server
back end would then verify that the payment had been made, and enable the
wallet to continue with the deposit keygen and deposit process. This coin
would be labeled as ‘fee paid’ by the wallet and server, and not be subject
to an on-chain fee payment on withdrawal.
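
Roughly, the deposit flow being described, as pseudocode (every call name
below is a hypothetical placeholder, not an actual mercury or LN node API):

    def start_deposit(server, wallet, amount_sat):
        fee_sat = server.withdrawal_fee(amount_sat)         # fixed % of coin value
        invoice = server.lightning.create_invoice(fee_sat)  # BOLT11 invoice
        wallet.show(invoice)                                 # paid from any LN wallet
        if server.lightning.invoice_paid(invoice):
            coin = server.begin_keygen(wallet, amount_sat)   # shared-key deposit addr
            coin.mark_fee_paid()                             # no on-chain fee on withdrawal
            return coin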


2.
Withdrawal directly into LN channel.

Currently the wallet can send the coin to any type of bitcoin address
(except Taproot - P2TR - which is a pending straightforward upgrade).
To create a dual funded channel (i.e. a channel where the counterparty
provides BTC in addition to the mercury user) the withdrawal transaction
process and co-signing with the server must support the handling of PSBTs.
In this case, the withdrawal step would involve the mercury wallet
co-signing (with the mercury server), a PSBT created by a LN wallet.

To enable this, the mercury wallet should be able to both create a PSBT on
the withdrawal page, and then co-sign it with the server, and then send it
to the channel counterparty out of band (or via the third party LN
wallet/node), and import a PSBT created by the channel counterparty and
sign it, and export and/or broadcast the fully signed PSBT.

This seems to be possible (i.e. paying directly to a dual funded channel
opening tx from a third party wallet) with c-lightning and lnd via RPC, but
I’m not aware of any LN wallet that would support this kind of thing. It
has the potential to eliminate an on-chain tx, which could be valuable in a
high-fee environment.
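
Sketched as ordered steps (again, every call name here is a hypothetical
placeholder; only the sequencing matters):

    def withdraw_into_channel(mercury_wallet, mercury_server, counterparty):
        psbt = mercury_wallet.create_withdrawal_psbt()       # statecoin as an input
        psbt = mercury_server.cosign(psbt)                    # server half of shared key
        psbt = counterparty.add_funding_input_and_sign(psbt)  # exchanged out of band
        psbt = mercury_wallet.sign(psbt)
        return mercury_wallet.finalize_and_broadcast(psbt)    # one on-chain channel open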


Re: [bitcoin-dev] Recursive covenant opposition, or the absence thereof, was Re: TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT

2022-03-04 Thread vjudeu via bitcoin-dev
> The Taproot address itself has to take up 32 bytes onchain, so this saves 
> nothing.

There is always at least one address, because you have a coinbase transaction 
and a solo miner or mining pool that is getting the whole reward. So, instead 
of using separate OP_RETURN's for each sidechain, for each federation, and for 
every "commitment to the blockchain", all we need is just tweaking that miner's 
key and placing everything inside unused TapScript. Then, we don't need 
separate 32 bytes for this and separate 32 bytes for that, we only need a 
commitment and a MAST-based path that can link such commitment to the address 
of this miner.
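
For illustration, a minimal sketch of that kind of tweak-based commitment,
following the BIP341 "TapTweak" pattern (the key and hash below are dummy
values, and in practice the commitment would sit under the merkle root of the
miner's script tree rather than being tweaked in directly):

    import hashlib

    def tagged_hash(tag: str, msg: bytes) -> bytes:
        t = hashlib.sha256(tag.encode()).digest()
        return hashlib.sha256(t + t + msg).digest()

    def commitment_tweak(miner_internal_key: bytes, sidechain_hash: bytes) -> bytes:
        # The output key would be Q = P + t*G; a sidechain node that knows the
        # internal key and the sidechain hash can recompute Q and compare it to
        # the coinbase output's Taproot address, with no extra bytes on-chain.
        return tagged_hash("TapTweak", miner_internal_key + sidechain_hash)

    print(commitment_tweak(bytes(32), bytes.fromhex("ab" * 32)).hex())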

So, instead of having:

...

We could have:

On 2022-03-04 09:42:23 user ZmnSCPxj  wrote:
> Good morning vjudeu,

> > Continuous operation of the sidechain then implies a constant stream of 
> > 32-byte commitments, whereas continuous operation of a channel factory, in 
> > the absence of membership set changes, has 0 bytes per block being 
> > published.
>
> The sidechain can push zero bytes on-chain, just by placing a sidechain hash 
> in OP_RETURN inside TapScript. Then, every sidechain node can check that 
> "this sidechain hash is connected with this Taproot address", without pushing 
> 32 bytes on-chain.

The Taproot address itself has to take up 32 bytes onchain, so this saves 
nothing.

Regards,
ZmnSCPxj



Re: [bitcoin-dev] Recursive covenant opposition, or the absence thereof, was Re: TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT

2022-03-04 Thread Billy Tetrud via bitcoin-dev
> "these sidechains are terrible" on Monday and then "these sidechains are
so good they will replace the mainchain" on Tuesday

Your premise is that a sidechain might come to dominate bitcoin, and that
this would be better than an altcoin dominating bitcoin. Did I
misunderstand you? Not quite sure why you're balking at me simply
confirming your premise.

> sidechains cannot exist without their mainchain .. imagine .. a zcash
sidechain, and someone claims they deposited 1000 BTC

A sidechain could stop supporting deposits from or withdrawals to bitcoin
and completely break any relationship with the main chain. I agree this is
not as sure of a thing as starting with an altcoin (which of course never
has that kind of relationship with bitcoin). So I do think there are some
merits to sidechains in your scenario. However, I don't think it's quite
accurate to say it completely solves the problem (of a less-secure altcoin
becoming dominant).

Your anecdote about not running a full node is amusing, and I've often
found myself in that position. I certainly agree different people are
different and so different trade offs can be better for different
people. However,
the question is: what tradeoffs does a largeblock sidechain do better than
both eg Visa and lightning?

>Wouldn't life be better, if we Bitcoiners could easily sweep those fiat 
>transactions into *some* part of the BTC universe? (For example, a family of 
>largeblock sidechains). To me the answer is clearly yes.

I guess it's not as clear to me. We agree it wouldn't significantly burden
Bitcoin-only nodes, but not being a burden is not a sufficient reason to do
something, only a reason not to prevent it. But what are the benefits to a
user of that chain? Slightly lower fees than main bitcoin? More
decentralization than Visa or Venmo? Doesn't lightning already do better on
both accounts?



On Tue, Mar 1, 2022 at 6:00 PM Paul Sztorc  wrote:

> On 3/1/2022 12:39 AM, Billy Tetrud wrote:
>
> This entire issue is avoided completely, if all the chains --decentralized 
> and centralized-- are in the same monetary unit. Then, the monetary network 
> effects never interfere, and the decentralized chain is always guaranteed to 
> exist.
>
> It sounds like what you're saying is that without side chains, everyone might 
> switch entirely to some altcoin and bitcoin will basically die. And at that 
> point, the insecurity of that coin people switched to can be heavily 
> exploited by some attacker(s). Is that right?
>
> Yes, precisely.
>
> Its an interesting thought experiment. However, it leads me to wonder: if a 
> sidechain gets so popular that it dominates the main chain, why would people 
> keep that main chain around at all?
>
> For some reason, this is a very popular question. I suppose if you believe in 
> "one size fits all" chain philosophy (see comment below), it makes sense to 
> say "these sidechains are terrible" on Monday and then "these sidechains are 
> so good they will replace the mainchain" on Tuesday.
>
> In any event, sidechains cannot exist without their mainchain (as I see it). 
> For example, imagine that you are on a zcash sidechain, and someone claims 
> they deposited 1000 BTC from Bitcoin Core into this sidechain. Do you give 
> them 1000 z-BTC, or not? Without the mainchain,
> you can't tell.
>
> If you run the Bip300 DriveNet demo software (drivechain.info/releases), you 
> will see for yourself: the test-sidechains are absolutely inert, UNTIL they 
> have rpc access to the mainchain. (Exactly the same way that a LN node needs 
> a Bitcoin Core node.)
>
>
>
> > someone is actually in the wrong, if they proactively censor an experiment 
> > of any type. If a creator is willing to stand behind something, then it 
> > should be tried.
>
> > it makes no difference if users have their funds stolen from a centralized 
> > Solana contract or from a bip300 centralized bit-Solana sidechain. I don't 
> > see why the tears shed would be any different.
>
> I agree with you. My point was not that we should stop anyone from doing 
> this. My point was only that we shouldn't advocate for ideas we think aren't 
> good. You were advocating for a "largeblock sidechain", and unless you have 
> good reasons to think that is an idea likely to succeed and want to share 
> them with us, then you shouldn't be advocating for that. But certainly if 
> someone *does* think so and has their own reasons, I wouldn't want to censor 
> or stop them. But I wouldn't advocate for them to do it unless their ideas 
> were convincing to me, because I know enough to know the dangers of large 
> block blockchains.
>
> Yes, I strongly agree, that we should only advocate for ideas we believe in.
>
> I do not believe in naive layer1 largeblockerism. But I do believe in 
> sidechain largeblockism.
>
> Something funny once happened to me when I was on a Bitcoin conference 
> panel*. There were three people: myself, a Blockstream person, and an 
> (ex)BitPay person. The first two of 

[bitcoin-dev] BIP Draft Submission

2022-03-04 Thread Asher Hopp via bitcoin-dev
This is my first time submitting anything to this mailing list, so I am here
with humility and I would appreciate any feedback about any aspect of my
BIP draft submission below. If you want to reach out to me directly you can
email me at as...@seent.com.

Abstract
Rather than having a maximum supply of 21 million Bitcoin, there should be
a maximum supply of 21 trillion Bitcoin. This can be accomplished by moving
the decimal place 6 places to the right of where it is today, while
reserving two degrees of accuracy after the decimal point.

Copyright
This BIP is under the Creative Commons Zero (CC0) license.

Background
On February 6th, 2010 Satoshi Nakamoto responded to a bitcointalk forum
discussion about the divisibility and economics of bitcoin as a global
currency. Satoshi chimed in to the conversation when two ideas formed:
1. Bitcoin is so scarce that a perception may exist that there is not
enough to go around – there is not even 1 Bitcoin available per person on
Earth.
2. If Bitcoin’s value continues to deflate against inflating fiat
currencies, Bitcoin transactions may become smaller and smaller, requiring
the potentially tedious use of many leading 0’s after the decimal point.

Satoshi’s suggested response to these issues was a software update to
change where the decimal place and commas are displayed when software
interprets a Bitcoin wallet’s balance: “If it gets tiresome working with
small numbers, we could change where the display shows the decimal point.
Same amount of money, just different convention for where the ","'s and
"."'s go.  e.g. moving the decimal place 3 places would mean if you had
1.0 before, now it shows it as 1,000.00.” (
https://bitcointalk.org/index.php?topic=44.msg267#msg267)
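
A minimal sketch of that display-only change, extended to the 6-place shift
proposed here (balances are assumed to be stored in satoshis either way, so
one displayed unit would equal 100 satoshis and two decimal places remain):

    def render_old(balance_sat: int) -> str:
        return f"{balance_sat / 100_000_000:,.8f}"   # 1 BTC = 100,000,000 sat

    def render_new(balance_sat: int) -> str:
        return f"{balance_sat / 100:,.2f}"           # decimal moved 6 places right

    print(render_old(100_000_000))   # "1.00000000"
    print(render_new(100_000_000))   # "1,000,000.00"  (same money, new convention)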

Since 2010, when Satoshi wrote that post, Bitcoin has indeed become a
globally adopted currency, the dollar has inflated significantly, and
Bitcoin has deflated. There are many debates in the Bitcoin community
concerning the nomenclature of Bitcoin’s atomic unit (satoshis, sats, bits,
bitcents, mbits, etc). The debate has somewhat spiraled out of control, and
there is no clearly emerging community consensus. Additionally this issue
impacts the technology world outside of Bitcoin because there are several
proposals for various Unicode characters which factions of the Bitcoin
community have started using to represent the atomic Bitcoin unit despite
no formalized consensus. Therefore, the conditions are right to move
forward with Satoshi's vision and move the decimal place.

Details
There are several benefits to moving the decimal 6 places to the right in
Bitcoin wallet balance notation:
1. Unit bias. It is a widely held belief that Bitcoin’s adoption may be
hindered because would-be participants have a negative perception of
Bitcoin’s unit size. One Bitcoin is so expensive that some people may be
turned off by the idea of only owning a fraction of a unit.
2. Community cohesion. The Bitcoin community is deeply divided by various
proposed atomic unit names, but if this BIP is adopted there is no need to
debate nomenclature for the Bitcoin atomic unit. Bitcoin software providers
can simply continue using the Bitcoin Unicode character (₿, U+20BF), and
there are no additional unicode characters required.
3. Simplicity and standardization. Bitcoin has no borders and is used by
people in just about every corner of the world. Other than the name Bitcoin
and the Unicode character we have, there is no consensus around other
notations for Bitcoin as a currency. Rather than introducing new concepts
for people to learn, this BIP allows Bitcoin to grow under a single
standardized unit specification, with a single standard unit name, unit
size, and unit Unicode character.

There is only one drawback I can identify with this BIP, and it is purely
psychological. Moving the decimal place may produce bad optics in the
short-term, and Bitcoin’s detractors will likely seize the opportunity to
spread misinformation that moving the decimal place changes the monetary
value of anyone’s Bitcoin. It is important to note that if this BIP were to
gain consensus approval, the community would need to prepare talking points
and coordinate educational outreach efforts to explain to Bitcoin users and
wallet developers that this change does not change the proportion of the
total value of Bitcoin any particular wallet holds, and is simply a
notational change. There are no “winners” and no “losers” in this BIP – all
Bitcoin participants would be impacted in an equal and proportionate manner
on pari passu terms, and there is no change to Bitcoin’s monetary policy.

Implementation
The software updates needed to implement this BIP are restricted to the
wallet's CLI/GUI configuration, and only involve changing the location of
the decimal point and commas when viewing balances or reviewing transaction
data. Each wallet provider including Bitcoin Core would simply need to
update the display of a wallet’s balance by moving the decimal place 6

Re: [bitcoin-dev] Recursive covenant opposition, or the absence thereof, was Re: TXHASH + CHECKSIGFROMSTACKVERIFY in lieu of CTV and ANYPREVOUT

2022-03-04 Thread ZmnSCPxj via bitcoin-dev
Good morning vjudeu,

> > Continuous operation of the sidechain then implies a constant stream of 
> > 32-byte commitments, whereas continuous operation of a channel factory, in 
> > the absence of membership set changes, has 0 bytes per block being 
> > published.
>
> The sidechain can push zero bytes on-chain, just by placing a sidechain hash 
> in OP_RETURN inside TapScript. Then, every sidechain node can check that 
> "this sidechain hash is connected with this Taproot address", without pushing 
> 32 bytes on-chain.

The Taproot address itself has to take up 32 bytes onchain, so this saves 
nothing.

Regards,
ZmnSCPxj


Re: [bitcoin-dev] Recurring bitcoin/LN payments using DLCs

2022-03-04 Thread ZmnSCPxj via bitcoin-dev


Good morning Chris,

Quick question.

How does this improve over just handing over `nLockTime`d transactions?


Regards,
ZmnSCPxj