Re: [bitcoin-dev] BIP-????: The Taproot Assets Protocol

2023-09-07 Thread Zac Greenwood via bitcoin-dev
Hi Laolu,

Could you explain please how facilitating registering non-Bitcoin assets on
the Bitcoin blockchain is beneficial for the Bitcoin economy?

Thanks,
Zac

On Wed, 6 Sep 2023 at 21:02, Olaoluwa Osuntokun via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> After more than a year of tinkering, iterating, simplifying, and
> implementing, I'm excited to officially publish (and request BIP numbers
> for) the Taproot Assets Protocol. Since the initial publishing we've
> retained the same spec/document structure, with the addition of a new BIP
> that describes the vPSBT format (which are PSBTs for the TAP layer). Each
> BIP now also contains a set of comprehensive test vectors (to be further
> expanded over time).
>
> https://github.com/bitcoin/bips/pull/1489
>
> As the complete set of documents is large, we omit them from this email.
>
> The breakdown of the BIPs is as follows:
>
>   * `bip-tap-mssmt`: describes the MS-SMT (Merkle-Sum Sparse Merkle Tree)
> data structure used to store assets and various proofs.
>
>   * `bip-tap`: describes the Taproot Assets Protocol validation and state
> transition rules.
>
>   * `bip-tap-addr`: describes the address format used to send and receive
> assets.
>
>   * `bip-tap-vm`: describes the initial version of the off-chain TAP VM used
> to lock and unlock assets.
>
>   * `bip-tap-vpsbt`: describes a vPSBT (virtual PSBT), which is a series of
> custom types on top of the existing PSBT protocol to facilitate more
> elaborate asset-related transactions.
>
>   * `bip-tap-proof-file`: describes the flat file format which is used to
> prove and verify the provenance of an asset.
>
>   * `bip-tap-universe`: describes the Universe server construct, which is an
> off-chain index into TAP data on-chain, used for: proof distribution,
> asset bootstrapping, and asset transaction archiving.
>
> Some highlights of the additions/modifications of the BIPs since the initial
> drafts were published last year:
>
>   * JSON test vectors for each BIP document are now included.
>
>   * The Universe construct for initial verification of assets, distributing
> asset proofs, and transaction archiving is now further specified. Naive
> and tree-based syncing algorithms, along with a standardized REST/gRPC
> interface, are now in place.
>
>   * The asset group key structure (formerly known as asset key families) has
> been further developed. Group keys allow for the creation of assets that
> support ongoing issuance. A valid witness of a group key during the
> minting process allows otherwise disparate assets to be considered
> fungible, and nested under the same sub-tree. A group key is itself just
> a taproot output key. This enables complex issuance conditions such as:
> multi-sig threshold, hash chain reveal, and any other conditions
> expressible by script (and eventually beyond!).
>
>   * New versioning bytes across the protocol to ensure extensibility and
> upgradability in a backwards-compatible manner where possible. The asset
> metadata format has now been re-worked to only commit to a hash of the
> serialized metadata. Asset metadata can now also have structured data,
> key-value or otherwise.
>
>   * Observance of re-org protection for asset proofs. The file format now
> also uses an incremental hash function to reduce memory requirements
> when adding a new transition to the end of the file.
>
>   * Specification of the vPSBT protocol [1], which is the analogue of normal
> PSBTs for the TAP layer. The packet format specifies custom key/value
> pairs for the protocol and describes an aggregate TAP transaction. After the
> packet is signed by all participants, it's "anchored" into a normal
> Bitcoin transaction by committing to the resulting output commitments
> and witnesses.
>
> We've also made significant advancements in our initial implementation [2],
> with many wallets, explorers, services, and businesses working with us to
> test and iterate on both the protocol and the implementation. We're actively
> working on our next major release, which will be a major milestone towards
> the eventual mainnet deployment of the protocol!
>
>
> -- Laolu
>
> [1]: https://lightning.engineering/posts/2023-06-14-virtual-psbt/
> [2]: https://github.com/lightninglabs/taproot-assets
> ___
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>


Re: [bitcoin-dev] BIP for OP_VAULT

2023-03-31 Thread Zac Greenwood via bitcoin-dev
Hi alicexbtc,

Under no circumstance should Bitcoin add any functionality intended to
support private businesses that rely on on-chain storage for their business
model.

Regarding “Fees paid: 150 BTC” (uh, *citation needed*):

To optimize for profitability, a business would generally attempt to operate
using zero- or low-fee transactions. Such businesses therefore contribute
comparatively little in fees while depriving the public of these cheap
transactions. Worse, they exert constant upward pressure on fee levels,
making it more expensive for everyone else to transact.

Unlike miners, node operators do not receive any compensation. They do,
however, incur additional costs for bandwidth, electricity and processing
time, not only to support current businesses but also every past business
that ever tried to turn a profit at their expense, long after such
businesses have failed and disappeared. Node operators foot the bill.

Lastly, I don’t believe there is any value in having for instance Ordinals
spam the blockchain with images of wojaks, bored apes and other crap but
perhaps you wish to clarify why this might be something to be “excited
about”.

Your other arguments are nonsensical so excuse me for ignoring them.


Zac


On Thu, 30 Mar 2023 at 03:57, alicexbt  wrote:

> Hi Zac,
>
> Let me share what those parasites achieved:
>
> - Fees paid: 150 BTC
> - Lot of users and developers trying bitcoin that either never tried or
> gave up early in 2013-15
> - Mempools of nodes of being busy on weekends and got lots of transactions
> - PSBT became cool and application devs are trying their best to use it in
> different ways
> - Some developers exploring taproot and multisig
> - AJ shared things how covenants could help in fair, non-custodial,
> on-chain auction of ordinals that is MEV resistant although I had shared it
> earlier which involves more steps:
> https://twitter.com/144bytes/status/1634368411760476161
> - Investors exploring about funding projects
> - Bitcoin more than Bitcoin and people excited about it
>
> We can have difference of opinion, however I want bitcoin to be money and
> money means different things for people in this world. Please respect that
> else it will become like Linux, something used by 1% of world.
>
> /dev/fd0
> floppy disk guy
>
> Sent with Proton Mail secure email.
>
> --- Original Message ---
> On Wednesday, March 29th, 2023 at 12:40 PM, Zac Greenwood via bitcoin-dev <
> bitcoin-dev@lists.linuxfoundation.org> wrote:
>
>
> > I’m not sure why any effort should be spent on theorizing how new
> opcodes might be used to facilitate parasitical use cases of the blockchain.
> >
> > If anything, business models relying on the ability to abuse the
> blockchain as a data store must be made less feasible, not more.
> >
> > Zac
> >
> >
> > On Fri, 24 Mar 2023 at 20:10, Anthony Towns via bitcoin-dev <
> bitcoin-dev@lists.linuxfoundation.org> wrote:
> >
> > > On Tue, Mar 07, 2023 at 10:45:34PM +1000, Anthony Towns via
> bitcoin-dev wrote:
> > > > I think there are perhaps four opcodes that are interesting in this
> class:
> > > >
> > > > idx sPK OP_FORWARD_TARGET
> > > > -- sends the value to a particular output (given by idx), and
> > > > requires that output have a particular scriptPubKey (given
> > > > by sPK).
> > > >
> > > > idx [...] n script OP_FORWARD_LEAF_UPDATE
> > > > -- sends the value to a particular output (given by idx), and
> > > > requires that output to have almost the same scriptPubKey as this
> > > > input, _except_ that the current leaf is replaced by "script",
> > > > with that script prefixed by "n" pushes (of values given by [...])
> > > >
> > > > idx OP_FORWARD_SELF
> > > > -- sends the value to a particular output (given by idx), and
> > > > requires that output to have the same scriptPubKey as this input
> > > >
> > > > amt OP_FORWARD_PARTIAL
> > > > -- modifies the next OP_FORWARD_* opcode to only affect "amt",
> > > > rather than the entire balance. opcodes after that affect the
> > > > remaining balance, after "amt" has been subtracted. if "amt" is
> > > > 0, the next OP_FORWARD_* becomes a no-op.
> > >
> > > The BIP 345 draft has been updated [0] [1] and now pretty much defines
> > > OP_VAULT to have the behaviour specced for OP_FORWARD_LEAF_UPDATE above,
> > > and OP_VAULT_RECOVER to behave as OP_FORWARD_TARGET above. Despite
> > > that, for this email I'm going to continue using the OP_FORWARD_*
> > naming convention.

Re: [bitcoin-dev] BIP for OP_VAULT

2023-03-29 Thread Zac Greenwood via bitcoin-dev
I’m not sure why any effort should be spent on theorizing how new opcodes
might be used to facilitate parasitical use cases of the blockchain.

If anything, business models relying on the ability to abuse the blockchain
as a data store must be made less feasible, not more.

Zac


On Fri, 24 Mar 2023 at 20:10, Anthony Towns via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> On Tue, Mar 07, 2023 at 10:45:34PM +1000, Anthony Towns via bitcoin-dev
> wrote:
> > I think there are perhaps four opcodes that are interesting in this
> class:
> >
> >idx sPK OP_FORWARD_TARGET
> >  -- sends the value to a particular output (given by idx), and
> > requires that output have a particular scriptPubKey (given
> > by sPK).
> >
> >idx [...] n script OP_FORWARD_LEAF_UPDATE
> >  -- sends the value to a particular output (given by idx), and
> >   requires that output to have almost the same scriptPubKey as this
> >   input, _except_ that the current leaf is replaced by "script",
> >   with that script prefixed by "n" pushes (of values given by [...])
> >
> >idx OP_FORWARD_SELF
> >  -- sends the value to a particular output (given by idx), and
> > requires that output to have the same scriptPubKey as this input
> >
> >amt OP_FORWARD_PARTIAL
> >  -- modifies the next OP_FORWARD_* opcode to only affect "amt",
> > rather than the entire balance. opcodes after that affect the
> >   remaining balance, after "amt" has been subtracted. if "amt" is
> >   0, the next OP_FORWARD_* becomes a no-op.
>
> The BIP 345 draft has been updated [0] [1] and now pretty much defines
> OP_VAULT to have the behaviour specced for OP_FORWARD_LEAF_UPDATE above,
> and OP_VAULT_RECOVER to behave as OP_FORWARD_TARGET above. Despite
> that, for this email I'm going to continue using the OP_FORWARD_*
> naming convention.
>
> Given the recent controversy over the Yuga labs ordinal auction [2],
> perhaps it's interesting to consider that these proposed opcodes come
> close to making it possible to do a fair, non-custodial, on-chain auction
> of ordinals [3].
>
> The idea here is that you create a utxo on chain that contains the ordinal
> in question, which commits to the address of the current leading bidder,
> and can be spent in two ways:
>
>   1) it can be updated to a new bidder, if the bid is raised by at least
>  K satoshis, in which case the previous bidder is refunded their
>  bid; or,
>
>   2) if there have been no new bids for a day, the current high bidder
>  wins, and the ordinal is moved to their address, while the funds
>  from their winning bid are sent to the original vendor's address.
>
> I believe this can be implemented in script as follows,
> assuming the opcodes OP_FORWARD_TARGET(OP_VAULT_RECOVER),
> OP_FORWARD_LEAF_UPDATE(OP_VAULT), OP_FORWARD_PARTIAL (as specced above),
> and OP_PUSHCURRENTINPUTINDEX (as implemented in liquid/elements [4])
> are all available.
>
> First, figure out the parameters:
>
>  * Set VENDOR to the scriptPubKey corresponding to the vendor's address.
>  * Set K to the minimum bid increment [5].
>  * Initially, set X equal to VENDOR.
>  * Initially, set V to just below the reserve price (V+K is the
>minimum initial bid).
>
> Then construct the following script:
>
>  [X] [V] [SSS] TOALT TOALT TOALT
>  0 PUSHCURRENTINPUTINDEX EQUALVERIFY
>  DEPTH NOT IF
>0 1 FORWARD_PARTIAL
>0 FROMALT FORWARD_TARGET
>1 [VENDOR] FWD_TARGET
>144
>  ELSE
>FROMALT SWAP TUCK FROMALT
>[K] ADD GREATERTHANOREQUAL VERIFY
>1 SWAP FORWARD_TARGET
>DUP FORWARD_PARTIAL
>0 ROT ROT
>FROMALT DUP 3 SWAP FORWARD_LEAF_UPDATE
>0
>  ENDIF
>  CSV
>  1ADD
>
> where "SSS" is a pushdata of the rest of the script ("TOALT TOALT TOALT
> .. 1ADD").
>
> Finally, make that script the sole tapleaf, accompanied by a NUMS point
> as the internal public key, calculate the taproot address corresponding
> to that, and send the ordinal to that address as the first satoshi.
>
> There are two ways to spend that script. With an empty witness stack,
> the following will be executed:
>
>  [X] [V] [SSS] TOALT TOALT TOALT
>-- altstack now contains [SSS V X]
>  0 PUSHCURRENTINPUTINDEX EQUALVERIFY
>-- this input is the first, so the ordinal will move to the first
>   output
>  DEPTH NOT IF
>-- take this branch: the auction is over!
>1 [VENDOR] FWD_TARGET
>-- output 1 gets the entire value of this input, and pays to
>   the vendor's hardcoded scriptPubKey
>0 1 FORWARD_PARTIAL
>0 FROMALT FORWARD_TARGET
>-- we forward at least 10k sats to output 0 (if there were 0 sats,
>   the ordinal would end up in output 1 instead, which would be a
>   bug), and output 0 pays to scriptPubKey "X"
>144
>  ELSE .. ENDIF
>-- skip over the other branch
>  CSV
>-- check that this input has baked for 144 blocks (~1 day)
>  1ADD
> -- leave 145 on the stack, which is true. success.

Re: [bitcoin-dev] Surprisingly, Tail Emission Is Not Inflationary

2022-07-13 Thread Zac Greenwood via bitcoin-dev
> your proof is incorrect (or, rather, relies on a highly unrealistic
assumption)

The assumption that coin are lost at a constant rate is not required. Tail
emission will asymptotically decrease the rate of inflation to zero, at
which point the increase in coin exactly matches the amount of coin lost.
The rate at which coin are lost is irrelevant.

This is easy to see. Consider no coin are ever lost. The rate of inflation
will slowly decline to zero as the amount of coin grows to infinity.
However, lost coin ensures that the point at which the rate of inflation
becomes zero will be reached sooner.

If a black swan event destroys 90% of all coin, the constant tail emission
will instantly begin to inflate the supply at a 10x higher percentage. The
inflation expressed as a percentage will also immediately start to decline
because each new coin will inflate the total supply with a slightly smaller
percentage than the previous new coin. The rate of inflation will continue
to decline until zero, at which point it again matches the coin-loss
induced deflation rate.

Another scenario. Suppose that the number of coin lost becomes
significantly less for instance because better wallets and a more mature
ecosystem prevent many common coin loss events. A constant issuance of new
coin would increase the total supply, but each new coin would add less to
the total supply when expressed as a percentage. The rate of inflation
would decline to zero, at which point it again has matched the rate of
deflation due to coin loss.

Even when the rate at which coin are lost will not be constant, a tail
emission will tend to an equilibrium.

It must be observed that tail emission causes the total *potential* supply
to vary greatly depending on the deflation rate. In a low-deflation
scenario, the supply will have to grow much larger before an equilibrium
can be reached than in a scenario with a moderate deflation rate. Not being
able to predict the ultimate total supply of coin however seems
undesirable. But is it really?

The rate of inflation required for keeping Bitcoin useful highly depends on
the value of the token. At US$100k, a tail emission of 1 BTC per block
ensures safety within a few blocks for even large amounts. Continuing this
example, 1 BTC per block would mean 5.25m extra coin per 100 years. At 21m
coins and 1 BTC perpetual reward per block, the rate of inflation would be
0.25% per year.

This should put things a bit into perspective.
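The dN/dt = k − λN model discussed in the quoted thread below can be checked numerically. The following is my own illustrative sketch (not from the email); the loss rate λ is an assumed example value, and the emission k matches the 1 BTC per block example above:

```python
# Sketch: with constant emission k and proportional coin loss lambda*N,
# the supply converges to the equilibrium k / lambda, where new issuance
# exactly offsets coins lost, regardless of the starting supply.
k = 1.0 * 144 * 365          # emission: 1 BTC/block, ~144 blocks/day
lam = 0.005                  # assumed: 0.5% of the supply lost per year
N = 21_000_000.0             # starting supply (any value works)

for year in range(5000):
    N += k - lam * N         # one year: emission minus coins lost

equilibrium = k / lam        # analytic fixed point of dN/dt = k - lam*N
assert abs(N - equilibrium) / equilibrium < 1e-6
print(round(equilibrium))    # prints 10512000 at these example rates
```

Note that the equilibrium depends strongly on the assumed loss rate, which matches the observation above that the ultimate supply cannot be predicted.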


On Tue, 12 Jul 2022 at 01:58, Anthony Towns via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> On Mon, Jul 11, 2022 at 08:56:04AM -0400, Erik Aronesty via bitcoin-dev
> wrote:
> > > Alternatively, losses could be at a predictable rate that's entirely
> > > different to the one Peter assumes.
> > No, peter only assumes that there *is* a rate.
>
> No, he assumes it's a constant rate. His integration step gives a
> different result if lambda changes with t:
> https://www.wolframalpha.com/input?i=dN%2Fdt+%3D+k+-+lambda%28t%29*N
>
> On Mon, Jul 11, 2022 at 12:59:53PM -0400, Peter Todd via bitcoin-dev wrote:
> > Give me an example of an *actual* inflation rate you expect to see,
> given a
> > disaster of a given magnitude.
>
> All I was doing was saying your proof is incorrect (or, rather, relies
> on a highly unrealistic assumption), since I hadn't seen anybody else
> point that out already.
>
> But even if the proof were correct, I don't think it provides a useful
> mechanism (since there's no reason to think miners gaining all the coins
> lost in a year will be sufficient for anything), and I don't really
> think the "security budget" framework (ie, that the percentage of total
> supply given to miners each year is what's important for security)
> you're implicitly relying on is particularly meaningful.
>
> So no, not particularly interested in diving into it any deeper.
>
> Cheers,
> aj
>


Re: [bitcoin-dev] No Order Mnemonic

2022-07-09 Thread Zac Greenwood via bitcoin-dev
Sorting a seed alphabetically reduces entropy by ~29 bits.

A 12-word seed has 12! permutations, or about 479 million, which is
ln(479m) / ln(2) ~= 29 bits of entropy. Sorting removes this entropy
entirely, reducing the seed entropy from 128 to about 99 bits.
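The figures can be verified with a few lines of standard-library Python (my own illustration, assuming all 12 words are distinct):

```python
# A 12-word mnemonic has 12! orderings; alphabetical sorting collapses
# all of them into one, discarding log2(12!) bits of entropy.
import math

orderings = math.factorial(12)
lost_bits = math.log2(orderings)

assert orderings == 479_001_600          # ~479 million permutations
assert 28 < lost_bits < 29               # ~28.8, i.e. ~29 bits
print(f"{lost_bits:.1f} bits lost, {128 - lost_bits:.1f} bits remain")
# prints: 28.8 bits lost, 99.2 bits remain
```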

Zac


On Fri, 8 Jul 2022 at 16:09, James MacWhyte via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

>
> What do you do if the "first" word (of 12), happens to be the last word in
>> the list alphabetically?
>>
>
> That couldn't happen. If one word is the very last from the wordlist, it
> would end up at the end of your mnemonic once you rearrange your 12 words
> alphabetically.
>
> However!
>
> (@vjudeu) Choosing 11 random words and then sorting them alphabetically
> before assigning a checksum would reduce entropy considerably. If you think
> about it, to bruteforce the entire keyspace one would only need to come up
> with every possible combination of 11 words + 1 checksum. I'm not the best
> at napkin math, but I think that leaves you with around 10 trillion
> combinations, which would only take a couple months to exhaust with
> hardware that can do 1 million guesses per second.
>
>
> James


Re: [bitcoin-dev] User Resisted Soft Fork for CTV

2022-04-25 Thread Zac Greenwood via bitcoin-dev
On Mon, 25 Apr 2022 at 07:36, ZmnSCPxj  wrote

CTV *can* benefit layer 2 users, which is why I switched from vaguely
> apathetic to CTV, to vaguely supportive of it.


Other proposals exist that also benefit L2 solutions. What makes you
support CTV specifically?

Centrally documenting the implications of each side by side and point by
point might be a useful next step. This would enable a larger part of the
community to understand each proposal and may reduce repetition and
misunderstandings on this list.

Once a common understanding of the implications of each proposal is in
place, their tradeoffs can be considered, facilitating consensus on which
proposal benefits the maximum number of users.

Zac


Re: [bitcoin-dev] User Resisted Soft Fork for CTV

2022-04-22 Thread Zac Greenwood via bitcoin-dev
On Fri, 22 Apr 2022 at 09:56, Keagan McClelland via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> I think that trying to find ways to activate non-invasive changes should
> be everyone's goal, *even if* they personally may not have an immediate use
> case
>

A change that increases the number of use cases of Bitcoin affects all
users and is *not* non-invasive. More use cases mean more blockchain usage,
which increases the price of a transaction for *everyone*.

I like the maxim of Peter Todd: any change of Bitcoin must benefit *all*
users. This means that every change must have well-defined and transparent
benefits. Personally I believe that the only additions to the protocol that
would still be acceptable are those that clearly benefit layer 2 solutions
such as LN *and* do not carry the dangerous potential of getting abused by
freeloaders selling commercial services on top of “free” eternal storage on
the blockchain.

Zac

>


Re: [bitcoin-dev] 7 Theses on a next step for BIP-119

2022-04-21 Thread Zac Greenwood via bitcoin-dev
On Wed, 20 Apr 2022 at 15:49, Michael Folkson via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

Assuming 90 percent of miners don't signal for it in one of the Speedy
> Trial windows then the activation attempt will have failed and it will be
> back in Jeremy's court whether he tries again with a different activation
> attempt.
>
> Assuming 90 percent of miners do signal for it (unlikely in my opinion but
> presumably still a possibility) then the CTV soft fork could activate
> unless full nodes resist it.
>

This is wrong. Miners do not have the mandate to decide the fate of
softforks. The MO of softforks is that once a softfork has been merged, it
already has consensus and must eventually be activated. The various
activation methods exist to ensure miners cannot sabotage a softfork that
has consensus.

The way you phrase it makes it sound like miners have some say over
softforks. This is not the case.

Zac

>


Re: [bitcoin-dev] OP_RETURN inside TapScript

2022-02-25 Thread Zac Greenwood via bitcoin-dev
Hi ZmnSCPxj,

> Either you consume the entire UTXO (take away the "U" from the "UTXO")
completely and in full, or you do not touch the UTXO

Ok, so enabling partial spending of a UTXO would be a significant departure
from the system's design philosophy.

I have been unclear about the fee part. In my proposal there’s only one
input and zero outputs, so normally there would be no way to set any fee.
One could add a fee field although that would be slightly wasteful — it may
be sufficient to just specify the fee *rate*, for instance 0-255
sat/payload_byte, requiring only one byte for the fee. The calculation of
the actual fee can be performed by both the network and the sender. The fee
equals payload_size*feerate +
an-amount-calculated-by-preset-rules-such-that-it-raises-the-cost-of-the-transaction-to-only-marginally-less-than-what-it
would-have-cost-to-store-the-same-amount-of-data-using-one-or-more-OP_RETURN-transactions.

However explicitly specifying the fee amount is probably preferable for the
sake of transparency.

I wonder if this proposal could technically work. I fully recognize though
that even if it would, it has close to zero chances becoming reality as it
breaks the core design based on *U*TXOs (and likely also a lot of existing
software) — thank you for pointing that out and for your helpful feedback.

Zac


On Fri, 25 Feb 2022 at 13:48, ZmnSCPxj  wrote:

> Good morning Zac,
>
> > Hi ZmnSCPxj,
> >
> > To me it seems that more space can be saved.
> >
> > The data-“transaction” need not specify any output. The network could
> subtract the fee amount of the transaction directly from the specified UTXO.
>
> That is not how UTXO systems like Bitcoin work.
> Either you consume the entire UTXO (take away the "U" from the "UTXO")
> completely and in full, or you do not touch the UTXO (and cannot get fees
> from it).
>
> > A fee also need not to be specified.
>
> Fees are never explicit in Bitcoin; it is always the difference between
> total input amount minus the total output amount.
>
> > It can be calculated in advance both by the network and the transaction
> sender based on the size of the data.
>
> It is already implicitly calculated by the difference between the total
> input amount minus the total output amount.
>
> You seem to misunderstand as well.
> Fee rate is computed from the fee (computed from total input minus total
> output) divided by the transaction weight.
> Nodes do not compute fees from feerate and weight.
>
> > The calculation of the fee should be such that it only marginally
> cheaper to use this new construct over using one or more transactions. For
> instance, sending 81 bytes should cost as much as two OP_RETURN
> transactions (minus some marginal discount to incentivize the use of this
> more efficient way to store data).
>
> Do you want to change weight calculations?
> *reducing* weight calculations is a hardfork, increasing it is a softfork.
>
> > If the balance of the selected UTXO is insufficient to pay for the data
> then the transaction will be invalid.
> >
> > I can’t judge whether this particular approach would require a hardfork,
> sadly.
>
> See above note, if you want to somehow reduce the weight of the data so as
> to reduce the cost of data relative to `OP_RETURN`, that is a hardfork.
>
> Regards,
> ZmnSCPxj
>


Re: [bitcoin-dev] OP_RETURN inside TapScript

2022-02-25 Thread Zac Greenwood via bitcoin-dev
Hi ZmnSCPxj,

To me it seems that more space can be saved.

The data-“transaction” need not specify any output. The network could
subtract the fee amount of the transaction directly from the specified
UTXO. A fee also need not be specified. It can be calculated in advance
both by the network and the transaction sender based on the size of the
data.

The calculation of the fee should be such that it is only marginally cheaper
to use this new construct over using one or more transactions. For
instance, sending 81 bytes should cost as much as two OP_RETURN
transactions (minus some marginal discount to incentivize the use of this
more efficient way to store data).

If the balance of the selected UTXO is insufficient to pay for the data
then the transaction will be invalid.

I can’t judge whether this particular approach would require a hardfork,
sadly.

Zac


On Fri, 25 Feb 2022 at 04:19, ZmnSCPxj  wrote:

> Good morning Zac,
>
> > Hi ZmnSCPxj,
> >
> > Any benefits of my proposal depend on my presumption that using a
> standard transaction for storing data must be inefficient. Presumably a
> transaction takes up significantly more on-chain space than the data it
> carries within its OP_RETURN. Therefore, not requiring a standard
> transaction for data storage should be more efficient. Facilitating data
> storage within some specialized, more space-efficient data structure at
> marginally lower fee per payload-byte should enable reducing the footprint
> of storing data on-chain.
> >
> > In case storing data through OP_RETURN embedded within a transaction is
> optimal in terms of on-chain footprint then my proposal doesn’t seem useful.
>
> You need to have some assurance that, if you pay a fee, this data gets on
> the blockchain.
> And you also need to pay a fee for the blockchain space.
> In order to do that, you need to indicate an existing UTXO, and of course
> you have to provably authorize the spend of that UTXO.
> But that is already an existing transaction structure, the transaction
> input.
> If you are not going to pay an entire UTXO for it, you need a transaction
> output as well to store the change.
>
> Your signature needs to cover the data being published, and it is more
> efficient to have a single signature that covers the transaction input, the
> transaction output, and the data being published.
> We already have a structure for that, the transaction.
>
> So an `OP_RETURN` transaction output is added and you put published data
> there, and existing constructions make everything Just Work (TM).
>
> Now I admit we can shave off some bytes.
> Pure published data does not need an amount, and using a transaction
> output means there is always an amount field.
> We do not want the `OP_RETURN` opcode itself, though if the data is
> variable-size we do need an equivalent to the `OP_PUSH` opcode (which has
> many variants depending on the size of the data).
>
> But that is not really a lot of bytes, and adding a separate field to the
> transaction would require a hardfork.
> We cannot use the SegWit technique of just adding a new field that is not
> serialized for `txid` and `wtxid` calculations, but is committed in a new
> id, let us call it `dtxid`, and a new Merkle Tree added to the coinbase.
> If we *could*, then a separate field for data publication would be
> softforkable, but the technique does not apply here.
> The reason we cannot use that technique is that we want to save bytes by
> having the signature cover the data to be published, and signatures need to
> be validated by pre-softfork nodes looking at just the data committed to in
> `wtxid`.
> If you have a separate signature that is in the `dtxid`, then you spend
> more actual bytes to save a few bytes.
>
> Saving a few bytes for an application that is arguably not the "job" of
> Bitcoin (Bitcoin is supposed to be for value transfer, not data archiving)
> is not enough to justify a **hard**fork.
> And any softfork seems likely to spend more bytes than what it could save.
>
> Regards,
> ZmnSCPxj
>


Re: [bitcoin-dev] OP_RETURN inside TapScript

2022-02-24 Thread Zac Greenwood via bitcoin-dev
Hi ZmnSCPxj,

Any benefits of my proposal depend on my presumption that using a standard
transaction for storing data must be inefficient. Presumably a transaction
takes up significantly more on-chain space than the data it carries within
its OP_RETURN. Therefore, not requiring a standard transaction for data
storage should be more efficient. Facilitating data storage within some
specialized, more space-efficient data structure at marginally lower fee
per payload-byte should enable reducing the footprint of storing data
on-chain.

In case storing data through OP_RETURN embedded within a transaction is
optimal in terms of on-chain footprint then my proposal doesn’t seem useful.

Zac

On Fri, 25 Feb 2022 at 01:05, ZmnSCPxj  wrote:

> Good morning Zac,
>
> > Reducing the footprint of storing data on-chain might better be achieved
> by *supporting* it.
> >
> > Currently storing data is wasteful because it is embedded inside an
> OP_RETURN within a transaction structure. As an alternative, by supporting
> storing of raw data without creating a transaction, waste can be reduced.
>
> If the data is not embedded inside a transaction, how would I be able to
> pay a miner to include the data on the blockchain?
>
> I need a transaction in order to pay a miner anyway, so why not just embed
> it into the same transaction I am using to pay the miner?
> (i.e. the current design)
>
>
>
>
> Regards,
> ZmnSCPxj
>


Re: [bitcoin-dev] OP_RETURN inside TapScript

2022-02-24 Thread Zac Greenwood via bitcoin-dev
Reducing the footprint of storing data on-chain might better be achieved by
*supporting* it.

Currently storing data is wasteful because it is embedded inside an
OP_RETURN within a transaction structure. As an alternative, by supporting
storing of raw data without creating a transaction, waste can be reduced.

Storing data in this way should be only marginally cheaper per on-chain
byte than the current method using OP_RETURN, achieved by applying the
appropriate weight-per-byte to on-chain data.

The intended result is a smaller footprint for on-chain data without making
it cheaper (except marginally in order to disincentivize the use of
OP_RETURN).

Zac


On Thu, 24 Feb 2022 at 10:19, vjudeu via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Since Taproot was activated, we no longer need separate OP_RETURN outputs
> to be pushed on-chain. If we want to attach any data to a transaction, we
> can create "OP_RETURN <data>" as a branch in the TapScript. In this
> way, we can store that data off-chain and we can always prove that they are
> connected with some taproot address, that was pushed on-chain. Also, we can
> store more than 80 bytes for "free", because no such taproot branch will
> ever be pushed on-chain and used as an input. That means we can use "OP_RETURN
> <1.5 GB of data>", create some address having that taproot branch, and
> later prove to anyone that such "1.5 GB of data" is connected with our
> taproot address.
>
> Currently in Bitcoin Core we have "data" field in "createrawtransaction".
> Should the implementation be changed to place that data in a TapScript
> instead of creating separate OP_RETURN output? What do you think?
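The commitment vjudeu describes can be illustrated with the BIP-341 leaf hash of an OP_RETURN script. The sketch below is illustrative only: it handles direct pushes of up to 75 bytes (not multi-gigabyte payloads, which would need proper OP_PUSHDATA encoding), and the helper names are my own:

```python
import hashlib

def tagged_hash(tag: str, msg: bytes) -> bytes:
    # BIP-340/341 tagged hash: sha256(sha256(tag) || sha256(tag) || msg)
    t = hashlib.sha256(tag.encode()).digest()
    return hashlib.sha256(t + t + msg).digest()

def op_return_leaf(data: bytes) -> bytes:
    """BIP-341 TapLeaf hash of the script `OP_RETURN <data>`.
    Simplified: only direct pushes (<= 75 bytes) are encoded here."""
    assert len(data) <= 75
    script = bytes([0x6a, len(data)]) + data       # OP_RETURN, push, data
    leaf_version = 0xC0
    # leaf hash preimage: version || compact_size(len(script)) || script
    ser = bytes([leaf_version, len(script)]) + script  # 1-byte size suffices
    return tagged_hash("TapLeaf", ser)

leaf = op_return_leaf(b"hello bitcoin")
print(leaf.hex())
```

Because only this 32-byte leaf hash ever needs to appear in the taproot commitment, the data itself can stay off-chain while remaining provably linked to the address.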
>


Re: [bitcoin-dev] Bitcoin Legal Defense Fund

2022-01-21 Thread Zac Greenwood via bitcoin-dev
The name of the fund should ideally unambiguously clarify its scope, i.e.,
Bitcoin & development. So maybe “Bitcoin Developers Community LDF”. Or
perhaps “Bitcoin Technical Community LDF” which nicely abbreviates to
BTCLDF.

Zac


On Thu, 13 Jan 2022 at 19:49, jack via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Hi Prayank,
>
> > On 13 Jan 2022, at 10:13, Prayank  wrote:
> > I had few suggestions and feel free to ignore them if they do not make
> sense:
> >
> > 1.Name of this fund could be anything and 'The Bitcoin Legal Defense
> Fund' can be confusing or misleading for newbies. There is nothing official
> in Bitcoin however people believe things written in news articles and some
> of them might consider it as an official bitcoin legal fund.
>
> Excellent point. Will come up with a better name.
>
> Open to ideas and suggestions on all.
>
> jack
>


Re: [bitcoin-dev] Exploring: limiting transaction output amount as a function of total input value

2021-09-01 Thread Zac Greenwood via bitcoin-dev
Hi ZmnSCPxj,

The rate-limiting algorithm would be relatively straightforward. I have
documented the rate-limiting part of the algorithm below; perhaps it can
evoke new ideas on how to make this MAST-able or otherwise implement this
in a privacy-preserving way.

Something like the following:

=> Create an output at block height [h0] with the following properties:

Serving as input at any block height, the maximum amount is limited to
[limit] sats;  // This rule introduces [limit] and is permanent and always
copied over to a change output
Serving as input at a block height < [h0 + window], the maximum amount is
limited to [limit - 0] sats;  // [limit - 0] to emphasize that nothing was
spent yet and no window has started.

=> A transaction occurs at block height [h1], spending [h1_spent].
The payment output created at [h1] is not encumbered and of value
[h1_spent]; // Note, this is the first encumbered transaction so [h1] is
the first block of the first window

The change output created at block height [h1] must be encumbered as
follows:
Serving as input at any block height, the maximum amount is limited to
[limit] sats;  // Permanent rule repeats
Serving as input at a block height < [h1 + window], the maximum amount is
limited to [limit - h1_spent]  // Second permanent rule reduces spendable
amount until height [h1 + window] by [h1_spent]

=> A second transaction occurs at block height [h2], spending [h2_spent].
The payment output created at [h2] is not encumbered and of value
[h2_spent]; // Second transaction, so a second window starts at [h2]

The change output created at block height [h2] must be encumbered as
follows:
Serving as input at any block height, the maximum amount is limited to
[limit] sats;  // Permanent rule repeats
Serving as input at a block height < [h1 + window], the max amount is
limited to [limit - h1_spent - h2_spent] // Reduce spendable amount between
[h1] and [h1 + window] by an additional [h2_spent]
Serving as input in range [h1 + window] <= block height < [h2 + window],
the max amount is limited to [limit - h2_spent]  // First payment no longer
inside this window so [h1_spent] no longer subtracted

... and so on. A rule that pertains to a block height < the current block
height can be abandoned, keeping the number of rules equal to the number of
transactions that exist within the oldest still active window.

Zac


On Tue, Aug 31, 2021 at 4:22 PM ZmnSCPxj  wrote:

> Good morning Zac,
>
> > Hi ZmnSCPxj,
> >
> > Thank you for your helpful response. We're on the same page concerning
> privacy so I'll focus on that. I understand from your mail that privacy
> would be reduced by this proposal because:
> >
> > * It requires the introduction of a new type of transaction that is
> different from a "standard" transaction (would that be P2TR in the
> future?), reducing the anonymity set for everyone;
> > * The payment and change output will be identifiable because the change
> output must be marked encumbered on-chain;
> > * The specifics of how the output is encumbered must be visible on-chain
> as well reducing privacy even further.
> >
> > I don't have the technical skills to judge whether these issues can
> somehow be resolved. In functional terms, the output should be spendable in
> a way that does not reveal that the output is encumbered, and produce a
> change output that cannot be distinguished from a non-change output while
> still being encumbered. Perhaps some clever MAST-fu could somehow help?
>
> I believe some of the covenant efforts may indeed have such clever MAST-fu
> integrated into them, which is why I pointed you to them --- the people
> developing these (aj I think? RubenSomsen?) might be able to accommodate
> this or some subset of the desired feature in a sufficiently clever
> covenant scheme.
>
> There are a number of such proposals, though, so I cannot really point you
> to one that seems likely to have a lot of traction.
>
> Regards,
> ZmnSCPxj
>


Re: [bitcoin-dev] Exploring: limiting transaction output amount as a function of total input value

2021-08-31 Thread Zac Greenwood via bitcoin-dev
Hi ZmnSCPxj,

Thank you for your helpful response. We're on the same page concerning
privacy so I'll focus on that. I understand from your mail that privacy
would be reduced by this proposal because:

* It requires the introduction of a new type of transaction that is
different from a "standard" transaction (would that be P2TR in the
future?), reducing the anonymity set for everyone;
* The payment and change output will be identifiable because the change
output must be marked encumbered on-chain;
* The specifics of how the output is encumbered must be visible on-chain as
well reducing privacy even further.

I don't have the technical skills to judge whether these issues can somehow
be resolved. In functional terms, the output should be spendable in a way
that does not reveal that the output is encumbered, and produce a change
output that cannot be distinguished from a non-change output while still
being encumbered. Perhaps some clever MAST-fu could somehow help?

I imagine that the offered functionality does not justify the
above-mentioned privacy reductions, so unless these can be addressed, without
functional modification this proposal sadly seems dead in the water.

Thanks again.

Zac


On Tue, Aug 31, 2021 at 11:00 AM ZmnSCPxj  wrote:

> Good morning Zac,
>
>
> > Perhaps you could help me understand what would be required to implement
> the *unmodified* proposal. That way, the community will be able to better
> assess the cost (in terms of effort and risk) and weigh it against the
> perceived benefits. Perhaps *then* we find that the cost could be
> significantly reduced without any significant reduction of the benefits,
> for instance by slightly compromising on the functionality such that no
> changes to consensus would be required for its implementation. (I am
> skeptical that this would be possible though). The cost reduction must be
> carefully weighed against the functional gaps it creates.
>
> For one, such output need to be explicitly visible, to implement the
> "change outputs must also be rate-limited".
> A tx spending a rate-limited output has to know that one of the outputs is
> also a rate-limited output.
>
> This flagging needs to be done by either allocating a new SegWit version
> --- a resource that is not lightly allocated, there being only 30 versions
> left if my understanding is correct --- or blessing yet another
> anyone-can-spend `scriptPubKey` template, something we want to avoid which
> is why SegWit has versions (i.e. we want SegWit to be the last
> anyone-can-spend `scriptPubKey` template we bless for a **long** time).
>
> Explicit flagging is bad as well for privacy, which is another mark
> against it.
> Notice how Taproot improves privacy by making n-of-n indistinguishable
> from 1-of-1 (and with proper design or a setup ritual, k-of-n can be made
> indistinguishable from 1-of-1).
> Notice as well that my first counterproposal is significantly more private
> than explicit flagging, and my second counterproposal is also more private
> if wallets change their anti-fee-sniping mitigation.
> This privacy loss represented by explicit flagging will be resisted by
> some people, especially those that use a bunch of random letters as a
> pseudonym (because duh, privacy).
>
> (Yes, people can just decide not to use the privacy-leaking
> explicitly-flagged outputs, but that reduces the anonymity set of people
> who *are* interested in privacy, so people who are interested in privacy
> will prefer that other people do not leak their privacy so they can hide
> among *those* people as well.)
>
> You also probably need to keep some data with each output.
> This can be done by explicitly storing that data in the output directly,
> rather than a commitment to that data --- again, the "change outputs must
> also be rate-limited" requirement needs to check those data.
>
> The larger data stored with the output is undesirable, ideally we want
> each output to just be a commitment rather than contain any actual data,
> because often a 20-byte commitment is smaller than the data that needs to
> be stored.
> For example, I imagine that your original proposal requires, for change
> outputs, to store:
>
> * The actual rate limit.
> * The time frame of the rate limit.
> * The reduced rate limit, since we spent an amount within a specific time
> frame (i.e. residual limit) which is why this is a change output.
> * How long that time frame lasts.
> * A commitment to the keys that can spend this.
>
> Basically, until the residual limit expires, we impose the residual limit,
> then after the expiry of the residual limit we go back to the original rate
> limit.
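The residual-limit behaviour ZmnSCPxj describes can be condensed into a one-line rule; the function and parameter names below are my own illustration:

```python
# Sketch (my own naming) of the residual-limit rule described above: a
# change output carries a reduced "residual" limit until an expiry height,
# after which the original rate limit applies again.

def effective_limit(height: int, limit: int,
                    residual: int, residual_expiry: int) -> int:
    """Spendable ceiling at `height` for a change output that committed
    to `residual` sats until block `residual_expiry`."""
    return residual if height < residual_expiry else limit

print(effective_limit(800_050, 100_000, 20_000, 800_100))  # residual active
print(effective_limit(800_150, 100_000, 20_000, 800_100))  # back to base limit
```

The point of the surrounding discussion is that all four of these parameters would have to be stored explicitly in the output rather than merely committed to, which is what makes the output larger and more identifiable.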
>
> The commitment to the keys itself takes at least 20 bytes, and if you are
> planning a to support k-of-n then that takes at least 32 bytes.
> If this was not explicitly tagged, then a 32 byte commitment to all the
> necessary data would have been enough, but you do need the explicit tagging
> for the "change outputs must be rate-limited too".

Re: [bitcoin-dev] Exploring: limiting transaction output amount as a function of total input value

2021-08-30 Thread Zac Greenwood via bitcoin-dev
Hi ZmnSCPxj,

> I suggest looking into the covenant opcodes and supporting those instead
of your own proposal, as your application is very close to one of the
motivating examples for covenants in the first place.

I believe it is not the right approach to take a proposal, chop off key
aspects of its functionality, and rely on some future change in Bitcoin
that may perhaps enable implementing some watered-down version of the
intended functionality. In my opinion the right order would be to first
discuss the unmodified proposal on a functional level and gauge community
interest, then move forward to discuss technical challenges for the
*unmodified* proposal instead of first knee-capping the proposal in order
to (presumably) reduce cost of implementation.

I believe that we both recognize that the proposed functionality would be
beneficial. I believe that your position is that functionality close to
what I have in mind can be implemented using covenants, albeit with some
gaps. For me personally however these gaps would not be acceptable because
they severely hurt the predictability and intuitiveness of the behavior of
the functionality for the end-user. But as noted, I believe at this point
it is premature to have this discussion.

Perhaps you could help me understand what would be required to implement
the *unmodified* proposal. That way, the community will be able to better
assess the cost (in terms of effort and risk) and weigh it against the
perceived benefits. Perhaps *then* we find that the cost could be
significantly reduced without any significant reduction of the benefits,
for instance by slightly compromising on the functionality such that no
changes to consensus would be required for its implementation. (I am
skeptical that this would be possible though). The cost reduction must be
carefully weighed against the functional gaps it creates.

I am aware that my proposal must be well-defined functionally before being
able to reason about its benefits and implementational aspects. I believe
that the proposed functionality is pretty straightforward, but I am happy
to come up with a more precise functional spec. However, such effort would
be wasted if there is no community interest for this functionality. So far
only few people have engaged with this thread, and I am not sure that this
is because there is no interest in the proposal or because most people just
lurk here and do not feel like giving their opinion on random proposals. It
would be great however to learn about more people's opinions.

As a reminder, the proposed functionality is to enable a user to limit the
amount that they are able to spend from an address within a certain time frame
or window (defined in number of blocks) while retaining the ability to
spend arbitrary amounts using a secondary private key (or set of private
keys). The general use case is to prevent theft of large amounts while
still allowing a user to spend small amounts over time. Hodlers as well as
exchanges dealing with cold, warm and hot wallets come to mind as users who
could materially benefit from this functionality.

Zac



On Mon, Aug 16, 2021 at 1:48 PM ZmnSCPxj  wrote:

> Good morning Zac,
>
> > Thank you for your counterproposal. I fully agree that as a first step
> we must establish whether the proposed functionality can be implemented
> without making any changes to consensus.
> >
> > Your counterproposal is understandably more technical in nature because
> it explores an implementation on top of Bitcoin as-is. However I feel that
> for a fair comparison of the functionality of both proposals a purely
> functional description of your proposal is essential.
> >
> > If I understand your proposal correctly, then I believe there are some
> major gaps between yours and mine:
> >
> > Keys for unrestricted spending: in my proposal, they never have to come
> online unless spending more than the limit is desired. In your proposal,
> these keys are required to come online in several situations.
>
> Correct, that is indeed a weakness.
>
> It is helpful to see https://zmnscpxj.github.io/bitcoin/unchained.html
> Basically: any quorum of signers can impose any rules that are not
> implementable on the base layer, including the rules you desire.
> That quorum is the "offline keyset" in my proposal.
>
> >
> > Presigning transactions: not required in my proposal. Wouldn’t such
> presigning requirement be detrimental for the usability of your proposal?
> Does it mean that for instance the amount and window in which the
> transaction can be spent is determined at the time of signing? In my
> proposal, there is no limit in the number of transactions per window.
>
> No.
> Remember, the output is a simple 1-of-1 or k-of-n of the online keyset.
> The online keyset can spend that wherever and however, including paying it
> out to N parties, or paying part of the limit to 1 party and then paying
> the remainder back to the same onchain keyset so it can access the funds in
> the future.
> 

Re: [bitcoin-dev] Exploring: limiting transaction output amount as a function of total input value

2021-08-16 Thread Zac Greenwood via bitcoin-dev
Hi ZmnSCPxj,

Thank you for your counterproposal. I fully agree that as a first step we
must establish whether the proposed functionality can be implemented
without making any changes to consensus.

Your counterproposal is understandably more technical in nature because it
explores an implementation on top of Bitcoin as-is. However I feel that for
a fair comparison of the functionality of both proposals a purely
functional description of your proposal is essential.

If I understand your proposal correctly, then I believe there are some
major gaps between yours and mine:

Keys for unrestricted spending: in my proposal, they never have to come
online unless spending more than the limit is desired. In your proposal,
these keys are required to come online in several situations.

Presigning transactions: not required in my proposal. Wouldn’t such
presigning requirement be detrimental for the usability of your proposal?
Does it mean that for instance the amount and window in which the
transaction can be spent is determined at the time of signing? In my
proposal, there is no limit in the number of transactions per window.

Number of windows: limited in your proposal, unlimited in mine.

There are probably additional gaps that I am currently not technically able
to recognize.

I feel that the above gaps are significant enough to state that your
proposal does not meet the basic requirements of my proposal.

Next to consider is whether the gap is acceptable, weighing the effort to
implement the required consensus changes against the effort and feasibility
of implementing your counterproposal.

I feel that your counterproposal has little chance of being implemented
because of the still considerable effort required and the poor result in
functional terms. I also wonder if your proposal is feasible considering
wallet operability.

Considering all the above, I believe that implementing consensus changes in
order to support the proposed functionality would be preferable over your
counterproposal.

I acknowledge that a consensus change takes years and is difficult to
achieve, but that should not be any reason to stop exploring the appetite
for the proposed functionality and perhaps start looking at possible
technical solutions.

Zac


On Sat, 14 Aug 2021 at 03:50, ZmnSCPxj  wrote:

> Good morning Zac,
>
>
> > Hi ZmnSCPxj,
> >
> > Thank you for your insightful response.
> >
> > Perhaps I should take a step back and take a strictly functional angle.
> Perhaps the list could help me to establish whether the proposed
> functionality is:
> >
> > Desirable;
> > Not already possible;
> > Feasible to implement.
> >
> > The proposed functionality is as follows:
> >
> > The ability to control some coin with two private keys (or two sets of
> private keys) such that spending is limited over time for one private key
> (i.e., it is for instance not possible to spend all coin in a single
> transaction) while spending is unrestricted for the other private key (no
> limits apply). No limits must apply to coin transacted to a third party.
> >
> > Also, it must be possible never having to bring the unrestricted private
> key online unless more than the limit imposed on the restrictive private
> key is desired to be spent.
> >
> > Less generally, taking the perspective of a hodler: the user must be
> able to keep one key offline and one key online. The offline key allows
> unrestricted spending, the online key is limited in how much it is allowed
> to spend over time.
> >
> > Furthermore, the spending limit must be intuitive. Best candidate I
> believe would be a maximum spend per some fixed number of blocks. For
> instance, the restrictive key may allow a maximum of 100k sats per any
> window of 144 blocks. Ofcourse the user must be able to set these
> parameters freely.
>
> My proposal does not *quite* implement a window.
> However, that is because it uses `nLockTime`.
>
> With the use of `nSequence` in relative-locktime mode, however, it *does*
> implement a window, sort of.
> More specifically, it implements a timeout on spending --- if you spend
> using a presigned transaction (which creates an unencumbered
> specific-valued TXO that can be arbitrarily spent with your online keyset)
> then you cannot get another "batch" of funds until the `nSequence` relative
> locktime passes.
> However, this *does* implement a window that limits a maximum value
> spendable per any window of the relative timelock you select.
>
> The disadvantage is that `nSequence` use is a lot more obvious and
> discernible than `nLockTime` use.
> Many wallets today use non-zero `nLockTime` for anti-fee-sniping, and that
> is a good cover for `nLockTime` transactions.
> I believe Dave Harding proposed that wallets should also use, at random,
> (say 50-50) `nSequence`-in-relative-locktime-mode as an alternate
> anti-fee-sniping mechanism.
> This alternate anti-fee-sniping would help cover `nSequence` use.
>
> Note that my proposal does impose a maximum limit on the numb

Re: [bitcoin-dev] Exploring: limiting transaction output amount as a function of total input value

2021-08-13 Thread Zac Greenwood via bitcoin-dev
Hi ZmnSCPxj,

Thank you for your insightful response.

Perhaps I should take a step back and take a strictly functional angle.
Perhaps the list could help me to establish whether the proposed
functionality is:

Desirable;
Not already possible;
Feasible to implement.

The proposed functionality is as follows:

The ability to control some coin with two private keys (or two sets of
private keys) such that spending is limited over time for one private key
(i.e., it is for instance not possible to spend all coin in a single
transaction) while spending is unrestricted for the other private key (no
limits apply). No limits must apply to coin transacted to a third party.

Also, it must be possible never having to bring the unrestricted private
key online unless more than the limit imposed on the restrictive private
key is desired to be spent.

Less generally, taking the perspective of a hodler: the user must be able
to keep one key offline and one key online. The offline key allows
unrestricted spending, the online key is limited in how much it is allowed
to spend over time.

Furthermore, the spending limit must be intuitive. Best candidate I believe
would be a maximum spend per some fixed number of blocks. For instance, the
restrictive key may allow a maximum of 100k sats per any window of 144
blocks. Of course the user must be able to set these parameters freely.

I look forward to any feedback you may have.

Zac



On Tue, 10 Aug 2021 at 04:17, ZmnSCPxj  wrote:

> Good morning Zac,
>
>
> With some work, what you want can be implemented, to some extent, today,
> without changes to consensus.
>
> The point you want, I believe, is to have two sets of keys:
>
> * A long-term-storage keyset, in "cold" storage.
> * A short-term-spending keyset, in "warm" storage, controlling only a
> small amount of funds.
>
> What you can do would be:
>
> * Put all your funds in a single UTXO, with an k-of-n of your cold keys
> (ideally P2TR, or some P2WSH k-of-n).
> * Put your cold keys online, and sign a transaction spending the above
> UTXO, and spending most of it to a new address that is a tweaked k-of-n of
> your cold keys, and a smaller output (up to the limit you want) controlled
> by the k-of-n of your warm keys.
>   * Keep this transaction offchain, in your warm storage.
> * Put your cold keys back offline.
> * When you need to spend using your warm keys, bring the above transaction
> onchain, then spend from the budget as needed.
>
>
> If you need to have some estimated amount of usable funds for every future
> unit of time, just create a chain of transactions with future `nLockTime`.
>
>                            nLockTime +1day  nLockTime +2day
>               +--------+   +--------+   +--------+
>   cold UTXO ->|cold TXO|-->|cold TXO|-->|cold TXO|--> etc.
>               |        |   |        |   |        |
>               |warm TXO|   |warm TXO|   |warm TXO|
>               +--------+   +--------+   +--------+
>
> Pre-sign the above transactions, store the pre-signed transactions in warm
> storage together with your warm keys.
> Then put the cold keys back offline.
>
> Then from today to tomorrow, you can spend only the first warm TXO.
> From tomorrow to the day after, you can spend only the first two warm TXOs.
> And so on.
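The presigned chain above can be sketched as plain bookkeeping. The amounts, heights, and dictionary layout below are arbitrary assumptions of my own; no actual transaction construction or signing is modelled:

```python
# Toy schedule for the presigned-chain idea: each link releases one warm
# TXO and re-locks the cold remainder one day later via nLockTime.
# Purely illustrative bookkeeping, not real transactions.

BLOCKS_PER_DAY = 144

def build_chain(total: int, per_day: int, start_height: int):
    chain, remaining, height = [], total, start_height
    while remaining > 0:
        warm = min(per_day, remaining)
        remaining -= warm
        chain.append({"nLockTime": height, "warm": warm, "cold": remaining})
        height += BLOCKS_PER_DAY
    return chain

for link in build_chain(total=500_000, per_day=200_000, start_height=800_000):
    print(link)
```

Each day at most one more warm TXO becomes broadcastable, which is exactly the loss-limiting property described: stolen warm keys only expose the warm TXOs already unlocked.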
>
> If tomorrow your warm keys are stolen, you can bring the cold keys online
> to claim the second cold TXO and limit your fund loss to only just the
> first two warm TXOs.
>
> The above is bulky, but it has the advantage of not using any special
> opcodes or features (improving privacy, especially with P2TR which would in
> theory allow k-of-n/n-of-n to be indistinguishable from 1-of-1), and using
> just `nLockTime`, which is much easier to hide since most modern wallets
> will set `nLockTime` to recent block heights.
>
> Regards,
> ZmnSCPxj
>
>


Re: [bitcoin-dev] Exploring: limiting transaction output amount as a function of total input value

2021-08-05 Thread Zac Greenwood via bitcoin-dev
Hi Billy,

> It sounds like you're proposing an opcode

No. I don’t have enough knowledge of Bitcoin to be able to tell how (and
if) rate-limiting can be implemented as I suggested. I am not able to
reason about opcodes, so I kept my description at a more functional level.

> I still don't understand why its useful to specify those as absolute
block heights

I feel that this is a rather uninteresting data representation aspect that’s
not worth going back and forth about. Sure, specifying the length of the
epoch may also be an option, although at the price of giving up some
functionality, and without much if any gains.

By explicitly specifying the start and end block of an epoch, the user has
more flexibility in shifting the epoch (using alternate values for
epochStart and epochEnd) and simultaneously increasing the length of an
epoch. These seem rather exotic features, but there’s no harm in retaining
them.

> if you have a UTXO encumbered by rateLimit(epochStart = 800100, epochEnd
= 800200, limit = 100k, remain = 100k), what happens if you don't spend
that UTXO before block 800200?

The rate limit remains in place. So if this UTXO is spent in block 90,
then at most 100k may be spent. Also, the new epoch must be at least 100
blocks and remain must correctly account for the actual amount spent.

> This is how I'd imagine creating an opcode like this:

> rateLimit(windowSize = 144 blocks, limit = 100k sats)

This would require the system to bookkeep how much was spent since the
first rate-limited output. It is a more intuitive way of rate-limiting but
it may be much more difficult to implement, which is why I went with the
epoch-based rate limiting solution. In terms of functionality, I believe
the two solutions are nearly identical for all practical purposes.

Your next section confuses me. As I understand it, using an address as
input for a transaction will always spend the full amount at that address.
That’s why change addresses are required, no? If Bitcoin were able to pay
exact amounts then there wouldn’t be any need for change outputs.

Zac


On Thu, 5 Aug 2021 at 08:39, Billy Tetrud  wrote:

> >   A maximum amount is allowed to be spent within EVERY epoch.
>
> It sounds like you're proposing an opcode that takes in epochStart and
> epochEnd as parameters. I still don't understand why its useful to specify
> those as absolute block heights. You mentioned that this enables more
> straightforward validation logic, but I don't see how. Eg, if you have a
> UTXO encumbered by rateLimit(epochStart = 800100, epochEnd = 800200, limit
> = 100k, remain = 100k), what happens if you don't spend that UTXO before
> block 800200? Is the output no longer rate limited then? Or is the opcode
> calculating 800200-800100 = 100 and applying a rate limit for the next
> epoch? If the first, then the UTXO must be spent within one epoch to remain
> rate limited. If the second, then it seems nearly identical to simply
> specifying window=100 as a parameter instead of epochStart and epochEnd.
>
> > then there must be only a single (rate-limited) output
>
> This rule would make transactions tricky if you're sending money into
> someone else's wallet that may be rate limited. If the requirement is that
> only you yourself can send money into a rate limited wallet, then this
> point is moot but it would be ideal to not have such a requirement.
>
> This is how I'd imagine creating an opcode like this:
>
> rateLimit(windowSize = 144 blocks, limit = 100k sats)
>
> This would define that the epoch is 1 day's worth of blocks. This would
> evenly divide bitcoin's retarget period and so each window would start and
> end at those dividing lines (eg the first 144 blocks of the retargetting
> period, then the second, then the third, etc).
>
> When this output is spent, it ensures that there's a maximum of 100k sats
> is sent to addresses other than the originating address. It also records
> the amount spent in the current 144 block window for that address (eg by
> simply recording the already-spent amount on the resulting UTXO and having
> an index that allows looking up UTXOs by address and adding them up). That
> way, when any output from that address is spent again, if a new 144 block
> window has started, the limit is reset, but if its still within the same
> window, the already-spent amounts for UTXOs from that address are added up
> and subtracted from the limit, and that number is the remaining limit a
> subsequent transaction needs to adhere to.
>
> This way, a 3rd party could send transactions into an address like this, and
> multiple outputs can be combined and used to spend to arbitrary outputs (up
> to the rate limit of course).
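
The quoted windowSize-based design can be sketched in a few lines. This is only an illustrative model of the idea as described; `WINDOW`, the function names, and the `(height, amount)` spend list are my own assumptions, not part of any proposed opcode:

```python
# Illustrative sketch of the quoted rateLimit(windowSize, limit) idea.
# Windows are aligned to fixed 144-block boundaries; spends from earlier
# windows no longer count against the limit.

WINDOW = 144  # blocks; evenly divides the 2016-block retarget period


def window_start(height: int) -> int:
    """First block height of the window containing `height`."""
    return height - (height % WINDOW)


def remaining_limit(limit: int, prior_spends, current_height: int) -> int:
    """prior_spends: list of (height, amount) already spent from the address.
    Only spends inside the current window reduce the remaining limit."""
    start = window_start(current_height)
    spent_this_window = sum(a for h, a in prior_spends if h >= start)
    return max(0, limit - spent_this_window)


# A 30k spend earlier in the same window leaves 70k of a 100k limit.
print(remaining_limit(100_000, [(800_000, 30_000)], 800_010))  # 70000
```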
>
> On Wed, Aug 4, 2021 at 3:48 AM Zac Greenwood  wrote:
>
>> > Ah I see, this is all limited to within a single epoch.
>>
>> No, that wouldn't be useful. A maximum amount is allowed to be spent
>> within EVERY epoch.
>>
>> Consider an epoch length of 100 blocks with a spend limit of 200k per
>> epoch. T

Re: [bitcoin-dev] Exploring: limiting transaction output amount as a function of total input value

2021-08-04 Thread Zac Greenwood via bitcoin-dev
> Ah I see, this is all limited to within a single epoch.

No, that wouldn't be useful. A maximum amount is allowed to be spent within
EVERY epoch.

Consider an epoch length of 100 blocks with a spend limit of 200k per
epoch. The following is allowed:

epoch1 (800101 - 800200): spend 120k in block 800140. Remaining for epoch1:
80k;
epoch1 (800101 - 800200): spend another 60k in block 800195. Remaining for
epoch1: 20k;
epoch2 (800201 - 800300): spend 160k in block 800201. Remaining for epoch2:
40k.

Since the limit pertains to each individual epoch, it is allowed to spend
up to the full limit at the start of any new epoch. In this example, the
spending was as follows:

800140: 120k
800195: 60k
800201: 160k.

Note that in a span of 62 blocks a total of 340k sats was spent. This may
seem to violate the 200k limit per 100 blocks, but this is the result of
using a per-epoch limit. This allows a maximum of 400k to be spent in 2
blocks like so: 200k in the last block of an epoch and another 200k in the
first block of the next epoch. However this is inconsequential for the
intended goal of rate-limiting which is to enable small spends over time
from a large amount and to prevent theft of a large amount with a single
transaction.
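
The per-epoch accounting above can be made concrete with a small sketch. The epoch length, limit, and epoch numbering (epoch1 starting at 800101) come from the example itself; the code is illustrative, not a concrete validation design:

```python
# Illustrative per-epoch spend accounting. Each fixed 100-block epoch has
# its own independent 200k-sat budget, matching the example above.

EPOCH_LEN = 100         # blocks per epoch
LIMIT = 200_000         # sats spendable per epoch
EPOCH1_START = 800_101  # first block of epoch1 in the example


def epoch_index(height: int) -> int:
    """Epoch number for a block height (epoch1 starts at 800101)."""
    return (height - EPOCH1_START) // EPOCH_LEN


def allowed(spends) -> bool:
    """spends: list of (height, amount). Valid iff no epoch total exceeds LIMIT."""
    totals = {}
    for height, amount in spends:
        e = epoch_index(height)
        totals[e] = totals.get(e, 0) + amount
        if totals[e] > LIMIT:
            return False
    return True


# 340k sats within 62 blocks is valid: the spends fall into two epochs.
print(allowed([(800_140, 120_000), (800_195, 60_000), (800_201, 160_000)]))  # True
```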

To explain the proposed design more clearly, I have renamed the params as
follows:

epochStart: block height of first block of the current epoch (was: h0);
epochEnd: block height of last block of the current epoch (was: h1);
limit: the maximum total amount allowed to be spent within the current
epoch (was: a);
remain: the remaining amount allowed to be spent within the current epoch
(was: a_remaining);

Also, to illustrate that the params are specific to a transaction, I will
henceforth prefix each param with the transaction name like so:
tx8_limit, tx31c_remain, tx42z_epochStart, ... etc.

For simplicity, only transactions with no more than one rate-limited input
are considered, and with no more than two outputs: one rate-limited change
output, and a normal (not rate-limited) output.

Normally, a simple transaction generates two outputs: one for a payment to
a third party and one for the change address. Again for simplicity, we
demand that a transaction which introduces rate-limiting must have only a
single, rate-limited output. The validation rule might be: if a transaction
has rate-limiting params and none of its inputs are rate-limited, then
there must be only a single (rate-limited) output (and no second or change
output).

Consider rate limiting transactions tx1 having one or more normal (non
rate-limited) inputs:

tx1 gets included at block height 800084;
The inputs of tx1 are not rate-limited => tx1 must have only a single
output which will become rate-limited;
params: tx1_epochStart=800001, tx1_epochEnd=800100, tx1_limit=200k,
tx1_remain=200k;
=> This defines that an epoch has 100 blocks and no more than 200k sats may
be spent in any one epoch. Within the current epoch, 200k sats may still be
spent.

This transaction begins to rate-limit a set of inputs, so it has a single
rate-limited output.
Let's explore transactions that have the output of tx1 as their input. I
will denote the output of tx1 as "out1".

tx2a has out1 as its only input;
tx2a spends 50k sats and gets included at block height 803050;
tx2a specifies the following params for its change output "chg2a":
chg2a_epochStart=803001, chg2a_epochEnd=803100;
chg2a_limit=200k, chg2a_remain=150k.

To enforce rate-limiting, the system must validate the params of the change
output chg2a to ensure that overspending is not allowed.

The above params are allowed because:
=> 1. the epoch does not become smaller than 100 blocks [(chg2a_epochEnd -
chg2a_epochStart) >= (tx1_epochEnd - tx1_epochStart)]
=> 2. tx1_limit has not been increased (chg2a_limit <= tx1_limit)
=> 3. the amount spent (50k sats) does not exceed tx1_remain AND does not
exceed chg2a_limit;
=> 4. chg2a_remain is 50k sats less than chg2a_limit.
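
A minimal sketch of the four checks above, assuming each rate-limited input and change output carries the four params; the `Params` layout and function name are illustrative assumptions, not a concrete script design:

```python
# Illustrative validation of a rate-limited change output against its
# rate-limited input, implementing rules 1-4 from the text.
from collections import namedtuple

Params = namedtuple("Params", "epoch_start epoch_end limit remain")


def validate_change(inp: Params, chg: Params, spent: int) -> bool:
    # 1. the epoch must not become smaller
    epoch_ok = (chg.epoch_end - chg.epoch_start) >= (inp.epoch_end - inp.epoch_start)
    # 2. the limit must not be increased
    limit_ok = chg.limit <= inp.limit
    # 3. the spent amount must not exceed the input's remain nor the new limit
    spend_ok = spent <= inp.remain and spent <= chg.limit
    # 4. the new remain must account for the spend (it may self-restrict further)
    remain_ok = chg.remain <= chg.limit - spent
    return epoch_ok and limit_ok and spend_ok and remain_ok


tx1 = Params(800_001, 800_100, 200_000, 200_000)
chg2a = Params(803_001, 803_100, 200_000, 150_000)
print(validate_change(tx1, chg2a, 50_000))  # True
```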

A transaction may also further constrain spending like so:

tx2b has out1 as its only input;
tx2b spends 8k sats and gets included at block height 808105;
tx2b specifies the following params for its change output "chg2b":
chg2b_epochStart=808101, chg2b_epochEnd=808250;
chg2b_limit=10k, chg2b_remain=0.

These params are allowed because:
=> 1. the epoch does not become smaller than 100 blocks. It is fine to
increase the epoch to 150 blocks because it does not enable exceeding the
original rate-limit;
=> 2. the limit (chg2b_limit) has been decreased to 10k sats, further
restricting the maximum amount allowed to be spent within the current and
any subsequent epochs;
=> 3. the amount spent (8k sats) does not exceed tx1_remain AND does not
exceed chg2b_limit;
=> 4. chg2b_remain has been set to zero, meaning that within the current
epoch (block height 808101 to and including 808250), tx2b cannot be used as
a spending input to any transaction.

Starting from block height 808251, a new epoch will start and the
rate-limited output of t

Re: [bitcoin-dev] Exploring: limiting transaction output amount as a function of total input value

2021-08-02 Thread Zac Greenwood via bitcoin-dev
[Note: I've moved your reply to the newly started thread]

Hi Billy,

Thank you for your kind and encouraging feedback.

I don't quite understand why you'd want to define a specific span of blocks
> for the rate limit. Why not just specify the size of the window (in blocks)
> to rate limit within, and the limit?


To enable more straightforward validation logic.

You mentioned change addresses, however, with the parameters you defined,
> there would be no way to connect together the change address with the
> original address, meaning they would have completely separate rate limits,
> which wouldn't work since the change output would ignore the previous rate
> limit.


The rate-limiting parameters must be re-specified for each rate-limited
input. So, a transaction that has a rate-limited input is only valid if its
output is itself rate-limited such that it does not violate the
rate-limiting constraints of its input.

In my thread-starter, I gave the below example of a rate-limited address a2
that serves as input for transaction t2:

a2: 99.8m sats at height 800100;
Rate-limit params: h0=800000, h1=800143, a=500k, a_remaining=300k;

Transaction t2:
Included at block height 800200
Spend: 400k + fees.
Rate-limiting params: h0=800144, h1=800287, a=500k, a_remaining=100k.

Note how transaction t2 re-specifies the rate-limiting parameters.
Validation must ensure that the re-specified parameters are within bounds,
i.e., do not allow more spending per epoch than the rate-limiting
parameters of its input address a2. Re-specifying the rate-limiting
parameters offers the flexibility to further restrict spending, or to
disable any additional spending within the current epoch by setting
a_remaining to zero.

Result:
Value at destination address: 400k sats;
Rate limiting params at destination address: none;
Value at change address a3: 99.4m sats;
Rate limiting params at change address a3: h0=800144, h1=800287, a=500k,
a_remaining=100k.

As a design principle I believe it makes sense if the system is able to
verify the validity of a transaction without having to consider any
transactions that precede its inputs. As a side-note, doing away with this
design principle would however enable more sophisticated rate-limiting
(such as rate-limiting per sliding window instead of per epoch with a
fixed start and end block), while at the same time reducing the size of
each rate-limiting transaction (because the rate-limiting parameters
could be specified more space-efficiently). To test the waters and to
keep things relatively simple, I chose not to go into this enhanced form
of rate-limiting.

I haven't gone into how to process a transaction having multiple
rate-limited inputs. The easiest way to handle this case is to not allow
any transaction having more than one rate-limited input. One could imagine
complex logic to handle transactions having multiple rate-limited inputs by
creating multiple rate-limited change addresses. However at first glance I
don't believe that the marginal added functionality would justify the
increased implementation complexity.

 I'd be interested in seeing you write a BIP for this.


Thank you, but sadly my understanding of Bitcoin is way too low to be able
to write a BIP and do the implementation. However I see tremendous value in
this functionality. Favorable feedback from the list regarding the usefulness
and the technical feasibility of rate-limiting functionality would of
course be an encouragement for me to descend further down the rabbit hole.

Zac


On Sun, Aug 1, 2021 at 10:09 AM Zac Greenwood  wrote:

> [Resubmitting to list with minor edits. My previous submission ended up
> inside an existing thread, apologies.]
>
> Hi list,
>
> I'd like to explore whether it is feasible to implement new scripting
> capabilities in Bitcoin that enable limiting the output amount of a
> transaction based on the total value of its inputs. In other words, to
> implement the ability to limit the maximum amount that can be sent from an
> address.
>
> Two use cases come to mind:
>
> UC1: enable a user to add additional protection to their funds by
> rate-limiting the amount that they are allowed to send during a certain
> period (measured in blocks). A typical use case might be a user that
> intends to hodl their bitcoin, but still wishes to occasionally send small
> amounts. Rate-limiting prevents an attacker from sweeping all the users'
> funds in a single transaction, allowing the user to become aware of the
> theft and intervene to prevent further thefts.
>
> UC2: exchanges may wish to rate-limit addresses containing large amounts
> of bitcoin, adding warm- or hot-wallet functionality to a cold-storage
> address. This would enable an exchange to drastically reduce the number of
> times a cold wallet must be accessed with private keys that give access to
> the full amount.
>
> In a typical setup, I'd envision using multisig such that the user has two
> sets of private keys to their encumbered 

[bitcoin-dev] Exploring: limiting transaction output amount as a function of total input value

2021-08-01 Thread Zac Greenwood via bitcoin-dev
[Resubmitting to list with minor edits. My previous submission ended up
inside an existing thread, apologies.]

Hi list,

I'd like to explore whether it is feasible to implement new scripting
capabilities in Bitcoin that enable limiting the output amount of a
transaction based on the total value of its inputs. In other words, to
implement the ability to limit the maximum amount that can be sent from an
address.

Two use cases come to mind:

UC1: enable a user to add additional protection to their funds by
rate-limiting the amount that they are allowed to send during a certain
period (measured in blocks). A typical use case might be a user that
intends to hodl their bitcoin, but still wishes to occasionally send small
amounts. Rate-limiting prevents an attacker from sweeping all the users'
funds in a single transaction, allowing the user to become aware of the
theft and intervene to prevent further thefts.

UC2: exchanges may wish to rate-limit addresses containing large amounts of
bitcoin, adding warm- or hot-wallet functionality to a cold-storage
address. This would enable an exchange to drastically reduce the number of
times a cold wallet must be accessed with private keys that give access to
the full amount.

In a typical setup, I'd envision using multisig such that the user has two
sets of private keys to their encumbered address (with a "set" of keys
meaning "one or more" keys). One set of private keys allows only for
sending with rate-limiting restrictions in place, and a second set of
private keys allowing for sending any amount without rate-limiting,
effectively overriding such restriction.

The parameters that define in what way an output is rate-limited might be
defined as follows:

Param 1: a block height "h0" indicating the first block height of an epoch;
Param 2: a block height "h1" indicating the last block height of an epoch;
Param 3: an amount "a" in satoshi indicating the maximum amount that is
allowed to be sent in any epoch;
Param 4: an amount "a_remaining" (in satoshi) indicating the maximum amount
that is allowed to be sent within the current epoch.

For example, consider an input containing 100m sats (1 BTC) which has been
rate-limited with parameters (h0, h1, a, a_remaining) of (800000, 800143,
500k, 500k). These parameters define that the address is rate-limited to
sending a maximum of 500k sats in the current epoch that starts at block
height 800000 and ends at height 800143 (or about one day ignoring block
time variance) and that the full amount of 500k is still sendable. These
rate-limiting parameters ensure that it takes at minimum 100m / 500k = 200
transactions and 200 x 144 blocks or about 200 days to spend the full 100m
sats. As noted earlier, in a typical setup a user should retain the option
to transact the entire amount using a second (set of) private key(s).
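
The drain-time arithmetic in the paragraph above checks out directly; the numbers are the example's own, nothing here is a proposed constant:

```python
# Illustrative check: 100m sats limited to 500k per 144-block epoch takes
# at least 200 transactions and roughly 200 days to drain.
import math

balance = 100_000_000   # sats (1 BTC)
per_epoch = 500_000     # sats spendable per epoch
epoch_blocks = 144      # ~1 day of blocks

min_epochs = math.ceil(balance / per_epoch)
print(min_epochs)                  # 200 (at least 200 transactions)
print(min_epochs * epoch_blocks)   # 28800 blocks, i.e. ~200 days
```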

For rate-limiting to work, any change output created by a transaction from
a rate-limited address must itself be rate-limited as well. For instance,
expanding on the above example, assume that the user spends 200k sats from
a rate-limited address a1 containing 100m sats:

Start situation:
At block height 800000: rate-limited address a1 is created;
Value of a1: 100.0m sats;
Rate limiting params of a1: h0=800000, h1=800143, a=500k, a_remaining=500k;

Transaction t1:
Included at block height 800100;
Spend: 200k + fee;
Rate limiting params: h0=800000, h1=800143, a=500k, a_remaining=300k.

Result:
Value at destination address: 200k sats;
Rate limiting params at destination address: none;
Value at change address a2: 99.8m sats;
Rate limiting params at change address a2: h0=800000, h1=800143, a=500k,
a_remaining=300k.

In order to properly enforce rate limiting, the change address must be
rate-limited such that the original rate limit of 500k sats per 144 blocks
cannot be exceeded. In this example, the change address a2 was given
same rate limiting parameters as the transaction that served as its input.
As a result, from block 800100 up until and including block 800143, a
maximum amount of 300k sats is allowed to be spent from the change address.

Example continued:
a2: 99.8m sats at height 800100;
Rate-limit params: h0=800000, h1=800143, a=500k, a_remaining=300k;

Transaction t2:
Included at block height 800200
Spend: 400k + fees.
Rate-limiting params: h0=800144, h1=800287, a=500k, a_remaining=100k.

Result:
Value at destination address: 400k sats;
Rate limiting params at destination address: none;
Value at change address a3: 99.4m sats;
Rate limiting params at change address a3: h0=800144, h1=800287, a=500k,
a_remaining=100k.

Transaction t2 is allowed because it falls within the next epoch (running
from 800144 to 800287) so a spend of 400k does not violate the constraint
of 500k per epoch.
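
The epoch-rollover rule that makes t2 valid can be sketched as follows; the tuple layout and checks are my own illustrative reading of the example, not a concrete design:

```python
# Illustrative check that a re-specified epoch is a legitimate successor:
# it must start after the previous epoch ends (so the budget resets), be
# at least as long, and not raise the per-epoch limit.

def valid_next_epoch(prev, new, spend):
    """prev/new: (h0, h1, a, a_remaining) tuples; spend in sats."""
    prev_h0, prev_h1, prev_a, _prev_rem = prev
    h0, h1, a, rem = new
    same_or_longer = (h1 - h0) >= (prev_h1 - prev_h0)
    starts_after = h0 > prev_h1          # fresh epoch: full limit available
    return (same_or_longer and starts_after
            and a <= prev_a and spend <= a and rem <= a - spend)


a2_params = (800_000, 800_143, 500_000, 300_000)  # input's params
t2_params = (800_144, 800_287, 500_000, 100_000)  # re-specified params
print(valid_next_epoch(a2_params, t2_params, 400_000))  # True
```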

As can be seen, the rate limiting parameters are part of the transaction
and chosen by the user (or their wallet). This means that the parameters
must be validated to ensure that they do not violate the intended
constraints.

For ins

[bitcoin-dev] Exploring: limiting transaction output amount as a function of total input value

2021-07-31 Thread Zac Greenwood via bitcoin-dev
Hi list,

I'd like to explore whether it is feasible to implement new scripting
capabilities in Bitcoin that enable limiting the output amount of a
transaction based on the total value of its inputs. In other words, to
implement the ability to limit the maximum amount that can be sent from an
address.

Two use cases come to mind:

UC1: enable a user to add additional protection to their funds by
rate-limiting the amount they are able to send during a certain period
(measured in blocks). A typical use case might be a user that intends to
hodl their bitcoin, but still wishes to occasionally send small amounts.
This prevents an attacker from sweeping all their funds in a single
transaction, allowing the user to become aware of the theft and intervene
to prevent further theft.

UC2: exchanges may wish to rate-limit addresses containing large amounts of
bitcoin, adding warm- or hot-wallet functionality to a cold-storage
address. This would enable an exchange to drastically reduce the number of
times a cold wallet must be accessed with private keys that enable access
to the full amount.

In a typical setup, I'd envision using multisig such that the user has two
sets of private keys to their encumbered address (with a "set" of keys
meaning "one or more" keys). One set of private keys allows only for
sending with rate-limiting restrictions in place, and a second set of
private keys allowing for sending any amount without rate-limiting,
effectively overriding such restriction.

The parameters that define in what way an output is rate-limited might be
defined as follows:

Param 1: a block height "h0" indicating the first block height of an epoch;
Param 2: a block height "h1" indicating the last block height of an epoch;
Param 3: an amount "a" in satoshi indicating the maximum amount that is
allowed to be sent in any epoch;
Param 4: an amount "a_remaining" (in satoshi) indicating the maximum amount
that is allowed to be sent within the current epoch.

For example, consider an input containing 100m sats (1 BTC) which has been
rate-limited with parameters (h0, h1, a, a_remaining) of (800000, 800143,
500k, 500k). These parameters define that the address is rate-limited to
sending a maximum of 500k sats in the current epoch that starts at block
height 800000 and ends at height 800143 (or about one day ignoring block
time variance) and that the full amount of 500k is still sendable. These
rate-limiting parameters ensure that it takes at minimum 100m / 500k = 200
transactions and 200 x 144 blocks or about 200 days to spend the full 100m
sats. As noted earlier, in a typical setup a user should retain the option
to transact the entire amount using a second (set of) private key(s).

For rate-limiting to work, any change output created by a transaction from
a rate-limited address must itself be rate-limited as well. For instance,
expanding on the above example, assume that the user spends 200k sats from
a rate-limited address a1 containing 100m sats:

Start situation:
At block height 800000: rate-limited address a1 is created;
Value of a1: 100.0m sats;
Rate limiting params of a1: h0=800000, h1=800143, a=500k, a_remaining=500k;

Transaction t1:
Included at block height 800100;
Spend: 200k + fee;
Rate limiting params: h0=800000, h1=800143, a=500k, a_remaining=300k.

Result:
Value at destination address: 200k sats;
Rate limiting params at destination address: none;
Value at change address a2: 99.8m sats;
Rate limiting params at change address a2: h0=800000, h1=800143, a=500k,
a_remaining=300k.

In order to properly enforce rate limiting, the change address must be
rate-limited such that the original rate limit of 500k sats per 144 blocks
cannot be exceeded. In this example, the change address a2 was given
same rate limiting parameters as the transaction that served as its input.
As a result, from block 800100 up until and including block 800143, a
maximum amount of 300k sats is allowed to be spent from the change address.

Example continued:
a2: 99.8m sats at height 800100;
Rate-limit params: h0=800000, h1=800143, a=500k, a_remaining=300k;

Transaction t2:
Included at block height 800200
Spend: 400k + fees.
Rate-limiting params: h0=800144, h1=800287, a=500k, a_remaining=100k.

Result:
Value at destination address: 400k sats;
Rate limiting params at destination address: none;
Value at change address a3: 99.4m sats;
Rate limiting params at change address a3: h0=800144, h1=800287, a=500k,
a_remaining=100k.

Transaction t2 is allowed because it falls within the next epoch (running
from 800144 to 800287) so a spend of 400k does not violate the constraint
of 500k per epoch.

As can be seen, the rate limiting parameters are part of the transaction
and chosen by the user (or their wallet). This means that the parameters
must be validated to ensure that they do not violate the intended
constraints.

For instance, this transaction should not be allowed:
a2: 99.8m sats at height 800100;
Rate-limit params of a2: h0=800000, h1=800143, a=500k

Re: [bitcoin-dev] Covenant opcode proposal OP_CONSTRAINDESTINATION (an alternative to OP_CTV)

2021-07-28 Thread Zac Greenwood via bitcoin-dev
Hi Billy,

Thank you for your comprehensive reply. My purpose was to find out whether
a proposal to somehow limit the amount being sent from an address exists
and to further illustrate my thoughts by giving a concrete example of how
this might work functionally without getting too deep into the
technicalities.

As for your assumption: for an amount limit to have the desired effect, I
realize now that there must also exist some limit on the number of
transactions that will be allowed from the encumbered address.

Taking a step back, a typical use case would be a speculating user
intending to hodl bitcoin but who still wishes to be able to occasionally
transact minor amounts.

Ideally, such user should optionally still be able to bypass the rate limit
and spend the entire amount in a single transaction by signing with an
additional private key (multisig).

During the setup phase, a user sends all their to-be-rate-limited coin to a
single address. When spending from this rate limited address, any change
sent to the change address must be rate limited as well using identical
parameters. I believe that’s also what you’re suggesting.

I believe that a smart wallet should be able to set up and maintain
multiple rate-limited addresses in such a way that their aggregate
behaviour meets any rate-limiting parameters as desired by the user. This
ought to alleviate your privacy concerns because it means that the wallet
will be able to mix outputs.

The options for the to-be implemented rate-limiting parameters vary from
completely arbitrary to more restrictive.

Completely arbitrary parameters would allow users to set up a rate limit
that basically destroys their funds, for instance rate-limiting an address
to an amount of 1 satoshi per 100 blocks.

More restrictive rate limits would remove such footgun and may require that
only a combination of parameters are allowed such that all funds will be
spendable within a set number of blocks (for instance 210,000).

As for the rate-limiting parameters, in addition to a per-transaction
maximum of (minimum amount in satoshi or a percentage of the total amount
stored at the address), also the transaction frequency must be limited. I
would propose this to be expressed as a number of blocks before a next
transaction can be sent from the encumbered address(es).

I believe such user-enabled rate-limiting is superior to one that requires
a third party.

As an aside, I am not sure how a vault solution would be able to prevent an
attacker who is in possession of the vault’s private key from sabotaging
the user by replacing the user transaction with one having a higher fee
every time the user attempts to transact. I am probably missing something
here though.

Zac


On Tue, 27 Jul 2021 at 19:21, Billy Tetrud  wrote:

> Hi Zac,
>
> I haven't heard of any proposal for limiting the amount that can be sent
> from an address. I assume you mean limiting the amount that can be sent in
> a period of time - eg something that would encode that for address A, only
> X bitcoin can be sent from the address in a given day/week/etc, is that
> right? That would actually be a somewhat difficult thing to do in the
> output-based system Bitcoin uses, and would be easier in an account based
> system like Ethereum. The problem is that each output is separate, and
> there's no concept in bitcoin of encumbering outputs together.
>
> What you could do is design a system where coins would be combined in a
> single output, and then encumber that output with a script that allows a
> limited amount of coin to be sent to a destination address and requires all
> other bitcoins be returned to sender in a new change output that is also
> timelocked. That way, the new change output can't be used again until the
> timelock expires (eg a week). However, to ensure this wallet works
> properly, any deposit into the wallet would have to also spend the wallet's
> single output, so as to create a new single output at that address. So 3rd
> parties wouldn't be able to arbitrarily send money in (or rather, they
> could, but each output would have its own separate spending limit).
>
> > such kind of restriction would be extremely effective in thwarting the
> most damaging type of theft being the one where all funds are swept in a
> single transaction
>
> It would. However a normal wallet vault basically already has this
> property - a thief can't simply sweep funds instantly, but instead the
> victim will see an initiated transaction and will be able to reverse it
> within a delay time-window. I don't think adding a spending limit would add
> meaningful security to a delayed-send wallet vault like that. But it could
> be used to increase the security of a wallet vault that can be instantly
> spent from - ie if the attacker successfully steals funds, then the victim
> has time to go gather their additional keys and move the remaining
> (unstolen) funds into a new wallet.
>
> OP_CD could potentially be augmented to allow specifying lim

Re: [bitcoin-dev] Covenant opcode proposal OP_CONSTRAINDESTINATION (an alternative to OP_CTV)

2021-07-27 Thread Zac Greenwood via bitcoin-dev
Hi Billy,

On the topic of wallet vaults, are there any plans to implement a way to
limit the maximum amount to be sent from an address?

An example of such limit might be: the maximum amount allowed to send is
max(s, p) where s is a number of satoshi and p a percentage of the total
available (sendable) amount.

A minimum value may be imposed on the percentage to ensure that the address
can be emptied within a reasonable number of transactions. The second
parameter s allows a minimum permitted amount. (This is necessary because
with only the percentage parameter the minimum permitted amount converges
to zero, making it impossible to empty the address).
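
The max(s, p) cap can be sketched in one function; the concrete values of s and p below are placeholders of mine, not proposed constants:

```python
# Illustrative per-transaction cap of max(s, p * balance): the absolute
# floor s keeps the address drainable even as the percentage term shrinks.

def max_spend(balance_sats: int, s: int = 50_000, p: float = 0.01) -> int:
    """s: floor in sats; p: fraction of the current balance."""
    return max(s, int(balance_sats * p))


print(max_spend(100_000_000))  # 1000000 (1% of 1 BTC dominates)
print(max_spend(1_000_000))    # 50000 (the floor dominates)
```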

There may be other ways too. In my view, such kind of restriction would be
extremely effective in thwarting the most damaging type of theft being the
one where all funds are swept in a single transaction.

Zac


On Tue, 27 Jul 2021 at 03:26, Billy Tetrud via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Hey James,
>
> In the examples you mentioned, what I was exploring was a mechanism of
> attack by which the attacker could steal user A's key and use that key to
> send a transaction with the maximum possible fee. User B would still
> receive some funds (probably), but if the fee could be large, the attacker
> would either do a lot of damage to user B (griefing) or could make an
> agreement with a miner to give back some of the large fee (theft).
>
> But as for use cases, the proposal mentions a number of use cases, and
> most overlap with the use cases of op_ctv (Jeremy
> Rubin's website for op_ctv has a lot of good details, most of which are
> also relevant to op_cd). The use case I'm most interested in is wallet
> vaults. This opcode can be used to create a wallet vault where the user
> only needs to use, for example, 1 key to spend funds, but the attacker must
> steal 2 or more keys to spend funds. The benefits of a 2 key wallet vault
> like this vs a normal 2-of-2 multisig wallet are that not only does an
> attacker have to steal both keys (same level of security), but also the
> user can lose one key and still recover their funds (better redundancy) and
> also that generally the user doesn't need to access their second key - so
> that can remain in a much more secure location (which would also probably
> make that key harder to steal). The second key only comes
> into play if one key is stolen and the attacker attempts to send a
> transaction. At that point, the user would go find and use his second key
> (along with the first) to send a revoke transaction to prevent the attacker
> from stealing their funds. This is somewhat akin to a lightning watchtower
> scenario, where your wallet would watch the chain and alert you about an
> unexpected transaction, at which point you'd manually do a revoke (vs a
> watchtower's automated response). You might be interested in taking a look
> at this wallet vault design
> that uses OP_CD, or even my full vision of the
> wallet vault I want to be able to create.
>
> With a covenant opcode like this, its possible to create very usable and
> accessible but highly secure wallets that can allow normal people to hold
> self custody of their keys without fear of loss or theft and without the
> hassle of a lot of safe deposit boxes (or other secure seed storage
> locations).
>
> Cheers,
> BT
>
>
>
>
>
> On Mon, Jul 26, 2021 at 2:08 PM James MacWhyte  wrote:
>
>> Hi Billy!
>>
>> See above, but to break down that situation a bit further, these are the
>>> two situations I can think of:
>>>
>>>1. The opcode limits user/group A to send the output to user/group B
>>>2. The opcode limits user A to send from one address they own to
>>>another address they own.
>>>
>>> I'm trying to think of a good use case for this type of opcode. In these
>> examples, an attacker who compromises the key for user A can't steal the
>> money because it can only be sent to user B. So if the attacker wants to
>> steal the funds, they would need to compromise the keys of both user A and
>> user B.
>>
>> But how is that any better than a 2-of-2 multisig? Isn't the end result
>> exactly the same?
>>
>> James
>>
> ___
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>


Re: [bitcoin-dev] Trinary Version Signaling for softfork

2021-06-30 Thread Zac Greenwood via bitcoin-dev
Eric,

> A million nodes saying a transaction is invalid does nothing to enforce
that knowledge

It does. Nodes disregard invalid transactions and invalid blocks as if they
never existed. It is not possible for any party to transact bitcoin in a
way that violates the set of rules enforced by the network of
consensus-compatible nodes that we call Bitcoin.

Zac


On Wed, Jun 30, 2021 at 2:03 PM Eric Voskuil  wrote:

> A million nodes saying a transaction is invalid does nothing to enforce
> that knowledge.
>
> An economic node is a person who refuses to accept invalid money. A node
> only informs this decision, it cannot enforce it. That’s up to people.
>
> And clearly if one is not actually accepting bitcoin for anything at the
> time, he is not enforcing anything.
>
> The idea of a non-economic node is well established, nothing new here.
>
> e
>
> On Jun 30, 2021, at 04:33, Zac Greenwood  wrote:
>
> 
> Hi Eric,
>
> > A node (software) doesn’t enforce anything. Merchants enforce consensus
> rules
>
> … by running a node which they believe to enforce the rules of Bitcoin.
>
> A node definitely enforces consensus rules and defines what is Bitcoin. I
> am quite disturbed that this is even being debated here.
>
> Zac
>
>


Re: [bitcoin-dev] Trinary Version Signaling for softfork

2021-06-30 Thread Zac Greenwood via bitcoin-dev
Hi Eric,

> A node (software) doesn’t enforce anything. Merchants enforce consensus
rules

… by running a node which they believe to enforce the rules of Bitcoin.

A node definitely enforces consensus rules and defines what is Bitcoin. I
am quite disturbed that this is even being debated here.

Zac


On Wed, 30 Jun 2021 at 11:17, Eric Voskuil via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Hi Prayank,
>
>
>
> > So majority hash power not following the consensus rules can result in
> chain split?
>
>
>
> Any two people on different rules implies a chain split. That’s presumably
> why rule changes are called forks. There is no actual concept of “the
> rules” just one set of rules or another.
>
>
>
> > Why would majority of miners decide to mine a chain that nobody wants to
> use?
>
>
>
> I don’t presume to know why people prefer one thing over another, or what
> people want to use, nor does economics.
>
>
>
> > What are different things possible in this case based on game theory?
>
>
>
> I’ve seen no actual demonstration of the relevance of game theory to
> Bitcoin. People throw the words around quite a bit, but I can’t give you an
> answer because I have found no evidence of a valid game theoretic model
> applicable to Bitcoin. It’s not a game, it’s a market.
>
>
>
> > Do miners and mining pools participate in discussions before signaling
> for a soft fork begins?
>
>
>
> Who knows, I don’t get invited to round table meetings.
>
>
>
> > Can they still mine something else post activation even if signaling
> readiness for soft fork?
>
>
>
> A person can mine whatever they want. Signaling does not compel a miner to
> enforce. Each block mined is anonymous. But each miner seeing the signals
> of others, unless they are coordinating, would presumably assume that
> others will enforce.
>
>
>
> > Who enforces consensus rules technically in Bitcoin? Full nodes or
> Miners?
>
>
>
> A node (software) doesn’t enforce anything. Merchants enforce consensus
> rules when they reject trading for something that they don’t consider
> money. Every time two people trade both party validates what they receive
> (not what they trade away). Those receiving Bitcoin are economically
> relevant and their power is a function of how much they are doing so.
>
>
>
> Miners censor, which is inconsequential unless enforced. Majority miners
> can enforce censorship by simply not building on any non-censoring blocks.
> This is what soft fork enforcement is.
>
>
>
> > Is soft fork signaling same as voting?
>
>
>
> I don’t see that it needs a label apart from signaling. There are many
> kinds of voting. It would be hard to equate signaling with any of them.
> It’s a public signal that the miner who mined a given block intends
> to censor, that’s all.
>
>
>
> > According to my understanding, miners follow the consensus rules
> enforced by full nodes and get (subsidy + fees) for their work.
>
>
>
> Miners mine a chain, which ever one they want. There are many. They earn
> the block reward.
>
>
>
> > Signaling is not voting although lot of people consider it voting
> including some mining pools and exchanges.
>
>
>
> What people consider it is inconsequential. It has clearly defined
> behavior.
>
>
>
> e
>
>
>
> *From:* Prayank 
> *Sent:* Sunday, June 27, 2021 5:01 AM
> *To:* e...@voskuil.org
> *Cc:* Bitcoin Dev 
> *Subject:* Re: [bitcoin-dev] Trinary Version Signaling for softfork
>
>
>
> Hello Eric,
>
>
>
> I have few questions:
>
>
>
> > Without majority hash power support, activation simply means you are off
> on a chain split.
>
>
>
> So majority hash power not following the consensus rules can result in
> chain split? Why would majority of miners decide to mine a chain that
> nobody wants to use? What are different things possible in this case based
> on game theory?
>
>
>
> > And activation without majority hash power certainly does not “ensure”
> this.
>
>
>
> Do miners and mining pools participate in discussions before signaling for
> a soft fork begins? Can they still mine something else post activation even
> if signaling readiness for soft fork?
>
>
>
> > If one wants to enforce a soft fork (or otherwise censor) this is
> accomplished by mining (or paying others to do so). Anyone can mine, so
> everyone gets a say. Mining is trading capital now for more later. If
> enough people want to do that, they can enforce a soft fork. It’s time
> Bitcoiners stop thinking of miners as other people. Anyone can mine, and
> that’s your vote.
>
>
>
> Who enforces consensus rules technically in Bitcoin? Full nodes or Miners?
>
>
>
> Is soft fork signaling same as voting?
>
>
>
> According to my understanding, miners follow the consensus rules enforced
> by full nodes and get (subsidy + fees) for their work. Signaling is not
> voting although lot of people consider it voting including some mining
> pools and exchanges.
>
>
>
>
>
> --
>
> Prayank

Re: [bitcoin-dev] Trinary Version Signaling for softfork upgrades

2021-06-30 Thread Zac Greenwood via bitcoin-dev
> Majority hash power does have the ability to determine what gets
confirmed.

Miners don’t have the ability to decide whether a block is valid.

Hash power is only recognized as such if it is used for creating a valid
block, i.e., a block that strictly follows all the rules as set by the node
software that transacting users choose to run.

If suddenly 70% of all hash power decided to start mining blocks that are
invalid according to the rules set in the users’ software, then these
invalid blocks will be disregarded. From a user perspective, 70% of all
hash power will seem to have disappeared.

In short, users define what is Bitcoin, not miners. This is fundamental to
being decentralized.



On Tue, 29 Jun 2021 at 23:17, Eric Voskuil via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

>
> On Jun 29, 2021, at 12:28, Jorge Timón  wrote:
>
> 
> "Confirmation" isn't needed for softforks.
>
>
> All transactions require confirmation. Splitting does not change this.
>
> Softforks are not compatible without miner enforcement. So soft forking
> without it has essentially the same effect as hard forking, the chain
> splits.
>
> Miners controlling confirmation doesn't mean miners control the rules,
> they never did.
>
>
> Please define “control” because these statements hinge on that word.
> Nobody “controls” the rules of others, nor did anyone claim that to be the
> case. Majority hash power does have the ability to determine what gets
> confirmed. That is the central design principle of proof of work. It takes
> that decision out of the hands of politicians and places it at the feet of
> the market.
>
> Read section 11 of the bitcoin paper: "even with a majority of hashrate one
> cannot arbitrarily change rules or forge signatures."
>
>
> Never claimed that was the case. One can run any rules that one desires.
>
> You may say users chosing the rules is "politicial". Isn't miners deciding
> them for users more political?
>
>
> No, it’s economic. The largest investment in mining (including highest
> fees paid to incentivize it) determines censorship resistance.
>
> Whatever you call it, it is still how free software works: users decide
> what to run.
>
>
> A *person* can run whatever software they want. Money requires that others
> agree (same rules), and to be money bitcoin requires confirmation.
>
> It is extremely disappointing to see how few developers seem to understand
> this, or even care about users deciding or miners not deciding the rules.
>
>
> It’s poorly understood because there are so many who should know better
> making very misleading statements.
>
> How can we expect users to understand bitcoin when most developers don't
> seem to understand it?
>
>
> Clearly we cannot.
>
> It is really sad.
>
> On Tue, Jun 29, 2021, 19:17 Eric Voskuil  wrote:
>
>>
>> > On Jun 29, 2021, at 10:55, Luke Dashjr  wrote:
>> >
>> > The only alternative to a split in the problematic scenarios are 1)
>> concede
>> > centralised miner control over the network,
>>
>> Miners control confirmation, entirely.
>>
>> This is the nature of bitcoin. And merchants control validation,
>> entirely. Anyone can be a miner or a merchant. Neither is inherently
>> “better” than the other. The largest merchants are likely a handful of
>> exchanges, likely at least as centralized as miners are pooled.
>>
>> Splitting does not change this.
>>
>> > and 2) have inconsistent
>> > enforcement of rules by users who don't agree on what the correct rules
>> are,
>>
>> There are no “correct” rules. Whatever rules one enforces determine what
>> network he chooses to participate in.
>>
>> > again leading to centralised miner control over the network.
>>
>> Leading to? Miners control confirmation, always. Whether that is
>> centralized, just as with merchanting, is up to individuals.
>>
>> > In other words, in this context, accepting a split between disagreeing
>> users
>> > is the ONLY way Bitcoin can possibly continue as a decentralised
>> currency.
>>
>> No, it is not. You are proposing splitting as the method of censorship
>> resistance inherent to Bitcoin. Coordinating this split requires
>> coordinated action. The whole point of bitcoin is coordinate that action
>> based on mining (proof of work). Replacing that with a political process is
>> just a reversion to political money.
>>
>> > Making that split as clean and well-defined as possible not only
>> ensures the
>> > best opportunity for both sides of the disagreement,
>>
>> Trivially accomplished, just change a rule. This isn’t about that. It’s
>> about how one gets others to go along with the new coin, or stay with the
>> old. An entirely political process, which is clearly evident from the
>> campaigns around such attempts.
>>
>> > but also minimises the
>> > risk that the split occurs at all (since the "losing" side needs to
>> concede,
>> > rather than passively continue the disagreement ongoing after the
>> attempted
>> > protocol change).
>>
>> Nobody “needs to” concede once a 

Re: [bitcoin-dev] Opinion on proof of stake in future

2021-05-18 Thread Zac Greenwood via bitcoin-dev
Hi ZmnSCPxj,

Please note that I am not suggesting VDFs as a means to save energy, but
solely as a means to make the time between blocks more constant.

Zac


On Tue, 18 May 2021 at 12:42, ZmnSCPxj  wrote:

> Good morning Zac,
>
> > VDFs might enable more constant block times, for instance by having a
> two-step PoW:
> >
> > 1. Use a VDF that takes say 9 minutes to resolve (VDF being subject to
> difficulty adjustments similar to the as-is). As per the property of VDFs,
> miners are able to show proof of work.
> >
> > 2. Use current PoW mechanism with lower difficulty so finding a block
> takes 1 minute on average, again subject to as-is difficulty adjustments.
> >
> > As a result, variation in block times will be greatly reduced.
>
> As I understand it, another weakness of VDFs is that they are not
> inherently progress-free (their sequential nature prevents that; they are
> inherently progress-requiring).
>
> Thus, a miner which focuses on improving the amount of energy that it can
> pump into the VDF circuitry (by overclocking and freezing the circuitry),
> could potentially get into a winner-takes-all situation, possibly leading
> to even *worse* competition and even *more* energy consumption.
> After all, if you can start mining 0.1s faster than the competition, that
> is a 0.1s advantage where *only you* can mine *in the entire world*.
>
> Regards,
> ZmnSCPxj
>


Re: [bitcoin-dev] Opinion on proof of stake in future

2021-05-18 Thread Zac Greenwood via bitcoin-dev
VDFs might enable more constant block times, for instance by having a
two-step PoW:

1. Use a VDF that takes say 9 minutes to resolve (VDF being subject to
difficulty adjustments similar to the as-is). As per the property of VDFs,
miners are able to show proof of work.

2. Use current PoW mechanism with lower difficulty so finding a block takes
1 minute on average, again subject to as-is difficulty adjustments.

As a result, variation in block times will be greatly reduced.
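The two-step scheme above can be illustrated with a toy sequential delay function. This is a minimal sketch only: real VDF constructions (e.g. Wesolowski's or Pietrzak's) additionally provide cheap verification, which plain iterated hashing does not, and the iteration count here is an arbitrary placeholder rather than a tuned 9-minute delay.

```python
import hashlib

def sequential_delay(seed: bytes, iterations: int) -> bytes:
    # Each step consumes the previous digest, so the loop cannot be
    # parallelized across machines -- unlike hash-grinding PoW.
    h = seed
    for _ in range(iterations):
        h = hashlib.sha256(h).digest()
    return h

# Step 1 (sketch): every miner grinds through the same sequential delay
# seeded by the previous block before the step-2 PoW lottery may begin.
prev_block_hash = hashlib.sha256(b"block N-1").digest()
delay_proof = sequential_delay(prev_block_hash, 200_000)  # placeholder count

# Verification here is by recomputation (as costly as evaluation);
# a real VDF makes this check cheap.
assert delay_proof == sequential_delay(prev_block_hash, 200_000)
```

Because the delay is seeded by the previous block, no miner can begin it before that block is known, which is what bounds the minimum inter-block time in this sketch.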

Zac


On Tue, 18 May 2021 at 09:07, ZmnSCPxj via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Good morning Erik,
>
> > Verifiable Delay Functions involve active participation of a single
> > verifier. Without this a VDF decays into a proof-of-work (multiple
> > verifiers === parallelism).
> >
> > The verifier, in this case is "the bitcoin network" taken as a whole.
> > I think it is reasonable to consider that some difficult-to-game
> > property of the last N blocks (like the hash of the last 100
> > block-id's or whatever), could be the verification input.
> >
> > The VDF gets calculated by every eligible proof-of-burn miner, and
> > then this is used to prevent a timing issue.
> >
> > Seems reasonable to me, but I haven't looked too far into the
> > requirements of VDF's
> >
> > nice summary for anyone who is interested:
> > https://medium.com/@djrtwo/vdfs-are-not-proof-of-work-91ba3bec2bf4
> >
> > While VDF's almost always lead to a "cpu-speed monopoly", this would
> > only be helpful for block latency in a proof-of-burn chain. Block
> > height would be calculated by eligible-miner-burned-coins, so the
> > monopoly could be easily avoided.
>
> Interesting link.
>
> However, I would like to point out that the *real* reason that PoW
> consumes lots of power is ***NOT***:
>
> * Proof-of-work is parallelizable, so it allows miners consume more energy
> (by buying more grinders) in order to get more blocks than their
> competitors.
>
> The *real* reason is:
>
> * Proof-of-work allows miners to consume more energy in order to get more
> blocks than their competitors.
>
> VDFs attempt to sidestep that by removing parallelism.
> However, there are ways to increase *sequential* speed, such as:
>
> * Overclocking.
>   * This shortens lifetime, so you can spend more energy (on building new
> miners) in order to get more blocks than your competitors.
> * Lower temperatures.
>   * This requires refrigeration/cooling, so you can spend more energy (on
> the refrigeration process) in order to get more blocks than your
> competitors.
>
> I am certain people with gaming rigs can point out more ways to improve
> sequential speed, as necessary to get more frames per second.
>
> Given the above, I think VDFs will still fail at their intended task.
> Speed, yo.
>
> Thus, VDFs do not serve as a sufficient deterrent away from
> ever-increasing energy consumption --- it just moves the energy consumption
> increase away from the obvious (parallelism) to the
> obscure-if-you-have-no-gamer-buds.
>
> You humans just need to get up to Kardashev 1.0, stat.
>
> Regards,
> ZmnSCPxj


Re: [bitcoin-dev] Proposal: Force to do nothing for first 9 minutes to save 90% of mining energy

2021-05-17 Thread Zac Greenwood via bitcoin-dev
>
>
> Are there people who can freely produce new mining equipment to an
> arbitrary degree?
>

Close. Bitmain for example produces their own ASICs and rigs which they
mine with. Antpool is controlled by Bitmain and has a significant amount of
the hash power. The marginal cost of an ASIC chip or mining rig is low
compared to R&D and setting up of the production lines etc. For Bitmain
it’s relatively cheap to produce an additional mining rig.

Unfortunately I failed to make sense of your other remarks, which looked
rather confused, so I take the liberty of disregarding them.

Zac




Re: [bitcoin-dev] Proposal: Force to do nothing for first 9 minutes to save 90% of mining energy

2021-05-16 Thread Zac Greenwood via bitcoin-dev
> if energy is only expended for 10% of the same duration, this money must
now be spent on hardware.

More equipment obviously increases the total energy usage.

You correctly point out that the total expenses of a miner are not just
energy but include capital expenses for equipment and operational cost for
staff, rent etc.

Actually, non-energy expenses are perhaps a much larger fraction of the
total cost than you might expect. Miners using excess waste energy such as
Chinese miners close to hydropower stations pay a near zero price for
energy and are unlikely to be bound by the price of electricity.

Unsurprisingly, miners having access to near-free electricity are
responsible for a significant share of the total energy usage of Bitcoin.
Since such energy is often waste energy from renewable sources such as
hydropower, the carbon footprint of Bitcoin is not nearly as alarming as
its energy usage implies. In fact, since mining is a race to the bottom in
terms of cost, these large miners drive out competing miners that employ
more expensive, often non-renewable sources of energy. It’s for instance
impossible to mine profitably using household-priced electricity. Looking
at it from that angle, access to renewable, near-free waste energy helps
keeping Bitcoin more green than it would otherwise be. To put it another
way: the high energy usage of the Bitcoin network indicates cheap,
otherwise wasted energy is employed.

For your proposal again this means that energy usage would not be likely to
decrease appreciably, because large miners having access to near-free
energy use the block-reward sized budget fully on equipment and other
operational expenses.

On the other hand, roughly every four years the coinbase reward halves,
which does significantly lower the miner budget, at least in terms of BTC.

Zac



On Sun, 16 May 2021 at 21:02, Karl via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> [sorry if I haven't replied to the other thread on this, I get swamped
> by email and don't catch them all]
>
> This solution is workable but it seems somewhat difficult to me at this
> time.
>
> The clock might be implementable on a peer network level by requiring
> inclusion of a transaction that was broadcast after a 9 minute delay.
>
> Usually a 50% hashrate attack is needed to reverse a transaction in
> bitcoin.  With this change, this naively appears to become a 5%
> hashrate attack, unless a second source of truth around time and order
> is added, to verify proposed histories with.
>
> A 5% hashrate attack is much harder here, because the users of mining
> pools would be mining only 10% of the time, so compromising mining
> pools would not be as useful.
>
> Historically, hashrate has increased exponentially.  This means that
> the difficulty of performing an attack, whether it is 5% or 50%, is
> still culturally infeasible because it is a multiplicative, rather
> than an exponential, change.
>
> If this approach were to be implemented, it could be important to
> consider how many block confirmations people wait for to trust their
> transaction is on the chain.  A lone powerful miner could
> intentionally fork the chain more easily by a factor of 10.  They
> would need to have hashrate that competes with a major pool to do so.
>
> > How would you prevent miners to already compute the simpler difficulty
> problem directly after the block was found and publish their solution
> directly after minute 9? We would always have many people with a finished /
> competing solution.
>
> Such a chain would have to wait a longer time to add further blocks
> and would permanently be shorter.
>
> > Your proposal won’t save any energy because it does nothing to decrease
> the budget available to mine a block (being the block reward).
>
> You are assuming this budget is directly related to energy
> expenditure, but if energy is only expended for 10% of the same
> duration, this money must now be spent on hardware.  The supply of
> bitcoin hardware is limited.
>
> In the long term, it won't be, so a 10% decrease is a stop-gap
> measure.  Additionally, in the long term, we will have quantum
> computers and AI-designed cryptography algorithms, so things will be
> different in a lot of other ways too.


Re: [bitcoin-dev] Proposal: Force to do nothing for first 9 minutes to save 90% of mining energy

2021-05-16 Thread Zac Greenwood via bitcoin-dev
Hi Michael,

Your proposal won’t save any energy because it does nothing to decrease the
budget available to mine a block (being the block reward).

Even if it were technically possible to find a way for nodes to somehow
reach consensus on a hash that gets generated after 9 minutes, all it
achieves is that miners will be expending the entire budget given to them
in the form of the block reward within a single minute on average.

Also please realize that the energy expenditure of Bitcoin is a fundamental
part of its design. An attacker has no other option than to expend as much
as half of all the miners together do in order for a sustained 51% attack
to be successful, making such attack uneconomical.

Zac

On Sat, 15 May 2021 at 23:57, Michael Fuhrmann via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Hello,
>
> Bitcoin should create blocks every 10 minutes on average. So why do
> miners need to mine during the 9 minutes after the last block was found? It's
> not necessary.
>
> Problem: How to prevent "pre-mining" in the 9 minutes time window?
>
> Possible ideas for discussion:
>
> - (maybe most difficult) global network timer sending a salted hash time
> code after 9 minutes. This enables validation by nodes.
>
> - (easy attempt) mining jobs before 9 minutes have a 10 (or 100 or just
> high enough) times higher difficulty. So everyone can mine at any time, but
> before the 9 minutes are up the downside is too high. It is more
> efficient to wait than to pay high bills. Bitcoin will get a "pulse".
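The second idea can be sketched as a time-dependent validity check. All parameters below are hypothetical (a 10x penalty and a 9-minute threshold), and the sketch deliberately ignores the hard part flagged in this thread: how nodes would reach consensus on the elapsed time.

```python
NINE_MINUTES = 9 * 60   # seconds
EARLY_PENALTY = 10      # hypothetical: 10x more work required before minute 9

def required_target(base_target: int, seconds_since_parent: int) -> int:
    # Blocks found before 9 minutes must meet a target 10x smaller,
    # i.e. represent roughly 10x the expected work.
    if seconds_since_parent < NINE_MINUTES:
        return base_target // EARLY_PENALTY
    return base_target

def block_valid(block_hash: int, base_target: int,
                seconds_since_parent: int) -> bool:
    # PoW validity: hash interpreted as an integer must not exceed the target.
    return block_hash <= required_target(base_target, seconds_since_parent)
```

Under these assumed numbers, a hash that would be valid at minute 10 can be invalid at minute 5, making early mining uneconomical rather than impossible.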
>
>
> I dont think I see all problems behind these ideas but if there is a
> working solution to do so then the energy FUD will find its end. Saving
> energy without losing robustness.
>
>
>
> :)


Re: [bitcoin-dev] [Pre-BIP] Motivating Address type for OP_RETURN

2021-04-24 Thread Zac Greenwood via bitcoin-dev
> 1. More data allowed in the scriptPubKey, e.g. 80 byte payload (81 actually, I
>   think) for OP_RETURN versus 40 bytes for a BIP141 payload.
>   Maximizing payload size better amortizes the overhead cost of the
>   containing transaction and the output's nValue field.

To reduce the on-chain footprint of stored data, could it perhaps be
beneficial to introduce a dedicated non-transaction data structure for
storing data on-chain, one that maximizes payload size relative to
on-chain size, while using weight units to ensure that it remains
(almost) as expensive per payload byte as the next-cheapest alternative
(which is currently OP_RETURN)?

Zac


On Sun, Apr 25, 2021 at 12:01 AM David A. Harding via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> On Sat, Apr 24, 2021 at 01:05:25PM -0700, Jeremy wrote:
> > I meant the type itself is too wide, not the length of the value. As in
> > Script can represent things we know nothing about.
>
> I guess I still don't understand your concern, then.  If script can
> represent things we know nothing about, then script commitments such as
> P2SH, P2WSH, and P2TR also represent things we know nothing about.  All
> you know is what container format they used.  For P2PK, bare multisig,
> OP_RETURN, and other direct uses of scriptPubKey, that container format
> is "bare" (or whatever you want to call it).
>
> > Btw: According to... Oh wait... You?
> >
> https://bitcoin.stackexchange.com/questions/35878/is-there-a-maximum-size-of-a-scriptsig-scriptpubkey
> > the max size is 10k bytes.
>
> I'm not sure what I knew at the time I wrote that answer, but the 10,000
> byte limit is only applied when EvalScript is run, which only happens
> when the output is being spent.  I've appended to this email a
> demonstration of creating a 11,000 byte OP_RETURN on regtest (I tried
> 999,000 bytes but ran into problems with bash's maximum command line
> length limit).  I've updated the answer to hopefully make it more
> correct.
>
> > Is it possible/easy to, say, using bech32m make an inappropriate message
> in
> > the address? You'd have to write the message, then see what it decodes to
> > without checking, and then re encode? I guess this is worse than hex?
>
> If someone wants to abuse bech32m, I suspect they'll do it the same way
> people have abused base58check[1], by using the address format's
> alphabet directly.  E.g., you compose your message using only
> the characters qpzry9x8gf2tvdw0s3jn54khce6mua7l and then append
> the appropriate checksum.
>
> [1]
> https://en.bitcoin.it/wiki/P2SH%C2%B2#The_problem:_storing_data_in_hashes
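The checksum-appending step described above can be sketched with the bech32m checksum routine from BIP-350 (the polymod, charset, and constants below follow the BIP-173/350 reference code). The payload is a vanity string, not an encoded witness program, so the result passes the checksum but is not a spendable address.

```python
CHARSET = "qpzry9x8gf2tvdw0s3jn54khce6mua7l"
BECH32M_CONST = 0x2bc830a3

def bech32_polymod(values):
    # BIP-173 checksum polynomial over GF(32).
    GEN = [0x3b6a57b2, 0x26508e6b, 0x1ea119fa, 0x3d4233dd, 0x2a1462b3]
    chk = 1
    for v in values:
        b = chk >> 25
        chk = (chk & 0x1FFFFFF) << 5 ^ v
        for i in range(5):
            chk ^= GEN[i] if ((b >> i) & 1) else 0
    return chk

def hrp_expand(hrp):
    return [ord(c) >> 5 for c in hrp] + [0] + [ord(c) & 31 for c in hrp]

def bech32m_checksum(hrp, data):
    polymod = bech32_polymod(hrp_expand(hrp) + data + [0] * 6) ^ BECH32M_CONST
    return [(polymod >> 5 * (5 - i)) & 31 for i in range(6)]

# "Message" spelled using only charset letters, valid checksum appended.
msg = "sendsats"
data = [CHARSET.index(c) for c in msg]
vanity = "bc1" + msg + "".join(CHARSET[d] for d in bech32m_checksum("bc", data))
```

The resulting string survives any bech32m checksum validation, which is exactly why the abuse is hard to filter at the encoding layer; a wallet would still reject it when decoding the (nonsense) witness program.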
>
> > But it seems this is a general thing... If you wanted an inappropriate
> > message you could therefore just use bech32m addressed outputs.
>
> Yes, and people have done that with base58check.  IsStandard OP_RETURN
> attempts to minimize that abuse by being cheaper in two ways:
>
> 1. More data allowed in the scriptPubKey, e.g. 80 byte payload (81 actually, I
>think) for OP_RETURN versus 40 bytes for a BIP141 payload.
>Maximizing payload size better amortizes the overhead cost of the
>containing transaction and the output's nValue field.
>
> 2. Exemption from the dust limit.  If you use a currently defined
>address type, the nValue needs to pay at least a few thousand nBTC
>(few hundred satoshis), about $0.15 USD minimum at $50k USD/BTC.  For
>OP_RETURN, the nValue can be 0, so there's no additional cost beyond
>normal transaction relay fees.
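The "$0.15 at $50k USD/BTC" figure checks out; 300 satoshis is used below purely as an illustrative "few hundred satoshis" (actual dust thresholds vary by output type):

```python
SATS_PER_BTC = 100_000_000

dust_sats = 300        # illustrative dust threshold, not an exact policy value
btc_usd = 50_000
usd_cost = dust_sats * btc_usd / SATS_PER_BTC  # 0.15 USD
```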
>
> Although someone creating an OP_RETURN up to ~1 MB with miner support
> can bypass the dust limit, the efficiency advantage remains no matter
> what.
>
> > One of the nice things is that the current psbt interface uses a blind
> union type whereby the entries in an array are either [address, amount]
> or
> > ["data", hex]. Having an address type would allow more uniform handling,
> > which is convenient for strongly typed RPC bindings (e.g. rust bitcoin
> uses
> > a hashmap of address to amount so without a patch you can't create op
> > returns).
>
> I don't particularly care how the data in PSBTs are structured.  My mild
> opposition was to adding code to the wallet that exposes everyday users
> to OP_RETURN addresses.
>
> > I would much prefer to not have to do this in a custom way, as opposed
> > to a way which is defined in a standard manner across all software
> > (after all, that's the point of standards).
>
> I'm currently +0.1 on the idea of an address format of OP_RETURN, but I
> want to make sure this isn't underwhelmingly motivated or will lead to a
> resurgence of block chain graffiti.
>
> -Dave
>
> ## Creating an 11,000 byte OP_RETURN
>
> $ bitcoind -daemon -regtest -acceptnonstdtxn
> Bitcoin Core starting
>
> $ bitcoin-cli -regtest -generate 101
> {
>   "address": "bcrt1qh9uka5z040vx2rc3ltz3tpwmq4y2mt0eufux9r",
>   "blocks": [
> [...]
> }
>
> $ bitcoin-cli -regtest send '[{"data": "'$( dd if=/dev/zero bs