Re: [bitcoin-dev] CTV dramatically improves DLCs

2022-02-06 Thread Thibaut Le Guilly via bitcoin-dev
Hi all,

A lot is being discussed, but I just wanted to react to a few points.

# CSFS

Lloyd, good point about CSFS not providing the same privacy benefits, and
OP_CAT being required in addition. And thanks Philipp for the link to your
post, it was an interesting read!

Jeremy
>CSFS might have independent benefits, but in this case CTV is not being
>used in the Oracle part of the DLC, it's being used in the user generated
>mapping of Oracle result to Transaction Outcome.

My point was that CSFS could be used both in the oracle part and in the
transaction restriction part (as in the post by Philipp), but again it
does not really provide the same model as DLCs, as pointed out by Lloyd.

# Performance

Regarding how much performance benefit this CTV approach would provide,
without considering the benefit of not having to transmit and store a large
number of adaptor signatures, and without considering any further
optimization of the anticipation points computation, I tried to get a rough
estimate through some benchmarking. Basically, if I'm not mistaken, using
CTV we would only have to compute the oracle anticipation points, without
needing any signing or verification. I've thus made a benchmark comparing
the current approach (signing + verification) against computing only the
anticipation points, for a single oracle with 17 digits and varying
payouts (between 45000 and 55000). The results are below.

Without using parallelization:
baseline:                    [7.8658 s 8.1122 s 8.3419 s]
no signing/no verification:  [321.52 ms 334.18 ms 343.65 ms]

Using parallelization:
baseline:                    [3.0030 s 3.1811 s 3.3851 s]
no signing/no verification:  [321.52 ms 334.18 ms 343.65 ms]

So it seems like the performance improvement is roughly 24x for the serial
case and 10x for the parallel case.
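
For reference, here is a rough sketch (not the benchmark code itself) of the
core per-digit computation that remains once signing and verification are
dropped, namely the oracle anticipation point S = R + H(R, P, m)*P for a given
nonce and outcome digit. It assumes the rust-secp256k1 and sha2 crates and uses
a plain SHA-256 challenge; real DLC code uses x-only keys and BIP340 tagged
hashes, so treat the hashing and key-handling details here as placeholders:

    // Sketch only; assumes secp256k1 = "0.27" and sha2 = "0.10" as dependencies.
    use secp256k1::{All, PublicKey, Scalar, Secp256k1};
    use sha2::{Digest, Sha256};

    /// Anticipation point for one (nonce, message) pair: S = R + H(R, P, m) * P.
    /// The oracle's later attestation s satisfies s*G = S, so s is the adaptor
    /// secret for S. Parity/x-only handling of real BIP340 keys is ignored here.
    fn anticipation_point(
        secp: &Secp256k1<All>,
        oracle_pk: &PublicKey,
        nonce: &PublicKey,
        msg: &[u8],
    ) -> PublicKey {
        let mut h = Sha256::new();
        h.update(nonce.serialize());
        h.update(oracle_pk.serialize());
        h.update(msg);
        let challenge: [u8; 32] = h.finalize().into();
        let challenge = Scalar::from_be_bytes(challenge).expect("hash below curve order");

        // H(R, P, m) * P, then add R.
        oracle_pk
            .mul_tweak(secp, &challenge)
            .expect("non-zero tweak result")
            .combine(nonce)
            .expect("point addition")
    }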

The two benchmarks are available (how to run them is detailed in the README
in the same folder):
*
https://github.com/p2pderivatives/rust-dlc/blob/ctv-bench-simulation-baseline/dlc-manager/benches/benchmarks.rs#L290
*
https://github.com/p2pderivatives/rust-dlc/blob/ctv-bench-simulation/dlc-manager/benches/benchmarks.rs#L290

Let me know if you think that's a fair simulation or not. One thing I'd
also like to see is the impact of a very large taproot tree on the size of
the witness data when spending script paths that sit deep in the tree, and
how that would affect the transaction fee. I might try to experiment with
that at some point.
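
As a starting point for that, here is a back-of-the-envelope sketch based on
BIP341: the script-path control block is 33 + 32*depth bytes (depth at most
128), and witness bytes count as 1 weight unit each, i.e. 0.25 vbytes. The
leaf count and feerate below are made-up numbers just to get an order of
magnitude:

    /// Extra witness bytes from the merkle path alone in a taproot
    /// script-path spend (BIP341: control block = 33 + 32 * depth bytes).
    fn control_block_bytes(depth: u32) -> u32 {
        assert!(depth <= 128, "BIP341 caps the merkle path at 128 elements");
        33 + 32 * depth
    }

    fn main() {
        // Hypothetical numbers: a DLC taproot tree with ~100k CTV leaves and
        // a 2 sat/vB feerate, purely for illustration.
        let leaves: u64 = 100_000;
        let depth = 64 - (leaves - 1).leading_zeros(); // ceil(log2(leaves)) for a balanced tree
        let bytes = control_block_bytes(depth);
        let vbytes = bytes as f64 / 4.0; // witness bytes are 1 WU = 0.25 vB each
        let fee_sats = vbytes * 2.0;
        println!(
            "depth {}: control block {} bytes, ~{:.0} vbytes, ~{:.0} sats at 2 sat/vB",
            depth, bytes, vbytes, fee_sats
        );
    }

For a balanced tree with about 100k leaves that works out to a 577-byte control
block, i.e. roughly 145 extra vbytes per script-path spend, on top of the leaf
script and its witness stack.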

Cheers,

Thibaut

On Mon, Feb 7, 2022 at 2:56 AM Jeremy Rubin via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> I'm not sure what is meant concretely by (5) but I think overall
> performance is ok here. You will always have 10mins or so to confirm the
> DLC so you can't be too fussy about performance!
>
>
> I mean that if you think of the CIT points as being the X axis (or
> independent axes if multivariate) of a contract, the Y axis is the
> dependent variable represented by the CTV hashes.
>
>
> For a DLC living inside a lightning channel, which might be updated
> between parties e.g. every second, this means you only have to recompute
> the cheaper part of the DLC when you update the payoff curves (y axis),
> and you only have to update the points whose y value changes.
>
> For on chain DLCs this point is less relevant since the latency of block
> space is larger.
> ___
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] A suggestion to periodically destroy (or remove to secondary storage for Archiving reasons) dust, Non-standard UTXOs, and also detected burn

2022-02-06 Thread Eric Voskuil via bitcoin-dev


> On Feb 6, 2022, at 10:52, Pieter Wuille via bitcoin-dev wrote:
> 
> 
>> Dear Bitcoin Developers,
> 
>> -When I contacted bitInfoCharts to divide the first interval of addresses, 
>> they kindly divided it into 3 intervals. From here:
>> https://bitinfocharts.com/top-100-richest-bitcoin-addresses.html
>> -You can see that there are more than 3.1m addresses holding ≤ 0.01 BTC 
>> (1000 Sat) with total value of 14.9BTC; an average of 473 Sat per address.
> 
>> -Therefore, a simple solution would be to follow the difficulty adjustment 
>> idea and just delete all those
> 
> That would be a soft-fork, and arguably could be considered theft. While 
> commonly (but not universally) implemented standardness rules may prevent 
> spending them currently, there is no requirement that such a rule remain in 
> place. Depending on how feerate economics work out in the future, such 
> outputs may not even remain uneconomical to spend. Therefore, dropping them 
> entirely from the UTXO set risks destroying potentially useful funds 
> people own.
> 
>> or at least remove them to secondary storage
> 
> Commonly adopted Bitcoin full nodes already have two levels of storage 
> effectively (disk and in-RAM cache). It may be useful to investigate using 
> amount as a heuristic about what to keep and for how long. IIRC, not every 
> full node implementation even uses a UTXO model.

You recall correctly. Libbitcoin has never used a UTXO store. A full node has 
no logical need for an additional store of outputs, as transactions already 
contain them, and a full node requires all of them, spent or otherwise.

The hand-wringing over UTXO set size does not apply to full nodes; it is 
relevant only to pruning. Given linear worst-case growth, even that is 
ultimately a non-issue.

>> for Archiving with extra cost to get them back, along with non-standard 
>> UTXOs and Burned ones (at least for publicly known, published, burn 
>> addresses).
> 
> Do you mean this as a standardness rule, or a consensus rule?
> 
> * As a standardness rule it's feasible, but it makes policy (further) deviate 
> from economically rational behavior. There is no reason for miners to require 
> a higher price for spending such outputs.
> * As a consensus rule, I expect something like this to be very controversial. 
> There are currently no rules that demand any minimal fee for anything, and 
> given uncertainty over how fee levels could evolve in the future, it's 
> unclear what those rules, if any, should be.
> 
> Cheers,
> 
> --
> Pieter
> 
> ___
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] A suggestion to periodically destroy (or remove to secondary storage for Archiving reasons) dust, Non-standard UTXOs, and also detected burn

2022-02-06 Thread Pieter Wuille via bitcoin-dev

> Dear Bitcoin Developers,

> -When I contacted bitInfoCharts to divide the first interval of addresses, 
> they kindly divided it into 3 intervals. From here:
> https://bitinfocharts.com/top-100-richest-bitcoin-addresses.html
> -You can see that there are more than 3.1m addresses holding ≤ 0.01 BTC 
> (1000 Sat) with total value of 14.9BTC; an average of 473 Sat per address.

> -Therefore, a simple solution would be to follow the difficulty adjustment 
> idea and just delete all those

That would be a soft-fork, and arguably could be considered theft. While 
commonly (but not universally) implemented standardness rules may prevent 
spending them currently, there is no requirement that such a rule remain in 
place. Depending on how feerate economics work out in the future, such outputs 
may not even remain uneconomical to spend. Therefore, dropping them entirely 
from the UTXO set risks destroying potentially useful funds people own.

> or at least remove them to secondary storage

Commonly adopted Bitcoin full nodes already have two levels of storage 
effectively (disk and in-RAM cache). It may be useful to investigate using 
amount as a heuristic about what to keep and for how long. IIRC, not every 
full node implementation even uses a UTXO model.
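
As a toy illustration of that heuristic (this is not how any existing
implementation's coin cache behaves, and all the type names here are made up),
one could flush the lowest-value coins out of the in-memory layer first, on the
assumption that dust is the least likely to be spent soon:

    use std::collections::HashMap;

    type OutPoint = ([u8; 32], u32); // (txid, vout)

    #[derive(Clone)]
    struct Coin {
        value_sats: u64,
        script_pubkey: Vec<u8>,
    }

    /// Toy two-level UTXO store: a bounded in-memory layer in front of "disk"
    /// (both just HashMaps here), with amount as the eviction heuristic.
    struct ToyCoinCache {
        memory: HashMap<OutPoint, Coin>,
        disk: HashMap<OutPoint, Coin>,
        memory_limit: usize,
    }

    impl ToyCoinCache {
        fn insert(&mut self, outpoint: OutPoint, coin: Coin) {
            self.memory.insert(outpoint, coin);
            while self.memory.len() > self.memory_limit {
                // Flush the smallest-value coin to disk first.
                let victim = *self
                    .memory
                    .iter()
                    .min_by_key(|(_, c)| c.value_sats)
                    .map(|(op, _)| op)
                    .expect("cache is non-empty");
                let coin = self.memory.remove(&victim).expect("just found it");
                self.disk.insert(victim, coin);
            }
        }
    }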

> for Archiving with extra cost to get them back, along with non-standard UTXOs 
> and Burned ones (at least for publicly known, published, burn addresses).

Do you mean this as a standardness rule, or a consensus rule?

* As a standardness rule it's feasible, but it makes policy (further) deviate 
from economically rational behavior. There is no reason for miners to require a 
higher price for spending such outputs.
* As a consensus rule, I expect something like this to be very controversial. 
There are currently no rules that demand any minimal fee for anything, and 
given uncertainty over how fee levels could evolve in the future, it's unclear 
what those rules, if any, should be.

Cheers,

--
Pieter

___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] CTV dramatically improves DLCs

2022-02-06 Thread Jeremy Rubin via bitcoin-dev
I'm not sure what is meant concretely by (5) but I think overall
performance is ok here. You will always have 10mins or so to confirm the
DLC so you can't be too fussy about performance!


I mean that if you think of the CIT points as being the X axis (or
independent axes if multivariate) of a contract, the Y axis is the
dependent variable represented by the CTV hashes.


For a DLC living inside a lightning channel, which might be updated between
parties e.g. every second, this means you only have to recompute the
cheaper part of the DLC when you update the payoff curves (y axis), and you
only have to update the points whose y value changes.
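
As a rough sketch of the bookkeeping (the ctv_hash below is just a stand-in for
the real BIP119 default template hash, which commits to the full serialized
outputs, nVersion, nLockTime, etc.), a payoff-curve update only touches the Y
column; the anticipation points in the X column are computed once and never
revisited:

    use std::collections::BTreeMap;

    /// Placeholder for the BIP119 DefaultCheckTemplateVerifyHash; a real
    /// implementation hashes the serialized outputs, nVersion, nLockTime, etc.
    fn ctv_hash(payout_to_alice: u64, payout_to_bob: u64) -> [u8; 32] {
        use sha2::{Digest, Sha256};
        let mut h = Sha256::new();
        h.update(payout_to_alice.to_le_bytes());
        h.update(payout_to_bob.to_le_bytes());
        h.finalize().into()
    }

    /// One row of the contract: X = oracle anticipation point (computed once
    /// per oracle event), Y = CTV hash of the settlement tx for that outcome.
    struct OutcomeRow {
        anticipation_point: [u8; 33],
        payout_to_alice: u64,
        ctv: [u8; 32],
    }

    /// Apply a new payoff curve: only rows whose payout changed get a new hash.
    fn update_payoff(
        rows: &mut BTreeMap<u64, OutcomeRow>,
        new_curve: &dyn Fn(u64) -> u64,
        total_collateral: u64,
    ) {
        for (outcome, row) in rows.iter_mut() {
            let new_payout = new_curve(*outcome);
            if new_payout != row.payout_to_alice {
                row.payout_to_alice = new_payout;
                row.ctv = ctv_hash(new_payout, total_collateral - new_payout);
            }
            // row.anticipation_point is untouched: the x axis does not change.
        }
    }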

For on chain DLCs this point is less relevant since the latency of block
space is larger.
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


[bitcoin-dev] A suggestion to periodically destroy (or remove to secondary storage for Archiving reasons) dust, Non-standard UTXOs, and also detected burn

2022-02-06 Thread shymaa arafat via bitcoin-dev
Dear Bitcoin Developers,

-I think you may remember my earlier messages about my proposal to
partition the UTXO set Merkle tree (among other things) in bridge servers
providing proofs to stateless nodes.
-While those previous suggestions might not have been of much interest to
core developers, I think this one, which I happened to notice, will be:

-When I contacted bitInfoCharts to divide the first interval of addresses,
they kindly divided it into 3 intervals. From here:
https://bitinfocharts.com/top-100-richest-bitcoin-addresses.html
-You can see that there are more than 3.1m addresses holding ≤ 0.01
BTC (1000 Sat) with a total value of 14.9 BTC; an average of 473 Sat per
address.
-Keeping in mind that an address can hold more than 1 UTXO; i.e., this is
even a lower bound on the number of UTXOs holding such small values.
-Noticing also that every lightning network transaction adds one dust UTXO
(actually two, one of which is instantly spent, and their dust limit is 333
Sat, not even 546); i.e., this number of dust UTXOs will probably increase
with time.
.
-Therefore, a simple solution would be to follow the difficulty adjustment
idea and just delete all those, or at least remove them to secondary
storage for archiving with extra cost to get them back, along with
non-standard UTXOs and burned ones (at least for publicly known,
published, burn addresses). Benefits are:

1- You will relieve the system state from the burden of about 3.8m UTXOs
(3.148952m dust
+ 0.45m non-standard
+ 0.178m burned, e.g.
https://blockchair.com/bitcoin/address/14oLvT2
https://blockchair.com/bitcoin/address/1CounterpartyXXXUWLpVr
as of today, 6 Feb 2022),
a number that will probably increase with time.
2- You will add to the scarcity of Bitcoin, even if only by a very small
amount like 14.9 BTC.
3- You will remove the risk of any of these kinds of UTXOs being used for
attacks, as has happened before.
.
-Finally, the parameters could be studied for optimal values; I mean the
first deletion, the periodic interval, and also the deletion threshold (maybe
everything holding less than $1, not just 546 Sat, needs to be deleted).
.
That's all
Thank you very much
.
Shymaa M Arafat
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] CTV dramatically improves DLCs

2022-02-06 Thread Lloyd Fournier via bitcoin-dev
Hi Jeremy,


On Sat, 29 Jan 2022 at 04:21, Jeremy wrote:

> Lloyd,
>
> This is an excellent write up, the idea and benefits are clear.
>
> Is it correct that in the case of a 3/5th threshold it is a total 10x *
> 30x = 300x improvement? Quite impressive.
>

Yes, I think so, but I am mostly guessing these numbers. The improvement is
several orders of magnitude -- enough to make almost any payout curve
possible without UX degradation, I think.


> I have a few notes of possible added benefits / features of DLCs with CTV:
>
> 1) CTV also enables a "trustless timeout" branch, whereby you can have a
> failover claim that returns funds to both sides.
>
> There are a few ways to do this:
>
> A) The simplest is just an oracle-free  CTV whereby the
> timeout transaction has an absolute/relative timelock after the creation of
> the DLC in question.
>
> B) An alternative approach I like is to have the base DLC have a branch
> ` CTV` which pays into a DLC that is the exact same
> except it removes the just-used branch and replaces it with ` tx)> CTV` which contains a relative timelock R for the desired amount of
> time to resolve. This has the advantage of always guaranteeing at least R
> amount of time since the Oracles have been claimed to be non-live to
> "return funds"  to parties participating
>
>
> 2) CTV DLCs are non-interactive asynchronously third-party unilaterally
> creatable.
>
> What I mean by this is that it is possible for a single party to create a
> DLC on behalf of another user since there is no required per-instance
> pre-signing or randomly generated state. E.g., if Alice wants to create a
> DLC with Bob, and knows the contract details, oracles, and a key for Bob,
> she can create the contract and pay to it unilaterally as a payment to Bob.
>
> This enables use cases like pay-to-DLC addresses. Pay-to-DLC addresses can
> also be constructed and then sent (along with a specific amount) to a third
> party service (such as an exchange or Lightning node) to create DLCs
> without requiring the third party service to do anything other than make
> the payment as requested.
>

This is an interesting point -- I hadn't thought about interactivity prior
to this.

I agree CTV makes possible an on-chain DEX kind of thing where you put in
orders by sending txs to a DLC address generated from a maker's public key.
You could cancel the order by spending out of it via some cancel path. You
need to inform the maker of (i) your public key  (maybe you can use the
same public key as one of the inputs) and (ii) the amount the maker is
meant to put in (use fixed denominations?).

Although that's cool I'm not really a big fan of "putting the order book
on-chain" ideas because it brings up some of the problems that EVM DEXs
have.
I like centralized non-custodial order books.
For this I don't think that CTV makes a qualitative improvement given we
can use ANYONECANPAY to get some non-interactivity.
For example here's an alternative design:

The *taker* provides an HTTP REST API where you (a maker) can:

1. POST an order using SIGHASH_ANYONECANPAY signed inputs and the contract
details needed to generate the single output (the CTV DLC). The taker can
take the signatures and complete the transaction (they need to provide an
exact input amount of course).
2. DELETE an order -- the maker does some sort of revocation on the DLC
output e.g. signs something giving away all the coins in one of the
branches. If a malicious taker refuses to delete you just double spend one
of your inputs.

If the taker wants to take a non-deleted order they *could* just finish the
transaction but if they still have a connection open with the maker then
they could re-contact them to do a normal tx signing (rather than using
the ANYONECANPAY signatures).
The obvious advantage here is that there are no transactions on-chain
unless the order is taken.
Additionally, the maker can send the same order to multiple takers -- the
takers will cancel each other's transactions should they broadcast the
transactions.
Looking forward to seeing if you can come up with something better than this
with CTV.
The above is suboptimal as getting both sides to have a change output is
hard but I think it's also difficult in your suggestion.
It might be better to use SIGHASH_SINGLE + ANYONECANPAY so the maker has to
be the one to provide the right input amount but the taker can choose their
change output and the fee...
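
A rough sketch of what the POSTed order could carry under that SIGHASH_SINGLE |
SIGHASH_ANYONECANPAY variant (all the names here are hypothetical, not an
existing protocol): each maker signature commits only to its own input and to
the output at the same index (the CTV DLC output), leaving the taker free to
add inputs, a change output, and the fee.

    /// Hypothetical wire format for the maker -> taker "POST order" body.
    /// Each maker input is signed with SIGHASH_SINGLE | SIGHASH_ANYONECANPAY
    /// (flag byte 0x83), committing only to that input and the output at the
    /// same index, so the taker can append inputs, a change output and fee.
    struct DlcOrder {
        /// Funding inputs provided by the maker, each with its 0x83 signature.
        maker_inputs: Vec<SignedInput>,
        /// Everything needed to reconstruct the single committed output:
        /// the CTV DLC script tree (oracle event, payout curve, maker pubkey).
        contract_details: ContractDetails,
        /// Amount the taker is expected to contribute to that output.
        taker_amount_sats: u64,
    }

    struct SignedInput {
        prev_txid: [u8; 32],
        prev_vout: u32,
        /// Signature made with sighash flag 0x83 (SINGLE | ANYONECANPAY).
        signature: Vec<u8>,
    }

    struct ContractDetails {
        oracle_event_id: String,
        payout_curve: Vec<(u64, u64)>, // (oracle outcome, payout to maker)
        maker_pubkey: [u8; 33],
    }

The index pairing is the main constraint: the maker's signed input and the DLC
output have to sit at matching indices for SIGHASH_SINGLE to cover the right
output.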


>
> 3) CTV DLCs can be composed in interesting ways
>
> Options over DLCs open up many exciting types of instrument where Alice
> can do things like:
> A) Create an Option expiring in 1 week where Bob can add funds to pay a
> premium and "Open" a DLC on an outcome closing in 1 year
> B) Create an Option expiring in 1 week where one-of-many Bobs can pay the
> premium (on-chain DEX?).
>
>  See https://rubin.io/bitcoin/2021/12/20/advent-23/ for more concrete
> stuff around this.
>
> There are also opportunities for perpetual-like contracts where you could
> combine into one