Re: [bitcoin-dev] [Lightning-dev] Reconciling the off-chain and on-chain models with eltoo

2019-09-18 Thread ZmnSCPxj via bitcoin-dev
Good morning Christian, and list,


> > > uncooperative membership change:
> > >
> > > -   a subset of channel parties might want to cooperatively sign a 
> > > channel splicing transaction to 'splice out' uncooperative parties
> >
> > I believe this is currently considered unsafe.
> > https://lists.linuxfoundation.org/pipermail/lightning-dev/2019-April/001975.html
> > Unless you refer to another mechanism...?
> > I believe this will end up requiring deep confirmation of the
> > uncooperative close followed by a new mechanism open.
>
> Not necessarily. If we have an escape hatch in the scripts that allows
> n-1 participants to spend any output attached to the settlement
> transaction, we could reclaim these into a new open right away.

This would have to be very very carefully designed.
The entire point of requiring an n-of-n signature is:

* By using an n-of-n signatory where *you* are a signer, you are completely 
immune to Sybil attacks: even if everybody other than *you* in the signatory 
set is secretly just one entity, this is no different from doing a 2-of-2 
bog-standard boring sleepy Zz Poon-Dryja Lightning Network channel.
  * Any m-of-n signatory where strictly m < n allows anybody with the ability 
to run m nodes to outright steal money from you.
* As processing power is cheap nowadays, there is no m that can be 
considered safe.
  Your alternative is to fall back on proof-of-work, but that just means 
going onchain, so you might as well just do things onchain.
  * This is why 2-of-2 channels work so well: they are the minimum usable 
construction, and any multiparty construction, when Sybilled, devolves to a 
2-of-2 channel.

So the n-1 participants would have to be very very very carefully limited in 
what they can do.
And if the only "right" the n-1 participants have is to force the nth 
participant to claim its funds onchain, then that is implementable with a 
transaction doing just that, pre-signed by the nth participant and given to 
participants 1..n-1.
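
To illustrate the shape of such a pre-signed forced-exit transaction, here is 
a minimal Python sketch. It is purely illustrative: the TxOut type and the 
descriptor-style script strings are stand-ins, not any existing library's API, 
and participant names stand in for their keys.

    from dataclasses import dataclass

    @dataclass
    class TxOut:
        script: str   # output script, shown descriptor-style for clarity
        value: int    # amount in satoshis

    def forced_exit_outputs(balances: dict, exiting: str) -> list:
        # The pre-signed transaction spends the n-of-n funding output into:
        # 1. an output paying the exiting participant its current balance;
        # 2. an output returning the remainder to an (n-1)-of-(n-1) of the
        #    remaining participants, from which they can reopen.
        others = [p for p in balances if p != exiting]
        remainder = sum(balances.values()) - balances[exiting]
        return [
            TxOut(f"wpkh({exiting})", balances[exiting]),
            TxOut(f"wsh(multi({len(others)},{','.join(others)}))", remainder),
        ]

    # e.g. forced_exit_outputs({"A": 50_000, "B": 30_000, "C": 20_000}, "C")

Note that such a transaction would presumably need to be re-signed at every 
state update, since the balances it pays out must track the latest state.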

> > > mining, mining reward and difficulty adjustment
> > >
> > > -   no equivalent concept for multi-party channels
> >
> > Fees for each update. Consider how HTLC routing in Lightning
> > implicitly pays forwarding nodes to cooperate with the forwarding. I
> > imagine most nodes in a multiparticipant offchain system will want to
> > be paid for cooperation, even if just a nominal sub-satoshi amount.
>
> If we allow generic contracts on top of the base update mechanism it'll
> be rather difficult to identify the beneficiary of an update, so it's
> hard to know who should pay a fee. I'd rather argue that cooperating is
> in the interest of all participants since they'd eventually want to
> create an update of their own, and there is no upside to becoming
> unresponsive.
>
> Notice that the fees we levy in LN exist because we expose our funds
> to the risk of not being available by allocating them to an HTLC, not
> for the updates themselves. Since in the forwarding scenario we're only
> exposing the funds of the forwarding nodes to this risk, it's only
> natural that they'd be the ones levying a fee, not the other
> participants who simply sign off on the change.

I suppose that could be argued.

However, I imagine it is possible for some of the updates to be implemented 
via HTLCs within sub-mechanisms of the higher mechanism.
If so, a participant may refuse to sign for the higher mechanism in order to 
force others to use HTLCs on the lower mechanisms, and thereby earn fees due to 
HTLC usage.
I believe I argue as much here: 
https://lists.linuxfoundation.org/pipermail/lightning-dev/2019-July/002055.html

> ZmnSCPxj can request a factory channel reorganization to move some funds from 
> the ZmnSCPxj<->Rene channel to the ZmnSCPxj<->YAijbOJA channel.
> This has the same effect, i.e. it allows a forwarding attempt to push 
> through that would not be possible without the factory-level channel 
> reorganization.
>
> Further, assuming only ZmnSCPxj, YAijbOJA, and Rene are in the channel 
> factory, then it is the same: all three need to be online in order for the 
> JIT-routing to work.
>
> But I observed above that, in a channel rebalance using current channels 
> (without factories) Rene cannot be convinced to waive the fee.

The counterargument is that if rebalances can be made fee-free, then the 
above argument disappears.


>
> > > privacy:
> > >
> > > -   disassociate a particular update from signer(s)
> > > -   disassociate IP address of signers from signature
> > > -   using SIGHASH_ALL for cooperative closes
> >
> > I suppose Tor can be used to disassociate IP address from signers if
> > everyone is from a hidden service. However, we need to include some
> > kind of mix mechanism to allow individual signers to disassociate
> > their ownership of funds from their identity as signers. Though such
> > mechanisms already exist as theoretical constructs, so "just needs
> > implementing".
> > But then ag

Re: [bitcoin-dev] Taproot proposal

2019-09-18 Thread Pieter Wuille via bitcoin-dev
On Mon, 16 Sep 2019 at 21:10, ZmnSCPxj via bitcoin-dev wrote:

> ‐‐‐ Original Message ‐‐‐
> > I'd prefer to not support P2SH-nested TR. P2SH wrapping was useful for 
> > segwit
> > v0 for compatibility reasons. Most wallets/exchanges/services now support 
> > sending
> > to native segwit addresses (https://en.bitcoin.it/wiki/Bech32_adoption) and 
> > that
> > will be even more true if Schnorr/Taproot activate in 12+ months time.
> >
> > Apologies for necroing an ancient thread, but I'm echoing my agreement with 
> > John here.
> > We still have plenty of time for the ecosystem to upgrade by the time 
> > taproot is likely to activate.

> On the other hand, the major benefit of Taproot is the better privacy and 
> homogeneity it affords, and supporting both P2SH-wrapped and non-wrapped 
> SegWit v1 addresses simply increases the number of places where a user may 
> be characterized and potentially identified.

I'm starting to lean towards not allowing P2SH-wrapped Taproot as well.

Given the progress bech32 adoption has made in the past year or so, I
don't think adding P2SH support would result in many more software
authors deciding to implement receive-to-taproot functionality. And
without that advantage, having the option of supporting P2SH wrapping
actually risks degrading the privacy goals it aims for (see ZmnSCPxj's
argument above).

My main intuition for keeping P2SH is that Segwit was really designed
to support both, and I expect that disallowing P2SH would actually
require (very slightly) more complex validation code. I don't think
this is a sufficiently strong reason, especially as keeping P2SH
support does increase the number of combinations software needs to
test (both in consensus code and wallets).

Cheers,

-- 
Pieter


[bitcoin-dev] bip-tapscript resource limits

2019-09-18 Thread Pieter Wuille via bitcoin-dev
Hi all,

In the draft for bip-tapscript (see [1], current version [2]), we
propose removing the per-block sigops limit for tapscript scripts, and
replacing it with a "every script gets a budget of sigops based on its
witness size (one per 50 WU)". Since signatures (plus pubkeys) take
more WU than that, this is not a restriction for anything but
pathologically constructed scripts. Simultaneously, it removes the
multi-dimensional optimization problem that theoretically needs to be
solved to maximize revenue in block template construction.
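
For intuition, here is a minimal sketch of how such a per-script budget
could be tracked during execution. This is my reading of the draft rule
above; the eventual BIP text may do the accounting slightly differently
(e.g. with a fixed base allowance on top of the witness size).

    class SigopsBudget:
        # One signature-check credit per 50 WU of witness size, as
        # described above. Illustrative only; not normative.
        def __init__(self, witness_size_wu: int):
            self.remaining = witness_size_wu // 50

        def consume_checksig(self) -> bool:
            # Called once per executed signature check.
            # Returns False if the script exceeded its budget.
            self.remaining -= 1
            return self.remaining >= 0

Since a signature plus pubkey already costs more than 50 WU of witness
data, an honestly constructed script can never run out of budget.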

With our recent work on Miniscript (see [3]), we discovered that the
variety of other script resource limits also introduces (weaker)
complex optimization requirements, but for script constructors instead
of miners. An overview:
1) Scripts are limited to 10000 bytes (and 3600 by standardness currently)
2) The total number of non-push opcodes in a script + the number of
keys participating in executed OP_CHECKMULTISIG(VERIFY) opcodes must
not exceed 201.
3) The size of the stack + altstack combined cannot exceed 1000
elements during execution (and the initial stack is limited to 100
elements by standardness currently)
4) The maximum size of elements on the stack is 520 bytes (and 80
bytes in the initial stack by standardness)

In a discussion about this with Andrew Poelstra we wondered whether
all these limits are still necessary in bip-tapscript. I believe the
only relevant ones are those that reduce total memory usage, or
verification CPU usage per witness byte. Total script verification CPU
usage isn't relevant I believe, because the same effect can be
accomplished by having a transaction (or block) with multiple inputs.

So let's go over the above resource limits, and see how they help with
limiting memory usage or CPU usage per byte.

# Script size limit

Memory usage for validation can grow with larger scripts, but only
indirectly by constructing extra stack data. Since that data is
independently limited by (3), we don't need to consider it here.

There used to be a way through which larger scripts would cause larger
per-byte verification cost, but I believe it no longer applies. Due
to the scriptCode being replaced with a pre-hashed tapleaf hash, the
per-sigop hashing cost is now easily made independent of the size of
the script in implementations.

My suggestion is to drop the script size limit in tapscript, and
instead have it only be implicitly limited by transaction size limits.

# The 201 non-push opcodes limit

Ignoring how more opcodes can grow the stack and altstack (which are
already restricted by 3), I believe there is only one way that
additional (executed) opcodes can increase per-opcode execution time
in the current Bitcoin Core implementation [4], namely the "vfExec"
stack that keeps track of what sides of IF/NOTIF/ELSE/ENDIF execution
is currently passing through. As pointed out by Sergio Demian Lerner
[5], an O(1) algorithm can do this just as well (a variant of which is
implemented in PR 16902 [6]).
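
For reference, the trick in that O(1) variant is that only the position
of the first "false" on the condition stack matters: everything nested
below the first false is unexecuted anyway, so individual entries below
it never need to be inspected. A condensed Python rendering of the idea
(the actual PR is C++):

    class ConditionStack:
        # O(1) replacement for the naive vfExec vector: only the stack's
        # size and the position of the first False entry are tracked.
        NO_FALSE = -1

        def __init__(self):
            self.size = 0
            self.first_false = self.NO_FALSE

        def all_true(self) -> bool:
            # An opcode executes only when this holds.
            return self.first_false == self.NO_FALSE

        def push(self, value: bool):      # OP_IF / OP_NOTIF
            if self.first_false == self.NO_FALSE and not value:
                self.first_false = self.size
            self.size += 1

        def pop(self):                    # OP_ENDIF
            self.size -= 1
            if self.first_false == self.size:
                self.first_false = self.NO_FALSE

        def toggle_top(self):             # OP_ELSE
            if self.first_false == self.NO_FALSE:
                self.first_false = self.size - 1   # True -> False
            elif self.first_false == self.size - 1:
                self.first_false = self.NO_FALSE   # False -> True
            # otherwise a deeper False dominates; nothing to update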

Taking such a patch into account, I don't think there are any problems
with removing the 201 ops limit for bip-tapscript scripts. Especially
given its strange semantics around OP_CHECKMULTISIG(VERIFY) (the keys
participating in those are each counted as 1 towards the 201 limit,
but only when executed, while all non-push opcodes are counted as 1
even when not executed), I think this is a nice simplification.
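
For concreteness, the accounting being removed works roughly like this
(a condensed model of the current consensus rule, from memory of
Bitcoin Core's EvalScript; illustrative only):

    MAX_OPS_PER_SCRIPT = 201

    def op_count(ops) -> int:
        # ops: iterable of (is_push, executed, multisig_nkeys) triples;
        # is_push covers data pushes and the OP_0..OP_16 constants,
        # multisig_nkeys is 0 for anything but OP_CHECKMULTISIG(VERIFY).
        count = 0
        for is_push, executed, nkeys in ops:
            if not is_push:
                count += 1        # counted even when not executed
            if executed and nkeys:
                count += nkeys    # keys counted only when executed
        return count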

# The 1000 element limit for stack + altstack

A limit for the number of elements on the stack/altstack directly
affects memory usage. In a naive implementation without deduplication
(as is used in Bitcoin Core now), every OP_3DUP can add 120 bytes of
memory usage plus the size of the data in the created elements
themselves (which can be a multiple of that number), leading to
several GB of memory usage for executing a maximal 4 MB script
(multiplied by the number of parallel executions). Even when using
reference-counting techniques to reduce duplication, 100 MB memory
usage is not unreasonable. I don't think those are acceptable numbers.
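
Back-of-the-envelope, using the figures above (illustrative arithmetic,
not a benchmark):

    script_size = 4_000_000   # bytes; maximal 4 MB script of OP_3DUPs
    overhead_per_3dup = 120   # bookkeeping bytes per OP_3DUP (naive impl)
    max_element = 520         # maximal stack element size in bytes

    # Each 1-byte OP_3DUP pushes three elements; with maximal elements
    # the naive worst case without a stack limit would be roughly:
    worst_case = script_size * (overhead_per_3dup + 3 * max_element)
    print(worst_case / 1e9)   # ~6.7, i.e. "several GB" as stated above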

The stack size can also directly affect per-opcode execution time for
OP_ROLL, again shown by [5]. A block full of the most tightly packed
OP_ROLLS (which I believe is a repetition of OP_3DUP OP_ROLL OP_ROLL
OP_ROLL) operating on a stack of 1000 elements for me takes around 4.3
s of CPU time to verify. That's significant, but it's surprisingly
close to what a block packed with OP_CHECKSIGs (taking the 1 sigop /
50 WU limit into account) takes to verify on the same machine (3.8 s).
Even more remarkably, that time is also very close to how long a block
full of most tightly packed OP_HASH256s on 520-byte inputs takes to
verify when disabling SHA256 hardware acceleration (3.6 s).

I believe we should keep this 1000 element stack limit for these
reasons. The 100 limit on input stacks can be increased to 1000 for
uniformity with the during-execution limit.

# The 520 byte stack element size limit

Given that there are no kno

Re: [bitcoin-dev] [Lightning-dev] Reconciling the off-chain and on-chain models with eltoo

2019-09-18 Thread Christian Decker via bitcoin-dev
ZmnSCPxj writes:
>> cooperative close:
>> * when all parties mutually agree to close the channel
>> * close the channel with a layer one transaction which finalizes the outputs 
>> from the most recent channel output state
>> * should be optimized for privacy and low on-chain fees
>
> Of note is that a close of an update mechanism does not require the
> close of any hosted update mechanisms, or more prosaically, "close of
> channel factory does not require close of hosted channels".  This is
> true for both unilateral and cooperative closes.
>
> Of course, the most likely reason you want to unilaterally close an
> outer mechanism is if you have some contract in some deeply-nested
> mechanism that will absolute-locktime expire "soon", in which case you
> have to close everything that hosts it.  But for example if a channel
> factory has channels A B C and only A has an HTLC that will expire
> soon, while the factory and A have to close, B and C can continue
> operation, even almost as if nothing happened to A.

Indeed this is something that I think we already mentioned back in the
duplex micropayment channel days, though it was a bit hidden and only
mentioned HTLCs (the principle carries over to other structures built
on the raw update mechanism):

> The process simply involves one party creating the teardown
> transaction, both parties signing it and committing it to the
> blockchain. HTLC outputs which have not been removed by agreement can
> be copied over to the summary transaction such that the same timelocks
> and resolution rules apply.

Notice that in the case of eltoo the settlement transaction already
plays the role of the teardown transaction in DMC.

>> membership change (ZmnSCPxj ritual):
>> * when channel parties want to leave or add new members to the channel
>> * close and reopen a new channel via something like a channel splicing 
>> transaction to the layer one blockchain
>> * should be optimized for privacy and low on-chain fees paid for by parties 
>> entering and leaving the channel
>
> Assuming you mean that any owned funds will eventually have to be
> claimed onchain, I suppose this is doable as splice-out.
>
> But note that currently we have some issues with splice-in.
>
> As far as I can tell (perhaps Lisa Neigut can correct me, I believe
> she is working on this), splice-in has the below tradeoffs:
>
> 1.  Option 1: splice-in is async (other updates can continue after all 
> participants have sent the needed signatures for the splice-in).
> Drawback is that spliced-in funds need to be placed in a temporary
> n-of-n, meaning at least one additional tx.

Indeed this is the first proposal I had back at the Milan spec meeting,
and you are right that it requires stashing the funds in a temporary
co-owned output to make sure the transition once we splice in is
atomic. Batching could help here: if we have 3 participants joining, they
can coordinate to set the funds aside together and then splice in at the
same time. The downside is the added on-chain transaction, and the fact
that the funds are not operational until they reach the required depth
(I don't think we can avoid this with the current security guarantees
provided by Bitcoin). Notice that there is still some uncertainty
regarding the confirmation of the splice-in even though the funds were
stashed ahead of time, and we may end up in a state where we assumed
the splice-in would succeed, but the fees we attached turn out to be
too low. In this case we built a sandcastle that collapses due to our
foundation being washed away, and we'd have to go back and agree on
re-splicing with corrected fees (which a malicious participant might
sabotage) or hope the splice eventually confirms.

> 2.  Option 2: splice-in is efficient (only the splice-in tx appears onchain).
> Drawback is that subsequent updates can only occur after the splice-in tx 
> is deeply confirmed.
> * This can be mitigated somewhat by maintaining a pre-splice-in
> and post-splice-in mechanism, until the splice-in tx is deeply
> confirmed, after which the pre-splice-in version is discarded.
>   Updates need to be done on *both* mechanisms until then, and any
> introduced money is "unuseable" anyway until the splice-in tx
> confirms deeply since it would not exist in the pre-splice-in
> mechanism yet.

This is the more complex variant we discussed during the last
face-to-face in Australia, and it seemed to me that people were mostly
in favor of doing it this way. It adds complexity since we maintain
multiple variants (making it almost un-implementable in LN-penalty);
however, the reduced on-chain footprint, and the confirmation
uncertainty that burdens the first solution, are strong arguments in
favor of this option.
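
To make the bookkeeping of this option concrete, here is a minimal
sketch: every update is applied to both the pre-splice and post-splice
state until the splice-in is deeply confirmed, at which point the
pre-splice variant is dropped. The state objects and their apply()
method are hypothetical placeholders, not a proposed protocol.

    class SplicePendingChannel:
        # Mirror every update across the pre- and post-splice states
        # while the splice-in transaction is not yet deeply confirmed.
        def __init__(self, pre_splice_state, post_splice_state):
            self.states = [pre_splice_state, post_splice_state]

        def apply_update(self, update):
            # If the splice-in never confirms, the pre-splice state is
            # the one that remains enforceable, so both must track the
            # update.
            for state in self.states:
                state.apply(update)

        def on_splice_deeply_confirmed(self):
            # The pre-splice variant can now be discarded.
            del self.states[0]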

> But perhaps a more interesting thing (and more in keeping with my
> sentiment "a future where most people do not typically have
> single-signer ownership of coins onchain") would be to transfer funds
> from one multipart