Re: [bitcoin-dev] Two questions about segwit implementation

2019-05-27 Thread Johnson Lau via bitcoin-dev
Empty scriptSig doesn’t imply a segwit input: if the previous scriptPubKey is 
OP_1 (which does not allow witness), it could still be spent with an empty 
scriptSig.

Similarly, a scriptSig looking like a spend of P2SH-segwit doesn’t imply a segwit 
input: if the previous scriptPubKey is empty, it could be spent with a push of 
any non-zero value.
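As a toy illustration (not consensus code — a minimal evaluator supporting only the single opcode needed here), both cases can be sketched as:

```python
# Toy illustration (not consensus code): why neither case implies segwit.
# A script "succeeds" if the final stack top is a non-zero value.

def spendable(script_sig_pushes, script_pubkey_ops):
    """Evaluate pushes from scriptSig, then scriptPubKey ops, on one stack."""
    stack = list(script_sig_pushes)
    for op in script_pubkey_ops:
        if op == "OP_1":
            stack.append(b"\x01")
        # (real script has many more opcodes; OP_1 is all we need here)
    return bool(stack) and any(stack[-1])

# Case 1: scriptPubKey = OP_1, scriptSig empty -> spendable, no witness involved
assert spendable([], ["OP_1"])

# Case 2: scriptPubKey empty, scriptSig pushes any non-zero value -> spendable
assert spendable([b"\x42"], [])
# ...but a zero push fails:
assert not spendable([b"\x00"], [])
```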

> On 27 May 2019, at 1:09 AM, Aymeric Vitte  wrote:
> 
> I did not phrase correctly in fact, what I meant is: if the validator
> sees empty or witness script in scriptSig, then this is a segwit input,
> and doing this one by one the validator can associate the correct segwit
> data to the correct segwit input, so 00 does not look to be needed
> 
> Le 26/05/2019 à 18:28, Johnson Lau a écrit :
>> This is not how it works. While the transaction creator may know which 
>> inputs are segwit, the validators have no way to tell until they look up the 
>> UTXO set.
>> 
>> In a transaction, all information about an input the validators have is the 
>> 36-byte outpoint (txid + index). Just by looking at the outpoint, there is 
>> no way to tell whether it is segwit-enabled or not. So there needs to be a 
>> way to tell the validator that “the witness for this input is empty”, and it 
>> is the “00”.
>> 
>>> On 27 May 2019, at 12:18 AM, Aymeric Vitte  wrote:
>>> 
>>> ……. for the 00 number of witness
>>> data for non segwit inputs the one that is doing the transaction knows
>>> which inputs are segwit or not, then parsing the transaction you can
>>> associate the correct input to the correct witness data, without the
>>> need of 00, so I must be missing the use case
>> 


___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Two questions about segwit implementation

2019-05-27 Thread Johnson Lau via bitcoin-dev
This is not how it works. While the transaction creator may know which inputs 
are segwit, the validators have no way to tell until they look up the UTXO set.

In a transaction, all the information a validator has about an input is the 
36-byte outpoint (txid + index). Just by looking at the outpoint, there is no 
way to tell whether it is segwit-enabled or not. So there needs to be a way to 
tell the validator that “the witness for this input is empty”, and that is the 
“00”.
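A sketch of that 36-byte outpoint (standard serialization: the 32-byte txid in little-endian byte order, followed by a 4-byte little-endian output index):

```python
import struct

# Sketch: all a validator sees for an input, before any UTXO lookup,
# is the 36-byte outpoint -- 32-byte txid (little-endian) + 4-byte index.
def parse_outpoint(raw36: bytes):
    assert len(raw36) == 36
    txid = raw36[:32][::-1].hex()            # txid is serialized little-endian
    index = struct.unpack("<I", raw36[32:])[0]
    return txid, index

outpoint = bytes.fromhex("aa" * 32) + struct.pack("<I", 1)
txid, index = parse_outpoint(outpoint)
assert index == 1 and len(txid) == 64
# Nothing in these 36 bytes says whether the spent output is segwit-enabled.
```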

> On 27 May 2019, at 12:18 AM, Aymeric Vitte  wrote:
> 
> ……. for the 00 number of witness
> data for non segwit inputs the one that is doing the transaction knows
> which inputs are segwit or not, then parsing the transaction you can
> associate the correct input to the correct witness data, without the
> need of 00, so I must be missing the use case




Re: [bitcoin-dev] Two questions about segwit implementation

2019-05-27 Thread Johnson Lau via bitcoin-dev

> On 26 May 2019, at 7:56 AM, Aymeric Vitte via bitcoin-dev 
>  wrote:
> 
> I realized recently that my segwit implementation was not correct,
> basically some time ago, wrongly reading the specs (and misleaded by
> what follows), I thought that scriptsig would go into witness data as it
> was, but that's not the case, op_pushdata is replaced by varlen
> 

Witness is not script. There is no op_pushdata or any other opcodes.

Witness is a stack. For each input, the witness starts with a CCompactSize for 
the number of stack elements for this input. Each stack element in turn starts 
with a CCompactSize for the size of this element, followed by the actual data.


> Now reading correctly the specs, they seem to be not totally correct,
> then the first question is: why OP_0 is 00 in witness data and not 0100?
> Does this apply to other op_codes? This does not look logical at all
> 

A “00” element means the size of this element is zero. Since it’s zero-size, 
no data follows. This creates an empty element on the stack, which is 
effectively the same as OP_0 (again, witness is not script).

A “0100” element means the element size is one, and the data for this element 
is “00”. So it will leave a 1-byte element on the stack.
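A small sketch of this element encoding (covering only the single-byte CompactSize case, i.e. lengths below 0xfd):

```python
# Sketch of the element encoding described above: each stack element is a
# CompactSize length followed by the raw data. (Single-byte CompactSize only.)

def ser_element(data: bytes) -> bytes:
    assert len(data) < 0xfd  # single-byte CompactSize case only
    return bytes([len(data)]) + data

assert ser_element(b"").hex() == "00"        # empty element, like OP_0
assert ser_element(b"\x00").hex() == "0100"  # 1-byte element with value 0x00

# A full per-input witness prefixes the element count:
def ser_input_witness(elements) -> bytes:
    return bytes([len(elements)]) + b"".join(ser_element(e) for e in elements)
```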


> The second question is: why for non segwit inputs there is a 00 length
> in segwit data, what is the rational for that? It should just be nothing
> since you don't need this to reconciliate things

The “00” here means “this input has no witness stack element”. You need this 
even for non-segwit inputs, because there is no way to tell whether an input is 
segwit-enabled or not until you look up the UTXO, which may not always be 
available. Transaction serialization cannot rely on contextual information.

However, if no input has any stack element, the spec requires you to always use 
the non-segwit serialization.
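Putting the rules above together, a sketch of how the witness section would be built (single-byte CompactSize case only; the fallback to legacy serialization is represented by returning None):

```python
# Sketch: the witness section carries one stack per input, in input order;
# a lone 0x00 count means "no witness elements for this input". If every
# input's stack is empty, fall back to the legacy (non-segwit) serialization.

def witness_section(per_input_stacks):
    if all(len(stack) == 0 for stack in per_input_stacks):
        return None  # caller must use non-segwit serialization instead
    out = b""
    for stack in per_input_stacks:
        out += bytes([len(stack)])              # element count (< 0xfd case)
        for elem in stack:
            out += bytes([len(elem)]) + elem    # CompactSize length + data
    return out

# Input 0 is non-segwit (empty stack -> "00"), input 1 has one 1-byte element:
assert witness_section([[], [b"\xab"]]).hex() == "000101ab"
assert witness_section([[], []]) is None
```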

> 




Re: [bitcoin-dev] Safety of committing only to transaction outputs

2019-05-25 Thread Johnson Lau via bitcoin-dev


> On 25 May 2019, at 4:59 AM, Jeremy  wrote:
> 
> Hi Johnson,
> 
> As noted on the other thread, witness replay-ability can be helped by salting 
> the taproot key or the taproot leaf script at the last stage of a congestion 
> control tree.
> 

The salt will be published when it is first spent. Salting won’t help if the 
address is reused.

> I also think that chaperone signatures should be opt-in; there are cases 
> where we may not want them. OP_COSHV is compatible with an additional 
> checksig operation.
> 
> There are also other mechanisms that can improve the safety. Proposed below:
> 
> OP_CHECKINPUTSHASHVERIFY -- allow checking that the hash of the inputs is a 
> particular value. The top-level of a congestion control tree can check that 
> the inputs match the desired inputs for that spend, and default to requiring 
> N of N otherwise. This is replay proof! This is useful for other applications 
> too.

It is circularly dependent: the script has to commit to the txid, but the txid 
is a function of the script.


> 
> OP_CHECKFEEVERIFY -- allowing an explicit commitment to the exact amount of 
> fee limits replay to txns which were funded with the exact amount of the 
> prior. If there's a mismatch, an alternative branch can be used. This is a 
> generally useful mechanism, but means that transactions using it must have 
> all inputs/outputs set.
> 

This restricts replay to inputs with the same value, but it is still 
replayable, just like ANYPREVOUT committing to the input value.


> Best,
> 
> Jeremy
> --
> @JeremyRubin  
> 


Re: [bitcoin-dev] Congestion Control via OP_CHECKOUTPUTSHASHVERIFY proposal

2019-05-25 Thread Johnson Lau via bitcoin-dev
Functionally, COHV is a proper subset of ANYPREVOUT (NOINPUT). The only 
justification for doing both is better space efficiency when making covenants.

With eltoo as a clear use case of ANYPREVOUT, I’m not sure if we really want a 
very restricted opcode like COHV. But these are my comments, anyway:

1. The “one input” rule could be relaxed to a “first input” rule. This allows 
adding more inputs to pay fees, as an alternative to CPFP. In case the value is 
insufficient to pay the required outputs, it is also possible to rescue the 
UTXO by adding more inputs.

2. While there is no reason to use non-minimal pushes, neither is there a 
reason to require minimal pushes. Since minimal push has never been a consensus 
rule, COHV shouldn’t be a special case.

3. As I suggested in a different post 
(https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-May/016963.html), 
the argument for requiring a prevout binding signature may also be applicable 
to COHV.
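On point 2, here is a toy sketch of what a minimal-push check looks like (modeled on the BIP62-style policy rule; this is not Bitcoin Core’s actual implementation):

```python
# Toy minimal-push check (policy-style, per BIP62; not Core's code):
# data of length 1..75 should use a direct push opcode (0x01..0x4b),
# not OP_PUSHDATA1 (0x4c); a single byte 0x01..0x10 should use OP_1..OP_16.

OP_PUSHDATA1 = 0x4c

def is_minimal_push(opcode: int, data: bytes) -> bool:
    if len(data) == 1 and 1 <= data[0] <= 16:
        return False  # should have used OP_1..OP_16 instead
    if 1 <= len(data) <= 75:
        return opcode == len(data)  # direct push opcode equals the length
    return opcode >= OP_PUSHDATA1   # longer data needs an OP_PUSHDATA* form

assert is_minimal_push(3, b"\xaa\xbb\xcc")                 # minimal
assert not is_minimal_push(OP_PUSHDATA1, b"\xaa\xbb\xcc")  # non-minimal
```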

> On 21 May 2019, at 4:58 AM, Jeremy via bitcoin-dev 
>  wrote:
> 
> Hello bitcoin-devs,
> 
> Below is a link to a BIP Draft for a new opcode, OP_CHECKOUTPUTSHASHVERIFY. 
> This opcode enables an easy-to-use trustless congestion control techniques 
> via a rudimentary, limited form of covenant which does not bear the same 
> technical and social risks of prior covenant designs.
> 
> Congestion control allows Bitcoin users to confirm payments to many users in 
> a single transaction without creating the UTXO on-chain until a later time. 
> This therefore improves the throughput of confirmed payments, at the expense 
> of latency on spendability and increased average block space utilization. The 
> BIP covers this use case in detail, and a few other use cases lightly.
> 
> The BIP draft is here:
> https://github.com/JeremyRubin/bips/blob/op-checkoutputshashverify/bip-coshv.mediawiki
>  
> 
> 
> The BIP proposes to deploy the change simultaneously with Taproot as an 
> OPSUCCESS, but it could be deployed separately if needed.
> 
> An initial reference implementation of the consensus changes and  tests which 
> demonstrate how to use it for basic congestion control is available at 
> https://github.com/JeremyRubin/bitcoin/tree/congestion-control 
> .  The 
> changes are about 74 lines of code on top of sipa's Taproot reference 
> implementation.
> 
> Best regards,
> 
> Jeremy Rubin



Re: [bitcoin-dev] OP_DIFFICULTY to enable difficulty hedges (bets) without an oracle and 3rd party.

2019-05-24 Thread Johnson Lau via bitcoin-dev
A gamble like this, decentralised or not, is easy to manipulate, since the 
difficulty is determined entirely by the last block in a cycle.

> On 24 May 2019, at 1:42 AM, Tamas Blummer via bitcoin-dev 
>  wrote:
> 
> Difficulty change has profound impact on miner’s production thereby introduce 
> the biggest risk while considering an investment.
> Commodity markets offer futures and options to hedge risks on traditional 
> trading venues. Some might soon list difficulty futures.
> 
> I think we could do much better than them natively within Bitcoin.
> 
> A better solution could be a transaction that uses nLocktime denominated in 
> block height, such that it is valid after the difficulty adjusted block in 
> the future.
> A new OP_DIFFICULTY opcode would put onto stack the value of difficulty for 
> the block the transaction is included into. 
> The output script may then decide comparing that value with a strike which 
> key can spend it. 
> The input of the transaction would be a multi-sig escrow of those who entered 
> the bet. 
> The winner would broadcast. 
> 
> Once signed by both the transaction would not carry any counterparty risk and 
> would not need an oracle to settle according to the bet.
> 
> I plan to draft a BIP for this as I think this opcode would serve significant 
> economic interest of Bitcoin economy, and is compatible with Bitcoin’s aim 
> not to introduce 3rd party to do so.
> 
> Do you see a fault in this proposal or want to contribute?
> 
> Tamas Blummer 
> 




[bitcoin-dev] Safety of committing only to transaction outputs

2019-05-24 Thread Johnson Lau via bitcoin-dev
This is a meta-discussion for any approach that allows the witness to commit 
only to transaction outputs, but not inputs.

We can already do the following things with the existing bitcoin script system:
* commit to both inputs and outputs: SIGHASH_ALL or SIGHASH_SINGLE, with 
optional SIGHASH_ANYONECANPAY
* commit to only inputs but not outputs: SIGHASH_NONE with optional 
SIGHASH_ANYONECANPAY
* commit to neither inputs nor outputs: using no sigop; using a trivial 
private key; or using the SIGHASH_SINGLE bug in legacy script
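For reference, these three modes correspond to the standard sighash flag values; a small sketch of the flag arithmetic:

```python
# The commitment modes above map to the standard Bitcoin sighash flag bytes:
SIGHASH_ALL          = 0x01  # commits to all inputs and all outputs
SIGHASH_NONE         = 0x02  # commits to inputs only, no outputs
SIGHASH_SINGLE       = 0x03  # commits to the output at the input's index
SIGHASH_ANYONECANPAY = 0x80  # restrict input commitment to this input only

def commits_to_outputs(flag: int) -> bool:
    # mask off ANYONECANPAY and other high bits to get the base type
    return (flag & 0x1f) in (SIGHASH_ALL, SIGHASH_SINGLE)

# SIGHASH_NONE | ANYONECANPAY commits to neither other inputs nor any output:
assert not commits_to_outputs(SIGHASH_NONE | SIGHASH_ANYONECANPAY)
assert commits_to_outputs(SIGHASH_ALL)
```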

The last one is clearly unsafe, as any relay/mining node may redirect the 
payment to any output it chooses. The witness/scriptSig is also replayable, so 
any future payment to this script will likely be swept immediately.

SIGHASH_NONE with ANYONECANPAY also allows redirection of payment, but the 
signature is not replayable.

So it’s quite obvious that not committing to outputs is inherently insecure.

The existing system doesn’t allow committing only to outputs, and we now have 3 
active proposals for this function:

1. CAT and CHECKSIGFROMSTACK (CSFS): 
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-May/016946.html 

2. ANYPREVOUT (aka NOINPUT): 
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-May/016929.html 

3. CHECKOUTPUTSHASHVERIFY (COHV): 
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-May/016934.html 


With outputs committed, redirecting payment is not possible. On the other hand, 
not committing to any input means the witness is replayable without the consent 
of the address owner. Whether replayability is acceptable is subject to 
controversy, but the ANYPREVOUT proposal fixes this by requiring a chaperone 
signature that commits to the input. However, if the rationale for a chaperone 
signature stands, it should be applicable to all the proposals listed above.

A more generic approach is to always require a “safe" signature that commits to 
at least one input. However, this interacts poorly with the "unknown public key 
type” upgrade path described in bip-tapscript 
(https://github.com/sipa/bips/blob/bip-schnorr/bip-tapscript.mediawiki), since 
it’d be a hardfork to turn an “unknown type sig” into a “safe sig”. But we 
could still use a new “leaf version” every time we introduce new sighash types, 
so we could have a new definition for “safe sig”. I expect this would be a rare 
event, and it won’t consume more than a couple of leaf versions. By the way, 
customised sighash policies could be done with CAT/CSFS.


Re: [bitcoin-dev] Taproot proposal

2019-05-09 Thread Johnson Lau via bitcoin-dev


> 
>> 
>> Some way to sign an additional script (not committed to by the witness
>> program) seems like it could be a trivial addition.
> 
> It seems to me the annex can be used for this, by having it contain both the 
> script and the signature somehow concatenated.

This is not possible, since the whole annex is signed. It is possible if the 
signed “script” does not require further input, such as a per-input lock-time, 
relative lock-time, or a check-block-hash condition.


> 
> Regards,
> ZmnSCPxj




Re: [bitcoin-dev] [Lightning-dev] More thoughts on NOINPUT safety

2019-03-22 Thread Johnson Lau via bitcoin-dev


> On 22 Mar 2019, at 9:59 AM, ZmnSCPxj via bitcoin-dev 
>  wrote:
> 
> Good morning aj,
>> 
>> If you are committing to the script code, though, then each settlement
>> sig is already only usable with the corresponding update tx, so you
>> don't need to roll the keys. But you do need to make it so that the
>> update sig requires the CLTV; one way to do that is using codeseparator
>> to distinguish between the two cases.
>> 
>>> Also, I cannot understand `OP_CODESEPARATOR`, please no.
>> 
>> If codeseparator is too scary, you could probably also just always
>> require the locktime (ie for settlmenet txs as well as update txs), ie:
>> 
>> OP_CHECKLOCKTIMEVERIFY OP_DROP
>>  OP_CHECKDLSVERIFY  OP_CHECKDLS
>> 
>> and have update txs set their timelock; and settlement txs set a absolute
>> timelock, relative timelock via sequence, and commit to the script code.
>> 
>> (Note that both those approaches (with and without codesep) assume there's
>> some flag that allows you to commit to the scriptcode even though you're
>> not committing to your input tx (and possibly not committing to the
>> scriptpubkey). BIP118 doesn't have that flexibility, so the A_s_i and
>> B_s_i key rolling is necessary)
> 
> I think the issue I have here is the lack of `OP_CSV` in the settlement 
> branch.
> 
> Consider a channel with offchain transactions update-1, settlement-1, 
> update-2, and settlement-2.
> If update-1 is placed onchain, update-1 is also immediately spendable by 
> settlement-1.
> But settlement-1 cannot be spent by update-2 and thus the invalidation of 
> older state fails.
> 
> The `OP_CSV` in the settlement branch of the update transaction outputs 
> exists to allow later update transactions have higher priority over 
> settlement transactions.
> 
> To ensure that a settlement signature can only take the settlement branch, we 
> need a distinct public key for the branch, so at least `A_s` and `B_s` 
> without rolling them for each `i`, if we use `nLockTime` on the settlement 
> transactions and enforce it with `OP_CHECKLOCKTIMEVERIFY`.
> It might be possible to do this with `OP_CODESEPARATOR`, but we do need the 
> `OP_CSV` in the settlement branch.
> 
> Regards,
> ZmnSCPxj

OP_CSV (BIP112) is not needed. Only BIP68 relative-time is needed.

With this script:

 OP_CHECKLOCKTIMEVERIFY OP_DROP  OP_CHECKSIGVERIFY  
OP_CHECKSIG

For update purposes, A and B will co-sign the muSig with nLockTime = t, not 
committing to the scriptCode, and with no BIP68 lock time.

For settlement purposes, A and B will co-sign the muSig with nLockTime = t, 
committing to the scriptCode, and with an agreed BIP68 lock time.

Without committing to the scriptCode and BIP68 lock time, the update sig could 
bind to any previous update tx immediately.

OTOH, the settlement sig will only bind to a specific update tx (through the 
scriptCode), and only after the relative lock-time has passed.

The eltoo paper is wrong about using OP_CSV. That’s a common mistake even for 
experienced bitcoin developers. OP_CSV is needed only if one party could 
single-handedly decide the relative lock-time. However, that is not the case 
here, as it is a muSig.
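For context, a sketch of the BIP68 nSequence fields the settlement signature relies on (constants per BIP68):

```python
# Sketch of the BIP68 nSequence encoding (fields per BIP68): bit 31 disables
# the relative lock, bit 22 selects time-based (512-second units) vs
# height-based locks, and the low 16 bits hold the lock value.
SEQUENCE_LOCKTIME_DISABLE_FLAG = 1 << 31
SEQUENCE_LOCKTIME_TYPE_FLAG    = 1 << 22
SEQUENCE_LOCKTIME_MASK         = 0x0000ffff

def relative_lock(n_sequence: int):
    if n_sequence & SEQUENCE_LOCKTIME_DISABLE_FLAG:
        return None  # no relative lock-time enforced
    value = n_sequence & SEQUENCE_LOCKTIME_MASK
    unit = "x512s" if n_sequence & SEQUENCE_LOCKTIME_TYPE_FLAG else "blocks"
    return value, unit

assert relative_lock(144) == (144, "blocks")  # roughly one day in blocks
assert relative_lock(0xffffffff) is None      # relative lock disabled
```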

(With some risks of distracting the discussion, please note that even this 
script:  OP_CHECKLOCKTIMEVERIFY OP_DROP  OP_CHECKSIGVERIFY  
OP_CHECKSIG doesn’t need OP_CSV, despite not using muSig. It is because the 2 
sigs must use the same relative locktime, or the tx is invalid.)


Re: [bitcoin-dev] [Lightning-dev] More thoughts on NOINPUT safety

2019-03-21 Thread Johnson Lau via bitcoin-dev


> On 20 Mar 2019, at 4:07 PM, ZmnSCPxj via bitcoin-dev 
>  wrote:
> 
> Hi aj,
> 
> Re-reading again, I think perhaps I was massively confused by this:
> 
>> - alternatively, we could require every script to have a valid signature
>> that commits to the input. In that case, you could do eltoo with a
>> script like either:
>> 
>>  CHECKSIGVERIFY  CHECKSIG
>> or  CHECKSIGVERIFY  CHECKSIG
>> 
>> 
>> where A is Alice's key and B is Bob's key, P is muSig(A,B) and Q is
>> a key they both know the private key for. In the first case, Alice
>> would give Bob a NOINPUT sig for the tx, and when Bob wanted to publish
>> Bob would just do a SIGHASH_ALL sig with his own key. In the second,
>> Alice and Bob would share partial NOINPUT sigs of the tx with P, and
>> finish that when they wanted to publish.
> 
> Do you mean that *either* of the above two scripts is OK, *or* do you mean 
> they are alternatives within a single MAST or `OP_IF`?
> 

It means either.

If you use  CHECKSIGVERIFY  CHECKSIG style, A and B will exchange the 
NOINPUT sig, and they will add the required non-NOINPUT sig when needed.

If you use  CHECKSIGVERIFY  CHECKSIG, A and B will co-sign the 
muSig(A,B) with NOINPUT. They will also share the private key of Q, so they 
could produce a non-NOINPUT sig when needed.

The first style is slightly easier as it doesn’t need muSig. But with 3 or more 
parties, the second style is more efficient.

However, if you use a watchtower, you have to use the second style. That means 
you need to share the private key for Q with the watchtower, which also means 
the watchtower will have the ability to replay the NOINPUT muSig. But it is 
still strictly better than anyone-can-replay.



Re: [bitcoin-dev] Safer NOINPUT with output tagging

2019-03-05 Thread Johnson Lau via bitcoin-dev


> On 20 Feb 2019, at 4:24 AM, Luke Dashjr  wrote:
> 
> Even besides NOINPUT, such a wallet would simply never show a second payment 
> to the same address (or at least never show it as confirmed, until 
> successfully spent).

This is totally unrelated to NOINPUT. You can already make a wallet like this 
today, and tell your payers not to reuse addresses.


> 
> At least if tx versions are used, it isn't possible to indicate this 
> requirement in current Bitcoin L1 addresses. scriptPubKey might not be 
> impossible to encode, but it isn't really clear what the purpose of doing so 
> is.

It sounds like you actually want to tag such outputs in the scriptPubKey, so 
you could encode this requirement in the address?

If we allow NOINPUT unconditionally (i.e. all v1 addresses are spendable with 
NOINPUT), such special requirements could only be indicated through a separate 
proposal.

> 
> If people don't want to use NOINPUT, they should just not use it. Trying to 
> implement a nanny in the protocol is inappropriate and limits what developers 
> can do who actually want the features.
> 
> Luke
> 
> 
> On Tuesday 19 February 2019 19:22:07 Johnson Lau wrote:
>> This only depends on the contract between the payer and payee. If the
>> contract says address reuse is unacceptable, it’s unacceptable. It has
>> nothing to do with how the payee spends the coin. We can’t ban address
>> reuse at protocol level (unless we never prune the chain), so address reuse
>> could only be prevented at social level.
>> 
>> Using NOINPUT is also a very weak excuse: NOINPUT always commit to the
>> value. If the payer reused an address but for different amount, the payee
>> can’t claim the coin is lost due to previous NOINPUT use. A much stronger
>> way is to publish the key after a coin is well confirmed.
>> 
>>> On 20 Feb 2019, at 3:04 AM, Luke Dashjr  wrote:
>>> 
>>> On Thursday 13 December 2018 12:32:44 Johnson Lau via bitcoin-dev wrote:
>>>> While this seems fully compatible with eltoo, is there any other
>>>> proposals require NOINPUT, and is adversely affected by either way of
>>>> tagging?
>>> 
>>> Yes, this seems to break the situation where a wallet wants to use
>>> NOINPUT for everything, including normal L1 payments. For example, in the
>>> scenario where address reuse will be rejected/ignored by the recipient
>>> unconditionally, and the payee is considered to have burned their
>>> bitcoins by attempting it.
>>> 
>>> Luke
> 




Re: [bitcoin-dev] Safer NOINPUT with output tagging

2019-03-05 Thread Johnson Lau via bitcoin-dev
This only depends on the contract between the payer and payee. If the contract 
says address reuse is unacceptable, it’s unacceptable. It has nothing to do 
with how the payee spends the coin. We can’t ban address reuse at protocol 
level (unless we never prune the chain), so address reuse could only be 
prevented at social level.

Using NOINPUT is also a very weak excuse: NOINPUT always commits to the value. 
If the payer reused an address but for a different amount, the payee can’t 
claim the coin is lost due to previous NOINPUT use. A much stronger way is to 
publish the key after a coin is well confirmed.

> On 20 Feb 2019, at 3:04 AM, Luke Dashjr  wrote:
> 
> On Thursday 13 December 2018 12:32:44 Johnson Lau via bitcoin-dev wrote:
>> While this seems fully compatible with eltoo, is there any other proposals
>> require NOINPUT, and is adversely affected by either way of tagging?
> 
> Yes, this seems to break the situation where a wallet wants to use NOINPUT 
> for 
> everything, including normal L1 payments. For example, in the scenario where 
> address reuse will be rejected/ignored by the recipient unconditionally, and 
> the payee is considered to have burned their bitcoins by attempting it.
> 
> Luke




Re: [bitcoin-dev] Safer NOINPUT with output tagging

2019-02-09 Thread Johnson Lau via bitcoin-dev
INPUT, always try to make all outputs tagged to be 
>> NOINPUT-spendable. (NOTE: you can still spend tagged outputs with normal 
>> signatures, so this won’t permanently taint your coins as NOINPUT-spendable) 
>> It makes sense because the use of NOINPUT signature strongly suggests that 
>> you don’t know the txid of the parent tx, so you may most likely want your 
>> outputs to be NOINPUT-spendable as well. I thought of making this a policy 
>> or consensus rule, but may be it’s just overkill.
>> 
>> 
>> 
>>> On 9 Feb 2019, at 3:01 AM, Jonas Nick  wrote:
>>> 
>>> Output tagging may result in reduced fungibility in multiparty eltoo 
>>> channels.
>>> If one party is unresponsive, the remaining participants want to remove
>>> the party from the channel without downtime. This is possible by creating
>>> settlement transactions which pay off the unresponsive party and fund a new
>>> channel with the remaining participants.
>>> 
>>> When the party becomes unresponsive, the channel is closed by broadcasting 
>>> the
>>> update transaction as usual. As soon as that happens the remaining
>>> participants can start to update their new channel. Their update signatures
>>> must use SIGHASH_NOINPUT. This is because in eltoo the settlement txid is 
>>> not
>>> final (because update tx is not confirmed and may have to rebind to another
>>> output). Therefore, the funding output of the new channel must be NOINPUT
>>> tagged. Assuming the remaining parties later settle cooperatively, this loss
>>> of fungibility would not have happened without output tagging.
>>> 
>>> funding output  update output
>>> settlement outputs  update output
>>> [ A & B & C ] -> ... -> [ (A & B & C & state CLTV) | (As & Bs & Cs) ] -> [ 
>>> NOINPUT tagged: (A' & B'), -> ...
>>>  C' 
>>> ]
>>> If the expectation is that the unresponsive party returns, fungibility is
>>> not reduced due to output tagging because the above scheme can be used
>>> off-chain until the original channel can be continued.
>>> 
>>> Side note: I was not able to come up with an similar, eltoo-like protocol 
>>> that works
>>> if you can't predict in advance who will become absent.
>>> 
>>> On 12/13/18 12:32 PM, Johnson Lau via bitcoin-dev wrote:
>>>> NOINPUT is very powerful, but the tradeoff is the risks of signature 
>>>> replay. While the key holders are expected not to reuse key pair, little 
>>>> could be done to stop payers to reuse an address. Unfortunately, key-pair 
>>>> reuse has been a social and technical norm since the creation of Bitcoin 
>>>> (the first tx made in block 170 reused the previous public key). I don’t 
>>>> see any hope to change this norm any time soon, if possible at all.
>>>> 
>>>> As the people who are designing the layer-1 protocol, we could always 
>>>> blame the payer and/or payee for their stupidity, just like those people 
>>>> laughed at victims of Ethereum dumb contracts (DAO, Parity multisig, etc). 
>>>> The existing bitcoin script language is so restrictive. It disallows many 
>>>> useful smart contracts, but at the same time prevented many dumb 
>>>> contracts. After all, “smart” and “dumb” are non-technical judgement. The 
>>>> DAO contract has always been faithfully executed. It’s dumb only for those 
>>>> invested in the project. For me, it was just a comedy show.
>>>> 
>>>> So NOINPUT brings us more smart contract capacity, and at the same time we 
>>>> are one step closer to dumb contracts. The target is to find a design that 
>>>> exactly enables the smart contracts we want, while minimising the risks of 
>>>> misuse.
>>>> 
>>>> The risk I am trying to mitigate is a payer mistakenly pay to a previous 
>>>> address with the exactly same amount, and the previous UTXO has been spent 
>>>> using NOINPUT. Accidental double payment is not uncommon. Even if the 
>>>> payee was honest and willing to refund, the money might have been spent 
>>>> with a replayed NOINPUT signature. Once people lost a significant amount 
>>>> of money this way, payers (mostly exchanges) may refuse to send money to 
>>>> anything other than P2PKH, native-P2WPKH and native-P2WSH (as the

Re: [bitcoin-dev] Safer NOINPUT with output tagging

2019-02-09 Thread Johnson Lau via bitcoin-dev
>> Side note: I was not able to come up with an similar, eltoo-like protocol 
>> that works
>> if you can't predict in advance who will become absent.
>> 
> An eltoo-like protocol that works (without going on-chain) if you can't 
> predict in advance who will become absent would be a childchain. If the 
> off-chain protocol can continue updating in the abscence of other parties, it 
> means that other parties' signatures must not be required when they are not 
> involved in the off-chain state update. If other parties' signatures must not 
> be required, there must be a way of having a common verifiable 'last state' 
> to prevent a party to simultaneously 'fork' the state with two different 
> parties, and double-spend. A solution for this is a childchain for Bitcoin. 
> An example of this is what is known as a 'Broken Factory' attack [1] 
> (https://bitcoin.stackexchange.com/questions/77434/how-does-channel-factory-act/81005#81005)
> 
>> If the expectation is that the unresponsive party returns, fungibility is
>> not reduced due to output tagging because the above scheme can be used
>> off-chain until the original channel can be continued.
> 
> I believe that in many cases other parties won't be able to continue until 
> the unresponsive parties go back online. That might be true in particular 
> scenarios, but generally speaking, the party might have gone unresponsive 
> during a factory-level update (i.e. off-chain closing and opening of 
> channels), while some parties might have given out their signature for the 
> update without receiving a fully signed transaction. In this case they do not 
> even know which channel they have open (the one of the old state that they 
> have fully signed, or the one for the new state that they have given out 
> their signature for). This is known as a 'Stale Factory', and can be 
> exploited by an adversary in a 'Stale Factory' attack [1]. Even if they knew 
> which state they are in (i.e. the party went unresponsive but not during a 
> factory-level update), some of them might have run out of funds in some of 
> their channels of the factory, and might want to update, while they will not 
> be willing to wait for a party to go back online (something for which they 
> also have zero guarantees of).
> 
> An eltoo-like protocol that works (allowing going on-chain) even if you can't 
> predict in advance who will become absent: this is precisely why 'Transaction 
> Fragments' have been suggested. They allow an eltoo-like protocol even when 
> one cannot predict in advance who will become absent, or malicious (by 
> publishing invalid states), because the non-absent parties can unite their 
> fragments and create a valid spendable factory-level transaction that 
> effectively kicks out the malicious parties, while leaving the rest of the 
> factory as it was. To the best of my understanding, the original eltoo 
> proposal also allows this, though.
> 
> Best,
> 
> Alejandro.
> 
> [1]: Scalable Lightning Factories for Bitcoin, 
> https://eprint.iacr.org/2018/918.pdf
> 
> 
> On 08/02/2019 20:01, Jonas Nick via bitcoin-dev wrote:
>> Output tagging may result in reduced fungibility in multiparty eltoo 
>> channels.
>> If one party is unresponsive, the remaining participants want to remove
>> the party from the channel without downtime. This is possible by creating
>> settlement transactions which pay off the unresponsive party and fund a new
>> channel with the remaining participants.
>> 
>> When the party becomes unresponsive, the channel is closed by broadcasting 
>> the
>> update transaction as usual. As soon as that happens the remaining
>> participants can start to update their new channel. Their update signatures
>> must use SIGHASH_NOINPUT. This is because in eltoo the settlement txid is not
>> final (because update tx is not confirmed and may have to rebind to another
>> output). Therefore, the funding output of the new channel must be NOINPUT
>> tagged. Assuming the remaining parties later settle cooperatively, this loss
>> of fungibility would not have happened without output tagging.
>> 
>> funding output          update output               settlement outputs      update output
>> [ A & B & C ] -> ... -> [ (A & B & C & state CLTV) | (As & Bs & Cs) ]
>>                      -> [ NOINPUT tagged: (A' & B'), C' ] -> ...
>> If the expectation is that the unresponsive party returns, fungibility is
>> not reduced due to output tagging because the above scheme can be used
>> off-chain until the original channel can

Re: [bitcoin-dev] Safer NOINPUT with output tagging

2019-02-09 Thread Johnson Lau via bitcoin-dev
This is really interesting. If I get it correctly, the fungibility hit 
could be avoided just by making one more signature, without affecting 
blockchain space usage.

Just some terminology first. In a 3-party channel, the “main channel” is the one 
that requires all parties to update, and a “branch channel” requires only 2 parties to 
update.

By what you describe, I think the most realistic scenario is “C is going to 
offline soon, and may or may not return. So the group wants to keep the main 
channel open, and create a branch channel for A and B, during the absence of 
C”. I guess this is what you mean by being able to "predict in advance who will 
become absent”

I call this process “semi-cooperative channel closing” (SCCC). During an 
SCCC, the settlement tx will have 2 outputs: one as (A & B), one as (C). 
Therefore, a branch channel could be opened with the (A & B) output. The 
channel opening must use NOINPUT signature, since we don’t know the txid of the 
settlement tx. With the output tagging requirement, (A & B) must be tagged, and 
lead to the fungibility loss you described.

However, it is possible to make 2 settlement txs during SCCC. Outputs of the 
settlement tx X are tagged(A) and C. Outputs of the settlement tx Y are 
untagged(A) and C. Both X and Y are BIP68 relative-time-locked, but Y has a 
longer time lock.

The branch channel is opened on top of the tagged output of tx X. If A and B 
want to close the channel without C, they need to publish the last update tx of 
the main channel. Once the update tx is confirmed, its txid becomes permanent, 
so are the txids of X and Y. If A and B decide to close the channel 
cooperatively, they could do it on top of the untagged output of tx Y, without 
using NOINPUT. There won’t be any fungibility loss. Other people will only see 
the uncooperative closing of the main channel, and couldn’t even tell the 
number of parties in the main channel. Unfortunately, the unusual long lock 
time of Y might still tell something.

If anything goes wrong, A or B could publish X before the lock time of Y, and 
settle it through the usual eltoo style. Since this is an uncooperative closing 
anyway, the extra fungibility loss due to tagging is next to nothing. However, 
it may suggest that the main channel was a multi-party one.
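The timing relationship between the two settlement txs X and Y can be sketched with a toy height check (the delay values and helper below are illustrative assumptions, not the actual BIP68 nSequence encoding):

```python
# Hypothetical sketch of the two-settlement-tx timing in the SCCC scheme above.
# Delay values and the helper name are illustrative assumptions.

def relative_lock_ok(parent_confirm_height, delay_blocks, current_height):
    """BIP68-style check: a child tx with a relative locktime of
    `delay_blocks` can confirm once its parent has that many confirmations."""
    return current_height >= parent_confirm_height + delay_blocks

DELAY_X = 144       # assumed: settlement tx X, shorter relative locktime (~1 day)
DELAY_Y = 1008      # assumed: settlement tx Y, longer relative locktime (~1 week)

update_height = 600000   # height at which the main channel's update tx confirms

# Shortly after the update confirms, even the uncooperative path (X) must wait
# out its own delay, and the cooperative path (Y) is still far from valid.
assert not relative_lock_ok(update_height, DELAY_X, 600100)
assert relative_lock_ok(update_height, DELAY_X, 600200)      # X now broadcastable
assert not relative_lock_ok(update_height, DELAY_Y, 600200)  # Y still locked

# If nobody publishes X, the cooperative, untagged settlement Y becomes valid later.
assert relative_lock_ok(update_height, DELAY_Y, 601100)
```

Since both txids become fixed once the update tx confirms, A and B simply choose which settlement path to use by waiting (or not waiting) for Y's longer lock to expire.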

For C, the last update tx of the main channel and the settlement tx Y are the 
only things he needs to get the money back. C has to sign tx X, but he 
shouldn’t get the complete tx X. Otherwise, C might have an incentive to 
publish X in order to get the money back earlier, at the cost of fungibility 
loss of the branch channel.

To minimise the fungibility loss, we’d better make it a social norm: if you 
sign your tx with NOINPUT, always try to make all outputs tagged to be 
NOINPUT-spendable. (NOTE: you can still spend tagged outputs with normal 
signatures, so this won’t permanently taint your coins as NOINPUT-spendable) It 
makes sense because the use of NOINPUT signature strongly suggests that you 
don’t know the txid of the parent tx, so you may most likely want your outputs 
to be NOINPUT-spendable as well. I thought of making this a policy or consensus 
rule, but maybe it’s just overkill.
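The proposed social norm could be checked mechanically. Below is a toy validator; the transaction structure and the tag flag are assumptions for this sketch, not an actual serialization:

```python
# A toy illustration of the proposed norm: if any input of a transaction is
# signed with NOINPUT, every output should be tagged NOINPUT-spendable.
# The dict layout and field names here are assumptions, not real tx format.

def follows_noinput_norm(tx):
    uses_noinput = any(inp["sighash"] == "NOINPUT" for inp in tx["inputs"])
    if not uses_noinput:
        return True   # the norm only applies to NOINPUT spends
    return all(out["noinput_tagged"] for out in tx["outputs"])

good = {"inputs": [{"sighash": "NOINPUT"}],
        "outputs": [{"noinput_tagged": True}, {"noinput_tagged": True}]}
bad  = {"inputs": [{"sighash": "NOINPUT"}],
        "outputs": [{"noinput_tagged": True}, {"noinput_tagged": False}]}

assert follows_noinput_norm(good)
assert not follows_noinput_norm(bad)
```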



> On 9 Feb 2019, at 3:01 AM, Jonas Nick  wrote:
> 
> Output tagging may result in reduced fungibility in multiparty eltoo channels.
> If one party is unresponsive, the remaining participants want to remove
> the party from the channel without downtime. This is possible by creating
> settlement transactions which pay off the unresponsive party and fund a new
> channel with the remaining participants.
> 
> When the party becomes unresponsive, the channel is closed by broadcasting the
> update transaction as usual. As soon as that happens the remaining
> participants can start to update their new channel. Their update signatures
> must use SIGHASH_NOINPUT. This is because in eltoo the settlement txid is not
> final (because update tx is not confirmed and may have to rebind to another
> output). Therefore, the funding output of the new channel must be NOINPUT
> tagged. Assuming the remaining parties later settle cooperatively, this loss
> of fungibility would not have happened without output tagging.
> 
> funding output          update output               settlement outputs      update output
> [ A & B & C ] -> ... -> [ (A & B & C & state CLTV) | (As & Bs & Cs) ]
>                      -> [ NOINPUT tagged: (A' & B'), C' ] -> ...
> If the expectation is that the unresponsive party returns, fungibility is
> not reduced due to output tagging because the above scheme can be used
> off-chain until the original channel can be continued.
> 
> Side note: I was not able to come up with a similar, eltoo-like protocol

Re: [bitcoin-dev] Safer sighashes and more granular SIGHASH_NOINPUT

2018-12-24 Thread Johnson Lau via bitcoin-dev
I find another proposed use of CODESEPARATOR here: 
https://lists.linuxfoundation.org/pipermail/lightning-dev/2016-March/000455.html
 


 OP_CHECKSIG
OP_IF

OP_ELSE
 OP_CSV OP_DROP
OP_CODESEPARATOR 
OP_ENDIF
OP_CHECKSIG
It is actually 2 scripts:

S1:  OP_CHECKSIGVERIFY  OP_CHECKSIG
S2:  OP_CSV OP_DROP  OP_CHECKSIG

Under taproot, we could make Q = P + H(P||S2)G, where P = MuSig(KeyA, KeyB)

S1 becomes a direct spending with Q, and there is no need to use OP_IF or 
CODESEPARATOR in S2 at all.
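The tweak Q = P + H(P||S2)G can be sketched with textbook secp256k1 arithmetic. This is a rough illustration only: the point serialization and hash choice below are simplifying assumptions (the actual taproot proposal uses tagged hashes), and the code is neither constant-time nor production-grade.

```python
import hashlib

# Minimal sketch of the taproot-style commitment Q = P + H(P||S2)*G mentioned
# above, using textbook secp256k1 arithmetic. Serialization and hash choice
# are simplifying assumptions, not the actual taproot specification.

p = 2**256 - 2**32 - 977                       # field prime
n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def add(P, Q):
    # group addition; None represents the point at infinity
    if P is None: return Q
    if Q is None: return P
    if P[0] == Q[0] and (P[1] + Q[1]) % p == 0: return None
    if P == Q:
        l = (3 * P[0] * P[0]) * pow(2 * P[1], -1, p) % p
    else:
        l = (Q[1] - P[1]) * pow(Q[0] - P[0], -1, p) % p
    x = (l * l - P[0] - Q[0]) % p
    return (x, (l * (P[0] - x) - P[1]) % p)

def mul(k, P):
    # double-and-add scalar multiplication
    R = None
    while k:
        if k & 1: R = add(R, P)
        P = add(P, P); k >>= 1
    return R

# P would be MuSig(KeyA, KeyB); an arbitrary scalar stands in for it here.
P_point = mul(12345, G)
S2 = b"<relative-timelock script S2>"          # placeholder for the branch script

def H(point, script):
    # commitment hash H(P||S2), reduced mod the group order
    ser = point[0].to_bytes(32, "big") + point[1].to_bytes(32, "big")
    return int.from_bytes(hashlib.sha256(ser + script).digest(), "big") % n

t = H(P_point, S2)
Q = add(P_point, mul(t, G))                    # Q = P + H(P||S2)*G

# Spending via Q directly is the S1 path: no OP_IF or CODESEPARATOR needed,
# and S2 stays hidden unless its branch is actually used.
assert Q == mul((12345 + t) % n, G)            # key-path consistency check
```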


==

If it is only to force R reuse, there is no need to use CODESEPARATOR:

Input:  Script: 2DUP EQUAL NOT VERIFY 2 PICK SWAP CAT  DUP 
TOALTSTACK CHECKSIGVERIFY CAT FROMALTSTACK CHECKSIG

But using CODESEPARATOR will save 3 bytes
Input:   Script:  OVER SWAP CAT  DUP TOALTSTACK 
CHECKSIGVERIFY CODESEPARATOR SWAP CAT FROMALTSTACK CHECKSIG

However, a much better way would be:

Input:  Script:  SWAP CAT  CHECKSIG

The discrete log of R could be a shared secret between A and B. If the purpose 
is to publish the private key to the whole world, R = G could be used.
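The key-leak effect that forced R reuse achieves can be shown with plain signing arithmetic mod the secp256k1 group order. The sketch below uses textbook ECDSA as the illustration (the thread pre-dates Schnorr); the toy r value is an assumption, since a real r would be the x-coordinate of k*G:

```python
# Why forced nonce (R) reuse leaks the private key: with the same k and r in
# two signatures over different messages, anyone can solve for k and then d.
# Toy values throughout; r is a stand-in for x(k*G).

n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def ecdsa_sign(d, k, m, r):
    # s = k^-1 * (m + r*d) mod n
    return pow(k, -1, n) * (m + r * d) % n

d = 0xC0FFEE                  # private key that will be leaked
k = 0x1234567890ABCDEF        # the reused nonce
r = 0xABCDEF                  # same r in both signatures (forced reuse)

m1, m2 = 111, 222             # two different message hashes
s1 = ecdsa_sign(d, k, m1, r)
s2 = ecdsa_sign(d, k, m2, r)

# From (r, s1, m1) and (r, s2, m2) alone:
k_rec = (m1 - m2) * pow(s1 - s2, -1, n) % n   # s1-s2 = k^-1*(m1-m2)
d_rec = (s1 * k_rec - m1) * pow(r, -1, n) % n # from s1 = k^-1*(m1 + r*d)

assert k_rec == k and d_rec == d
```

With R = G (discrete log 1) the same algebra makes the leak public to the whole world, as noted above.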

> On 24 Dec 2018, at 8:01 PM, ZmnSCPxj  wrote:
> 
> Good morning,
> 
>> Could anyone propose a better use case of CODESEPARATOR?
> 
> Long ago, aj sent an email on Lightning-dev about use of CODESEPARATOR to 
> impose Scriptless Script even without Schnorr. It involved 3 signatures with 
> different CODESEPARATOR places, and forced R reuse so that the signatures to 
> claim the funds revealed the privkey.
> 
> The script shown had all CODESEPARATOR in a single branch.
> 
> I cannot claim to understand the script, and am having difficulty digging 
> through the mailinglist
> 
> Regards,
> ZmnSCPxj

___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Safer sighashes and more granular SIGHASH_NOINPUT

2018-12-23 Thread Johnson Lau via bitcoin-dev


> On 23 Dec 2018, at 12:26 PM, Anthony Towns  wrote:
> 
> On Sat, Dec 22, 2018 at 02:54:42AM +0800, Johnson Lau wrote:
>> The question I would like to ask is: is OP_CODESEPARATOR useful under 
>> taproot? Generally speaking, CODESEPARATOR is useful only with conditional 
>> opcodes (OP_IF etc), and conditional opcodes are mostly replaced by 
>> merklized scripts. I am not sure how much usability is left with 
>> CODESEPARATOR
> 
> If you don't have conditionals, then I think committing to the (masked)
> script gives you everything you could do with codeseparator.

I don’t think CODESEPARATOR is useful without conditionals. By useful I mean 
making a script more compact

> 
> If you don't commit to the (masked) script, don't have conditionals,
> and don't have codeseparator, then I don't think you can make a signature
> distinguish which alternative script it's intending to sign; but you can
> just give each alternative script in the MAST a slight variation of the
> key and that seems good enough.

You can and should always use a different key in each branch. If this best 
practice is always followed, committing to the masked script is not necessary

> 
> OTOH, I think for (roughly) the example you gave:
> 
>  DEPTH 3 EQUAL
>  IF  CHECKSIGVERIFY HASH160  EQUALVERIFY CODESEP
>  ELSE  CLTV DROP
>  ENDIF
>   CHECKSIG
> 
> then compared to the taproot equivalent:
> 
>  P = muSig(Alice,Bob)
>  S1 =  CHECKSIGVERIFY  CHECKSIGVERIFY HASH160  EQUAL
>  S2 =  CHECKSIGVERIFY  CLTV
> 
> the IF+CODESEP approach is actually cheaper (lighter weight) if you're
> mostly (>2/3rds of the time) taking the S1 branch. This is because the
> "DEPTH 3 EQUAL IF/ELSE/ENDIF CODESEP  CLTV DROP" overhead is less
> than the 32B overhead to choose a merkle branch).
> 
> (That said, I'm not sure what Alice's signature in the S1 branch actually
> achieves in that script; and without that in S1, the taproot approach is
> cheaper all the time. Scriptless scripts would be cheaper still)
> 
>> If no one needs CODESEPARATOR, we might just disable it, and makes the 
>> validation code a bit simpler
> 
> Since it only affects the behaviour of the checkdls (checksig) operators,
> even if it was disabled, it could be re-enabled fairly easily in a new
> script subversion if needed (ie, it could be re-added when upgrading
> witness version 1 from script version 0 to 1).
> 
> Cheers,
> aj
> 
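The break-even reasoning in the quoted comparison can be put in rough numbers. All byte counts below are assumed placeholders to illustrate the shape of the trade-off, not actual script weights:

```python
# Rough sketch of the single-script vs taproot cost comparison above.
# GLUE/S1/S2 sizes are assumptions; only the break-even logic matters.

GLUE   = 10   # assumed bytes of "DEPTH 3 EQUAL IF/ELSE/ENDIF CODESEP ... DROP" glue
S1     = 60   # assumed size of the S1 branch body
S2     = 8    # assumed size of the S2 branch body (a small CLTV clause)
MERKLE = 32   # one extra merkle-path hash per taproot script-path spend

def single_script_cost():
    # the whole combined script is revealed on every spend
    return GLUE + S1 + S2

def taproot_cost(p_s1):
    # each taproot leaf reveals only its own branch plus the merkle hash
    return p_s1 * (S1 + MERKLE) + (1 - p_s1) * (S2 + MERKLE)

# With a tiny S2-side overhead (< 32 bytes), taking S1 often enough makes the
# single combined script cheaper than paying a 32-byte branch every time.
assert single_script_cost() < taproot_cost(0.9)
assert single_script_cost() > taproot_cost(0.3)
```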

Yes, I don’t think it needs Alice’s signature in S1 at all. So the original 
example doesn’t even need CODESEPARATOR. 

Could anyone propose a better use case of CODESEPARATOR?




Re: [bitcoin-dev] Safer NOINPUT with output tagging

2018-12-23 Thread Johnson Lau via bitcoin-dev


> On 22 Dec 2018, at 10:25 PM, ZmnSCPxj  wrote:
> 
> Good morning Johnson,
> 
>> Generally speaking, I think walletless protocol is needed only when you want 
>> to rely a third party to open a offchain smart contract. It could be 
>> coinswap, eltoo, or anything similar.
> 
> I think a third party would be pointless in general, but then I am strongly 
> against custodiality.
> 
> The idea is that you have some kind of hardware wallet or similar "somewhat 
> cold" storage *that you control yourself*, and create channels for your hot 
> offchain Lightning wallet, without adding more transactions from your 
> somewhat-cold storage to your hot offchain Lightning wallet on the blockchain.
> 
> Then you could feed a set of addresses to the hot offchain wallet (addresses 
> your somewhat-cold storage controls) so that when channels are closed, the 
> funds go to your somewhat-cold storage.
> 
> I also doubt that any custodial service would want to mess around with 
> deducting funds from what the user input as the desired payment.  I have not 
> seen a custodial service that does so (this is not a scientific study; I 
> rarely use custodial services); custodial services will deduct more from your 
> balance than what you send, but will not modify what you send, and will 
> prevent you from sending more than your balance minus the fees they charge 
> for sending onchain.
> 
> Even today, custodial services deducting from your sent value (rather than 
> the balance remaining after you send) would be problematic when interacting 
> with merchants (or their payment processors) accepting onchain payments; the 
> merchant would refuse to service a lower value than what it charges and it 
> may be very technically difficult to recover such funds from the merchant.
> I expect such a custodial service would quickly lose users, but the world 
> surprises me often.
> 
> Regards,
> ZmnSCPxj


If the users are expected to manually operate a hardware wallet to fund the 
channel, they might do stupid things like using 2 wallets to make 2 txs, 
thinking that they could combine the values this way; or “refilling” the 
offchain wallet with the address, as you suggested. While I appreciate the goal 
to separate the coin-selecting wallet with the offchain wallet, I am not sure 
if we should rely on users to do critical steps like entering the right value 
or not reusing the address. Especially, the setup address should be hidden from 
user’s view, so only a very few “intelligent advanced users” could try to 
refill the channel.

If we don’t rely on the user as the bridge between the hardware wallet and the 
offchain wallet, we need a communication protocol between them. With such 
protocol, there is no need to spend the setup TXO with NOINPUT.


Re: [bitcoin-dev] Safer sighashes and more granular SIGHASH_NOINPUT

2018-12-21 Thread Johnson Lau via bitcoin-dev


> On 21 Dec 2018, at 7:17 AM, Rusty Russell  wrote:
> 
> Johnson Lau  writes:
> 
>>> But I don't see how OP_CODESEPARATOR changes anything here, wrt NOINPUT?
>>> Remember, anyone can create an output which can be spent by any NOINPUT,
>>> whether we go for OP_MASK or simply not commiting to the input script.
>>> 
>> 
>> Let me elaborate more. Currently, scriptCode is truncated at the last 
>> executed CODESEPARATOR. If we have a very big script with many 
>> CODESEPARATORs and CHECKSIGs, there will be a lot of hashing to do.
>> 
>> To fix this problem, it is proposed that the new sighash will always commit 
>> to the same H(script), instead of the truncated scriptCode. So we only need 
>> to do the H(script) once, even if the script is very big
> 
> Yes, I read this as proposed; it is clever.  Not sure we'd be
> introducing it if OP_CODESEPARATOR didn't already exist, but at least
> it's a simplification.
> 
>> In the case of NOINPUT with MASKEDSCRIPT, it will commit to the 
>> H(masked_script) instead of H(script).
>> 
>> To make CODESEPARATOR work as before, the sighash will also commit to the 
>> position of the last executed CODESEPARATOR. So the semantics doesn’t 
>> change. For scripts without CODESEPARATOR, the committed value is a constant.
>> 
>> IF NOINPUT does not commit to H(masked_script), technically it could still 
>> commit to the position of the last executed CODESEPARATOR. But since the 
>> wallet is not aware of the actual content of the script, it has to guess the 
>> meaning of such committed positions, like “with the HD key path m/x/y/z, I 
>> assume the script template is blah blah blah because I never use this path 
>> for another script template, and the meaning of signing the 3rd 
>> CODESEPARATOR is blah blah blah”. It still works if the assumptions hold, 
>> but sounds quite unreliable to me.
> 
> My question is more fundamental.  If NOINPUT doesn't commit to the input
> at all, no script, no code separator, nothing.  I'm struggling to
> understand your original comment was "without signing the script or
> masked script, OP_CODESEPARATOR becomes unusable or insecure with
> NOINPUT."
> 
> I mean, non-useful, sure.  Its purpose is to alter signature behavior,
> and from the script POV there's no signature with this form of NOINPUT.
> But other than the already-established "I reused keys for multiple
> outputs" oops, I don't see any new dangers?
> 
> Thanks,
> Rusty.

The question I would like to ask is: is OP_CODESEPARATOR useful under taproot? 
Generally speaking, CODESEPARATOR is useful only with conditional opcodes 
(OP_IF etc), and conditional opcodes are mostly replaced by merklized scripts. 
I am not sure how much usability is left with CODESEPARATOR

If no one needs CODESEPARATOR, we might just disable it, and makes the 
validation code a bit simpler

If CODESEPARATOR is useful, then we should find a way to make it works with 
NOINPUT. With H(masked_script) committed, the meaning of the CODESEPARATOR 
position is very clear. Without H(masked_script), the meaning of the position 
totally relies on the assumption that “this public key is only used in this 
script template”.

Ignore CODESEPARATOR and more generally, I agree with you that script masking 
does not help in the case of address (scriptPubKey) reuse, which is the 
commonest type of reuse. However, it prevents replayability when the same 
public key is reused in different scripts


Re: [bitcoin-dev] Safer NOINPUT with output tagging

2018-12-21 Thread Johnson Lau via bitcoin-dev


> On 21 Dec 2018, at 7:15 PM, Christian Decker  
> wrote:
> 
> Johnson Lau  writes:
> 
>> I think the use of OP_CSV (BIP112) is not needed here (although it
>> doesn’t really harm except taking a few more bytes). All you need is
>> to sign the settlement tx with a BIP68 relative locktime. Since this
>> is a 2-of-2 branch, both parties need to agree with the relative
>> locktime, so it is not necessary to restrict it through OP_CSV
> 
> I keep forgetting about BIP68, but you're right, that should be
> sufficient for our use-case and would safe us a few bytes.
> 

With taproot, this actually saves a lot more than a few bytes. For each update, 
you will make 3 signatures. One is a SIGHASH_ALL spending the setup TXO with no 
locktime. One is a NOINPUT spending a previous update TXO with absolute 
locktime. One is a NOINPUT spending the latest update TXO with relative 
locktime. For the first and third signatures, you will just sign directly with 
the scriptPubKey, without revealing the hidden taproot script. The second 
signature will reveal the taproot script, but it is needed only when someone 
published an outdated update tx.
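The three per-update signatures described above can be laid out as data (the field names are my own illustration, not a real serialization):

```python
# Sketch of the three signatures made per eltoo state update under taproot,
# following the description above. Field names are illustrative assumptions.

def update_signatures(state):
    return [
        {"spends": "setup TXO",           "sighash": "ALL",
         "locktime": None,                "reveals_taproot_script": False},
        {"spends": "previous update TXO", "sighash": "NOINPUT",
         "locktime": ("absolute", state), "reveals_taproot_script": True},
        {"spends": "latest update TXO",   "sighash": "NOINPUT",
         "locktime": ("relative", "settlement delay"),
         "reveals_taproot_script": False},
    ]

sigs = update_signatures(42)
# Only the spend of an outdated update TXO ever reveals the taproot script;
# the other two paths sign directly against the scriptPubKey.
assert sum(s["reveals_taproot_script"] for s in sigs) == 1
assert sum(s["sighash"] == "NOINPUT" for s in sigs) == 2
```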





Re: [bitcoin-dev] Safer NOINPUT with output tagging

2018-12-21 Thread Johnson Lau via bitcoin-dev


> On 21 Dec 2018, at 7:40 PM, ZmnSCPxj  wrote:
> 
> Good morning Johnson,
> 
>> The proposed solution is that an output must be “tagged” for it to be 
>> spendable with NOINPUT, and the “tag” must be made explicitly by the payer. 
>> There are 2 possible ways to do the tagging:
> 
> First off, this is a very good idea I think.
> 
> 
>>While this seems fully compatible with eltoo, is there any other 
>> proposals require NOINPUT, and is adversely affected by either way of 
>> tagging?
> 
> 
> It prevents use of SIGHASH_NOINPUT to support walletless offchain protocols.
> https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-May/015925.html
> https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-May/015926.html
> 
> In brief, this idea of "walletless offchain software" is motivated by the 
> fact, that various onchain wallets exist with many features.
> For instance, privacy-enhancement as in Samourai/Wasabi/etc.
> And so on.
> There are requests to include such features in e.g. Lightning software, for 
> example: https://github.com/ElementsProject/lightning/issues/2105
> But it is enough of a challenge to implement Lightning, without the 
> additional burden of implementing nice onchain features like coin control and 
> change labelling.
> 
> It would be best if we can retain features from an onchain wallet, while 
> using our coin on an offchain system.
> Further, it would allow onchain wallet developers to focus and gain expertise 
> on onchain wallet features, and, vice versa, for offchain walletless software 
> developers to focus on offchain software features.
> 
> The core idea comes from the way that offchain systems need to be set up:
> 
> 1.  First we agree on a (currently unconfirmed) txid and output number on 
> which to anchor our offchain system (the funding transaction).
> 2.  Then we sign a backout transaction (the initial commitment transactions 
> under Poon-Dryja, or the timelock branches for CoinSwapCS, or the initial 
> kickoff-settlement transactions for Decker-Russell-Osuntokun) spending the 
> agreed TXO, to return the money back to the owner(s) in case some participant 
> aborts the setting up of the offchain system.
> 3.  Then we sign and broadcast the funding transaction.
> 
> Unfortunately, the typical onchain wallet has a very simple and atomic 
> (uncuttable) process for making transactions:
> 
> 1.  Make, sign, and broadcast transaction with an output paying to the 
> desired address.
> 
> Thus a typical onchain wallet cannot be used to set up a funding transaction 
> for an offchain system.
> 
> Now suppose we take advantage of `SIGHASH_NOINPUT`, and modify our offchain 
> system setup as below:
> 
> 1.  First we agree on a N-of-N pubkey on which to anchor our offchain system 
> (the funding address).
> 2.  Then we sign (with SIGHASH_NOINPUT) a backout transaction (the initial 
> commitment transactions under Poon-Dryja, or the timelock branches for 
> CoinSwapCS, or the initial kickoff-settlement transactions for 
> Decker-Russell-Osuntokun), spending the agreed funding address, to return the 
> money back to the owner(s) in case some participant aborts the setting up of 
> the offchain system.
> 3.  Make, sign, and broadcast transaction with an output paying to the 
> funding address.  This step can be done by any typical onchain wallet.
> 
> Note that only the starting backout transaction is *required* to sign with 
> `SIGHASH_NOINPUT`.
> For instance, a Poon-Dryja channel may sign succeeding commitment 
> transactions with `SIGHASH_ALL`.
> Finally, only in case of disaster (some participant aborts before the 
> offchain system is set up) is the `SIGHASH_NOINPUT` backoff transaction 
> broadcasted.
> A "normal close" of the offchain system can be signed with typical 
> `SIGHASH_ALL` for no fungibility problems.
> 
> With this, an offchain system need not require its implementing software to 
> implement its own wallet.
> Further, onchain wallets can directly put its funds into an offchain system, 
> without requiring an onchain transfer to an offchain software wallet.
> 
> This can be helpful when building overall software.
> We might take any commodity onchain wallet and any commodity offchain 
> software, and we can integrate them easily to create a seamless wallet 
> experience that allows spending and receiving onchain and offchain.
> Further, improvements in one software component do not require re-building of 
> the other software component.
> 
> --
> 
> That said:
> 
> 1.  For Lightning and similar systems, the fact that the Lightning node will 
> give you an address that, when paid using any commodity onchain wallet, opens 
> a channel, means that people will make wrong assumptions.
>In particular, they are likely to assume that address reuse is safe and 
> will attempt to "refill" their channel by paying to the same address again in 
> the future.
>From this alone, we can immediately see that this idea is pointless.
> 2.  Dual-funding, 

Re: [bitcoin-dev] Safer sighashes and more granular SIGHASH_NOINPUT

2018-12-20 Thread Johnson Lau via bitcoin-dev


> On 17 Dec 2018, at 11:10 AM, Rusty Russell  wrote:
> 
> Johnson Lau  writes:
>> I don’t think this has been mentioned: without signing the script or masked 
>> script, OP_CODESEPARATOR becomes unusable or insecure with NOINPUT.
>> 
>> In the new sighash proposal, we will sign the hash of the full script (or 
>> masked script), without any truncation. To make OP_CODESEPARATOR works like 
>> before, we will commit to the position of the last executed 
>> OP_CODESEPARATOR. If NOINPUT doesn’t commit to the masked script, it will 
>> just blindly committing to a random OP_CODESEPARATOR position, which a 
>> wallet couldn’t know what codes are actually being executed.
> 
> My anti-complexity argument leads me to ask why we'd support
> OP_CODESEPARATOR at all?  Though my argument is weaker here: no wallet
> need support it.

Because it could make scripts more compact in some cases?

This is an example: 
https://github.com/bitcoin/bitcoin/pull/11423#issuecomment-333441321 


But this is probably not a good example for taproot, as it could be more 
efficient by making the 2 branches as different script merkle leaves.


> 
> But I don't see how OP_CODESEPARATOR changes anything here, wrt NOINPUT?
> Remember, anyone can create an output which can be spent by any NOINPUT,
> whether we go for OP_MASK or simply not commiting to the input script.
> 

Let me elaborate more. Currently, scriptCode is truncated at the last executed 
CODESEPARATOR. If we have a very big script with many CODESEPARATORs and 
CHECKSIGs, there will be a lot of hashing to do.

To fix this problem, it is proposed that the new sighash will always commit to 
the same H(script), instead of the truncated scriptCode. So we only need to do 
the H(script) once, even if the script is very big

In the case of NOINPUT with MASKEDSCRIPT, it will commit to the 
H(masked_script) instead of H(script).

To make CODESEPARATOR work as before, the sighash will also commit to the 
position of the last executed CODESEPARATOR. So the semantics doesn’t change. 
For scripts without CODESEPARATOR, the committed value is a constant.
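The hashing-cost difference can be sketched with a toy model, where a script is just a list of opcode strings. This is an illustration of the idea only, not consensus serialization:

```python
import hashlib

# Toy contrast between legacy scriptCode truncation and the proposed scheme of
# committing once to H(script) plus the last-executed CODESEPARATOR position.

script = ["OP_DUP", "OP_CODESEPARATOR", "OP_CHECKSIG",
          "OP_CODESEPARATOR", "OP_CHECKSIG"]

def legacy_scriptcode_hash(script, last_codesep_index):
    # legacy: each CHECKSIG rehashes the script truncated after the last
    # executed CODESEPARATOR, so hashing work repeats per signature
    tail = " ".join(script[last_codesep_index + 1:])
    return hashlib.sha256(tail.encode()).digest()

def new_commitment(script, last_codesep_pos):
    # proposed: H(script) is computed once; only a cheap position field varies
    h_script = hashlib.sha256(" ".join(script).encode()).digest()
    return h_script + last_codesep_pos.to_bytes(4, "little")

# Legacy: a different (full) hash per CODESEPARATOR position.
assert legacy_scriptcode_hash(script, 1) != legacy_scriptcode_hash(script, 3)

# New scheme: the expensive H(script) part is identical for both CHECKSIGs;
# the commitments still differ, but only in the 4-byte position.
assert new_commitment(script, 1)[:32] == new_commitment(script, 3)[:32]
assert new_commitment(script, 1) != new_commitment(script, 3)
```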

IF NOINPUT does not commit to H(masked_script), technically it could still 
commit to the position of the last executed CODESEPARATOR. But since the wallet 
is not aware of the actual content of the script, it has to guess the meaning 
of such committed positions, like “with the HD key path m/x/y/z, I assume the 
script template is blah blah blah because I never use this path for another 
script template, and the meaning of signing the 3rd CODESEPARATOR is blah blah 
blah”. It still works if the assumptions hold, but sounds quite unreliable to 
me.

Johnson



Re: [bitcoin-dev] Safer NOINPUT with output tagging

2018-12-20 Thread Johnson Lau via bitcoin-dev


> On 21 Dec 2018, at 1:20 AM, Christian Decker  
> wrote:
> 
> Johnson Lau  writes:
>> Correct me if I’m wrong.
>> 
>> For the sake of simplicity, in the following I assume BIP118, 143, and
>> 141-P2WSH are used (i.e. no taproot). Also, I skipped all the possible
>> optimisations.
>> 
>> 1. A and B are going to setup a channel.
>> 
>> 2. They create one setup tx, with a setup output of the following
>> script:  CLTV DROP 2 Au Bu 2 CHECKMULTISIG. Do not sign
> 
> If we are using a trigger transaction the output of the setup
> transaction would simply be `2 Au Bu 2 OP_CMS`. If we were to use a CLTV
> in there we would not have an option to later attach a collaborative
> close transaction that is valid immediately. Furthermore the timeout of
> the CLTV would start ticking down the exact moment the setup transaction
> is confirmed, hence whatever effect we are trying to achieve with that
> timelock is limited, and we have a limit to the total lifetime of the
> channel.

CLTV is absolute locktime. Only CSV will have the “time ticking” issue, but 
that’s not used here. The required locktime  is many years in the past. To 
collaboratively close, you just need to sign with SIGHASH_ALL, with a locktime 
s+1.
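The state-ordering trick behind those far-past locktimes can be sketched as follows. The base value and the exact off-by-one placement are illustrative assumptions; the point is only that OP_CLTV turns the state number into a replace-only-by-newer rule:

```python
# Eltoo-style ordering: state numbers are encoded in a far-past absolute
# locktime, so update k can bind to update j's output only when k > j.
# S is an assumed base value, never an actual wall-clock constraint.

S = 499_000_000   # assumed far-past base "s"

def cltv_allows(script_locktime, tx_locktime):
    # OP_CLTV succeeds when the spending tx's nLockTime >= the committed value
    return tx_locktime >= script_locktime

def can_replace(old_state, new_state):
    # the output of update tx `old_state` commits to locktime S + old_state + 1;
    # the update tx for `new_state` carries nLockTime = S + new_state
    return cltv_allows(S + old_state + 1, S + new_state)

assert can_replace(3, 4)       # a newer state can bind to an older update output
assert can_replace(3, 10)      # ...or leapfrog several intermediate states
assert not can_replace(4, 4)   # a state cannot replace itself
assert not can_replace(5, 4)   # an older state cannot replace a newer one
```

Since the locktime is already in the past, it never delays anything; it only enforces the ordering.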

> 
>> 3. They create the update tx 0, spending the setup output with NOINPUT
>> and locktime = s+1, to the update-0 output with the script: IF 2 As0
>> Bs0 2 CHECKMULTISIG ELSE  CLTV DROP 2 Au Bu 2 CHECKMULTISIG ENDIF
> 
> Update 0 is usually what I call the trigger transaction. It takes the
> 2-of-2 multisig from the setup transaction and translates it into the
> two-branch output that further updates or settlements can be attached
> to. The settlement transaction attached to the trigger / update 0
> reflects the initial state of the channel, i.e., if A added 2 BTC and B
> added 1 BTC then settlement 0 will have 2 outputs with value 2 and 1
> respectively, with the user's keys (this can also be considered the
> refund in case of one party disappearing right away).
> 
> The second branch in the script you posted is the update branch, which is
> not encumbered by a CSV, while the first branch is the one encumbered
> with the CSV and is called the settlement branch since we'll be
> attaching settlement txs to it.
> 
> The CLTV looks correct to me and ensures that we can only attach any
> state >= s+1.
> 
> So just to show the output script for state `i` how I think they are
> correct:
> 
> ```
> OP_IF
>   OP_CSV 2   2 OP_CHECKMULTISIG
> OP_ELSE
>   OP_CLTV OP_DROP 2   2 OP_CHECKMULTISIG 
> ```
> 
> And the input scripts for the update tx and the settlement tx
> respectively would be:
> 
> ```
> OP_FALSE  
> ```
> 
> and
> 
> ```
> OP_TRUE  
> ```

I think the use of OP_CSV (BIP112) is not needed here (although it does no 
real harm except taking a few more bytes). All you need is to sign the 
settlement tx with a BIP68 relative locktime. Since this is a 2-of-2 branch, 
both parties need to agree with the relative locktime, so it is not necessary 
to restrict it through OP_CSV


> 
>> 4. They create the settlement tx 0, spending the update-0 output with
>> As0 and Bs0 using BIP68 relative-locktime, with 2 settlement outputs
> 
> If I'm not mistaken the CSV needs to be in the scriptPubkey (or P2WSH
> equivalent) since segwit witnesses only allow pushes. Hence the script
> in point 3 needs to add that :-)

I believe you confused OP_CSV (BIP112) with BIP68. Relative locktime is 
enforced by BIP68 (i.e. setting the nSequence). OP_CSV indirectly enforces 
relative-locktime by checking the value of nSequence. BIP68 could work 
standalone without OP_CSV, while OP_CSV is dependent on BIP68. In the case of 
n-of-n eltoo state update, OP_CSV is not needed because all n parties need to 
agree with the same nSequence value of the settlement tx. This is enough to 
make sure the settlement tx has delayed settlement.
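The division of labor between BIP68 and OP_CSV can be sketched as two separate checks. The encoding details (disable flag, type flag, units) are simplified away here:

```python
# Minimal model of the BIP68-vs-OP_CSV point above: the relative locktime is
# enforced by the spending input's nSequence alone (BIP68); OP_CSV (BIP112)
# only adds an in-script floor on that nSequence value.

def bip68_ok(n_sequence_blocks, parent_height, tip_height):
    # consensus (BIP68): input is valid once the parent output has aged enough
    return tip_height >= parent_height + n_sequence_blocks

def csv_ok(script_min_blocks, n_sequence_blocks):
    # script (BIP112): OP_CSV merely checks nSequence >= the committed minimum
    return n_sequence_blocks >= script_min_blocks

# BIP68 works standalone: in an n-of-n settlement all signers committed to the
# nSequence value, so no OP_CSV is needed to force the delay.
assert not bip68_ok(144, 1000, 1100)
assert bip68_ok(144, 1000, 1144)

# OP_CSV depends on BIP68 for actual enforcement: it only inspects the field.
assert csv_ok(144, 144) and not csv_ok(144, 100)
```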

> 
>> 5. They sign the setup tx and let it confirm
> 
> They also need to sign (but not broadcast) update_0, in order to allow
> either party to initiate the closure if the counterparty become
> unresponsive. The order in which settlement_0 and update_0 are signed is
> not important by the way, so we can just batch these. The important part
> is that signing the setup acts as a commitment.

Sure. This is obvious.

> 
>> 6. To update, they create the update tx 1, spending the setup output
>> with NOINPUT and locktime = s+2, to the update-1 output with the
>> script: IF 2 As1 Bs1 2 CHECKMULTISIG ELSE <locktime> CLTV DROP 2 Au Bu 2
>> CHECKMULTISIG ENDIF and create the settlement tx 1, spending the
>> update-1 output with As1 and Bs1 using relative-locktime, with 2
>> settlement outputs
> 
> The output script of the updates are identical to the ones in the
> trigger or update_0 transaction, so they'd also need a CSV (this is why
> committing to the script structure with masking still works).
> 
>> 7. To close the channel, broadcast update tx 1. Wait for several
>> confirmations. And broadcast settlement-tx-1
> 
> We have 

Re: [bitcoin-dev] Safer NOINPUT with output tagging

2018-12-20 Thread Johnson Lau via bitcoin-dev


> On 20 Dec 2018, at 6:09 AM, Christian Decker  
> wrote:
> 
> Ruben Somsen via bitcoin-dev writes:
> 
>> Hi Johnson,
>> 
>> The design considerations here seem similar to the ML discussion of
>> whether Graftroot should be optional [1].
>> 
>>> While this seems fully compatible with eltoo, is there any other proposals 
>>> require NOINPUT, and is adversely affected by either way of tagging?
>> 
>> As far as I can tell it should be compatible with Statechains [2],
>> since it pretty much mirrors Eltoo in setup.
>> 
>> My understanding is somewhat lacking, so perhaps I am missing the
>> mark, but it is not completely clear to me how this affects
>> fungibility if taproot gets added and the setup and trigger tx for
>> Eltoo get combined into a single transaction. Would the NOINPUT
>> spending condition be hidden inside the taproot commitment?
> 
> I'm not aware of a way to combine the setup and trigger transaction. The
> trigger transaction was introduced in order to delay the start of the
> timeouts until a later time, to avoid having an absolute lifetime limit
> and having really huge timeout. If we were to combine the trigger
> transaction with the setup transaction (which is broadcast during
> channel creation), all of those timeouts would start counting down
> immediately, and we could just skip the trigger transaction
> altogether. It'd be more interesting to combine update and trigger
> transactions in a sort of cut-through combination, but that doesn't seem
> possible outside of Mimblewimble.
> 
> Cheers,
> Christian


Correct me if I’m wrong.

For the sake of simplicity, in the following I assume BIP118, 143, and 
141-P2WSH are used (i.e. no taproot). Also, I skipped all the possible 
optimisations.

1. A and B are going to setup a channel.

2. They create one setup tx, with a setup output of the following script:  
<locktime> CLTV DROP 2 Au Bu 2 CHECKMULTISIG. Do not sign

3. They create the update tx 0, spending the setup output with NOINPUT and 
locktime = s+1, to the update-0 output with the script:
IF 2 As0 Bs0 2 CHECKMULTISIG ELSE <locktime> CLTV DROP 2 Au Bu 2 CHECKMULTISIG ENDIF

4. They create the settlement tx 0, spending the update-0 output with As0 and 
Bs0 using BIP68 relative-locktime, with 2 settlement outputs

5. They sign the setup tx and let it confirm

6. To update, they create the update tx 1, spending the setup output with 
NOINPUT and locktime = s+2, to the update-1 output with the script:
IF 2 As1 Bs1 2 CHECKMULTISIG ELSE <locktime> CLTV DROP 2 Au Bu 2 CHECKMULTISIG ENDIF
and create the settlement tx 1, spending the update-1 output with As1 and Bs1 
using relative-locktime, with 2 settlement outputs

7. To close the channel, broadcast update tx 1. Wait for several confirmations. 
And broadcast settlement-tx-1
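Steps 2 to 6 above can be sketched as a template generator. This is purely illustrative: the key names are placeholders, and the concrete CLTV arguments (s+1 for the setup output, s+i+2 for update output i) are my assumption, chosen to be consistent with the locktimes stated above rather than taken from the original mail:

```python
# Illustrative generator for the eltoo-style scripts described above.
# Not a wallet implementation; script elements are plain strings.

def setup_output_script(s: int) -> str:
    # Spendable by an update tx with locktime >= s+1, signed with NOINPUT.
    return f"<{s + 1}> CLTV DROP 2 <Au> <Bu> 2 CHECKMULTISIG"

def update_output_script(s: int, i: int) -> str:
    # IF branch: settlement keys for state i (delay enforced via BIP68 on
    # the spending tx). ELSE branch: only a later update tx, whose
    # locktime is at least s+i+2, can spend it.
    return (f"IF 2 <As{i}> <Bs{i}> 2 CHECKMULTISIG "
            f"ELSE <{s + i + 2}> CLTV DROP 2 <Au> <Bu> 2 CHECKMULTISIG ENDIF")

def update_tx_locktime(s: int, i: int) -> int:
    # Update tx i carries locktime s+i+1, as in steps 3 and 6.
    return s + i + 1
```

Each new state only bumps the locktime and swaps in fresh settlement keys; the update keys Au and Bu stay fixed for the channel's lifetime.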


___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Safer NOINPUT with output tagging

2018-12-19 Thread Johnson Lau via bitcoin-dev


> On 18 Dec 2018, at 4:08 AM, Johnson Lau via bitcoin-dev 
>  wrote:
> 
> 
> 
>> On 17 Dec 2018, at 11:48 PM, Ruben Somsen <mailto:rsom...@gmail.com> wrote:
>> 
>> Hi Johnson,
>> 
>> The design considerations here seem similar to the ML discussion of
>> whether Graftroot should be optional [1].
> 
> Yes, but the “tagging” puts more emphasis on the payer’s side: if the payer 
> cannot guarantee that the payee would never reuse the key, the payer could 
> avoid any NOINPUT-related trouble by tagging properly.
> 
>> 
>>> While this seems fully compatible with eltoo, is there any other proposals 
>>> require NOINPUT, and is adversely affected by either way of tagging?
>> 
>> As far as I can tell it should be compatible with Statechains [2],
>> since it pretty much mirrors Eltoo in setup.
>> 
>> My understanding is somewhat lacking, so perhaps I am missing the
>> mark, but it is not completely clear to me how this affects
>> fungibility if taproot gets added and the setup and trigger tx for
>> Eltoo get combined into a single transaction. Would the NOINPUT
>> spending condition be hidden inside the taproot commitment?
> 
> For the design considerations I mentioned above, the tags must be explicit 
> and configurable by the payer. So it couldn’t be hidden in taproot.
> 
> If you don’t care about fungibility, you can always tag your setup output, 
>> and make it ready for NOINPUT spending. Every update will need 2 signatures: 
> a NOINPUT to spend the setup output or an earlier update output, and a 
> NOINPUT to settle the latest update output.
> 
> If you care about fungibility, you can’t tag your setup output. Every update 
> will need 3 signatures: a SINGLEINPUT (aka ANYONECANPAY) to spend the setup 
> output, a NOINPUT to spend an earlier update output, and a NOINPUT to settle 
> the latest update output.
> 
> (Actually, as soon as you made the first update tx with SINGLEINPUT, you 
> don’t strictly need to make any SINGLEINPUT signatures in the later updates 
> again, as the first update tx (or any update with a SINGLEINPUT signature) 
> could be effectively the trigger tx. While it makes the settlement more 
> expensive, it also means accidentally missing a SINGLEINPUT signature will 
> not lead to any fund loss. So security-wise it’s same as the always-tagging 
> scenario.)
> 
> The most interesting observation is: you never have the need to use NOINPUT 
> on an already confirmed UTXO, since nothing about a confirmed UTXO is 
> mutable. And every smart contract must anchor to a confirmed UTXO, or the 
> whole contract is double-spendable. So the ability to NOINPUT-spend a setup 
>> output should not be strictly needed. In some (but not all) cases it might 
> make the protocol simpler, though.
> 
> So the philosophy behind output tagging is “avoid NOINPUT at all cost, until 
> it is truly unavoidable"
> 

After thinking more carefully, I believe output tagging could have no adverse 
effect on eltoo.

Consider a system without tagging, where you could always spend an output with 
NOINPUT. Under taproot, state update could be made in 2 ways:

a) Making 2 sigs for each update. One sig is a “script path” locktime NOINPUT 
spending of the setup output or an earlier update output. One sig is a “key 
path” relative-locktime NOINPUT spending of the new update output. In taproot 
terminology, “key path” means direct spending with the scriptPubKey, and 
“script path” means revealing the script hidden in taproot. Key path spending 
is always cheaper.

b) Making 3 sigs for each update. One sig is a key path SINGLEINPUT (aka 
ANYONECANPAY) or NOINPUT spending of the setup output, without any locktime. 
One sig is a script path locktime NOINPUT spending of an earlier update output 
(if this is not the first update). One sig is a key path relative-locktime 
NOINPUT spending of the new update output

Note that in b), the first signature could be either SINGLEINPUT or NOINPUT, 
and both work just fine. So SINGLEINPUT should be used to avoid unnecessary 
replayability.

In the case of uncooperative channel closing, b) is always cheaper than a), 
since the first broadcast signature will be a key path signature. Also, b) has 
better privacy if no one is cheating (only the last update is broadcast). The 
only information leaked in b) is the use of SINGLEINPUT and the subsequent 
relative-locktime NOINPUT. However, the script path signature in a) will leak 
the state number, which is the maximum number of updates made in this channel.

In conclusion, b) is cheaper and more private, but it is more complex by 
requiring 3 sigs per update rather than 2. I think it is an acceptable 
tradeoff. (And as I mentioned in my last mail, missing some SINGLEINPUT sigs is 
not the end 

Re: [bitcoin-dev] Schnorr and taproot (etc) upgrade

2018-12-19 Thread Johnson Lau via bitcoin-dev


> On 18 Dec 2018, at 12:58 PM, Anthony Towns  wrote:
> 
> On Mon, Dec 17, 2018 at 10:18:40PM -0500, Russell O'Connor wrote:
>> On Mon, Dec 17, 2018 at 3:16 PM Johnson Lau  wrote:
>>But I’m not sure if that would do more harm than good. For example, people
>>might lose money by copying an existing script template. But they might
>>also lose money in the same way as CHECKMULTISIG is disabled. So I’m not
>>sure.
> 
> Well, if CHECKSIG* and CHECKMULTISIG* are all disabled in favour of
> CHECKDLS, CHECKDLSVERIFY and CHECKDLSADD with both different names and
> different opcodes, copying a script template opcode-for-opcode from v0
> to v1 will always fail. (With taproot, this doesn't necessarily mean you
> lose money, even if the script is impossible to ever satisfy, since you
> may be able to recover via the direct signature path)
> 
>>Another related thing I’d like to bikeshed is to pop the stack after
>>OP_CLTV and OP_CSV. The same pros and cons apply.
>> This one is almost a no-brainer I think.  Nearly every instance of OP_CSV is
>> followed by an OP_DROP and we'd save 1 WU per OP_CSV if we pop the stack
>> afterwards.
> 
> It's definitely bikeshedding so whatever; but to me, it seems like it'd
> be easier for everyone to have it so that if you've got the same opcode
> in v0 script and v1.0 script; they have precisely the same semantics.
> 
> (That said, constructions like "<n> CLTV <key> CHECKSIGVERIFY" that avoid
> the DROP and work when you're expected to leave a true value on the
> stack won't work if you have to end up with an empty stack)
> 
> Cheers,
> aj
> 

I think you mean <key> CHECKSIGVERIFY <n> CLTV, but this works only for simple 
script. Most likely you need a DROP if you use IF or CODESEPARATOR.

However, if we change the rule from “one true stack item” to “empty stack”, 
CLTV/CSV popping stack will make more sense. So I think either we change all, 
or change nothing.
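A toy stack simulation may make the trade-off concrete. It is highly simplified (signatures always "verify", data pushes are plain strings, and "CLTV" here is the proposed popping variant), so it is a sketch of the semantics under discussion, not a real Script interpreter:

```python
# Toy stack model of the two success rules discussed above.
def run(script, pop_cltv=True):
    stack = []
    for op in script:
        if op == "CHECKSIG":
            stack.pop(); stack.pop(); stack.append(True)  # sig, key -> bool
        elif op == "CHECKSIGVERIFY":
            stack.pop(); stack.pop()                      # leaves nothing
        elif op == "CLTV":
            if pop_cltv:
                stack.pop()   # proposed behaviour: consume the locktime arg
            # current consensus: NOP-like, the argument stays (needs DROP)
        elif op == "DROP":
            stack.pop()
        else:
            stack.append(op)  # treat anything else as a data push
    return stack

# <sig> <key> CHECKSIGVERIFY <n> CLTV with popping CLTV -> empty stack
assert run(["sig", "key", "CHECKSIGVERIFY", "n", "CLTV"]) == []
# with NOP-style CLTV, "n" remains, which satisfies "one true item"
assert run(["sig", "key", "CHECKSIGVERIFY", "n", "CLTV"], pop_cltv=False) == ["n"]
```

This shows why the two changes go together: a popping CLTV pairs naturally with an "empty stack" rule, while a NOP-style CLTV pairs with "one true stack item".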

The “true stack item” rule and CLTV/CSV as NOPs are tech debt. Fixing them in a new 
script version makes script creation easier and sometimes cheaper, but the fix 
itself creates further tech debts in the code. So I don’t have strong opinion 
on this topic.


Re: [bitcoin-dev] Schnorr and taproot (etc) upgrade

2018-12-19 Thread Johnson Lau via bitcoin-dev


> On 16 Dec 2018, at 7:38 AM, Russell O'Connor via bitcoin-dev 
>  wrote:
> 
> On Fri, Dec 14, 2018 at 8:39 AM Anthony Towns via bitcoin-dev 
>  > wrote:
>   5. if there's exactly one, non-zero item on the stack; succeed
> 
> Unless it is too much bikeshedding, I'd like to propose that to succeed the 
> stack must be exactly empty.  Script is more composable that way, removing 
> the need for special logic to handle top-level CHECKSIG, vs mid-level 
> CHECKSIGVERIFY.

I proposed the same in BIP114. I wish Satoshi had designed it that way. But I’m 
not sure if that would do more harm than good. For example, people might lose 
money by copying an existing script template. But they might also lose money in 
the same way as CHECKMULTISIG is disabled. So I’m not sure.

Another related thing I’d like to bikeshed is to pop the stack after OP_CLTV 
and OP_CSV. The same pros and cons apply.


Re: [bitcoin-dev] Safer NOINPUT with output tagging

2018-12-19 Thread Johnson Lau via bitcoin-dev


> On 17 Dec 2018, at 11:48 PM, Ruben Somsen  wrote:
> 
> Hi Johnson,
> 
> The design considerations here seem similar to the ML discussion of
> whether Graftroot should be optional [1].

Yes, but the “tagging” puts more emphasis on the payer’s side: if the payer cannot 
guarantee that the payee would never reuse the key, the payer could avoid any 
NOINPUT-related trouble by tagging properly.

> 
>> While this seems fully compatible with eltoo, is there any other proposals 
>> require NOINPUT, and is adversely affected by either way of tagging?
> 
> As far as I can tell it should be compatible with Statechains [2],
> since it pretty much mirrors Eltoo in setup.
> 
> My understanding is somewhat lacking, so perhaps I am missing the
> mark, but it is not completely clear to me how this affects
> fungibility if taproot gets added and the setup and trigger tx for
> Eltoo get combined into a single transaction. Would the NOINPUT
> spending condition be hidden inside the taproot commitment?

For the design considerations I mentioned above, the tags must be explicit and 
configurable by the payer. So it couldn’t be hidden in taproot.

If you don’t care about fungibility, you can always tag your setup output, and 
make it ready for NOINPUT spending. Every update will need 2 signatures: a 
NOINPUT to spend the setup output or an earlier update output, and a NOINPUT to 
settle the latest update output.

If you care about fungibility, you can’t tag your setup output. Every update 
will need 3 signatures: a SINGLEINPUT (aka ANYONECANPAY) to spend the setup 
output, a NOINPUT to spend an earlier update output, and a NOINPUT to settle 
the latest update output.

(Actually, as soon as you made the first update tx with SINGLEINPUT, you don’t 
strictly need to make any SINGLEINPUT signatures in the later updates again, as 
the first update tx (or any update with a SINGLEINPUT signature) could be 
effectively the trigger tx. While it makes the settlement more expensive, it 
also means accidentally missing a SINGLEINPUT signature will not lead to any 
fund loss. So security-wise it’s same as the always-tagging scenario.)

The most interesting observation is: you never have the need to use NOINPUT on 
an already confirmed UTXO, since nothing about a confirmed UTXO is mutable. And 
every smart contract must anchor to a confirmed UTXO, or the whole contract is 
double-spendable. So the ability to NOINPUT-spend a setup output should not be 
strictly needed. In some (but not all) cases it might make the protocol simpler, 
though.

So the philosophy behind output tagging is “avoid NOINPUT at all cost, until it 
is truly unavoidable"

> 
> Cheers,
> Ruben Somsen
> 
> [1] 
> https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-May/016006.html
> [2]  
> https://www.reddit.com/r/Bitcoin/comments/9nhjea/eli51525faq_for_statechains_offchain_transfer_of/
> 
> On Mon, Dec 17, 2018 at 8:20 PM Johnson Lau via bitcoin-dev
>  wrote:
>> 
>> NOINPUT is very powerful, but the tradeoff is the risks of signature replay. 
>> While the key holders are expected not to reuse key pairs, little could be 
>> done to stop payers from reusing an address. Unfortunately, key-pair reuse has 
>> been a social and technical norm since the creation of Bitcoin (the first tx 
>> made in block 170 reused the previous public key). I don’t see any hope to 
>> change this norm any time soon, if possible at all.
>> 
>> As the people who are designing the layer-1 protocol, we could always blame 
>> the payer and/or payee for their stupidity, just like those people laughed 
>> at victims of Ethereum dumb contracts (DAO, Parity multisig, etc). The 
>> existing bitcoin script language is so restrictive. It disallows many useful 
>> smart contracts, but at the same time prevented many dumb contracts. After 
>> all, “smart” and “dumb” are non-technical judgement. The DAO contract has 
>> always been faithfully executed. It’s dumb only for those invested in the 
>> project. For me, it was just a comedy show.
>> 
>> So NOINPUT brings us more smart contract capacity, and at the same time we 
>> are one step closer to dumb contracts. The target is to find a design that 
>> exactly enables the smart contracts we want, while minimising the risks of 
>> misuse.
>> 
>> The risk I am trying to mitigate is a payer mistakenly paying to a previous 
>> address with exactly the same amount, and the previous UTXO has been spent 
>> using NOINPUT. Accidental double payment is not uncommon. Even if the payee 
>> was honest and willing to refund, the money might have been spent with a 
>> replayed NOINPUT signature. Once people lost a significant amount of money 
>> this way, payers (mostly exchanges) may refuse to send mone

Re: [bitcoin-dev] Safer sighashes and more granular SIGHASH_NOINPUT

2018-12-19 Thread Johnson Lau via bitcoin-dev


> On 16 Dec 2018, at 2:55 PM, Rusty Russell via bitcoin-dev 
>  wrote:
> 
> Anthony Towns via bitcoin-dev  writes:
>> On Thu, Dec 13, 2018 at 11:07:28AM +1030, Rusty Russell via bitcoin-dev 
>> wrote:
>>> And is it worthwhile doing the mask complexity, rather than just
>>> removing the commitment to script with NOINPUT?  It *feels* safer to
>>> restrict what scripts we can sign, but is it?
>> 
>> If it's not safer in practice, we've spent a little extra complexity
>> committing to a subset of the script in each signature to no gain. If
>> it is safer in practice, we've prevented people from losing funds. I'm
>> all for less complexity, but not for that tradeoff.
> 
> There are many complexities we could add, each of which would prevent
> loss of funds in some theoretical case.

Every security measure is overkill, until someone gets burnt. If these 
security measures are really effective, no one will get burnt. The inevitable 
conclusion is: every effective security measure is overkill.

> 
> From practical experience; reuse of private keys between lightning and
> other things is not how people will lose funds[1].

Assuming a user holds a private key exclusively and securely, currently there 
are only 2 ways to lose funds by private key reuse: 1. reusing the same 
signature nonce; 2. signing the hash “one”, for the SIGHASH_SINGLE consensus 
bug.

People have lost money for the first reason. Since this is a feature of the 
signature schemes we use, it will unavoidably happen again from time to time. 
The second one has been fixed in segwit (though incompletely), and could be 
completely fixed with a simple softfork.
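The SIGHASH_SINGLE bug referred to here: in the legacy (pre-segwit) signing algorithm, when the input index is greater than the last output index, the "digest" that gets signed is the constant 1, so one such signature is replayable across transactions. A heavily simplified sketch of just that branch (the real code serializes the whole transaction; the function name is illustrative):

```python
import hashlib

UINT256_ONE = (1).to_bytes(32, "little")

def legacy_single_sighash(tx_serialized: bytes, input_index: int,
                          num_outputs: int) -> bytes:
    if input_index >= num_outputs:
        # The buggy branch: an error code was treated by callers as the
        # digest to sign, so every such signature signs the same value.
        return UINT256_ONE
    # Normal case: double-SHA256 of the (SIGHASH_SINGLE-modified) tx.
    return hashlib.sha256(hashlib.sha256(tx_serialized).digest()).digest()

# Any input beyond the last output signs the same constant, whatever
# the transaction contents are:
assert legacy_single_sighash(b"tx-a", 2, 1) == legacy_single_sighash(b"tx-b", 5, 3)
```

Segwit's BIP143 hashing removed this branch, which is the "incomplete" fix mentioned above (legacy inputs can still hit it).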

Overall speaking, while private key reuse hurts fungibility and privacy, it is 
not terribly insecure, as long as you use rfc6979 and are not foolish enough to 
sign hash “one”. This is actually thanks to the fact that signatures always 
commit to the previous txid. It makes sure that a signature is never valid 
for more than one UTXO. Unfortunately, the guarantee of non-replayability 
has incentivised the practice of key reuse since day one of bitcoin. While 
NOINPUT fundamentally changes this security assumption, it won’t change this 
long-established culture of key reuse.

So your argument seems to just beg the question. Without NOINPUT, it is just 
impossible to lose money by key reuse, and this is exactly the topic we are 
debating.


> 
> It *is* however non-trivially more complicated for wallets; they
> currently have a set of script templates which they will sign (ie. no
> OP_CODESEPARATOR) and I implemented BIP 143 with only the simplest of
> naive code[2].  In particular, there is no code to parse scripts.

Sorry that I’m not familiar with the implementation details of your wallet. But 
as you don’t have code to parse scripts, I assume your wallet can’t handle 
OP_CODESEPARATOR? However, this is exactly what you should do: only implement 
what you actually need, and ignore those unrelated details.

Also, trying to faithfully and completely reproduce the consensus code in a 
wallet (even if you don’t need that at all) could be extremely dangerous. Such a 
wallet might be tricked, for example, to sign the hash “one” and get all money 
stolen (I was told someone really did that, but I don’t know the details)

If you didn’t implement OP_CODESEPARATOR because you didn’t use it, there is no 
reason for you to fully implement OP_MASKEDPUSH nor script parsing. In existing 
signature schemes (e.g. BIP143), signatures always commit to the script being 
executed (the “scriptCode”). I assume that all wallets would re-construct the 
scriptCode at signing time, based on the limited set of script templates they 
support. If a wallet had a function called GetScriptCodeForMultiSig() for this 
purpose, all they need now is a GetMaskedScriptCodeForMultiSig() that returns 
the masked template, or a new option in the existing 
GetScriptCodeForMultiSig(). It does not need to be something like 
GetMaskedScript(GetScriptCodeForMultiSig()). After all, only a very small 
number of script templates really need NOINPUT. A GetMaskedScript() in a wallet 
is just overkill (and a vulnerability if mis-implemented).
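The point can be illustrated with a sketch: a wallet that supports only a few templates can emit the masked variant of each template directly, rather than implementing a general script-masking routine. The function names and the "MASKEDPUSH" token are hypothetical, following the proposal's idea of masking variable data pushes out of the signed script:

```python
# Illustrative template-level masking; not any real wallet's API.

def script_code_for_2of2(key_a: str, key_b: str) -> str:
    # The scriptCode a wallet reconstructs at signing time (BIP143-style).
    return f"2 {key_a} {key_b} 2 CHECKMULTISIG"

def masked_script_code_for_2of2() -> str:
    # Same template with the variable pushes masked out: a signature over
    # this commits to the script's structure but not the keys, which is
    # what allows NOINPUT rebinding across states.
    return "2 MASKEDPUSH MASKEDPUSH 2 CHECKMULTISIG"

assert script_code_for_2of2("<Au>", "<Bu>") == "2 <Au> <Bu> 2 CHECKMULTISIG"
```

No script parsing is needed: the masked form is just another hard-coded template, one per supported script shape.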

> 
> Bitcoind developers are not in a good position to assess complexity
> here.  They have to implement *everything*, so each increment seems
> minor.  In addition, none of these new script versions will ever make
> bitcoind simpler, since they have to support all prior ones.  Wallets,
> however, do not have to.
> 
> I also think that minimal complexity for (future) wallets is an
> underappreciated feature: the state of wallets in bitcoin is poor[3]
> so simplicity should be a key goal.

It is a 3-way tradeoff of security, complexity, and functionality. While not 
everyone might appreciate this, security seems to always be the dominant factor 
in bitcoin protocol development. It was the reason why most core contributors 
were hesitant towards BIP148, even though they all loved the functionality of segwit.

Re: [bitcoin-dev] Safer sighashes and more granular SIGHASH_NOINPUT

2018-12-19 Thread Johnson Lau via bitcoin-dev
I don’t think this has been mentioned: without signing the script or masked 
script, OP_CODESEPARATOR becomes unusable or insecure with NOINPUT.

In the new sighash proposal, we will sign the hash of the full script (or 
masked script), without any truncation. To make OP_CODESEPARATOR work like 
before, we will commit to the position of the last executed OP_CODESEPARATOR. 
If NOINPUT doesn’t commit to the masked script, it will just blindly commit 
to a random OP_CODESEPARATOR position, and a wallet couldn’t know which code 
is actually being executed.
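To sketch what "committing to the position of the last executed OP_CODESEPARATOR" means for the digest (the field layout below is illustrative only, not the actual proposed serialization):

```python
import hashlib

def sighash_commitment(script: bytes, last_codesep_pos: int,
                       other_fields: bytes) -> bytes:
    # The signature commits to the hash of the full (or masked) script
    # AND the position of the last executed OP_CODESEPARATOR.
    script_hash = hashlib.sha256(script).digest()
    preimage = (script_hash +
                last_codesep_pos.to_bytes(4, "little") +
                other_fields)
    return hashlib.sha256(hashlib.sha256(preimage).digest()).digest()

# Without committing to the (masked) script, the position alone is
# meaningless: the same integer could point into any script at all.
h1 = sighash_commitment(b"scriptA", 3, b"")
h2 = sighash_commitment(b"scriptB", 3, b"")
assert h1 != h2
```

Binding the position to a specific script hash is what makes the OP_CODESEPARATOR commitment meaningful for the signer.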

> On 14 Dec 2018, at 5:30 PM, Anthony Towns via bitcoin-dev 
>  wrote:
> 
> On Thu, Dec 13, 2018 at 11:07:28AM +1030, Rusty Russell via bitcoin-dev wrote:
>> And is it worthwhile doing the mask complexity, rather than just
>> removing the commitment to script with NOINPUT?  It *feels* safer to
>> restrict what scripts we can sign, but is it?
> 
> If it's not safer in practice, we've spent a little extra complexity
> committing to a subset of the script in each signature to no gain. If
> it is safer in practice, we've prevented people from losing funds. I'm
> all for less complexity, but not for that tradeoff.
> 
> Also, saying "I can't see how to break this, so it's probably good
> enough, even if other people have a bad feeling about it" is a crypto
> anti-pattern, isn't it?
> 
> I don't see how you could feasibly commit to more information than script
> masking does for use cases where you want to be able to spend different
> scripts with the same signature [0]. If that's possible, I'd probably
> be for it.
> 
> At the same time, script masking does seem feasible, both for
> lightning/eltoo, and even for possibly complex variations of scripts. So
> committing to less doesn't seem wise.
> 
>> You already need both key-reuse and amount-reuse to be exploited.
>> SIGHASH_MASK only prevents you from reusing this input for a "normal"
>> output; if you used this key for multiple scripts of the same form,
>> you're vulnerable[1].
> 
> For example, script masking seems general enough to prevent footguns
> even if (for some reason) key and value reuse across eltoo channels
> were a requirement, rather than prohibited: you'd make the script be
> " MASK  CLTV 2DROP  CHECKSIG", and your
> signature will only apply to that channel, even if another channel has
> the same capacity and uses the same keys, a and b.
> 
>> So I don't think it's worth it.  SIGHASH_NOINPUT is simply dangerous
>> with key-reuse, and Don't Do That.
> 
> For my money, "NOINPUT" commits to dangerously little context, and
> doesn't really feel safe to include as a primitive -- as evidenced by
> the suggestion to add "_UNSAFE" or similar to its name. Personally, I'm
> willing to accept a bit of risk, so that feeling doesn't make me strongly
> against the idea; but it also makes it hard for me to want to support
> adding it. To me, committing to a masked script is a huge improvement.
> 
> Heck, if it also makes it easier to do something safer, that's also
> probably a win...
> 
> Cheers,
> aj
> 
> [0] You could, perhaps, commit to knowing the private keys for all the
>*outputs* you're spending to, as well as the inputs, which comes
>close to saying "I know this is a scary NOINPUT transaction, but
>we're paying to ourselves, so it will all be okay".




[bitcoin-dev] Safer NOINPUT with output tagging

2018-12-17 Thread Johnson Lau via bitcoin-dev
NOINPUT is very powerful, but the tradeoff is the risks of signature replay. 
While the key holders are expected not to reuse key pairs, little could be done 
to stop payers from reusing an address. Unfortunately, key-pair reuse has been a 
social and technical norm since the creation of Bitcoin (the first tx made in 
block 170 reused the previous public key). I don’t see any hope to change this 
norm any time soon, if possible at all.

As the people who are designing the layer-1 protocol, we could always blame the 
payer and/or payee for their stupidity, just like those people laughed at 
victims of Ethereum dumb contracts (DAO, Parity multisig, etc). The existing 
bitcoin script language is so restrictive. It disallows many useful smart 
contracts, but at the same time prevented many dumb contracts. After all, 
“smart” and “dumb” are non-technical judgement. The DAO contract has always 
been faithfully executed. It’s dumb only for those invested in the project. For 
me, it was just a comedy show.

So NOINPUT brings us more smart contract capacity, and at the same time we are 
one step closer to dumb contracts. The target is to find a design that exactly 
enables the smart contracts we want, while minimising the risks of misuse.

The risk I am trying to mitigate is a payer mistakenly paying to a previous 
address with exactly the same amount, and the previous UTXO has been spent 
using NOINPUT. Accidental double payment is not uncommon. Even if the payee was 
honest and willing to refund, the money might have been spent with a replayed 
NOINPUT signature. Once people lost a significant amount of money this way, 
payers (mostly exchanges) may refuse to send money to anything other than 
P2PKH, native-P2WPKH and native-P2WSH (as the only 3 types without possibility 
of NOINPUT)

The proposed solution is that an output must be “tagged” for it to be spendable 
with NOINPUT, and the “tag” must be made explicitly by the payer. There are 2 
possible ways to do the tagging:

1. A certain bit in the tx version must be set
2. A certain bit in the scriptPubKey must be set

I will analyse the pros and cons later.

Using eltoo as example. The setup utxo is a simple 2-of-2 multisig, and should 
not be tagged. This makes it indistinguishable from normal 1-of-1 utxo. The 
trigger tx, which spends the setup utxo, should be tagged, so the update txs 
could spend the trigger utxo with NOINPUT. Similarly, all update txs should be 
tagged, so they could be spent by other update txs and settlement tx with 
NOINPUT. As the final destination, there is no need to tag in the settlement tx.

From the payer’s perspective, tagging means “I believe this address is for 
one-time-use only”. Since we can’t control how other people manage their 
addresses, we should never do tagging when paying to other people.

I mentioned 2 ways of tagging, and they have pros and cons. First of all, 
tagging in either way should not complicate the eltoo protocol in any way, nor 
bring extra block space overhead.

A clear advantage of tagging with scriptPubKey is we could tag on a per-output 
basis. However, scriptPubKey tagging is only possible with native-segwit, not 
P2SH. That means we have to disallow NOINPUT in P2SH-segwit (Otherwise, *all* 
P2SH addresses would become “risky” for payers) This should be ok for eltoo, 
since it has no reason to use P2SH-segwit in intermediate txs, which is more 
expensive.

Another problem with scriptPubKey tagging is all the existing bech32 
implementations will not understand the special tag, and will pay to a tagged 
address as usual. An upgrade would be needed for them to refuse sending to 
tagged addresses by default.

On the other hand, tagging with tx version will also protect P2SH-segwit, and 
all existing wallets are protected by default. However, it is somewhat of a layer 
violation, and you could only tag all or none of the outputs in the same tx. Also, as 
Bitcoin Core has just removed the tx version from the UTXO database, adding it 
back could be a little bit annoying, but doable.

There is an extension to the version tagging, which could make NOINPUT even 
safer. In addition to tagging requirement, NOINPUT will also sign the version 
of the previous tx. If the wallet always uses a randomised tx version, it makes 
accidental replay very unlikely. However, that will burn a few more bits in the 
tx version field.
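The version-tagging extension could be sketched as follows. The choice of bit 30 for the NOINPUT tag and the size of the randomised range are hypothetical; the text above proposes the mechanism, not concrete bit assignments:

```python
import secrets

NOINPUT_TAG_BIT = 1 << 30  # hypothetical tag position in nVersion

def make_version(noinput_spendable: bool) -> int:
    # Randomise the low bits so that an accidental repeat payment is very
    # unlikely to produce a tx whose version matches a replayed NOINPUT
    # signature (which would also sign the previous tx's version).
    version = secrets.randbelow(1 << 16)
    if noinput_spendable:
        version |= NOINPUT_TAG_BIT
    return version
```

As the text notes, each randomised or tag bit is one fewer bit available for future version-based signalling, which is the cost of this variant.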

While this seems fully compatible with eltoo, is there any other proposals 
require NOINPUT, and is adversely affected by either way of tagging?


Re: [bitcoin-dev] Safer sighashes and more granular SIGHASH_NOINPUT

2018-12-17 Thread Johnson Lau via bitcoin-dev


> On 12 Dec 2018, at 6:50 AM, Russell O'Connor  wrote:
> 
> On Sun, Dec 9, 2018 at 2:13 PM Johnson Lau wrote:
> The current proposal is that a 64-byte signature will be used for the default 
> “signing all” sighash, and 65-byte for other sighash types. The space saved 
> will allow a few more txs in a block, so I think it worths doing. However, 
> this also makes witness weight estimation more difficult in multisig cases.
> 
> This idea of signing witness weight has been brought up before. I think the 
> concern is the difficulty to estimate the witness weight for complex scripts, 
> which need this feature most. So it will work when it is not needed, and will 
> not work when it is needed.
> 
> Is there any script example where witness size malleability is unavoidable?
> 
> I tend to think in opposite terms. Is there a proof that any script can be 
> transformed into an equivalent one that avoids witness weight malleability?   
> But I admit there is a trade off:  If we don't allow for signature covers 
> weight, and we do need it, it will be too late to add.  On the other hand if 
> we add signature covers weight, but it turns out that no Script ever needs to 
> use it, then we've added that software complexity for no gain.  However, I 
> think the software complexity is relatively low, making it worthwhile.
> 
> Moreover, even if witness weight malleability is entirely avoidable, it 
> always seems to come at a cost.  Taking as an example libwally's proposed 
> "csv_2of3_then_2" Script,
>  it begins with "OP_DEPTH OP_1SUB OP_1SUB" spending 3 vbytes to avoid any 
> possible witness malleability versus just taking a witness stack item to 
> determine the branch, costing 1 or 2 (unmalleated) vbytes.  Now to be fair, 
> under Taproot this particular script's witness malleability problem probably 
> goes away.  Nonetheless, I think it is fair to say that Bitcoin Script was 
> designed without any regard given to scriptSig/witness malleability concerns 
> and the result is that one is constantly fighting against malleability 
> issues.  Short of a wholesale replacement of Bitcoin Script, I do think that 
> having an option for signature covers weight is one of the best ways to 
> address the whole problem.
> 
> Regarding your point about 64/65-byte signatures; I speculate that in most 
> protocols, all parties that are able to consider signing the weight, know 
> what sighash flags the other parties are expected to be using.  However, your 
> point is well-taken, and if we choose to adopt the option of signatures 
> covering weight, we ought to make sure there exists a 65-byte signature that 
> performs the equivalent of a sigHashAll (of course, still covering that 
> particular sighash flag under the signature), to ensure that 
> anti-weight-malleability can be use even when the sighash flags that other 
> parties will use are unknown.  Even with the extra vbytes in the signatures, 
> there may be a net weight savings by avoiding the need for anti-malleability 
> Script code. (It might also be reasonable to have participants create 
> signatures for a small range of different weight values? (Sorry in advance to 
> PSBT)).

I think the root cause of witness weight malleability is some opcodes accept 
variable size input (without affecting the output), and that input is provided 
by the puzzle solver. Going through the opcode list, I think such opcodes 
include IF, NOTIF, VERIFY, DROP, 2DROP, NIP, DEPTH, and all arithmetic opcodes 
that accept CScriptNum (including CHECKMULTISIG).

VERIFY, DROP, 2DROP, NIP are not a real problem, since they should not be the 
first opcode to interact with data directly provided by the puzzle solver.

CHECKMULTISIG is fixed by BIP147. For the key number and sig number, they 
should be part of the script, so not malleable.

DEPTH is a problem only if the stack items it counts are not later examined by 
other opcodes. Again, such a script is pointless.

The libwally example should be protected by the MINIMAL_IF policy, which 
requires the input of OP_IF to be minimal. As you note, OP_IF could be replaced 
by taproot in many cases.

Non-minimal CScriptNum is also banned as BIP62 policy.
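
The minimal-encoding rule can be sketched as follows (a simplified Python model 
of the check in Bitcoin Core's CScriptNum; for illustration only):

```python
def is_minimal_scriptnum(b: bytes) -> bool:
    # A CScriptNum is little-endian, sign-magnitude. It is minimal if it
    # has no redundant most-significant byte: the last byte may only be
    # 0x00/0x80 when it serves as a sign byte for the preceding byte.
    if len(b) == 0:
        return True           # zero is encoded as the empty vector
    if b[-1] & 0x7f == 0:     # last byte carries only a (possible) sign bit
        if len(b) == 1 or not (b[-2] & 0x80):
            return False
    return True

assert is_minimal_scriptnum(b"")              # 0
assert is_minimal_scriptnum(b"\x01")          # 1
assert not is_minimal_scriptnum(b"\x01\x00")  # 1 with a padding byte
assert not is_minimal_scriptnum(b"\x00")      # padded zero
assert is_minimal_scriptnum(b"\x80\x80")      # -128 needs the extra sign byte
```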

For the purpose of preventing malicious third party witness bloating, all we 
need is the miners to enforce the policy. There is no reason for miners to 
accept size malleated txs, as that will reduce the usable block space. If they 
hate a tx, they would simply drop it, instead of wasting the block space.




Re: [bitcoin-dev] Safer sighashes and more granular SIGHASH_NOINPUT

2018-12-10 Thread Johnson Lau via bitcoin-dev
The current proposal is that a 64-byte signature will be used for the default 
“signing all” sighash, and 65-byte for other sighash types. The space saved 
will allow a few more txs in a block, so I think it is worth doing. However, this 
also makes witness weight estimation more difficult in multisig cases.
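
A sketch of the estimation problem (sizes per the proposal above; treating each 
signature's compact-size prefix as one byte):

```python
def schnorr_sig_size(default_all: bool) -> int:
    # The default "sign all" sighash omits the sighash byte;
    # any explicit sighash type appends one byte.
    return 64 if default_all else 65

def witness_sig_bytes(flags) -> int:
    # Each witness stack item carries a 1-byte length prefix at this size.
    return sum(1 + schnorr_sig_size(d) for d in flags)

best = witness_sig_bytes([True, True])     # both signers use default ALL
worst = witness_sig_bytes([False, False])  # both use an explicit sighash type
assert best == 130 and worst == 132
# In a 2-of-3 spend, a fee estimator cannot know in advance which of
# the two sizes each cosigner will produce.
```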

This idea of signing witness weight has been brought up before. I think the 
concern is the difficulty to estimate the witness weight for complex scripts, 
which need this feature most. So it will work when it is not needed, and will 
not work when it is needed.

Is there any script example that witness size malleability is unavoidable?

> On 7 Dec 2018, at 12:57 AM, Russell O'Connor via bitcoin-dev 
>  wrote:
> 
> One more item to consider is "signature covers witness weight".
> 
> While signing the witness weight doesn't completely eliminate witness 
> malleability (of the kind that can cause grief for compact blocks), it does 
> eliminate the worst kind of witness malleability from the user's perspective, 
> the kind where malicious relay nodes increase the amount of witness data and 
> therefore reduce the overall fee-rate of the transaction.  Generally users 
> should strive to construct their Bitcoin Scripts in such a way that witness 
> malleability isn't possible, but as you are probably aware, this can be quite 
> difficult to achieve as Scripts become more complex and maybe isn't even 
> possible for some complex Scripts.
> 
> Given the new fixed-sized signature of the Schnorr BIP, it becomes much 
> easier to compute the final witness weight prior to signing.  In complex 
> multi-party signing protocol, the final witness weight might not be known at 
> signing time for everyone involved, so the "signature covers witness weight" 
> ought to be optional.
> 
> 




Re: [bitcoin-dev] Safer sighashes and more granular SIGHASH_NOINPUT

2018-11-28 Thread Johnson Lau via bitcoin-dev
This is incompatible with bip-schnorr, which intentionally disallows such use by 
always committing to the public key: 
https://github.com/sipa/bips/blob/bip-schnorr/bip-schnorr.mediawiki

With the recent fake Satoshi signature drama, and other potential ways to 
misuse and abuse, I think this is a better way to go, which unfortunately might 
disallow some legitimate applications.

Covenants could be made using OP_CHECKSIGFROMSTACK 
(https://fc17.ifca.ai/bitcoin/papers/bitcoin17-final28.pdf) or OP_PUSHTXDATA 
(https://github.com/jl2012/bips/blob/vault/bip-0ZZZ.mediawiki). I think this is 
the next step following the taproot soft fork

> On 28 Nov 2018, at 8:54 AM, Bob McElrath via bitcoin-dev 
>  wrote:
> 
> I have been working on an experimental wallet that implements Bitcoin
> Covenants/Vaults following a blog post I wrote about 2 years ago, using
> "Pay-to-Timelock Signed Transaction" (P2TST).  (Also mentioned recently by
> kanzure in a talk somewheres...)  The idea is that you deposit to an address 
> for
> which you don't know the private key.  Instead you construct a second
> transaction sending that to a timelocked staging address for which you DO have
> the privkey (it also has an IF/ELSE condition with a second spending condition
> for use in case of theft attempt).  In order to do this you either have to
> delete the privkey of the deposit address (a difficult proposition, since it is
> hard to know it has actually been deleted), or one can construct a signature
> directly using a RNG, and use the SIGHASH to compute the corresponding pubkey
> via ECDSA
> recover, from which you compute the corresponding address.  In this way your
> wallet is a set of P2TST transactions and a corresponding privkey, with a (set
> of) emergency keys.
> 
> This interacts with NOINPUT in the following way: if the input to the
> transaction commits to the pubkey in any way, you have a circular dependency 
> on
> the pubkey that could only be satisfied by breaking a hash function.  This
> occurs with standard sighash's which commit to the txid, which in turn commit 
> to
> the address, which commits to the pubkey, so this construction of
> covenants/vaults requires NOINPUT.
> 
> AFAICT sipa's proposal is compatible with the above vaulted construction by
> using SIGHASH_NOINPUT | SIGHASH_SCRIPTMASK to remove the
> scriptPubKey/redeemScript from the sighash.  Putting the
> scriptPubKey/redeemScript in the sighash introduces the same circular
> dependency, but SIGHASH_SCRIPTMASK removes it.
> 
> One would probably want to provide the fee from a separate wallet so as to be
> able to account for fluctuating fee pressures when the unvaulting occurs a 
> long
> time after vaulting.  Thus you'd want to use SIGHASH_SINGLE so that a 
> fee-wallet
> can add fees (or for composability of P2TSTs), and SIGHASH_NOFEE as well.
> 
> P.S. Also very excited to combine the above idea with 
> Taproot/Graftroot/g'Root.
> 
> --
> Cheers, Bob McElrath
> 
> "For every complex problem, there is a solution that is simple, neat, and 
> wrong."
>-- H. L. Mencken 
> 
> ___
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev




Re: [bitcoin-dev] Safer sighashes and more granular SIGHASH_NOINPUT

2018-11-28 Thread Johnson Lau via bitcoin-dev


> On 28 Nov 2018, at 11:41 AM, Pieter Wuille via bitcoin-dev 
>  wrote:
> 
> So a combined proposal:
> * All existing sighash flags, plus NOINPUT and MASK
> (ANYONECANPAY/NOINPUT/MASK are encoded in 2 bits).
> * A new opcode called OP_MASKEDPUSH, whose only runtime behaviour is
> failing if not immediately followed by a push, or when appearing as
> last opcode in the script.

I suggest using the slot of OP_RESERVED (0x50) for OP_MASKEDPUSH. The reason is 
that 0x50 is not counted toward the 201-opcode limit, so people could mask as 
many pushes as needed.
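
A minimal sketch of the counting rule (in EvalScript, only opcodes with a value 
above OP_16 are counted toward the limit, so push opcodes and OP_RESERVED fall 
below the threshold; opcode values below are the standard ones, and the model 
ignores push data for brevity):

```python
OP_RESERVED, OP_16, MAX_OPS_PER_SCRIPT = 0x50, 0x60, 201

def counted_ops(opcodes) -> int:
    # Only opcodes above OP_16 count toward the 201-op limit, so
    # OP_RESERVED (0x50, reused here as OP_MASKEDPUSH) is free.
    return sum(1 for op in opcodes if op > OP_16)

OP_DUP, OP_HASH160, OP_EQUALVERIFY, OP_CHECKSIG = 0x76, 0xa9, 0x88, 0xac
script = [OP_RESERVED, 0x14, OP_DUP, OP_HASH160, OP_EQUALVERIFY, OP_CHECKSIG]
assert counted_ops(script) == 4          # 0x50 and the 20-byte push are free
assert counted_ops([OP_RESERVED] * 1000) == 0   # many masked pushes, zero ops
```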

In a new script version, of course, we could exempt any opcode from the count. 
But that would just create another special case in the EvalScript() code.

(Or, maybe we should limit the use of OP_MASKEDPUSH? I think this is open for 
discussion.)

> * Signatures are 64 plus an optional sighash byte. A missing sighash
> byte implies ALL, and ALL cannot be specified explicitly.
> * The sighash is computed from the following:
>  * A 64-byte constant tag
>  * Data about the spending transaction:
>* The transaction version number
>* The hash of txins' prevouts+amounts+sequences (or nothing if 
> ANYONECANPAY)

Do you want to make it 1 hash or 3 hashes? With 3 hashes, it could share 
hashPrevouts and hashSequence with BIP143. Making everything 1 hash will only 
result in redundant hashing for each input.

>* The hash of all txouts (or just the corresponding txout if
> SINGLE; nothing if NONE)

Starting from this sighash version, I think we should forbid the use of SINGLE 
without a matching output. The undefined output type should also be invalid.

>* The transaction locktime
>  * Data about the output being spent:
>* The prevout (or nothing if NOINPUT)
>* The amount
>* The sequence number
>* The hash of the scriptPubKey (or nothing if MASK)

I think we should just use the scriptPubKey, since sPK is fixed size (23-byte 
for p2sh and 35-byte for native segwit).

In order to distinguish p2sh and native segwit for MASKED NOINPUT, you also 
need to commit to an additional 1-bit value

>  * Data about the script being executed:
>* The hash of the scriptCode (after masking out, if MASK is set)

For direct key-spending (i.e. not taprooted script), I suggest setting the 
H(scriptCode) to zero, for the following reasons:
1) Since we have already committed to sPK, which is already a *direct* hash of 
scriptCode, it is redundant to do it again.
2) This could save one SHA256 compression for direct key-spending, which is 
probably 90% of all cases
3) This allows hardware wallet to tell whether they are using direct-spending 
path or taproot script path

Since we may want 3) anyway, we don’t need to commit to another 1-bit value if 
we simply set H(scriptCode) to zero

We should also ban MASKED NOINPUT for direct-spending, which doesn’t make 
sense. And it is not safe since both H(scriptCode) and sPK are empty.

>* The opcode number of the last executed OP_CODESEPARATOR (or
> 0x if none)
>  * The sighash mode

This proposal will only use 4 out of the 8 sighash bits. Do we want to make 
those 4 unused bits invalid, or ignored? Leaving at least 1 bit valid but 
ignored (“bit-x"), and 1 bit invalid (“bit-y”), will allow opt-in/out hardfork 
replay-protection, for example:

* default signatures are those with both bit-x and bit-y unset.
* If we want to make default signatures replayable across chains, the new fork 
should reject signatures with bit-x, and accept sigs with or without bit-y. In 
this case, default sigs are valid for both chains. Sigs with bit-x are valid 
only for the original chain, and sigs with bit-y are valid only for the new chain.
* If we want to make default signatures non-replayable, the new fork should 
reject all default sigs, but accept sigs with either bit-x or bit-y set. In 
this case, default sigs are valid only for the original chain. Sigs with bit-x 
are valid for both chains, and sigs with bit-y are valid only for the new chain.
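
The two schemes above can be sketched as validation predicates (a 
simplification: bit-x is valid-but-ignored and bit-y invalid on the original 
chain):

```python
def valid_on_original(bit_x: bool, bit_y: bool) -> bool:
    # The original chain ignores bit-x and always rejects bit-y.
    return not bit_y

def valid_on_fork(bit_x: bool, bit_y: bool, replayable_defaults: bool) -> bool:
    if replayable_defaults:
        return not bit_x      # reject bit-x, accept with or without bit-y
    return bit_x or bit_y     # reject default sigs, accept either opt-in bit

default, with_x, with_y = (False, False), (True, False), (False, True)
# Replayable scheme: default sigs settle on both chains.
assert valid_on_original(*default) and valid_on_fork(*default, True)
# Non-replayable scheme: default sigs stay on the original chain only,
# while bit-x sigs are valid on both chains.
assert not valid_on_fork(*default, False)
assert valid_on_original(*with_x) and valid_on_fork(*with_x, False)
assert not valid_on_original(*with_y) and valid_on_fork(*with_y, True)
```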

Replayability is sometimes desirable; for example, an LN channel opened before a 
fork should be able to be settled on both chains.



Re: [bitcoin-dev] Safer sighashes and more granular SIGHASH_NOINPUT

2018-11-26 Thread Johnson Lau via bitcoin-dev


> On 23 Nov 2018, at 5:40 PM, Christian Decker via bitcoin-dev 
>  wrote:
> 
> Anthony Towns  writes:
>> Commiting to just the sequence numbers seems really weird to me; it
>> only really prevents you from adding inputs, since you could still
>> replace any input that was meant to be there by almost any arbitrary
>> other transaction...
> 
> It's a really roundabout way of committing to the inputs, I
> agree. I'm actually wondering if it makes sense to correct that
> additional blanked field in BIP118 at all since it seems there is no
> real use-case for NOINPUT that doesn't involve blanking the
> `hashSequence` as well.

I think we should just make it as simple as this: always commit to the sequence of the 
same input. Commit to hashSequence if and only if all inputs and all outputs 
are signed.

The next-generation SIGHASH will introduce not only NOINPUT, but also signing 
of fees, previous scriptPubKey, and all input values, etc. So it won’t be a 
simple hack over BIP143. BIP118 might be better changed to be an informational 
BIP, focus on the rationale and examples of NOINPUT, and be cross-referenced 
with the consensus BIP.


Re: [bitcoin-dev] Safer sighashes and more granular SIGHASH_NOINPUT

2018-11-24 Thread Johnson Lau via bitcoin-dev
>Even still, each call to OP_CODESEPARATOR / OP_CHECKSIG pair requires 
>recomputing a new #5. scriptCode from BIP 143, and hence computes a new 
>transaction digest.

In the existing sighash (i.e. legacy and BIP143), there are 6 canonical SIGHASH 
types: 1, 2, 3, 0x81, 0x82, 0x83. In consensus, however, all 256 types are 
valid and distinct. An adversarial miner could use non-standard sighash types 
to nullify any attempt to cache sighash values (i.e. you have to compute a new 
tx digest for every OP_CHECKSIG, even without using OP_CODESEPARATOR).

The only way to prevent this is to reject OP_CODESEPARATOR, FindAndDelete(), and 
non-standard SIGHASH with a softfork. However, this doesn’t work in the 
next-generation SIGHASH, as tens of standard sighash types will exist. And, 
more importantly, sighash cache is no longer necessary in segwit, with the 
legacy O(n^2) hash bug being fixed.

In summary, a sighash cache is neither necessary nor efficient in the next-generation 
SIGHASH, and is not a sufficient reason to remove OP_CODESEPARATOR, especially 
when people find OP_CODESEPARATOR useful in some way.

But just to be clear, I think OP_CODESEPARATOR should be deprecated in legacy 
scripts. There is a general negative sentiment against OP_CODESEPARATOR but I 
think we need to evaluate case by case.

> On 23 Nov 2018, at 6:10 AM, Russell O'Connor  wrote:
> 
> 
> 
> On Thu, Nov 22, 2018 at 3:53 PM Johnson Lau wrote:
> Assuming a script size of 128 bytes (including SHA256 padding), 2^20 scripts 
> is 134MB. Double it to 268MB for the merkle branch hashes. With roughly 
> 100MB/s, this should take 2.5s (or 42min for 30 levels). However, memory use 
> is not considered.
> 
> >each call to this operation effectively takes O(script-size) time
> I’m not sure if this is correct. Actually, CTransactionSignatureSerializer() 
> scans every script for OP_CODESEPARATOR. Scripts with and without 
> OP_CODESEPARATOR should take exactly the same O(script-size) time (see 
> https://github.com/bitcoin/bitcoin/pull/14786 
> )
> Also, this is no longer a concern under segwit (BIP143), where 
> CTransactionSignatureSerializer() is not used.
> under segwit is way simpler than the proposed OP_MASK. If one finds OP_MASK 
> acceptable, there should be no reason to reject OP_CODESEPARATOR.
> 
> Even still, each call to OP_CODESEPARATOR / OP_CHECKSIG pair requires 
> recomputing a new #5. scriptCode from BIP 143, and hence computes a new 
> transaction digest.  I understood that this issue was the main motivation for 
> wanting to deprecate OP_CODESEPARATOR and remove it from later versions of 
> script.
> 
> However, given that we are looking at a combinatorial explosion in SIGHASH 
> flag combinations already, coupled with existing SigOp limitations, maybe the 
> cost of recomputing scriptCode with OP_CODESEPARATOR isn't such a big deal.
> 
> And even if we choose remove the behavior of OP_CODESEPARATOR in new versions 
> of Script, it seems more than 30 layers of sequential OP_IFs can be 
> MASTified, so there is no need to use OP_CODESEPARATOR within that limit.
> 
> >One suggestion I heard (I think I heard it from Pieter) to achieve the above 
> >is to add an internal counter that increments on every control flow 
> >operator,……...
> If I have to choose among OP_CODESEPARATOR and “flow operator counting”, I’d 
> rather choose OP_CODESEPARATOR. At least we don’t need to add more lines to 
> the consensus code, just for something that is mostly achievable with MAST.
> 



Re: [bitcoin-dev] Safer sighashes and more granular SIGHASH_NOINPUT

2018-11-23 Thread Johnson Lau via bitcoin-dev
Assuming a script size of 128 bytes (including SHA256 padding), 2^20 scripts is 
134MB. Double it to 268MB for the merkle branch hashes. With roughly 100MB/s, 
this should take 2.5s (or 42min for 30 levels). However, memory use is not 
considered.
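
The arithmetic behind this estimate can be sketched as follows (counting the 
branch hashes exactly rather than doubling gives roughly 200MB, or about 2 
seconds at 100MB/s; a rough model that ignores memory and parallelism):

```python
def merkle_hash_bytes(depth: int, leaf_size: int = 128, node_size: int = 64):
    leaves = 1 << depth
    leaf_bytes = leaves * leaf_size        # hash every leaf script once
    node_bytes = (leaves - 1) * node_size  # each branch hashes two 32-byte children
    return leaf_bytes, node_bytes

leaf_b, node_b = merkle_hash_bytes(20)
assert leaf_b == 134_217_728               # the ~134MB figure for 2^20 scripts
seconds = (leaf_b + node_b) / 100e6        # at roughly 100MB/s
assert 1.5 < seconds < 2.5
```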

>each call to this operation effectively takes O(script-size) time
I’m not sure if this is correct. Actually, CTransactionSignatureSerializer() 
scans every script for OP_CODESEPARATOR. Scripts with and without 
OP_CODESEPARATOR should take exactly the same O(script-size) time (see 
https://github.com/bitcoin/bitcoin/pull/14786)
Also, this is no longer a concern under segwit (BIP143), where 
CTransactionSignatureSerializer() is not used. Actually, OP_CODESEPARATOR under 
segwit is way simpler than the proposed OP_MASK. If one finds OP_MASK 
acceptable, there should be no reason to reject OP_CODESEPARATOR.

>One suggestion I heard (I think I heard it from Pieter) to achieve the above 
>is to add an internal counter that increments on every control flow 
>operator,……...
If I have to choose among OP_CODESEPARATOR and “flow operator counting”, I’d 
rather choose OP_CODESEPARATOR. At least we don’t need to add more lines to the 
consensus code, just for something that is mostly achievable with MAST.


> On 23 Nov 2018, at 12:23 AM, Russell O'Connor  wrote:
> 
> I see, so your suggestion is that a sequence of OP_IF ... OP_ENDIF can be 
> replaced by a Merklized Script tree of that depth in practice.
> 
> I'm concerned that at script creation time it takes exponential time to 
> complete a Merkle root of depth 'n'.  Can anyone provide benchmarks or 
> estimates of how long it takes to compute a Merkle root of a full tree of 
> various depths on typical consumer hardware?  I would guess things stop 
> becoming practical at a depth of 20-30.
> 
> On Thu, Nov 22, 2018 at 9:28 AM Johnson Lau  wrote:
> With MAST in taproot, OP_IF etc become mostly redundant, with worse privacy. 
> To maximise fungibility, we should encourage people to use MAST, instead of 
> improving the functionality of OP_IF and further complicating the protocol.
> 
> 
>> On 22 Nov 2018, at 1:07 AM, Russell O'Connor via bitcoin-dev 
>>  wrote:
>> 
>> On Mon, Nov 19, 2018 at 10:22 PM Pieter Wuille via bitcoin-dev 
>>  wrote:
>> So my question is whether anyone can see ways in which this introduces
>> redundant flexibility, or misses obvious use cases?
>> 
>> Hopefully my comment is on-topic for this thread:
>> 
>> Given that we want to move away from OP_CODESEPARATOR, because each call to 
>> this operation effectively takes O(script-size) time, we need a replacement 
>> for the functionality it currently provides.  While perhaps the original 
>> motivation for OP_CODESEPARTOR is surrounded in mystery, it currently can be 
>> used (or perhaps abused) for the task of creating signature that covers, not 
>> only which input is being signed, but which specific branch within that 
>> input Script code is being signed for.
>> 
>> For example, one can place an OP_CODESEPARATOR within each branch of an IF 
>> block, or by placing an OP_CODESEPARATOR before each OP_CHECKSIG operation.  
>> By doing so, signatures created for one clause cannot be used as signatures 
>> for another clause.  Since different clauses in Bitcoin Script may be 
>> enforcing different conditions (such as different time-locks, hash-locks, 
>> etc), it is useful to be able to sign in such a way that your signature is 
>> only valid when the conditions for a particular branch are satisfied.  In 
>> complex Scripts, it may not be practical or possible to use different public 
>> keys for every different clause. (In practice, you will be able to get away 
>> with fewer OP_CODESEPARATORS than one in every IF block).
>> 
>> One suggestion I heard (I think I heard it from Pieter) to achieve the above 
>> is to add an internal counter that increments on every control flow 
>> operator, OP_IF, OP_NOTIF, OP_ELSE, OP_ENDIF, and have the signature cover 
>> the value of this counter.  Equivalently we divide every Bitcoin Script 
>> program into blocks deliminated by these control flow operator and have the 
>> signature cover the index of the block that the OP_CHECKSIG occurs within.  
>> More specifically, we will want a SigHash flag to enables/disable the 
>> signature covering this counter.
>> 
>> There are many different ways one might go about replacing the remaining 
>> useful behaviour of OP_CODESEPARATOR than the one I gave above. I would be 
>> happy with any solution.
>> ___
>> bitcoin-dev mailing list
>> bitcoin-dev@lists.linuxfoundation.org
>> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
> 




Re: [bitcoin-dev] Safer sighashes and more granular SIGHASH_NOINPUT

2018-11-23 Thread Johnson Lau via bitcoin-dev
With MAST in taproot, OP_IF etc become mostly redundant, with worse privacy. To 
maximise fungibility, we should encourage people to use MAST, instead of 
improving the functionality of OP_IF and further complicating the protocol.


> On 22 Nov 2018, at 1:07 AM, Russell O'Connor via bitcoin-dev 
>  wrote:
> 
> On Mon, Nov 19, 2018 at 10:22 PM Pieter Wuille via bitcoin-dev 
> wrote:
> So my question is whether anyone can see ways in which this introduces
> redundant flexibility, or misses obvious use cases?
> 
> Hopefully my comment is on-topic for this thread:
> 
> Given that we want to move away from OP_CODESEPARATOR, because each call to 
> this operation effectively takes O(script-size) time, we need a replacement 
> for the functionality it currently provides.  While perhaps the original 
> motivation for OP_CODESEPARTOR is surrounded in mystery, it currently can be 
> used (or perhaps abused) for the task of creating signature that covers, not 
> only which input is being signed, but which specific branch within that input 
> Script code is being signed for.
> 
> For example, one can place an OP_CODESEPARATOR within each branch of an IF 
> block, or by placing an OP_CODESEPARATOR before each OP_CHECKSIG operation.  
> By doing so, signatures created for one clause cannot be used as signatures 
> for another clause.  Since different clauses in Bitcoin Script may be 
> enforcing different conditions (such as different time-locks, hash-locks, 
> etc), it is useful to be able to sign in such a way that your signature is 
> only valid when the conditions for a particular branch are satisfied.  In 
> complex Scripts, it may not be practical or possible to use different public 
> keys for every different clause. (In practice, you will be able to get away 
> with fewer OP_CODESEPARATORS than one in every IF block).
> 
> One suggestion I heard (I think I heard it from Pieter) to achieve the above 
> is to add an internal counter that increments on every control flow operator, 
> OP_IF, OP_NOTIF, OP_ELSE, OP_ENDIF, and have the signature cover the value of 
> this counter.  Equivalently we divide every Bitcoin Script program into 
> blocks deliminated by these control flow operator and have the signature 
> cover the index of the block that the OP_CHECKSIG occurs within.  More 
> specifically, we will want a SigHash flag to enables/disable the signature 
> covering this counter.
> 
> There are many different ways one might go about replacing the remaining 
> useful behaviour of OP_CODESEPARATOR than the one I gave above. I would be 
> happy with any solution.
> ___
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev



Re: [bitcoin-dev] Safer sighashes and more granular SIGHASH_NOINPUT

2018-11-22 Thread Johnson Lau via bitcoin-dev
If we sign the txids of all inputs, we should also explicitly commit to their 
values. Only this could fully eliminate any possible way to lie about input 
value to hardware wallets

> Does it make sense to keep SIGHASH_NONE?
SIGHASH_NONE should be kept. ANYONECANPAY|NONE allows donation of dust UTXOs to 
miners

> I think NONE without NOFEE doesn't make much sense…….
We might refuse to sign weird combinations like NOFEE|ALLINPUT|ALLOUTPUT. But 
to keep the consensus logic simple, we should just validate it as usual.

> OP_MASK seems a bit complicated to me. …...
Yes, it looks complicated to me, and it improves security only in some 
avoidable edge cases in SIGHASH_NOINPUT:

The common case: the exact masked script or address is reused. OP_MASK can’t 
prevent signature replay since the masked script is the same.

The avoidable case: the same public key is reused in different script 
templates. OP_MASK may prevent signature replay if the masked script is not the 
same.

The latter case is totally avoidable since one could and should use a different 
public key for each different script.

It could be made much simpler as NOINPUT with/without SCRIPT. This again is 
only helpful in the avoidable case above, but it doesn’t bring too much 
complexity.

> I don't have a reason why, but committing to the scriptCode feels to me like 
> it reduces the "hackiness" of NOINPUT a lot.
OP_MASK is designed to preserve the hackiness, while providing some sort of 
replay protection (only in avoidable cases). However, I’m not sure who would 
actually need NOINPUT with KNOWNSCRIPT

> On 21 Nov 2018, at 4:29 AM, Anthony Towns via bitcoin-dev 
>  wrote:
> 
> On Mon, Nov 19, 2018 at 02:37:57PM -0800, Pieter Wuille via bitcoin-dev wrote:
>> Here is a combined proposal:
>> * Three new sighash flags are added: SIGHASH_NOINPUT, SIGHASH_NOFEE,
>> and SIGHASH_SCRIPTMASK.
>> * A new opcode OP_MASK is added, which acts as a NOP during execution.
>> * The sighash is computed like in BIP143, but:
>>  * If SIGHASH_SCRIPTMASK is present, for every OP_MASK in scriptCode
>> the subsequent opcode/push is removed.
>>  * The scriptPubKey being spent is added to the sighash, unless
>> SIGHASH_SCRIPTMASK is set.
>>  * The transaction fee is added to the sighash, unless SIGHASH_NOFEE is set.
>>  * hashPrevouts, hashSequence, and outpoint are set to null when
>> SIGHASH_NOINPUT is set (like BIP118, but not for scriptCode).
> 
> Current flags are {ALL, NONE, SINGLE} and ANYONECANPAY, and the BIP143
> tx digest consists of the hash of:
> 
>  1 nVersion
>  4 outpoint
>  5 input scriptCode
>  6 input's outpoint value
>  7 input's nSeq
>  9 nLocktime
> 10 sighash
> 
>  2 hashPrevOuts (commits to 4,5,6; unless ANYONECANPAY)
>  3 hashSequence (commits to 7; only if ALL and not ANYONECANPAY)
>  8 hashOutputs
>   - NONE: 0
>   - SINGLE: {value,scriptPubKey} for corresponding output
>   - otherwise: {value,scriptPubKey} for all outputs
> 
> The fee is committed to by hashPrevOuts and hashOutputs, which means
> NOFEE is only potentially useful if ANYONECANPAY or NONE or SINGLE is set.
> 
> For NOINPUT, (2),(3),(4) are cleared, and SCRIPTMASK (which munges (5))
> is only useful given NOINPUT, since (4) indirectly commits to (5). 
> 
> Given this implementation, NOINPUT effectively implies ANYONECANPAY,
> I think. (I think that is also true of BIP 118's NOINPUT spec)
> 
> Does it make sense to treat this as two classes of options, affecting
> the input and output side:
> 
>  output: (pick one, using bits 0,1)
>* NONE -- don't care where the money goes
>* SINGLE -- want this output
>* ALL -- want exactly this set of outputs
> 
>  input: (pick one, using bits 4,5)
>* PARTIALSCRIPT -- spending from some tx with roughly this script (and
>   maybe others; SCRIPTMASK|NOINPUT|ANYONECANPAY)
>* KNOWNSCRIPT -- spending from some tx with exactly this script (and
> maybe others; NOINPUT|ANYONECANPAY)
>* KNOWNTX -- spending from this tx (and maybe others; ANYONECANPAY)
>* ALL_INPUTS -- spending from exactly these txes
> 
>  combo: (flag, bit 6)
>* NOFEE -- don't commit to the fee
> 
> I think NONE without NOFEE doesn't make much sense, and
> NOFEE|ALL|ALL_INPUTS would also be pretty weird. Might make sense to
> warn/error on signing when asking for those combinations, and maybe even
> to fail on validating them.
> 
> (Does it make sense to keep SIGHASH_NONE? I guess SIGHASH_NONE|ALL_INPUTS
> could be useful if you just use sigs on one of the other inputs to commit
> to a useful output)
> 
> FWIW, OP_MASK seems a bit complicated to me. How would you mask a script
> that looks like:
> 
>   OP_MASK IF  ENDIF  ...
> 
> or:
> 
>   IF OP_MASK ENDIF  ...
> 
> I guess if you make the rule be "for every OP_MASK in scriptCode the
> *immediately* subsequent opcode/push is removed (if present)" it would
> be fine though -- that would make OP_MASK in both the above not have
> any effect. (Maybe a more explicit 

Re: [bitcoin-dev] BIP sighash_noinput

2018-09-26 Thread Johnson Lau via bitcoin-dev
In BIP143, the nSequence of the same input is always signed, with any hashtype. 
Why do you need to sign the sequence of other inputs?

> On 26 Sep 2018, at 5:36 PM, Jonas Nick via bitcoin-dev 
>  wrote:
> 
>> At the risk of bikeshedding, shouldn't NOINPUT also zero out the
>> hashSequence so that its behaviour is consistent with ANYONECANPAY?
> 
> There is a good reason for not doing that. If NOINPUT would sign the
> hashSequence then it would be possible to get rid of OP_CSV in eltoo update
> scripts. As a result update scripts could be taprootified because the more
> common branch (settlement) would be just a 2-of-2 multisig. Applying taproot
> would then make unilateral settlement look like a single pubkey spend and 
> avoid
> having to reveal the unexecuted (update) branch.
> 
> Eltoo update transaction outputs consist of two branches, update and
> settlement, where the update branch can be spend by a more recent update
> transaction if an obsolete update transaction ends up spending the funding
> output. The settlement branch is a 2-of-2 multisig with a relative timelock
> using OP_CSV. Removing OP_CSV is possible because both parties signature is
> required to spend the update transaction. They will only sign if the input has
> the right sequence numbers which is sufficient to enforce the timeout (BIP68) 
> -
> assuming they are covered by the signature.
> 
> There's a catch: hashSequence includes the sequence numbers of all transaction
> inputs. That's not a problem for eltoo because settlement transactions only
> have one input. The update mechanism with update transactions relies on being
> able to bump the fee by unilaterally adding inputs and change outputs to
> the transaction. That's also not a problem because update spends do not use
> relative timelocks and they are signed with SINGLE. So whenever NOINPUT is
> combined with SINGLE the hashSequence should be zeroed. This is in fact what a
> minimal change to the current NOINPUT implementation would naturally do (see
> below). However, that's error-prone when using NOINPUT in other contexts so in
> general it would be better if NOINPUT would only sign the sequence number of
> the corresponding input.
> 
> Another downside of this approach is that you can never rebind to an output
> with an OP_CSV that requires a larger sequence number, unless you also sign
> with SIGHASH_SINGLE. It's difficult to imagine an application where this would be
> an issue.
> 
> This is the modification to the NOINPUT implementation
> (https://github.com/cdecker/bitcoin/commits/noinput) which makes eltoo
> unilateral closes taprootifiable:
> +++ b/src/script/interpreter.cpp
> @@ -1223,7 +1223,7 @@ uint256 SignatureHash(const CScript& scriptCode, const 
> CTransaction& txTo, unsig
> hashPrevouts = cacheready ? cache->hashPrevouts : 
> GetPrevoutHash(txTo);
> }
> 
> -if (!(nHashType & SIGHASH_ANYONECANPAY) && (nHashType & 0x1f) != 
> SIGHASH_SINGLE && (nHashType & 0x1f) != SIGHASH_NONE && !noinput) {
> +if (!(nHashType & SIGHASH_ANYONECANPAY) && (nHashType & 0x1f) != 
> SIGHASH_SINGLE && (nHashType & 0x1f) != SIGHASH_NONE) {
> hashSequence = cacheready ? cache->hashSequence : 
> GetSequenceHash(txTo);
> }
> 
> On 5/1/18 4:58 PM, Russell O'Connor via bitcoin-dev wrote:
>> At the risk of bikeshedding, shouldn't NOINPUT also zero out the
>> hashSequence so that its behaviour is consistent with ANYONECANPAY?
>> 
> 


___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] SIGHASH2 for version 1 witness programme

2018-08-31 Thread Johnson Lau via bitcoin-dev
signatures commit to the size of witness.
>> 
>> Any suggestions are welcomed. But I have the following questions:
>> 
>> 1. Should NOINPUT commit to scriptCode and/or scriptPubKey? I think it 
>> should, because that helps avoid signature replay in some cases, and also 
>> lets hardware wallets know what they are signing. I am asking this because 
>> BIP118 proposes the opposite. I want to make sure I’m not missing something 
>> here.
>> 2. Do we want to add LASTOUTPUT and DUALOUTPUT? Suggested by gmaxwell, an 
>> example use case is kickstarter, where individual supporters send money to 
>> the last output for a kickstarter project, and send change to the matched 
>> output. However, I doubt if this would be actually used this way, because 
>> the kickstarter organiser could always take the money before the target is 
>> met, by making up the difference with his own input. This is an inherent 
>> problem for any anonymous kickstarter scheme. If these new SIGHASHs are not 
>> useful in other applications, I am not sure if we should add them.
>> 3. Instead of these restrictive MATCH/LAST/DUALOUTPUT, do we want a fully 
>> flexible way to sign a subset of outputs? The indexes of signed outputs are 
>> put at the end of the signature, and the signature won’t commit to these 
>> index values. Therefore, a third party could collect all transactions of 
>> this kind and merge them into one transaction. To limit the sighash cost, 
>> number of signed outputs might not be more than 2 or 3. Some potential 
>> problems: a) code complexity; b) 1-byte or 2-byte index: 1-byte will limit 
>> the number of outputs to 256. 2-byte will use more space even for smaller 
>> txs; c) highly variable signature size makes witness size estimation more 
>> difficult
>> 4. Should we sign the exact witness size (as proposed), or an upper size 
>> limit? Signing an upper limit will take up more space, as the limit has to 
>> be explicitly shown in the witness. The overhead could be avoided by showing 
>> the limit only if the actual witness size is not equal to the committed 
>> limit. However, I tend to keep it simple and sign the exact value. If in a 
>> multi-sig setup some signers are unable to accurately estimate the witness 
>> size, they should leave this responsibility to the last signer who should 
>> know the exact size.
>> 
>> 
>>> On 1 Jun 2018, at 2:35 AM, Johnson Lau via bitcoin-dev 
>>>  wrote:
>>> 
>>> Since 2016, I have made a number of proposals for the next generation of 
>>> script. Since then, there has been a lot of exciting development on this 
>>> topic. The most notable ones are Taproot and Graftroot proposed by Maxwell. 
>>> It seems the most logical way is to implement MAST and other new script 
>>> functions inside Taproot and/or Graftroot. Therefore, I substantially 
>>> simplified my earlier proposal on SIGHASH2. It is a superset of the 
>>> existing SIGHASH and the BIP118 SIGHASH_NOINPUT, with further flexibility 
>>> but not being too complicated. It also fixes some minor problems that we 
>>> found in the late stage of BIP143 review. For example, the theoretical (but 
>>> not statistical) possibility of having same SignatureHash() results for a 
>>> legacy and a witness transaction. This is fixed by padding a constant at 
>>> the end of the message so collision would not be possible.
>>> 
>>> A formatted version and example code could be found here:
>>> https://github.com/jl2012/bips/blob/sighash2/bip-sighash2.mediawiki
>>> https://github.com/jl2012/bitcoin/commits/sighash2
>>> 
>>> 
>>> 
>>> 
>>> BIP: YYY
>>> Layer: Consensus (soft fork)
>>> Title: Signature checking operations in version 1 witness program
>>> Author: Johnson Lau 
>>> Comments-Summary: No comments yet.
>>> Comments-URI: https://github.com/bitcoin/bips/wiki/Comments:BIP-0YYY
>>> Status: Draft
>>> Type: Standards Track
>>> Created: 2017-07-19
>>> License: BSD-3-Clause
>>> 
>>> 
>>> *Abstract
>>> 
>>> This BIP defines signature checking operations in version 1 witness program.
>>> 
>>> *Motivation
>>> 
>>> Use of compact signatures to save space.
>>> 
>>> More SIGHASH options, more flexibility
>>> 
>>> *Specification
>>> 
>>> The following specification is applicable to OP_CHECKSIG and 
>>> OP_CHECKSIGVERIFY in version 1 witness program.
>>> 
>>> **Public Key Format
&

Re: [bitcoin-dev] SIGHASH2 for version 1 witness programme

2018-08-30 Thread Johnson Lau via bitcoin-dev
After gathering some feedbacks I substantially revised the proposal. This 
version focus on improving security, and reduces the number of optional 
features.

Formatted BIP and sample code at:
https://github.com/jl2012/bips/blob/sighash2/bip-sighash2.mediawiki
https://github.com/jl2012/bitcoin/commits/sighash2

The major new features compared with BIP143:

1. If signing all inputs, also sign all input value. BIP143 signature only 
covers the value of the same input. In some cases this may not be adequate for 
hardware wallet to determine the right amount of fees. Signing all input values 
will secure any possible case.
2. Sign both scriptCode and previous scriptPubKey. In the original bitcoin 
design, previous scriptPubKey is signed as the scriptCode. However, this is not 
the case with P2SH and segwit. Explicitly committing to the scriptPubKey allows 
hardware wallet to confirm what it is actually signing (e.g. P2SH-segwit vs. 
Native-segwit).
3. SIGHASH2_NOINPUT: basically same as BIP118, but the signature commits to 
both scriptCode and scriptPubKey. This prevents signature replay if the same 
public key is used in different scripts.
4. SIGHASH2_MATCHOUTPUT (previously SIGHASH_SINGLE) disallows out-of-range case.
5. SIGHASH2_LASTOUTPUT: signs only the highest index output.
6. SIGHASH2_DUALOUTPUT: signs the matched output and the highest index output. 
Described by gmaxwell at 
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-July/016188.html 
7. Signing the amount of fees (optional, yes by default). In case of not 
signing all inputs or outputs, users may still want to commit to a specific fee 
amount.
8. Signing the witness size (optional, yes by default). While segwit fixed txid 
malleability, it is not a fix of script malleability. It may cause some trouble 
if an attacker could bloat the witness and reduce the fee priority of a 
transaction. Although the witness size is not malleable for most simple 
scripts, this is not guaranteed for more complex ones. Such kind of size 
malleability could be avoided if signatures commit to the size of witness.

Any suggestions are welcomed. But I have the following questions:

1. Should NOINPUT commit to scriptCode and/or scriptPubKey? I think it should, 
because that helps avoid signature replay in some cases, and also lets hardware 
wallets know what they are signing. I am asking this because BIP118 proposes 
the opposite. I want to make sure I’m not missing something here.
2. Do we want to add LASTOUTPUT and DUALOUTPUT? Suggested by gmaxwell, an 
example use case is kickstarter, where individual supporters send money to the 
last output for a kickstarter project, and send change to the matched output. 
However, I doubt if this would be actually used this way, because the 
kickstarter organiser could always take the money before the target is met, by 
making up the difference with his own input. This is an inherent problem for 
any anonymous kickstarter scheme. If these new SIGHASHs are not useful in other 
applications, I am not sure if we should add them.
3. Instead of these restrictive MATCH/LAST/DUALOUTPUT, do we want a fully 
flexible way to sign a subset of outputs? The indexes of signed outputs are put 
at the end of the signature, and the signature won’t commit to these index 
values. Therefore, a third party could collect all transactions of this kind 
and merge them into one transaction. To limit the sighash cost, number of 
signed outputs might not be more than 2 or 3. Some potential problems: a) code 
complexity; b) 1-byte or 2-byte index: 1-byte will limit the number of outputs 
to 256. 2-byte will use more space even for smaller txs; c) highly variable 
signature size makes witness size estimation more difficult
4. Should we sign the exact witness size (as proposed), or an upper size limit? 
Signing an upper limit will take up more space, as the limit has to be 
explicitly shown in the witness. The overhead could be avoided by showing the 
limit only if the actual witness size is not equal to the committed limit. 
However, I tend to keep it simple and sign the exact value. If in a multi-sig 
setup some signers are unable to accurately estimate the witness size, they 
should leave this responsibility to the last signer who should know the exact 
size.


> On 1 Jun 2018, at 2:35 AM, Johnson Lau via bitcoin-dev 
>  wrote:
> 
> Since 2016, I have made a number of proposals for the next generation of 
> script. Since then, there has been a lot of exciting development on this 
> topic. The most notable ones are Taproot and Graftroot proposed by Maxwell. 
> It seems the most logical way is to implement MAST and other new script 
> functions inside Taproot and/or Graftroot. Therefore, I substantially 
> simplified my earlier proposal on SIGHASH2. It is a superset of the existing 
> SIGHASH and the BIP118 SIGHASH_NOINPUT, with 

Re: [bitcoin-dev] Testnet3 Reset

2018-08-30 Thread Johnson Lau via bitcoin-dev
A public testnet is still useful so that people could make references to these 
transactions in articles.

Maybe we could have 2 testnets at the same time, with one having a smaller 
block size?

> On 31 Aug 2018, at 4:02 AM, Peter Todd via bitcoin-dev 
>  wrote:
> 
> Signed PGP part
> On Thu, Aug 30, 2018 at 12:58:42PM +0530, shiva sitamraju via bitcoin-dev 
> wrote:
>> Hi,
>> 
>> Testnet is now 1411795 blocks and a full sync is taking at least 48 hours.
>> 
>> Is a testnet reset scheduled in the next release or any reason not to do a
>> reset ?
>> 
>> Fast onboarding/lower disk overheads would be very much appreciated for
>> testing purposes
> 
> Actually I'd advocate the opposite: I'd want testnet to be a *larger*
> blockchain than mainnet to find size-related issues first.
> 
> Note that for testing regtest is often a better alternative, and you can setup
> private regtest blockchains fairly easily and with good control over exactly
> when and how blocks are created.
> 
> -- 
> https://petertodd.org 'peter'[:-1]@petertodd.org
> 
> 




Re: [bitcoin-dev] Getting around to fixing the timewarp attack.

2018-08-25 Thread Johnson Lau via bitcoin-dev
To determine the new difficulty, it is supposed to compare the timestamps of 
block (2016n - 1) with block (2016n - 2017). However, an off-by-one bug makes 
it compare with block (2016n - 2016) instead.

A naive but perfect fix is to require every block (2016x) to have a timestamp 
not smaller than that of its parent block. However, a chain-split would happen 
even without any attack, unless a super-majority of miners are enforcing the new 
rules. This also involves a mandatory upgrade of pool software (cf. pool software 
upgrade is not mandatory for segwit). The best way is to do it with something 
like BIP34, which also requires new pool software. 

We could have a weaker version of this, requiring the timestamp of block (2016x) 
to be not smaller than that of its parent block by more than t seconds, with 
0 <= t <= infinity. With a bigger t, the fix is less effective but also less 
likely to cause an intentional/unintentional split. Status quo is t = infinity.

Reducing the value of t is a softfork. The aim is to find a t which is 
small-enough-to-prohibit-timewarp-attack but also big-enough-to-avoid-split. 
With t=86400 (one day), a timewarp attacker may bring down the difficulty by 
about 1/14 = 7.1% per round. Unless new blocks were coming incredibly slowly, the 
attacker needs to manipulate the MTP for at least 24 hours, or try to rewrite 
24 hours of history. Such a scale of 51% attack is already above the 100-block 
coinbase maturity safety threshold and we are facing a much bigger problem.

With t=86400, a non-majority, opportunistic attacker may split the chain only 
if we have no new block for at least 24 - 2 = 22 hours (2 hours is the protocol 
limit for using a future timestamp) at the exact moment of retarget. That means 
no retarget is possible in the next 2016 blocks. Doing a timewarp attack at 
this point is not quite interesting as the coin is probably already worthless. 
Again, this is a much bigger problem than the potential chain split. People 
will yell for a difficulty (and timewarp fix, maybe) hardfork to resuscitate 
the chain.
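
The off-by-one described above, and the intended window, can be illustrated with a toy model (not consensus code; heights and timestamps are simplified to perfectly spaced blocks):

```python
# Toy model of the retarget windows. Difficulty is recomputed every 2016
# blocks. The intended window spans 2016 intervals (blocks 2016n-2017 to
# 2016n-1), but the off-by-one measures only 2015 intervals (blocks
# 2016n-2016 to 2016n-1), so consecutive windows do not overlap and the
# boundary timestamps can be manipulated.

TARGET_SPACING = 600   # seconds per block
INTERVAL = 2016        # blocks per retarget period

def actual_timespan(timestamps, n, buggy=True):
    """Timespan used for the retarget at height 2016n (list indexed by height)."""
    last = timestamps[INTERVAL * n - 1]
    first_height = INTERVAL * n - (INTERVAL if buggy else INTERVAL + 1)
    return last - timestamps[first_height]

# With perfectly spaced blocks, the buggy window is one spacing short:
ts = [h * TARGET_SPACING for h in range(2 * INTERVAL)]
print(actual_timespan(ts, 2, buggy=True))    # 1209000 (2015 intervals)
print(actual_timespan(ts, 2, buggy=False))   # 1209600 (2016 intervals)
```
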

 


> On 21 Aug 2018, at 4:14 AM, Gregory Maxwell via bitcoin-dev 
>  wrote:
> 
> Since 2012 (IIRC) we've known that Bitcoin's non-overlapping
> difficulty calculation was vulnerable to gaming with inaccurate
> timestamps to massively increase the rate of block production beyond
> the system's intentional design. It can be fixed with a soft-fork that
> further constraints block timestamps, and a couple of proposals have
> been floated along these lines.
> 
> I put a demonstration of timewarp early in the testnet3 chain to also
> let people test mitigations against that.  It pegs the difficulty way
> down and then churned out blocks at the maximum rate that the median
> time protocol rule allows.
> 
> I, and I assume others, haven't put a big priority into fixing this
> vulnerability because it requires a majority hashrate and could easily
> be blocked if someone started using it.
> 
> But there haven't been too many other network consensus rules going on
> right now, and I believe at least several of the proposals suggested
> are fully compatible with existing behaviour and only trigger in the
> presence of exceptional circumstances-- e.g. a timewarp attack.  So
> the risk of deploying these mitigations would be minimal.
> 
> Before I dust off my old fix and perhaps prematurely cause fixation on
> a particular approach, I thought it would be useful to ask the list if
> anyone else was aware of a favourite backwards compatible timewarp fix
> proposal they wanted to point out.
> 
> Cheers.




Re: [bitcoin-dev] SIGHASH2 for version 1 witness programme

2018-06-01 Thread Johnson Lau via bitcoin-dev


> On 2 Jun 2018, at 2:15 AM, Russell O'Connor  wrote:
> 
> 
> I prefer a different opcode for CHECKSIGFROMSTACK because I dislike opcodes 
> that pop a non-static number of elements off the stack.  Popping a dynamic 
> number of stack elements makes it more difficult to validate that a Script 
> pubkey doesn't allow any funny business.


Agreed. This is one of the reasons I think we should remove CHECKMULTISIG in 
the new script system.


Re: [bitcoin-dev] SIGHASH2 for version 1 witness programme

2018-06-01 Thread Johnson Lau via bitcoin-dev


> On 1 Jun 2018, at 11:03 PM, Russell O'Connor  wrote:
> 
> 
> 
> On Thu, May 31, 2018 at 2:35 PM, Johnson Lau via bitcoin-dev 
>  <mailto:bitcoin-dev@lists.linuxfoundation.org>> wrote:
> 
>   Double SHA256 of the serialization of:
> 
> Should we replace the Double SHA256 with a Single SHA256?  There is no 
> possible length extension attack here.  Or are we speculating that there is a 
> robustness of Double SHA256 in the presence of SHA256 breaking?
> 
> I suggest putting `sigversion` at the beginning instead of the end of the 
> format.  Because its value is constant, the beginning of the SHA-256 
> computation could be pre-computed in advance.  Furthermore, if we make the 
> `sigversion` exactly 64-bytes long then the entire first block of the SHA-256 
> compression function could be pre-computed.
> 
> Can we add CHECKSIGFROMSTACK or do you think that would go into a separate 
> BIP?

I think it’s just a tradition to use double SHA256. One reason we might want to 
keep dSHA256 is that a blind signature might be done by giving only the single 
SHA256 hash to the signer. At the same time, a non-Bitcoin signature scheme 
might use SHA512-SHA256. So a blind signer could distinguish the message type 
without learning the message.

sigversion is a response to Peter Todd’s comments on BIP143: 
https://petertodd.org/2016/segwit-consensus-critical-code-review#bip143-transaction-signature-verification

I make it a 0x0100 at the end of the message because the last 4 bytes have 
been the nHashType in the legacy/BIP143 protocol. Since the maximum legacy 
nHashType is 0xff, no collision could ever occur.

Putting a 64-byte constant at the beginning should also work, since a collision 
would mean SHA256 is no longer preimage resistant. I don’t know much about SHA256 
optimisation. How much does it help if we put a 64-byte constant at the 
beginning, while also making the message 64 bytes longer?

For CHECKSIGFROMSTACK (CSFS), I think the question is whether we want to make 
it as a separate opcode, or combine that with CHECKSIG. If it is a separate 
opcode, I think it should be a separate BIP. If it is combined with CHECKSIG, 
we could do something like this: If the bit 10 of SIGHASH2 is set, CHECKSIG 
will pop one more item from stack, and serialize its content with the 
transaction digest. Any thought?




[bitcoin-dev] Disallow insecure use of SIGHASH_SINGLE

2018-05-31 Thread Johnson Lau via bitcoin-dev
I’ve made a PR to add a new policy to disallow using SIGHASH_SINGLE without 
matched output:

https://github.com/bitcoin/bitcoin/pull/13360

Signature of this form is insecure, as it commits to no output while users 
might think it commits to one. It is even worse in non-segwit scripts, which is 
effectively SIGHASH_NOINPUT|SIGHASH_NONE, so any UTXO of the same key could be 
stolen. (It’s restricted to only one UTXO in segwit, but it’s still like a 
SIGHASH_NONE.)

This is one of the earliest unintended consensus behavior. Since these 
signatures are inherently unsafe, I think it does no harm to disable this 
unintended “feature” with a softfork. But since these signatures are currently 
allowed, the first step is to make them non-standard.
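
For reference, the insecure case comes from the legacy SignatureHash() behaviour: with SIGHASH_SINGLE and an input index past the last output, the function returns the constant uint256(1) rather than a hash of the transaction. A toy sketch (hypothetical helper name, not the real serialization):

```python
# Toy model of legacy SIGHASH_SINGLE digest selection. When the input
# index has no matching output, the "digest" is the constant 1, so the
# signature commits to nothing about the spending transaction.
import hashlib

def legacy_single_digest(tx_bytes, input_index, num_outputs):
    if input_index >= num_outputs:
        # The bug: a fixed 32-byte message, uint256(1) in little-endian.
        return (1).to_bytes(32, "little")
    return hashlib.sha256(hashlib.sha256(tx_bytes).digest()).digest()

# Two unrelated spends hitting the bug produce the same message,
# so one signature is valid for both:
print(legacy_single_digest(b"tx-a", 2, 1) == legacy_single_digest(b"tx-b", 5, 3))  # True
```

This is why such a signature behaves like SIGHASH_NOINPUT|SIGHASH_NONE in non-segwit scripts: the signed message is the same constant for every affected (transaction, input) pair.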


[bitcoin-dev] SIGHASH2 for version 1 witness programme

2018-05-31 Thread Johnson Lau via bitcoin-dev
Since 2016, I have made a number of proposals for the next generation of 
script. Since then, there has been a lot of exciting development on this topic. 
The most notable ones are Taproot and Graftroot proposed by Maxwell. It seems 
the most logical way is to implement MAST and other new script functions inside 
Taproot and/or Graftroot. Therefore, I substantially simplified my earlier 
proposal on SIGHASH2. It is a superset of the existing SIGHASH and the BIP118 
SIGHASH_NOINPUT, with further flexibility but not being too complicated. It 
also fixes some minor problems that we found in the late stage of BIP143 
review. For example, the theoretical (but not statistical) possibility of 
having same SignatureHash() results for a legacy and a witness transaction. 
This is fixed by padding a constant at the end of the message so collision 
would not be possible.

A formatted version and example code could be found here:
https://github.com/jl2012/bips/blob/sighash2/bip-sighash2.mediawiki
https://github.com/jl2012/bitcoin/commits/sighash2




BIP: YYY
  Layer: Consensus (soft fork)
  Title: Signature checking operations in version 1 witness program
  Author: Johnson Lau 
  Comments-Summary: No comments yet.
  Comments-URI: https://github.com/bitcoin/bips/wiki/Comments:BIP-0YYY
  Status: Draft
  Type: Standards Track
  Created: 2017-07-19
  License: BSD-3-Clause


*Abstract

This BIP defines signature checking operations in version 1 witness program.

*Motivation

Use of compact signatures to save space.

More SIGHASH options, more flexibility

*Specification

The following specification is applicable to OP_CHECKSIG and OP_CHECKSIGVERIFY 
in version 1 witness program.

**Public Key Format

The public key MUST be exactly 33 bytes.

If the first byte of the public key is a 0x02 or 0x03, it MUST be a compressed 
public key. The signature is a Schnorr signature (To be defined separately)

If the first byte of the public key is neither 0x02 nor 0x03, the signature is 
assumed valid. This is for future upgrade.

**Signature Format

The following rules apply only if the first byte of the public key is a 0x02 or 
0x03.

If the signature size is 64 to 66 bytes, it MUST be a valid Schnorr signature or 
the script execution MUST fail (cf. BIP146 NULLFAIL). The first 32 bytes are the 
R value in big-endian. The next 32 bytes are the S value in big-endian. The 
remaining data, if any, denotes the hashtype in little-endian (0 to 0x).

hashtype MUST be minimally encoded. Any trailing zero MUST be removed.

If the signature size is zero, it is accepted as the "valid failing" signature 
for OP_CHECKSIG to return a FALSE value to the stack. (cf. BIP66)

The script execution MUST fail with a signature size that is not 0, 64, 65, or 66 bytes.
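
The size rules above can be sketched as a parser (illustrative only; parse_sig is a hypothetical helper name, not part of the proposal):

```python
# Sketch of the quoted signature-size rules: 0 bytes is the "valid
# failing" signature; 64-66 bytes is R || S || hashtype, where hashtype
# is 0-2 bytes little-endian and must be minimally encoded (no trailing
# zero byte); any other size fails script execution.

def parse_sig(sig: bytes):
    if len(sig) == 0:
        return None                        # CHECKSIG pushes FALSE (cf. BIP66)
    if len(sig) not in (64, 65, 66):
        raise ValueError("script execution fails")
    r, s, ht = sig[:32], sig[32:64], sig[64:]
    if ht and ht[-1] == 0:                 # trailing zero: not minimally encoded
        raise ValueError("hashtype not minimally encoded")
    return r, s, int.from_bytes(ht, "little")

r, s, hashtype = parse_sig(b"\x01" * 64 + b"\x41")
print(hex(hashtype))  # 0x41
```

A 64-byte signature carries no hashtype bytes, which under this encoding decodes to hashtype 0.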

**New hashtype definitions

hashtype and the SignatureHash function are re-defined:

  Double SHA256 of the serialization of:
 1. nVersion (4-byte little endian)
 2. hashPrevouts (32-byte hash)
 3. hashSequence (32-byte hash)
 4. outpoint (32-byte hash + 4-byte little endian)
 5. scriptCode (serialized as scripts inside CTxOuts)
 6. nAmount (8-byte little endian)
 7. nSequence (4-byte little endian)
 8. hashOutputs (32-byte hash)
 9. nLocktime (4-byte little endian)
10. nInputIndex (4-byte little endian)
11. nFees (8-byte little endian)
12. hashtype (4-byte little endian)
13. sigversion (4-byte little endian for the fixed value 0x0100)

The bit 0 to 3 of hashtype denotes a value between 0 and 15:

• If the value is 1, the signature is invalid.
• If the value is 3 or below, hashPrevouts is the hash of all inputs, 
same as defined in BIP143. Otherwise, it is 32-byte of 0x...
• If the value is 7 or below, outpoint is the COutPoint of the current 
input. Otherwise, it is 36-byte of 0x...
• If the value is 0, hashSequence is the hash of all sequence numbers, 
same as defined in BIP143. Otherwise, it is 32-byte of 0x...
• If the value is even (including 0), nSequence is the nSequence of the 
current input. Otherwise, it is 0x.
• If the value is 6, 7, 10, 11, 14, or 15, nInputIndex is 0x. 
Otherwise, it is the index of the current input.
• If the value is 11 or below, nAmount is the value of the current 
input (same as BIP143). Otherwise, it is 0x.

The bit 4 and 5 of hashtype denotes a value between 0 and 3:

• If the value is 0, hashOutputs is same as the SIGHASH_ALL case in 
BIP143 as a hash of all outputs.
• If the value is 1, the signature is invalid.
• If the value is 2, hashOutputs is same as the SIGHASH_SINGLE case in 
BIP143 as a hash of the matching output. If a matching output does not exist, 
hashOutputs is 32-byte of 0x...
• If the value is 3, hashOutputs is 32-byte of 0x...
If bit 6 is set (SIGHASH2_NOFEE), nFees is 0x. Otherwise, it is 
the fee 
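
The message is cut off after the NOFEE rule, but the field-selection logic up to that point can be sketched (illustrative only, following the rules exactly as quoted; the function name is hypothetical):

```python
# Sketch of SIGHASH2 field selection as quoted above. Bits 0-3 select
# the input-side commitments, bits 4-5 the output-side commitments, and
# bit 6 (SIGHASH2_NOFEE) masks the fee. "masked" stands for the
# all-0x.../0x... sentinel values in the spec.

def decode_sighash2(hashtype: int):
    v = hashtype & 0x0F          # bits 0-3
    o = (hashtype >> 4) & 0x03   # bits 4-5
    if v == 1 or o == 1:
        raise ValueError("invalid signature")
    return {
        "hashPrevouts": "all inputs" if v <= 3 else "masked",
        "outpoint": "current input" if v <= 7 else "masked",  # masked = NOINPUT-like
        "hashSequence": "all inputs" if v == 0 else "masked",
        "nSequence": "current input" if v % 2 == 0 else "masked",
        "nInputIndex": "masked" if v in (6, 7, 10, 11, 14, 15) else "current",
        "nAmount": "current input" if v <= 11 else "masked",
        "hashOutputs": {0: "all outputs", 2: "matching output", 3: "masked"}[o],
        "nFees": "masked" if hashtype & 0x40 else "fee amount",
    }

# hashtype 0 is the most restrictive signature: everything is committed.
print(decode_sighash2(0)["hashOutputs"])  # all outputs
```
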

Re: [bitcoin-dev] Should Graftroot be optional?

2018-05-25 Thread Johnson Lau via bitcoin-dev


> On 24 May 2018, at 10:08 AM, Gregory Maxwell via bitcoin-dev 
>  wrote:
> 
> On Thu, May 24, 2018 at 1:58 AM, Pieter Wuille via bitcoin-dev
>  wrote:
>> Thanks everyone who commented so far, but let me clarify the context
>> of this question first a bit more to avoid getting into the weeds too
>> much.
> 
> since the signer(s) could have signed an arbitrary
> transaction instead, being able to delegate is strictly less powerful.
> 


Actually, we could introduce graftroot-like delegation to all existing and new 
P2PK and P2PKH UTXOs with a softfork:

1. Define SIGHASH_GRAFTROOT = 0x40. New rules apply if (nHashType & 
SIGHASH_GRAFTROOT)

2. The third stack item MUST be at least 64 bytes, with 32-byte R and 32-byte 
S, followed by a script of arbitrary size. It MUST be a valid signature for the 
script with the original public key.

3. The remaining stack items MUST solve the script

Conceptually this could be extended to arbitrary output types, not just P2PK 
and P2PKH. But it might be too complicated to describe here.

(We can’t do this in P2WPKH and P2WSH due to the implicit CLEANSTACK rules. But 
nothing could stop us to do it by introducing another witness structure (which 
is invisible to current nodes) and store the extra graftroot signatures and 
scripts)
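
Rules 1-3 above can be sketched as follows; schnorr_verify and run_script are hypothetical stand-ins for consensus code, and the stack layout is simplified:

```python
# Sketch of the graftroot-like delegation rules 2-3 above. The third
# stack item is a 64-byte signature followed by a delegated script; the
# original key must have signed that script, and the remaining stack
# items must satisfy it. schnorr_verify and run_script are stubs.

def check_graftroot(pubkey, stack, schnorr_verify, run_script):
    delegation = stack[2]                 # rule 2: third stack item
    if len(delegation) < 64:
        return False
    sig, script = delegation[:64], delegation[64:]
    if not schnorr_verify(pubkey, script, sig):   # key signs the script itself
        return False
    return run_script(script, stack[3:])          # rule 3: remaining items solve it

# Toy stand-ins just to exercise the control flow:
verify = lambda pk, msg, sig: pk == b"PK" and sig == b"S" * 64
run = lambda script, items: script == b"OP_TRUE"
print(check_graftroot(b"PK", [b"", b"", b"S" * 64 + b"OP_TRUE"], verify, run))  # True
```
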

A graftroot design like this is a strict subset of existing signature checking 
rules. If this is dangerous, the existing signature checking rules must be 
dangerous. This also doesn’t have any problem with blind signature, since the 
first signature will always sign the outpoint, with or without ANYONECANPAY. 
(As I pointed out in my reply to Andrew, his concern applies only to 
SIGHASH_NOINPUT, not graftroot)


==

With the example above, I believe there is no reason to make graftroot 
optional, at the expense of block space and/or reduced privacy. Any potential 
problem (e.g. interaction with blind signature) could be fixed by a proper 
rules design.


Re: [bitcoin-dev] Should Graftroot be optional?

2018-05-25 Thread Johnson Lau via bitcoin-dev
While you have rescinded your concern, I’d like to point out that it’s strictly a 
problem of SIGHASH_NOINPUT, not graftroot (or script delegation in general).

For example, we could modify graftroot. Instead of signing the (script), we 
require it to sign (outpoint | script). That means a graftroot signature would 
never be valid for more than one UTXO.

> On 24 May 2018, at 1:52 AM, Andrew Poelstra via bitcoin-dev 
>  wrote:
> 
> On Wed, May 23, 2018 at 01:50:13PM +, Andrew Poelstra via bitcoin-dev 
> wrote:
>> 
>> Graftroot also break blind signature schemes. Consider a protocol such as [1]
>> where some party has a bunch of UTXOs all controlled (in part) by the same
>> key X. This party produces blind signatures on receipt of new funds, and can
>> only verify the number of signatures he produces, not anything about what he
>> is signing.
>> 
>> BTW, the same concern holds for SIGHASH_NOINPUT, which I'd also like to be
>> disable-able. Maybe we should extend one of ZmnSCPxj's suggestions to include
>> a free "flags" byte or two in the witness?
>> 
>> (I also had the same concern about signature aggregation. It seems like it's
>> pretty hard to preserve the "one signature = at most one input" invariant of
>> Bitcoin, but I think it's important that it is preserved, at least for
>> outputs that need it.)
>> 
>> Or maybe, since it appears it will require a space hit to support optional
>> graftroot anyway, we should simply not include it in a proposal for Taproot,
>> since there would be no opportunity cost (in blockchain efficiency) to doing
>> it later.
>> 
>> [1] https://github.com/apoelstra/scriptless-scripts/pull/1 
>> 
> 
> On further thought, I rescind this concern (ditto for SIGHASH_NOINPUT) (but
> not for aggregate sigs, they still interact badly with blind signatures).
> 
> As long as graftroot (and NOINPUT) sigs commit to the public key, it is
> possible for a server to have unique keys for every output, even while
> retaining the same private key, and thereby ensure that "one sig can spend
> only one output" holds. To do this, suppose the server has a BIP32 xpubkey
> (xG, cc). A blind signer using the private key x can be made to sign not
> only for xG, but also for any publicly-derived child keys of (xG, cc).
> 
> Here is a simple scheme that does this:
> 
>  1. Signer provides a nonce R = kG
> 
>  2. Challenger computes bip32 tweak h, chooses blinders alpha and beta,
> and computes:
> R' = R + alpha*G + beta*P
> e  = H(P + hG || R' || tx)
> e' = e + beta
> and sends e' to the signer.
> 
>  3. Signer replies with s = k + xe' (= k + beta*x + (x + h)e - he)
> 
>  4. Challenger unblinds this as s' = s + alpha + he
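
As a sanity check on the scheme quoted above (using the post's notation, with P = xG the signer's key), the unblinding in step 4 works out as:

```latex
\begin{align*}
s' &= s + \alpha + he
    = k + x(e + \beta) + \alpha + he
    = \underbrace{(k + \alpha + \beta x)}_{\text{discrete log of } R'} + (x + h)e, \\
s'G &= (kG + \alpha G + \beta P) + e\,(xG + hG) = R' + e\,(P + hG),
\end{align*}
```

i.e., (R', s') is an ordinary Schnorr signature for the child key P + hG with challenge e = H(P + hG || R' || tx), consistent with steps 1-3.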
> 
> (This blind signature scheme is vulnerable to Wagner's attack, though see
> Schnorr 2004 [1] for mitigations that are perfectly compatible with this
> modified BIP32ish scheme.)
> 
> I'm unsure whether key-prefixing is _actually_ necessary for this, but it
> makes the security argument much clearer since the messagehash contains
> some data which can be made unique per-utxo and is committed in the chain.
> 
> 
> Andrew
> 
> 
> [1] 
> http://citeseerx.ist.psu.edu/viewdoc/download;jsessionid=8ECEF929559865FD68D1D873555D18FE?doi=10.1.1.68.9836=rep1=pdf
> 
> 
> -- 
> Andrew Poelstra
> Mathematics Department, Blockstream
> Email: apoelstra at wpsoftware.net
> Web:   https://www.wpsoftware.net/andrew
> 
> "A goose alone, I suppose, can know the loneliness of geese
> who can never find their peace,
> whether north or south or west or east"
>   --Joanna Newsom
> 
> ___
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev




Re: [bitcoin-dev] Making OP_TRUE standard?

2018-05-09 Thread Johnson Lau via bitcoin-dev


> On 10 May 2018, at 3:27 AM, Peter Todd <p...@petertodd.org> wrote:
> 
> On Thu, May 10, 2018 at 01:56:46AM +0800, Johnson Lau via bitcoin-dev wrote:
>> You should make a “0 fee tx with exactly one OP_TRUE output” standard, but 
>> nothing else. This makes sure CPFP will always be needed, so the OP_TRUE 
>> output won’t pollute the UTXO set
>> 
>> Instead, would you consider using ANYONECANPAY to sign the tx, so it is 
>> possible to add more inputs for fees? The total tx size is bigger than the 
>> OP_TRUE approach, but you don’t need to ask for any protocol change.
>> 
>> In the long term, I think the right way is to have a more flexible SIGHASH 
>> system to allow people to add more inputs and outputs easily.
> 
> I don't think that will work, as a zero-fee tx won't get relayed even with
> CPFP, due to the fact that we haven't yet implemented package-based tx
> relaying.
> 
> -- 
> https://petertodd.org 'peter'[:-1]@petertodd.org

My only concern is UTXO pollution. There could be a “CPFP anchor” softfork, under 
which outputs with an empty scriptPubKey and 0 value are spendable only in the 
same block. If not spent immediately, they become invalid and are removed from the 
UTXO set. But I still think the best solution is a more flexible SIGHASH system, 
which doesn’t need CPFP at all.


Re: [bitcoin-dev] Making OP_TRUE standard?

2018-05-09 Thread Johnson Lau via bitcoin-dev
You should make a “0 fee tx with exactly one OP_TRUE output” standard, but 
nothing else. This makes sure CPFP will always be needed, so the OP_TRUE output 
won’t pollute the UTXO set

Instead, would you consider using ANYONECANPAY to sign the tx, so it is 
possible to add more inputs for fees? The total tx size is bigger than the OP_TRUE 
approach, but you don’t need to ask for any protocol change.

In the long term, I think the right way is to have a more flexible SIGHASH system 
to allow people to add more inputs and outputs easily.



> On 9 May 2018, at 7:57 AM, Rusty Russell via bitcoin-dev wrote:
> 
> Hi all,
> 
>The largest problem we are having today with the lightning
> protocol is trying to predict future fees.  Eltoo solves this elegantly,
> but meanwhile we would like to include a 546 satoshi OP_TRUE output in
> commitment transactions so that we use minimal fees and then use CPFP
> (which can't be done at the moment due to CSV delays on outputs).
> 
> Unfortunately, we'd have to P2SH it at the moment as a raw 'OP_TRUE' is
> non-standard.  Are there any reasons not to suggest such a policy
> change?
> 
> Thanks!
> Rusty.




Re: [bitcoin-dev] BIP 117 Feedback

2018-03-05 Thread Johnson Lau via bitcoin-dev
Altstack in v0 P2WSH should be left untouched. If anyone is already using the 
altstack, BIP117 would very likely confiscate those UTXOs, because the altstack 
contents would be unlikely to be executable.

Even in v1 witness, I think the altstack should remain a temporary data storage.

The “(many scripts) concatenated together in reverse order to form a serialized 
script” in BIP117 has exactly the same security hole as Satoshi’s scriptSig + 
OP_CODESEPARATOR + scriptPubKey. That means it is possible to skip execution 
of scriptPubKey by using a scriptSig ending with an invalid push operation, so 
the whole concatenated script becomes a single push.
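The hole described above is easy to demonstrate with a toy parser. This is a deliberate simplification (real parsing also handles OP_PUSHDATA2/4, and modern validation rejects truncated pushes): a scriptSig ending in an over-long push claim turns the naively concatenated scriptPubKey into mere push data, so its CHECKSIG never executes.

```python
OP_PUSHDATA1 = 0x4c
OP_CHECKSIG  = 0xac

def parse_ops(script: bytes):
    """Very small parser: returns (kind, value) pairs; an over-long push
    claim simply swallows everything up to the end of the script."""
    i, ops = 0, []
    while i < len(script):
        op = script[i]; i += 1
        if 1 <= op <= 75:                 # direct push of `op` bytes
            ops.append(("push", script[i:i + op])); i += op
        elif op == OP_PUSHDATA1:          # next byte gives the push length
            ln = script[i] if i < len(script) else 0
            ops.append(("push", script[i + 1:i + 1 + ln])); i += 1 + ln
        else:
            ops.append(("op", op))
    return ops

script_pubkey = bytes([OP_CHECKSIG])      # stand-in for real spending conditions
# scriptSig claims a 255-byte push but provides nothing: when naively
# concatenated, the claim swallows scriptPubKey, so CHECKSIG never executes.
script_sig = bytes([OP_PUSHDATA1, 0xff])
combined = script_sig + script_pubkey

ops = parse_ops(combined)
assert ops == [("push", script_pubkey)]   # whole scriptPubKey became push data
```

The same failure mode applies to any scheme that rebuilds an executable script by blind concatenation of attacker-controlled pieces.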

For the sigop limit, I think it’d become more and more difficult to maintain the 
current static analyzability model as we try to introduce more functions. I 
think we should just migrate to a model of limiting sigops per weight, and count 
the actual number of sigops during execution 
(https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-February/015764.html). 
Actually, this approach is cheaper to analyse, as you only need to look at 
the witness size, and don’t need to look at the script at all.



> On 9 Jan 2018, at 6:22 AM, Rusty Russell via bitcoin-dev wrote:
> 
> I've just re-read BIP 117, and I'm concerned about its flexibility.  It
> seems to be doing too much.
> 
> The use of altstack is awkward, and makes me query this entire approach.
> I understand that CLEANSTACK painted us into a corner here :(
> 
> The simplest implementation of tail recursion would be a single blob: if
> a single element is left on the altstack, pop and execute it.  That
> seems trivial to specify.  The treatment of concatenation seems like
> trying to run before we can walk.
> 
> Note that if we restrict this for a specific tx version, we can gain
> experience first and get fancier later.
> 
> BIP 117 also drops SIGOP and opcode limits.  This requires more
> justification, in particular, measurements and bounds on execution
> times.  If this analysis has been done, I'm not aware of it.
> 
> We could restore statically analyzability by rules like so:
> 1.  Only applied for tx version 3 segwit txs.
> 2.  For version 3, top element of stack is counted for limits (perhaps
>with discount).
> 3.  The blob popped off for tail recursion must be identical to that top
>element of the stack (ie. the one counted above).
> 
> Again, future tx versions could drop such restrictions.
> 
> Cheers,
> Rusty.




[bitcoin-dev] Alternative way to count sigops

2018-02-16 Thread Johnson Lau via bitcoin-dev
Short history


Satoshi introduced sigop counting as a softfork to limit the number of 
signature operations in a block. He statically counted all 
OP_CHECK(MULTI)SIG(VERIFY) in both scriptSig and scriptPubKey, assumed an 
OP_CHECKMULTISIG to be equivalent to 20 OP_CHECKSIGs, and enforced a block limit 
of 20,000 sigops. The counting is not contextual, i.e. one doesn’t need the UTXO 
set to determine the number of sigops. The counting was also static, so one 
doesn’t need to execute a script in order to count sigops. However, this is 
completely wrong for a few reasons: a) opcodes in scriptPubKey are not executed; 
b) the scriptPubKeys of spent UTXOs, which are actually executed, are not counted 
at all; c) it greatly overestimates the cost of multi-sig; d) it counts sigops in 
unexecuted branches.
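The legacy counting can be sketched in a few lines. This is a simplification (it ignores OP_PUSHDATA1/2/4 and the canonical-multisig refinement that P2SH later added), but it is enough to show flaws (c) and (d):

```python
OP_CHECKSIG, OP_CHECKSIGVERIFY = 0xac, 0xad
OP_CHECKMULTISIG, OP_CHECKMULTISIGVERIFY = 0xae, 0xaf
OP_IF, OP_ELSE, OP_ENDIF = 0x63, 0x67, 0x68

def legacy_sigop_count(script: bytes) -> int:
    """Satoshi-style static count: every CHECK(MULTI)SIG(VERIFY) opcode seen
    while walking the script counts, with multisig charged a flat 20."""
    count, i = 0, 0
    while i < len(script):
        op = script[i]; i += 1
        if 1 <= op <= 75:                      # skip direct push data
            i += op
        elif op in (OP_CHECKSIG, OP_CHECKSIGVERIFY):
            count += 1
        elif op in (OP_CHECKMULTISIG, OP_CHECKMULTISIGVERIFY):
            count += 20                        # flaw (c): even 1-of-2 costs 20
    return count

# flaw (d): sigops in both branches of an OP_IF count, though only one executes
branchy = bytes([OP_IF, OP_CHECKSIG, OP_ELSE, OP_CHECKSIG, OP_ENDIF])
assert legacy_sigop_count(branchy) == 2
assert legacy_sigop_count(bytes([OP_CHECKMULTISIG])) == 20
```

The count never touches the UTXO set or the interpreter, which is exactly why it both overcounts and undercounts what is actually validated.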

As P2SH was introduced, sigop counting also covered the sigop redeemScript. 
This is good because redeemScript is what being executed. It also improved the 
counting of OP_CHECKMULTISIG. If it is in certain canonical form, it would 
count the number of public keys instead of assuming it as 20. On the other 
hand, counting sigop becomes not possible without the UTXO set, since one needs 
UTXO to identify P2SH inputs. Also, the canonical OP_CHECKMULTISIG counting is 
not quite elegant and created more special cases in the code.

Segwit (BIP141) scaled the legacy sigop limit by 4x. So every legacy sigop 
becomes 4 new sigops, with a block limit of 80,000 new sigops. P2WPKH is counted 
as 1 new sigop, and P2WSH is counted in the same way as P2SH.



Problem


We now have multiple 2nd generation script proposals, such as BIP114, BIP117, 
taproot, etc. BIP114 and taproot allow static sigop counting, but not BIP117, 
as it requires execution to determine what would be run as script (like 
OP_EVAL). As we want to allow more complicated script functions, static sigop 
counting might not be easy. However, we still want to have a limit to avoid 
unexpected DoS attack.




Proposal


Since we have a block weight limit of 4,000,000 and a sigop limit of 80,000, each 
sigop cannot use more than 50 weight units on average. For new script 
proposals we could count the actual number of sigop at execution (i.e. skip 
unexecuted sigop, skip 0-size signature, count the actual checksig operations 
in multi-sig), and make sure the number of executed sigop * 50 is not greater 
than the size of the input.

The minimal size of each input is 32 (prevout.hash) + 4 (prevout.n) + 4 
(nSequence) + 1 (empty scriptSig) = 41 bytes, or 164 weight units. So the new 
rule would require that (164 + input witness size) >= (actual_sigop * 50). This 
is a per-input limit, as script validation is parallel.
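The per-input budget reduces to a one-liner. The helper name below is hypothetical; the constant 164 comes from 4 × 41 bytes of minimal non-witness input data, and witness bytes count 1 weight unit each:

```python
def max_sigops_per_input(witness_size: int, ratio: int = 50) -> int:
    """Proposed per-input budget: (164 WU base + witness size) // ratio,
    where 164 = 4 * (32 + 4 + 4 + 1) weight units for outpoint,
    nSequence and an empty scriptSig."""
    BASE_WEIGHT = 4 * (32 + 4 + 4 + 1)   # = 164 weight units
    return (BASE_WEIGHT + witness_size) // ratio

# A witness-less input still gets a budget of 3 executed sigops...
assert max_sigops_per_input(0) == 3
# ...and a typical P2WPKH witness (~107 bytes: sig + pubkey) easily
# affords its single sigop.
assert max_sigops_per_input(107) == 5
```

Enforcing `actual_sigops <= max_sigops_per_input(witness_size)` is exactly the inequality (164 + input witness size) >= (actual_sigop * 50) stated above, rearranged.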

Since a compressed key is 33 bytes and a normal compact signature is 64 bytes, 
the 1:50 ratio should be more than enough to allow any normal use of CHECKSIG, 
unless people are doing weird things like 2DUP 2DUP 2DUP…….. CHECKSIG CHECKSIG 
CHECKSIG CHECKSIG, which would have many sigops with a relatively small 
witness. Interactive per-tx signature aggregation allows a 64-byte-per-tx 
signature, and per-block non-interactive signature aggregation allows 
32 bytes/signature 
(https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-May/014272.html). 
In such cases, the 1:50 ratio might not be enough if many signatures are 
aggregated. Depending on the number of sigops we could tolerate, the 1:50 ratio 
might be reduced to 1:32 or lower to make sure legitimate use would never hit 
the limit. I think 32 is reasonable as it is about the size of a public key, 
which would guarantee each pubkey gets a sigop slot.

So a relay node could be certain that a tx won’t spend excessive CPU power by 
just looking at its size. If it spends too much, it is invalid and script 
execution could be terminated early.




Re: [bitcoin-dev] Why SegWit Anyway?

2017-11-20 Thread Johnson Lau via bitcoin-dev
Not really. BIP140 might be easier to implement, but in the long term the UTXO 
overhead is significant and unnecessary. There are also other benefits of 
segwit written in BIP141. Some of those are applicable even if you are making a 
new coin.

> On 21 Nov 2017, at 2:07 AM, Praveen Baratam  wrote:
> 
> BIP 140 looks like it solves Tx Malleability with least impact on current 
> practices. It is still a soft fork though.
> 
> Finally, if we were to create an alternative cyptocurrency similar to 
> Bitcoin, a Normalized Tx ID approach would be a better choice if I get it 
> right!
> 
> On Mon, Nov 20, 2017 at 11:15 PM, Johnson Lau wrote:
> We can’t “just compute the Transaction ID the same way the hash for signing 
> the transaction is computed” because with different SIGHASH flags, there are 
> 6 (actually 256) ways to hash a transaction.
> 
> Also, changing the definition of TxID is a hardfork change, i.e. everyone is 
> required to upgrade or a chain split will happen.
> 
> It is possible to use “normalised TxID” (BIP140) to fix malleability issue. 
> As a softfork, BIP140 doesn’t change the definition of TxID. Instead, the 
> normalised txid (i.e. txid with scriptSig removed) is used when making 
> signature. Comparing with segwit (BIP141), BIP140 does not have the 
> side-effect of block size increase, and doesn’t provide any incentive to 
> control the size of UTXO set. Also, BIP140 makes the UTXO set permanently 
> bigger, as the database needs to store both txid and normalised txid
> 
>> On 21 Nov 2017, at 1:24 AM, Praveen Baratam via bitcoin-dev wrote:
>> 
>> Bitcoin Noob here. Please forgive my ignorance.
>> 
>> From what I understand, in SegWit, the transaction needs to be serialized 
>> into a data structure that is different from the current one where 
>> signatures are separated from the rest of the transaction data.
>> 
>> Why change the format at all? Why can’t we just compute the Transaction ID 
>> the same way the hash for signing the transaction is computed?
>> 
>> -- 
>> Dr. Praveen Baratam
>> 
>> about.me
> 
> 
> 
> 
> -- 
> Dr. Praveen Baratam
> 
> about.me 


Re: [bitcoin-dev] Why SegWit Anyway?

2017-11-20 Thread Johnson Lau via bitcoin-dev
We can’t “just compute the Transaction ID the same way the hash for signing the 
transaction is computed” because with different SIGHASH flags, there are 6 
(actually 256) ways to hash a transaction.

Also, changing the definition of TxID is a hardfork change, i.e. everyone is 
required to upgrade or a chain split will happen.

It is possible to use “normalised TxID” (BIP140) to fix malleability issue. As 
a softfork, BIP140 doesn’t change the definition of TxID. Instead, the 
normalised txid (i.e. txid with scriptSig removed) is used when making 
signature. Comparing with segwit (BIP141), BIP140 does not have the side-effect 
of block size increase, and doesn’t provide any incentive to control the size 
of UTXO set. Also, BIP140 makes the UTXO set permanently bigger, as the 
database needs to store both txid and normalised txid

> On 21 Nov 2017, at 1:24 AM, Praveen Baratam via bitcoin-dev wrote:
> 
> Bitcoin Noob here. Please forgive my ignorance.
> 
> From what I understand, in SegWit, the transaction needs to be serialized 
> into a data structure that is different from the current one where signatures 
> are separated from the rest of the transaction data.
> 
> Why change the format at all? Why can’t we just compute the Transaction ID the 
> same way the hash for signing the transaction is computed?
> 
> -- 
> Dr. Praveen Baratam
> 
> about.me 



[bitcoin-dev] Making OP_CODESEPARATOR and FindAndDelete in non-segwit scripts non-standard

2017-11-15 Thread Johnson Lau via bitcoin-dev
In https://github.com/bitcoin/bitcoin/pull/11423 I propose to make 
OP_CODESEPARATOR and FindAndDelete in non-segwit scripts non-standard.

I think FindAndDelete() is one of the most useless and complicated functions in 
the script language. It is omitted from segwit (BIP143), but we still need to 
support it in non-segwit scripts. Actually, FindAndDelete() would only be 
triggered in some weird edge cases like using out-of-range SIGHASH_SINGLE.
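For illustration only, here is a byte-level sketch of what FindAndDelete() does to scriptCode before it is hashed for signing. The real consensus function walks opcodes and removes only whole matching pushes; this simplified version just strips matching byte runs:

```python
def find_and_delete(script: bytes, sig_push: bytes) -> bytes:
    """Illustrative FindAndDelete(): strip every occurrence of the
    serialized signature push from scriptCode before SignatureHash()
    is computed over it."""
    out = bytearray()
    i = 0
    while i < len(script):
        if sig_push and script[i:i + len(sig_push)] == sig_push:
            i += len(sig_push)          # drop the matched push entirely
        else:
            out.append(script[i]); i += 1
    return bytes(out)

sig_push = b"\x02\xab\xcd"                    # 2-byte push of a fake "signature"
script_code = b"\x51" + sig_push + b"\xac"    # OP_1 <sig> OP_CHECKSIG
assert find_and_delete(script_code, sig_push) == b"\x51\xac"
```

The point of the proposal is that once FindAndDelete() and OP_CODESEPARATOR are gone from non-segwit scripts, the scriptCode fed to SignatureHash() is simply a constant, and none of this machinery is needed.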

Non-segwit scripts also use a FindAndDelete()-like function to remove 
OP_CODESEPARATOR from scriptCode. Note that in BIP143, only executed 
OP_CODESEPARATORs are removed, so it doesn’t have the FindAndDelete()-like 
function. OP_CODESEPARATOR in segwit scripts is useful for TumbleBit, so it is 
not disabled in this proposal.

By disabling both, it guarantees that scriptCode serialized inside 
SignatureHash() must be constant

If we use a softfork to remove FindAndDelete() and OP_CODESEPARATOR from 
non-segwit scripts, we could completely remove FindAndDelete() from the 
consensus code later by whitelisting all blocks before the softfork block. The 
first step is to make them non-standard in the next release.




Re: [bitcoin-dev] cleanstack alt stack & softfork improvements (Was: Merkle branch verification & tail-call semantics for generalized MAST)

2017-09-21 Thread Johnson Lau via bitcoin-dev

> On 22 Sep 2017, at 12:33 AM, Luke Dashjr  wrote:
> 
> On Thursday 21 September 2017 8:02:42 AM Johnson Lau wrote:
>> I think it’s possible only if you spend more witness space to store the
>> (pubkey, message) pairs, so that old clients could understand the
>> aggregation produced by new clients. But this completely defeats the
>> purpose of doing aggregation.
> 
> SigAgg is a softfork, so old clients *won't* understand it... am I missing 
> something?
> 
> For example, perhaps the lookup opcode could have a data payload itself (eg, 
> like pushdata opcodes do), and the script can be parsed independently from 
> execution to collect the applicable ones.

I think the current idea of sigagg is something like this: the new OP_CHECKSIG 
still has 2 arguments: the top stack item must be a 33-byte public key, and the 
2nd top stack item is the signature. Depending on the sig size, it returns a 
different value:

If sig size is 0, it returns a 0 to the top stack
If sig size is 1, it is treated as a SIGHASH flag, and the SignatureHash() 
“message” is calculated. It sends the (pubkey, message) pair to the aggregator, 
and always returns a 1 to the top stack
If sig size is >1, it is treated as the aggregated signature. The last byte is 
SIGHASH flag. It sends the (pubkey, message) pair and the aggregated signature 
to the aggregator, and always returns a 1 to the top stack.

If all scripts pass, the aggregator will combine all pairs to obtain the aggkey 
and aggmsg, and verify against aggsig. A tx may have at most 1 aggsig.

(The version I presented above is somewhat simplified but should be enough to 
illustrate my point)
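The dispatch just described can be sketched as follows. All names here are hypothetical: `message` stands in for the SignatureHash() result, and the aggregator is just a dict collecting (pubkey, message) pairs plus at most one aggregated signature:

```python
def sigagg_checksig(stack, aggregator, message):
    """Sketch of the sigagg OP_CHECKSIG semantics described above.
    Stack top is the 33-byte pubkey; the item beneath it is the sig."""
    pubkey, sig = stack.pop(), stack.pop()
    assert len(pubkey) == 33
    if len(sig) == 0:
        stack.append(0)               # empty sig: plain failure, nothing collected
    elif len(sig) == 1:               # sig is just a SIGHASH flag
        aggregator["pairs"].append((pubkey, message))
        stack.append(1)               # always "succeeds"; real check is deferred
    else:                             # sig carries the aggregated signature
        aggregator["pairs"].append((pubkey, message))
        aggregator["aggsig"] = sig[:-1]   # last byte is the SIGHASH flag
        stack.append(1)
    return stack

agg = {"pairs": [], "aggsig": None}
stack = [b"\x01", b"\x02" * 33]       # bottom..top: <sighash-flag> <pubkey>
sigagg_checksig(stack, agg, message=b"msg1")
assert stack == [1] and agg["pairs"] == [(b"\x02" * 33, b"msg1")]
```

After all inputs run, the aggregator combines the collected pairs and verifies the single aggregated signature; the hardfork risk below comes from old and new clients collecting different pair sets.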

So if we have this script:

OP_1 OP_RETURNTRUE <pubkey> OP_CHECKSIG

Old clients would stop at the OP_RETURNTRUE, and will not send the pubkey to 
the aggregator

If we softfork OP_RETURNTRUE to something else, even as OP_NOP11, new clients 
will send the (key, msg) pair to the aggregator. Therefore, the aggregator of 
old and new clients will see different data, leading to a hardfork.

OTOH, OP_NOP based softfork would not have this problem because it won’t 
terminate script and return true.


> 
>>> This is another approach, and one that seems like a good idea in general.
>>> I'm not sure it actually needs to take more witness space - in theory,
>>> such stack items could be implied if the Script engine is designed for
>>> it upfront. Then it would behave as if it were non-verify, while
>>> retaining backward compatibility.
>> 
>> Sounds interesting but I don’t get it. For example, how could you make a
>> OP_MUL out of OP_NOP?
> 
> The same as your OP_MULVERIFY at the consensus level, except new clients 
> would 
> execute it as an OP_MUL, and inject pops/pushes when sending such a 
> transaction to older clients. The hash committed to for the script would 
> include the inferred values, but not the actual on-chain data. This would 
> probably need to be part of some kind of MAST-like softfork to be viable, and 
> maybe not even then.
> 
> Luke

I don’t think it’s worth the code complexity, just to save a few bytes of data 
sent over wire; and to be a soft fork, it still takes the block space.

Maybe we could create many OP_DROPs and OP_2DROPs, so new VERIFY operations 
could pop the stack. This saves 1 byte and also looks cleaner.

Another approach is to use a new script version for every new non-verify type 
operation. Problem is we will end up with many versions. Also, signatures from 
different versions can’t be aggregated. (We may have multiple aggregators in a 
transaction)





Re: [bitcoin-dev] cleanstack alt stack & softfork improvements (Was: Merkle branch verification & tail-call semantics for generalized MAST)

2017-09-21 Thread Johnson Lau via bitcoin-dev

> On 21 Sep 2017, at 12:11 PM, Luke Dashjr  wrote:
> 
> On Wednesday 20 September 2017 5:13:04 AM Johnson Lau wrote:
>> 2. OP_RETURNTRUE does not work well with signature aggregation. Signature
>> aggregation will collect (pubkey, message) pairs in a tx, combine them,
>> and verify with one signature. However, consider the following case:
>> 
>> OP_RETURNTRUE OP_IF <pubkey> OP_CHECKSIGVERIFY OP_ENDIF OP_TRUE
>> 
>> For old nodes, the script terminates at OP_RETURNTRUE, and it will not
>> collect the (pubkey, message) pair.
>> 
>> If we use a softfork to transform OP_RETURNTRUE into OP_17 (pushing the
>> number 17 to the stack), new nodes will collect the (pubkey, message) pair
>> and try to aggregate with other pairs. This becomes a hardfork.
> 
> This seems like a problem for signature aggregation to address, not a problem 
> for OP_RETURNTRUE... In any case, I don't think it's insurmountable. 
> Signature 
> aggregation can simply be setup upfront, and have the Script verify inclusion 
> of keys in the aggregation?

I think it’s possible only if you spend more witness space to store the 
(pubkey, message) pairs, so that old clients could understand the aggregation 
produced by new clients. But this completely defeats the purpose of doing 
aggregation.

We use different tricks to save space. For example, we use a 1-byte SIGHASH flag 
to imply the 32-byte message. For maximal space saving, sig aggregation will 
also rely on such tricks. However, the assumption is that all signatures 
aggregated must follow exactly the same set of rules.


> 
>> Technically, we could create ANY op code with an OP_NOP. For example, if we
>> want OP_MUL, we could have OP_MULVERIFY, which verifies if the 3rd stack
>> item is the product of the top 2 stack items. Therefore, OP_MULVERIFY
>> OP_2DROP is functionally same as OP_MUL, which removes the top 2 items and
>> returns the product. The problem is it takes more witness space.
> 
> This is another approach, and one that seems like a good idea in general. I'm 
> not sure it actually needs to take more witness space - in theory, such stack 
> items could be implied if the Script engine is designed for it upfront. Then 
> it would behave as if it were non-verify, while retaining backward 
> compatibility.

Sounds interesting but I don’t get it. For example, how could you make a OP_MUL 
out of OP_NOP?


> 
> Luke




Re: [bitcoin-dev] cleanstack alt stack & softfork improvements (Was: Merkle branch verification & tail-call semantics for generalized MAST)

2017-09-20 Thread Johnson Lau via bitcoin-dev

> On 21 Sep 2017, at 3:29 AM, Mark Friedenbach  wrote:
> 
> 
>> On Sep 19, 2017, at 10:13 PM, Johnson Lau  wrote:
>> 
>> If we don’t want this ugliness, we could use a new script version for every 
>> new op code we add. In the new BIP114 (see link above), I suggest to move 
>> the script version to the witness, which is cheaper.
> 
> To be clear, I don’t think it is so much that the version should be moved to 
> the witness, but rather that there are two separate version values here — one 
> in the scriptPubKey which specifies the format and structure of the segwit 
> commitment itself, and another in the witness which gates functionality in 
> script or whatever else is used by that witness type. Segwit just 
> unfortunately didn’t include the latter, an oversight that should be 
> corrected on the on the next upgrade opportunity.
> 
> The address-visible “script version” field should probably be renamed 
> “witness type” as it will only be used in the future to encode how to check 
> the witness commitment in the scriptPubKey against the data provided in the 
> witness. Upgrades and improvements to the features supported by those witness 
> types won’t require new top-level witness types to be defined. Defining a new 
> opcode, even one with modifies the stack, doesn’t change the hashing scheme 
> used by the witness type.
> 
> v0,32-bytes is presently defined to calculate the double-SHA256 hash of the 
> top-most serialized item on the stack, and compare that against the 32-byte 
> commitment value. Arguably it probably should have hashed the top two values, 
> one of which would have been the real script version. This could be fixed 
> however, even without introducing a new witness type. Do a soft-fork upgrade 
> that checks if the witness redeem script is push-only, and if so then pop the 
> last push off as the script version (>= 1), and concatenate the rest to form 
> the actual redeem script. We inherit a little technical debt from having to 
> deal with push limits, but we avoid burning v0 in an upgrade to v1 that does 
> little more than add a script version.
> 
> v1,32-bytes would then be used for a template version of MAST, or whatever 
> other idea comes along that fundamentally changes the way the witness 
> commitment is calculated.
> 
> Mark

This is exactly what I suggest with BIP114: using v1, 32-byte witness programs to 
define the basic structure of Merklized Script, and defining the script version 
inside the witness.

Johnson


Re: [bitcoin-dev] cleanstack alt stack & softfork improvements (Was: Merkle branch verification & tail-call semantics for generalized MAST)

2017-09-19 Thread Johnson Lau via bitcoin-dev

> On 19 Sep 2017, at 11:09 AM, Luke Dashjr via bitcoin-dev wrote:
> 
> On Tuesday 19 September 2017 12:46:30 AM Mark Friedenbach via bitcoin-dev 
> wrote:
>> After the main discussion session it was observed that tail-call semantics
>> could still be maintained if the alt stack is used for transferring
>> arguments to the policy script.
> 
> Isn't this a bug in the cleanstack rule?
> 
> (Unrelated...)
> 
> Another thing that came up during the discussion was the idea of replacing 
> all 
> the NOPs and otherwise-unallocated opcodes with a new OP_RETURNTRUE 
> implementation, in future versions of Script. This would immediately exit the 
> program (perhaps performing some semantic checks on the remainder of the 
> Script) with a successful outcome.
> 
> This is similar to CVE-2010-5141 in a sense, but since signatures are no 
> longer Scripts themselves, it shouldn't be exploitable.
> 
> The benefit of this is that it allows softforking in ANY new opcode, not only 
> the -VERIFY opcode variants we've been doing. That is, instead of merely 
> terminating the Script with a failure, the new opcode can also remove or push 
> stack items. This is because old nodes, upon encountering the undefined 
> opcode, will always succeed immediately, allowing the new opcode to do 
> literally anything from that point onward.
> 
> Luke

I have implemented OP_RETURNTRUE in an earlier version of MAST (BIP114) but 
have given up the idea, for 2 reasons:

1. I’ve updated BIP114 to allow inclusion of scripts in witness, and require 
them to be signed. In this way users could add additional conditions for the 
validity of a signature. For example, with OP_CHECKBLOCKHASH, it is possible to 
make the transaction valid only in the specified chain. (More discussion in 
https://github.com/jl2012/bips/blob/vault/bip-0114.mediawiki#Additional_scripts_in_witness)

2. OP_RETURNTRUE does not work well with signature aggregation. Signature 
aggregation will collect (pubkey, message) pairs in a tx, combine them, and 
verify with one signature. However, consider the following case:

OP_RETURNTRUE OP_IF <pubkey> OP_CHECKSIGVERIFY OP_ENDIF OP_TRUE

For old nodes, the script terminates at OP_RETURNTRUE, and it will not collect 
the (pubkey, message) pair.

If we use a softfork to transform OP_RETURNTRUE into OP_17 (pushing the number 
17 to the stack), new nodes will collect the (pubkey, message) pair and try to 
aggregate with other pairs. This becomes a hardfork.


Technically, we could create ANY op code with an OP_NOP. For example, if we 
want OP_MUL, we could have OP_MULVERIFY, which verifies if the 3rd stack item 
is the product of the top 2 stack items. Therefore, OP_MULVERIFY OP_2DROP is 
functionally same as OP_MUL, which removes the top 2 items and returns the 
product. The problem is it takes more witness space.
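The OP_MULVERIFY OP_2DROP equivalence can be sketched with a toy stack machine (hypothetical opcodes, Python stand-in). Note that the prover must supply the product as an extra witness item, which is the witness-space cost mentioned above:

```python
def op_mulverify(stack):
    """OP_MULVERIFY sketch: fail unless the 3rd stack item is the product
    of the top two; leaves the stack untouched on success."""
    a, b, product = stack[-1], stack[-2], stack[-3]
    if a * b != product:
        raise ValueError("script failure")

def op_2drop(stack):
    stack.pop(); stack.pop()

# OP_MULVERIFY OP_2DROP behaves like a hypothetical OP_MUL: the witness
# carries the claimed product beneath the two operands.
stack = [42, 6, 7]        # bottom ... top; 6 * 7 == 42 sits beneath the operands
op_mulverify(stack)
op_2drop(stack)
assert stack == [42]      # operands consumed, product remains on the stack
```

Because the VERIFY form only checks and never pushes, old nodes that treat the opcode as a NOP still end with the same stack, which is what makes this pattern softfork-safe at the cost of the extra pushed item.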

If we don’t want this ugliness, we could use a new script version for every new 
op code we add. In the new BIP114 (see link above), I suggest to move the 
script version to the witness, which is cheaper.




Re: [bitcoin-dev] Fast Merkle Trees

2017-09-12 Thread Johnson Lau via bitcoin-dev

> On 8 Sep 2017, at 4:04 AM, Mark Friedenbach via bitcoin-dev 
>  wrote:
> 
> If I understand the revised attack description correctly, then there
> is a small window in which the attacker can create a script less than
> 55 bytes in length, where nearly all of the first 32 bytes are
> selected by the attacker, yet nevertheless the script seems safe to
> the counter-party. The smallest such script I was able to construct
> was the following:
> 
> <pubkey> CHECKSIGVERIFY HASH160 <hash> EQUAL
> 
> This is 56 bytes and requires only 7 bits of grinding in the fake
> pubkey. But 56 bytes is too large. Switching to secp256k1 serialized
> 32-byte pubkeys (in a script version upgrade, for example) would
> reduce this to the necessary 55 bytes with 0 bits of grinding. A
> smaller variant is possible:
> 
> DUP HASH160 <pubkey-hash> EQUALVERIFY CHECKSIGVERIFY HASH160 <hash> EQUAL
> 
> This is 46 bytes, but requires grinding 96 bits, which is a bit less
> plausible.
> 
> Belts and suspenders are not so terrible together, however, and I
> think there is enough of a justification here to look into modifying
> the scheme to use a different IV for hash tree updates. This would
> prevent even the above implausible attacks.
> 

I think you overestimated the difficulty. Consider this MAST branch (an example 
in BIP114)

"Timestamp" CHECKLOCKTIMEVERIFY <pubkey> CHECKSIGVERIFY

This requires just a few bytes of collision.






Re: [bitcoin-dev] Merkle branch verification & tail-call semantics for generalized MAST

2017-09-12 Thread Johnson Lau via bitcoin-dev

> On 12 Sep 2017, at 10:03 AM, Mark Friedenbach  wrote:
> 
> My apologies for a delay in responding to emails on this list; I have
> been fighting a cold.
> 
> (Also my apologies to Johnson Lau, as I forgot to forward this to the list.)
> 
> On Sep 8, 2017, at 2:21 AM, Johnson Lau  wrote:
> 
>> Tail-call execution semantics require "unclean stake" , i.e. final
>> stake with more than one item. However, "unclean stake" is invalid
>> (not just non-standard) in BIP141, so you could only use it with
>> legacy P2SH (which is totally pointless). A different design
>> like OP_EVAL might be needed, or you need a new witness script
>> version.
> 
> I believe you meant "unclean stack," and you are correct. This was
> also pointed out last tuesday by a participant at the in-person
> CoreDev meetup where the idea was presented.
> 
> This doesn't kill the idea, it just complicates the implementation
> slightly. A simple fix would be to allow tail-recursion to occur if
> the stack is not clean (as can happen with legacy P2SH as you point
> out, or yet to be defined version 1+ forms of segwit script), OR if
> there is a single item on the stack and the alt-stack is not empty.
> For segwit v0 scripts you then have to move any arguments to the alt
> stack before ending the redeem script, leaving just the policy script
> on the main stack.

This is ugly and actually broken, as different script paths may require 
different numbers of stack items, so you don’t know how many OP_TOALTSTACKs 
you need. It is easier to just use a new witness version.

> 
>> I think you have also missed the sigOp counting of the executed
>> script. As you can't count it without executing the script, the
>> current static analysability is lost. This was one of the reasons
>> for OP_EVAL being rejected. Since sigOp is a per-block limit, any
>> OP_EVAL-like operation means block validity will depend on the
>> precise outcome of script execution (instead of just pass or fail),
>> which is a layer violation.
> 
> I disagree with this design requirement.
> 
> The SigOp counting method used by bitcoin is flawed. It incorrectly
> limits not the number of signature operations necessary to validate a
> block, but rather the number of CHECKSIGs potentially encountered in
> script execution, even if in an unexecuted branch. (Admitedly MAST
> makes this less of an issue, but there are still useful compact
> scripts that use if/else constructs to elide a CHECKSIG.) Nor will it
> account for aggregation when that feature is added, or properly
> differentiate between signature operations that can be batched and
> those that can not.
> 
> Additionally there are other resources used by script that should be
> globally limited, such as hash operations, which are not accounted for
> at this time and cannot be statically assessed, even by the flawed
> mechanism by which SigOps are counted. I have maintained for some time
> that bitcoin should move from having multiple separate global limits
> (weight and sigops, hashed bytes in XT/Classic/BCH) to a single linear
> metric that combines all of these factors with appropriate
> coefficients.
> 

I like the idea of having a unified global limit and suggested a way to do it 
(https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-January/013472.html).
 But I think this is off-topic here.



> A better way of handling this problem, which works for both SigOps and
> HashOps, is to have the witness commit to the maximum resources
> consumed by validation of the spend of the coin, to relay this data
> with the transaction and include it in the SigHash, and to use the
> committed maximum for block validation. This could be added in a
> future script version upgrade. This change would also resolve the
> issue that led to the clean stack rule in segwit, allowing future
> versions of script to use tail-call recursion without involving the
> alt-stack.
> 
> Nevertheless it is constructive feedback that the current draft of the
> BIP and its implementation do not count SigOps, at all. There are a
> couple of ways this can be fixed by evaluating the top-level script
> and then doing static analysis of the resulting policy script, or by
> running the script and counting operations actually performed.


In any case, I think maintaining static analysability for global limit(s) is 
very important. Ethereum had to give up their DAO softfork plan at the last 
minute, exactly due to the lack of this: 
http://hackingdistributed.com/2016/06/28/ethereum-soft-fork-dos-vector/

Otherwise, one could attack relay and mining nodes by sending many small 
transactions with many sigops, forcing the nodes to validate them and then 
discard them due to insufficient fees.

Technically it might be ok if we commit the total validation cost (sigop + 
hashop + whatever) as the first witness stack item, but that’d take more space 
and I’m not sure if it is desirable. Anyway, giving up static analysability for 
scripts is a fundamental change.
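The static analysability being defended here amounts to a linear scan over the 
script bytes. The sketch below is in the spirit of (but much simplified from) 
Core's legacy sigop counting; the flat worst-case charge of 20 per 
CHECKMULTISIG is the legacy "inaccurate" mode:

```python
OP_PUSHDATA1, OP_PUSHDATA2, OP_PUSHDATA4 = 0x4c, 0x4d, 0x4e
OP_CHECKSIG, OP_CHECKSIGVERIFY = 0xac, 0xad
OP_CHECKMULTISIG, OP_CHECKMULTISIGVERIFY = 0xae, 0xaf

def count_sigops(script: bytes) -> int:
    """Static sigop count without executing the script (simplified sketch).

    Push payloads are skipped, so data bytes are never mistaken for
    opcodes; CHECKMULTISIG is charged the worst case of 20."""
    n, i = 0, 0
    while i < len(script):
        op = script[i]
        i += 1
        if 0x01 <= op <= 0x4b:
            i += op                       # direct push: skip payload
        elif op == OP_PUSHDATA1:
            i += 1 + script[i]
        elif op == OP_PUSHDATA2:
            i += 2 + int.from_bytes(script[i:i+2], "little")
        elif op == OP_PUSHDATA4:
            i += 4 + int.from_bytes(script[i:i+4], "little")
        elif op in (OP_CHECKSIG, OP_CHECKSIGVERIFY):
            n += 1
        elif op in (OP_CHECKMULTISIG, OP_CHECKMULTISIGVERIFY):
            n += 20
    return n

# A CHECKSIG hidden *inside a push* contributes nothing to the static
# count -- exactly why an OP_EVAL that executes pushed data would make
# block validity depend on the outcome of script execution.
assert count_sigops(bytes([0xac, 0x01, 0xac])) == 1
```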

[bitcoin-dev] BIP114 Merklized Script update and 5 BIPs for new script functions

2017-09-08 Thread Johnson Lau via bitcoin-dev
I have rewritten and simplified BIP114, and renamed it to “Merklized Script”, 
a more accurate description after consulting the original proposers of MAST. 
It could be considered a special case of MAST, but has basically the same 
functions and scaling properties as MAST.

Compared with Friedenbach’s latest tail-call execution semantics proposal, I 
think the most notable difference is that BIP114 focuses on maintaining static 
analysability, which was one of the reasons OP_EVAL (BIP12) was rejected. 
Currently we can count the number of sigOps without executing the script, and 
this remains true with BIP114. Since sigOp is a block-level limit, any 
OP_EVAL-like operation means block validity will depend on the precise outcome 
of script execution (instead of just pass or fail), which is a layer violation.

Link to the revised BIP114: 
https://github.com/jl2012/bips/blob/vault/bip-0114.mediawiki

On top of BIP114, new script functions are defined with 5 BIPs:

VVV: Pay-to-witness-public-key: 
https://github.com/jl2012/bips/blob/vault/bip-0VVV.mediawiki
WWW: String and Bitwise Operations in Merklized Script Version 0: 
https://github.com/jl2012/bips/blob/vault/bip-0WWW.mediawiki
XXX: Numeric Operations in Merklized Script Version 0: 
https://github.com/jl2012/bips/blob/vault/bip-0XXX.mediawiki
YYY: ECDSA signature operations in Merklized Script Version 0: 
https://github.com/jl2012/bips/blob/vault/bip-0YYY.mediawiki
ZZZ: OP_PUSHTXDATA: https://github.com/jl2012/bips/blob/vault/bip-0ZZZ.mediawiki

As a summary, these BIPs have the following major features:

1. Merklized Script: a special case of MAST, allows users to hide unexecuted 
branches in their scripts (BIP114)
2. Delegation: key holder(s) may delegate the right of spending to other keys 
(scripts), with or without additional conditions such as locktime. (BIP114, VVV)
3. Enabling all OP codes disabled by Satoshi (based on the Elements project 
with modifications; BIPWWW and XXX)
4. New SIGHASH definition with very high flexibility (BIPYYY)
5. Covenant (BIPZZZ)
6. OP_CHECKSIGFROMSTACK, modified from Elements project (BIPYYY)
7. Replace ~72 byte DER sig with fixed size 64 byte compact sig. (BIPYYY)

All of these features are modular and need not be deployed at once. The very 
basic BIP114 (merklized script only, no delegation) could be done quite easily. 
BIP114 has its own versioning system which makes introducing new functions very 
easy.
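The core merklized-script mechanism, hiding unexecuted branches behind a Merkle 
commitment, can be sketched generically. Plain single-SHA256 pairing and the 
LSB-first position encoding are assumptions here, not BIP114's exact node 
hashing rules:

```python
import hashlib

def H(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def merkle_root(leaf: bytes, path: list, position: int) -> bytes:
    """Recompute a Merkle root from a leaf and its authentication path.

    `position` bits (LSB first) say whether our node is the left (0) or
    right (1) child at each level. Only the executed branch and its
    log-sized path are revealed; sibling subtrees stay hidden."""
    h = H(leaf)
    for sibling in path:
        h = H(sibling + h) if position & 1 else H(h + sibling)
        position >>= 1
    return h

# Build a 4-leaf tree and verify the branch for leaf index 2.
leaves = [b"script-%d" % i for i in range(4)]
l = [H(x) for x in leaves]
n01, n23 = H(l[0] + l[1]), H(l[2] + l[3])
root = H(n01 + n23)
assert merkle_root(leaves[2], [l[3], n01], position=0b10) == root
```

A validator holding only `root` in the scriptPubKey can check any one revealed 
branch this way, which is the scaling property the proposal relies on.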

Things I’d like to have:

1. BIP114 now uses SHA256, but I’m open to other hash designs
2. Using Schnorr or a similar signature scheme, instead of ECDSA, in BIPYYY.

Reference implementation: https://github.com/jl2012/bitcoin/commits/vault


Re: [bitcoin-dev] Merkle branch verification & tail-call semantics for generalized MAST

2017-09-08 Thread Johnson Lau via bitcoin-dev
Some comments with the tail-call execution semantics BIP:

Tail-call execution semantics require “unclean stake”, i.e. final stake with 
more than one item. However, “unclean stake” is invalid (not just non-standard) 
in BIP141, so you could only use it with legacy P2SH (which is totally 
pointless….). A different design like OP_EVAL might be needed, or you need a 
new witness script version.

I think you have also missed the sigOp counting of the executed script. As you 
can’t count it without executing the script, the current static analysability 
is lost. This was one of the reasons for OP_EVAL being rejected. Since sigOp is 
a per-block limit, any OP_EVAL-like operation means block validity will depend 
on the precise outcome of script execution (instead of just pass or fail), 
which is a layer violation.

(An alternative is to make sigOp a per-input limit instead of a per-block 
limit, just like the 201 nOp limit. But this is a very different security 
model.)

Witness script versioning is by design fully compatible with P2SH and BIP173, 
so there will be no hurdle for existing wallets to pay to BIP114. Actually it 
should be completely transparent to them.

For code complexity, the minimal BIP114 could be really simple, like <30 lines 
of code? It looks complex now because it does much more than simply hiding 
scripts in a hash.



> On 7 Sep 2017, at 8:38 AM, Mark Friedenbach via bitcoin-dev 
>  wrote:
> 
> I would like to propose two new script features to be added to the
> bitcoin protocol by means of soft-fork activation. These features are
> a new opcode, MERKLE-BRANCH-VERIFY (MBV) and tail-call execution
> semantics.
> 
> In brief summary, MERKLE-BRANCH-VERIFY allows script authors to force
> redemption to use values selected from a pre-determined set committed
> to in the scriptPubKey, but without requiring revelation of unused
> elements in the set for both enhanced privacy and smaller script
> sizes. Tail-call execution semantics allows a single level of
> recursion into a subscript, providing properties similar to P2SH while
> at the same time more flexible.
> 
> These two features together are enough to enable a range of
> applications such as tree signatures (minus Schnorr aggregation) as
> described by Pieter Wuille [1], and a generalized MAST useful for
> constructing private smart contracts. It also brings privacy and
> fungibility improvements to users of counter-signing wallet/vault
> services as unique redemption policies need only be revealed if/when
> exceptional circumstances demand it, leaving most transactions looking
> the same as any other MAST-enabled multi-sig script.
> 
> I believe that the implementation of these features is simple enough,
> and the use cases compelling enough that we could BIP 8/9 rollout of
> these features in relatively short order, perhaps before the end of
> the year.
> 
> I have written three BIPs to describe these features, and their
> associated implementation, for which I now invite public review and
> discussion:
> 
> Fast Merkle Trees
> BIP: https://gist.github.com/maaku/41b0054de0731321d23e9da90ba4ee0a
> Code: https://github.com/maaku/bitcoin/tree/fast-merkle-tree
> 
> MERKLEBRANCHVERIFY
> BIP: https://gist.github.com/maaku/bcf63a208880bbf8135e453994c0e431
> Code: https://github.com/maaku/bitcoin/tree/merkle-branch-verify
> 
> Tail-call execution semantics
> BIP: https://gist.github.com/maaku/f7b2e710c53f601279549aa74eeb5368
> Code: https://github.com/maaku/bitcoin/tree/tail-call-semantics
> 
> Note: I have circulated this idea privately among a few people, and I
> will note that there is one piece of feedback which I agree with but
> is not incorporated yet: there should be a multi-element MBV opcode
> that allows verifying multiple items are extracted from a single
> tree. It is not obvious how MBV could be modified to support this
> without sacrificing important properties, or whether should be a
> separate multi-MBV opcode instead.
> 
> Kind regards,
> Mark Friedenbach




Re: [bitcoin-dev] P2WPKH Scripts, P2PKH Addresses, and Uncompressed Public Keys

2017-08-28 Thread Johnson Lau via bitcoin-dev
Yes it is allowed in TxOuts. And yes it is designed to save space. But the 
problem is Bob can’t assume Alice understands the new TxOuts format. If Bob 
really wants to save space this way, he should first ask for a new BIP173 
address from Alice. Never try to convert a P2PKH address to a P2SH or BIP173 
address without the consent of the recipient.
 

> On 29 Aug 2017, at 4:55 AM, Alex Nagy via bitcoin-dev 
>  wrote:
> 
> Thanks Gregory - to be clear should Native P2WPKH scripts only appear in 
> redeem scripts?  From reading the various BIPs it had seemed like Native 
> P2WPKH and Native P2WSH were also valid and identifiable if they were encoded 
> in TxOuts.  The theoretical use case for this would be saving bytes in Txes 
> with many outputs.
> 




Re: [bitcoin-dev] Extension block proposal by Jeffrey et al

2017-05-09 Thread Johnson Lau via bitcoin-dev
To make it completely transparent to unupgraded wallets, the return outputs 
have to be sent to something that is non-standard today, i.e. not P2PK, P2PKH, 
P2SH, bare multisig, or (with BIP141) v0 P2WPKH and v0 P2WSH.

Mainchain segwit is particularly important here, as it allows atomic swaps 
between bitcoin and xbitcoin. Only services with high liquidity (exchanges, 
payment processors) would need to occasionally settle between the chains.


> On 9 May 2017, at 08:56, Christopher Jeffrey  wrote:
> 
> Johnson,
> 
> Yeah, I do still see the issue. I think there are still some reasonable
> ways to mitigate it.
> 
> I've started revising the extension block specification/code to coexist
> with mainchain segwit. I think the benefit of this is that we can
> require exiting outputs to only be witness programs. Presumably segwit
> wallets will be more likely to be aware of a new output maturity rule
> (I've opened a PR[1] which describes this in greater detail). I think
> this probably reduces the likelihood of the legacy wallet issue,
> assuming most segwit-supporting wallets would implement this rule before
> the activation of segwit.
> 
> What's your opinion on whether this would have a great enough effect to
> prevent the legacy wallet issue? I think switching to witness programs
> only may be a good balance between fungibility and backward-compat,
> probably better all around than creating a whole new
> addr-type/wit-program just for exits.
> 
> [1] https://github.com/tothemoon-org/extension-blocks/pull/16 
> 
> 



Re: [bitcoin-dev] Some real-world results about the current Segwit Discount

2017-05-09 Thread Johnson Lau via bitcoin-dev
No, changing from 50% to 75% is a hardfork (75% -> 50% is a softfork), unless 
you make it pre-scheduled, or leave a special “backdoor” softfork to change the 
discount.

And that would certainly reduce the max tx/s with a 50% discount, and also 
reduce the incentive to spend witness UTXOs.
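For reference, the discount arithmetic can be sketched as follows. BIP141's 
actual rule is weight = 4*base + witness with a 4,000,000 limit (a 75% witness 
discount); the 50%-discount variant shown via `base_factor=2` and the byte 
counts in the example transaction are illustrative assumptions:

```python
def weight(base_bytes: int, witness_bytes: int, base_factor: int = 4) -> int:
    """BIP141-style weight: every byte counts once, base bytes count
    base_factor times. base_factor=4 is the 75% witness discount; a 50%
    discount would use base_factor=2 with a correspondingly lower limit."""
    return base_factor * base_bytes + witness_bytes

WEIGHT_LIMIT = 4_000_000
# Worst case: a block that is almost entirely witness data can approach
# 4 MB of serialized bytes under the 75% discount.
assert weight(0, 4_000_000) <= WEIGHT_LIMIT
# An illustrative 1-in-2-out P2WPKH tx: ~110 base bytes, ~107 witness bytes.
print(weight(110, 107))   # 547 weight units
```

This is why a later change from 75% to 50% shrinks both the worst-case block 
size and the maximum tx/s: the same limit is consumed faster per witness byte.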

> On 10 May 2017, at 00:19, Sergio Demian Lerner  wrote:
> 
> Thanks Johnson and Hampus for the clarifications. 
> However, I would rather do the opposite: soft-fork to 50% now, and soft-fork 
> again to 75% discount later if needed, because it doesn't affect the max 
> transactions/second. 
> 
> Segwit as it is today should be activated. However if it is not before 
> November, then for the next Segwit attempt I would choose a more conservative 
> 50% discount.
> 
> 
> 
> On Tue, May 9, 2017 at 12:45 PM, Johnson Lau  wrote:
> 
> > On 9 May 2017, at 21:49, Sergio Demian Lerner via bitcoin-dev 
> >  wrote:
> >
> >
> > So it seems the 75% discount has been chosen with the idea that in the 
> > future the current transaction pattern will shift towards multisigs. This 
> > is not a bad idea, as it's the only direction Bitcoin can scale without a 
> > HF.
> > But it's a bad idea if we end up doing, for example, a 2X blocksize 
> > increase HF in the future. In that case it's much better to use a 50% 
> > witness discount, and do not make scaling risky by making the worse case 
> > block size 8 Mbytes, when it could have been 2*2.7=5.4 Mbytes.
> >
> 
> As we could change any parameter in a hardfork, I don’t think this has any 
> relation with the current BIP141 proposal. We could just use 75% in a 
> softfork, and change that to a different value (or completely redefine the 
> definition of weight) with a hardfork later.
> 
> 
> 




Re: [bitcoin-dev] Some real-world results about the current Segwit Discount

2017-05-09 Thread Johnson Lau via bitcoin-dev

> On 9 May 2017, at 21:49, Sergio Demian Lerner via bitcoin-dev 
>  wrote:
> 
> 
> So it seems the 75% discount has been chosen with the idea that in the future 
> the current transaction pattern will shift towards multisigs. This is not a 
> bad idea, as it's the only direction Bitcoin can scale without a HF. 
> But it's a bad idea if we end up doing, for example, a 2X blocksize increase 
> HF in the future. In that case it's much better to use a 50% witness 
> discount, and do not make scaling risky by making the worse case block size 8 
> Mbytes, when it could have been 2*2.7=5.4 Mbytes.
> 

As we could change any parameter in a hardfork, I don’t think this has any 
relation with the current BIP141 proposal. We could just use 75% in a softfork, 
and change that to a different value (or completely redefine the definition of 
weight) with a hardfork later.




Re: [bitcoin-dev] Trustless Segwit activation bounty protocol (aka. bribing the miners)

2017-04-27 Thread Johnson Lau via bitcoin-dev
As others have explained, your scheme is broken.

Unless we have a softfork first (OP_CHECKBIP9VERIFY: payment is valid only if a 
BIP9 proposal is active), it is not possible to create a softfork bounty in a 
decentralised way.

On the other hand, a hardfork bounty is very simple. You just need to make sure 
your tx violates existing rules.


> On 28 Apr 2017, at 01:48, Matt Bell via bitcoin-dev 
>  wrote:
> 
> Hello everyone,
> 
> I've been thinking of an alternative to possibly get Segwit activated sooner: 
> bribing the miners. This proposal may not be necessary if everyone is already 
> set on doing a UASF, but  miners seem to optimize for short-term profits and 
> this may make it easier for BitMain to accept its fate in losing the 
> ASICBoost advantage.
> 
> Here is a possible trustless contract protocol where contributors could 
> pledge to a Segwit bounty which would be paid out to miners iff Segwit is 
> activated, else the funds are returned to the contributor:
> 
> # Setup
> 
> - The contributor picks a possible future height where Segwit may be 
> activated and when the funds should be released, H.
> - The contributor chooses a bounty contribution amount X.
> - The contributor generates 3 private keys (k1, k2, and k3) and corresponding 
> pubkeys (K1, K2, and K3).
> - The contributor creates and broadcasts the Funding Transaction, which has 2 
> outputs:
>   * Output 0, X BTC, P2SH redeem script:
>  CHECKLOCKTIMEVERIFY DROP
>  CHECKSIGVERIFY
>   * Output 1, 0.1 BTC, P2SH redeem script:
>  CHECKLOCKTIMEVERIFY DROP
>  CHECKSIGVERIFY
> - The contributor builds the Segwit Assertion Transaction:
>   * nTimeLock set to H
>   * Input 0, spends from Funding Transaction output 1, signed with k2, 
> SIGHASH_ALL
>   * Output 0, 0.1 BTC, P2WPKH using K3
> - The contributor builds the Bounty Payout Transaction:
>   * nTimeLock set to H
>   * Input 0, spends from Funding Transaction output 0, signed with k1, 
> SIGHASH_ALL
>   * Input 1, spends from Segwit Assertion Transaction output 0, signed with 
> k3, SIGHASH_ALL
>   * No outputs, all funds are paid to the miner
> - The contributor publishes the Segwit Assertion Transaction and Bounty 
> Payout Transaction (with signatures) somewhere where miners can find them
> 
> # Process
> 
> 1. After the setup, miners can find Funding Transactions confirmed on the 
> chain, and verify the other 2 transactions are correct and have valid 
> signatures. They can sum all the valid bounty contracts they find to factor 
> into their expected mining profit.
> 2A. Once the chain reaches height H-1, if Segwit has activated, miners can 
> claim the bounty payout by including the Segwit Assertion and Bounty Payout 
> transactions in their block H. Since Segwit has activated, the output from 
> the Segwit Assertion tx can be spent by the Bounty Payout, so both 
> transactions are valid and the miner receives the funds.
> 2B. If Segwit has not activated at height H, Input 1 of the Bounty Payout is 
> not valid since it spends a P2WPKH output, preventing the miner from 
> including the Bounty Payout transaction in the block. (However, the output of 
> the Segwit Assertion tx can be claimed since it is treated as 
> anyone-can-spend, although this is not an issue since it is a very small 
> amount). The contributor can reclaim the funds from Output 0 of the Funding 
> tx by creating a new transaction, signed with k1.
> 
> # Notes
> 
> - This is likely a win-win scenario for the contributors, since Segwit 
> activating will likely increase the price of Bitcoin, which gives a positive 
> return if the price increases enough. If it does not activate, the funds will 
> be returned so nothing is at risk.
> - Contributors could choose H heights on or slightly after an upcoming 
> possible activation height. If contributors pay out to many heights, then the 
> bounty can be split among many miners, it doesn't have to be winner-take-all.
> - If Segwit does not activate at H, the contributor has until the next 
> possible activation height to claim their refund without risking it being 
> taken by another miner. This could be outsourced by signing a refund 
> transaction which pays a fee to some third-party who will be online at H and 
> can broadcast the transaction. If the contributor wants to pay a bounty for a 
> later height, they should create a new contract otherwise a miner could 
> invalidate the payout by spending the output of the Segwit Assertion.
> 
> Thanks, I'd like to hear everyone's thoughts. Let me know if you find any 
> glaring flaws or have any other comments.
> Matt
> 


Re: [bitcoin-dev] Segwit v2

2017-04-26 Thread Johnson Lau via bitcoin-dev

> On 27 Apr 2017, at 04:01, Luke Dashjr  wrote:
> 
> On Wednesday 26 April 2017 7:31:38 PM Johnson Lau wrote:
>> I prefer not to do anything that requires pools software upgrade or wallet
>> upgrade. So I prefer to keep the dummy marker, and not change the
>> commitment structure as suggested by another post.
> 
> Fair enough, I guess. Although I think the dummy marker could actually be non-
> consensus critical so long as the hashing replaces it with a 0.
> 
>> For your second suggestion, I think we should keep scriptSig empty as that
>> should be obsoleted. If you want to put something in scriptSig, you should
>> put it in witness instead.
> 
> There are things scriptSig can do that witness cannot today - specifically 
> add 
> additional conditions under the signature. We can always obsolete scriptSig 
> later, after segwit has provided an alternative way to do this.

You can do this with witness too, which is also cheaper. Just make sure the 
signature covers a special part of the witness. I will make a proposal to 
Litecoin soon, which allows signing and executing extra scripts in witness, 
useful for things like OP_PUSHBLOCKHASH.

> 
>> Maybe we could restrict witness to IsPushOnly() scriptPubKey, so miners
>> can’t put garbage to legacy txs.
> 
> They already can malleate transactions and add garbage to the blocks. I don't 
> see the benefit here.

Witness is cheaper and bigger

> 
> Luke




Re: [bitcoin-dev] Segwit v2

2017-04-26 Thread Johnson Lau via bitcoin-dev
I prefer not to do anything that requires pools software upgrade or wallet 
upgrade. So I prefer to keep the dummy marker, and not change the commitment 
structure as suggested by another post.

For your second suggestion, I think we should keep scriptSig empty as that 
should be obsoleted. If you want to put something in scriptSig, you should put 
it in witness instead.

Maybe we could restrict witness to IsPushOnly() scriptPubKey, so miners can’t 
put garbage into legacy txs. But I think relaxing the witness program size to 
73 bytes is enough for any purpose.

> On 21 Apr 2017, at 04:28, Luke Dashjr via bitcoin-dev 
>  wrote:
> 
> Since BIP 141's version bit assignment will timeout soon, and needing 
> renewal, 
> I was thinking it might make sense to make some minor tweaks to the spec for 
> the next deployment. These aren't critical, so it's perfectly fine if BIP 141 
> activates as-is (potentially with BIP 148), but IMO would be an improvement 
> if 
> a new deployment (non-BIP148 UASF and/or new versionbit) is needed.
> 
> 1. Change the dummy marker to 0xFF instead of 0. Using 0 creates ambiguity 
> with incomplete zero-input transactions, which has been a source of confusion 
> for raw transaction APIs. 0xFF would normally indicate a >32-bit input count, 
> which is impossible right now (it'd require a >=158 GB transaction) and 
> unlikely to ever be useful.
> 
> 2. Relax the consensus rules on when witness data is allowed for an input. 
> Currently, it is only allowed when the scriptSig is null, and the 
> scriptPubKey 
> being spent matches a very specific pattern. It is ignored by "upgrade-safe" 
> policy when the scriptPubKey doesn't match an even-more-specific pattern. 
> Instead, I suggest we allow it (in the consensus layer only) in combination 
> with scriptSig and with any scriptPubKey, and consider these cases to be 
> "upgrade-safe" policy ignoring.
> 
> The purpose of the second change is to be more flexible to any future 
> softforks. I consider it minor because we don't know of any possibilities 
> where it would actually be useful.
> 
> Thoughts?
> 
> Luke




Re: [bitcoin-dev] Using a storage engine without UTXO-index

2017-04-08 Thread Johnson Lau via bitcoin-dev

> On 9 Apr 2017, at 03:56, Tomas  wrote:
> 
> 
>> I don’t fully understand your storage engine. So the following deduction
>> is just based on common sense.
>> 
>> a) It is possible to make unlimited number of 1-in-100-out txs
>> 
>> b) The maximum number of 100-in-1-out txs is limited by the number of
>> previous 1-in-100-out txs
>> 
>> c) Since bitcrust does not perform well with 100-in-1-out txs, for anti-DoS
>> purposes you should limit the number of previous 1-in-100-out txs.
>> 
>> d) Limit 1-in-100-out txs == Limit UTXO growth
>> 
>> I’m not surprised that you find a model more efficient than Core. But I
>> don’t believe one could find a model that doesn’t become more efficient
>> with UTXO growth limitation.
> 
> My efficiency claims are *only* with regards to order validation. If we
> assume all transactions are already pre-synced and verified, bitcrust's
> order validation is very fast, and (only slightly) negatively effected
> by input-counts.

Does “pre-synced” mean already in the mempool and verified? Then it sounds like 
we just need some mempool optimisation. The tx order in a block is not 
important, unless the txs are dependent.

> 
>> One more question: what is the absolute minimum disk and memory usage in
>> bitcrust, compared with the pruning mode in Core?
> 
> As bitcrust doesn't support this yet, I cannot give accurate numbers,
> but I've provided some numbers estimates earlier in the thread.
> 
> 
> Rereading my post and these comments, I may have stepped on some toes
> with regards to SegWit's model. I like SegWit (though I may have a
> slight preference for BIP140), and I understand the reasons for the
> "discount", so this was not my intention. I just think that the reversal
> of costs during peak load order validation is a rather interesting
> feature of using spend-tree  based validation. 
> 
> Tomas

Please, no conspiracy theories like stepping on someone’s toes. I believe it’s 
always nice to challenge the established model. However, as I’m trying to make 
some hardfork design, I intend to have a stricter UTXO growth limit. As you 
said “protocol addressing the UTXO growth might not be worth considering 
protocol improvements”, it sounds like a UTXO growth limit wouldn’t be very 
helpful for your model, which I doubt.
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Using a storage engine without UTXO-index

2017-04-08 Thread Johnson Lau via bitcoin-dev

> On 8 Apr 2017, at 15:28, Tomas via bitcoin-dev 
>  wrote:
>> 
> 
> I think you are being a bit harsh here. I am also clearly explaining
> the difference only applies to peak load, and just making a suggestion.
> I simply want to stress the importance of protocol / implementation
> separation as even though you are correct UTXO data is always a resource
> cost for script validation (as I also state), the ratio of different
> costs are  not necessarily *identical* across implementation. 
> 
> Note that the converse also holds: In bitcrust, if the last few blocks
> contain many inputs, the peak load verification for this block is
> slower. This is not the case in Core.
> 
> Tomas
> 

I don’t fully understand your storage engine. So the following deduction is 
just based on common sense.

a) It is possible to make unlimited number of 1-in-100-out txs

b) The maximum number of 100-in-1-out txs is limited by the number of previous 
1-in-100-out txs

c) Since bitcrust does not perform well with 100-in-1-out txs, for anti-DoS 
purposes you should limit the number of previous 1-in-100-out txs. 

d) Limit 1-in-100-out txs == Limit UTXO growth

I’m not surprised that you find a model more efficient than Core. But I don’t 
believe one could find a model that doesn’t become more efficient with UTXO 
growth limitation.

Maybe you could try an experiment with regtest? Make a lot of 1-in-100-out txs 
across many blocks, then spend all the UTXOs with 100-in-1-out txs. Compare the 
performance of bitcrust with Core. Then repeat with 1-in-1-out chained txs (so 
the UTXO set is always almost empty).
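The UTXO bookkeeping behind this experiment is simple arithmetic and can be 
checked without running a node; the tx count below is arbitrary:

```python
def utxo_delta(n_inputs: int, n_outputs: int) -> int:
    """Net change in UTXO-set size caused by one transaction."""
    return n_outputs - n_inputs

k = 1000                                   # arbitrary number of fan-out txs
# Phase 1: k fan-out txs (1-in-100-out) each add 99 UTXOs net.
utxos = sum(utxo_delta(1, 100) for _ in range(k))
assert utxos == 99 * k
# Phase 2: sweeping the 100*k created outputs takes k 100-in-1-out txs,
# each removing 99 UTXOs net -- the set shrinks back to its starting size.
utxos += sum(utxo_delta(100, 1) for _ in range(k))
assert utxos == 0
```

This makes the deduction explicit: the supply of 100-in-1-out txs is bounded by 
the prior fan-out txs, so limiting fan-out is limiting UTXO growth.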

One more question: what is the absolute minimum disk and memory usage in 
bitcrust, compared with the pruning mode in Core?


Re: [bitcoin-dev] Extension block proposal by Jeffrey et al

2017-04-05 Thread Johnson Lau via bitcoin-dev

> On 5 Apr 2017, at 22:05, Olaoluwa Osuntokun  wrote:
> 
> 
> The concrete parameters chosen in the proposal are: each channel opening
> transaction reserves 700-bytes within _each_ block in the chain until the
> transaction has been closed. 
> 
> 

Why so? It seems you are describing it as a softfork. With a hardfork or 
extension block, a new rule could simply grant extra space when the tagged UTXO 
is spent. So if the usual block size limit is 1MB, when the special UTXO is 
created, the block size limit decreases to 1MB minus 700 bytes, and the user 
has to pay for those 700 bytes. When it is spent, the block size limit becomes 
1MB plus 700 bytes.

But miners or even users may abuse this system: they may try to claim all the 
unused space when blocks are not congested, or when they are mining empty 
blocks, and sell those tagged UTXOs later. So I think we need to limit the 
reservable space in each block, and deduct more space than is reserved. For 
example, if 700 bytes are reserved, the deduction has to be 1400 bytes.

With BIP68, there are 8 unused bits in nSequence. We may use a few bits to let 
users fine-tune the space they want to reserve. Maybe 1 = 256 bytes.
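A sketch of how such a field could ride in nSequence. The BIP68 constants are 
the known ones; which bits are actually free, the 256-byte granularity, and the 
field placement (bits 23-30) are all assumptions for illustration:

```python
SEQUENCE_LOCKTIME_MASK      = 0x0000ffff   # BIP68 relative-locktime value
SEQUENCE_LOCKTIME_TYPE_FLAG = 1 << 22      # BIP68 time-vs-height flag
SEQUENCE_DISABLE_FLAG       = 1 << 31      # BIP68 disable flag
RESERVE_SHIFT, RESERVE_MASK = 23, 0xff     # hypothetical field: bits 23-30

def set_reserved_space(nsequence: int, units: int) -> int:
    """Encode reserved block space (assumed units of 256 bytes) into
    nSequence bits that BIP68 leaves unused, without touching the
    locktime value or its flags."""
    assert 0 <= units <= RESERVE_MASK
    cleared = nsequence & ~(RESERVE_MASK << RESERVE_SHIFT)
    return cleared | (units << RESERVE_SHIFT)

def reserved_bytes(nsequence: int) -> int:
    return ((nsequence >> RESERVE_SHIFT) & RESERVE_MASK) * 256

seq = set_reserved_space(0xffffffff & ~SEQUENCE_DISABLE_FLAG, 3)
print(reserved_bytes(seq))   # 768 bytes reserved; BIP68 fields untouched
```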

I think this is an interesting idea to explore and I’d like to include it in 
my hardfork proposal.



[bitcoin-dev] A different approach to define and understand softforks and hardforks

2017-04-05 Thread Johnson Lau via bitcoin-dev
Softforks and hardforks are usually defined in terms of block validity (BIP99): 
making valid blocks invalid is a softfork, making invalid blocks valid is a 
hardfork, and SFs are usually considered less disruptive as they are seen as 
“opt-in”. However, as shown below this technical definition could be very 
misleading. Here I’m trying to redefine the terminology in terms of software 
upgrade necessity and difficulty.

Softforks are defined as consensus rule changes under which non-upgraded software 
will function exactly as usual, as if the rule changes had never happened

Hardforks are defined as consensus rule changes under which non-upgraded software 
will cease to function or be severely handicapped

Under these definitions, SFs and HFs form a continuum, which I call 
“hardfork-ness”. A pure softfork has no hardfork-ness.

*Mining node

Under these definitions, for miners, any trivial consensus rule change is 
somewhat a hardfork, as miners can’t reliably use non-upgraded software to 
create blocks. However, there are still 3 levels of “hardfork-ness”, for example:

1. Those with lower hardfork-ness would be the SFs that miners do not need to 
upgrade their software for at all. Instead, the minimum requirement is to set up a 
border node with the latest rules to make sure they won’t mine on top of an 
invalid block. Examples include CSV and Segwit

2. Some SFs have higher hardfork-ness, for example BIP65 and BIP66. The minimum 
actions needed include setting up a border node and changing the block version. 
BIP34 has even higher hardfork-ness as more actions are needed to follow the 
new consensus.

3. Anything else, ranging from simple HFs like BIP102 to complete HFs like 
spoonnet, or the soft-hardfork like forcenet, has the highest hardfork-ness. In 
these cases, border nodes are completely useless. Miners have to upgrade their 
servers in order to stay with the consensus.

*Non-mining full node

Similarly, in terms of a non-mining full node, as the main function is to 
fully validate all applicable rules on the network, any consensus change is a 
hardfork for this particular function. However, a technical SF would have much 
lower hardfork-ness than a HF, as a border node is everything needed in a SF. 
Just consider a company with some difficult-to-upgrade software that depends on 
Bitcoin Core 0.8. Using a 0.13.1+ border node will make sure they will always 
follow the latest rules. In case of a HF, they have no choice but to upgrade 
the backend system.

So we may use the costs of running a border node to further define the 
hardfork-ness of SFs, and it comes down to the additional resources needed:

1. Things like BIP34, 65, 66, and CSV involves trivial resources use so they 
have lowest hardfork-ness.

2. Segwit is higher because of increased block size.

3. Extension block has very high hardfork-ness, as people may not have enough 
resources to run a border node.

* Fully validating wallets

In terms of the wallet function in full node, without considering the issues of 
validation, the hardfork-ness could be ranked as below:

1. BIP34, 65, 66, CSV, segwit all have no hardfork-ness for wallets. 
Non-upgraded wallets will work exactly in the same way as before. Users won’t 
notice any change at all. (In some cases they may not see a new tx until it has 
1 confirmation, but this is a mild issue and 0-conf is unsafe anyway)

2. Extension block, as presented in my January post ( 
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-January/013490.html
 

 ), has higher hardfork-ness, as users of legacy wallets may find it difficult 
to receive payments from upgraded wallets. However, once they get paid, the user 
experience is the same as before

3. Another extension block proposal ( 
https://github.com/tothemoon-org/extension-blocks 
 ) has very high 
hardfork-ness for wallets, as legacy wallets will frequently and suddenly find 
that incoming and outgoing txs become invalid, and need to sign the 
invalidated txs again, even when no one is trying to double-spend.

4. Hardfork rule changes have highest hardfork-ness for full node wallets

I’ll explain the issues with extension block in a separate post in details

* Real SPV wallet

The SPV wallets as proposed by Satoshi should have the ability to fully 
validate the rules when needed, so they could somehow be seen as fully 
validating wallets. So far, a real SPV wallet is just vapourware.

* Fake SPV wallet, aka light wallet

All the so-called SPV wallets we have today are fake SPV according to the 
whitepaper definition. Since they validate nothing, the hardfork-ness profile 
is very different:

1. BIP34, 65, 66, CSV, and segwit have no hardfork-ness for light wallets. Block 
size HF proposals (BIP10x) and Bitcoin Unlimited also have no hardfork-ness 
(superficially, but not philosophically). Along the same line, even an 
inflation

Re: [bitcoin-dev] BIP draft: Extended block header hardfork

2017-04-02 Thread Johnson Lau via bitcoin-dev

> On 3 Apr 2017, at 04:39, Russell O'Connor <rocon...@blockstream.io> wrote:
> 
> On Sun, Apr 2, 2017 at 4:13 PM, Johnson Lau via bitcoin-dev 
> <bitcoin-dev@lists.linuxfoundation.org 
> <mailto:bitcoin-dev@lists.linuxfoundation.org>> wrote:
> • the witness of the first input of the coinbase transaction MUST 
> have exactly one stack item (the "extended header"), with the following data:
> • bytes 0 to 3: nHeight MUST be equal to the height of this 
> block (signed little endian)
> 
>  Someone told me a while back that it would be more natural if we move the 
> nHeight from the coinbase script to the coinbase locktime.  Have you 
> considered doing this?


Yes, it’d look much better. But I’m thinking of a different approach: instead 
of using a hash of …., we use the hash of the previous block for the 
coinbase input. With some new SIGHASH design, this allows people to pay to a 
child of a particular block. This is actually implemented in my spoonnet2 
branch. I’ll describe it in a BIP soon

However, what I’m trying to do in the extended block header is independent of 
the design of the coinbase tx. Here I’m trying to let people know the height 
just from a header and extended header (<300 bytes), without requiring all 
headers in the history.

Also I forgot to post the link of the BIP: 
https://github.com/jl2012/bips/blob/spoonnet/bip-extheader.mediawiki


[bitcoin-dev] BIP draft: Extended block header hardfork

2017-04-02 Thread Johnson Lau via bitcoin-dev
This is the first of a series of BIPs describing my “spoonnet” experimental 
hardfork. Recently many bitcoin businesses expressed their requirements for 
supporting a hardfork proposal. While it is proven to be extremely difficult to 
obtain community-wide consensus, spoonnet fulfills all the commonly requested 
features, including deterministic activation logic, strong and simple 2-way 
replay protection, wipe-out protection, and predictable resources use. A few 
more BIPs are coming to describe these features.

The activation is purely based on a flag day. Since it is very difficult to 
measure community consensus on-chain, this may only be done off-chain, and 
everyone switches to the new software when the vast majority agree. This is more 
a social issue than a technical one.

Reference implementation for consensus codes could be found at: 
https://github.com/jl2012/bitcoin/tree/spoonnet2 . This does not include 
mempool, wallet, and mining support. Mempool and wallet support are trickier 
due to replay-attack protection.

BIP: ? 
Layer: Consensus (hard fork) 
Title: Extended block header hardfork 
Author: Johnson Lau  
Comments-Summary: No comments yet. 
Comments-URI: 
Status: Draft 
Type: Standards Track 
Created: 2017-03-31 
License: BSD-2-Clause


Abstract

This BIP proposes a flexible and upgradable extended block header format 
through a hardfork.

Motivation

In the current Bitcoin protocol, the block header is fixed at 80 bytes with no 
space reserved for additional data. The coinbase transaction becomes the only 
practical location for new consensus-critical data, such as those proposed by 
BIP100 and BIP141. Although this preserves maximal backward compatibility for 
full nodes, it is not ideal for light clients because the size of the coinbase 
transaction and the depth of the Merkle tree are indefinite.

This BIP proposes an extended block header format with the following objectives:

• To provide a flexible header format which is easily upgradeable with 
softforks.
	• Old light nodes follow the hardfork chain if it has the most 
proof-of-work, but do not see any transactions.
	• Being compatible with the Stratum mining protocol to avoid mining 
machine upgrades.
	• Having a deterministic hardfork activation.
	• Being a permanent hardfork, as supporting nodes will not accept 
blocks mined under old rules after the hardfork is activated.

Specification

The following rules are activated when the median timestamp of the past 11 
blocks is equal to or greater than a to-be-determined time and after activation 
of BIP65.
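The median-time-past condition above can be sketched in a few lines (hypothetical helper names; the flag-day value itself is to be determined):

```python
# Median-time-past (MTP) activation check: the new rules apply once the
# median of the last 11 block timestamps reaches the flag-day time.

def median_time_past(timestamps):
    """Median of the most recent 11 block timestamps."""
    last11 = sorted(timestamps[-11:])
    return last11[len(last11) // 2]

def hardfork_active(timestamps, hardfork_time):
    """True once MTP is equal to or greater than the flag-day time."""
    return median_time_past(timestamps) >= hardfork_time
```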


• the nVersion of the block header MUST have the most significant bit 
(the sign bit) signalled.
	• for the purpose of counting softfork proposal signalling (BIP9), the 
sign bit is ignored.
• segregated witness MUST be enabled, if it had not been already 
activated through the BIP9 mechanism.
• the witness of the first input of the coinbase transaction MUST have 
exactly one stack item (the "extended header"), with the following data:
• bytes 0 to 3: nHeight MUST be equal to the height of this 
block (signed little endian)
• bytes 4 to 5: MUST be exactly 0x
• bytes 6 to 53: extra data with no meaning in Bitcoin protocol
• bytes 54 to 85: hashMerkleRoot the transaction Merkle root 
(calculated in the same way as the original Merkle root in the block header)
• bytes 86 to 117: hashWitnessRoot the witness Merkle root (NOT 
calculated in the way described in BIP141)
• bytes 118 to 121: nTx MUST be equal to the number of 
transactions in this block (unsigned little endian, minimum 1)
• bytes 122 to 129: nFees MUST be equal to the total 
transaction fee paid by all transactions, except the coinbase transaction, in 
the block (unsigned little endian)
• bytes 130 to 137: nWeight MUST be equal to or greater than 
the total weight of all transactions in the block (to be described in another 
BIP. NOT calculated in the way described in BIP141)
• bytes 138+ : Reserved space for future upgrades
• bytes 36 to 67 in the block header, the place originally for the 
hashMerkleRoot is replaced by the double SHA256 hash of the extended header.
• size of the extended header MUST be at least 138 bytes.
• wtxid of the coinbase transaction is calculated as if the witness of 
its first input is empty.
• the hashWitnessRoot is calculated with all wtxid as leaves, in a way 
similar to the hashMerkleRoot.
	• the OP_RETURN witness commitment rules described in BIP141 are not 
enforced.
	• The witness reserved value described in BIP141 is removed from the 
protocol.
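For illustration, a rough parser for the byte layout above might look like this. It is a sketch, not consensus code: offsets follow the byte ranges in the spec, and the fixed constant at bytes 4 to 5 appears truncated in the post, so it is not checked here.

```python
import struct

# Parse the extended header per the layout in the spec above (sketch only).
def parse_extended_header(data: bytes) -> dict:
    if len(data) < 138:
        raise ValueError("extended header must be at least 138 bytes")
    n_height, = struct.unpack_from("<i", data, 0)    # signed little endian
    n_tx, = struct.unpack_from("<I", data, 118)      # unsigned little endian
    n_fees, = struct.unpack_from("<Q", data, 122)
    n_weight, = struct.unpack_from("<Q", data, 130)
    return {
        "nHeight": n_height,
        "extra": data[6:54],             # no meaning in Bitcoin protocol
        "hashMerkleRoot": data[54:86],
        "hashWitnessRoot": data[86:118],
        "nTx": n_tx,
        "nFees": n_fees,
        "nWeight": n_weight,
        "reserved": data[138:],          # space for future upgrades
    }
```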
A special extheader softfork is defined, with the following BIP9 parameters:
• bit: 15
• starttime: 0
• timeout: 0x
Until the extheader softfork is activated, 

Re: [bitcoin-dev] Hard fork proposal from last week's meeting

2017-03-29 Thread Johnson Lau via bitcoin-dev

> On 29 Mar 2017, at 14:24, Emin Gün Sirer via bitcoin-dev 
>  wrote:
> 
> >Even when several of the experts involved in the document you refer has my 
> >respect and admiration, I do not agree with some of their conclusions
> 
> I'm one of the co-authors of that study. I'd be the first to agree with your 
> conclusion
> and argue that the 4MB size suggested in that paper should not be used without
> compensation for two important changes to the network.
> 
> Our recent measurements of the Bitcoin P2P network show that network speeds
> have improved tremendously. From February 2016 to February 2017, the average
> provisioned bandwidth of a reachable Bitcoin node went up by approximately 
> 70%. 
> And that's just in the last year.

4 * 144 * 30 = 17.3GB per month, or 207GB per year. Full node initialisation 
will become prohibitive for most users until a shortcut is made (e.g. witness 
pruning and UTXO commitments, but these are not trust-free)
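A quick back-of-envelope check of these figures (the 207GB figure above corresponds to a 360-day year; with 365 days it comes to about 210GB):

```python
# Storage growth with full 4 MB blocks at ~144 blocks per day.
block_mb = 4
blocks_per_day = 144

per_month_gb = block_mb * blocks_per_day * 30 / 1000   # ~17.3 GB per month
per_year_gb = block_mb * blocks_per_day * 365 / 1000   # ~210 GB per year
```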

> 
> Further, the emergence of high-speed block relay networks, like Falcon 
> (http://www.falcon-net.org )
> and FIBRE, as well as block compression, e.g. BIP152 and xthin, change the 
> picture dramatically. 

Also as the co-author of the selfish mining paper, you should know all these 
technology assume big miners being benevolent.

> 
> So, the 4MB limit mentioned in our paper should not be used as a protocol 
> limit today. 
> 
> Best,
> - egs
> 
> 
> 
> On Tue, Mar 28, 2017 at 3:36 PM, Juan Garavaglia via bitcoin-dev 
>  > wrote:
> Alphonse,
> 
>  
> 
> Even when several of the experts involved in the document you refer has my 
> respect and admiration, I do not agree with some of their conclusions some of 
> their estimations are not accurate other changed like Bootstrap Time, Cost 
> per Confirmed Transaction they consider a network of 450,000,00 GH and today 
> is 3.594.236.966 GH, the energy consumption per GH is old, the cost of 
> electricity is wrong even when the document was made and is hard to find any 
> parameter used that is valid for an analysis today.
> 
>  
> 
> Again with all respect to the experts involved in that analysis is not valid 
> today.
> 
>  
> 
> I tend to believe more in Moore’s law, Butters' Law of Photonics and Kryder’s 
> Law all has been verified for many years and support that 32 MB in 2020 are 
> possible and equals or less than 1 MB in 2010.
> 
>  
> 
> Again may be is not possible Johnson Lau and LukeJr invested a significant 
> amount of time investigating ways to do a safe HF, and may be not possible to 
> do a safe HF today but from processing power, bandwidth and storage is 
> totally valid and Wang Chung proposal has solid grounds.
> 
>  
> 
> Regards
> 
>  
> 
> Juan
> 
>  
> 
>  
> 
> From: Alphonse Pace [mailto:alp.bitc...@gmail.com 
> ] 
> Sent: Tuesday, March 28, 2017 2:53 PM
> To: Juan Garavaglia >; Wang Chun 
> <1240...@gmail.com >
> Cc: Bitcoin Protocol Discussion  >
> 
> 
> Subject: Re: [bitcoin-dev] Hard fork proposal from last week's meeting
> 
>  
> 
> Juan,
> 
>  
> 
> I suggest you take a look at this paper: 
> http://fc16.ifca.ai/bitcoin/papers/CDE+16.pdf 
>   It may help you form 
> opinions based in science rather than what appears to be nothing more than a 
> hunch.  It shows that even 4MB is unsafe.  SegWit provides up to this limit.
> 
>  
> 
> 8MB is most definitely not safe today.
> 
>  
> 
> Whether it is unsafe or impossible is the topic, since Wang Chun proposed 
> making the block size limit 32MiB.  
> 
>  
> 
>  
> 
> Wang Chun,
> 
> 
> Can you specify what meeting you are talking about?  You seem to have not 
> replied on that point.  Who were the participants and what was the purpose of 
> this meeting?
> 
>  
> 
> -Alphonse
> 
>  
> 
> On Tue, Mar 28, 2017 at 12:33 PM, Juan Garavaglia  > wrote:
> 
> Alphonse,
> 
>  
> 
> In my opinion if 1MB limit was ok in 2010, 8MB limit is ok on 2016 and 32MB 
> limit valid in next halving, from network, storage and CPU perspective or 1MB 
> was too high in 2010 what is possible or 1MB is to low today.
> 
>  
> 
> If is unsafe or impossible to raise the blocksize is a different topic. 
> 
>  
> 
> Regards
> 
>  
> 
> Juan
> 
>  
> 
>  
> 
> From: bitcoin-dev-boun...@lists.linuxfoundation.org 
>  
> [mailto:bitcoin-dev-boun...@lists.linuxfoundation.org 
> ] On Behalf Of Alphonse 
> Pace via bitcoin-dev
> Sent: Tuesday, March 28, 2017 2:24 PM
> To: Wang Chun <1240...@gmail.com >; Bitcoin 
> Protocol 

Re: [bitcoin-dev] Hard fork proposal from last week's meeting

2017-03-28 Thread Johnson Lau via bitcoin-dev

> On 29 Mar 2017, at 04:50, Tom Zander <t...@freedommail.ch 
> <mailto:t...@freedommail.ch>> wrote:
> 
> On Tuesday, 28 March 2017 19:34:23 CEST Johnson Lau via bitcoin-dev wrote:
>> So if we really want to get prepared for a potential HF with unknown
>> parameters,
> 
> That was not suggested.
> 
> Maybe you can comment on the very specific suggestion instead?
> 
> -- 
> Tom Zander
> Blog: https://zander.github.io <https://zander.github.io/>
> Vlog: https://vimeo.com/channels/tomscryptochannel 
> <https://vimeo.com/channels/tomscryptochannel>

Just take something like FlexTrans as an example. How could you get prepared for 
that without first finalising the spec?

Or changing the block interval from 10 minutes to some other value?

Also, fixing the sighash bug for legacy scripts?

There are many other ideas that require a HF:
https://en.bitcoin.it/wiki/User:Gmaxwell/alt_ideas 


Re: [bitcoin-dev] Hard fork proposal from last week's meeting

2017-03-28 Thread Johnson Lau via bitcoin-dev
You are probably not the first one nor the last one with such an idea. Actually, Luke 
wrote up a BIP with a similar idea in mind:

https://github.com/luke-jr/bips/blob/bip-hfprep/bip-hfprep.mediawiki 


Instead of just lifting the block size limit, he also suggested removing many 
other rules. I think he has given up this idea because it’s just too 
complicated.

If we really want to prepare for a hardfork, we probably want to do more than 
simply increasing the size limit. For example, my spoonnet proposal:

https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-February/013542.html
 


In a HF, we may want to relocate the witness commitment to a better place. We 
may also want to fix Satoshi's sighash bug. These are much more than a simple 
size increase.

So if we really want to get prepared for a potential HF with unknown 
parameters, I’d suggest setting a time bomb in the client, which will stop 
processing transactions and show a big warning in the GUI. The user may still have an 
option to continue with the old rules at their own risk.

Or, instead of increasing the block size, we make a softfork to decrease the 
block size to 1kB and block reward to 0, activating far in the future. This is 
similar to the difficulty bomb in ETH, which will freeze the network.

> On 29 Mar 2017, at 00:59, Wang Chun via bitcoin-dev 
>  wrote:
> 
> I've proposed this hard fork approach last year in Hong Kong Consensus
> but immediately rejected by coredevs at that meeting, after more than
> one year it seems that lots of people haven't heard of it. So I would
> post this here again for comment.
> 
> The basic idea is, as many of us agree, hard fork is risky and should
> be well prepared. We need a long time to deploy it.
> 
> Despite spam tx on the network, the block capacity is approaching its
> limit, and we must think ahead. Shall we code a patch right now, to
> remove the block size limit of 1MB, but not activate it until far in
> the future. I would propose to remove the 1MB limit at the next block
> halving in spring 2020, only limit the block size to 32MiB which is
> the maximum size the current p2p protocol allows. This patch must be
> in the immediate next release of Bitcoin Core.
> 
> With this patch in core's next release, Bitcoin works just as before,
> no fork will ever occur, until spring 2020. But everyone knows there
> will be a fork scheduled. Third party services, libraries, wallets and
> exchanges will have enough time to prepare for it over the next three
> years.
> 
> We don't yet have an agreement on how to increase the block size
> limit. There have been many proposals over the past years, like
> BIP100, 101, 102, 103, 104, 105, 106, 107, 109, 148, 248, BU, and so
> on. These hard fork proposals, with this patch already in Core's
> release, they all become soft fork. We'll have enough time to discuss
> all these proposals and decide which one to go. Take an example, if we
> choose to fork to only 2MB, since 32MiB already scheduled, reduce it
> from 32MiB to 2MB will be a soft fork.
> 
> Anyway, we must code something right now, before it becomes too late.



[bitcoin-dev] Spoonnet: another experimental hardfork

2017-02-06 Thread Johnson Lau via bitcoin-dev
Finally got some time over the Chinese New Year holiday to code and write this 
up. This is not the same as my previous forcenet ( 
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-January/013472.html
 ). It is much simpler. Trying to activate it on testnet will get you banned. 
Trying to activate it on mainnet before consensus is reached will make you lose 
money.

This proposal includes the following features:

1. A fixed starting time. Not dependent on miner signalling. However, it 
requires at least 51% of miners to actually build the new block format in order 
to get activated.

2. It has no mechanism to prevent a split. If 49% of miners insist on the 
original chain, they could keep going. Split prevention is a social problem, 
not a technical one.

3. It is compatible with the existing Stratum mining protocol. Only a pool 
software upgrade is needed

4. A new extended and flexible header is located at the witness field of the 
coinbase transaction

5. It is backward compatible with existing light wallets

6. Dedicated space for miners to put anything they want, which bitcoin users 
could completely ignore. Merge-mining friendly.

7. Small header space for miners to include non-consensus enforced bitcoin 
related data, useful for fee estimation etc.

8. A new transaction weight formula to encourage responsible use of UTXO

9. A linear growth of actual block size until certain limit

10. Sighash O(n^2) protection for legacy (non-segwit) outputs

11. Optional anti-transaction replay

12. A new optional coinbase tx format that allows additional inputs, including 
spending of immature previous coinbase outputs



Specification [Rationales]:


Activation:

* A "hardfork signalling block" is a block with the sign bit of header nVersion 
is set [Clearly invalid for old nodes; easy opt-out for light wallets]

* If the median-time-past of the past 11 blocks is smaller than the 
HardForkTime (exact time to be determined), a hardfork signalling block is 
invalid.

* A child of a hardfork signalling block MUST also be a hardfork signalling block

* Initial hardfork signalling is optional, even if the HardForkTime has passed 
[requires at least 51% of miners to actually build the new block format]

* HardForkTime is determined by a broad consensus of the Bitcoin community. 
This is the only way to prevent a split.


Extended header:

* Main header refers to the original 80 bytes bitcoin block header

* A hardfork signalling block MUST have an additional extended header

* The extended header is placed at the witness field of the coinbase 
transaction [There are 2 major advantages: 1. the coinbase witness is otherwise 
useless; 2. it significantly simplifies the implementation with its stack structure]

* There must be exactly 3 witness items (Header1; Header2 ; Header3)
**Header1 must be exactly 32 bytes of the original transaction hash Merkle root.
**Header2 is the secondary header. It must be 36-80 bytes. The first 4 bytes 
must be little-endian encoded number of transactions (minimum 1). The next 32 
bytes must be the witness Merkle root (to be defined later). The rest, if any, 
has no consensus meaning. However, miners MUST NOT use this space for 
non-bitcoin purposes [the additional space allows non-consensus enforced data to 
be included, easily accessible to light wallets]
**Header3 is the miner dedicated space. It must not be larger than 252 bytes. 
Anything put here has no consensus meaning [space for merge mining; non-full 
nodes could completely ignore data in this space; 252 is the maximum size 
allowed for a single-byte CompactSize]

* The main header commitment is H(Header1 | H(H(Header2) | H(Header3))), where 
H() = dSHA256() [The hardfork is transparent to light wallets, except one more 
32-byte hash is needed to connect a transaction to the root]

* To place the ext header, segwit becomes mandatory after the hardfork
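The commitment formula above can be sketched directly (illustrative code; note that a light wallet only needs Header1 plus the single 32-byte branch hash H(H(Header2)|H(Header3)) to reach the root):

```python
import hashlib

# Spoonnet main-header commitment as specified above:
#   commitment = dSHA256(Header1 || dSHA256(dSHA256(Header2) || dSHA256(Header3)))

def dsha256(b: bytes) -> bytes:
    """Double SHA256, as used throughout the Bitcoin protocol."""
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def main_header_commitment(header1: bytes, header2: bytes, header3: bytes) -> bytes:
    # A light wallet verifying a transaction against header1 only needs this
    # one extra 32-byte branch hash.
    branch = dsha256(dsha256(header2) + dsha256(header3))
    return dsha256(header1 + branch)
```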


A “backdoor” softfork to relax the size limits of Header 2 and Header 3:

* A special BIP9 softfork is defined with bit-15. If this softfork is 
activated, full nodes will not enforce the size limit for Header 2 and Header 
3. [To allow header expansion without a hardfork. Avoid miner abuse while 
providing flexibility. Expansion might be needed for new commitments like fraud 
proof commitments]


Anti-tx-replay:

* Hardfork network version bit is 0x0200. A tx is invalid if the highest 
nVersion byte is not zero, and the network version bit is not set.

* Masked tx version is nVersion with the highest byte masked. If the masked version 
is 3 or above, sighash for OP_CHECKSIG and the like is calculated using BIP143, except 
0x0200 is added to the nHashType (the nHashType in the signature is still a 
1-byte value) [ensures a clean split of signatures; optionally fixes the O(n^2) 
problem]

* Pre-hardfork policy change: nVersion is determined by the masked tx version 
for policy purpose. Setting of Pre-hardfork network version bit 0x0100 is 
allowed.
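A sketch of the opt-in replay-protection rule described above. The top byte of the 32-bit nVersion carries the network bits; the exact constant is an assumption here, since the values appear truncated in the post.

```python
# Illustrative anti-replay check: a tx is invalid on the hardfork chain if the
# highest nVersion byte is non-zero AND the hardfork network bit is unset.
# HARDFORK_BIT is an assumption (constants are truncated in the post).

HARDFORK_BIT = 0x02

def masked_version(n_version: int) -> int:
    """nVersion with the highest byte masked off."""
    return n_version & 0x00FFFFFF

def valid_on_hardfork_chain(n_version: int) -> bool:
    top_byte = (n_version >> 24) & 0xFF
    return top_byte == 0 or (top_byte & HARDFORK_BIT) != 0
```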

* Details: 

Re: [bitcoin-dev] Anti-transaction replay in a hardfork

2017-01-27 Thread Johnson Lau via bitcoin-dev

> On 26 Jan 2017, at 03:32, Tom Harding  wrote:
> 
> On 1/24/2017 8:03 PM, Johnson Lau wrote:
>> it seems they are not the same: yours is opt-out, while mine is opt-in.
> 
> I missed this.  So in fact you propose a self-defeating requirement on the 
> new network, which would force unmodified yet otherwise compatible systems to 
> change to support the new network at all. This is unlikely to be included in 
> new network designs.
> 
> I suggest that the opt-out bits proposal comes from a more realistic position 
> that would actually make sense for everyone.
> 

I think there are some misunderstanding. You’d better read my source code if my 
explanation is not clear.

From my understanding, our proposals are the same, just with a bitwise not (~) 
before the network characteristic byte. So you set a bit to opt out of a network, 
while I set a bit to opt in to a network (and opt out of any other)
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


[bitcoin-dev] Consensus critical limits in Bitcoin protocol and proposed block resources limit accounting

2017-01-27 Thread Johnson Lau via bitcoin-dev
There are many consensus critical limits scattered all over the Bitcoin 
protocol. The first part of this post is to analyse what the current limits 
are. These limits could be arranged into different categories:

1. Script level limit. Some limits are restricted to scripts, including script 
size (10,000 bytes), nOpCount (201), stack plus alt-stack size (1,000), and stack 
push size (520). If these limits are passed, they won’t have any effect on the 
limits of the other levels.

2. Output value limit: any single output value must be >=0 and <= 21 million 
bitcoin

3. Transaction level limit: the only transaction-level limit we currently have 
is that the total output value must be equal to or smaller than the total input 
value for a non-coinbase tx.

4. Block level limit: there are several block level limits:
a. The total output value of all txs must be equal to or smaller than the total 
input value with block reward.
b. The serialised size including block header and transactions must not be over 
1MB. (or 4,000,000 in terms of tx weight with segwit)
c. The total nSigOpCount must not be over 20,000 (or 80,000 nSigOpCost with 
segwit)

There is an unavoidable layer violation in terms of the block level total 
output value. However, all the other limits are restricted to its level. 
Particularly, the counting of nSigOp does not require execution of scripts. 
BIP109 (now withdrawn) tried to change this by implementing block-level 
SignatureHash and SigOp limits, counting the accurate values by 
running the scripts.

So currently, we have 2 somewhat independent block resources limits: weight and 
SigOp. A valid block must not exceed any of these limits. However, for miners 
trying to maximise the fees under these limits, they need to solve a non-linear 
equation. It’s even worse for wallets trying to estimate fees, as they have no 
idea what txs miners are trying to include. In reality, everyone just ignores 
SigOps for fee estimation, as the size/weight is almost always the dominant 
factor.

In order to not introduce further non-linearity with segwit, after examining 
different alternatives, we decided that the block weight limit should be a 
simple linear function:  3*base size + total size, which allows bigger block 
size and provides incentives to limit UTXO growth. With normal use, this allows 
up to 2MB of block size, and even more if multi-sig becomes more popular. A 
side effect is that allows a theoretical way to fill up the block to 4MB with 
mostly non-transaction data, but that’d only happen if a miner decide to do it 
due to non-standardness. (and this is actually not too bad, as witness could be 
pruned in the future)

Some also criticised that the weight accounting would make a “simple 2MB 
hardfork” more dangerous, as the theoretical limits will be 8MB which is too 
much. This is a complete straw man argument, as with a hardfork, one could 
introduce any rules at will, including revolutionising the calculation of block 
resources, as shown below.

—
Proposal: a new block resources limit accounting

Objectives: 
1. linear fee estimation
2. a single, unified, block level limit for everything we want to limit
3. do not require expensive script evaluation

Assumptions:
1. the maximum base block size is about 1MB (for a hardfork with bigger block, 
it just needs to upscale the value)
2. a hardfork is done (despite some of these could also be done with a softfork)

Version 1: without segwit
The tx weight is the maximum of the following values:
— Serialised size in byte
— accurate nSigOpCount * 50 (static counting of SigOps in scriptSig, 
redeemScript, and previous scriptPubKey, but not the new scriptPubKey)
The block level limit is 1,000,000

Although this looks similar to the existing approach, this actually makes the 
fee estimation a linear problem. Wallets may now calculate both values for a tx 
and take the maximum, and compare with other txs on the same basis. On the 
other hand, the total size and SigOpCount of a block may never go above the 
existing limits (1MB and 20,000) no matter what the txs look like. (In some edge 
cases, the max block size might be smaller than 1MB, if the weight of some 
transactions is dominated by the SigOpCount)
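The version 1 rule above can be sketched as (illustrative only):

```python
# Version 1 weight sketch: max of serialised size and 50 * accurate sigop
# count, with a single block-level limit of 1,000,000.

BLOCK_LIMIT_V1 = 1_000_000

def tx_weight_v1(serialized_size: int, sigop_count: int) -> int:
    return max(serialized_size, 50 * sigop_count)

# Both legacy limits are implied: total size <= 1,000,000 bytes, and
# total sigops <= 1,000,000 / 50 = 20,000.
```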

Version 2: extending version 1 with segwit
The tx weight is the maximum of the following values:
— Serialised size in byte * 2
— Base size * 3 + total size
— accurate SigOpCount * 50 (as a hardfork, segwit and non-segwit SigOp could be 
counted in the same way and no need to scale) 
The block level limit is 4,000,000

For similar reasons the fee estimation is also a linear problem. An interesting 
difference between this and BIP141 is this will limit the total block size 
under 2MB, as 4,000,000 / 2 (the 2 as the scaling factor for the serialised 
size). If the witness inflation really happens (which I highly doubt as it’s a 
miner initiated attack), we could introduce a similar limit just with a 
softfork.
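Version 2 can be sketched the same way. Since for a segwit transaction the serialised size equals the total size, the three terms from the list above collapse to (illustrative only):

```python
# Version 2 weight sketch, extending version 1 with segwit:
# max(2 * total size, 3 * base size + total size, 50 * sigops), limit 4,000,000.

BLOCK_LIMIT_V2 = 4_000_000

def tx_weight_v2(base_size: int, total_size: int, sigop_count: int) -> int:
    return max(2 * total_size,             # serialised size * 2
               3 * base_size + total_size,
               50 * sigop_count)

# The first term caps the total block size below 2 MB: 2 * size <= 4,000,000.
```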

Version 3: extending version 2 to 

Re: [bitcoin-dev] Three hardfork-related BIPs

2017-01-26 Thread Johnson Lau via bitcoin-dev
I can’t recommend your first 2 proposals. But I only have the time to talk 
about the first one for now.

There are 2 different views on this topic:

1. “The block size is too small and people can’t buy a coffee with an on-chain 
transaction. Let’s just remove the limit”

2. “The block size is too big and people can’t run full nodes or do initial 
blockchain download (IBD). Let’s just reduce the limit”

For me, both approaches just show a lack of creativity and a lack of 
responsibility. Both try to solve one problem while disregarding all the other 
consequences.

The 1MB limit is here, whether you like it or not; it’s the current consensus. 
Any attempt to change this limit (up or down) requires wide consensus of the 
whole community, which might be difficult.

Yes, I agree with you that the current 1MB block size is already too big for 
many people to run a full node. That’s bad, but it doesn’t mean we have no 
options other than reducing the block size. Just to cite some:

1. Blockchain pruning is already available, so the storage of the blockchain is 
already an O(1) problem. The block size is not that important for this part
2. UTXO size is an O(n) problem, but we could limit its growth without limiting 
the block size, by charging more for UTXO creation and offering an incentive 
for UTXO spending
3. For a non-mining full node, latency is not critical. 1MB per 10 minutes is 
not a problem except on a mobile network. But I don’t think a mobile network 
has ever been considered a suitable way to run a full node
4. For mining nodes, we already have compact blocks, xthin blocks, and FIBRE
5. For IBD, reducing the size won’t help much as it is already too big for many 
people. The right way to solve the IBD issue is to implement long-latency UTXO 
commitments. Nodes would calculate a UTXO commitment every 1000 blocks, 
committing to the UTXO status as of 1000 blocks earlier (e.g. block 11000 would 
commit to the UTXO of block 10000). This is a background process and the 
overhead is negligible. When such commitments have been confirmed for 
sufficiently long (e.g. 1 year), people will assume they are correct, and start 
IBD from that point by downloading the UTXO set from some untrusted source. 
That will drastically reduce the time for IBD
6. Whether or not we change the block size limit, we need to implement a 
fraud-proof system to allow probabilistic validation by SPV nodes. Then even a 
smartphone may validate 0.1% of the blockchain, and with many people using 
phone wallets, it will only be a net gain for network security 

For points 2 and 6 above, I have some ideas implemented in my experimental 
hardfork.
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-January/013472.html
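As a sketch of the schedule in point 5 (the 1000-block interval is the example figure from the text; I read the example as block 11000 committing to the UTXO set as of block 10000):

```python
COMMITMENT_INTERVAL = 1000  # example interval from point 5

def committed_utxo_height(height: int):
    """Return the height whose UTXO set a block at `height` commits to,
    or None if this block carries no commitment."""
    if height == 0 or height % COMMITMENT_INTERVAL != 0:
        return None
    return height - COMMITMENT_INTERVAL
```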
 



> On 27 Jan 2017, at 09:06, Luke Dashjr via bitcoin-dev 
>  wrote:
> 
> I've put together three hardfork-related BIPs. This is parallel to the 
> ongoing 
> research into the MMHF/SHF WIP BIP, which might still be best long-term.
> 
> 1) The first is a block size limit protocol change. It also addresses three 
> criticisms of segwit: 1) segwit increases the block size limit which is 
> already considered by many to be too large; 2) segwit treats pre-segwit 
> transactions “unfairly” by giving the witness discount only to segwit 
> transactions; and 3) that spam blocks can be larger than blocks mining 
> legitimate transactions. This proposal may (depending on activation date) 
> initially reduce the block size limit to a more sustainable size in the short-
> term, and gradually increase it up over the long-term to 31 MB; it will also 
> extend the witness discount to non-segwit transactions. Should the initial 
> block size limit reduction prove to be too controversial, miners can simply 
> wait to activate it until closer to the point where it becomes acceptable 
> and/or increases the limit. However, since the BIP includes a hardfork, the 
> eventual block size increase needs community consensus before it can be 
> deployed. Proponents of block size increases should note that this BIP does 
> not interfere with another more aggressive block size increase hardfork in 
> the 
> meantime. I believe I can immediately recommend this for adoption; however, 
> peer and community review are welcome to suggest changes.
> Text: https://github.com/luke-jr/bips/blob/bip-blksize/bip-blksize.mediawiki
> Code: https://github.com/bitcoin/bitcoin/compare/master...luke-jr:bip-blksize 
> (consensus code changes only)
> 
> 2) The second is a *preparatory* change, that should allow trivially 
> transforming certain classes of hardforks into softforks in the future. It 
> essentially says that full nodes should relax their rule enforcement, after 
> sufficient time that would virtually guarantee they have ceased to be 
> enforcing the full set of rules anyway. This allows these relaxed rules to be 
> modified or removed in a softfork, 

Re: [bitcoin-dev] Changing the transaction version number to be varint

2017-01-26 Thread Johnson Lau via bitcoin-dev

> On 20 Jan 2017, at 22:02, Tom Zander via bitcoin-dev 
>  wrote:
> 
> The way to do this is that from a certain block-height the current 
> transaction format labels bytes 2, 3 & 4 to be unused.
> From that same block height the interpretation of the first byte is as 
> varint.
> Last, we add the rule from that block-height that only transactions that do 
> not lie about their version number are valid. Which means version 1.
> 
> Do people see any problems with this?
> This could be done as a soft fork.

Yes, because:

a) what you are talking about is a hardfork, because existing nodes will not be 
able to deserialise the transaction. They will forever interpret the first 4 
bytes as nVersion.

b) it is not a “lie” to use non-version-1 txs. They have been permitted since 
v0.1, and version 2 txs are already in use due to BIP68.

c) if you are talking about changing the tx serialisation just for network 
transfer, it’s just a p2p protocol upgrade, not softfork nor hardfork
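For context, Bitcoin already serialises variable-length integers elsewhere (e.g. the input and output counts) using the "CompactSize" format, which a proposal like this would presumably reuse. A sketch:

```python
def encode_compactsize(n: int) -> bytes:
    """CompactSize: 1, 3, 5, or 9 bytes depending on magnitude (little-endian)."""
    if n < 0xfd:
        return bytes([n])
    if n <= 0xffff:
        return b"\xfd" + n.to_bytes(2, "little")
    if n <= 0xffffffff:
        return b"\xfe" + n.to_bytes(4, "little")
    return b"\xff" + n.to_bytes(8, "little")

def decode_compactsize(data: bytes):
    """Return (value, number of bytes consumed)."""
    first = data[0]
    if first < 0xfd:
        return first, 1
    width = {0xfd: 2, 0xfe: 4, 0xff: 8}[first]
    return int.from_bytes(data[1:1 + width], "little"), 1 + width
```

Point (a) above is exactly that an old node unconditionally reads a fixed 4-byte nVersion at the start of a tx, so no softfork can make it parse a varint there instead.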

-

There are 3 ways to introduce new tx formats:

1. through a softfork, and make the old clients blind to the new format. That’s 
the segwit approach

2. through a hardfork. Forget the old clients and require new clients to 
understand the new format. That’s the FlexTrans approach (in my understanding)

3. p2p only, which won’t affect consensus. No one could stop you if you tried 
to copy a block by writing it in your native language and passing it to your peer.

Either way, one could introduce whatever new format one wants.
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


[bitcoin-dev] Extension block softfork proposal

2017-01-26 Thread Johnson Lau via bitcoin-dev
This is a pre-BIP which allows extra block space through a soft-fork. It is 
completely transparent to existing wallets (both send and receive), but new 
wallets taking advantage of the extra block space will have a very different 
user experience.

I’m sure this is controversial, but I think it’s an interesting academic topic. 
If we ever had any fully consensus-enforced 2-way-peg side chain design, it’d 
be something like this.

Objectives:

1. Provide more block space through a softfork
2. Completely transparent to existing wallets
3. Not breaking any current security assumptions


Specification and Terminology:

Main block / block: the current bitcoin block (with witness if BIP141 is 
activated)

Main transaction / tx: txs in the current bitcoin network (with witness)

Main UTXO / UTXO: the normal UTXO

Extension transaction / xtx: transactions with the same format as the witness 
tx format described in BIP141, but without the scriptSig field and with the 
“flag” set to 0x02. Only witness programs are allowed as the scriptPubKey of an 
xtx

Extension block / xblock: an xblock is a collection of xtxs. Each block may 
have 0 or 1 xblock when this softfork is activated.

Extension UTXO / xUTXO: the UTXO set of the extension block.

Bridging witness program: A new type of witness program is defined. The witness 
program version is OP_2. The program length could be 4 to 40 bytes. The first 
byte ("direction flag”[note 1]) must be 0x00 (indicating block->xblock) or 0x01 
(indicating xblock->block). Like P2WPKH and P2WSH, the bridging program could 
be wrapped in P2SH. There are 2 ways to spend this program type on the main 
block:
  1) Spend it like a usual witness program with a tx. For example, if the 
bridging program is OP_2 <0x14{20 bytes}>, it could be spent like a 
version-0 20-byte program, i.e. P2WPKH. Nothing special would happen in this 
case
  2) Spend it like a usual witness program with a special xtx, the genesis xtx. 
In this case, the miner including this xtx will need to do more, as described 
below.
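A minimal validity check for this program type, sketched from the rules just listed (witness version 2 here stands for OP_2):

```python
DIRECTION_TO_XBLOCK = 0x00  # block -> xblock
DIRECTION_TO_BLOCK = 0x01   # xblock -> block

def is_bridging_program(witness_version: int, program: bytes) -> bool:
    """Version must be OP_2, program length 4 to 40 bytes, and the
    first byte must be a valid direction flag."""
    return (witness_version == 2
            and 4 <= len(program) <= 40
            and program[0] in (DIRECTION_TO_XBLOCK, DIRECTION_TO_BLOCK))
```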

Integrating UTXO: a special UTXO whose value is >= the total value of all 
existing xUTXOs and whose scriptPubKey is OP_1. (To make the spec easier to 
read, assume here that we already have a zero-value UTXO with its outpoint 
hardcoded as the initial integrating UTXO. In practice we may have the first 
miner making an xblock create the initial integrating UTXO)

Integrating transaction: if a block has an xblock, the second transaction in 
the block must be the integrating transaction. The inputs include the spent 
UTXOs of all the genesis xtxs in this xblock. If an input is a bare witness 
program, the witness must be empty. If it is a P2SH witness program, the 
scriptSig must be the bridging witness program and the witness must be empty. 
The last input must be the original integrating UTXO, with empty witness and 
scriptSig. If no one is trying to send money back from the xblock to the main 
block, the only output is the updated integrating UTXO, whose value must be >= 
the total value of all xUTXOs


Up to now, I have described how we could send bitcoins from the main UTXO set 
to the xUTXO set. Simply speaking, people send money to a new form of witness 
program. They have the flexibility to spend it in the main block or the xblock. 
Nothing special would happen if they spend it in the main block. If they send 
it to the xblock, the value of such a UTXO will be collected by the integrating 
UTXO.

After people have sent money to the xblock, they could trade inside the xblock 
just like in the main block. Since the xblock is invisible to pre-softfork 
users, we could have whatever size limit we want for the xblock, which is not a 
topic of this proposal.

The tricky part is sending from the xblock to the main block.

Returning transaction: a returning transaction is a special xtx, sending money 
to a bridging witness program with a direction flag of 0x01. These bridging 
witness programs won’t be recorded in the xUTXO set. Instead, an output is 
added to the integrating tx with the bridging witness program and the 
corresponding value, called the “returning UTXO”. Returning UTXOs are not 
spendable until confirmed by 100 blocks. The updated integrating UTXO is the 
last output, and is not restricted by the 100-block requirement

Fee collection in the xblock: Same as with normal txs, people pay fees in the 
xblock by making output value < input value. Since the value of the integrating 
UTXO is >= the total value of all existing xUTXOs, fees paid in the xblock 
reduce the value of the integrating UTXO, and miners are paid through the usual 
coinbase tx as fees.
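Putting the value flows of the last few paragraphs together, the peg balance can be summarised as follows (my own accounting sketch, not part of the spec):

```python
def updated_integrating_value(old_value: int, genesis_inflow: int,
                              returning_outflow: int, xblock_fees: int) -> int:
    """Genesis xtxs move value into the xblock (growing the peg);
    returning UTXOs and fees paid inside the xblock shrink it."""
    new_value = old_value + genesis_inflow - returning_outflow - xblock_fees
    assert new_value >= 0, "the xblock cannot release more value than it holds"
    return new_value
```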

xblock commitment: 2 xblock merkle root, with and without witness, are placed 
exactly after the witness commitment in the coinbase tx.(maybe we could use the 
coinbase reserved witness value, details TBD). If there is no xblock 
commitment, xblock must be empty and integrating tx is not allowed.


Same as any 2-way-peg proposal, sending money from the side chain to the main 
chain is always the most 

Re: [bitcoin-dev] Anti-transaction replay in a hardfork

2017-01-26 Thread Johnson Lau via bitcoin-dev
>>> …permanent division.
>>> 
>>> On 1/25/17, Matt Corallo via bitcoin-dev
>>> <bitcoin-dev@lists.linuxfoundation.org> wrote:
>>>> "A. For users on both existing and new fork, anti-replay is an option,
>>>> not mandatory"
>>>> 
>>>> To maximize fork divergence, it might make sense to require this. Any
>>>> sensible proposal for a hard fork would include a change to the sighash
>>>> anyway, so might as well make it required, no?
>>>> 
>>>> Matt
>>>> 
>>>> On 01/24/17 14:33, Johnson Lau via bitcoin-dev wrote:
>>>>> This is a pre-BIP. Just need some formatting to make it a formal BIP
>>>>> 
>>>>> Motivation:
>>>>> 
>>>>> In general, hardforks are consensus rule changes that make currently
>>>>> invalid transactions / blocks valid. It requires a very high degree of
>>>>> consensus and all economic active users migrate to the new rules at the
>>>>> same time. If a significant amount of users refuse to follow, a
>>>>> permanent ledger split may happen, as demonstrated by Ethereum (“DAO
>>>>> hardfork"). In the design of DAO hardfork, a permanent split was not
>>>>> anticipated and no precaution has been taken to protect against
>>>>> transaction replay attack, which led to significant financial loss for
>>>>> some users.
>>>>> 
>>>>> A replay attack is an attempt to replay a transaction of one network on
>>>>> another network. It is normally impossible, for example between Bitcoin
>>>>> and Litecoin, as different networks have completely different ledgers.
>>>>> The txid as SHA256 hash guarantees that replay across network is
>>>>> impossible. In a blockchain split, however, since both forks share the
>>>>> same historical ledger, replay attack would be possible, unless some
>>>>> precautions are taken.
>>>>> 
>>>>> Unfortunately, fixing problems in bitcoin is like repairing a flying
>>>>> plane. Preventing replay attack is constrained by the requirement of
>>>>> backward compatibility. This proposal has the following objectives:
>>>>> 
>>>>> A. For users on both existing and new fork, anti-replay is an option,
>>>>> not mandatory.
>>>>> 
>>>>> B. For transactions created before this proposal is made, they are not
>>>>> protected from anti-replay. The new fork has to accept these
>>>>> transactions, as there is no guarantee that the existing fork would
>>>>> survive nor maintain any value. People made time-locked transactions in
>>>>> anticipation that they would be accepted later. In order to maximise
>>>>> the
>>>>> value of such transactions, the only way is to make them accepted by
>>>>> any
>>>>> potential hardforks.
>>>>> 
>>>>> C. It doesn’t require any consensus changes in the existing network to
>>>>> avoid unnecessary debate.
>>>>> 
>>>>> D. As a beneficial side effect, the O(n^2) signature checking bug could
>>>>> be fixed for non-segregated witness inputs, optionally.
>>>>> 
>>>>> Definitions:
>>>>> 
>>>>> “Network characteristic byte” is the most significant byte of the
>>>>> nVersion field of a transaction. It is interpreted as a bit vector, and
>>>>> denotes up to 8 networks sharing a common history.
>>>>> 
>>>>> “Masked version” is the transaction nVersion with the network
>>>>> characteristic byte masked.
>>>>> 
>>>>> “Existing network” is the Bitcoin network with existing rules, before a
>>>>> hardfork. “New network” is the Bitcoin network with hardfork rules. (In
>>>>> the case of DAO hardfork, Ethereum Classic is the existing network, and
>>>>> the now called Ethereum is the new network)
>>>>> 
>>>>> “Existing network characteristic bit” is the lowest bit of network
>>>>> characteristic byte
>>>>> 
>>>>> “New network characteristic bit” is the second lowest bit of network
>>>>> characteristic byte
>>>>> 
>>>>> Rules in new network:
>>>>> 
>>>>> 1. If the network characteristic byte is non-zero, and the new netwo

Re: [bitcoin-dev] Anti-transaction replay in a hardfork

2017-01-25 Thread Johnson Lau via bitcoin-dev
I don’t think this is how blockchain consensus works. If there is a split, it 
becomes 2 incompatible ledgers. Bitcoin is not a trademark, and you don’t need 
permission to hardfork it. And what you suggest is also technically infeasible, 
as the miners on the new chain may not have a consensus on what’s happening in 
the old chain.

> On 26 Jan 2017, at 15:03, Chris Priest <cp368...@ohiou.edu> wrote:
> 
> I don't think the solution should be to "fix the replay attack", but
> rather to "force the replay effect". The fact that transactions can be
> replayed should be seen as a good thing, and not something that should
> be fixed, or even called an "attack".
> 
> The solution should be to create a "bridge" that replays all
> transactions from one network over to the other, and vice-versa. A
> fork should be transparent to the end-user. Forcing the user to choose
> which network to use is bad, because 99% of people that use bitcoin
> don't care about developer drama, and will only be confused by the
> choice. When a user moves coins mined before the fork date, both
> blockchains should record that transaction. Also a rule should be
> introduced that prevents users "tainting" their prefork-mined coins
> with coins mined after the fork. All pre-fork mined coins should
> "belong" to the network with hashpower majority. No other networks
> should be able to claim pre-forked coins as being part of their
> issuance (and therefore part of market cap). Market cap may be
> bullshit, but it is used a lot in the cryptosphere to compare coins to
> each other.
> 
> The advantage of pre-fork coins being recorded on both forks is that
> if one fork goes extinct, no one loses any money. This setup
> encourages the minority chain to die,and unity returned. If pre-fork
> coins change hands on either fork (and not on the other), then holders
> have an incentive to not let their chain die, and the networks will be
> irreversibly split forever. The goal should be unity not permanent
> division.
> 
> On 1/25/17, Matt Corallo via bitcoin-dev
> <bitcoin-dev@lists.linuxfoundation.org> wrote:
>> "A. For users on both existing and new fork, anti-replay is an option,
>> not mandatory"
>> 
>> To maximize fork divergence, it might make sense to require this. Any
>> sensible proposal for a hard fork would include a change to the sighash
>> anyway, so might as well make it required, no?
>> 
>> Matt
>> 
>> On 01/24/17 14:33, Johnson Lau via bitcoin-dev wrote:
>>> This is a pre-BIP. Just need some formatting to make it a formal BIP
>>> 
>>> Motivation:
>>> 
>>> In general, hardforks are consensus rule changes that make currently
>>> invalid transactions / blocks valid. It requires a very high degree of
>>> consensus and all economic active users migrate to the new rules at the
>>> same time. If a significant amount of users refuse to follow, a
>>> permanent ledger split may happen, as demonstrated by Ethereum (“DAO
>>> hardfork"). In the design of DAO hardfork, a permanent split was not
>>> anticipated and no precaution has been taken to protect against
>>> transaction replay attack, which led to significant financial loss for
>>> some users.
>>> 
>>> A replay attack is an attempt to replay a transaction of one network on
>>> another network. It is normally impossible, for example between Bitcoin
>>> and Litecoin, as different networks have completely different ledgers.
>>> The txid as SHA256 hash guarantees that replay across network is
>>> impossible. In a blockchain split, however, since both forks share the
>>> same historical ledger, replay attack would be possible, unless some
>>> precautions are taken.
>>> 
>>> Unfortunately, fixing problems in bitcoin is like repairing a flying
>>> plane. Preventing replay attack is constrained by the requirement of
>>> backward compatibility. This proposal has the following objectives:
>>> 
>>> A. For users on both existing and new fork, anti-replay is an option,
>>> not mandatory.
>>> 
>>> B. For transactions created before this proposal is made, they are not
>>> protected from anti-replay. The new fork has to accept these
>>> transactions, as there is no guarantee that the existing fork would
>>> survive nor maintain any value. People made time-locked transactions in
>>> anticipation that they would be accepted later. In order to maximise the
>>> value of such transactions, the only way is to make them accepted by any
>>> pot

Re: [bitcoin-dev] Anti-transaction replay in a hardfork

2017-01-24 Thread Johnson Lau via bitcoin-dev

> On 25 Jan 2017, at 15:29, Natanael  wrote:
> 
> 
> Den 25 jan. 2017 08:22 skrev "Johnson Lau":
> Assuming Alice is paying Bob with an old style time-locked tx. Under your 
> proposal, after the hardfork, Bob is still able to confirm the time-locked tx 
> on both networks. To fulfil your new rules he just needs to send the outputs 
> to himself again (with different tx format). But as Bob gets all the money on 
> both forks, it is already a successful replay
> 
> Why would Alice be sitting on an old-style signed transaction with UTXO:s 
> none of which she controls (paying somebody else), with NO ability to 
> substitute the transaction for one where she DOES control an output, leaving 
> her unable to be the one spending the replay protecting child transaction? 

If Alice still has full control, she is already protected by my proposal, which 
does not require any protecting child transaction.

But in many cases she may not have full control. To make it clearer, consider 
that it’s actually a 2-of-2 multisig of Alice and Bob, and the time-locked tx 
is sending to Bob. If the time-locked tx is unprotected in the first place, Bob 
will get all the money from both forks anyway, as there is no reason for him to 
renegotiate with Alice.


Re: [bitcoin-dev] Anti-transaction replay in a hardfork

2017-01-24 Thread Johnson Lau via bitcoin-dev
Assuming Alice is paying Bob with an old-style time-locked tx. Under your 
proposal, after the hardfork, Bob is still able to confirm the time-locked tx 
on both networks. To fulfil your new rules he just needs to send the outputs to 
himself again (with a different tx format). But as Bob gets all the money on 
both forks, it is already a successful replay


> On 25 Jan 2017, at 15:15, Natanael  wrote:
> 
> 
> Den 25 jan. 2017 08:06 skrev "Johnson Lau":
> What you describe is not a fix of replay attack. By confirming the same tx in 
> both network, the tx has been already replayed. Their child txs do not matter.
> 
> Read it again. 
> 
> The validation algorithm would be extended so that the transaction can't be 
> replayed, because replaying it in the other network REQUIRES a child 
> transaction in the same block that is valid, a child transaction the is 
> unique to the network. By doing this policy change simultaneously in both 
> networks, old pre-signed transactions *can not be replayed by anybody but the 
> owner* of the coins (as he must spend them immediately in the child 
> transaction). 
> 
> It means that as soon as spent, the UTXO sets immediately and irrevocably 
> diverges across the two networks. Which is the entire point, isn't it? 



Re: [bitcoin-dev] Anti-transaction replay in a hardfork

2017-01-24 Thread Johnson Lau via bitcoin-dev
What you describe is not a fix for the replay attack. By confirming the same tx 
on both networks, the tx has already been replayed. Their child txs do not 
matter.

> On 25 Jan 2017, at 09:22, Natanael <natanae...@gmail.com> wrote:
> 
> 
> 
> Den 24 jan. 2017 15:33 skrev "Johnson Lau via bitcoin-dev" 
> <bitcoin-dev@lists.linuxfoundation.org>:
> 
> 
> B. For transactions created before this proposal is made, they are not 
> protected from anti-replay. The new fork has to accept these transactions, as 
> there is no guarantee that the existing fork would survive nor maintain any 
> value. People made time-locked transactions in anticipation that they would 
> be accepted later. In order to maximise the value of such transactions, the 
> only way is to make them accepted by any potential hardforks.
> 
> This can be fixed. 
> 
> Make old-format transactions valid *only when paired with a fork-only 
> follow-up transaction* which is spending at least one (or all) of the outputs 
> of the old-format transaction. 
> 
> (Yes, I know this introduces new statefulness into the block validation 
> logic. Unfortunately it is necessary for maximal fork safety. It can be 
> disabled at a later time if ever deemed no longer necessary.)
> 
> Meanwhile, the old network SHOULD soft-fork in an identical rule with a 
> follow-up transaction format incompatible with the fork. 
> 
> This means that old transactions can not be replayed across forks/networks, 
> because they're not valid when stand-alone. It also means that all wallet 
> clients either needs to be updated OR paired with software that intercepts 
> generated transactions, and automatically generates the correct follow-up 
> transaction for it (old network only). 
> 
> The rules should be that old-format transactions can't reference new-format 
> transactions, even if only a softfork change differ between the formats. This 
> prevents an unnecessary amount of transactions pairs generated by old 
> wallets. Thus they can spend old outputs, but not spend new ones. 
> 
> 



Re: [bitcoin-dev] Anti-transaction replay in a hardfork

2017-01-24 Thread Johnson Lau via bitcoin-dev
Yes, it’s similar. I’ll quote your design if/when I formalise my BIP. But it 
seems they are not the same: yours is opt-out, while mine is opt-in.

However, my proposal nowhere depends on standardness for the protection. It 
depends on the new network enforcing a new SignatureHash for txs with an 
nVersion not used in the existing network. This is itself a hardfork, and the 
existing network would never accept such txs.

This is to avoid requiring any consensus changes to the existing network, as 
there is no guarantee that such a softfork would be accepted by the existing 
network. If the new network wants to protect its users, it’d be trivial for 
them to include a SignatureHash hardfork like this, along with the other 
hardfork changes. Further hardforks would only require changing the network 
characteristic bit, not the SignatureHash.

If the hardfork designers don’t like the fix of BIP143, there are many other 
options. The simplest one would be a trivial change to Satoshi’s SignatureHash, 
such as adding an extra value at the end of the algorithm. I just don’t see any 
technical reasons not to fix the O(n^2) problem altogether, if it is trivial 
(but not that trivial if the hardfork is not based on segwit)
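For reference, the nVersion layout this thread builds on (the "network characteristic byte" defined in the pre-BIP quoted elsewhere in this digest) can be sketched as follows; treating nVersion as an unsigned 32-bit integer is my assumption:

```python
def network_characteristic_byte(nversion: int) -> int:
    """Most significant byte of the 32-bit nVersion."""
    return (nversion >> 24) & 0xFF

def masked_version(nversion: int) -> int:
    """nVersion with the network characteristic byte masked off."""
    return nversion & 0x00FFFFFF

def existing_network_bit(nversion: int) -> bool:
    """Lowest bit of the network characteristic byte."""
    return bool(network_characteristic_byte(nversion) & 0x01)

def new_network_bit(nversion: int) -> bool:
    """Second-lowest bit of the network characteristic byte."""
    return bool(network_characteristic_byte(nversion) & 0x02)
```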


> On 25 Jan 2017, at 02:52, Tom Harding <t...@thinlink.com> wrote:
> 
> On 1/24/2017 6:33 AM, Johnson Lau via bitcoin-dev wrote:
>> 9. If the network characteristic byte is non-zero, and the existing
>> network characteristic bit is set, the masked version is used to
>> determine whether a transaction should be mined or relayed (policy change)
> 
> Johnson,
> 
> Your proposal supports 8 opt-out bits compatible with my earlier
> description:
> https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2016-July/012917.html.
> 
> If the existing network really wants to play along, it should execute a
> soft fork as soon as possible to support its own hard-fork opt-out bit
> ("network characteristic bit").  It is totally inadequate for a new
> network to rely on non-standardness in the existing network to prevent
> replay there.  Instead, in the absence of a supported opt-out bit in the
> existing network, a responsible new network would allow something that
> is invalid in the existing network, for transactions where replay to the
> existing network is undesirable.
> 
> It is an overreach for your BIP to suggest specific changes to be
> included in the new network, such as the specific O(n^2) fix you
> suggest.  This is a matter for the new network itself.
> 
> 




Re: [bitcoin-dev] Script Abuse Potential?

2017-01-02 Thread Johnson Lau via bitcoin-dev
No, there can be no more than 201 opcodes in a script, so you may have 198 
OP_2DUPs at most, i.e. 198 * 520 * 2 = 206kB.

For OP_CAT, just check that the returned item is within the 520-byte limit.
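The arithmetic behind those numbers, as I read it (201 is the consensus cap on opcodes per script, and the quoted script needs 3 of them for its OP_HASH160 / OP_EQUALVERIFY / OP_CHECKSIG tail):

```python
MAX_OPS_PER_SCRIPT = 201   # opcode limit per script
MAX_ELEMENT_SIZE = 520     # max size of a stack item, in bytes

def max_2dup_memory(dup_count: int) -> int:
    """Each OP_2DUP pushes two more items of at most 520 bytes each,
    so stack growth is linear in the opcode count, not exponential."""
    return dup_count * 2 * MAX_ELEMENT_SIZE

# 198 OP_2DUPs fit alongside the script's 3 remaining opcodes
worst_case = max_2dup_memory(MAX_OPS_PER_SCRIPT - 3)
```

198 * 2 * 520 = 205,920 bytes, i.e. the ~206kB figure above.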

> On 3 Jan 2017, at 11:27, Jeremy via bitcoin-dev 
>  wrote:
> 
> It is an unfortunate script, but can't actually do that much, it seems. The 
> MAX_SCRIPT_ELEMENT_SIZE = 520 bytes. Thus, it would seem the worst you could 
> do with this would be to (1-520*2)*520*2 bytes  ~=~ 10 MB.
> 
> Much more concerning would be the op_dup/op_cat style bug, which under a 
> similar script would certainly cause out of memory errors :)
> 
> 
> 
> --
> @JeremyRubin  
> 
> On Mon, Jan 2, 2017 at 4:39 PM, Steve Davis via bitcoin-dev wrote:
> Hi all,
> 
> Suppose someone were to use the following pk_script:
> 
> [op_2dup, op_2dup, op_2dup, op_2dup, op_2dup, ...(to limit)..., op_2dup, 
> op_hash160, , op_equalverify, op_checksig]
> 
> This still seems to be valid AFAICS, and may be a potential attack vector?
> 
> Thanks.
> 
> 
> 
> 



Re: [bitcoin-dev] Forcenet: an experimental network with a new header format

2016-12-14 Thread Johnson Lau via bitcoin-dev

> On 14 Dec 2016, at 19:07, Luke Dashjr <l...@dashjr.org> wrote:
> 
> On Wednesday, December 14, 2016 11:01:58 AM Johnson Lau via bitcoin-dev wrote:
>> There is no reason to use a timestamp beyond 4 bytes.
> 
> Actually, there is: lock times... my overflow solution doesn't have a 
> solution 
> to that. :x


You could steal a few bits from tx nVersion through a softfork


Re: [bitcoin-dev] Forcenet: an experimental network with a new header format

2016-12-14 Thread Johnson Lau via bitcoin-dev

> On 5 Dec 2016, at 04:00, adiabat  wrote:
> 
> Interesting stuff! I have some comments, mostly about the header.
> 
> The header of forcenet is mostly described in Luke’s BIP, but I have made 
> some amendments as I implemented it. The format is (size in parentheses; 
> little endian):
> 
> Height (4), BIP9 signalling field (4), hardfork signalling field (3), 
> merge-mining hard fork signalling field (1), prev hash (32), timestamp (4), 
> nonce1 (4), nonce2 (4), nonce3 (compactSize + variable), Hash TMR (32), Hash 
> WMR (32), total tx size (8) , total tx weight (8), total sigops (8), number 
> of tx (4), merkle branches leading to header C (compactSize + 32 bit hashes)
> 
> First, I'd really rather not have variable length fields in the header.  It's 
> so much nicer to just have a fixed size.
> 
> Is having both TMR and WMR really needed?  As segwit would be required with 
> this header type, and the WMR covers a superset of the data that the TMR 
> does, couldn't you get rid of the TMR?  The only disadvantage I can see is 
> that light clients may want a merkle proof of a transaction without having to 
> download the witnesses for that transaction.  This seems pretty minor, 
> especially as once they're convinced of block inclusion they can discard the 
> witness data, and also the tradeoff is that light clients will have to 
> download and store and extra 32 bytes per block, likely offsetting any 
> savings from omitting witness data.
> 

I foresee there will be 2 types of headers under this system: the 80-byte 
short header and the variable-length full header. Short headers are enough to 
link everything up. SPV nodes need the full header only if they are interested 
in a tx in a block. 

> The other question is that there's a bit that's redundant: height is also 
> committed to in the coinbase tx via bip 34 (speaking of which, if there's a 
> hard-fork, how about reverting bip 34 and committing to the height with 
> coinbase tx nlocktime instead?)

You could omit the transmission of nHeight, as it is implied (saving 4 bytes). 
Storing the nHeight of headers is what every full and SPV node would do anyway


> 
> Total size / weight / number of txs also feels pretty redundant.  Not a lot 
> of space but it's hard to come up with a use for them.  Number of tx could be 
> useful if you want to send all the leaves of a merkle tree, but you could 
> also do that by committing to the depth of the merkle tree in the header, 
> which is 1 byte.

Yes, I agree with you that these are not particularly useful. A sum tree is 
more useful, but it has other problems (see my other reply)

Related discussion: 
https://github.com/jl2012/bitcoin/commit/69e613bfb0f777c8dcd2576fe1c2541ee7a17208
 

> 
> Also how about making timestamp 8 bytes?  2106 is coming up soon :)

No need. See my other reply

> 
> Maybe this is too nit-picky; maybe it's better to put lots of stuff in for 
> testing the forcenet and then take out all the stuff that wasn't used or had 
> issues as it progresses.
> 
> Thanks and looking forward to trying out forcenet!
> 
> -Tadge



Re: [bitcoin-dev] Forcenet: an experimental network with a new header format

2016-12-14 Thread Johnson Lau via bitcoin-dev
There is no reason to use a timestamp beyond 4 bytes. Just let it overflow. If 
a blockchain is stopped for more than 2^31 seconds, it’s just dead.
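The point is that a wrapped 32-bit timestamp is always recoverable as long as the chain keeps producing blocks. A minimal sketch of the idea (my own illustration; `unwrap_time` is a hypothetical helper, not Core code): pick the 2^32-aligned candidate nearest the previous block's known full time.

```python
def unwrap_time(prev_full_time: int, raw32: int) -> int:
    """Recover a full timestamp from a wrapped 32-bit field.

    Unambiguous as long as consecutive blocks are less than 2**31
    seconds (~68 years) apart, i.e. the chain is not dead.
    """
    base = prev_full_time - (prev_full_time % 2**32)
    candidate = base + raw32
    # Choose the candidate closest to the previous block's full time.
    if candidate < prev_full_time - 2**31:
        candidate += 2**32
    elif candidate > prev_full_time + 2**31:
        candidate -= 2**32
    return candidate
```

For example, a block mined shortly after the 2106 wrap-around still resolves to the correct full time, because the previous block's time anchors the epoch.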

> On 5 Dec 2016, at 19:58, Tom Zander via bitcoin-dev wrote:
> 
> On Sunday, 4 December 2016 21:37:39 CET Hampus Sjöberg via bitcoin-dev 
> wrote:
>>> Also how about making timestamp 8 bytes?  2106 is coming up soon 
>> 
>> AFAICT this was fixed in this commit:
>> https://github.com/jl2012/bitcoin/commit/fa80b48bb4237b110ceffe11edc14c8130672cd2#diff-499d7ee7998a27095063ed7b4dd7c119R200
> 
> That commit hacks around it, a new block header fixes it. Subtle difference.
> 
> -- 
> Tom Zander
> Blog: https://zander.github.io
> Vlog: https://vimeo.com/channels/tomscryptochannel




Re: [bitcoin-dev] Forcenet: an experimental network with a new header format

2016-12-14 Thread Johnson Lau via bitcoin-dev
I think the biggest problem of a sum tree is the lack of flexibility to redefine 
the values with softforks. For example, in the future we may want to define a 
new CHECKSIG with witness script version 1. That would be counted as a SigOp. 
Without a sum tree design, that'd be easy, as we could just define a new SigOp 
through a softfork (e.g. the introduction of the P2SH SigOp, and the witness v0 
SigOp). In a sum tree, however, since the nSigOp is implied, any redefinition 
requires either a hardfork or a new sum tree (and the original sum tree becomes 
a placebo for old nodes, so every softfork of this type creates a new tree)

Similarly, we may have a secondary witness in the future, and the tx weight 
would be redefined with a softfork. We would face the same problem with a sum tree

The only way to fix this is to explicitly commit to the weight and nSigOp, and 
the committed value must be equal to or larger than the real value. Only in 
this way could we redefine them with a softfork. However, that means each tx 
would have an overhead of 16 bytes (if two int64s are used)
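The tradeoff above can be sketched concretely. Below is a minimal sum-tree node combiner (my own illustration, not code from any forcenet branch): each node carries (hash, weight, sigops), and the parent hashes in the explicit committed sums, so old nodes verify the commitments rather than recomputing costs under rules they don't know.

```python
import hashlib

def combine(left, right):
    """Combine two sum-tree nodes, each a (hash, weight, sigops) tuple.

    The weight/sigops here are the *committed* values; consensus would
    require committed >= real, so a later softfork can raise the real
    cost of an operation without invalidating the tree for old nodes.
    """
    lh, lw, ls = left
    rh, rw, rs = right
    weight, sigops = lw + rw, ls + rs
    preimage = (lh + rh
                + weight.to_bytes(8, 'little')
                + sigops.to_bytes(8, 'little'))
    return (hashlib.sha256(preimage).digest(), weight, sigops)
```

With implied sums, by contrast, the combiner itself encodes the cost rules, which is exactly why a redefinition would need a hardfork or a parallel tree.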

You could find related discussion here: 
https://github.com/jl2012/bitcoin/commit/69e613bfb0f777c8dcd2576fe1c2541ee7a17208
 


Maybe we could make this optional: nodes running exactly the same rules could 
omit the weight and nSigOp values in transmission. To talk to legacy nodes, 
they would need to transmit the newly defined weight and nSigOp. But this 
makes script upgrades much more complex.


> On 12 Dec 2016, at 00:40, Tier Nolan via bitcoin-dev wrote:
> 
> On Sat, Dec 10, 2016 at 9:41 PM, Luke Dashjr wrote:
> On Saturday, December 10, 2016 9:29:09 PM Tier Nolan via bitcoin-dev wrote:
> > Any new merkle algorithm should use a sum tree for partial validation and
> > fraud proofs.
> 
> PR welcome.
> 
> Fair enough.  It is pretty basic.
> 
> https://github.com/luke-jr/bips/pull/2 
> 
> 
> It sums up sigops, block size, block cost (that is "weight" right?) and fees.



[bitcoin-dev] Forcenet: an experimental network with a new header format

2016-12-04 Thread Johnson Lau via bitcoin-dev
Based on Luke Dashjr’s code and BIP: 
https://github.com/luke-jr/bips/blob/bip-mmhf/bip-mmhf.mediawiki , I created an 
experimental network to show how a new header format may be implemented.

Basically, the header hash is calculated in a way that non-upgrading nodes 
would see it as a block with only the coinbase tx and zero output value. They 
are effectively broken, as they won't see any transactions confirmed. This 
allows rewriting most of the rules related to block and transaction validity. 
Such a technique has various names, like soft-hardfork, firmfork, or evil 
softfork, and could itself be a controversial topic. However, I'd rather not 
focus on its soft-hardfork property, as it would be trivial to turn this into 
a true hardfork (e.g. setting the sign bit in block nVersion, or setting the 
most significant bit in the dummy coinbase nLockTime)

Instead of its soft-HF property, I think the more interesting thing is the new 
header format. The current bitcoin header has only 80 bytes. It provides only 
32 bits of nonce space, which is far from enough for ASICs. It also provides 
no room for committing to additional data. Therefore, people are forced to put 
many different kinds of data in the coinbase transaction, such as merge-mining 
commitments and the segwit commitment. It is not an ideal solution, especially 
for light wallets.

Following the practice of segwit development of making an experimental network 
(segnet), I made something similar and call it Forcenet (as it forces legacy 
nodes to follow the post-fork chain)

The header of forcenet is mostly described in Luke’s BIP, but I have made some 
amendments as I implemented it. The format is (size in parentheses; little 
endian):

Height (4), BIP9 signalling field (4), hardfork signalling field (3), 
merge-mining hard fork signalling field (1), prev hash (32), timestamp (4), 
nonce1 (4), nonce2 (4), nonce3 (compactSize + variable), Hash TMR (32), Hash 
WMR (32), total tx size (8), total tx weight (8), total sigops (8), number of 
tx (4), merkle branches leading to header C (compactSize + 32-byte hashes)
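As a sketch, the variable-length layout above could be parsed like this (the field names and the `read_varint`/`parse_forcenet_header` helpers are my own illustration, not code from the forcenet branch):

```python
import struct

def read_varint(buf, pos):
    """Minimal Bitcoin compactSize decoder: returns (value, new_pos)."""
    first = buf[pos]
    if first < 0xfd:
        return first, pos + 1
    size = {0xfd: 2, 0xfe: 4, 0xff: 8}[first]
    return int.from_bytes(buf[pos + 1:pos + 1 + size], 'little'), pos + 1 + size

def parse_forcenet_header(buf):
    """Parse the header layout described above (little endian)."""
    h = {}
    h['height'], h['bip9'] = struct.unpack_from('<II', buf, 0)
    h['hardfork_signal'] = int.from_bytes(buf[8:11], 'little')  # 3 bytes
    h['mm_signal'] = buf[11]                                    # 1 byte
    h['prev_hash'] = buf[12:44]
    h['time'], h['nonce1'], h['nonce2'] = struct.unpack_from('<III', buf, 44)
    n3len, pos = read_varint(buf, 56)                # nonce3: compactSize + data
    h['nonce3'] = buf[pos:pos + n3len]; pos += n3len
    h['tmr'] = buf[pos:pos + 32]; pos += 32          # transaction merkle root
    h['wmr'] = buf[pos:pos + 32]; pos += 32          # witness merkle root
    (h['total_size'], h['total_weight'], h['total_sigops'],
     h['num_tx']) = struct.unpack_from('<QQQI', buf, pos)
    pos += 28
    nbranch, pos = read_varint(buf, pos)             # branches to header C
    h['branches'] = [buf[pos + 32 * i:pos + 32 * (i + 1)]
                     for i in range(nbranch)]
    return h
```

The fixed prefix up to nonce2 is 56 bytes; everything after the first compactSize is variable length, which is why light clients would normally only want the short form.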

In addition to increasing the max block size, I also showed how the calculation 
and validation of the witness commitment may be changed with a new header. For 
example, since the commitment is no longer in the coinbase tx, we don't need to 
use a …. hash for the coinbase tx like in BIP141.

Something not yet done:
1. The new merkle root algorithm described in the MMHF BIP
2. The nTxsSigops has no meaning currently
3. Communication with legacy nodes. This version can’t talk to legacy nodes 
through the P2P network, but theoretically they could be linked up with a 
bridge node
4. A new block weight definition to provide incentives for slowing down UTXO 
growth
5. Many other interesting hardfork ideas, and softfork ideas that work better 
with a header redesign

For easier testing, forcenet has the following parameters:

Hardfork at block 200
Segwit is always activated
1-minute blocks with a 4 (prefork) and 8 (postfork) weight limit
50 blocks coinbase maturity
21000 blocks halving
144 blocks retarget

How to join: code is at https://github.com/jl2012/bitcoin/tree/forcenet1 ; 
start with "bitcoind --forcenet".
Connection: I'm running a node at 8333.info with the default port (38901)
Mining: there is only basic internal mining support. Limited GBT support is 
theoretically possible but needs more hacking. To use the internal miner, 
write a shell script to repeatedly call "bitcoin-cli --forcenet generate 1"
New RPC commands: getlegacyblock and getlegacyblockheader, which generate 
blocks and headers that are compatible with legacy nodes.

This is largely work in progress, so expect a reset every couple of weeks

jl2012





