[bitcoin-dev] Batch validation of CHECKMULTISIG using an extra hint field

2022-10-19 Thread Mark Friedenbach via bitcoin-dev
When Satoshi wrote the first version of bitcoin, s/he made what was almost 
certainly an unintentional mistake. A bug in the original CHECKMULTISIG 
implementation caused an extra item to be popped off the stack upon completion. 
This extra value is not used in any way, and has no consensus meaning. Since 
this value is often provided in the witness, it unfortunately provides a 
malleability vector as anybody can change the extra/dummy value in the 
signature without invalidating a transaction. In legacy scripts NULLDUMMY is a 
policy rule that states this value must be zero, and this was made a consensus 
rule for segwit scripts.

This isn’t the only problem with CHECKMULTISIG. For both ECDSA and Schnorr 
signatures, batch validation could enable an approximate 2x speedup, especially 
during the initial block download phase. However the CHECKMULTISIG algorithm, 
as written, seemingly precludes batch validation for threshold signatures as it 
attempts to validate the list of signatures with the list of pubkeys, in order, 
dropping an unused pubkey only when a signature validation fails. As an 
example, the following script

[2 C B A 3 CHECKMULTISIG]

could be satisfied by the following witness:

[0 c a]

Where “a” is a signature for pubkey A, and “c” a signature for pubkey C. During 
validation, the signature a is checked using pubkey A, which is successful, so 
the internal algorithm increments the signature pointer AND the pubkey pointer 
to the next elements in the respective lists, removing both from future 
consideration. Next the signature c is checked with pubkey B, which fails, so 
only the pubkey pointer is incremented. Finally signature c is checked with 
pubkey C, which passes. Since 2 signatures passed and this is equal to the 
specified threshold, the opcode evaluates as true. All inputs (including the 
dummy 0 value) are popped from the stack.

The algorithm cannot batch validate these signatures because for any partial 
threshold it doesn’t know which signatures map to which pubkeys.
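
To make the matching rule concrete, here is a minimal Python sketch of the
legacy matching loop. This is an illustration only, not the actual Bitcoin
Core code, and check(sig, key) stands in for full signature validation:

    def checkmultisig_match(sigs, keys, check):
        # Legacy CHECKMULTISIG: walk both lists in order. A successful
        # check consumes the signature and the pubkey; a failed check
        # consumes only the pubkey.
        i, j = 0, 0
        while i < len(sigs) and j < len(keys):
            if check(sigs[i], keys[j]):
                i += 1
            j += 1
            if len(sigs) - i > len(keys) - j:
                return False  # too few pubkeys left for remaining sigs
        return i == len(sigs)

Because each check must actually be executed to discover the mapping, the
(signature, pubkey) pairs cannot be handed to a batch verifier up front.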

Not long after segwit was released for activation, making the NULLDUMMY rule 
consensus for segwit scripts, the observation was made by Luke-Jr on IRC[1] 
that this new rule was actually suboptimal. Satoshi’s mistake gave us an extra 
parameter to CHECKMULTISIG, and it was entirely within our means to use this 
parameter to convey extra information to the CHECKMULTISIG algorithm, and 
thereby enable batch validation of threshold signatures using this opcode.

The idea is simple: instead of requiring that the final parameter on the stack 
be zero, require instead that it be a minimally-encoded bitmap specifying which 
keys are used, or alternatively, which are not used and must therefore be 
skipped. Before attempting validation, ensure for a k-of-n threshold only k 
bits are set in the bitfield indicating the used pubkeys (or n-k bits set 
indicating the keys to skip). The updated CHECKMULTISIG algorithm is as 
follows: when attempting to validate a signature with a pubkey, first check the 
associated bit in the bitfield to see if the pubkey is used. If the bitfield 
indicates that the pubkey is NOT used, then skip it without even attempting 
validation. The only signature validations which are attempted are those which 
the bitfield indicates ought to pass. This is a soft-fork as any validator 
operating under the original rules (which ignore the “dummy” bitfield) would 
still arrive at the correct pubkey-signature mapping through trial and error.
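
Here is a sketch of the hinted variant, under the assumption that bit i of
the bitfield marks the i-th pubkey in the script as used (the exact bit
order and encoding rules would be fixed by the deployment):

    def checkmultisig_hinted(sigs, keys, bitfield, check):
        # The dummy element is reinterpreted as a bitmap of used pubkeys.
        # Exactly k bits must be set, and none beyond the key count.
        if bitfield >> len(keys):
            return False
        if bin(bitfield).count('1') != len(sigs):
            return False
        pairs = []
        i = 0
        for j, key in enumerate(keys):
            if (bitfield >> j) & 1:
                pairs.append((sigs[i], key))  # mapping fixed up front
                i += 1
        # Every hinted pair must pass; since all pairs are known before
        # any EC math is done, these checks can go to a batch verifier.
        return all(check(sig, key) for sig, key in pairs)

The crucial difference from the legacy loop is that the signature-to-pubkey
mapping is determined by the hint before any validation is attempted, which
is exactly what a batch verifier needs.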

Aside: If you wanted to hyper-optimize, you could use a binomial encoding of 
the bitmask hint field, given that the n-choose-k threshold is already known. 
Or you could forego encoding the k threshold entirely and infer it from the 
number of set bits. However in either case the number of bytes saved is 
negligible compared to the overall size of a multisig script and witness, and 
there’d be a significant tradeoff in usability. Such optimization is probably 
not worth it.
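
For the curious, "binomial encoding" here means ranking the k-subset of used
keys in the combinatorial number system, giving an integer in the range
[0, n-choose-k). A sketch, assuming the set of used-key indices is known:

    from math import comb

    def hint_rank(used_positions):
        # Rank of a sorted k-subset in the combinatorial number system,
        # e.g. hint_rank([0, 2]) == 1 for a 2-of-3 using keys 0 and 2.
        # This is the densest possible encoding of the hint.
        return sum(comb(pos, i + 1)
                   for i, pos in enumerate(sorted(used_positions)))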

If you’d rather see this in terms of code, there’s an implementation of this 
that I coded up in 2019 and deployed to a non-bitcoin platform:

https://github.com/tradecraftio/tradecraft/commit/339dafc0be37ae5465290b22d204da4f37c6e261

Unfortunately this observation was made too late to be incorporated into 
segwit, but future versions of script could absolutely use the hint-field trick 
to enable batch validation of CHECKMULTISIG scripts. So you can imagine my 
surprise when reviewing the Taproot/Tapscript BIPs I saw that CHECKMULTISIG was 
disabled for Tapscript, and the justification given in the footnotes is that 
CHECKMULTISIG is not compatible with batch validation! Talking with a few other 
developers including Luke-Jr, it has become clear that this solution to the 
CHECKMULTISIG batch validation problem had been completely forgotten and did 
not come up during Tapscript review. I’m posting this now because I don’t want 
the trick to be lost again.

Kind regards,
Mark Friedenbach


Re: [bitcoin-dev] Taproot: Privacy preserving switchable scripting

2018-01-23 Thread Mark Friedenbach via bitcoin-dev
I had the opposite response in private, which I will share here. As recently as 
Jan 9th feedback on BIP 117 was shared on this list by Pieter Wuille and others 
suggesting we adopt a native MAST template instead of the user-programmable 
combination of BIPs 116 and 117. Part of my response then was, I quote:

I haven't the hubris to suggest that we know exactly what a templated MAST 
*should* look like. It's not used in production anywhere. Even if we did have 
the foresight, the tail-call semantics allow for other constructions besides 
MAST and for the sake of the future we should allow such permission-less 
innovation. The proper sequence of events should be to enable features in a 
generic way, and then to create specialized templates to save space for common 
constructions. Not the other way around. [1]

I take this advance as further evidence in favor of this view. As recently as 
24 hours ago if you had asked what a native-MAST template would have looked 
like, the answer would have been something like Johnson Lau’s BIP 114, with 
some quibbling over details. Taproot is a clearly superior approach. But is it 
optimal? I don’t think we can claim that now. Optimality of these constructs 
isn’t something easily proven, with the nearest substitute being unchanging 
consensus over extended periods of time.

Every time we add an output type specialization, we introduce a new codepath in 
the core of the script consensus that must be maintained forever. Take P2SH: 
from this point forward there is no reason to use it in new applications, ever. 
But it must be forever supported. In an alternate universe we could have 
deployed a native MAST proposal, like BIP 114, only to have Taproot-like 
schemes discovered after activation. That would have been a sucky outcome. It 
is still the case that we could go for Taproot right now, and then in six 
months or a year’s time we find an important tweak or a different approach 
entirely that is even better, but the activation process had already started. 
That would be a sucky outcome we haven’t avoided yet.

This is not an argument against template specialization for common code paths, 
especially those which increase fungibility of coins. I do think we should have 
a native MAST template eventually, using Taproot or something better. However 
if I may be allowed I will make an educated guess about the origin of Taproot: 
I think it’s no coincidence that Greg had this insight and/or wrote it up 
simultaneous with a push by myself and others for getting MAST features into 
bitcoin via BIPs 98, 116, and 117, or 114. Cryptographers tend to only think up 
solutions to problems that are on their minds. And the problems on most 
people’s minds are primarily those that are deployable today, or otherwise 
near-term applicable.

BIPs 116 and 117 each provide a reusable component, and together they happen to 
enable a generic form of MAST. Even without the workarounds required to avoid 
CLEANSTACK violations, the resulting MAST template is larger than what is 
possible with specialization. However let’s not forget that (1) they also 
enable other applications like honeypots, key trees, and script delegation; and 
relevant to this conversation (2) they get the MAST feature available for use 
in production by the wider community. I don’t think I’d personally be willing 
to bet that we found the optimal MAST structure in Greg’s Taproot until people 
doing interesting production work (multisig wallets, lightning protocol, the 
next set of consensus features) start putting it into production and exploring 
edge cases. We may find ways Taproot can be tweaked to 
enable other applications (like encoding a hash preimage as well) or simplify 
obscure corner cases.

I feel quite strongly that the correct approach is to add support for generic 
features to accomplish the underlying goal in a user programmable way, and THEN 
after activation and some usage consider ways in which common use cases can be 
made more efficient through output specialization. To take a more obvious 
example, lightning protocol is still an active area of research and I think it 
is abundantly clear that we don’t know yet what the globally optimal layer-2 
caching protocol will be, even if we have educated guesses as to its broad 
structure. A proposal right now to standardize a more compact lightning script 
type would be rightly rejected. It is less obvious but just as true that the 
same should hold for MAST.

I have argued these points before in favor of permissionless innovation first, 
then application specialization later, in [1] and at the end of the rather long 
email [2]. I hope you can take the time to read those if you still feel we 
should take a specialized template approach instead of the user-programmable 
BIPs 116 and 117.

[1] 
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-January/015537.html
 

Re: [bitcoin-dev] Blockchain Voluntary Fork (Split) Proposal (Chaofan Li)

2018-01-22 Thread Mark Friedenbach via bitcoin-dev

> On Jan 22, 2018, at 11:01 AM, Ilan Oh via bitcoin-dev 
>  wrote:
> 
> The chain with the most mining power will tend to have more value.

I believe you have the causality on that backwards. The tokens which are worth 
more will attract more mining hash rate. Miners respond to cash-out 
value, they don’t set it.



Re: [bitcoin-dev] ScriptPubkey consensus translation

2018-01-18 Thread Mark Friedenbach via bitcoin-dev
The downsides could be mitigated somewhat by only making the dual 
interpretation apply to outputs older than a cutoff time after the activation 
of the new feature. For example, five years after the initial activation of the 
sigagg soft-fork, the sigagg rules will apply to pre-activation UTXOs as well. 
That would allow old UTXOs to be spent more cheaply, perhaps making some dust 
usable again, but anyone who purposefully sent funds to old-style outputs after 
the cutoff is not opened up to the dual interpretation.

> On Jan 18, 2018, at 11:30 AM, Gregory Maxwell via bitcoin-dev 
>  wrote:
> 
> A common question when discussing newer more efficient pubkey types--
> like signature aggregation or even just segwit-- is "will this thing
> make the spending of already existing outputs more efficient", which
> unfortunately gets an answer of No because the redemption instructions
> for existing outputs have already been set, and don't incorporate
> these new features.
> 
> This is good news in that no one ends up being forced to expose their
> own funds to new cryptosystems whos security they may not trust.  When
> sigagg is deployed, for example, any cryptographic risk in it is borne
> by people who opted into using it.
> 
> Lets imagine though that segwit-with-sigagg has been long deployed,
> widely used, and is more or less universally accepted as at least as
> good as an old P2PKH.
> 
> In that case, it might be plausible to include in a hardfork a
> consensus rule that lets someone spend scriptPubkey's matching
> specific templates as though they were an alternative template.  So
> then an idiomatic P2PKH or perhaps even a P2SH-multisig could be spent
> as though it used the analogous p2w-sigagg script.
> 
> The main limitation is that there is some risk of breaking the
> security assumptions of some complicated external protocol e.g. that
> assumed that having a schnorr oracle for a key wouldn't let you spend
> coins connected to that key.  This seems like a pretty contrived
> concern to me however, and it's one that can largely be addressed by
> ample communication in advance.  (E.g. discouraging the creation of
> excessively fragile things like that, and finding out if any exist so
> they can be worked around).
> 
> Am I missing any other arguments?


Re: [bitcoin-dev] BIP 117 Feedback

2018-01-09 Thread Mark Friedenbach via bitcoin-dev
The use of the alt stack is a hack for segwit script version 0 which has the 
clean stack rule. Anticipated future improvements here are to switch to a 
witness script version, and a new segwit output version which supports native 
MAST to save an additional 40 or so witness bytes. Either approach would allow 
getting rid of the alt stack hack. They are not part of the proposal now 
because it is better to do things incrementally, and because we anticipate 
usage of MAST to better inform these less generic changes.

Your suggestion of “single blob on the stack” seems to be exactly this proposal 
afaict? Note the witness data needs to be passed separately because signatures 
can’t be included in that single blob if that blob is hashed and compared 
against something in the scriptPubKey.

The sigop and opcode limit drop can be justified with some back of the envelope 
calculations. Non-scriptPubKey scripts are fundamentally limited by 
blocksize/weight and the most damage you can do, as an adversary, is limited by 
space. The most expensive thing you can do is check a signature. Our assumptions 
about block size safety are basically due to how much computation you can stuff 
in a block with checksigs — all the analysis there applies.

My suggestion is to limit the number of checksigs allowed in a script to 
size(script+witness)/64, but I wanted this to come up in review rather than 
complicate the code right off the bat.
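
For clarity, the suggested rule is a simple dynamic budget rather than a
static count (a sketch of the idea, not proposed spec text):

    def checksig_budget(script, witness_stack):
        # One signature check permitted per 64 bytes of script plus
        # witness data, replacing statically analyzed sigop limits.
        total = len(script) + sum(len(item) for item in witness_stack)
        return total // 64

The budget grows with the bytes an adversary has to pay for, so validation
cost tracks the actual resource consumed.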

I will make a strong assertion: static analyzing the number of opcodes and 
sigops gets us absolutely nothing. It is cargo cult safety engineering. No need 
to perpetuate it when it is now in the way.

Sent from my iPhone

> On Jan 9, 2018, at 8:22 PM, Rusty Russell  wrote:
> 
> I've just re-read BIP 117, and I'm concerned about its flexibility.  It
> seems to be doing too much.
> 
> The use of altstack is awkward, and makes me query this entire approach.
> I understand that CLEANSTACK painted us into a corner here :(
> 
> The simplest implementation of tail recursion would be a single blob: if
> a single element is left on the altstack, pop and execute it.  That
> seems trivial to specify.  The treatment of concatenation seems like
> trying to run before we can walk.
> 
> Note that if we restrict this for a specific tx version, we can gain
> experience first and get fancier later.
> 
> BIP 117 also drops SIGOP and opcode limits.  This requires more
> justification, in particular, measurements and bounds on execution
> times.  If this analysis has been done, I'm not aware of it.
> 
> We could restore statically analyzability by rules like so:
> 1.  Only applied for tx version 3 segwit txs.
> 2.  For version 3, top element of stack is counted for limits (perhaps
>with discount).
> 3.  The blob popped off for tail recursion must be identical to that top
>element of the stack (ie. the one counted above).
> 
> Again, future tx versions could drop such restrictions.
> 
> Cheers,
> Rusty.


Re: [bitcoin-dev] Transaction aging to relieve user concerns.

2017-12-28 Thread Mark Friedenbach via bitcoin-dev
However users can’t know with any certainty whether transactions will “age out” 
as indicated, since this is only relay policy. Exceeding the specified timeout 
doesn’t prevent a miner from including it in the chain, and therefore doesn’t 
really provide any actionable information.

> On Dec 28, 2017, at 11:55 AM, Dan Bryant via bitcoin-dev 
>  wrote:
> 
> The goal here is to make it easier for users to know with better certainty 
> when a TXN is going to age out


Re: [bitcoin-dev] Total fees have almost crossed the block reward

2017-12-21 Thread Mark Friedenbach via bitcoin-dev
Every transaction is replace-by-fee capable already. Opt-in replace by fee as 
specified in BIP 125 is a fiction that held sway only while the income from 
fees or fee replacement was so much smaller than subsidy.

> On Dec 21, 2017, at 3:35 PM, Paul Iverson via bitcoin-dev 
>  wrote:
> 
> I agree with Greg.  What is happening is a cause for celebration: it is the 
> manifestation of our long-desired fee market in action.  That people are 
> willing to pay upwards of $100 per transaction shows the huge demand to 
> transact on the world's most secure ledger. This is what success looks like, 
> folks!
> 
> Now that BTC is being phased out as a means of payment nearly everywhere 
> (e.g., Steam dropping BTC as a payment option) (to be replaced with the 
> more-suitable LN when ready), I'd propose that we address the stuck 
> transaction issue by making replace-by-fee (RBF) ubiquitous.  Why not make 
> every transaction RBF by default, and then encourage via outreach and 
> education other wallet developers to do the same?  
> 
> The frustration with BTC today is less so the high-fees (people realize 
> on-chain transactions in a secure decentralized ledger are necessarily 
> costly) but by the feeling of helplessness when their transaction is stuck.  
> Being able to easily bump a transaction's fee for users who are in a hurry 
> would go a long way to improving the user experience.  
> 
> Paul.
> 
> 
> On Thu, Dec 21, 2017 at 2:44 PM, Gregory Maxwell via bitcoin-dev 
>  > wrote:
> Personally, I'm pulling out the champagne that market behaviour is
> indeed producing activity levels that can pay for security without
> inflation, and also producing fee paying backlogs needed to stabilize
> consensus progress as the subsidy declines.
> 
> I'd also personally prefer to pay lower fees-- current levels even
> challenge my old comparison with wire transfer costs-- but we should
> look most strongly at difficult to forge market signals rather than
> just claims-- segwit usage gives us a pretty good indicator since most
> users would get a 50-70% fee reduction without even considering the
> second order effects from increased capacity.
> 
> As Jameson Lopp notes, more can be done for education though-- perhaps
> that market signal isn't efficient yet. But we should get it there.
> 
> But even independently of segwit we can also look at other inefficient
> transaction styles: uncompressed keys, unconfirmed chaining instead of
> send many batching, fee overpayment, etc... and the message there is
> similar.
> 
> I've also seen some evidence that a portion of the current high rate
> congestion is contrived traffic. To the extent that it's true there
> also should be some relief there soon as the funding for that runs
> out, in addition to expected traffic patterns, difficulty changes,
> etc.
> 
> 
> On Thu, Dec 21, 2017 at 9:30 PM, Melvin Carvalho via bitcoin-dev
>  > wrote:
> > I asked adam back at hcpp how the block chain would be secured in the long
> > term, once the reward goes away.  The base idea has always been that fees
> > would replace the block reward.
> >
> > At that time fees were approximately 10% of the block reward, but have now
> > reached 45%, with 50% potentially being crossed soon
> >
> > https://fork.lol/reward/feepct 
> >
> > While this bodes well for the long term security of the coin, I think there
> > is some legitimate concern that the fee per tx is prohibitive for some use
> > cases, at this point in the adoption curve.
> >
> > Observations of segwit adoption show around 10% at this point
> >
> > http://segwit.party/charts/ 
> >
> > Watching the mempool shows that the congestion is at a peak, though it's
> > quite possible this will come down over the long weekend.  I wonder if this
> > is of concern to some.
> >
> > https://dedi.jochen-hoenicke.de/queue/more/#24h 
> > 
> >
> > I thought these data points may be of interest and are mainly FYI.  Though
> > if further discussion is deemed appropriate, it would be interesting to hear
> > thoughts.
> >

Re: [bitcoin-dev] Sign / Verify message against SegWit P2SH addresses.

2017-12-21 Thread Mark Friedenbach via bitcoin-dev
It doesn’t matter what it does under the hood. The API could be the same.

> On Dec 21, 2017, at 3:19 AM, Damian Williamson via bitcoin-dev 
> <bitcoin-dev@lists.linuxfoundation.org> wrote:
> 
> In all seriousness, being able to sign a message is an important feature 
> whether it is with Bitcoin Core or, with some other method. It is a good 
> feature and it would be worthwhile IMHO to update it for SegWit addresses. I 
> don't know about renewing it altogether, I like the current simplicity.
> 
> Regards,
> Damian Williamson
> 
> 
> Sometimes I like to sign a message just to verify that is what I have said.
> -
> Bitcoin: 1PMUf9aaQ41M4bgVbCAPVwAeuKvj8CwxJg
> 
> Signature:
> HwJPqyWF0CbdsR7x737HbNIDoRufsrMI5XYQsKZ+MrWCJ6K7imtLY00sTCmSMDigZxRuoxyYZyQUw/lL0m/MV9M=
> 
> (Of course, signed messages will verify better usually with plain text and 
> not HTML interpreted email - need a switch for outlook.com to send plaintext.)
> From: bitcoin-dev-boun...@lists.linuxfoundation.org 
> <bitcoin-dev-boun...@lists.linuxfoundation.org> on behalf of Mark Friedenbach 
> via bitcoin-dev <bitcoin-dev@lists.linuxfoundation.org>
> Sent: Wednesday, 20 December 2017 8:58 AM
> To: Pavol Rusnak; Bitcoin Protocol Discussion
> Subject: Re: [bitcoin-dev] Sign / Verify message against SegWit P2SH 
> addresses.
>  
> For what it’s worth, I think it would be quite easy to do better than the 
> implied solution of rejiggering the message signing system to support 
> non-P2PKH scripts. Instead, have the signature be an actual bitcoin 
> transaction with inputs that have the script being signed. Use the salted 
> hash of the message being signed as the FORKID as if this were a spin-off 
> with replay protection. This accomplishes three things:
> 
> (1) This enables signing by any infrastructure out there — including hardware 
> wallets and 2FA signing services — that have enabled support for FORKID 
> signing, which is a wide swath of the ecosystem because of Bitcoin Cash and 
> Bitcoin Gold.
> 
> (2) It generalizes the message signing to allow multi-party signing setups as 
> complicated (via sighash, etc.) as those bitcoin transactions allow, using 
> existing and future tools based on Partially Signed Bitcoin Transactions; and
> 
> (3) It unifies a single approach for message signing, proof of reserve (where 
> the inputs are actual UTXOs), and off-chain colored coins.
> 
> There’s the issue of size efficiency, but for the single-party message 
> signing application that can be handled by a BIP that specifies a template 
> for constructing the pseudo-transaction and its inputs from a raw script.
> 
> Mark
> 
> > On Dec 19, 2017, at 1:36 PM, Pavol Rusnak via bitcoin-dev 
> > <bitcoin-dev@lists.linuxfoundation.org> wrote:
> > 
> > On 08/12/17 19:25, Dan Bryant via bitcoin-dev wrote:
> >> I know there are posts, and an issue opened against it, but is there
> >> anyone writing a BIP for Sign / Verify message against a SegWit address?
> > 
> > Dan, are you still planning to write this BIP?
> > 
> > -- 
> > Best Regards / S pozdravom,
> > 
> > Pavol "stick" Rusnak
> > CTO, SatoshiLabs


Re: [bitcoin-dev] Sign / Verify message against SegWit P2SH addresses.

2017-12-19 Thread Mark Friedenbach via bitcoin-dev
For what it’s worth, I think it would be quite easy to do better than the 
implied solution of rejiggering the message signing system to support non-P2PKH 
scripts. Instead, have the signature be an actual bitcoin transaction with 
inputs that have the script being signed. Use the salted hash of the message 
being signed as the FORKID as if this were a spin-off with replay protection. 
This accomplishes three things:

(1) This enables signing by any infrastructure out there — including hardware 
wallets and 2FA signing services — that have enabled support for FORKID 
signing, which is a wide swath of the ecosystem because of Bitcoin Cash and 
Bitcoin Gold.

(2) It generalizes the message signing to allow multi-party signing setups as 
complicated (via sighash, etc.) as those bitcoin transactions allow, using 
existing and future tools based on Partially Signed Bitcoin Transactions; and

(3) It unifies a single approach for message signing, proof of reserve (where 
the inputs are actual UTXOs), and off-chain colored coins.

There’s the issue of size efficiency, but for the single-party message signing 
application that can be handled by a BIP that specifies a template for 
constructing the pseudo-transaction and its inputs from a raw script.
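
To illustrate the FORKID trick, here is my own sketch of one possible
construction. The salt handling, tag length, and pseudo-transaction template
are all placeholders that the BIP would need to nail down:

    import hashlib

    def sha256d(data):
        return hashlib.sha256(hashlib.sha256(data).digest()).digest()

    def message_forkid(salt, message):
        # Fold a salted hash of the message into the 24-bit fork id
        # field used by BIP143-with-FORKID signing (as deployed by
        # Bitcoin Cash and Bitcoin Gold). A signature made over the
        # pseudo-transaction then commits to the message while being
        # invalid on any real chain.
        return int.from_bytes(sha256d(salt + message)[:3], 'big')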

Mark

> On Dec 19, 2017, at 1:36 PM, Pavol Rusnak via bitcoin-dev 
>  wrote:
> 
> On 08/12/17 19:25, Dan Bryant via bitcoin-dev wrote:
>> I know there are posts, and an issue opened against it, but is there
>> anyone writing a BIP for Sign / Verify message against a SegWit address?
> 
> Dan, are you still planning to write this BIP?
> 
> -- 
> Best Regards / S pozdravom,
> 
> Pavol "stick" Rusnak
> CTO, SatoshiLabs


Re: [bitcoin-dev] Clarification about SegWit transaction size and bech32

2017-12-18 Thread Mark Friedenbach via bitcoin-dev
Why would I send you coins to anything other than the address you provided to 
me? If you send me a bech32 address I use the native segwit scripts. If you 
send me an old address, I do what it specifies instead. The recipient has 
control over what type of script the payment is sent to, without any ambiguity.

> On Dec 18, 2017, at 1:41 PM, m...@albertodeluigi.com wrote:
> 
> Hi Mark,
> thank you. I understand your point, but despite what we define as a fork, 
> when a software uses a particular address, it becomes part of the rules of 
> that software. If another software doesn't recognize that address as a 
> bitcoin address, then the rules it enforces aren't compatible with the 
> behaviour of the first software. If you send me your bitcoins, I can't 
> receive them, exactly as if they were on another chain. This happens even if 
> there isn't such a situation where miners verify that transaction on a chain, 
> while other miners reject it.
> 
> If we want to change the addresses, we need consensus and the coordinate 
> upgrade of the entire network. In case we haven't consensus, most of the 
> clients cannot send and receive bitcoins, which is a huge problem.
> For this reason I think it is something we should discuss in order to make a 
> coordinated upgrade, exactly like what we do when we propose a fork. And it 
> would be better to do it precisely as a part of a fork, like a 2x (or 
> whatever other upgrade gaining enough consensus)
> 
> Apart from the proposal of an upgrade to bench32, do you agree with the rest 
> of my points? I know segwit is valuable because it fixes tx malleability and 
> so on... thank you for your link, but that wasn't the point I wanted to 
> highlight!
> 
> Thank you,
> Alberto
> 
> 
> 
> 
> On 2017-12-18 18:38 Mark Friedenbach wrote:
>> Addresses are entirely a user-interface issue. They don’t factor
>> into the bitcoin protocol at all.
>> The bitcoin protocol doesn’t have addresses. It has a generic
>> programmable signature framework called script. Addresses are merely a
>> UI convention for representing common script templates. 1.. addresses
>> and 3… addresses have script templates that are not as optimal as
>> could be constructed with post-segwit assumptions. The newer bech32
>> address just uses a different underlying template that achieves better
>> security guarantees (for pay-to-script) or lower fees (for
>> pay-to-pubkey-hash). But this is really a UI/UX issue.
>> A “fork” in bitcoin-like consensus systems has a very specific
>> meaning. Changing address formats is not a fork, soft or hard.
>> There are many benefits to segregated witness. You may find this page
>> helpful:
>> https://bitcoincore.org/en/2016/01/26/segwit-benefits/ [4]
>>> On Dec 18, 2017, at 8:40 AM, Alberto De Luigi via bitcoin-dev
>>>  wrote:
>>> Hello guys,
>>> I have a few questions about the SegWit tx size, I'd like to have
>>> confirmation about the following statements. Can you correct
>>> mistakes or inaccuracies? Thank you in advance.
>>> In general, SegWit tx costs more than legacy tx (source
>>> https://bitcoincore.org/en/2016/10/28/segwit-costs/ [1]):
>>> * Compared to P2PKH, P2WPKH uses 3 fewer bytes (-1%) in the
>>> scriptPubKey, and the same number of witness bytes as P2PKH
>>> scriptSig.
>>> * Compared to P2SH, P2WSH uses 11 additional bytes (6%) in the
>>> scriptPubKey, and the same number of witness bytes as P2SH
>>> scriptSig.
>>> * Compared to P2PKH, P2WPKH/P2SH uses 21 additional bytes (11%),
>>> due to using 24 bytes in scriptPubKey, 3 fewer bytes in scriptSig
>>> than in P2PKH scriptPubKey, and the same number of witness bytes as
>>> P2PKH scriptSig.
>>> * Compared to P2SH, P2WSH/P2SH uses 35 additional bytes (19%), due
>>> to using 24 bytes in scriptPubKey, 11 additional bytes in scriptSig
>>> compared to P2SH scriptPubKey, and the same number of witness bytes
>>> as P2SH scriptSig.
>>> But still it is convenient to adopt segwit because you move the
>>> bytes to the blockweight part, paying smaller fee. In general, a tx
>>> with 1 input and 1 output is about 190 bytes. If it's a segwit tx, 82 bytes
>>> in the non-witness part (blocksize), 108 bytes in the witness part
>>> (blockweight).
>>> See source:
>>> 4 bytes version
>>> 1 byte input count
>>> Input
>>> 36 bytes outpoint
>>> 1 byte scriptSigLen (0x00)
>>> 0 bytes scriptSig
>>> 4 bytes sequence
>>> 1 byte output count
>>> 8 bytes value
>>> 1 byte scriptPubKeyLen
>>> 22 bytes scriptPubKey (0x0014{20-byte keyhash})
>>> 4 bytes locktime
>> https://bitcoin.stackexchange.com/questions/59408/with-100-segwit-transactions-what-would-be-the-max-number-of-transaction-confi
>>> [2]
>>> Which means, if you fill a block entirely with this kind of tx, you
>>> can approximately double the capacity of the blockchain (blocksize
>>> capped to 1mb, blockweight a little bit more than 2mb)
>>> My concern is about segwit adoption by the exchanges.
>>> SegWit transactions cost 10bytes more than legacy 

Re: [bitcoin-dev] Clarification about SegWit transaction size and bech32

2017-12-18 Thread Mark Friedenbach via bitcoin-dev
Addresses are entirely a user-interface issue. They don’t factor into the 
bitcoin protocol at all.

The bitcoin protocol doesn’t have addresses. It has a generic programmable 
signature framework called script. Addresses are merely a UI convention for 
representing common script templates. 1.. addresses and 3… addresses have 
script templates that are not as optimal as could be constructed with 
post-segwit assumptions. The newer bech32 address just uses a different 
underlying template that achieves better security guarantees (for 
pay-to-script) or lower fees (for pay-to-pubkey-hash). But this is really a 
UI/UX issue.
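
To make that concrete, the same 20-byte public key hash can be paid to under
either template; the address is just a human-readable name for the resulting
scriptPubKey bytes (a sketch for illustration):

    def p2pkh_script(keyhash20):
        # Legacy template, rendered as a "1..." address:
        # OP_DUP OP_HASH160 <20-byte keyhash> OP_EQUALVERIFY OP_CHECKSIG
        return bytes([0x76, 0xa9, 0x14]) + keyhash20 + bytes([0x88, 0xac])

    def p2wpkh_script(keyhash20):
        # Native segwit v0 template, rendered as a bech32 address:
        # OP_0 <20-byte keyhash>
        return bytes([0x00, 0x14]) + keyhash20

Nothing about the coins changes; only which script template the sender is
asked to pay to.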

A “fork” in bitcoin-like consensus systems has a very specific meaning. 
Changing address formats is not a fork, soft or hard.

There are many benefits to segregated witness. You may find this page helpful:

https://bitcoincore.org/en/2016/01/26/segwit-benefits/ 


> On Dec 18, 2017, at 8:40 AM, Alberto De Luigi via bitcoin-dev 
>  wrote:
> 
> Hello guys,
> I have a few questions about the SegWit tx size, I'd like to have 
> confirmation about the following statements. Can you correct mistakes or 
> inaccuracies? Thank you in advance.
>  
> In general, SegWit tx costs more than legacy tx (source 
> https://bitcoincore.org/en/2016/10/28/segwit-costs/ 
> ):
>  
> Compared to P2PKH, P2WPKH uses 3 fewer bytes (-1%) in the scriptPubKey, and 
> the same number of witness bytes as P2PKH scriptSig.
> Compared to P2SH, P2WSH uses 11 additional bytes (6%) in the scriptPubKey, 
> and the same number of witness bytes as P2SH scriptSig.
> Compared to P2PKH, P2WPKH/P2SH uses 21 additional bytes (11%), due to using 
> 24 bytes in scriptPubKey, 3 fewer bytes in scriptSig than in P2PKH 
> scriptPubKey, and the same number of witness bytes as P2PKH scriptSig.
> Compared to P2SH, P2WSH/P2SH uses 35 additional bytes (19%), due to using 24 
> bytes in scriptPubKey, 11 additional bytes in scriptSig compared to P2SH 
> scriptPubKey, and the same number of witness bytes as P2SH scriptSig.
>  
> But still it is convenient to adopt segwit because you move the bytes to the 
> blockweight part, paying smaller fee. In general, a tx with 1 input and 1 
> output is about 190 bytes. If it's a segwit tx, 82 bytes in the non-witness 
> part (blocksize), 108 bytes in the witness part (blockweight).
> See source:
> 4 bytes version
> 1 byte input count
> Input
> 36 bytes outpoint
> 1 byte scriptSigLen (0x00)
> 0 bytes scriptSig
> 4 bytes sequence
> 1 byte output count
> 8 bytes value
> 1 byte scriptPubKeyLen
> 22 bytes scriptPubKey (0x0014{20-byte keyhash})
> 4 bytes locktime
> https://bitcoin.stackexchange.com/questions/59408/with-100-segwit-transactions-what-would-be-the-max-number-of-transaction-confi
>  
> 
>  
> Which means, if you fill a block entirely with this kind of tx, you can 
> approximately double the capacity of the blockchain (blocksize capped to 1mb, 
> blockweight a little bit more than 2mb)
>  
> My concern is about segwit adoption by the exchanges. 
> SegWit transactions cost 10 bytes more than legacy transactions for each 
> output (vout is 256 bits instead of 160). Exchanges aggregate tx adding many 
> outputs, which is of course something good for bitcoin scalability, since 
> this way we save space and pay less fees.
> But when a tx has at least 10 outputs, using segwit you don't save space, 
> instead:
> - the total blockweight is at least 100bytes higher (10bytes x 10 outputs), 
> so the blockchain is heavier 
> - you don't save space inside the blocksize, so you cannot validate more 
> transactions of this kind (with many outputs), nor get cheaper fee
> - without cheaper fees exchanges have no incentives for segwit adoption 
> before they decide to adopt LN
>  
> In general we can say that using SegWit:
> - you decrease the fee only for some specific kind of transactions, and just 
> because you move some bytes to the blockweight
> - you don’t save space in the blockchain, on the contrary the total weight of 
> the blockchain increases (so it's clear to me why some time ago Luke tweeted 
> to not use SegWit unless really necessary... but then it's not clear why so 
> much haste in promoting BIP148 the 1st august risking a split)
>  
> If it's all correct, does something change with bech32? I'm reading bech32 
> allows saving about 22% of the space. Is this true for any kind of tx? 
> Immediate benefits of segwit for scalability are only with bech32?
>  
> Bech32 is non-compatible with the entire ecosystem (you cannot receive coins 
> from the quasi-totality of wallets in circulation), I would say it is a hard 
> fork. But is bare segwit really so different? The soft fork is "soft" for 
> the reference client Bitcoin Core, but 

Re: [bitcoin-dev] Why not witnessless nodes?

2017-12-18 Thread Mark Friedenbach via bitcoin-dev
Sign-to-contract enables some interesting protocols, none of which are in wide 
use as far as I’m aware. But if they were (and arguably this is an area that 
should be more developed), then SPV nodes validating these protocols will need 
access to witness data. If a node is performing IBD with assumevalid set to 
true, and is also intending to prune history, then there’s no reason to fetch 
those witnesses as far as I’m aware. But it would be a great disservice to the 
network for nodes intending to serve SPV clients to prune this portion of the 
block history. 

> On Dec 18, 2017, at 8:19 AM, Eric Voskuil via bitcoin-dev 
>  wrote:
> 
> You can't know (assume) a block is valid unless you have previously validated 
> the block yourself. But in the case where you have, and then intend to rely 
> on it in a future sync, there is no need for witness data for blocks you are 
> not going to validate. So you can just not request it. 
> 
> However you will not be able to provide those blocks to nodes that *are* 
> validating; the client is pruned and therefore not a peer (cannot 
> reciprocate). (An SPV client is similarly not a peer; it is a more deeply 
> pruned client than the witnessless client.)
> 
> There is no other reason that a node requires witness data. SPV clients don't 
> need it, as they neither require it to verify the header commitment to 
> transactions nor to extract payment addresses from them.
> 
> The harm to the network by pruning is that eventually it can become harder 
> and even impossible for anyone to validate the chain. But because you are 
> fully validating you individually remain secure, so there is no individual 
> incentive working against this system harm.
> 
> e
> 
> On Dec 18, 2017, at 08:35, Kalle Rosenbaum  > wrote:
> 
>> 2017-12-18 13:43 GMT+01:00 Eric Voskuil > >:
>> 
>> > On Dec 18, 2017, at 03:32, Kalle Rosenbaum via bitcoin-dev 
>> > > > > wrote:
>> >
>> > Dear list,
>> >
>> > I find it hard to understand why a full node that does initial block
>> > download also must download witnesses if they are going to skip 
>> > verification anyway.
>> 
>> Why run a full node if you are not going to verify the chain?
>> 
>> I meant to say "I find it hard to understand why a full node that does 
>> initial block
>> download also must download witnesses when it is going to skip verification 
>> of the witnesses anyway."
>> 
>> I'm referring to the "assumevalid" feature of Bitcoin Core that skips 
>> signature verification up to block X. Or have I misunderstood assumevalid?
>> 
>> /Kalle
>>  
>> 
>> > If my full node skips signature verification for
>> > blocks earlier than X, it seems the reasons for downloading the
>> > witnesses for those blocks are:
>> >
>> > * to be able to send witnesses to other nodes.
>> >
>> > * to verify the witness root hash of the blocks
>> >
>> > I suppose that it's important to verify the witness root hash because
>> > a bad peer may send me invalid witnesses during initial block
>> > download, and if I don't verify that the witness root hash actually
>> > commits to them, I will get banned by peers requesting the blocks from
>> > me because I send them garbage.
>> > So both the reasons above (there may be more that I don't know about)
>> > are actually the same reason: To be able to send witnesses to others
>> > without getting banned.
>> >
>> > What if a node could chose not to download witnesses and thus chose to
>> > send only witnessless blocks to peers. Let's call these nodes
>> > witnessless nodes. Note that witnessless nodes are only witnessless
>> > for blocks up to X. Everything after X is fully verified.
>> >
>> > Witnessless nodes would be able to sync faster because it needs to
>> > download less data to calculate their UTXO set. They would therefore
>> > more quickly be able to provide full service to SPV wallets and its
>> > local wallets as well as serving blocks to other witnessless nodes
>> > with same or higher assumevalid block. For witnessless nodes with
>> > lower assumevalid they can serve at least some blocks. It could also
>> > serve blocks to non-segwit nodes.
>> >
>> > Do witnessless nodes risk dividing the network in two parts, one
>> > witnessless and one with full nodes, with few connections between the
>> > parts?
>> >
>> > So basically, what are the reasons not to implement witnessless
>> > nodes?
>> >
>> > Thank you,
>> > /Kalle

Re: [bitcoin-dev] Making OP_CODESEPARATOR and FindAndDelete in non-segwit scripts non-standard

2017-11-27 Thread Mark Friedenbach via bitcoin-dev
It is relevant to note that BIP 117 makes an insecure form of CODESEPARATOR 
delegation possible, which could be made secure if some sort of 
CHECKSIGFROMSTACK opcode is added at a later point in time. It is not IMHO a 
very elegant way to achieve delegation, however, so I hope that one way or 
another this could be resolved quickly so it doesn’t hold up either one of 
those valuable additions.

I have no objections to making them nonstandard, or even to make them invalid 
if someone with a better grasp of history can attest that CODESEPARATOR was 
known to be entirely useless before the introduction of P2SH—not the same as 
saying it was useless, but that it was widely known to not accomplish what a 
early-days script author might think it was doing—and the UTXO set contains no 
scriptPubKeys making use of the opcode, even from the early days. Although a 
small handful could be special cased, if they exist.

> On Nov 27, 2017, at 8:33 AM, Matt Corallo <lf-li...@mattcorallo.com> wrote:
> 
> I strongly disagree here - we don't only soft-fork out transactions that
> are "fundamentally insecure", that would be significantly too
> restrictive. We have generally been willing to soft-fork out things
> which clearly fall outside of best-practices, especially rather
> "useless" fields in the protocol eg soft-forking behavior into OP_NOPs,
> soft-forking behavior into nSequence, etc.
> 
> As a part of setting clear best-practices, making things non-standard is
> the obvious step, though there has been active discussion of
> soft-forking out FindAndDelete and OP_CODESEPARATOR for years now. I
> obviously do not claim that we should be proposing a soft-fork to
> blacklist FindAndDelete and OP_CODESEPARATOR usage any time soon, and
> assume that it would take at least a year or three from when it was made
> non-standard to when a soft-fork to finally remove them was proposed.
> This should be more than sufficient time for folks using such weird (and
> largely useless) parts of the protocol to object, which should be
> sufficient to reconsider such a soft-fork.
> 
> Independently, making them non-standard is a good change on its own, and
> if nothing else should better inform discussion about the possibility of
> anyone using these things.
> 
> Matt
> 
> On 11/15/17 14:54, Mark Friedenbach via bitcoin-dev wrote:
>> As good of an idea as it may or may not be to remove this feature from
>> the code base, actually doing so would be crossing a boundary that we
>> have not previously been willing to do except under extraordinary
>> duress. The nature of bitcoin is such that we do not know and cannot
>> know what transactions exist out there pre-signed and making use of
>> these features.
>> 
>> It may be a good idea to make these features non standard to further
>> discourage their use, but I object to doing so with the justification of
>> eventually disabling them for all transactions. Taking that step has the
>> potential of destroying value and is something that we have only done in
>> the past either because we didn’t understand forks and best practices
>> very well, or because the features (now disabled) were fundamentally
>> insecure and resulted in other people’s coins being vulnerable. This
>> latter concern does not apply here as far as I’m aware.
>> 
>> On Nov 15, 2017, at 8:02 AM, Johnson Lau via bitcoin-dev
>> <bitcoin-dev@lists.linuxfoundation.org> wrote:
>> 
>>> In https://github.com/bitcoin/bitcoin/pull/11423 I propose to
>>> make OP_CODESEPARATOR and FindAndDelete in non-segwit scripts non-standard
>>> 
>>> I think FindAndDelete() is one of the most useless and complicated
>>> functions in the script language. It is omitted from segwit (BIP143),
>>> but we still need to support it in non-segwit scripts. Actually,
>>> FindAndDelete() would only be triggered in some weird edge cases like
>>> using out-of-range SIGHASH_SINGLE.
>>> 
>>> Non-segwit scripts also use a FindAndDelete()-like function to remove
>>> OP_CODESEPARATOR from scriptCode. Note that in BIP143, only executed
>>> OP_CODESEPARATOR are removed so it doesn’t have the
>>> FindAndDelete()-like function. OP_CODESEPARATOR in segwit scripts are
>>> useful for Tumblebit so it is not disabled in this proposal
>>> 
>>> By disabling both, it guarantees that scriptCode serialized inside
>>> SignatureHash() must be constant
>>> 
>>> If we use a softfork to remove FindAndDelete() and OP_CODESEPARATOR
>>> from non-seg

Re: [bitcoin-dev] Making OP_CODESEPARATOR and FindAndDelete in non-segwit scripts non-standard

2017-11-15 Thread Mark Friedenbach via bitcoin-dev
As good of an idea as it may or may not be to remove this feature from the code 
base, actually doing so would be crossing a boundary that we have not 
previously been willing to do except under extraordinary duress. The nature of 
bitcoin is such that we do not know and cannot know what transactions exist out 
there pre-signed and making use of these features.

It may be a good idea to make these features non standard to further discourage 
their use, but I object to doing so with the justification of eventually 
disabling them for all transactions. Taking that step has the potential of 
destroying value and is something that we have only done in the past either 
because we didn’t understand forks and best practices very well, or because the 
features (now disabled) were fundamentally insecure and resulted in other 
people’s coins being vulnerable. This latter concern does not apply here as far 
as I’m aware.

> On Nov 15, 2017, at 8:02 AM, Johnson Lau via bitcoin-dev 
>  wrote:
> 
> In https://github.com/bitcoin/bitcoin/pull/11423 I propose to make 
> OP_CODESEPARATOR and FindAndDelete in non-segwit scripts non-standard
> 
> I think FindAndDelete() is one of the most useless and complicated functions 
> in the script language. It is omitted from segwit (BIP143), but we still need 
> to support it in non-segwit scripts. Actually, FindAndDelete() would only be 
> triggered in some weird edge cases like using out-of-range SIGHASH_SINGLE.
> 
> Non-segwit scripts also use a FindAndDelete()-like function to remove 
> OP_CODESEPARATOR from scriptCode. Note that in BIP143, only executed 
> OP_CODESEPARATOR are removed so it doesn’t have the FindAndDelete()-like 
> function. OP_CODESEPARATOR in segwit scripts are useful for Tumblebit so it 
> is not disabled in this proposal
> 
> By disabling both, it guarantees that scriptCode serialized inside 
> SignatureHash() must be constant
> 
> If we use a softfork to remove FindAndDelete() and OP_CODESEPARATOR from 
> non-segwit scripts, we could completely remove FindAndDelete() from the 
> consensus code later by whitelisting all blocks before the softfork block. 
> The first step is to make them non-standard in the next release.
> 
> 
>  


Re: [bitcoin-dev] Simplicity proposal - Jets?

2017-11-03 Thread Mark Friedenbach via bitcoin-dev
To reiterate, none of the current work focuses on Bitcoin integration, and many 
architectures are possible.

However the Jets would have to be specified and agreed to upfront for costing 
reasons, and so they would be known to all validators. There would be no reason 
to include anything more then the identifying hash in any contract using the 
jet.

> On Nov 3, 2017, at 5:59 AM, Hampus Sjöberg via bitcoin-dev 
>  wrote:
> 
> Thank you for your answer, Russell.
> 
> When a code path takes advantage of a jet, does the Simplicity code still 
> need to be publicly available/visible in the blockchain? I imagine that for 
> big algorithms (say for example ECDSA verification/SHA256 hashing etc), it 
> would take up a lot of space in the blockchain.
> Is there any way to mitigate this?
> 
> I guess in a softfork for a jet, the Simplicity code for a jet could be 
> defined as "consensus", instead of needed to be provided within every script 
> output.
> When the Simplicity interpretor encounters an expression that has a jet, it 
> would run the C/Assembly code instead of interpreting the Simplicity code. By 
> formal verification we would be sure they match.
> 
> Greetings
> Hampus
> 
> 2017-11-03 2:10 GMT+01:00 Russell O'Connor via bitcoin-dev 
> :
>> Hi Jose,
>> 
>> Jets are briefly discussed in section 3.4 of 
>> https://blockstream.com/simplicity.pdf
>> 
>> The idea is that we can recognize some set of popular Simplicity 
>> expressions, and when the Simplicity interpreter encounters one of these 
>> expressions it can skip over the Simplicity interpreter and instead directly 
>> evaluate the function using specialized C or assembly code.
>> 
>> For example, when the Simplicity interpreter encounters the Simplicity 
>> expression for ECDSA verification, it might directly call into libsecp 
>> rather than continuing the ECDSA verification using interpreted Simplicity.
>> 
>> HTH.
>> 
>> 
>> On Nov 2, 2017 18:35, "JOSE FEMENIAS CAÑUELO via bitcoin-dev" 
>>  wrote:
>> Hi,
>> 
>> I am trying to follow this Simplicity proposal and I am seeing all over 
>> references to ‘jets’, but I haven’t been able to find any good reference to 
>> it.
>> Can anyone give me a brief explanation and or a link pointing to this 
>> feature?
>> Thanks
>> 
>>> On 31 Oct 2017, at 22:01, bitcoin-dev-requ...@lists.linuxfoundation.org 
>>> wrote:
>>> 
>>> The plan is that discounted jets will be explicitly labeled as jets in the
>>> commitment.  If you can provide a Merkle path from the root to a node that
>>> is an explicit jet, but that jet isn't among the finite number of known
>>> discounted jets,
>> 
>> 


Re: [bitcoin-dev] Merkle branch verification & tail-call semantics for generalized MAST

2017-11-01 Thread Mark Friedenbach via bitcoin-dev
Yes, if you use a witness script version you can save about 40 witness bytes by 
templating the MBV script, which I think is equivalent to what you are 
suggesting. 32 bytes from the saved hash, plus another 8 bytes or so from 
script templates and more efficient serialization.

I believe the conservatively correct approach is to do this in stages, however. 
First roll out MBV and tail call to witness v0. Then once there is experience 
with people using it in production, design and deploy a hashing template for 
script v1. It might be that we learn more and think of something better in the 
meantime.

> On Nov 1, 2017, at 1:43 AM, Luke Dashjr  wrote:
> 
> Mark,
> 
> I think I have found an improvement that can be made.
> 
> As you recall, a downside to this approach is that one must make two 
> commitments: first, to the particular "membership-checking script"; and then 
> in that script, to the particular merkle root of possible scripts.
> 
> Would there be any harm in, instead of checking membership, *calculating* the 
> root? If not, then we could define that instead of the witness program 
> committing to H(membership-check script), it rather commits to H(membership-
> calculation script | data added by an OP_ADDTOSCRIPTHASH). This would, I 
> believe, securely reduce the commitment of both to a single hash.
> 
> It also doesn't reduce flexibility, since one could omit OP_ADDTOSCRIPTHASH 
> from their "membership-calculation" script to get the previous membership-
> check behaviour, and use  OP_EQUAL in its place.
> 
> What do you think?
> 
> Luke
> 
> 
>> On Saturday 28 October 2017 4:40:01 AM Mark Friedenbach wrote:
>> I have completed updating the three BIPs with all the feedback that I have
>> received so far. In short summary, here is an incomplete list of the
>> changes that were made:
>> 
>> * Modified the hashing function fast-SHA256 so that an internal node cannot
>>   be interpreted simultaneously as a leaf.
>> * Changed MERKLEBRANCHVERIFY to verify a configurable number of elements
>>   from the tree, instead of just one.
>> * Changed MERKLEBRANCHVERIFY to have two modes: one where the inputs are
>>   assumed to be hashes, and one where they are run through double-SHA256
>>   first.
>> * Made tail-call eval compatible with BIP141’s CLEANSTACK consensus rule by
>>   allowing parameters to be passed on the alt-stack.
>> * Restricted tail-call eval to segwit scripts only, so that checking sigop
>>   and opcode limits of the policy script would not be necessary.
>> 
>> There were a bunch of other small modifications, typo fixes, and
>> optimizations that were made as well.
>> 
>> I am now ready to submit these BIPs as a PR against the bitcoin/bips repo,
>> and I request that the BIP editor assign numbers.
>> 
>> Thank you,
>> Mark Friedenbach
>> 
>>> On Sep 6, 2017, at 5:38 PM, Mark Friedenbach 
>>> wrote:
>>> 
>>> I would like to propose two new script features to be added to the
>>> bitcoin protocol by means of soft-fork activation. These features are
>>> a new opcode, MERKLE-BRANCH-VERIFY (MBV) and tail-call execution
>>> semantics.
>>> 
>>> In brief summary, MERKLE-BRANCH-VERIFY allows script authors to force
>>> redemption to use values selected from a pre-determined set committed
>>> to in the scriptPubKey, but without requiring revelation of unused
>>> elements in the set for both enhanced privacy and smaller script
>>> sizes. Tail-call execution semantics allows a single level of
>>> recursion into a subscript, providing properties similar to P2SH while
>>> at the same time more flexible.
>>> 
>>> These two features together are enough to enable a range of
>>> applications such as tree signatures (minus Schnorr aggregation) as
>>> described by Pieter Wuille [1], and a generalized MAST useful for
>>> constructing private smart contracts. It also brings privacy and
>>> fungibility improvements to users of counter-signing wallet/vault
>>> services as unique redemption policies need only be revealed if/when
>>> exceptional circumstances demand it, leaving most transactions looking
>>> the same as any other MAST-enabled multi-sig script.
>>> 
>>> I believe that the implementation of these features is simple enough,
>>> and the use cases compelling enough that we could do a BIP 8/9 rollout of
>>> these features in relatively short order, perhaps before the end of
>>> the year.
>>> 
>>> I have written three BIPs to describe these features, and their
>>> associated implementation, for which I now invite public review and
>>> discussion:
>>> 
>>> Fast Merkle Trees
>>> BIP: https://gist.github.com/maaku/41b0054de0731321d23e9da90ba4ee0a
>>> Code: https://github.com/maaku/bitcoin/tree/fast-merkle-tree
>>> 
>>> MERKLEBRANCHVERIFY
>>> BIP: https://gist.github.com/maaku/bcf63a208880bbf8135e453994c0e431
>>> Code: https://github.com/maaku/bitcoin/tree/merkle-branch-verify
>>> 
>>> Tail-call execution semantics
>>> BIP: https://gist.github.com/maaku/f7b2e710c53f601279549aa74eeb5368
>>> 
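Since MERKLEBRANCHVERIFY is the primitive at the heart of this thread, a 
generic sketch of Merkle inclusion-proof verification may help (illustrative 
only; the fast Merkle tree BIP defines its own hashing rules and stack 
encoding):

    #include <vector>
    using Hash = std::vector<unsigned char>;
    Hash sha256(const Hash&); // assumed SHA-256 helper
    // Recompute the root from a leaf hash and its sibling path; bits[i]
    // says whether the i-th sibling sits on the left (true) or right.
    Hash ComputeRoot(Hash node, const std::vector<Hash>& path,
                     const std::vector<bool>& bits)
    {
        for (size_t i = 0; i < path.size(); ++i) {
            const Hash& l = bits[i] ? path[i] : node;
            const Hash& r = bits[i] ? node : path[i];
            Hash buf;
            buf.insert(buf.end(), l.begin(), l.end());
            buf.insert(buf.end(), r.begin(), r.end());
            node = sha256(buf); // hash up one level
        }
        return node; // verify by comparing against the committed root
    }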

Re: [bitcoin-dev] Simplicity: An alternative to Script

2017-10-31 Thread Mark Friedenbach via bitcoin-dev
I don’t think you need to set an order of operations; just treat the jet as 
TRUE, but don’t stop validation. Order of operations doesn’t matter: either way 
it’ll execute both branches and terminate if the understood conditions don’t 
hold.

But maybe I’m missing something here. 
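To make the order-independence argument concrete, here is a toy sketch (an 
illustration only, not Simplicity's actual semantics): if an assertion failure 
anywhere forces the whole result to Fail, while an unknown jet merely counts 
as TRUE, then combining branch results is commutative and no evaluation order 
needs to be fixed.

    #include <memory>
    enum class Res { Pass, Fail };
    // Fail absorbs Pass, so Combine(a, b) == Combine(b, a).
    Res Combine(Res a, Res b)
    {
        return (a == Res::Fail || b == Res::Fail) ? Res::Fail : Res::Pass;
    }
    enum class Kind { UnknownJet, Assertion, Pair };
    struct Node {
        Kind kind;
        bool holds = false;                // for Assertion nodes
        std::shared_ptr<Node> left, right; // for Pair nodes
    };
    Res Eval(const Node& n)
    {
        switch (n.kind) {
        case Kind::UnknownJet: return Res::Pass; // fail-open: treat as TRUE
        case Kind::Assertion:  return n.holds ? Res::Pass : Res::Fail;
        case Kind::Pair:       return Combine(Eval(*n.left), Eval(*n.right));
        }
        return Res::Fail;
    }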

> On Oct 31, 2017, at 2:01 PM, Russell O'Connor  wrote:
> 
> That approach is worth considering.  However there is a wrinkle that 
> Simplicity's denotational semantics doesn't imply an order of operations.  
> For example, if one half of a pair contains an assertion failure 
> (fail-closed), and the other half contains an unknown jet (fail-open), then 
> does the program succeed or fail?
> 
> This could be solved by providing an order of operations; however I fear that 
> will complicate formal reasoning about Simplicity expressions.  Formal 
> reasoning is hard enough as is and I hesitate to complicate the semantics in 
> ways that make formal reasoning harder still.
> 
> 
> On Oct 31, 2017 15:47, "Mark Friedenbach"  wrote:
> Nit, but if you go down that specific path I would suggest making just
> the jet itself fail-open. That way you are not so limited in requiring
> validation of the full contract -- one party can verify simply that
> whatever condition they care about holds on reaching that part of the
> contract. E.g. maybe their signature is needed at the top level, and
> then they don't care what further restrictions are placed.
> 
> On Tue, Oct 31, 2017 at 1:38 PM, Russell O'Connor via bitcoin-dev
>  wrote:
> > (sorry, I forgot to reply-all earlier)
> >
> > The very short answer to this question is that I plan on using Luke's
> > fail-success-on-unknown-operation in Simplicity.  This is something that
> > isn't detailed at all in the paper.
> >
> > The plan is that discounted jets will be explicitly labeled as jets in the
> > commitment.  If you can provide a Merkle path from the root to a node that
> > is an explicit jet, but that jet isn't among the finite number of known
> > discounted jets, then the script is automatically successful (making it
> > anyone-can-spend).  When new jets are wanted they can be soft-forked into
> > the protocol (for example if we get a suitable quantum-resistant digital
> > signature scheme) and the list of known discounted jets grows.  Old nodes
> > get a merkle path to the new jet, which they view as an unknown jet, and
> > allow the transaction as an anyone-can-spend transaction.  New nodes see a
> > regular Simplicity redemption.  (I haven't worked out the details of how the
> > P2P protocol will negotiate with old nodes, but I don't foresee any
> > problems.)
> >
> > Note that this implies that you should never participate in any Simplicity
> > contract where you don't get access to the entire source code of all
> > branches to check that it doesn't have an unknown jet.
> >
> > On Mon, Oct 30, 2017 at 5:42 PM, Matt Corallo 
> > wrote:
> >>
> >> I admittedly haven't had a chance to read the paper in full details, but I
> >> was curious how you propose dealing with "jets" in something like Bitcoin.
> >> AFAIU, other similar systems are left doing hard-forks to reduce the
> >> sigops/weight/fee-cost of transactions every time they want to add useful
> >> optimized drop-ins. For obvious reasons, this seems rather impractical and
> >> a potentially critical barrier to adoption of such optimized drop-ins,
> >> which I imagine would be required to do any new cryptographic algorithms
> >> due to the significant fee cost of interpreting such things.
> >>
> >> Is there some insight I'm missing here?
> >>
> >> Matt
> >>
> >>
> >> On October 30, 2017 11:22:20 AM EDT, Russell O'Connor via bitcoin-dev
> >>  wrote:
> >>>
> >>> I've been working on the design and implementation of an alternative to
> >>> Bitcoin Script, which I call Simplicity.  Today, I am presenting my design
> >>> at the PLAS 2017 Workshop on Programming Languages and Analysis for
> >>> Security.  You can find a copy of my Simplicity paper at
> >>> https://blockstream.com/simplicity.pdf
> >>>
> >>> Simplicity is a low-level, typed, functional, native MAST language where
> >>> programs are built from basic combinators.  Like Bitcoin Script,
> >>> Simplicity is designed to operate at the consensus layer.  While one can
> >>> write Simplicity by hand, it is expected to be the target of one, or
> >>> multiple, front-end languages.
> >>>
> >>> Simplicity comes with formal denotational semantics (i.e. semantics of
> >>> what programs compute) and formal operational semantics (i.e. semantics of
> >>> how programs compute). These are both formalized in the Coq proof
> >>> assistant and proven equivalent.
> >>>
> >>> Formal denotational semantics are of limited value unless one can use
> >>> them in practice to reason about programs. I've used Simplicity's formal
> >>> 

Re: [bitcoin-dev] Simplicity: An alternative to Script

2017-10-31 Thread Mark Friedenbach via bitcoin-dev
Nit, but if you go down that specific path I would suggest making just
the jet itself fail-open. That way you are not so limited in requiring
validation of the full contract -- one party can verify simply that
whatever condition they care about holds on reaching that part of the
contract. E.g. maybe their signature is needed at the top level, and
then they don't care what further restrictions are placed.

On Tue, Oct 31, 2017 at 1:38 PM, Russell O'Connor via bitcoin-dev
 wrote:
> (sorry, I forgot to reply-all earlier)
>
> The very short answer to this question is that I plan on using Luke's
> fail-success-on-unknown-operation in Simplicity.  This is something that
> isn't detailed at all in the paper.
>
> The plan is that discounted jets will be explicitly labeled as jets in the
> commitment.  If you can provide a Merkle path from the root to a node that
> is an explicit jet, but that jet isn't among the finite number of known
> discounted jets, then the script is automatically successful (making it
> anyone-can-spend).  When new jets are wanted they can be soft-forked into
> the protocol (for example if we get a suitable quantum-resistant digital
> signature scheme) and the list of known discounted jets grows.  Old nodes
> get a merkle path to the new jet, which they view as an unknown jet, and
> > allow the transaction as an anyone-can-spend transaction.  New nodes see a
> > regular Simplicity redemption.  (I haven't worked out the details of how the
> > P2P protocol will negotiate with old nodes, but I don't foresee any
> problems.)
>
> Note that this implies that you should never participate in any Simplicity
> contract where you don't get access to the entire source code of all
> branches to check that it doesn't have an unknown jet.
>
> On Mon, Oct 30, 2017 at 5:42 PM, Matt Corallo 
> wrote:
>>
>> I admittedly haven't had a chance to read the paper in full details, but I
>> was curious how you propose dealing with "jets" in something like Bitcoin.
>> AFAIU, other similar systems are left doing hard-forks to reduce the
>> sigops/weight/fee-cost of transactions every time they want to add useful
>> optimized drop-ins. For obvious reasons, this seems rather impractical and a
>> potentially critical barrier to adoption of such optimized drop-ins, which I
>> imagine would be required to do any new cryptographic algorithms due to the
>> significant fee cost of interpreting such things.
>>
>> Is there some insight I'm missing here?
>>
>> Matt
>>
>>
>> On October 30, 2017 11:22:20 AM EDT, Russell O'Connor via bitcoin-dev
>>  wrote:
>>>
>>> I've been working on the design and implementation of an alternative to
>>> Bitcoin Script, which I call Simplicity.  Today, I am presenting my design
>>> at the PLAS 2017 Workshop on Programming Languages and Analysis for
>>> Security.  You can find a copy of my Simplicity paper at
>>> https://blockstream.com/simplicity.pdf
>>>
>>> Simplicity is a low-level, typed, functional, native MAST language where
>>> programs are built from basic combinators.  Like Bitcoin Script, Simplicity
>>> is designed to operate at the consensus layer.  While one can write
>>> Simplicity by hand, it is expected to be the target of one, or multiple,
>>> front-end languages.
>>>
>>> Simplicity comes with formal denotational semantics (i.e. semantics of
>>> what programs compute) and formal operational semantics (i.e. semantics of
>>> how programs compute). These are both formalized in the Coq proof assistant
>>> and proven equivalent.
>>>
>>> Formal denotational semantics are of limited value unless one can use
>>> them in practice to reason about programs. I've used Simplicity's formal
>>> semantics to prove correct an implementation of the SHA-256 compression
>>> function written in Simplicity.  I have also implemented a variant of ECDSA
>>> signature verification in Simplicity, and plan to formally validate its
>>> correctness along with the associated elliptic curve operations.
>>>
>>> Simplicity comes with easy to compute static analyses that can compute
>>> bounds on the space and time resources needed for evaluation.  This is
>>> important for both node operators, so that the costs are known before
>>> evaluation, and for designing Simplicity programs, so that smart-contract
>>> participants can know the costs of their contract before committing to it.
>>>
>>> As a native MAST language, unused branches of Simplicity programs are
>>> pruned at redemption time.  This enhances privacy, reduces the block weight
>>> used, and can reduce space and time resource costs needed for evaluation.
>>>
>>> To make Simplicity practical, jets replace common Simplicity expressions
>>> (identified by their MAST root) and directly implement them with C code.  I
>>> anticipate developing a broad set of useful jets covering arithmetic
>>> operations, elliptic curve operations, and cryptographic operations
>>> 

Re: [bitcoin-dev] Simplicity: An alternative to Script

2017-10-30 Thread Mark Friedenbach via bitcoin-dev
I was just making a factual observation/correction. This is Russell’s project 
and I don’t want to speak for him. Personally I don’t think the particulars of 
the bitcoin integration design space have been thoroughly explored enough to 
predict the exact approach that will be used.

It is possible to support a standard library of jets that are general purpose 
enough to allow the validation of new crypto primitives, like reusing sha2 to 
make Lamport signatures. Or use curve-agnostic jets to do Weil pairing 
validation. Or string manipulation and serialization jets to implement 
covenants. So I don’t think the situation is as dire as you make it sound.
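As a concrete illustration of the Lamport point, a SHA2 jet alone is enough to 
verify a one-bit-per-key Lamport signature; the sketch below is hypothetical 
(sha256() and Hash are assumed helpers, not a proposed jet set):

    #include <array>
    #include <vector>
    using Hash = std::vector<unsigned char>;
    Hash sha256(const Hash&); // assumed SHA-256 primitive, i.e. the jet
    // pubkey[i][b] = sha256(priv[i][b]); sig[i] reveals priv[i][msgbits[i]].
    bool VerifyLamport(const std::vector<std::array<Hash, 2>>& pubkey,
                       const std::vector<Hash>& sig,
                       const std::vector<bool>& msgbits)
    {
        if (sig.size() != msgbits.size() || pubkey.size() != msgbits.size())
            return false;
        for (size_t i = 0; i < msgbits.size(); ++i)
            if (sha256(sig[i]) != pubkey[i][msgbits[i]])
                return false; // revealed preimage must hash to the commitment
        return true;
    }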

> On Oct 30, 2017, at 3:14 PM, Matt Corallo  wrote:
> 
> Are you anticipating it will be reasonably possible to execute more
> complicated things in interpreted form even after "jets" are put in
> place? If not its just a soft-fork to add new script operations and
> going through the effort of making them compatible with existing code
> and using a full 32 byte hash to represent them seems wasteful - might
> as well just add a "SHA256 opcode".
> 
> Either way it sounds like you're assuming a pretty aggressive soft-fork
> cadence? I'm not sure if that's so practical right now (or are you
> thinking it would be more practical if things were
> drop-in-formally-verified-equivalent-replacements?).
> 
> Matt
> 
>> On 10/30/17 17:56, Mark Friedenbach wrote:
>> Script versions make this no longer a hard-fork to do. The script
>> version would implicitly encode which jets are optimized, and what their
>> optimized cost is.
>> 
>>> On Oct 30, 2017, at 2:42 PM, Matt Corallo via bitcoin-dev
>>> >> > wrote:
>>> 
>>> I admittedly haven't had a chance to read the paper in full details,
>>> but I was curious how you propose dealing with "jets" in something
>>> like Bitcoin. AFAIU, other similar systems are left doing hard-forks
>>> to reduce the sigops/weight/fee-cost of transactions every time they
>>> want to add useful optimized drop-ins. For obvious reasons, this seems
>>> rather impractical and a potentially critical barrier to adoption of
>>> such optimized drop-ins, which I imagine would be required to do any
>>> new cryptographic algorithms due to the significant fee cost of
>>> interpreting such things.
>>> 
>>> Is there some insight I'm missing here?
>>> 
>>> Matt
>>> 
>>> On October 30, 2017 11:22:20 AM EDT, Russell O'Connor via bitcoin-dev
>>> >> > wrote:
>>> 
>>>I've been working on the design and implementation of an
>>>alternative to Bitcoin Script, which I call Simplicity.  Today, I
>>>am presenting my design at the PLAS 2017 Workshop
>>> on Programming Languages and
>>>Analysis for Security.  You can find a copy of my Simplicity paper at
>>>https://blockstream.com/simplicity.pdf
>>>
>>> 
>>>Simplicity is a low-level, typed, functional, native MAST language
>>>where programs are built from basic combinators.  Like Bitcoin
>>>Script, Simplicity is designed to operate at the consensus layer. 
>>>While one can write Simplicity by hand, it is expected to be the
>>>target of one, or multiple, front-end languages.
>>> 
>>>Simplicity comes with formal denotational semantics (i.e.
>>>semantics of what programs compute) and formal operational
>>>semantics (i.e. semantics of how programs compute). These are both
>>>formalized in the Coq proof assistant and proven equivalent.
>>> 
>>>Formal denotational semantics are of limited value unless one can
>>>use them in practice to reason about programs. I've used
>>>Simplicity's formal semantics to prove correct an implementation
>>>of the SHA-256 compression function written in Simplicity.  I have
>>>also implemented a variant of ECDSA signature verification in
>>>Simplicity, and plan to formally validate its correctness along
>>>with the associated elliptic curve operations.
>>> 
>>>Simplicity comes with easy to compute static analyses that can
>>>compute bounds on the space and time resources needed for
>>>evaluation.  This is important for both node operators, so that
>>>the costs are known before evaluation, and for designing
>>>Simplicity programs, so that smart-contract participants can know
>>>the costs of their contract before committing to it.
>>> 
>>>As a native MAST language, unused branches of Simplicity programs
>>>are pruned at redemption time.  This enhances privacy, reduces the
>>>block weight used, and can reduce space and time resource costs
>>>needed for evaluation.
>>> 
>>>To make Simplicity practical, jets replace common Simplicity
>>>expressions (identified by their MAST root) and directly implement
>>> 

Re: [bitcoin-dev] Simplicity: An alternative to Script

2017-10-30 Thread Mark Friedenbach via bitcoin-dev
Script versions make this no longer a hard-fork to do. The script version 
would implicitly encode which jets are optimized, and what their optimized cost 
is.
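One way to picture "implicitly encode" (purely illustrative; none of these 
names or types are Bitcoin Core's): each script version carries a table 
mapping a jet's MAST root to its discounted cost, and a new script version 
extends the table without a hard fork.

    #include <array>
    #include <map>
    using MastRoot = std::array<unsigned char, 32>; // stand-in for a 32-byte hash
    using JetTable = std::map<MastRoot, int>;       // jet root -> discounted cost
    // Returns the discounted cost of a jet under a given script version,
    // or -1 if the version or the jet is unknown.
    int JetCost(int script_version,
                const std::map<int, JetTable>& known_jets,
                const MastRoot& mast_root)
    {
        auto v = known_jets.find(script_version);
        if (v == known_jets.end()) return -1;         // unknown script version
        auto j = v->second.find(mast_root);
        return j == v->second.end() ? -1 : j->second; // not a known jet
    }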

> On Oct 30, 2017, at 2:42 PM, Matt Corallo via bitcoin-dev 
>  wrote:
> 
> I admittedly haven't had a chance to read the paper in full details, but I 
> was curious how you propose dealing with "jets" in something like Bitcoin. 
> AFAIU, other similar systems are left doing hard-forks to reduce the 
> sigops/weight/fee-cost of transactions every time they want to add useful 
> optimized drop-ins. For obvious reasons, this seems rather impractical and a 
> potentially critical barrier to adoption of such optimized drop-ins, which I 
> imagine would be required to do any new cryptographic algorithms due to the 
> significant fee cost of interpreting such things.
> 
> Is there some insight I'm missing here?
> 
> Matt
> 
> On October 30, 2017 11:22:20 AM EDT, Russell O'Connor via bitcoin-dev 
>  wrote:
> I've been working on the design and implementation of an alternative to 
> Bitcoin Script, which I call Simplicity.  Today, I am presenting my design at 
> the PLAS 2017 Workshop  on Programming 
> Languages and Analysis for Security.  You can find a copy of my Simplicity paper 
> at https://blockstream.com/simplicity.pdf 
> 
> 
> Simplicity is a low-level, typed, functional, native MAST language where 
> programs are built from basic combinators.  Like Bitcoin Script, Simplicity 
> is designed to operate at the consensus layer.  While one can write 
> Simplicity by hand, it is expected to be the target of one, or multiple, 
> front-end languages.
> 
> Simplicity comes with formal denotational semantics (i.e. semantics of what 
> programs compute) and formal operational semantics (i.e. semantics of how 
> programs compute). These are both formalized in the Coq proof assistant and 
> proven equivalent.
> 
> Formal denotational semantics are of limited value unless one can use them in 
> practice to reason about programs. I've used Simplicity's formal semantics to 
> prove correct an implementation of the SHA-256 compression function written 
> in Simplicity.  I have also implemented a variant of ECDSA signature 
> verification in Simplicity, and plan to formally validate its correctness 
> along with the associated elliptic curve operations.
> 
> Simplicity comes with easy to compute static analyses that can compute bounds 
> on the space and time resources needed for evaluation.  This is important for 
> both node operators, so that the costs are known before evaluation, and for 
> designing Simplicity programs, so that smart-contract participants can know 
> the costs of their contract before committing to it.
> 
> As a native MAST language, unused branches of Simplicity programs are pruned 
> at redemption time.  This enhances privacy, reduces the block weight used, 
> and can reduce space and time resource costs needed for evaluation.
> 
> To make Simplicity practical, jets replace common Simplicity expressions 
> (identified by their MAST root) and directly implement them with C code.  I 
> anticipate developing a broad set of useful jets covering arithmetic 
> operations, elliptic curve operations, and cryptographic operations including 
> hashing and digital signature validation.
> 
> The paper I am presenting at PLAS describes only the foundation of the 
> Simplicity language.  The final design includes extensions not covered in the 
> paper, including
> 
> - full covenant support, allowing access to all transaction data.
> - support for signature aggregation.
> - support for delegation.
> 
> Simplicity is still in a research and development phase.  I'm working to 
> produce a bare-bones SDK that will include 
> 
> - the formal semantics and correctness proofs in Coq
> - a Haskell implementation for constructing Simplicity programs
> - and a C interpreter for Simplicity.
> 
> After an SDK is complete the next step will be making Simplicity available in 
> the Elements project  so that anyone can start 
> experimenting with Simplicity in sidechains. Only after extensive vetting 
> would it be suitable to consider Simplicity for inclusion in Bitcoin.
> 
> Simplicity has a long ways to go still, and this work is not intended to 
> delay consideration of the various Merkelized Script proposals that are 
> currently ongoing.


Re: [bitcoin-dev] Simplicity: An alternative to Script

2017-10-30 Thread Mark Friedenbach via bitcoin-dev
So enthused that this is public now! Great work. 

Sent from my iPhone

> On Oct 30, 2017, at 8:22 AM, Russell O'Connor via bitcoin-dev 
>  wrote:
> 
> I've been working on the design and implementation of an alternative to 
> Bitcoin Script, which I call Simplicity.  Today, I am presenting my design at 
> the PLAS 2017 Workshop on Programming Languages and Analysis for Security.  
> You can find a copy of my Simplicity paper at 
> https://blockstream.com/simplicity.pdf
> 
> Simplicity is a low-level, typed, functional, native MAST language where 
> programs are built from basic combinators.  Like Bitcoin Script, Simplicity 
> is designed to operate at the consensus layer.  While one can write 
> Simplicity by hand, it is expected to be the target of one, or multiple, 
> front-end languages.
> 
> Simplicity comes with formal denotational semantics (i.e. semantics of what 
> programs compute) and formal operational semantics (i.e. semantics of how 
> programs compute). These are both formalized in the Coq proof assistant and 
> proven equivalent.
> 
> Formal denotational semantics are of limited value unless one can use them in 
> practice to reason about programs. I've used Simplicity's formal semantics to 
> prove correct an implementation of the SHA-256 compression function written 
> in Simplicity.  I have also implemented a variant of ECDSA signature 
> verification in Simplicity, and plan to formally validate its correctness 
> along with the associated elliptic curve operations.
> 
> Simplicity comes with easy to compute static analyses that can compute bounds 
> on the space and time resources needed for evaluation.  This is important 
> for both node operators, so that the costs are known before evaluation, and 
> for designing Simplicity programs, so that smart-contract participants can 
> know the costs of their contract before committing to it.
> 
> As a native MAST language, unused branches of Simplicity programs are pruned 
> at redemption time.  This enhances privacy, reduces the block weight used, 
> and can reduce space and time resource costs needed for evaluation.
> 
> To make Simplicity practical, jets replace common Simplicity expressions 
> (identified by their MAST root) and directly implement them with C code.  I 
> anticipate developing a broad set of useful jets covering arithmetic 
> operations, elliptic curve operations, and cryptographic operations including 
> hashing and digital signature validation.
> 
> The paper I am presenting at PLAS describes only the foundation of the 
> Simplicity language.  The final design includes extensions not covered in the 
> paper, including
> 
> - full covenant support, allowing access to all transaction data.
> - support for signature aggregation.
> - support for delegation.
> 
> Simplicity is still in a research and development phase.  I'm working to 
> produce a bare-bones SDK that will include 
> 
> - the formal semantics and correctness proofs in Coq
> - a Haskell implementation for constructing Simplicity programs
> - and a C interpreter for Simplicity.
> 
> After an SDK is complete the next step will be making Simplicity available in 
> the Elements project so that anyone can start experimenting with Simplicity 
> in sidechains. Only after extensive vetting would it be suitable to consider 
> Simplicity for inclusion in Bitcoin.
> 
> Simplicity has a long ways to go still, and this work is not intended to 
> delay consideration of the various Merkelized Script proposals that are 
> currently ongoing.


Re: [bitcoin-dev] Merkle branch verification & tail-call semantics for generalized MAST

2017-10-27 Thread Mark Friedenbach via bitcoin-dev
I have completed updating the three BIPs with all the feedback that I have 
received so far. In short summary, here is an incomplete list of the changes 
that were made:

* Modified the hashing function fast-SHA256 so that an internal node cannot be 
interpreted simultaneously as a leaf.
* Changed MERKLEBRANCHVERIFY to verify a configurable number of elements from 
the tree, instead of just one.
* Changed MERKLEBRANCHVERIFY to have two modes: one where the inputs are 
assumed to be hashes, and one where they are run through double-SHA256 first.
* Made tail-call eval compatible with BIP141’s CLEANSTACK consensus rule by 
allowing parameters to be passed on the alt-stack.
* Restricted tail-call eval to segwit scripts only, so that checking sigop and 
opcode limits of the policy script would not be necessary.

There were a bunch of other small modifications, typo fixes, and optimizations 
that were made as well.
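On the first bullet, the standard way to get that property is domain 
separation between leaf and internal-node hashes. The following only 
illustrates the general technique (the BIP should be consulted for 
fast-SHA256's actual construction):

    #include <vector>
    using Bytes = std::vector<unsigned char>;
    Bytes sha256(const Bytes&); // assumed SHA-256 helper
    // Distinct tags ensure no internal node's preimage can also be
    // parsed as a leaf's preimage, and vice versa.
    Bytes HashLeaf(const Bytes& data)
    {
        Bytes buf{0x00}; // leaf tag
        buf.insert(buf.end(), data.begin(), data.end());
        return sha256(buf);
    }
    Bytes HashInner(const Bytes& left, const Bytes& right)
    {
        Bytes buf{0x01}; // inner-node tag
        buf.insert(buf.end(), left.begin(), left.end());
        buf.insert(buf.end(), right.begin(), right.end());
        return sha256(buf);
    }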

I am now ready to submit these BIPs as a PR against the bitcoin/bips repo, and 
I request that the BIP editor assign numbers.

Thank you,
Mark Friedenbach

> On Sep 6, 2017, at 5:38 PM, Mark Friedenbach  wrote:
> 
> I would like to propose two new script features to be added to the
> bitcoin protocol by means of soft-fork activation. These features are
> a new opcode, MERKLE-BRANCH-VERIFY (MBV) and tail-call execution
> semantics.
> 
> In brief summary, MERKLE-BRANCH-VERIFY allows script authors to force
> redemption to use values selected from a pre-determined set committed
> to in the scriptPubKey, but without requiring revelation of unused
> elements in the set for both enhanced privacy and smaller script
> sizes. Tail-call execution semantics allows a single level of
> recursion into a subscript, providing properties similar to P2SH while
> at the same time being more flexible.
> 
> These two features together are enough to enable a range of
> applications such as tree signatures (minus Schnorr aggregation) as
> described by Pieter Wuille [1], and a generalized MAST useful for
> constructing private smart contracts. It also brings privacy and
> fungibility improvements to users of counter-signing wallet/vault
> services as unique redemption policies need only be revealed if/when
> exceptional circumstances demand it, leaving most transactions looking
> the same as any other MAST-enabled multi-sig script.
> 
> I believe that the implementation of these features is simple enough,
> and the use cases compelling enough that we could do a BIP 8/9 rollout of
> these features in relatively short order, perhaps before the end of
> the year.
> 
> I have written three BIPs to describe these features, and their
> associated implementation, for which I now invite public review and
> discussion:
> 
> Fast Merkle Trees
> BIP: https://gist.github.com/maaku/41b0054de0731321d23e9da90ba4ee0a
> Code: https://github.com/maaku/bitcoin/tree/fast-merkle-tree
> 
> MERKLEBRANCHVERIFY
> BIP: https://gist.github.com/maaku/bcf63a208880bbf8135e453994c0e431
> Code: https://github.com/maaku/bitcoin/tree/merkle-branch-verify
> 
> Tail-call execution semantics
> BIP: https://gist.github.com/maaku/f7b2e710c53f601279549aa74eeb5368
> Code: https://github.com/maaku/bitcoin/tree/tail-call-semantics
> 
> Note: I have circulated this idea privately among a few people, and I
> will note that there is one piece of feedback which I agree with but
> is not incorporated yet: there should be a multi-element MBV opcode
> that allows verifying multiple items are extracted from a single
> tree. It is not obvious how MBV could be modified to support this
> without sacrificing important properties, or whether there should be a
> separate multi-MBV opcode instead.
> 
> Kind regards,
> Mark Friedenbach



Re: [bitcoin-dev] bitcoin-dev Digest, Vol 29, Issue 24

2017-10-20 Thread Mark Friedenbach via bitcoin-dev
You could do that today, with one of the 3 interoperable Lightning 
implementations available. Lowering the block interval on the other hand comes 
with a large number of centralizing downsides documented elsewhere. And getting 
down to 1sec or less on a global network is simply impossible due to the speed 
of light. 

If you want point of sale support, I suggest looking into the excellent work 
the Lightning teams have done.

> On Oct 20, 2017, at 7:24 PM, Ilan Oh via bitcoin-dev 
>  wrote:
> 
> The only blocktime reduction that would be a game changer would be a 
> 1-second blocktime or less, and by less I mean much less, maybe 1000 
> blocks/second. That would enable decentralized high-frequency trading, or 
> playing WoW on a blockchain, and other cool stuff. 
> 
> But the technology is not developed enough as far as I know; maybe with 
> quantum computers in the future. And is it even bitcoin's goal?
> 
> Also there is a guy who wrote a script to avoid "sybil attack" from 2x
> https://github.com/mariodian/ban-segshit8x-nodes
> 
> I don't know what it's worth, maybe check it out; I'm not a huge supporter 
> of that kind of method.
> 
> Ilansky
> 
> 
> On 20 Oct 2017 at 14:01,  wrote:
>> Send bitcoin-dev mailing list submissions to
>> bitcoin-dev@lists.linuxfoundation.org
>> 
>> To subscribe or unsubscribe via the World Wide Web, visit
>> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>> or, via email, send a message with subject or body 'help' to
>> bitcoin-dev-requ...@lists.linuxfoundation.org
>> 
>> You can reach the person managing the list at
>> bitcoin-dev-ow...@lists.linuxfoundation.org
>> 
>> When replying, please edit your Subject line so it is more specific
>> than "Re: Contents of bitcoin-dev digest..."
>> 
>> 
>> Today's Topics:
>> 
>>1. Improving Scalability via Block Time Decrease (Jonathan Sterling)
>>2. Re: Improving Scalability via Block Time Decrease
>>   (Adán Sánchez de Pedro Crespo)
>> 
>> 
>> --
>> 
>> Message: 1
>> Date: Thu, 19 Oct 2017 14:52:48 +0800
>> From: Jonathan Sterling 
>> To: bitcoin-dev@lists.linuxfoundation.org
>> Subject: [bitcoin-dev] Improving Scalability via Block Time Decrease
>> Message-ID:
>> 
>> Content-Type: text/plain; charset="utf-8"
>> 
>> The current ten-minute block time was chosen by Satoshi as a tradeoff
>> between confirmation time and the amount of work wasted due to chain
>> splits. Is there not room for optimization in this number from:
>> 
>> A. Advances in technology in the last 8-9 years
>> B. A lack of any rigorous formula being used to determine what's the
>> optimal rate
>> C. The existence of similar chains that work at much lower block times
>> 
>> Whilst I think we can all agree that 10 second block times would result in
>> a lot of chain splits due to Bitcoin's 12-13 second propagation time (to 95%
>> of nodes), I think we'll find that we can go lower than 10 minutes without
>> much issue. Is this something that should be looked at or am I an idiot who
>> needs to read more? If I'm an idiot, I apologize; kindly point me in the
>> right direction.
>> 
>> Things I've read on the subject:
>> https://medium.facilelogin.com/the-mystery-behind-block-time-63351e35603a
>> (section header "Why Bitcoin Block Time Is 10 Minutes ?")
>> https://bitcointalk.org/index.php?topic=176108.0
>> https://bitcoin.stackexchange.com/questions/1863/why-was-the-target-block-time-chosen-to-be-10-minutes
>> 
>> Kind Regards,
>> 
>> Jonathan Sterling
>> 
>> 
>> --
>> 
>> Message: 2
>> Date: Thu, 19 Oct 2017 15:41:51 +0200
>> From: Adán Sánchez de Pedro Crespo
>> 
>> To: bitcoin-dev@lists.linuxfoundation.org
>> Subject: Re: [bitcoin-dev] Improving Scalability via Block Time
>> Decrease
>> Message-ID: <40b6ef7b-f518-38cd-899a-8f301bc7a...@stampery.com>
>> Content-Type: text/plain; charset=utf-8
>> 
>> Blockchains with fast confirmation times are currently believed to
>> suffer from reduced security due to a high stale rate.
>> 
>> As blocks take a certain time to propagate through the network, if miner
>> A mines a block and then miner B happens to mine another block before
>> miner A's block propagates to B, miner B's block will end up wasted and
>> will not "contribute to network security".
>> 
>> Furthermore, there is a centralization issue: if miner A is a mining
>> pool with 30% hashpower and B has 10% hashpower, A will have a risk of
>> producing a stale block 70% 

Re: [bitcoin-dev] New difficulty algorithm part 2

2017-10-12 Thread Mark Friedenbach via bitcoin-dev


> On Oct 12, 2017, at 3:40 AM, ZmnSCPxj via bitcoin-dev 
>  wrote:
> 
> As most Core developers hodl vast amounts, it is far more likely that any 
> hardfork that goes against what Core wishes will collapse, simply by Core 
> developers acting in their capacity as hodlers of Bitcoin, without needing to 
> do any special action in their capacity as developers.

While this might be true of some, it is most certainly not true of many, and it 
is a very dangerous meme to the safety and well-being of people on this list.

You don’t get bitcoin for being a bitcoin developer, and there is no reason to 
suppose a developer has any more or less bitcoin than anyone else in the 
industry.

It is certainly the case that a large number of people and organizations who 
are not developers hold massive amounts of bitcoin (hundred of thousands each, 
millions in aggregate). 


Re: [bitcoin-dev] New difficulty algorithm needed for SegWit2x fork? (reformatted text)

2017-10-10 Thread Mark Friedenbach via bitcoin-dev
You phrase the question as if “deploying a hard fork to bitcoin” would protect 
the bitcoin chain from the attack. But that’s not what happens. If you are hard 
forking from the perspective of deployed nodes, you are a different ledger, 
regardless of circumstance or who did it. Instead of there being one altcoin 
fighting to take hashpower from bitcoin, there’d now be 2. It is not at all 
obvious to me that this would be a better outcome.

If that isn’t reason enough, changing the difficulty adjustment algorithm 
doesn’t solve the underlying issue―hashpower not being aligned with users’ (or 
even its owners’) interests. Propose a fix to the underlying cause and that 
might be worth considering, if it passes peer review. But without that you’d 
just be making the state of affairs arguably worse.

And so yes, *if* this incentive problem can’t be solved, and the unaltered 
bitcoin chain dies from disuse after suffering a hashpower attack, especially a 
centrally and/or purposefully instigated one, then bitcoin would be a failed 
project.

The thesis (and value proposition) of bitcoin is that a particular category of 
economic incentives can be used to solve the problem of creating a secure 
trustless ledger. If those incentives failed, then the thesis of bitcoin would 
have been experimentally falsified, yes. Maybe the incentives can be made 
better to save the project, but we’d have to fix the source of the problem not 
the symptoms.

> On Oct 10, 2017, at 6:44 PM, Ben Kloester <benkloes...@gmail.com> wrote:
> 
> Mark, this seems an awful lot like an answer of "no", to my question "Is 
> there a contingency plan in the case that the incumbent chain following the 
> Bitcoin Core consensus rules comes under 51% attack?" - is this a correct 
> interpretation?
> 
> In fact, beyond a no, it seems like a "no, and I disagree with the idea of 
> creating one".
> 
> So if Bitcoin comes under successful 51%, the project, in your vision, has 
> simply failed?
> 
> Ben Kloester
> 
> 
>> On 10 October 2017 at 13:19, Mark Friedenbach via bitcoin-dev 
>> <bitcoin-dev@lists.linuxfoundation.org> wrote:
>> The problem of fast acting but non vulnerable difficulty adjustment 
>> algorithms is interesting. I would certainly like to see this space further 
>> explored, and even have some ideas myself.
>> 
>> However without commenting on the technical merits of this specific 
>> proposal, I think it must be said upfront that the stated goal is not good. 
>> The largest technical concern (ignoring governance) over B2X is that it is a 
>> rushed, poorly reviewed hard fork. Hard forks should not be rushed, and they 
>> should receive more than the usual level of expert and community review.
>> 
>> In that light, doing an even more rushed hard fork on an even newer idea 
>> with even less review would be hypocritical at best. I would suggest 
>> reframing as a hardfork wishlist research problem for the next properly 
>> planned hard fork, if one occurs. You might also find the hardfork research 
>> group a more accommodating venue for this discussion:
>> 
>> https://bitcoinhardforkresearch.github.io/
>> 
>>> On Oct 9, 2017, at 3:57 PM, Scott Roberts via bitcoin-dev 
>>> <bitcoin-dev@lists.linuxfoundation.org> wrote:
>>> 
>>> Sorry, my previous email did not have the plain text I intended.
>>> 
>>> Background: 
>>> 
>>> The bitcoin difficulty algorithm does not seem to be a good one. If there 
>>> is a fork due to miners seeking maximum profit without due regard to 
>>> security, users, and nodes, the "better" coin could end up being the 
>>> minority chain. If 90% of hashrate is really going to at least initially go 
>>> towards using SegWit2x, BTC would face 10x delays in confirmations 
>>> until the next difficulty adjustment, negatively affecting its price 
>>> relative to BTC1, causing further delays from even more miner abandonment 
>>> (until the next adjustment). The 10% miners remaining on BTC do not 
>>> inevitably lose by staying to endure 10x delays because they have 10x 
>>> less competition, and the same situation applies to BTC1 miners. If the 
>>> prices are the same and stable, all seems well for everyone, other things 
>>> aside. But if the BTC price does not fall to reflect the decreased 
>>> hashrate, the situation seems to be a big problem for both coins: BTC1 
>>> miners will 
>>> jump back to BTC when the difficulty adjustment occurs, initiating a 
>>> potentially never-ending oscillation between the two coins, potentially 
>>> worse than what BCH is experi

Re: [bitcoin-dev] New difficulty algorithm needed for SegWit2x fork? (reformatted text)

2017-10-09 Thread Mark Friedenbach via bitcoin-dev
The problem of fast acting but non vulnerable difficulty adjustment algorithms 
is interesting. I would certainly like to see this space further explored, and 
even have some ideas myself.

However without commenting on the technical merits of this specific proposal, I 
think it must be said upfront that the stated goal is not good. The largest 
technical concern (ignoring governance) over B2X is that it is a rushed, poorly 
reviewed hard fork. Hard forks should not be rushed, and they should receive 
more than the usual level of expert and community review.

In that light, doing an even more rushed hard fork on an even newer idea with 
even less review would be hypocritical at best. I would suggest reframing as a 
hardfork wishlist research problem for the next properly planned hard fork, if 
one occurs. You might also find the hardfork research group a more 
accommodating venue for this discussion:

https://bitcoinhardforkresearch.github.io/

> On Oct 9, 2017, at 3:57 PM, Scott Roberts via bitcoin-dev 
>  wrote:
> 
> Sorry, my previous email did not have the plain text I intended.
> 
> Background: 
> 
> The bitcoin difficulty algorithm does not seem to be a good one. If there 
> is a fork due to miners seeking maximum profit without due regard to 
> security, users, and nodes, the "better" coin could end up being the 
> minority chain. If 90% of hashrate is really going to at least initially go 
> towards using SegWit2x, BTC would face 10x delays in confirmations 
> until the next difficulty adjustment, negatively affecting its price relative 
> to BTC1, causing further delays from even more miner abandonment 
> (until the next adjustment). The 10% miners remaining on BTC do not 
> inevitably lose by staying to endure 10x delays because they have 10x 
> less competition, and the same situation applies to BTC1 miners. If the 
> prices are the same and stable, all seems well for everyone, other things 
> aside. But if the BTC price does not fall to reflect the decreased hashrate, 
> the situation seems to be a big problem for both coins: BTC1 miners will 
> jump back to BTC when the difficulty adjustment occurs, initiating a 
> potentially never-ending oscillation between the two coins, potentially 
> worse than what BCH is experiencing.  They will not issue coins too fast 
> like BCH because that is a side effect of the asymmetry in BCH's rise and 
> fall algorithm. 
> 
> Solution: 
> 
> Hard fork to implement a new difficulty algorithm that uses a simple rolling 
> average with a much smaller window.  Many small coins have done this as 
> a way to stop big miners from coming on and then suddenly leaving, leaving 
> constant miners stuck with a high difficulty for the rest of a (long) 
> averaging 
> window.  Even better, adjust the reward based on recent solvetimes to 
> motivate more mining (or less) if the solvetimes are too slow (or too fast). 
> This will keep keep coin issuance rate perfectly on schedule with real time. 
> 
> I recommend the following for Bitcoin, as fast, simple, and better than any 
> other difficulty algorithm I'm aware of.  This is the result of a lot of work 
> the 
> past year. 
> 
> === Begin difficulty algorithm === 
> # Zawy v6 difficulty algorithm (modified for bitcoin) 
> # Unmodified Zawy v6 for alt coins: 
> # http://zawy1.blogspot.com/2017/07/best-difficulty-algorithm-zawy-v1b.html 
> # All my failed attempts at something better: 
> # 
> https://github.com/seredat/karbowanec/commit/231db5270acb2e673a641a1800be910ce345668a
>  
> # 
> # Keep negative solvetimes to correct bad timestamps. 
> # Do not be tempted to use: 
> # next_D = sum(last N Ds) * T / [max(last N TSs) - min(last N TSs)]; 
> # ST= Solvetime, TS = timestamp 
> 
> # set constants until next hard fork: 
> 
> T=600; # coin's TargetSolvetime 
> N=30; # Averaging window. Smoother than N=15, faster response than N=60. 
> X=5; 
> limit = X^(2/N); # limit rise and fall in case of timestamp manipulation 
> adjust = 1/(1+0.67/N);  # keeps avg solvetime on track 
> 
> # begin difficulty algorithm 
> 
> avg_ST=0; avg_D=0; 
> for ( i=height;  i > height-N;  i--) {  # go through N most recent blocks 
> avg_ST += (TS[i] - TS[i-1]) / N; 
> avg_D += D[i]/N; 
> } 
> avg_ST = T*limit if avg_ST > T*limit; 
> avg_ST = T/limit if avg_ST < T/limit; 
> 
> next_D = avg_D * T / avg_ST * adjust; 
> 
> # Tim Olsen suggested changing reward to protect against hash attacks. 
> # Karbowanek coin suggested something similar. 
> # I could not find anything better than the simplest idea below. 
> # It was a great surprise that coin issuance rate came out perfect. 
> # BaseReward = coins per block 
> 
> next_reward = BaseReward * avg_ST / T; 
> 
> === end algo  
> 
> Due to the limit and keeping negative solvetimes in a true average, 
> timestamp errors resulting in negative solvetimes are corrected in the next 
> block. Otherwise, one would need to do like Zcash and cause a 5-block 
> delay 

Re: [bitcoin-dev] Version 1 witness programs (first draft)

2017-10-05 Thread Mark Friedenbach via bitcoin-dev
Here’s an additional (uncontroversial?) idea due to Russell O’Connor:

Instead of requiring that the last item popped off the stack in a CHECKMULTISIG 
be zero, have it instead be required that it is a bitfield specifying which 
pubkeys are used, or more likely the complement thereof. This allows signatures 
to be matched to pubkeys in the order given, and batch validated, with no risk 
of 3rd party malleability.

Mark
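A rough sketch of the matching loop this enables (illustrative types and 
names, not Bitcoin Core code, and C++20 for std::popcount): with the bitfield 
hint, every (signature, pubkey) pair is known up front and can be queued for 
batch validation instead of discovered by trial and error.

    #include <bit>     // std::popcount (C++20)
    #include <cstdint>
    #include <utility>
    #include <vector>
    struct Pubkey { /* 33-byte key, omitted */ };
    struct Sig    { /* signature bytes, omitted */ };
    struct BatchVerifier { // stand-in for a batch validation queue
        std::vector<std::pair<Sig, Pubkey>> pending;
        void Add(const Sig& s, const Pubkey& k) { pending.push_back({s, k}); }
    };
    // Pair each signature with the pubkey its bitfield bit selects.
    bool MatchMultisig(const std::vector<Pubkey>& keys,
                       const std::vector<Sig>& sigs,
                       uint32_t bitfield, BatchVerifier& batch)
    {
        if (std::popcount(bitfield) != static_cast<int>(sigs.size()))
            return false; // exactly k bits must be set for k signatures
        size_t isig = 0;
        for (size_t ikey = 0; ikey < keys.size() && ikey < 32; ++ikey) {
            if (!(bitfield & (1u << ikey))) continue; // unused key: skip it
            batch.Add(sigs[isig++], keys[ikey]);      // queue expected-valid pair
        }
        return isig == sigs.size(); // every set bit fell within the key list
    }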

> On Sep 30, 2017, at 6:13 PM, Luke Dashjr via bitcoin-dev 
>  wrote:
> 
> I've put together a first draft for what I hope to be a good next step for 
> Segwit and Bitcoin scripting:
>https://github.com/luke-jr/bips/blob/witnessv1/bip-witnessv1.mediawiki
> 
> This introduces 5 key changes:
> 
> 1. Minor versions for witnesses, inside the witness itself. Essentially the 
> witness [major] version 1 simply indicates the witness commitment is SHA256d, 
> and nothing more.
> 
> The remaining four are witness version 1.0 (major 1, minor 0):
> 
> 2. As previously discussed, undefined opcodes immediately cause the script to 
> exit with success, making future opcode softforks a lot more flexible.
> 
> 3. If the final stack element is not exactly true or false, it is interpreted 
> as a tail-call Script and executed. (Credit to Mark Friedenbach)
> 
> 4. A new shorter fixed-length signature format, eliminating the need to guess 
> the signature size in advance. All signatures are 65 bytes, unless a 
> condition script is included (see #5).
> 
> 5. The ability for signatures to commit to additional conditions, expressed 
> in the form of a serialized Script in the signature itself. This would be 
> useful in combination with OP_CHECKBLOCKATHEIGHT (BIP 115), hopefully ending 
> the whole replay protection argument by introducing it early to Bitcoin 
> before any further splits.
> 
> This last part is a bit ugly right now: the signature must commit to the 
> script interpreter flags and internal "sigversion", which basically serve the 
> same purpose. The reason for this, is that otherwise someone could move the 
> signature to a different context in an attempt to exploit differences in the 
> various Script interpretation modes. I don't consider the BIP deployable 
> without this getting resolved, but I'm not sure what the best approach would 
> be. Maybe it should be replaced with a witness [major] version and witness 
> stack?
> 
> There is also draft code implementing [the consensus side of] this:
>https://github.com/bitcoin/bitcoin/compare/master...luke-jr:witnessv1
> 
> Thoughts? Anything I've overlooked / left missing that would be 
> uncontroversial and desirable? (Is any of this unexpectedly controversial for 
> some reason?)
> 
> Luke


Re: [bitcoin-dev] Version 1 witness programs (first draft)

2017-10-01 Thread Mark Friedenbach via bitcoin-dev

> On Oct 1, 2017, at 2:32 PM, Johnson Lau  wrote:
> 
> So there are 3 proposals with a similar goal but different designs. I try to 
> summarise some questions below:
> 
> 1. How do we allow further upgrade within v1 witness? Here are some options:
> a. Minor version in witness. (Johnson / Luke) I prefer this way, but we may 
> end up with many minor versions.

I'm not sure I agree with the "minor version" nomenclature, or that we would 
necessarily end up with any consensus-visible fields beyond 2.  There are two 
separate soft-fork version fields that were, I think it is fair to say now, 
inappropriately merged in the "script version” feature of segregated witness as 
described in BIP141.

First there is the witness type, which combined with the length of the 
commitment that follows specifies how data from the witness stack is used to 
calculate/verify the witness commitment in the scriptPubKey of the output being 
spent.  For v0 with a 20-byte hash, it says that those 20 bytes are the HASH160 
of the top element of the stack.  For v0 with a 32-byte hash, it says that 
those 32 bytes are the HASH256 of the top element of the stack.

Second there is the script version, which is not present as a separate field 
for witness type v0.  Implicitly though, the script version for v0,20-byte is 
that the witness consists of two elements, and these are interpreted as a 
pubkey and a signature.  For v0,32-byte the script version is that the witness 
consists of 1 or more elements; with max 520 byte size constraints for all but 
the top element, which has a higher limit of 10,000 bytes; and the top-most 
element is interpreted as a script and executed with the modified CHECKSIG 
behavior defined by BIP141 and the CLEANSTACK rule enforced.

These are separate roles, one not being derivative of the other.  In an ideal 
world the witness type (of which there are only 16 remaining without obsoleting 
BIP141) is used only to specify a new function for transforming the witness 
stack into a commitment for verification purposes.  Merklized script would be 
one example: v2,32-byte could be defined to require a witness stack of at least 
two elements, the top-most of which is a Merkle inclusion proof of the second 
item in a tree whose root is given in the 32-byte payload of the output.  Maybe 
v3 would prove inclusion by means of some sort of RSA accumulator or something.

Such a specification says nothing about the features of the subscript drawn 
from the Merkle tree, or even whether it is bitcoin script at all vs something 
else (Simplicity, DEX, RISC-V, Joy, whatever).  All that is necessary is that a 
convention be adopted about where to find the script version from whatever data 
is left on the stack after doing the witness type check (hashing the script, 
calculating a Merkle root, checking inclusion in an RSA accumulator, whatever). 
 A simple rule is that it is serialized and prefixed to the beginning of the 
string that was checked against the commitment in the output.

So v0,32-byte says that the top item is hashed and that hash must match the 
32-byte value in the output.  This new v1,32-byte witness type being talked 
about in this thread would have exactly the same hashing rules, but will 
execute the resulting string based on its prefix, the script version, which is 
first removed before execution.
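In rough pseudo-C++ (an illustration of the convention, with sha256(), 
ExecuteScript(), and a one-byte version prefix all assumed for simplicity):

    #include <vector>
    using Bytes = std::vector<unsigned char>;
    Bytes sha256(const Bytes&);                         // assumed hash helper
    bool ExecuteScript(unsigned version, const Bytes&); // assumed dispatcher
    // Witness-type check first, then split off the script version prefix
    // and dispatch execution on it.
    bool VerifyV1_32(const Bytes& program,   // 32-byte commitment in the output
                     const Bytes& committed) // string recovered from the witness
    {
        if (sha256(committed) != program) return false; // commitment mismatch
        if (committed.empty()) return false;
        unsigned version = committed[0];                // script version prefix
        Bytes script(committed.begin() + 1, committed.end());
        return ExecuteScript(version, script);          // semantics gated on version
    }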

Sure, the first script version used could be a cleaned-up script with a bunch of the 
weirdness removed (CHECKMULTISIG, I'm looking at you!); CLTV, CSV, and MBV drop 
arguments; disabled opcodes and unassigned NOPs become "return true"; etc.  
Maybe v2 adds new opcodes.  But we can imagine script version that do something 
totally different, like introduce a new script based on a strongly-typed 
Merklized lambda calculus, or a RISC-V executable format, or whatever.

This has pragmatic implications with the separation of witness type and script 
version: we could then define a "MAST" output that proves the script used is 
drawn from a set represented by the Merkle tree.  However different scripts in 
that tree can use different versions.  It would be useful if the most common 
script is the key aggregated everyone-signs outcome, which looks like a regular 
bitcoin payment, and then contingency cases can be handled by means of a 
complicated script written in some newly added general computation language or 
a whole emulated RISC-V virtual machine.

> b. OP_RETURNTRUE (Luke). I proposed this in an earlier version of BIP114 but 
> now I think it doesn’t interact well with signature aggregation, and I worry 
> that it would have some other unexpected effects.
> c. Generalised NOP method: user has to provide the returned value, so even 
> VERIFY-type code could do anything

I see no reason to do either. Gate new behavior based on script execution 
flags, which are set based on the script version.  Script versions not 
understood are treated as "return true" to begin with.  The interpreter isn't 
even going 

Re: [bitcoin-dev] Version 1 witness programs (first draft)

2017-10-01 Thread Mark Friedenbach via bitcoin-dev

> On Oct 1, 2017, at 12:41 PM, Russell O'Connor  wrote:
> 
> Creating a Bitcoin script that does not allow malleability is difficult and 
> requires wasting a lot of bytes to do so, typically when handling issues 
> around non-0-or-1 witness values being used with OP_IF, and dealing with 
> non-standard-zero values, etc.

Script validation flags are the correct place to do this. We already have policy 
validation flags that check for these things. They were not made consensus 
rules with Segwit v0 mainly due to concern over scope creep in an already large 
overhaul, if my memory is correct. Script versions and quadratic hashing fixes 
were the minimum necessary to allow segwit to activate safely while still 
enabling future upgrades that would otherwise have been hard forks. We knew 
that we would later be changing the EC signature scheme to something that 
supported signature aggregation, and that would be a more appropriate time to 
discuss such changes. As we are considering doing now (although witness 
versions mean we don’t need to omnibus the script upgrade here either, so a v1 
before signature aggregation is ready is fine IMHO).

In any case, if there is any general witness malleability due to opcode 
semantics that is not fixed by one of our existing policy flags, that is a 
bug and I would encourage you to report it.
> I'll argue that I don't want my counter-party going off and using a very 
> deeply nested key in order to subvert the fee rate we've agreed upon after 
> I've signed my part of the input.  If we are doing multi-party signing of 
> inputs we need to communicate anyways to construct the transaction.  I see no 
> problem with requiring my counter-party to choose their keys before I sign so 
> that I know up front what our fee rate is going to be.  If they lose their 
> keys and need a backup, they should have to come back to me to resign in 
> order that we can negotiate a new fee rate for the transaction and who is 
> going to be covering how much of the fee and on which inputs.

You are arguing that every single user should be forced to restart an 
interactive signing session. That’s a very strong statement based on what I 
would say is a preference that depends on circumstances.

What about an optional commitment to witness size in bytes, with the value zero 
meaning “I don’t care”? I would argue that it should be a maximum however, and 
therefore serialized as part of the witness. The serialization of this would be 
very compact (1 plus the difference between the maximum and the actual size, 
with zero meaning not used).
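For concreteness, with illustrative numbers: committing to a maximum witness 
size of 400 bytes for an input whose actual witness is 380 bytes would 
serialize the field as 1 + (400 - 380) = 21, while an input making no 
commitment serializes it as 0.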


Re: [bitcoin-dev] Version 1 witness programs (first draft)

2017-10-01 Thread Mark Friedenbach via bitcoin-dev
> On Oct 1, 2017, at 12:05 PM, Russell O'Connor  wrote:
> 
> Given the proposed fixed signature size, It seems better to me that we create 
> a SIGHASH_WITNESS_WEIGHT flag as opposed to SIGHASH_WITNESS_DEPTH.

For what benefit? If your script actually uses all the items on the stack, and 
if your script is not written in such a way as to allow malleability (which 
cannot be prevented in general), then they’re equivalent. Using weight instead 
of depth only needlessly forces other parties to select a witness size 
up-front.

And to be clear, signing witness weight doesn’t mean the witness is not 
malleable. The signer could sign again with a different ECDSA nonce. Or if the 
signer is signing from a 2-of-3 wallet, a common scenario I hope, there are 3 
possible key combinations that could be used. If using MBV, a 3-element tree is 
inherently unbalanced and the common use case can have a smaller proof size.

Witnesses are not 3rd party malleable and we will maintain that property going 
forward with future opcodes.

> Mark, you seem to be arguing that in general we still want weight 
> malleability even with witness depth fixed, but I don't understand in what 
> scenario we would want that.

Any time all parties are not online at the same time in an interactive signing 
protocol, or any time individual parties have to reconfigure their signing 
choices due to failures. We should not restrict our script signature system to 
such a degree that it becomes difficult to create realistic signing setups for 
people using best practices (multi-key, 2FA, etc.) to sign. If I am a 
participant in a signing protocol, it would be layer violating to treat me as 
anything other than a black box, such that internal errors and timeouts in my 
signing setup don’t propagate upwards to the multi-party protocol.

For example, I should be able to try to sign with my 2FA key, and if that 
fails, go fetch my backup key and sign with that. But because it’s my 
infrequently used backup key, it might be placed deeper in the key tree, and 
therefore signatures using it are larger. All the other signers need to care 
about is that slot #3 in the witness is where my Merkle proof goes. They 
shouldn’t have to restart and re-sign because my proof came out a little 
larger than anticipated; and maybe they can’t re-sign, because of double-spend 
protections!



Re: [bitcoin-dev] Version 1 witness programs (first draft)

2017-10-01 Thread Mark Friedenbach via bitcoin-dev
I would also suggest that the 520-byte push limitation be removed for v1 
scripts as well. MERKLEBRANCHVERIFY in particular could benefit from larger 
proof sizes. To do so safely would require reworking script internals to use 
indirect pointers and reference counting for items on the stack, but this is 
worth doing generally. It would also require introducing a per-input hashing 
limit equal to a small multiple of the witness size (or retaining the opcount 
limit).

> On Sep 30, 2017, at 6:13 PM, Luke Dashjr via bitcoin-dev 
>  wrote:
> 
> I've put together a first draft for what I hope to be a good next step for 
> Segwit and Bitcoin scripting:
>https://github.com/luke-jr/bips/blob/witnessv1/bip-witnessv1.mediawiki
> 
> This introduces 5 key changes:
> 
> 1. Minor versions for witnesses, inside the witness itself. Essentially the 
> witness [major] version 1 simply indicates the witness commitment is SHA256d, 
> and nothing more.
> 
> The remaining two are witness version 1.0 (major 1, minor 0):
> 
> 2. As previously discussed, undefined opcodes immediately cause the script to 
> exit with success, making future opcode softforks a lot more flexible.
> 
> 3. If the final stack element is not exactly true or false, it is interpreted 
> as a tail-call Script and executed. (Credit to Mark Friedenbach)
> 
> 4. A new shorter fixed-length signature format, eliminating the need to guess 
> the signature size in advance. All signatures are 65 bytes, unless a condition 
> script is included (see #5).
> 
> 5. The ability for signatures to commit to additional conditions, expressed 
> in the form of a serialized Script in the signature itself. This would be 
> useful in combination with OP_CHECKBLOCKATHEIGHT (BIP 115), hopefully ending 
> the whole replay protection argument by introducing it early to Bitcoin 
> before any further splits.
> 
> This last part is a bit ugly right now: the signature must commit to the 
> script interpreter flags and internal "sigversion", which basically serve the 
> same purpose. The reason for this, is that otherwise someone could move the 
> signature to a different context in an attempt to exploit differences in the 
> various Script interpretation modes. I don't consider the BIP deployable 
> without this getting resolved, but I'm not sure what the best approach would 
> be. Maybe it should be replaced with a witness [major] version and witness 
> stack?
> 
> There is also draft code implementing [the consensus side of] this:
>https://github.com/bitcoin/bitcoin/compare/master...luke-jr:witnessv1
> 
> Thoughts? Anything I've overlooked / left missing that would be 
> uncontroversial and desirable? (Is any of this unexpectedly controversial for 
> some reason?)
> 
> Luke


Re: [bitcoin-dev] Version 1 witness programs (first draft)

2017-09-30 Thread Mark Friedenbach via bitcoin-dev
Clean stack should be eliminated to preserve other possible future uses, the 
most obvious of which is recursive tail-calls for general computation 
capability. I’m not arguing for that at this time, just arguing that we 
shouldn’t prematurely cut off an easy implementation of such, should we want 
it. Clean stack must still exist as policy for future soft-fork safety, but 
its being a consensus requirement was only to avoid witness malleability, 
which committing to the size of the witness also accomplishes.

Committing to the number of witness elements is fully sufficient, and using the 
number of elements avoids problems of not knowing the actual size in bytes at 
the time of signing, e.g. because the witness contains a merkle proof generated 
by another party from an unbalanced tree, and unbalanced trees are expected to 
be common (so that elements can be placed higher in the tree in accordance with 
their higher expected probability of usage). Other future extensions might also 
have variable-length proofs.

> On Sep 30, 2017, at 7:47 PM, Luke Dashjr  wrote:
> 
> Should it perhaps commit to the length of the serialised witness data instead 
> or additionally? Now that signatures are no longer variable-length, that'd be 
> possible...
> 
> As far as tail-call needs are concerned, CLEANSTACK wouldn't have been checked 
> until AFTER the tail-call in the first draft. But I suppose eliminating it for 
> other possible future purposes is still useful.
> 
> Luke



Re: [bitcoin-dev] Version 1 witness programs (first draft)

2017-09-30 Thread Mark Friedenbach via bitcoin-dev
The CLEANSTACK rule should be eliminated, and instead the number of items on 
the stack should be incorporated into the signature hash. That way any script 
with a CHECKSIG is protected from witness extension malleability, and those 
rare ones that do not use signature operations can have a “DEPTH 1 EQUALVERIFY” 
at the end. This allows for much simpler tail-call evaluation as you don’t need 
to pass arguments on the alt-stack.
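
As a sketch of what folding the stack item count into the signature hash could 
look like (the field layout here is invented for illustration, not a concrete 
serialization proposal):

    import hashlib

    def sighash_with_depth(preimage: bytes, witness_stack: list[bytes]) -> bytes:
        # Append the number of witness stack items to the data that
        # CHECKSIG signs, so a third party adding or removing items
        # invalidates every signature in the script.
        depth = len(witness_stack).to_bytes(2, "little")
        return hashlib.sha256(hashlib.sha256(preimage + depth).digest()).digest()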

> On Sep 30, 2017, at 6:13 PM, Luke Dashjr via bitcoin-dev 
>  wrote:
> 
> I've put together a first draft for what I hope to be a good next step for 
> Segwit and Bitcoin scripting:
>https://github.com/luke-jr/bips/blob/witnessv1/bip-witnessv1.mediawiki
> 
> This introduces 5 key changes:
> 
> 1. Minor versions for witnesses, inside the witness itself. Essentially the 
> witness [major] version 1 simply indicates the witness commitment is SHA256d, 
> and nothing more.
> 
> The remaining two are witness version 1.0 (major 1, minor 0):
> 
> 2. As previously discussed, undefined opcodes immediately cause the script to 
> exit with success, making future opcode softforks a lot more flexible.
> 
> 3. If the final stack element is not exactly true or false, it is interpreted 
> as a tail-call Script and executed. (Credit to Mark Friedenbach)
> 
> 4. A new shorter fixed-length signature format, eliminating the need to guess 
> the signature size in advance. All signatures are 65 bytes, unless a condition 
> script is included (see #5).
> 
> 5. The ability for signatures to commit to additional conditions, expressed 
> in the form of a serialized Script in the signature itself. This would be 
> useful in combination with OP_CHECKBLOCKATHEIGHT (BIP 115), hopefully ending 
> the whole replay protection argument by introducing it early to Bitcoin 
> before any further splits.
> 
> This last part is a bit ugly right now: the signature must commit to the 
> script interpreter flags and internal "sigversion", which basically serve the 
> same purpose. The reason for this, is that otherwise someone could move the 
> signature to a different context in an attempt to exploit differences in the 
> various Script interpretation modes. I don't consider the BIP deployable 
> without this getting resolved, but I'm not sure what the best approach would 
> be. Maybe it should be replaced with a witness [major] version and witness 
> stack?
> 
> There is also draft code implementing [the consensus side of] this:
>https://github.com/bitcoin/bitcoin/compare/master...luke-jr:witnessv1
> 
> Thoughts? Anything I've overlooked / left missing that would be 
> uncontroversial and desirable? (Is any of this unexpectedly controversial for 
> some reason?)
> 
> Luke


Re: [bitcoin-dev] Rebatable fees & incentive-safe fee markets

2017-09-29 Thread Mark Friedenbach via bitcoin-dev
This is correct. Under the assumptions of a continuous mempool model, however, 
this should be considered outlier behavior, aside from a little bit of empty 
space at the end of a block now and then. A maximum fee rate, calculated as a 
filter over past block rates, could prevent this outlier behavior from ever 
happening.
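
One possible such filter, sketched only from the sentence above (both 
parameters are invented for illustration):

    from statistics import median

    def fee_rate_cap(past_block_min_rates: list[float], window: int = 144,
                     headroom: float = 10.0) -> float:
        # Cap a block's clearing fee rate at a multiple of the median
        # minimum fee rate observed over a recent window of blocks.
        recent = past_block_min_rates[-window:]
        return headroom * median(recent)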

> On Sep 29, 2017, at 3:43 AM, Daniele Pinna  wrote:
> 
> Maybe I'm getting this wrong but wouldn't this scheme imply that a miner is 
> incentivized to limit the amount of transactions in a block to capture the 
> maximum fee of the ones included?
> 
> As an example, mined blocks currently carry ~0.8 btc in fees right now. If I 
> were to submit a transaction paying 1 btc in maximal money fees, then the 
> miner would be incentivized to include my transaction alone to avoid that 
> lower fee paying transactions reduce the amount of fees he can earn from my 
> transaction alone. This would mean that I could literally clog the network by 
> paying 1btc every ten minutes.
> 
> Am I missing something?
> 
> Daniele 


Re: [bitcoin-dev] Rebatable fees & incentive-safe fee markets

2017-09-28 Thread Mark Friedenbach via bitcoin-dev
Only if your keys are online and the transaction is self-signed. It wouldn’t 
let you pre-sign a transaction for a third party to broadcast and have it clear 
at just the market rate in the future. Like a payment channel refund, for 
example.

> On Sep 28, 2017, at 7:17 PM, Nathan Wilcox via bitcoin-dev 
>  wrote:
> 
> 
> 
>> On Fri, Sep 29, 2017 at 11:10 AM, Peter Todd via bitcoin-dev 
>>  wrote:
>> On Fri, Sep 29, 2017 at 01:53:55AM +, Matt Corallo via bitcoin-dev wrote:
>> > I'm somewhat curious what the authors envisioned the real-world 
>> > implications of this model to be. While blindly asking users to enter what 
>> > they're willing to pay always works in theory, I'd imagine in such a world 
>> > the fee selection UX would be similar to what it is today - users are 
>> > provided a list of options with feerates and expected confirmation times 
>> > from which to select. Indeed, in a world where users pay a lower fee if 
>> > they paid more than necessary fee estimation could be more willing to 
>> > overshoot and the UX around RBF and CPFP could be simplified greatly, but 
>> > I'm not actually convinced that it would result in higher overall mining 
>> > revenue.
>> 
>> Note too that the fee users are willing to pay often changes over time.
>> 
>> My OpenTimestamps service is a perfect example: getting a timestamp confirmed
>> within 10 minutes of the previous one has little value to me, but if the
>> previous completed timestamp was 24 hours ago I'm willing to pay significantly
>> more money because the time delay is getting significant enough to affect the
>> trustworthiness of the entire service. So the fee selection mechanism is
>> nothing more than a RBF-using loop that bumps the fee every time a block gets
>> mined w/o confirming my latest transaction.
>> 
>> This kind of time sensitivity is probably true of a majority of Bitcoin
>> use-cases, with the caveat that often the slope will be negative eventually:
>> after a point in time completing the transaction has no value.
>> 
> 
> Wouldn't this RBF loop behave pretty much the same in the Monopolistic Price 
> Mechanism? (I haven't grokked RSOP yet.)
> 
> In fact, so long as RBF works, isn't it possible to raise Pay-Your-Bid fees 
> and Monopolistic Price fees over time to express the time curve preference?
> 
>  
>> --
>> https://petertodd.org 'peter'[:-1]@petertodd.org
>> 


Re: [bitcoin-dev] Rebatable fees & incentive-safe fee markets

2017-09-28 Thread Mark Friedenbach via bitcoin-dev


> On Sep 28, 2017, at 7:02 PM, Peter Todd <p...@petertodd.org> wrote:
> 
>> On Thu, Sep 28, 2017 at 06:06:29PM -0700, Mark Friedenbach via bitcoin-dev 
>> wrote:
>> Unlike other proposed fixes to the fee model, this is not trivially
>> broken by paying the miner out of band.  If you pay out of band fee
>> instead of regular fee, then your transaction cannot be included with
>> other regular fee paying transactions without the miner giving up all
>> regular fee income.  Any transaction paying less fee in-band than the
>> otherwise minimum fee rate needs to also provide ~1Mvbyte * fee rate
>> difference fee to make up for that lost income.  So out of band fee is
>> only realistically considered when it pays on top of a regular feerate
>> paying transaction that would have been included in the block anyway.
>> And what would be the point of that?
> 
> This proposed fix is itself broken, because the miner can easily include *only*
> transactions paying out-of-band, at which point the fee can be anything.

And in doing so they either reduce the claimable income from other transactions 
(the miner won’t do that), or require paying more non-rebateable fee than is 
needed to get into the block (why would the user do that?).

This is specifically addressed in the text you quoted. 

> Equally, miners can provide fee *rebates*, forcing up prices for everyone else
> while still allowing them to make deals.

Discounted by the fact that rebates would not be honored by other miners. The 
rebate would have to be higher than what they could get from straight fee 
collection, making it less profitable than doing nothing.

> Also, remember that you can pay fees via anyone-can-spend outputs, as miners
> have full ability to control what transactions end up spending those outputs.

You’d still have to pay the minimum fee rate of the other transactions or 
you’d bring down the miner’s income. Otherwise this is nearly the same cost as 
the rebateable fee, since both involve explicit outputs claimed by the miner, 
except that the rebate goes back to you. So why would you not want to do that 
instead?

A different way of looking at this proposal is that it creates a penalty for 
out-of-band payments.


[bitcoin-dev] Rebatable fees & incentive-safe fee markets

2017-09-28 Thread Mark Friedenbach via bitcoin-dev
This article by Ron Lavi, Or Sattath, and Aviv Zohar was forwarded to
me and is of interest to this group:

"Redesigning Bitcoin's fee market"
https://arxiv.org/abs/1709.08881

I'll briefly summarize before providing some commentary of my own,
including a transformation of the proposed mechanism into a relatively
simple soft-fork.  The article points out that bitcoin's auction
model for transaction fees / inclusion in a block is broken in the
sense that it neither achieves the maximum clearing price* nor
prevents strategic bidding behavior.

(* Maximum clearing price meaning highest fee the user is willing to
   pay for the amount of time they had to wait.  In other words, miner
   income.  While this is a common requirement of academic work on
   auction protocols, it's not obvious that it provides intrinsic
   benefit to bitcoin for miners to extract from users the maximum
   amount of fee the market is willing to support.  However strategic
   bidding behavior (e.g. RBF and CPFP) does have real network and
   usability costs, which a more "correct" auction model would reduce
   in some use cases.)

Bitcoin is a "pay your bid" auction, where the user makes strategic
calculations to determine what bid (=fee) is likely to get accepted
within the window of time in which they want confirmation.  This bid
can then be adjusted through some combination of RBF or CPFP.

The authors suggest moving to a "pay lowest winning bid" model where
all transactions pay only the smallest fee rate paid by any
transaction in the block, for which the winning strategy is to bid the
maximum amount you are willing to pay to get the transaction
confirmed:

> Users can then simply set their bids truthfully to exactly the
> amount they are willing to pay to transact, and do not need to
> utilize fee estimate mechanisms, do not resort to bid shading and do
> not need to adjust transaction fees (via replace-by-fee mechanisms)
> if the mempool grows.
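
As an illustration of the mechanism as I read it (my own sketch, not code from 
the paper, and it ignores knapsack effects from the weight limit):

    def monopolistic_block(txs: list[tuple[float, int]],
                           max_weight: int = 4_000_000):
        """Sort bids by fee rate and pick the prefix that maximizes
        revenue when every included transaction pays only the lowest
        included rate. txs is a list of (bid_fee_rate, weight) pairs."""
        txs = sorted(txs, key=lambda t: t[0], reverse=True)
        best_k, best_revenue, cum_weight = 0, 0.0, 0
        for k, (rate, weight) in enumerate(txs, start=1):
            cum_weight += weight
            if cum_weight > max_weight:
                break
            revenue = rate * cum_weight  # all pay the k-th (lowest) rate
            if revenue > best_revenue:
                best_k, best_revenue = k, revenue
        return txs[:best_k], best_revenue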


Unlike other proposed fixes to the fee model, this is not trivially
broken by paying the miner out of band.  If you pay out of band fee
instead of regular fee, then your transaction cannot be included with
other regular fee paying transactions without the miner giving up all
regular fee income.  Any transaction paying less fee in-band than the
otherwise minimum fee rate needs to also provide ~1Mvbyte * fee rate
difference fee to make up for that lost income.  So out of band fee is
only realistically considered when it pays on top of a regular feerate
paying transaction that would have been included in the block anyway.
And what would be the point of that?


As an original contribution, I would like to note that something
strongly resembling this proposal could be soft-forked in very easily.
The shortest explanation is:

For scriptPubKey outputs of the form "<push>", where
the pushed data evaluates as true, a consensus rule is added that
the coinbase must pay any fee in excess of the minimum fee rate
for the block to the push value, which is a scriptPubKey.
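
A sketch of recognizing such an output (the 42-byte bound comes from the 
worked proposal below; only the plain 0x01..0x2a push opcodes are handled, for 
brevity):

    def is_rebateable_fee_output(script_pubkey: bytes) -> bool:
        if len(script_pubkey) < 2:
            return False
        n = script_pubkey[0]
        if not (1 <= n <= 42):
            return False               # not a small direct push
        if len(script_pubkey) != 1 + n:
            return False               # the push must be the whole script
        return any(script_pubkey[1:])  # pushed data must evaluate as true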

Beyond fixing the perceived problems of bitcoin's fee auction model
leading to costly strategic behavior (whether that is a problem is a
topic open to debate!), this would have the additional benefits of:

1. Allowing pre-signed transactions, of payment channel close-out
   for example, to provide sufficient fee for confirmation without
   knowledge of future rates or overpaying or trusting a wallet to
   be online to provide CPFP fee updates.

2. Allowing explicit fees in multi-party transaction creation
   protocols where final transaction sizes are not known prior to
   signing by one or more of the parties, while again not
   overpaying or trusting on CPFP, etc.

3. Allowing applications with expensive network access to pay
   reasonable fees for quick confirmation, without overpaying or
   querying a trusted fee estimator.  Blockstream Satellite helps
   here, but rebateable fees provides an alternative option when
   full block feeds are not available for whatever reason.

Using a fee rebate would carry a marginal cost of 70-100 vbytes per
instance.  This makes it a rather expensive feature, and therefore in
my own estimation not something that is likely to be used by most
transactions today.  However the cost is less than CPFP, and so I
expect that it would be a heavily used feature in things like payment
channel refund and uncooperative close-out transactions.


Here is a more worked out proposal, suitable for critiquing:

1. A transaction is allowed to specify an _Implicit Fee_, as usual, as
   well as one or more explicit _Rebateable Fees_.  A rebateable fee
   is an output with a scriptPubKey that consists of a single, minimal,
   nonzero push of up to 42 bytes.  Note that this is an always-true
   script that requires no signature to spend.

2. The _Fee Rate_ of a transaction is a fractional number equal to the
   combined implicit and rebateable fee divided 

Re: [bitcoin-dev] Address expiration times should be added to BIP-173

2017-09-28 Thread Mark Friedenbach via bitcoin-dev
First, there has been no suggestion so far that address expiration be part of 
“the protocol”, which usually means consensus rules or p2p. This is purely 
about wallets and wallet information-exchange protocols.

There’s no way for the sender to know whether an address has been used without 
a complete copy of the block chain and more indexes than even Bitcoin Core 
maintains. It’s simply not an option now, let alone as the blockchain grows 
into the future.

> On Sep 27, 2017, at 1:23 PM, Nick Pudar via bitcoin-dev 
>  wrote:
> 
> As a long term silent reader of this list, I felt compelled to comment on 
> this address expiration topic.  I don't believe that address expiration 
> should be part of the protocol.  I think instead that the "sending" feature 
> should by default offer guidance to request a fresh address from the 
> recipient.  Also allow the receiver of funds to be able to generate an 
> "invoice" that the sender acts on.
> 
> I also think that re-directs are fraught with privacy issues.  At the end of 
> the day, the ultimate burden is on the sender (with much self interest from 
> the receiver) that the correct address is being used.



Re: [bitcoin-dev] Address expiration times should be added to BIP-173

2017-09-27 Thread Mark Friedenbach via bitcoin-dev
While there is a lot that I would like to comment on, for the moment I will 
just mention that you should consider using the 16-bit relative time format 
used in CSV (BIP 68) as an offset from the birthdate of the address, a field 
all addresses should also have.

This would also mean that addresses cannot last more than a year without user 
override, which might actually be a plus, but you could also extend the field 
by a few bits if that was deemed not acceptable. An address should not be 
considered valid for longer than the anticipated lifetime of the underlying 
cryptosystem in any case, so every address should have an expiry.
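
A sketch of the arithmetic (names are mine; 512 seconds per unit matches the 
granularity of BIP 68 time-based relative locks):

    GRANULARITY = 512  # seconds per unit

    def address_expiry(birthdate: int, relative_field: int) -> int:
        # Expiry is the address birthdate plus a CSV-style relative
        # offset. A 16-bit field at 512 s per unit tops out at
        # 2**16 * 512 seconds, a little over a year.
        assert 0 <= relative_field < 2 ** 16
        return birthdate + relative_field * GRANULARITY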

> On Sep 27, 2017, at 9:06 AM, Peter Todd via bitcoin-dev 
>  wrote:
> 
> Re-use of old addresses is a major problem, not only for privacy, but also
> operationally: services like exchanges frequently have problems with users
> sending funds to addresses whose private keys have been lost or stolen; there
> are multiple examples of exchanges getting hacked, with users continuing to
> lose funds well after the actual hack has occurred due to continuing deposits.
> This also makes it difficult operationally to rotate private keys. I
> personally have even lost funds in the past due to people sending me BTC to
> addresses that I gave them long ago for different reasons, rather than asking
> me for a fresh one.
> 
> To help combat this problem, I suggest that we add a UI-level expiration time
> to the new BIP173 address format. Wallets would be expected to consider
> addresses as invalid as a destination for funds after the expiration time is
> reached.
> 
> Unfortunately, this proposal inevitably will raise a lot of UI and terminology
> questions. Notably, the entire notion of addresses is flawed from a user point
> of view: their experience with them should be more like "payment codes", with a
> code being valid for payment for a short period of time; wallets should not be
> displaying addresses as actually associated with specific funds. I suspect
> we'll see users thinking that an expired address risks the funds themselves;
> some thought needs to be put into terminology.
> 
> Being just an expiration time, seconds-level resolution is unnecessary, and
> may give the wrong impression. I'd suggest either:
> 
> 1) Hour resolution - 2^24 hours = 1914 years
> 2) Month resolution - 2^16 months = 5458 years
> 
> Both options have the advantage of working well at the UI level regardless of
> timezone: the former is sufficiently short that UI's can simply display an
> "exact" time (though note different leap second interpretations), while the
> latter is long enough that rounding off to the nearest day in the local
> timezone is fine.
> 
> Supporting hour-level (or just seconds) precision has the advantage of making
> it easy for services like exchanges to use addresses with relatively short
> validity periods, to reduce the risks of losses after a hack. Also, using at
> least hour-level ensures we don't have any year 2038 problems.
> 
> Thoughts?
> 
> -- 
> https://petertodd.org 'peter'[:-1]@petertodd.org


Re: [bitcoin-dev] cleanstack alt stack & softfork improvements (Was: Merkle branch verification & tail-call semantics for generalized MAST)

2017-09-22 Thread Mark Friedenbach via bitcoin-dev
There is no harm in the value being a maximum that is off by a few bytes.

> On Sep 22, 2017, at 2:54 PM, Sergio Demian Lerner  
> wrote:
> 
> If the variable size increase is only a few bytes, then three possibilities 
> arise:
> 
> - one should allow signatures to be zero padded (to reach the maximum size) 
> and abandon strict DER encoding
> 
> - one should allow spare witness stack elements (to pad the size to match the 
> maximum size) and remove the cleanstack rule. But this is tricky because 
> empty stack elements must be counted as 1 byte.
> 
> - signers must loop the generation of signatures until the signature 
> generated is of its maximum size.



Re: [bitcoin-dev] cleanstack alt stack & softfork improvements (Was: Merkle branch verification & tail-call semantics for generalized MAST)

2017-09-22 Thread Mark Friedenbach via bitcoin-dev
You generally know the witness size to within a few bytes right before signing. 
Why would you not? You know the size of ECDSA signatures. You can be told the 
size of a hash preimage by the other party. It takes some contriving to come up 
with a scheme where one party has variable-length signatures of their choosing.

> On Sep 22, 2017, at 2:32 PM, Sergio Demian Lerner  
> wrote:
> 
> But generally before one signs a transaction one does not know the signature 
> size (which may be variable). One can only estimate the maximum size. 



Re: [bitcoin-dev] cleanstack alt stack & softfork improvements (Was: Merkle branch verification & tail-call semantics for generalized MAST)

2017-09-22 Thread Mark Friedenbach via bitcoin-dev

> On Sep 22, 2017, at 1:32 PM, Sergio Demian Lerner  
> wrote:
> 
> 
> 
> There are other solutions to this problem that could have been taken
> instead, such as committing to the number of items or maximum size of
> the stack as part of the sighash data, but cleanstack was the approach
> taken. 
> 
> The lack of signed maximum segwit stack size was one of the objections to 
> segwit I presented last year. This together with the unlimited segwit stack 
> size.
> 
> However, committing to the maximum stack size (in bytes) for an input is 
> tricky. The only place where this could be packed is in sequence_no, with a 
> soft-fork. E.g. when transaction version is 2 and and only when lock_time is 
> zero.
> 
> For transactions with locktime >0, we could soft-fork so transactions add a 
> last zero-satoshi output whose scriptPub contains OP_RETURN and followed by N 
> VarInts, containing the maximum stack size of each input. 
> Normally, for a 400 byte, 2-input transaction, this will add 11 bytes, or a 
> 2.5% overhead.

There’s no need to put it in the transaction itself. You put it in the witness, 
and it is either committed to as part of the witness (in which case it has to 
hold for all possible spend paths), or committed to at spend time by including 
it in the data signed by CHECKSIG.



[bitcoin-dev] An explanation and justification of the tail-call and MBV approach to MAST

2017-09-20 Thread Mark Friedenbach via bitcoin-dev
Over the past few weeks I've been explaining the MERKLEBRANCHVERIFY
opcode and tail-call execution semantics to a variety of developers,
and it's come to my attention that the BIPs presentation of the
concept is not as clear as it could be. Part of this is the fault of
standards documents being standards documents whose first and foremost
responsibility is precision, not pedagogy.

I think there's a better way to explain this approach to achieving
MAST, and it's worked better in the face to face and whiteboard
conversations I've had. I'm forwarding it to this list in case there
are others who desire a more clear explanation of what the
MERKLEBRANCHVERIFY and tail-call BIPs are trying to achieve, and what
any of it has to do with MAST / Merklized script.

I've written for all audiences, so I apologize if it starts off at a
newbie level, but I encourage you to skim, not skip, as I quickly start
varying this beginner material in atypical ways.


Review of P2SH

It's easiest to explain the behavior and purpose of these BIPs by
starting with P2SH, which we are generalizing from. BIP 16 (Pay to
Script Hash) specifies a form of implicit script recursion where a
redeem script is provided in the scriptSig, and the scriptPubKey is a
program that verifies the redeem script hashes to the committed value,
with the following template:

  HASH160 <20-byte-hash> EQUAL

This script specifies that the redeem script is pulled from the stack,
its hash is compared against the expected value, and by fiat it is
declared that the redeem script is then executed with the remaining
stack items as arguments.

Sort of. What actually happens, of course, is that the above scriptPubKey
template is never executed, but rather the interpreter sees that it
matches this exact template format, and thereby proceeds to carry out
the same logic as a hard-coded behavior.


Generalizing P2SH with macro-op fusion

This template-matching is unfortunate because otherwise we could
imagine generalizing this approach to cover other use cases beyond
committing to and executing a single redeem script. For example, if we
instead said that anytime the script interpreter encountered the
3-opcode sequence "HASH160 <20-byte-push> EQUAL" it switched to
interpreting the top element as if it were a script, that would enable
not just BIP 16 but also constructs like this:

  IF
    HASH160 <20-byte-hash1> EQUAL
  ELSE
    HASH160 <20-byte-hash2> EQUAL
  ENDIF

This script conditionally executes one of two redeem scripts committed
to in the scriptPubKey, and at execution only reveals the script that
is actually used. All an observer learns of the other branch is the
script hash. This is a primitive form of MAST!

The "if 3-opcode P2SH template is encountered, switch to subscript"
rule is a bit difficult to work with however. It's not a true EVAL
opcode because control never returns back to the top-level script,
which makes some important aspects of the implementation easier, but
only at the cost of complexity somewhere else. What if there are
remaining opcodes in the script, such as the ELSE clause and ENDIF in
the script above?  They would never be executed, but does e.g. the
closing ENDIF still need to be present? Or what about the standard
pay-to-pubkey-hash "1Address" output:

  DUP HASH160 <20-byte-key-hash> EQUALVERIFY CHECKSIG

That almost looks like the magic P2SH template, except there is an
EQUALVERIFY instead of an EQUAL. The script interpreter should
obviously not treat the pubkey of a pay-to-pubkey-hash output as a
script and recurse into it, whereas it should for a P2SH style
script. But isn't the distinction kinda arbitrary?

And of course the elephant in the room is that by choosing not to
return to the original execution context we are no longer talking
about a soft-fork. Work out, for example, what will happen with the
following script:

  [TRUE] HASH160 <20-byte-hash> EQUAL FALSE

(It returns false on a node that doesn't understand generalized
3-opcode P2SH recursion, true on a node that does.)


Implicit tail-call execution semantics and P2SH

Well there's a better approach than trying to create a macro-op fusion
franken-EVAL. We have to run scripts to the end for any proposal to
be a soft-fork, and we want to avoid saving state due to prior
experience of that leading to bugs in BIP 12. That narrows our design
space to one option: allow recursion only as the final act of a
script, as BIP 16 does, but for any script, not just a certain
template. That way we can safely jump into the subscript without
bothering to save local state, because termination of the subscript is
termination of the script as a whole. In computer science terms, this
is known as tail-call execution semantics.
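
A sketch of that control flow, with execute() as a stand-in for the ordinary 
script interpreter (illustrative pseudocode, not the BIP's specification):

    def execute(script: bytes, stack: list[bytes]) -> bool:
        # Stand-in for ordinary script evaluation, mutating the stack and
        # returning False on any failed verify-type operation.
        raise NotImplementedError

    def is_true(item: bytes) -> bool:
        # Script truthiness, simplified: any nonzero byte makes it true.
        return any(item)

    def execute_with_tail_call(script: bytes, stack: list[bytes]) -> bool:
        if not execute(script, stack) or not stack:
            return False
        top = stack[-1]
        if top in (b"", b"\x01"):      # clean true/false: terminate as usual
            return top == b"\x01"
        subscript = stack.pop()        # otherwise, tail-call into the top
        return execute(subscript, stack) and bool(stack) and is_true(stack[-1])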

To illustrate, consider the following scriptPubKey:

  DUP HASH160 <20-byte-hash> EQUALVERIFY

This script is almost exactly the same as the P2SH template, except
that it leaves the redeem script on the stack rather than consuming
it, thanks to the DUP, while it _does_ consume the boolean value at
the end because of 

Re: [bitcoin-dev] cleanstack alt stack & softfork improvements (Was: Merkle branch verification & tail-call semantics for generalized MAST)

2017-09-20 Thread Mark Friedenbach via bitcoin-dev

> On Sep 19, 2017, at 10:13 PM, Johnson Lau  wrote:
> 
> If we don’t want this ugliness, we could use a new script version for every 
> new op code we add. In the new BIP114 (see link above), I suggest to move the 
> script version to the witness, which is cheaper.

To be clear, I don’t think it is so much that the version should be moved to 
the witness, but rather that there are two separate version values here: one 
in the scriptPubKey, which specifies the format and structure of the segwit 
commitment itself, and another in the witness, which gates functionality in 
script or whatever else is used by that witness type. Segwit just unfortunately 
didn’t include the latter, an oversight that should be corrected at the next 
upgrade opportunity.

The address-visible “script version” field should probably be renamed “witness 
type”, as it will only be used in the future to encode how to check the witness 
commitment in the scriptPubKey against the data provided in the witness. 
Upgrades and improvements to the features supported by those witness types 
won’t require new top-level witness types to be defined. Defining a new opcode, 
even one which modifies the stack, doesn’t change the hashing scheme used by 
the witness type.

v0,32-bytes is presently defined to calculate the double-SHA256 hash of the 
top-most serialized item on the stack, and compare that against the 32-byte 
commitment value. Arguably it probably should have hashed the top two values, 
one of which would have been the real script version. This could be fixed 
however, even without introducing a new witness type. Do a soft-fork upgrade 
that checks if the witness redeem script is push-only, and if so then pop the 
last push off as the script version (>= 1), and concatenate the rest to form 
the actual redeem script. We inherit a little technical debt from having to 
deal with push limits, but we avoid burning v0 in an upgrade to v1 that does 
little more than add a script version.
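
A sketch of that parsing rule (handling only the plain 0x01..0x4b push 
opcodes, for brevity; names are mine):

    def split_versioned_redeem_script(redeem_script: bytes):
        """If the v0 redeem script is push-only, the final push is the
        script version (>= 1) and the concatenation of the earlier
        pushes forms the actual redeem script; otherwise return None
        and treat it as an ordinary v0 script."""
        pushes, i = [], 0
        while i < len(redeem_script):
            op = redeem_script[i]
            if not (0x01 <= op <= 0x4b) or i + 1 + op > len(redeem_script):
                return None
            pushes.append(redeem_script[i + 1:i + 1 + op])
            i += 1 + op
        if len(pushes) < 2:
            return None
        version = int.from_bytes(pushes[-1], "little")
        if version < 1:
            return None
        return version, b"".join(pushes[:-1])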

v1,32-bytes would then be used for a template version of MAST, or whatever 
other idea comes along that fundamentally changes the way the witness 
commitment is calculated.

Mark
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] cleanstack alt stack & softfork improvements (Was: Merkle branch verification & tail-call semantics for generalized MAST)

2017-09-19 Thread Mark Friedenbach via bitcoin-dev

> On Sep 18, 2017, at 8:09 PM, Luke Dashjr <l...@dashjr.org> wrote:
> 
> On Tuesday 19 September 2017 12:46:30 AM Mark Friedenbach via bitcoin-dev 
> wrote:
>> After the main discussion session it was observed that tail-call semantics
>> could still be maintained if the alt stack is used for transferring
>> arguments to the policy script.
> 
> Isn't this a bug in the cleanstack rule?

Well in the sense that "cleanstack" doesn't do what it says, sure.

However cleanstack was introduced as a consensus rule to prevent a
possible denial of service vulnerability where a third party could
intercept any* transaction broadcast and arbitrarily add data to the
witness stack, since witness data is not covered by a checksig.

Cleanstack as-is accomplishes this because any extra items on the
stack would pass through all realistic scripts, remaining on the stack
and thereby violating the rule. There is no reason to prohibit extra
items on the altstack as those items can only arrive there
purposefully as an action of the script itself, not a third party
malleation of witness data. You could of course use DEPTH to write a
script that takes a variable number of parameters and sends them to
the altstack. Such a script would be malleable if those extra
parameters are not used. But that is predicated on the script being
specifically written in such a way as to be vulnerable; why protect
against that?

There are other solutions to this problem that could have been taken
instead, such as committing to the number of items or maximum size of
the stack as part of the sighash data, but cleanstack was the approach
taken. Arguably for a future script version upgrade one of these other
approaches should be taken to allow for shorter tail-call scripts.

Mark

* Well, almost any. You could end the script with DEPTH EQUAL and that
  is a compact way of ensuring the stack is clean (assuming the script
  finished with just "true" on the stack). Nobody does this however
  and burning two witness bytes of every redeem script going forward
  as a protective measure seems like an unnecessary ask.


Re: [bitcoin-dev] Merkle branch verification & tail-call semantics for generalized MAST

2017-09-18 Thread Mark Friedenbach via bitcoin-dev
As some of you may know, the MAST proposal I sent to the mailing list
on September 6th was discussed at the in-person CoreDev meetup in
San Francisco. In this email I hope to summarize the outcome of that
discussion. As Chatham House rules were in effect, I will refrain from
attributing names to this summary.

* An introductory overview of the BIPs was presented, for the purpose
  of familiarizing the audience with what they are attempting to
  accomplish and how they do so.

* There was a discussion of a single vs multi-element MBV opcode. It
  was put forward that there should perhaps be different opcodes for
  the sake of script analysis, since a multi-element MBV will
  necessarily consume a variable number of inputs. However it was
  countered that if the script encodes the number of elements as an
  integer push to the top of the stack immediately before the opcode,
  then static analyzability is maintained in such instances. I took
  the action item to investigate what an ideal serialization format
  would be for a multi-element proof, which is the only thing holding
  back a multi-element MBV proposal.

* It was pointed out that the non-clean-stack tail-call semantics is
  not compatible with segwit's consensus-enforcement of the clean
  stack rule. Some alternatives were suggested, such as changing
  deployment mechanisms. After the main discussion session it was
  observed that tail-call semantics could still be maintained if the
  alt stack is used for transferring arguments to the policy script. I
  will be updating the BIP and example implementation accordingly.

* The observation was made that single-layer tail-call semantics can
  be thought of as really being P2SH with user-specified hashing. If
  the P2SH script template had been constructed slightly differently
  such as to not consume the script, it would even have been fully
  compatible with tail-call semantics.

* It was mentioned that using script versioning to deploy a MAST
  template allows for saving 32 bytes of witness per input, as the
  root hash is contained directly in the output being spent. The
  downside however is losing the permissionless innovation that comes
  with a programmable hashing mechanism.

* The discussion generally drifted into a wider discussion about
  script version upgrades and related issues, such as whether script
  versions should exist in the witness as well, and the difference in
  meaning between the two. This is an important subject, but only of
  relevance insofar as using a script version upgrade to deploy MAST
  would add significant delay from having to sort through these issues
  first.

This feedback led to some minor tweaks to the proposal, which I will
be making, as well as the major feature request of a multi-element
MERKLE-BRANCH-VERIFY opcode, which requires a little bit more effort
to accomplish. I will report back to this list again when that work
is done.

Sincerely,
Mark Friedenbach


Re: [bitcoin-dev] proposal: extend WIF format for segwit

2017-09-17 Thread Mark Friedenbach via bitcoin-dev
Bech32 and WIF payload format are mostly orthogonal issues. You can design a 
new wallet import format now and later switch it to Bech32.

> On Sep 17, 2017, at 7:42 AM, AJ West via bitcoin-dev 
>  wrote:
> 
> Hi I have a small interjection about the point on error correction (excuse me 
> if it seems elementary). Isn't there an argument to be made where a wallet 
> software should never attempt to figure out the 'correct' address, or in this 
> case private key? I don't think it's crazy to suggest somebody could import a 
> slightly erroneous WIF, the software gracefully error-corrects any problem, 
> but then the user copies that error onward such as in their backup processes 
> like a paper wallet. I always hate to advocate against a feature, I'm just 
> worried too much error correcting removes the burden of exactitude and 
> attention of the user (eg. "I know I can have up to 4 errors").
> 
> I'm pretty sure I read those arguments somewhere in a documentation or issue 
> tracker/forum post. Maybe I'm misunderstanding the bigger picture in this 
> particular case, but I was just reminded of that concept (even if it only 
> applies generally).
> 
> Thanks,
> AJ West
> 
>> On Sun, Sep 17, 2017 at 4:10 AM, Thomas Voegtlin via bitcoin-dev 
>>  wrote:
>> On 17.09.2017 04:29, Pieter Wuille wrote:
>> >
>> > This has been a low-priority thing for me, though, and the computation work
>> > to find a good checksum is significant.
>> >
>> 
>> Thanks for the info. I guess this means that a bech32 format for private
>> keys is not going to happen soon. Even if such a format was available,
>> the issue would remain for segwit-in-p2sh addresses, which use base58.
>> 
>> The ambiguity of the WIF format is currently holding me from releasing a
>> segwit-capable version of Electrum. I believe it is not acceptable to
>> use the current WIF format with segwit scripts; that would just create
>> technological debt, forcing wallets to try all possible scripts. There
>> is a good reason why WIF adds a 0x01 byte for compressed pubkeys; it
>> makes it unambiguous.
>> 
>> I see only two options:
>>  1. Disable private keys export in Electrum Segwit wallets, until a
>> common WIF extension has been agreed on.
>>  2. Define my own WIF extension for Electrum, and go ahead with it.
>> 
>> Defining my own format does make sense for the xpub/xprv format, because
>> Electrum users need to share master public keys across Electrum wallets.
>> It makes much less sense for WIF, though, because WIF is mostly used to
>> import/sweep keys from other wallets.
>> 
>> I would love to know what other wallet developers are going to do,
>> especially Core. Are you going to export private keys used in segwit
>> scripts in the current WIF format?
>> 


Re: [bitcoin-dev] Merkle branch verification & tail-call semantics for generalized MAST

2017-09-12 Thread Mark Friedenbach via bitcoin-dev
On Sep 12, 2017, at 1:55 AM, Johnson Lau  wrote:

> This is ugly and actually broken, as different script path may
> require different number of stack items, so you don't know how many
> OP_TOALTSTACK do you need. Easier to just use a new witness version

DEPTH makes this relatively easy to do. Just repeat the following for
the maximum number of stack elements that might be used:

  DEPTH 1SUB IF SWAP TOALTSTACK ENDIF

There are probably more compact alternatives.

Using a new script version is easier, but not faster. There's a number
of things that might be fixed in a v1 upgrade, and various design
decisions to sort out regarding specification of a witness version
(version in the witness rather than the scriptPubKey).

Tree signatures and MAST are immediately useful to many services,
however, and I would hate to delay usage by six months to a year or
more by serializing dependencies instead of doing them in parallel.

> Otherwise, one could attack relay and mining nodes by sending many
> small size txs with many sigops, forcing them to validate, and
> discard due to insufficient fees.
>
> Technically it might be ok if we commit the total validation cost
> (sigop + hashop + whatever) as the first witness stack item

That is what I'm suggesting. And yes, there are changes that would
have to be made to the p2p layer and transaction processing to handle
this safely. I'm arguing that the cost of doing so is worth it, and a
better path forward.

> Without the limit I think we would be DoS-ed to dead

4MB of secp256k1 signatures takes 10s to validate on my 5 year old
laptop (125,000 signatures, ignoring public keys and other things that
would consume space). That's much less than bad blocks that can be
constructed using other vulnerabilities.

> So to make it functionally comparable with your proposal, the
> IsMSV0Stack() function is not needed. The new 249-254 lines in
> interpreter.cpp could be removed. The new 1480-1519 lines could be
> replaced by a few lines copied from the existing P2WSH code. I can
> make a minimal version if you want to see how it looks like

That's alright, I don't think it's necessary to purposefully restrict
one to compare them head to head with the same features. They are
different proposals with different pros and cons.

Kind regards,
Mark Friedenbach



Re: [bitcoin-dev] Fast Merkle Trees

2017-09-07 Thread Mark Friedenbach via bitcoin-dev
TL;DR I'll be updating the fast Merkle-tree spec to use a different
  IV, using (for infrastructure compatibility reasons) the scheme
  provided by Peter Todd.

This is a specific instance of a general problem where you cannot
trust scripts given to you by another party. Notice that we run into
the same sort of problem when doing key aggregation, in which you must
require the other party to prove knowledge of the discrete log before
using their public key, or else key cancellation can occur.

With script it is a little bit more complicated as you might want
zero-knowledge proofs of hash pre-images for HTLCs as well as proofs
of DL knowledge (signatures), but the basic idea is the same. Multi-
party wallet level protocols for jointly constructing scriptPubKeys
should require a 'delinearization' step that proves knowledge of
information necessary to complete each part of the script, as part of
proving the safety of a construct.

I think my hangup before in understanding the attack you describe was
in actualizing it into a practical attack that actually escalates the
attacker's capabilities. If the attacker can get you to agree to a
MAST policy that is nothing more than a CHECKSIG over a key they
presumably control, then they don't need to do any complicated
grinding. The attacker in that scenario would just actually specify a
key they control and take the funds that way.

Where this presumably leads to an actual exploit is when you specify a
script that a curious counter-party actually takes the time to
investigate and believes to be secure. For example, a script that
requires a signature or pre-image revelation from that counter-party.
That would require grinding not a few bytes, but at minimum 20-33
bytes for either a HASH160 image or the counter-party's key.

If I understand the revised attack description correctly, then there
is a small window in which the attacker can create a script less than
55 bytes in length, where nearly all of the first 32 bytes are
selected by the attacker, yet nevertheless the script seems safe to
the counter-party. The smallest such script I was able to construct
was the following:

  <fake-pubkey> CHECKSIGVERIFY HASH160 <20-byte-hash> EQUAL

This is 56 bytes and requires only 7 bits of grinding in the fake
pubkey. But 56 bytes is too large. Switching to secp256k1 serialized
32-byte pubkeys (in a script version upgrade, for example) would
reduce this to the necessary 55 bytes with 0 bits of grinding. A
smaller variant is possible:

  DUP HASH160 <20-byte-key-hash> EQUALVERIFY
  CHECKSIGVERIFY HASH160 <20-byte-hash> EQUAL

This is 46 bytes, but requires grinding 96 bits, which is a bit less
plausible.

Belts and suspenders are not so terrible together, however, and I
think there is enough of a justification here to look into modifying
the scheme to use a different IV for hash tree updates. This would
prevent even the above implausible attacks.
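
For illustration, one way to derive a distinct IV while reusing stock SHA-256 
infrastructure is a tagged-hash construction: prefix every node hash with two 
copies of the hash of a fixed tag string, so the midstate after the first 
64-byte block acts as a tag-specific IV. Whether this matches the exact scheme 
referenced above is an assumption on my part, and this sketch uses the full 
padded hash rather than the spec's single compression run:

    import hashlib

    def tagged_node_hash(tag: str, left: bytes, right: bytes) -> bytes:
        t = hashlib.sha256(tag.encode()).digest()
        # The t || t prefix fills one compression block, yielding a
        # midstate that differs from the standard SHA-256 IV.
        return hashlib.sha256(t + t + left + right).digest()

    node = tagged_node_hash("FastMerkleTree", b"\x00" * 32, b"\x11" * 32)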


> On Sep 7, 2017, at 11:55 AM, Russell O'Connor  wrote:
> 
> 
> 
> On Thu, Sep 7, 2017 at 1:42 PM, Mark Friedenbach wrote:
> I've been puzzling over your email since receiving it. I'm not sure it
> is possible to perform the attack you describe with the tree structure
> specified in the BIP. If I may rephrase your attack, I believe you are
> seeking a solution to the following:
> 
> Want: An innocuous script and a malign script for which
> 
>    double-SHA256(innocuous)
> 
> is equal to either
> 
>    fast-SHA256(double-SHA256(malign) || r) or
>    fast-SHA256(r || double-SHA256(malign))
> 
> or  fast-SHA256(fast-SHA256(double-SHA256(malign) || r1) || r0)
> or  fast-SHA256(fast-SHA256(r1 || double-SHA256(malign)) || r0)
> or ...
>  
> where r is a freely chosen 32-byte nonce. This would allow the
> attacker to reveal the innocuous script before funds are sent to the
> MAST, then use the malign script to spend.
> 
> Because of the double-SHA256 construction I do not see how this can be
> accomplished without a full break of SHA256. 
> 
> The particular scenario I'm imagining is a collision between
> 
> double-SHA256(innocuous)
> 
> and 
> 
> fast-SHA256(fast-SHA256(fast-SHA256(double-SHA256(malign) || r2) || r1) 
> || r0).
> 
> where innocuous is a Bitcoin Script that is between 32 and 55 bytes long.
> 
> Observe that when data is less than 55 bytes then double-SHA256(data) = 
> fast-SHA256(fast-SHA256(padding-SHA256(data)) || 0x8000...100) (which is 
> really the crux of the matter).
> 
> Therefore, to get our collision it suffices to find a collision between
> 
> padding-SHA256(innocuous)
> 
> and
> 
> fast-SHA256(double-SHA256(malign) || r2) || r1
> 
> r1 can freely be set to the second half of padding-SHA256(innocuous), so it 
> suffices to find a collision between
> 
>fast-SHA256(double-SHA256(malign) || r2)
> 
> and the first half of padding-SHA256(innocuous) which is equal to the first 
> 32 bytes of innocuous.
> 
> Imagine the first opcode of innocuous is the push of a value 

Re: [bitcoin-dev] Fast Merkle Trees

2017-09-06 Thread Mark Friedenbach via bitcoin-dev
This design purposefully does not distinguish leaf nodes from internal nodes. 
That way chained invocations can be used to validate paths longer than 32 
branches. Do you see a vulnerability due to this lack of distinction?

> On Sep 6, 2017, at 6:59 PM, Russell O'Connor <rocon...@blockstream.io> wrote:
> 
> The fast hash for internal nodes needs to use an IV that is not the standard 
> SHA-256 IV. Instead it needs to use some other fixed value, which should 
> itself be the SHA-256 hash of some fixed string (e.g. the string "BIP ???" or 
> "Fast SHA-256").
> 
> As it stands, I believe someone can claim a leaf node as an internal node by 
> creating a proof that provides a phony right-hand branch claiming to have 
> hash 0x8..100 (which is really the padding value for the second half 
> of a double SHA-256 hash).
> 
> (I was schooled by Peter Todd by a similar issue in the past.)
> 
>> On Wed, Sep 6, 2017 at 8:38 PM, Mark Friedenbach via bitcoin-dev 
>> <bitcoin-dev@lists.linuxfoundation.org> wrote:
>> Fast Merkle Trees
>> BIP: https://gist.github.com/maaku/41b0054de0731321d23e9da90ba4ee0a
>> Code: https://github.com/maaku/bitcoin/tree/fast-merkle-tree


[bitcoin-dev] Merkle branch verification & tail-call semantics for generalized MAST

2017-09-06 Thread Mark Friedenbach via bitcoin-dev
I would like to propose two new script features to be added to the
bitcoin protocol by means of soft-fork activation. These features are
a new opcode, MERKLE-BRANCH-VERIFY (MBV) and tail-call execution
semantics.

In brief summary, MERKLE-BRANCH-VERIFY allows script authors to force
redemption to use values selected from a pre-determined set committed
to in the scriptPubKey, but without requiring revelation of unused
elements in the set for both enhanced privacy and smaller script
sizes. Tail-call execution semantics allows a single level of
recursion into a subscript, providing properties similar to P2SH while
at the same time more flexible.

These two features together are enough to enable a range of
applications such as tree signatures (minus Schnorr aggregation) as
described by Pieter Wuille [1], and a generalized MAST useful for
constructing private smart contracts. It also brings privacy and
fungibility improvements to users of counter-signing wallet/vault
services as unique redemption policies need only be revealed if/when
exceptional circumstances demand it, leaving most transactions looking
the same as any other MAST-enabled multi-sig script.

I believe that the implementation of these features is simple enough,
and the use cases compelling enough, that we could do a BIP 8/9 rollout
of these features in relatively short order, perhaps before the end of
the year.

I have written three BIPs to describe these features, and their
associated implementation, for which I now invite public review and
discussion:

Fast Merkle Trees
BIP: https://gist.github.com/maaku/41b0054de0731321d23e9da90ba4ee0a
Code: https://github.com/maaku/bitcoin/tree/fast-merkle-tree

MERKLEBRANCHVERIFY
BIP: https://gist.github.com/maaku/bcf63a208880bbf8135e453994c0e431
Code: https://github.com/maaku/bitcoin/tree/merkle-branch-verify

Tail-call execution semantics
BIP: https://gist.github.com/maaku/f7b2e710c53f601279549aa74eeb5368
Code: https://github.com/maaku/bitcoin/tree/tail-call-semantics

Note: I have circulated this idea privately among a few people, and I
will note that there is one piece of feedback which I agree with but
which is not incorporated yet: there should be a multi-element MBV
opcode that allows verifying that multiple items are extracted from a
single tree. It is not obvious how MBV could be modified to support
this without sacrificing important properties, or whether there should
be a separate multi-MBV opcode instead.

Kind regards,
Mark Friedenbach


Re: [bitcoin-dev] P2WPKH Scripts, P2PKH Addresses, and Uncompressed Public Keys

2017-08-28 Thread Mark Friedenbach via bitcoin-dev

> On Aug 28, 2017, at 8:29 AM, Alex Nagy via bitcoin-dev 
>  wrote:
> 
> If Alice gives Bob 1MsHWS1BnwMc3tLE8G35UXsS58fKipzB7a, is there any way Bob 
> can safely issue Native P2WPKH outputs to Alice?
> 

No, and the whole issue of compressed vs uncompressed is a red herring. If 
Alice gives Bob 1MsHWS1BnwMc3tLE8G35UXsS58fKipzB7a, she is saying to Bob “I 
will accept payment to the scriptPubKey [DUP HASH160 
PUSHDATA(20)[e4e517ee07984a4000cd7b00cbcb545911c541c4] EQUALVERIFY CHECKSIG]”.

Payment to any other scriptPubKey may not be recognized by Alice.
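
To make this concrete, here is a minimal sketch (not from the original
mail) of the scriptPubKey that such an address commits to; the 20-byte
hash160 is simply the Base58Check payload of the address:

    #include <cstdint>
    #include <vector>

    // Sketch: build the P2PKH scriptPubKey implied by a legacy "1..."
    // address, given its decoded 20-byte pubkey hash.
    std::vector<uint8_t> P2PKHScript(const std::vector<uint8_t>& hash160)
    {
        std::vector<uint8_t> script;
        script.push_back(0x76); // OP_DUP
        script.push_back(0xa9); // OP_HASH160
        script.push_back(0x14); // push the next 20 bytes
        script.insert(script.end(), hash160.begin(), hash160.end());
        script.push_back(0x88); // OP_EQUALVERIFY
        script.push_back(0xac); // OP_CHECKSIG
        return script;
    }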


Re: [bitcoin-dev] UTXO growth scaling solution proposal

2017-08-22 Thread Mark Friedenbach via bitcoin-dev
Lock time transactions have been valid for over a year now I believe. In any 
case we can't scan the block chain for usage patterns in UTXOs because P2SH 
puts the script in the signature on spend.

> On Aug 22, 2017, at 4:29 PM, Thomas Guyot-Sionnest via bitcoin-dev 
>  wrote:
> 
> I'm just getting the proposal out... if we decide to go forward (pretty huge 
> "if" right now) whenever it kicks in after 15, 50 or 100 years should be 
> decided as early as possible.
> 
> Are CheckLockTimeVerify transactions accepted yet? I thought most special 
> transactions were only accepted on Testnet... In any case we should be able 
> to scan the blockchain and look for any such transaction. And I hate to make 
> this more complex, but maybe re-issuing the tx from coinbase could be an 
> option?
> 
> --
> Thomas
> 
>> On 22/08/17 06:58 PM, Rodney Morris via bitcoin-dev wrote:
>> Thomas et.al.
>> 
>> So, in your minds, anyone who locked up coins using CLTV for their child to 
>> receive on their 21st birthday, for the sake of argument, has effectively 
>> forfeit those coins after the fact?  You are going to force anyone who took 
>> coins offline (cryptosteel, paper, doesn't matter) to bring those coins back 
>> online, with the inherent security risks?
>> 
>> In my mind, the only sane way to even begin discussing an approach 
>> implementing such a thing - where coins "expire" after X years - would be to 
>> give the entire ecosystem X*N years warning, where N > 1.5.  I'd also 
>> suggest X would need to be closer to the life span of a human than zero.  
>> Mind you, I'd suggest this "feature" would need to be coded and deployed as 
>> a future-hard-fork X*N years ahead of time.  A-la Satoshi's blog post 
>> regarding increasing block size limit, a good enough approximation would be 
>> to add a block height check to the code that approximates X*N years, based 
>> on 10 minute blocks.  The transparency around such a change would need to be 
>> radical and absolute.
>> 
>> I'd also suggest that, similar to CLTV, it only makes sense to discuss 
>> creating a "never expire" transaction output, if such a feature were being 
>> seriously considered.
>> 
>> If you think discussions around a block size increase were difficult, then 
>> we'll need a new word to describe the challenges and vitriol that would 
>> arise in arguments that will follow this discussion should it be seriously 
>> proposed, IMHO.
>> 
>> I also don't think it's reasonable to conflate the discussion herein with 
>> discussion about what to do when ECC or SHA256 is broken.  The 
>> weakening/breaking of ECC poses a real risk to the stability of Bitcoin - 
>> the possible release of Satoshi's stash being the most obvious example - and 
>> what to do about that will require serious consideration when the time 
>> comes.  Even if the end result is the same - that coins older than "X" will 
>> be invalidated - everything else important about the scenarios are different 
>> as far as I can see.
>> 
>> Rodney
>> 
> 


Re: [bitcoin-dev] UTXO growth scaling solution proposal

2017-08-22 Thread Mark Friedenbach via bitcoin-dev
A fun exercise to be sure, but perhaps off topic for this list?

> On Aug 22, 2017, at 1:06 PM, Erik Aronesty via bitcoin-dev 
>  wrote:
> 
> > The initial message I replied to stated:
> 
> Yes, 3 years is silly.  But coin expiration and quantum resistance is 
> something I've been thinking about for a while, so I tried to steer the 
> conversation away from stealing old money for no reason ;).   Plus I like the 
> idea of making Bitcoin "2000 year proof".
> 
> - I cannot imagine either SHA256 or any of our existing wallet formats 
> surviving 200 years, if we expect both Moore's law and quantum computing to be 
> a thing.   I would expect the PoW to be rendered obsolete before the Bitcoin 
> addresses.
> 
>  - A PoW change using Keccak and a flexible number of bits can be designed as 
> a "future hard fork".  That is:  the existing POW can be automatically 
> rendered obsolete... but only in the event that difficulty rises to the level 
> of obsolescence.   Then the code for a new algorithm with a flexible number 
> of bits and a difficulty that can scale for thousands of years can then 
> automatically kick in.
> 
>  - A new addresses format and signing protocols that use a flexible number of 
> bits can be introduced.   The maximum number of supported bits can be 
> configurable, and trivially changed.   These can be made immediately 
> available but completely optional.
> 
>  - The POW difficulty can be used to inform the expiration of any addresses 
> that can be compromised within 5 years assuming this power was somehow used 
> to compromise them.   Some mechanism for translating global hashpower to 
> brute force attack power can be researched, and conservative estimates made. 
>   Right now, it's like "heat death of the universe" amount of time to crack 
> with every machine on the planet.   But hey... things change and 2000 years 
> is a long time.   This information can be used to inform the expiration and 
> reclamation of old, compromised public addresses.
> 
> - Planning a hard fork 100 to 1000 years out is a fun exercise
> 
> 
> 
> 
>> On Tue, Aug 22, 2017 at 2:55 PM, Chris Riley  wrote:
>> The initial message I replied to stated in part, "Okay so I quite like this 
>> idea. If we start removing at height 63 or 84 (gives us 4-8 years to 
>> develop this solution), it stays nice and neat with the halving interval"
>> 
>> That is less than 3 years or less than 7 years  away. Much sooner than it is 
>> believed QC or Moore's law could impact bitcoin.  Changing bitcoin so as to 
>> require that early coins start getting "scavenged" at that date seems 
>> unneeded and irresponsible.  Besides, your ECDSA is only revealed when you 
>> spend the coins which does provide some quantum resistance.  Hal was just an 
>> example of people putting their coins away expecting them to be there at X 
>> years in the future, whether it is for himself or for his kids and wife.  
>> 
>> :-)
>> 
>> 
>> 
>>> On Tue, Aug 22, 2017 at 1:33 PM, Matthew Beton  
>>> wrote:
>>> Very true, if Moore's law is still functional in 200 years, computers will 
>>> be 2^100 times faster (possibly more if quantum computing becomes 
>>> commonplace), and so old wallets may be easily cracked.
>>> 
>>> We will need a way to force people to use newer, higher security wallets, 
>>> and turning coins to mining rewards is better solution than them just being 
>>> hacked.
>>> 
>>> 
 On Tue, 22 Aug 2017, 7:24 pm Thomas Guyot-Sionnest  wrote:
 In any case, when Hal Finney does not wake up from his 200-year 
 cryo-preservation (because unfortunately for him, 200 years earlier they 
 did not know how to preserve a body well enough to resurrect it), he would 
 find that advances in computer technology had made it trivial for anyone to 
 steal his coins using the long-obsolete secp256k1 EC curve (which was done 
 long before, as soon as it became profitable to crack the huge stash 
 of coins sitting stale in the early blocks).
 
 I just don't get that argument that you can't be "your own bank". The only 
 requirement coming from this would be to move your coins about once every 
 10 years or so, which you should be able to do if you have your private 
 keys (you should!). You say it may be something to consider when computer 
 breakthroughs make old outputs vulnerable, but I say it's not "if" but 
 "when" it happens, and by telling people up front that their coins 
 require moving every once in a long while you ensure they won't do stupid 
 things or come back 50 years from now and complain their addresses have 
 been scavenged.
 
 --
 Thomas
 
 
> On 22/08/17 10:29 AM, Erik Aronesty via bitcoin-dev wrote:
> I agree, it is only a good idea in the event of a quantum computing 
> threat to the security of Bitcoin.   
> 
>> On Tue, Aug 

Re: [bitcoin-dev] Would anyone object to adding a dlopen message hook system?

2017-08-13 Thread Mark Friedenbach via bitcoin-dev
Jonas, I think his proposal is to enable extending the P2P layer, e.g.
adding new message types. Are you suggesting having externalized
message processing? That could be done via RPC/ZMQ while opening up a
much more narrow attack surface than dlopen, although I imagine such
an interface would require a very complex API specification.
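
As an illustration of the ZMQ route, a minimal external consumer (a
sketch, assuming the node is started with
-zmqpubrawtx=tcp://127.0.0.1:28332) might look like:

    #include <cstdio>
    #include <zmq.h>

    // Sketch: subscribe to raw transactions published by the node.
    // Notifications arrive as multi-part messages (topic, payload,
    // sequence number); this sketch just prints frame sizes.
    int main()
    {
        void* ctx = zmq_ctx_new();
        void* sub = zmq_socket(ctx, ZMQ_SUB);
        zmq_connect(sub, "tcp://127.0.0.1:28332");
        zmq_setsockopt(sub, ZMQ_SUBSCRIBE, "rawtx", 5); // topic filter
        for (;;) {
            zmq_msg_t msg;
            zmq_msg_init(&msg);
            if (zmq_msg_recv(&msg, sub, 0) == -1) break;
            printf("received frame of %zu bytes\n", zmq_msg_size(&msg));
            zmq_msg_close(&msg);
        }
        zmq_close(sub);
        zmq_ctx_destroy(ctx);
        return 0;
    }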

On Sun, Aug 13, 2017 at 1:00 PM, Jonas Schnelli via bitcoin-dev
 wrote:
> Hi Erik
>
> Thanks for your proposal.
> In general, modularisation is a good thing, though proposing that Core add 
> modules via dlopen() seems the wrong direction.
> Core already has the problem of running too many things in the same process. 
> The consensus logic, p2p system as well as the wallet AND the GUI do all 
> share the same process (!).
>
> A module approach like you describe would be a security nightmare (and Core 
> is currently in the process of separating out the wallet and the GUI into its 
> own process).
>
> What does speak against using the existing IPC interfaces like RPC/ZMQ?
> RPC can be bidirectional using long poll.
>
> /jonas
>
>> I was thinking about something like this that could add the ability for 
>> module extensions in the core client.
>>
>> When messages are received, modules hooks are called with the message data.
>>
>> They can then handle, mark the peer invalid, push a message to the peer or 
>> pass through an alternate command.  Also, modules could have their own 
>> private commands prefixed by "x:" or something like that.
>>
>> The idea is that the base P2P layer is left undisturbed, but there is now a 
>> way to create "enhanced features" that some peers support.
>>
>> My end goal is to support using lightning network micropayments to allow 
>> people to pay for better node access - creating a market for node services.
>>
>> But I don't think this should be "baked in" to core.   Nor do I think it 
>> should be a "patch".   It should be a linked-in module, optionally compiled 
>> and added to bitcoin.conf, then loaded via dlopen(). Modules should be 
>> slightly robust to Bitcoin versions changing out from under them, but not if 
>> the network layer is changed.   This can be ensured by a) keeping a module 
>> version number, and b) treating module responses as if they were just 
>> received from the network.   Any module incompatibility should throw an 
>> exception...ensuring broken peers don't stay online.


Re: [bitcoin-dev] Miners forced to run non-core code in order to get segwit activated

2017-06-20 Thread Mark Friedenbach via bitcoin-dev
(148/91/segwit2x(per today)) effectively guarantee a chainsplit-- so I
>>> > don't think that holds.
>>>
>>> Segwit2x/BIP91/BIP148 will orphan miners that do not run a Segwit2x (or
>>> BIP148) node, because they wouldn't have the new consensus rule of requiring
>>> all blocks to signal for segwit.
>>> I don't believe there would be any long lasting chainsplit though
>>> (because of the ~80% hashrate support on segwit2x), perhaps 2-3 blocks if we
>>> get unlucky.
>>>
>>> Hampus
>>>
>>> 2017-06-20 23:49 GMT+02:00 Gregory Maxwell via bitcoin-dev
>>> <bitcoin-dev@lists.linuxfoundation.org>:
>>>>
>>>> On Tue, Jun 20, 2017 at 3:44 PM, Erik Aronesty via bitcoin-dev
>>>> <bitcoin-dev@lists.linuxfoundation.org> wrote:
>>>> > Because a large percentage of miners are indifferent, right now miners
>>>> > have
>>>> > to choose between BIP148 and Segwit2x if they want to activate Segwit.
>>>>
>>>> Miners can simply continue signaling segwit, which will leave them
>>>> at least soft-fork compatible with BIP148 and BIP91 (and god knows
>>>> what "segwit2x" is since they keep changing the actual definition and
>>>> do not have a specification; but last I saw the near-term behavior the
>>>> same as BIP91 but with a radically reduced activation window, so the
>>>> story would be the same there in the near term).
>>>>
>>>> Ironically, it looks like most of the segwit2x signaling miners are
>>>> faking it (because they're not signaling segwit which it requires).
>>>> It'll be unfortunate if some aren't faking it and start orphaning
>>>> their own blocks because they are failing to signal segwit.
>>>>
>>>> I don't think the rejection of segwit2x from Bitcoin's developers
>>>> could be any more resolute than what we've already seen:
>>>> https://en.bitcoin.it/wiki/Segwit_support
>>>>
>>>> On Tue, Jun 20, 2017 at 5:22 PM, Mark Friedenbach via bitcoin-dev
>>>> <bitcoin-dev@lists.linuxfoundation.org> wrote:
>>>> > I think it is very naïve to assume that any shift would be temporary.
>>>> > We have a hard enough time getting miners to proactively upgrade to
>>>> > recent versions of the reference bitcoin daemon. If miners interpret
>>>> > the situation as being forced to run non-reference software in order
>>>> > to prevent a chain split because a lack of support from Bitcoin Core,
>>>> > that could be a one-way street.
>>>>
>>>> I think this is somewhat naive and sounds a lot like the repeat of the
>>>> previously debunked "XT" and "Classic" hysteria.
>>>>
>>>> There is a reason that segwit2x is pretty much unanimously rejected by
>>>> the technical community.  And just like with XT/Classic/Unlimited
>>>> you'll continue to see a strong correlation with people who are
>>>> unwilling and unable to keep updating the software at an acceptable
>>>> level of quality-- esp. because the very founding of their fork is
>>>> predicated on discarding those properties.
>>>>
>>>> If miners want to go off and create an altcoin-- welp, thats something
>>>> they can always do,  and nothing about that will force anyone to go
>>>> along with it.
>>>>
>>>> As far as prevent a chain split goes, all those things
>>>> (148/91/segwit2x(per today)) effectively guarantee a chainsplit-- so I
>>>> don't think that holds.


Re: [bitcoin-dev] Miners forced to run non-core code in order to get segwit activated

2017-06-20 Thread Mark Friedenbach via bitcoin-dev
Why do you say activation by August 1st is likely? That would require an entire 
difficulty adjustment period (2,016 blocks, so at least 1,916 of them) with 
>=95% bit1 signaling. That seems a tall order to organize in the scant few 
weeks remaining. 

> On Jun 20, 2017, at 3:29 PM, Jacob Eliosoff via bitcoin-dev 
> <bitcoin-dev@lists.linuxfoundation.org> wrote:
> 
> If segwit is activated before Aug 1, as now seems likely, there will be no 
> split that day.  But if activation is via Segwit2x (also likely), and at 
> least some nodes do & some don't follow through with the HF 3mo later (again, 
> likely), agreed w/ Greg that *then* we'll see a split - probably in Sep/Oct.  
> How those two chains will match up and how the split will play out is 
> anyone's guess...
> 
> 
> 
> On Jun 20, 2017 6:16 PM, "Hampus Sjöberg via bitcoin-dev" 
> <bitcoin-dev@lists.linuxfoundation.org> wrote:
> > Ironically, it looks like most of the segwit2x signaling miners are
> > faking it (because they're not signaling segwit which it requires).
> > It'll be unfortunate if some aren't faking it and start orphaning
> > their own blocks because they are failing to signal segwit.
> 
> Well, they're doing some kind of "pre-signaling" in the coinbase at the 
> moment, because the segwit2x project is still in alpha-phase according to the 
> timeline. They're just showing commitment.
> I'm sure they will begin signaling on version bit 4/BIP91 as well as actually 
> running a segwit2x node when the time comes.
> 
> 
> > As far as prevent a chain split goes, all those things
> > (148/91/segwit2x(per today)) effectively guarantee a chainsplit-- so I
> > don't think that holds.
> 
> Segwit2x/BIP91/BIP148 will orphan miners that do not run a Segwit2x (or 
> BIP148) node, because they wouldn't have the new consensus rule of requiring 
> all blocks to signal for segwit.
> I don't believe there would be any long lasting chainsplit though (because of 
> the ~80% hashrate support on segwit2x), perhaps 2-3 blocks if we get unlucky.
> 
> Hampus
> 
> 2017-06-20 23:49 GMT+02:00 Gregory Maxwell via bitcoin-dev 
> <bitcoin-dev@lists.linuxfoundation.org>:
>> On Tue, Jun 20, 2017 at 3:44 PM, Erik Aronesty via bitcoin-dev
>> <bitcoin-dev@lists.linuxfoundation.org> wrote:
>> > Because a large percentage of miners are indifferent, right now miners have
>> > to choose between BIP148 and Segwit2x if they want to activate Segwit.
>> 
>> Miners can simply continue signaling segwit, which will leave them
>> at least soft-fork compatible with BIP148 and BIP91 (and god knows
>> what "segwit2x" is since they keep changing the actual definition and
>> do not have a specification; but last I saw the near-term behavior the
>> same as BIP91 but with a radically reduced activation window, so the
>> story would be the same there in the near term).
>> 
>> Ironically, it looks like most of the segwit2x signaling miners are
>> faking it (because they're not signaling segwit which it requires).
>> It'll be unfortunate if some aren't faking it and start orphaning
>> their own blocks because they are failing to signal segwit.
>> 
>> I don't think the rejection of segwit2x from Bitcoin's developers
>> could be any more resolute than what we've already seen:
>> https://en.bitcoin.it/wiki/Segwit_support
>> 
>> On Tue, Jun 20, 2017 at 5:22 PM, Mark Friedenbach via bitcoin-dev
>> <bitcoin-dev@lists.linuxfoundation.org> wrote:
>> > I think it is very naïve to assume that any shift would be temporary.
>> > We have a hard enough time getting miners to proactively upgrade to
>> > recent versions of the reference bitcoin daemon. If miners interpret
>> > the situation as being forced to run non-reference software in order
>> > to prevent a chain split because a lack of support from Bitcoin Core,
>> > that could be a one-way street.
>> 
>> I think this is somewhat naive and sounds a lot like the repeat of the
>> previously debunked "XT" and "Classic" hysteria.
>> 
>> There is a reason that segwit2x is pretty much unanimously rejected by
>> the technical community.  And just like with XT/Classic/Unlimited
>> you'll continue to see a strong correlation with people who are
>> unwilling and unable to keep updating the software at an acceptable
>> level of quality-- esp. because the very founding of their fork is
>> predicated on discarding those properties.
>> 
>> If miners want to go off and create an altcoin-- welp, thats something
>> they can always do,  and nothing about that will force anyone to go
>> along with it.

Re: [bitcoin-dev] Miners forced to run non-core code in order to get segwit activated

2017-06-20 Thread Mark Friedenbach via bitcoin-dev
I think it is very naïve to assume that any shift would be temporary.
We have a hard enough time getting miners to proactively upgrade to
recent versions of the reference bitcoin daemon. If miners interpret
the situation as being forced to run non-reference software in order
to prevent a chain split because a lack of support from Bitcoin Core,
that could be a one-way street.

On Tue, Jun 20, 2017 at 9:49 AM, Hampus Sjöberg via bitcoin-dev
 wrote:
> I don't think it's a huge deal if the miners need to run a non-Core node
> once the BIP91 deployment of Segwit2x happens. The shift will most likely be
> temporary.
>
> I agree that the "-bip148"-option should be merged, though.
>
> 2017-06-20 17:44 GMT+02:00 Erik Aronesty via bitcoin-dev
> :
>>
>> Are we going to merge BIP91 or a -BIP148 option to core for inclusion in
>> the next release or so?
>>
>> Because a large percentage of miners are indifferent, right now miners
>> have to choose between BIP148 and Segwit2x if they want to activate Segwit.
>>
>> Should we be forcing miners to choose to run non-core code in order to
>> activate a popular feature?
>>
>> - Erik
>>


Re: [bitcoin-dev] BIP148 temporary service bit (1 << 27)

2017-06-19 Thread Mark Friedenbach via bitcoin-dev
It is essential that BIP-148 nodes connect to at least two other BIP-148 nodes 
to prevent a network partition on August 1st. The temporary service bit is how 
such nodes are able to detect each other.
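
As a sketch (illustrative names, not Core's actual identifiers),
advertising and checking the bit amounts to:

    #include <cstdint>

    // Sketch: the BIP148 temporary service bit described above.
    static const uint64_t NODE_BIP148 = uint64_t(1) << 27;

    bool PeerSignalsBIP148(uint64_t nServices)
    {
        // A peer advertising this bit claims to enforce the BIP148 rules.
        return (nServices & NODE_BIP148) != 0;
    }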

> On Jun 19, 2017, at 12:46 PM, Tom Zander via bitcoin-dev 
>  wrote:
> 
>> On Monday, 19 June 2017 21:26:22 CEST Luke Dashjr via bitcoin-dev wrote:
>> To ease the transition to BIP148 and to minimise risks in the event miners
>> choose to perform a chain split attack, at least Bitcoin Knots will be
>> using the temporary service bit (1 << 27) to indicate BIP148 support.
>> 
>> Once the transition is complete, this will no longer be necessary, and the
>> bit will be (manually) returned for reuse.
>> 
>> I encourage other software implementing BIP148 (both full and light nodes)
>> to set and use this service bit to avoid network partitioning risks.
> 
> I'm curious what action you take on the finding (or not) of a peer with this 
> bit set (or not).
> Can you link to the github commit where you implemented this?
> -- 
> Tom Zander
> Blog: https://zander.github.io
> Vlog: https://vimeo.com/channels/tomscryptochannel


Re: [bitcoin-dev] Hypothetical 2 MB hardfork to follow BIP148

2017-05-30 Thread Mark Friedenbach via bitcoin-dev
The 1MB classic block size is not redundant after segwit activation.
Segwit prevents the quadratic hashing problems, but only for segwit
outputs. The 1MB classic block size prevents quadratic hashing
problems from being any worse than they are today.
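
To spell out the accounting (an illustrative sketch using BIP141's weight
formula, not code from any proposal):

    #include <cstddef>

    // BIP141 counts non-witness ("base") bytes at 4 WU and witness bytes
    // at 1 WU, against a 4,000,000 WU cap. Because base bytes cost 4 WU
    // each, base data can never exceed 1,000,000 bytes, which is what
    // bounds the legacy quadratic sighash cost at today's level.
    size_t BlockWeight(size_t base_bytes, size_t witness_bytes)
    {
        return 4 * base_bytes + witness_bytes; // must be <= 4000000
    }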

Mark

On Tue, May 30, 2017 at 6:27 AM, Jorge Timón via bitcoin-dev
 wrote:
> Why not simply remove the (redundant after sw activation) 1 mb size
> limit check and increasing the weight limit without changing the
> discount or having 2 limits?
>
>
> On Wed, May 24, 2017 at 1:07 AM, Erik Aronesty via bitcoin-dev
>  wrote:
>> Personally, I would prefer if a 2MB lock-in that uses BIP103 for the timing.
>>
>> I think up to 20% per year can be absorbed by averages in bandwidth/CPU/RAM
>> growth, of which bandwidth seems the most constraining.
>>
>> - Erik
>>
>>
>> On Tue, May 23, 2017 at 4:23 PM, Luke Dashjr via bitcoin-dev
>>  wrote:
>>>
>>> In light of some recent discussions, I wrote up this BIP for a real 2 MB
>>> block
>>> size hardfork following Segwit BIP148 activation. This is not part of any
>>> agreement I am party to, nor anything of that sort. Just something to
>>> throw
>>> out there as a possible (and realistic) option.
>>>
>>> Note that I cannot recommend this to be adopted, since frankly 1 MB blocks
>>> really are still too large, and this blunt-style hardfork quite risky even
>>> with consensus. But if the community wishes to adopt (by unanimous
>>> consensus)
>>> a 2 MB block size hardfork, this is probably the best way to do it right
>>> now.
>>> The only possible way to improve on this IMO would be to integrate it into
>>> MMHF/"spoonnet" style hardfork (and/or add other unrelated-to-block-size
>>> HF
>>> improvements).
>>>
>>> I have left Author blank, as I do not intend to personally champion this.
>>> Before it may be assigned a BIP number, someone else will need to step up
>>> to
>>> take on that role. Motivation and Rationale are blank because I do not
>>> personally think there is any legitimate rationale for such a hardfork at
>>> this
>>> time; if someone adopts this BIP, they should complete these sections. (I
>>> can
>>> push a git branch with the BIP text if someone wants to fork it.)
>>>
>>> 
>>> BIP: ?
>>> Layer: Consensus (hard fork)
>>> Title: Post-segwit 2 MB block size hardfork
>>> Author: FIXME
>>> Comments-Summary: No comments yet.
>>> Comments-URI: ?
>>> Status: Draft
>>> Type: Standards Track
>>> Created: 2017-05-22
>>> License: BSD-2-Clause
>>> 
>>>
>>> ==Abstract==
>>>
>>> Legacy Bitcoin transactions are given the witness discount, and a block
>>> size
>>> limit of 2 MB is imposed.
>>>
>>> ==Copyright==
>>>
>>> This BIP is licensed under the BSD 2-clause license.
>>>
>>> ==Specification==
>>>
>>> Upon activation, a block size limit of 2,000,000 bytes is enforced.
>>> The block weight limit remains at 4,000,000 WU.
>>>
>>> The calculation of block weight is modified:
>>> all witness data, including both scriptSig (used by pre-segwit inputs) and
>>> segwit witness data, is measured as 1 weight-unit (WU), while all other
>>> data
>>> in the block is measured as 4 WU.
>>>
>>> The witness commitment in the generation transaction is no longer
>>> required,
>>> and instead the txid merkle root in the block header is replaced with a
>>> hash
>>> of:
>>>
>>> 1. The witness reserved value.
>>> 2. The witness merkle root hash.
>>> 3. The transaction ID merkle root hash.
>>>
>>> The maximum size of a transaction stripped of witness data is limited to 1
>>> MB.
>>>
>>> ===Deployment===
>>>
>>> This BIP is deployed by flag day, in the block where the median-past time
>>> surpasses 1543503872 (2018 Nov 29 at 15:04:32 UTC).
>>>
>>> It is assumed that when this flag day has been reached, Segwit has been
>>> activated via BIP141 and/or BIP148.
>>>
>>> ==Motivation==
>>>
>>> FIXME
>>>
>>> ==Rationale==
>>>
>>> FIXME
>>>
>>> ==Backwards compatibility==
>>>
>>> This is a hardfork, and as such not backward compatible.
>>> It should not be deployed without consent of the entire Bitcoin community.
>>> Activation is scheduled for 18 months from the creation date of this BIP,
>>> intended to give 6 months to establish consensus, and 12 months for
>>> deployment.
>>>
>>> ==Reference implementation==
>>>
>>> FIXME
>>>

Re: [bitcoin-dev] I do not support the BIP 148 UASF

2017-04-15 Thread Mark Friedenbach via bitcoin-dev
Greg,

If I understand correctly, the crux of your argument against BIP148 is that
it requires the segwit BIP9 activation flag to be set in every block after
Aug 1st, until segwit activates. This will cause miners which have not
upgraded and indicated support for BIP141 (the segwit BIP) to find their
blocks ignored by UASF nodes, at least for the month or two it takes to
activate segwit.
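
For concreteness, the rule can be sketched roughly as follows (a
simplification of BIP148 that ignores the activation-window bookkeeping):

    #include <cstdint>

    // Sketch: until segwit locks in, a BIP148 node only accepts blocks
    // whose version carries the BIP9 prefix (top bits 001) with bit 1,
    // the BIP141 deployment bit, set.
    bool AcceptableToBIP148Node(uint32_t nVersion, bool segwitLockedIn)
    {
        const uint32_t TOP_MASK = 0xE0000000u;
        const uint32_t TOP_BITS = 0x20000000u;
        if (segwitLockedIn) return true;
        return (nVersion & TOP_MASK) == TOP_BITS
            && (nVersion & (1u << 1)) != 0;
    }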

Isn't this however the entire point of BIP148? I understand if you object
to this, but let's be clear that this is a design requirement of the
proposal, not a technical oversight. The alternative you present (new BIP
bit) has the clear downside of not triggering BIP141 activation, and
therefore not enabling the new consensus rules on already deployed full
nodes. BIP148 is making an explicit choice to favor dragging along those
users which have upgraded to BIP141 support over those miners who have
failed to upgrade.

On an aside, I'm somewhat disappointed that you have decided to make a
public statement against the UASF proposal. Not because we disagree -- that
is fine -- but because any UASF must be a grassroots effort and
endorsements (or denouncements) detract from that.

Mark Friedenbach


Re: [bitcoin-dev] Segregated Witness App Development

2016-01-19 Thread Mark Friedenbach via bitcoin-dev
Wouldn't this be entirely on topic for #bitcoin-dev? It's probably better
not to fragment the communication channels and associated infrastructure
(logs, bots, etc.)

On Tue, Jan 19, 2016 at 3:54 AM, Eric Lombrozo via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Hello, folks.
>
> I wanted to let all of you know a new IRC channel has been created called
> #segwit-dev where we welcome all discussion pertaining to integrating and
> supporting segregated witness transactions in wallets as well as comments
> or suggestions for improvement to the spec. Please come join us. :)
>
>
> ——
> Eric


Re: [bitcoin-dev] Capacity increases for the Bitcoin system.

2015-12-20 Thread Mark Friedenbach via bitcoin-dev
I am fully in support of the plan laid out in "Capacity increases for the
bitcoin system".

This plan provides real benefit to the ecosystem in solving a number of
longstanding problems in bitcoin. It improves the scalability of bitcoin
considerably.

Furthermore it is time that we stop bikeshedding, start implementing, and
move forward, lest we lose more developers to the toxic atmosphere this
hard-fork debacle has created.

On Mon, Dec 21, 2015 at 12:33 PM, Pieter Wuille via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> On Tue, Dec 8, 2015 at 6:07 AM, Wladimir J. van der Laan wrote:
> > On Mon, Dec 07, 2015 at 10:02:17PM +, Gregory Maxwell via
> bitcoin-dev wrote:
> >> TL;DR: I propose we work immediately towards the segwit 4MB block
> >> soft-fork which increases capacity and scalability, and recent speedups
> >> and incoming relay improvements make segwit a reasonable risk. BIP9
> >> and segwit will also make further improvements easier and faster to
> >> deploy. We’ll continue to set the stage for non-bandwidth-increase-based
> >> scaling, while building additional tools that would make bandwidth
> >> increases safer long term. Further work will prepare Bitcoin for further
> >> increases, which will become possible when justified, while also
> providing
> >> the groundwork to make them justifiable.
> >
> > Sounds good to me.
>
> Better late than never, let me comment on why I believe pursuing this plan
> is important.
>
> For months, the block size debate, and the apparent need for agreement on
> a hardfork has distracted from needed engineering work, fed the external
> impression that nothing is being done, and generally created a toxic
> environment to work in. It has affected my own productivity and health, and
> I do not think I am alone.
>
> I believe that soft-fork segwit can help us out of this deadlock and get
> us going again. It does not require the pervasive assumption that the
> entire world will simultaneously switch to new consensus rules like a
> hardfork does, while at the same time:
> * Give a short-term capacity bump
> * Show the world that scalability is being worked on
> * Actually improve scalability (as opposed to just scale) by reducing
> bandwidth/storage and indirectly improving the effectiveness of systems
> like Lightning.
> * Solve several unrelated problems at the same time (fraud proofs, script
> extensibility, malleability, ...).
>
> So I'd like to ask the community that we work towards this plan, as it
> allows to make progress without being forced to make a possibly divisive
> choice for one hardfork or another yet.
>
> --
> Pieter
>


Re: [bitcoin-dev] Segregated Witness in the context of Scaling Bitcoin

2015-12-18 Thread Mark Friedenbach via bitcoin-dev
Not entirely correct, no. Edge cases also matter. Segwit is described as
4MB because that is the largest possible combined block size that can be
constructed. BIP 102 + segwit would allow a maximum relay of 8MB, since a
2MB base-size limit implies an 8,000,000 WU weight limit that a degenerate,
nearly-all-witness block could fill. So you have to be confident that an
8MB relay size would be acceptable, even if a
block full of actual transactions would be closer to 3.5MB.

On Fri, Dec 18, 2015 at 6:01 PM, sickpig--- via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Anthony,
>
>
> On Thu, Dec 17, 2015 at 6:55 PM, Anthony Towns via bitcoin-dev <
> bitcoin-dev@lists.linuxfoundation.org> wrote:
>
>> On Thu, Dec 17, 2015 at 04:51:19PM +0100, sickpig--- via bitcoin-dev
>> wrote:
>> > On Thu, Dec 17, 2015 at 2:09 PM, Jorge Timón wrote:
>> > > Unless I'm missing something, 2 mb x4 = 8mb, so bip102 + SW is already
>> > > equivalent to the 2-4-8 "compromise" proposal [...]
>> > isn't SegWit gain ~75%? hence 2mb x 1.75 = 3.5.
>>
>> Segwit as proposed gives a 75% *discount* to witness data with the
>> same limit, so at a 1MB limit, that might give you (eg) 2.05MB made up
>> of 650kB of base block data plus 1.4MB of witness data; where 650kB +
>> 1.4MB/4 = 1MB at the 1MB limit; or 4.1MB made up of 1.3MB of base plus
>> 2.8MB of witness, for 1.3MB+2.8MB/4 = 2MB at a 2MB limit.
>>
>> > 4x is theoric gain you get in case of 2-2 multisig txs.
>>
>> With segregated witness, 2-2 multisig transactions are made up of 94B
>> of base data, plus about 214B of witness data; discounting the witness
>> data by 75% gives 94+214/4=148 bytes. That compares to about 301B for
>> a 2-2 multisig transaction with P2SH rather than segwit, and 301/148
>> gives about a 2.03x gain, not a 4x gain. A 2.05x gain is what I assumed
>> to get the numbers above.
>>
>> You get further improvements with, eg, 3-of-3 multisig, but to get
>> the full, theoretical 4x gain you'd need a fairly degenerate looking
>> transaction.
>>
>> Pay to public key hash with segwit lets you move about half the
>> transaction data into the witness, giving about a 1.6x improvement by
>> my count (eg 1.6MB = 800kB of base data plus 800kB of witness data,
>> where 800kB+800kB/4=1MB), so I think a gain of between 1.6 and 2.0 is
>> a reasonable expectation to have for the proposed segwit scheme overall.
>>
>>
> many thanks for the explanation.
>
> so it should be fair to say that BIP 102 + SW would bring a gain between
> 2*1.6 and 2*2.
>
> Just for the sake of simplicity if we take the middle of the interval we
> could say
> that BIP102 + SW will bring us a max block (virtual) size equal to 1MB * 2
> * 1.8 = 3.6 MB
>
> Is it right?
>
>
>
>


Re: [bitcoin-dev] Segregated Witness in the context of Scaling Bitcoin

2015-12-17 Thread Mark Friedenbach via bitcoin-dev
There are many reasons to support segwit beyond it being a soft-fork. For
example:

* the limitation of non-witness data to no more than 1MB makes the
quadratic scaling costs in large transaction validation no worse than they
currently are;
* redeem scripts in witness use a more accurate cost accounting than
non-witness data (further improvements to this beyond what Pieter has
implemented are possible); and
* segwit provides features (e.g. opt-in malleability protection) which are
required by higher-level scaling solutions.

With that in mind I really don't understand the viewpoint that it would be
better to engage a strictly inferior proposal such as a simple adjustment
of the block size to 2MB.

On Thu, Dec 17, 2015 at 1:32 PM, jl2012 via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> There are at least 2 proposals on the table:
>
> 1. SWSF (segwit soft fork) with 1MB virtual block limit, approximately
> equals to 2MB actual limit
>
> 2. BIP102: 2MB actual limit
>
> Since the actual limits for both proposals are approximately the same, it
> is not a determining factor in this discussion
>
> The biggest advantage of SWSF is its softfork nature. However, its
> complexity is not comparable with any previous softforks we had. It is
> reasonable to doubt if it could be ready in 6 months
>
> For BIP102, although it is a hardfork, it is a very simple one and could
> be deployed with ISM in less than a month. It is even simpler than BIP34,
> 66, and 65.
>
> So we have a very complicated softfork vs. a very simple hardfork. The
> only reason makes BIP102 not easy is the fact that it's a hardfork.
>
> The major criticism for a hardfork is requiring everyone to upgrade. Is
> that really a big problem?
>
> First of all, hardfork is not a totally unknown territory. BIP50 was a
> hardfork. The accident happened on 13 March 2013. Bitcoind 0.8.1 was
> released on 18 March, which only gave 2 months of grace period for everyone
> to upgrade. The actual hardfork happened on 16 August. Everything completed
> in 5 months without any panic or chaos. This experience strongly suggests
> that 5 months is already safe for a simple hardfork. (in terms of
> simplicity, I believe BIP102 is even simpler than BIP50)
>
> Another experience is from BIP66. The 0.10.0 was released on 16 Feb 2015,
> exactly 10 months ago. I analyze the data on https://bitnodes.21.co and
> found that 4600 out of 5090 nodes (90.4%) indicate BIP66 support.
> Considering this is a softfork, I consider this as very good adoption
> already.
>
> With the evidence from BIP50 and BIP66, I believe a 5 months
> pre-announcement is good enough for BIP102. As the vast majority of miners
> have declared their support for a 2MB solution, the legacy 1MB fork will
> certainly be abandoned and no one will get robbed.
>
>
> My primary proposal:
>
> Now - 15 Jan 2016: formally consult the major miners and merchants if they
> support an one-off rise to 2MB. I consider approximately 80% of mining
> power and 80% of trading volume would be good enough
>
> 16 - 31 Jan 2016: release 0.11.3 with BIP102 with ISM vote requiring 80%
> of hashing power
>
> 1 Jun 2016: the first day a 2MB block may be allowed
>
> Before 31 Dec 2016: release SWSF
>
>
>
> My secondary proposal:
>
> Now: Work on SWSF in a turbo mode and have a deadline of 1 Jun 2016
>
> 1 Jun 2016: release SWSF
>
> What if the deadline is not met? Maybe pushing an urgent BIP102 if things
> become really bad.
>
>
> In any case, I hope a clear decision and road map could be made now. This
> topic has been discussed to death. We are just bringing further uncertainty
> if we keep discussing.
>
>
> Matt Corallo via bitcoin-dev wrote on 2015-12-16 15:50:
>
>> A large part of your argument is that SW will take longer to deploy
>> than a hard fork, but I completely disagree. Though I do not agree
>> with some people claiming we can deploy SW significantly faster than a
>> hard fork, once the code is ready (probably a six month affair) we can
>> get it deployed very quickly. It's true the ecosystem may take some
>> time to upgrade, but I see that as a feature, not a bug - we can build
>> up some fee pressure with an immediate release valve available for
>> people to use if they want to pay fewer fees.
>>
>>  On the other hand, a hard fork, while simpler for the ecosystem to
>> upgrade to, is a 1-2 year affair (after the code is shipped, so at
>> least 1.5-2.5 from today if we all put off heads down and work). One
>> thing that has concerned me greatly through this whole debate is how
>> quickly people seem to think we can roll out a hard fork. Go look at
>> the distribution of node versions on the network today and work
>> backwards to get nearly every node upgraded... Even with a year
>> between fork-version-release and fork-activation, we'd still kill a
>> bunch of nodes and instead of reducing their security model, lead them
>> to be outright robbed.
>>
>>

Re: [bitcoin-dev] Capacity increases for the Bitcoin system.

2015-12-12 Thread Mark Friedenbach via bitcoin-dev
A segwit supporting server would be required to support relaying segwit
transactions, although a non-segwit server could at least inform a wallet
of segwit txns observed, even if it doesn't relay all information necessary
to validate.

Non segwit servers and wallets would continue operations as if nothing had
occurred.
> If this means essentially that a soft fork deployment of SegWit will
> require SPV wallet servers to change their logic (or risk not being able to
> send payments) then it does seem to me that a hard fork to deploy this
> non-controversial change is not only cleaner (on the data structure side)
> but safer in terms of the potential to affect the user experience.
>
> — Regards,


On Sat, Dec 12, 2015 at 1:43 AM, Gavin Andresen via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> On Fri, Dec 11, 2015 at 11:18 AM, Jorge Timón  wrote:
>
>> This is basically what I meant by
>>
>> struct hashRootStruct
>> {
>> uint256 hashMerkleRoot;
>> uint256 hashWitnessesRoot;
>> uint256 hashextendedHeader;
>> };
>>
>> but my design doesn't calculate other_root as it appears in your tree (is
>> not necessary).
>>
>> It is necessary to maintain compatibility with SPV nodes/wallets.
>
> Any code that just checks merkle paths up into the block header would have
> to change if the structure of the merkle tree changed to be three-headed at
> the top.
>
> If it remains a binary tree, then it doesn't need to change at all-- the
> code that produces the merkle paths will just send a path that is one step
> deeper.
>
> Plus, it's just weird to have a merkle tree that isn't a binary tree.
>
> --
> --
> Gavin Andresen
>




Re: [bitcoin-dev] Capacity increases for the Bitcoin system.

2015-12-09 Thread Mark Friedenbach via bitcoin-dev
My apologies for the apparent miscommunication earlier. It is of interest
to me that we do the soft-fork necessary to put a commitment in
the most efficient spot possible, in part because that commitment could be
used for other data such as the merged mining auxiliary blocks, which are
very sensitive to proof size.

Perhaps we have a different view of how the commitment transaction would be
generated. Just as GBT doesn't create the coinbase, it was my expectation
that it wouldn't generate the commitment transaction either -- but
generation of the commitment would be easy, requiring either the coinbase
txid 100 blocks back, or the commitment txid of the prior transaction (note
this impacts SPV mining). The truncation shouldn't be an issue because the
commitment txn would not be part of the list of transactions selected by
GBT, and in any case the truncation would change the witness data which
changes the commitment.

On Wed, Dec 9, 2015 at 4:03 PM, Gregory Maxwell via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> On Wed, Dec 9, 2015 at 7:54 AM, Jorge Timón  wrote:
> > From this question one could think that when you said "we can do the
> > cleanup hardfork later" earlier you didn't really meant it. And that
> > you will oppose to that hardfork later just like you are opposing to
> > it now.
> > As said I disagree that making a softfork first and then move the
> > commitment is less disruptive (because people will need to adapt their
> > software twice), but if the intention is to never do the second part
> > then of course I agree it would be less disruptive.
> > How long after the softfork would you like to do the hardfork?
> > 1 year after the softfork? 2 years? never?
>
> I think it would be logical to do as part of a hardfork that moved
> commitments generally; e.g. a better position for merged mining (such
> a hardfork was suggested in 2010 as something that could be done if
> merged mining was used), room for commitments to additional block
> back-references for compact SPV proofs, and/or UTXO set commitments.
> Part of the reason to not do it now is that the requirements for the
> other things that would be there are not yet well defined. For these
> other applications, the additional overhead is actually fairly
> meaningful; unlike the fraud proofs.


Re: [bitcoin-dev] Capacity increases for the Bitcoin system.

2015-12-08 Thread Mark Friedenbach via bitcoin-dev
Greg, if you have actual data showing that putting the commitment in the
last transaction would be disruptive, and how disruptive, that would be
appreciated. Of the mining hardware I have looked at, none of it cared at
all what transactions other than the coinbase are. You need to provide a
path to the coinbase for extranonce rolling, but the witness commitment
wouldn't need to be updated.

I'm sorry but it's not clear how this would be an incompatible upgrade,
disruptive to anything other than the transaction selection code. Maybe I'm
missing something? I'm not familiar with all the hardware or pooling setups
out there.

On Wed, Dec 9, 2015 at 2:29 PM, Gregory Maxwell via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> On Wed, Dec 9, 2015 at 4:44 AM, Ryan Butler  wrote:
> >>I agree, but nothing I have advocated creates significant technical
> >>debt. It is also a bad engineering practice to combine functional
> >>changes (especially ones with poorly understood system wide
> >>consequences and low user autonomy) with structural tidying.
> >
> > I don't think I would classify placing things in consensus critical code
> > when it doesn't need to be as "structural tidying".  Gavin said "pile on"
> > which you took as implying "a lot", he can correct me, but I believe he
> > meant "add to".
>
> Nothing being discussed would move something from consensus critical
> code to not consensus critical.
>
> What was being discussed was the location of the witness commitment;
> which is consensus critical regardless of where it is placed. Should
> it be placed in an available location which is compatible with the
> existing network, or should the block hashing data structure
> immediately be changed in an incompatible way to accommodate it in
> order to satisfy an ascetic sense of purity and to make fraud proofs
> somewhat smaller?
>
> I argue that the size difference in the fraud proofs is not
> interesting, the disruption to the network in an incompatible upgrade
> is interesting; and that if it really were desirable reorganization to
> move the commitment point could be done as part of a separate change
> that changes only the location of things (and/or other trivial
> adjustments); and that proceeding int this fashion would minimize
> disruption and risk... by making the incompatible changes that will
> force network wide software updates be as small and as simple as
> possible.
>
> >> (especially ones with poorly understood system wide consequences and low
> >> user autonomy)
> >
> > This implies there you have no confidence in the unit tests and
> functional
> > testing around Bitcoin and should not be a reason to avoid refactoring.
> > It's more a reason to increase testing so that you will have confidence
> when
> > you refactor.
>
> I am speaking from our engineering experience in a  public,
> world-wide, multi-vendor, multi-version, inter-operable, distributed
> system which is constantly changing and in production contains private
> code, unknown and assorted hardware, mixtures of versions, unreliable
> networks, undisclosed usage patterns, and more sources of complex
> behavior than can be counted-- including complex economic incentives
> and malicious participants.
>
> Even if we knew the complete spectrum of possible states for the
> system, the combinatoric explosion makes complete testing infeasible.
>
> Though testing is essential one cannot "unit test" away all the risks
> related to deploying a new behavior in the network.


Re: [bitcoin-dev] Alternative name for CHECKSEQUENCEVERIFY (BIP112)

2015-11-25 Thread Mark Friedenbach via bitcoin-dev
Looks like I'm the lone dissenting voice here? As the originator of the
name CHECKSEQUENCEVERIFY, perhaps I can explain why the name was
appropriately chosen and why the proposed alternatives don't stand up.

First, the names are purposefully chosen to illustrate what they do:

What does CHECKLOCKTIMEVERIFY do? It verifies the range of tx.nLockTime.
What does CHECKSEQUENCEVERIFY do? It verifies the range of txin.nSequence.

Second, the semantics are not limited to relative lock-time / maturity
only. They both leave open ranges with possible, but currently undefined
future consensus-enforced behavior. We don't know what sort of future
behavior these values might trigger, but the associated opcodes are generic
enough to handle them:

CHECKLOCKTIMEVERIFY will pass an nLockTime between 1985 and 2009, even
though such constraints have no meaning in Bitcoin.
CHECKSEQUENCEVERIFY is explicitly written to permit a 5-byte push operand,
while checking only 17 of the available 39 bits of both the operand and the
nSequence. Indeed the most recent semantic change of CSV was justified in
part because it relaxes all constraints over the values of these bits
freeing them for other purposes in transaction validation and/or future
extensions of the opcode semantics.
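
To make "verifies the range of txin.nSequence" concrete, here is a
simplified sketch of the comparison (per the eventual BIP112 semantics,
with transaction-version and other preconditions omitted):

    #include <cstdint>

    // Simplified sketch of CHECKSEQUENCEVERIFY: the stack operand and the
    // input's nSequence must agree on units (blocks vs. 512-second
    // intervals), and the input must have aged at least as much as the
    // operand demands. Only the type flag and the low 16 value bits are
    // checked; all other bits are left free for future use.
    bool CheckSequence(uint32_t operand, uint32_t txinSequence)
    {
        const uint32_t DISABLE = uint32_t(1) << 31;
        const uint32_t TYPE    = uint32_t(1) << 22;
        const uint32_t MASK    = 0x0000ffff;
        if (operand & DISABLE) return true;       // operand opts out (NOP)
        if (txinSequence & DISABLE) return false; // input not encumbered
        if ((operand & TYPE) != (txinSequence & TYPE)) return false;
        return (operand & MASK) <= (txinSequence & MASK);
    }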

Third, single-byte opcode space is limited. There are fewer than 10 such
opcodes left. Maybe space won't be so precious in a post-segwitness world,
but I don't want to presume that just yet.


As for the alternatives, they capture only the initial use case of
nSequence. My objection would relax if nSequence were renamed, but I think
that would be too disruptive and unnecessary. In any case, the imagined use
cases for CHECKSEQUENCEVERIFY have to do with sequencing execution pathways
of script, so it's not a stretch in meaning. Previously CHECKMATURITYVERIFY
was a hypothesized opcode that directly checked the minimum age of inputs
of a transaction. The indirect naming of CHECKSEQUENCEVERIFY on the other
hand is due to its indirect behavior. RELATIVELOCKTIMEVERIFY was also a
hypothicated opcode that would check a ficticious nRelativeLockTime field,
which does not exist. Again my objection would go away if we renamed
nSequence, but I actually think the nSequence name is better...

On Tue, Nov 24, 2015 at 2:30 AM, Btc Drak via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> BIP68 introduces relative lock-time semantics to part of the nSequence
> field leaving the majority of bits undefined for other future applications.
>
> BIP112 introduces opcode CHECKSEQUENCEVERIFY (OP_CSV) that is specifically
> limited to verifying transaction inputs according to BIP68's relative
> lock-time[1], yet the _name_ OP_CSV is much broader than that. We spent
> months limiting the number of bits used in BIP68 so they would be available
> for future use cases, thus we have acknowledged there will be completely
> different usecases that take advantage of unused nSequence bits.
>
> For this reason I believe the BIP112 should be renamed specifically for
> its usecase, which is verifying the time/maturity of transaction inputs
> relative to their inclusion in a block.
>
> Suggestions:-
>
> CHECKMATURITYVERIFY
> RELATIVELOCKTIMEVERIFY
> RCHECKLOCKTIMEVERIFY
> RCLTV
>
> We could of course softfork additional meaning into OP_CSV each time we
> add new sequence number usecases, but that would become obscure and
> confusing. We have already shown there is no shortage of opcodes so it
> makes no sense to cram everything into one generic opcode.
>
> TL;DR: let's give the BIP112 opcode a name that reflects its actual usecase
> rather than focusing on the bitcoin internals.
>
> [1]
> https://github.com/bitcoin/bitcoin/pull/6564/files#diff-be2905e2f5218ecdbe4e55637dac75f3R1223
>


Re: [bitcoin-dev] CHECKSEQUENCEVERIFY - We need more usecases to motivate the change

2015-10-15 Thread Mark Friedenbach via bitcoin-dev
Adam, there is really no justification I can see to lower the interblock
interval on the Bitcoin blockchain, primarily due to the effects of network
latency. Lowering the interblock interval and raising the block size are
not equal alternatives - you can always get more throughput in bitcoin by
raising the block size than by lowering the interblock time. And that's
without considering the effect shorter intervals would have on e.g. SPV
client bandwidth or sidechain connectivity proofs. So I find it very
unlikely that such granularity would ever be needed on the Bitcoin block
chain, although if were to happen then extra bits from nSequence could be
used in a soft-fork compatible way.

However it is true that various sidechains such as Liquid will have a much
shorter interblock interval than 10min, as well as customer demand for
protocols with shorter timeouts. It would be nice if such systems did not
HAVE to resort to complex bit shifting to support more precision, and if
protocols written for bitcoin could be reused on such systems with minimal
or no modification.

To that end, it might be preferable to move the flag bit indicating use of
seconds from bit 16 to bit 23 and (by convention only) reserve bits 17..22
to provide higher granularity in a sidechain environment. This keeps the
size of a stack push to 3 bytes while also keeping sufficient room for
high-order bits of relative lock-time in a sidechain that supports shorter
block intervals.

Another alternative is to put the units flag in the least significant bit,
which has the advantage of allowing both units of lock-time to make use of
1-2 byte pushes, but the disadvantage of making lock times of 64..127
2-bytes instead of 1-byte.

Thoughts?
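
To illustrate the layout under discussion, a small sketch of my own with
the flag position left as a parameter:

    #include <cstdint>

    // Sketch: decode a BIP68-style relative lock-time where the "units
    // are 512s" flag sits at flagBit (bit 16 in the draft under
    // discussion, bit 23 in the variant proposed above). Parking the flag
    // at bit 23 leaves bits 17..22 free for a sidechain to carry
    // higher-precision low-order bits of the lock-time.
    struct RelLock { bool fSeconds; uint32_t value; };

    RelLock DecodeSequence(uint32_t nSequence, unsigned flagBit)
    {
        RelLock r;
        r.fSeconds = (nSequence >> flagBit) & 1;
        r.value    = nSequence & 0xffff; // 16-bit lock-time value
        return r;
    }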

On Thu, Oct 15, 2015 at 9:37 AM, Adam Back via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Does that pre-judge that block interval would never change from
> 10mins?  Eg say with IBLT or fountain codes etc and security arguments
> for the current limitations of them are found, such that orphan rates
> can remain low in a decentralised way with 1min blocks, then the
> locktime granularity would be coarse relative to the block interval
> (with 512s locktime granularity).
>
> Adam
>
> On 15 October 2015 at 18:27, Btc Drak via bitcoin-dev
>  wrote:
> > Alex,
> >
> > I am sorry for not communicating more clearly. Mark and I discussed your
> > concerns from the last meeting and he made the change. The BIP text still
> > needs to be updated, but the discussed change was added to the PR, albeit
> > squashed making it more non-obvious. BIP68 now explicitly uses 16 bits
> > with a bitmask. Please see the use of SEQUENCE_LOCKTIME_MASK and
> > SEQUENCE_LOCKTIME_GRANULARITY in the PR
> > https://github.com/bitcoin/bitcoin/pull/6312.
> >
> > /* If CTxIn::nSequence encodes a relative lock-time, this mask is
> >  * applied to extract that lock-time from the sequence field. */
> > static const uint32_t SEQUENCE_LOCKTIME_MASK = 0x0000ffff;
> >
> > /* In order to use the same number of bits to encode roughly the
> >  * same wall-clock duration, and because blocks are naturally
> >  * limited to occur every 600s on average, the minimum granularity
> >  * for time-based relative lock-time is fixed at 512 seconds.
> >  * Converting from CTxIn::nSequence to seconds is performed by
> >  * multiplying by 512 = 2^9, or equivalently shifting up by
> >  * 9 bits. */
> > static const int SEQUENCE_LOCKTIME_GRANULARITY = 9;
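As an aside (not part of the quoted pull request), these two constants are
applied to a sequence field as:

    uint32_t nLockDuration = nSequence & SEQUENCE_LOCKTIME_MASK;     // 0..65535
    int64_t nSeconds =
        int64_t(nLockDuration) << SEQUENCE_LOCKTIME_GRANULARITY;     // units of 512s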
> >
> > I am also much happier with this last tightening up of the specification
> > because it removes ambiguity. 512s granularity makes sense within the
> > context of the 10 minute block target.
> >
> > Thank you for spending so much time carefully considering this BIP and
> > reference implementation and please let me know if there there are any
> > remaining nits so we can get those addressed.
> >
> >
> >
> >
> >
> > On Thu, Oct 15, 2015 at 2:47 PM, Alex Morcos via bitcoin-dev
> >  wrote:
> >>
> >> Mark,
> >>
> >> You seemed interested in changing BIP 68 to use 16 bits for sequence
> >> number in both the block and time versions, making time based sequence
> >> numbers have a resolution of 512 seconds.
> >>
> >> I'm in favor of this approach because it leaves aside 14 bits for further
> >> soft forks within the semantics of BIP 68.
> >>
> >> It would be nice to know if you're planning this change, and perhaps
> >> people can hold off on review until things are finalized.
> >>
> >> I'd cast my "vote" against BIP 68 without this change, but am also open to
> >> being convinced otherwise.
> >>
> >> What are other people's opinions on this?
> >>
> >> On Thu, Oct 8, 2015 at 9:38 PM, Rusty Russell via bitcoin-dev
> >>  wrote:
> >>>
> >>> Peter Todd  writes:
> >>> > On Tue, Oct 06, 2015 at 12:28:49PM +1030, Rusty Russell 

Re: [bitcoin-dev] CHECKSEQUENCEVERIFY - We need more usecases to motivate the change

2015-10-05 Thread Mark Friedenbach via bitcoin-dev
Alex, decreasing granularity is a soft-fork, increasing is a hard-fork.
Therefore I've kept the highest possible precision (1 second, 1 block) with
the expectation that if at some point in the future we need more low-order
bits, we can decrease granularity then and soft-fork those bits to other
purposes.

On Mon, Oct 5, 2015 at 3:03 PM, Alex Morcos via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Peter,
>
> Your concern about whether this is the best way to use the nSequence
> field; would that be addressed by providing more high order bits to signal
> different uses of the field?  At a certain point we're really not limiting
> the future at all and there is something to be said for not letting the
> perfect be the enemy of the good.  I think it would be nice to make forward
> progress on BIPS 68,112, and 113 and move towards getting them finalized
> and implemented.  (Although I do suspect they aren't quite ready to go out
> with CLTV)
>
> What is the reasoning for having single second resolution on the time
> based sequence number locks?  Might it not make some sense to reduce that
> resolution and leave more low order bits as well?
>
> Alex
>
> On Sun, Oct 4, 2015 at 8:04 AM, s7r via bitcoin-dev <
> bitcoin-dev@lists.linuxfoundation.org> wrote:
>
>> -BEGIN PGP SIGNED MESSAGE-
>> Hash: SHA256
>>
>> Hi aj,
>>
>> On 10/4/2015 11:35 AM, Anthony Towns via bitcoin-dev wrote:
>> > On Sat, Oct 03, 2015 at 04:30:56PM +0200, Peter Todd via
>> > bitcoin-dev wrote:
>> >> So we need to make the case for two main things: 1) We have
>> >> applications that need a relative (instead of absolute) CLTV 2)
>> >> In addition to RCLTV, we need to implement this via nSequence
>> >
>> >> However I don't think we've done a good job showing why we need
>> >> to implement this feature via nSequence. BIP68 describes the new
>> >> nSequence semantics, and gives the rationale for them as being a
>> >> "Consensus-enforced tx replacement" mechanism, with a
>> >> bidirectional payment channel as an example of this in action.
>> >> However, the bidirectional payment channel concept itself can be
>> >> easily implemented with CLTV alone.
>> >
>> > Do you mean "with RCLTV alone" here?
>> >
>> > RCLTV/OP_CSV is used in lightning commitment transactions to
>> > enforce a delay between publishing the commitment transaction, and
>> > spending the output -- that delay is needed so that the
>> > counterparty has time to prove the commitment was revoked and claim
>> > the outputs as a penalty.
>> >
>>
>> I partially understand - can you please provide a simple Alice and Bob
>> example here with the exact scenario? Thanks. Why is there a need to
>> 'delay between publishing the commitment transaction and spending the
>> output'? If the absolute CLTV script reached its maturity it means
>> something went wrong (e.g. counterparty cheated or got hit by a bus)
>> so what is with the delay time needed for proving that the commitment
>> was revoked? I assume an absolute CLTV script reaching its maturity
>> (nLockTime) is the proof itself that the commitment was revoked - but
>> maybe I'm missing something obvious, sorry if this is the case.
>>
>> > Using absolute CLTV instead would mean that the effective
>> > delay a commitment transaction has decreases over time -- initially
>> > it will be longer than desirable, causing unwanted delays in
>> > claiming funds when no cheating is going on; but over time it will
>> > become too short, which means there is not enough time to prove
>> > cheating (and the channel has to be closed prematurely). You can
>> > trade those things off and pick something that works, but it's
>> > always going to be bad.
>> >
>> I agree, I see the logic here. Absolute CLTV is not necessarily inferior
>> to RCLTV - there are use cases and use cases. For example, you can
>> avoid unnecessary waiting until the nLockTime expires if you use
>> absolute CLTV in combination with P2SH (2/2). Again, it always depends
>> on the use case - it might be a good solution, it might not be such a
>> good solution, but even absolute CLTV alone clearly fixes a lot of
>> things and takes smart contracts to the next level.
>>
>> >> There is a small drawback in that the initial transaction could
>> >> be delayed, reducing the overall time the channel exists, but the
>> >> protocol already assumes that transactions can be reliably
>> >> confirmed within a day - significantly less than the proposed 30
>> >> days duration of the channel.
>> >
>> > Compared to using a CLTV with 30 days duration, with RCLTV a
>> > channel could be available for years (ie 20x longer), but in the
>> > case of problems funds could be reclaimed within hours or days (ie
>> > 30x faster).
>> >
>> Indeed. I for one _need_ CLTV / RCLTV in my day to day use cases, it
>> would be neat to have both, but if I can only have (for the time
>> being) absolute CLTV so be it - it's still a lot better.
>>
>> > But that's all about RCLTV vs CLTV, not about 

Re: [bitcoin-dev] Is it possible for there to be two chains after a hard fork?

2015-09-29 Thread Mark Friedenbach via bitcoin-dev
You don't need to appeal to human psychology. At 75% threshold, it takes
only 25.01% of the hashpower to report but not actually enforce the fork to
cause the majority hashpower to remain on the old chain, but for upgraded
clients to start rejecting the old chain. With 95% the same problem exists
but with a threshold of 45.01%. BIP 66 showed this not to be a hypothetical
concern.
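To spell out the arithmetic: with an activation threshold T and a fraction f
of hashpower signaling without enforcing, only T - f of the hashpower
actually enforces at the trigger point. That enforcing share falls below a
hashpower majority as soon as f > T - 50%: at T = 75%, f = 25.01% leaves
49.99% enforcing; at T = 95%, the corresponding figure is f = 45.01%.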

On Tue, Sep 29, 2015 at 7:17 AM, Jonathan Toomim (Toomim Bros) via
bitcoin-dev  wrote:

> At the 95% threshold, I don't think it would happen unless there was a
> very strong motivating factor, like a small group believing that CLTV was a
> conspiracy run by the NSA agent John Titor to contaminate our precious
> bodily fluids with time-traveling traveler's cheques.
>
> At the 75% threshold, I think it could happen with mostly rational users,
> but even then it's not very likely with most forks. With the blocksize
> issue, there are some people who get very religious about things like
> decentralization or fee markets and think that even 1 MB is too large; I
> could see them making financial sacrifices in order to try to make a
> small-block parallel fork a reality, one that is true to their vision of
> what's needed to make Bitcoin true and pure, or whatever.
>
>
>
>
> On Sep 29, 2015, at 7:04 AM, Gavin Andresen 
> wrote:
>
> I keep seeing statements like this:
>
> On Tue, Sep 29, 2015 at 9:30 AM, Jonathan Toomim (Toomim Bros) via
> bitcoin-dev  wrote:
>
>> As a further benefit to hard forks, anybody who is ideologically opposed
>> to the change can continue to use the old version successfully, as long as
>> there are enough miners to keep the fork alive.
>
>
> ... but I can't see how that would work.
>
> Lets say there is a hard fork, and 5% of miners stubbornly refuse to go
> along with the 95% majority (for this thought experiment, it doesn't matter
> if the old rules or new rules 'win').
>
> Lets further imagine that some exchange decides to support that 5% and
> lets people trade coins from that fork (one of the small altcoin exchanges
> would definitely do this if they think they can make a profit).
>
> Now, lets say I've got a lot of pre-fork bitcoin; they're valid on both
> sides of the fork. I support the 95% chain (because I'm not insane), but
> I'm happy to take people's money if they're stupid enough to give it to me.
>
> So, I do the following:
>
> 1) Create a send-to-self transaction on the 95% fork that is ONLY valid on
> the 95% fork (maybe I CoinJoin with a post-fork coinbase transaction, or
> just move my coins into then out of an exchange's very active hot wallet so
> I get coins with a long transaction history on the 95% side of the fork).
>
> 2) Transfer those same coins to the 5% exchange and sell them for
> whatever price I can get (I don't care how low, it is free money to me-- I
> will still own the coins on the 95% fork).
>
> I have to do step (1) to prevent the exchange from taking the
> transfer-to-exchange transaction and replaying it on the 95% chain.
>
> I don't see any way of preventing EVERYBODY who has coins on the 95% side
> of the fork from doing that. The result would be a huge free-fall in price
> as I, and everybody else, rushes to get some free money from anybody
> willing to pay us to remain ideologically pure.
>
> Does anybody think something else would happen, and do you think that
> ANYBODY would stick to the 5% fork in the face of enormously long
> transaction confirmation times (~3 hours), a huge transaction backlog as
> lots of the 95%'ers try to sell their coins before the price drops, and a
> massive price drop for coins on the 5% fork?
>
> --
> --
> Gavin Andresen
>
>
>


Re: [bitcoin-dev] On bitcoin-dev list admin and list noise

2015-09-29 Thread Mark Friedenbach via bitcoin-dev
This mailing list was never meant to be a place "to hold the bitcoin
development community accountable for its actions [sic]." I know other
developers that have switched to digest-only or unsubscribed. I know if
this became a channel for PR and populist venting as you describe, I would
leave as well. This mailing list is meant to be a place to discuss ongoing
bitcoin development issues relating to the protocol and its instantiation
in bitcoin core. Please don't decrease the utility of this list by
expanding scope.

On Tue, Sep 29, 2015 at 9:38 AM, Santino Napolitano via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> I'm not intending to completely dismiss your concerns but as a data point:
> I read this list daily and it usually takes 15 minutes or less while I
> drink a cup of coffee.
>
> My concern is that this is one of the (maybe *the*) last uncensored
> persisted forums related to technical bitcoin discussion with wide
> viewership. It is increasingly difficult for an average somewhat technical
> person to attempt to hold the bitcoin development community accountable for
> its actions or have their voice heard. It would really be a shame if
> messages like this were relegated to a black hole where things like the
> exchange rate and various scammy spam goes to fester.
>
> >Debates over bitcoin philosophy, broader context, etc. will start seeing
> grumpy list admins squawk about "off-topic!"
>
> Who will draw this line? It's unclear to me who the list admins are.


Re: [bitcoin-dev] Let's deploy BIP65 CHECKLOCKTIMEVERIFY!

2015-09-27 Thread Mark Friedenbach via bitcoin-dev
Agree with all CLTV and nVersionBits points. We should deploy a lock-time
soft-fork ASAP, using the tried and true IsSuperMajority test.

However your information regarding BIPs 68 (sequence numbers), 112
(checksequenceverify) and 113 (median time past) is outdated. Debate
regarding semantics has been settled, and there are working implementations
ready for merge on github. See pull requests #6312, #6564, and #6566. I
don’t know what the hold up has been regarding further reviews and merging,
but it is ready.

If you believe there are reasons #6312, #6564, or #6566 should not be
merged, please speak up. Otherwise it appears there is consensus on these
changes. They are related, and there is no reason to exclude them from
the soft-fork and delay applications using these features by 6-12 months.

On Sun, Sep 27, 2015 at 11:50 AM, Peter Todd via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Summary
> ---
>
> It's time to deploy BIP65 CHECKLOCKTIMEVERIFY.
>
> I've backported the CLTV op-code and an IsSuperMajority() soft-fork to
> the v0.10 and v0.11 branches, pull-reqs #6706 and #6707 respectively. A
> pull-req for git HEAD for the soft-fork deployment has been open since
> June 28th, #6351 - the opcode implementation itself was merged two
> months ago.
>
> We should release a v0.10.3 and v0.11.1 with CLTV and get the ball
> rolling on miner adoption. We have consensus that we need CLTV, we have
> a well tested implementation, and we have a well-tested deployment
> mechanism. We also don't need to wait for other soft-fork proposals to
> catch up - starting the CLTV deployment process isn't going to delay
> future soft-forks, or for that matter, hard-forks.
>
> I think it's possible to safely get CLTV live on mainnet before the end
> of the year. It's time we get this over with and done.
>
>
> Detailed Rational
> -
>
> 1) There is a clear need for CLTV
>
> Escrow and payment channels both benefit greatly from CLTV. In
> particular, payment channel implementations are made significantly
> simpler with CLTV, as well as more secure by removing the malleability
> vulnerability.
>
> Why are payment channels important? There's a lot of BTC out there
> vulnerable to theft that doesn't have to be. For example, just the other
> day I was talking with Nick Sullivan about ChangeTip's vulnerability to
> theft, as well as regulatory uncertainty about whether or not they're a
> custodian of their users' funds. With payment channels ChangeTip would
> only be able to spend as much of a deposit as a user had spent, keeping
> the rest safe from theft. Similarly, in the other direction - ChangeTip
> to their users - in many cases it is feasible to also use payment
> channels to immediately give users control of their funds as they
> receive them, again protecting users and helping make the case that
> they're not a custodian. In the future I'm sure we'll see fancy
> bi-directional payment channels serving this role, but lets not let
> perfect be the enemy of good.
>
>
> 2) We have consensus on the semantics of the CLTV opcode
>
> Pull-req #6124 - the implementation of the opcode itself - was merged
> nearly three months ago after significant peer review and discussion.
> Part of that review process included myself(1) and mruddy(2) writing
> actual demos of CLTV. The chance of the CLTV semantics changing now is
> near-zero.
>
>
> 3) We have consensus that Bitcoin should adopt CLTV
>
> The broad peer review and discussion that got #6124 merged is a clear
> sign that we expect CLTV to be eventually adopted. The question isn't if
> CLTV should be added to the Bitcoin protocol, but rather when.
>
>
> 4) The CLTV opcode and IsSuperMajority() deployment code has been
>thoroughly tested and reviewed
>
> The opcode implementation is very simple, yet got significant review,
> and it has solid test coverage by a suite of tx-(in)valid.json tests.
> The tests themselves have been reviewed by others, resulting in Esteban
> Ordano's pull-req #6368, which added a few more cases.
>
> As for the deployment code, both the actual IsSuperMajority() deployment
> code and associated unit tests were copied nearly line-by-line
> from the successful BIP66. I did this deliberately to make all the peer
> review and testing of the deployment mechanism used in BIP66 be equally
> valid for CLTV.
>
>
> 5) We can safely deploy CLTV with IsSuperMajority()
>
> We've done two soft-forks so far with the IsSuperMajority() mechanism,
> BIP34 and BIP66. In both cases the IsSuperMajority() mechanism itself
> worked flawlessly. As is well-known BIP66 in combination with a large %
> of the hashing power running non-validating "SPV" mining operations did
> lead to a temporary fork, however the root cause of this issue is
> unavoidable and not unique to IsSuperMajority() soft-forks.
>
> Pragmatically speaking, now that miners are well aware of the issue it
> will be easy for them to avoid a repeat of that 

Re: [bitcoin-dev] CI Build for Bitcoin - Some Basic Questions about Gitian and other stuff

2015-09-23 Thread Mark Friedenbach via bitcoin-dev
The builds made by Travis are for the purpose of making sure that the
source code compiles and tests run successfully on all supported platforms.
The binaries are not used anywhere else because Travis is not a trusted
platform.

The binaries on bitcoin.org are built using the gitian process and signed
by a quorum of developers.

On Wed, Sep 23, 2015 at 10:13 AM, Roy Osherove via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Hi Folks.
> I'm trying my hand at creating a reproducible build of my own for bitcoin
> and bitcoin-XT, using TeamCity.
> I believe it is the best way to learn something: To try to build it
> yourself.
> Here is what I think I know so far, and I would love corrections, plus
> questions:
>
>1. Bitcoin is built continuously on travis-CI at
>https://travis-ci.org/bitcoin/bitcoin/
>2.  there are many flavors that are built, but I'm not sure if all of
>them are actually used/necessary. are they all needed, or just there "just
>in case"?
>3.  There is a gitian build file for bitcoin, but is anyone actually
>using it? are the bin files on bitcoin.org taken from that? or the
>travis ci builds? or some other place?
>4. Are there any things that people would love to have in the build
>that do not exist there today? perhaps I can help with that?
>
> Here is what I have now: http://btcdev.osherove.com:8111/
> It does not do the matrix build yet, but it's coming. I'm just wondering
> if all the platforms need to be supported, and if gitian is truly required
> to be used, or used in parallel, or at all..
>
> Feedback appreciated.
>
> --
> Thanks,
>
> Roy Osherove
>
>- @RoyOsherove
>- Read my new book "Notes to a Software Team Leader"
>- Or my new course about Beautiful Builds and Continuous Delivery
>- +1-201-256-5575
>
>
>


Re: [bitcoin-dev] CI Build for Bitcoin - Some Basic Questions about Gitian and other stuff

2015-09-23 Thread Mark Friedenbach via bitcoin-dev
Well the gitian builds are made available on bitcoin.org. If you mean a
build server where gitian builds are automatically done and made available,
well that rather defeats the point of gitian.

The quorum signatures are accumulated here:
https://github.com/bitcoin/gitian.sigs (it's a manual process).

On Wed, Sep 23, 2015 at 10:31 AM, Roy Osherove  wrote:

> Thanks Mark.
> Is there a public server where the gitian builds can be viewed?
> Is there a public server that shows the quorum verifications or that shows
> how to join in on the verification if such a thing is helpful?
>
> On Wed, Sep 23, 2015 at 10:18 AM, Mark Friedenbach 
> wrote:
>
>> The builds made by Travis are for the purpose of making sure that the
>> source code compiles and tests run successfully on all supported platforms.
>> The binaries are not used anywhere else because Travis is not a trusted
>> platform.
>>
>> The binaries on bitcoin.org are built using the gitian process and
>> signed by a quorum of developers.
>>
>> On Wed, Sep 23, 2015 at 10:13 AM, Roy Osherove via bitcoin-dev <
>> bitcoin-dev@lists.linuxfoundation.org> wrote:
>>
>>> Hi Folks.
>>> I'm trying my hand at creating a reproducible build of my own for
>>> bitcoin and bitcoin-XT, using TeamCity.
>>> I believe it is the best way to learn something: To try to build it
>>> yourself.
>>> Here is what I think I know so far, and I would love corrections, plus
>>> questions:
>>>
>>>1. Bitcoin is built continuously on travis-CI at
>>>https://travis-ci.org/bitcoin/bitcoin/
>>>2.  there are many flavors that are built, but I'm not sure if all
>>>of them are actually used/necessary. are they all needed, or just there
>>>"just in case"?
>>>3.  There is a gitian build file for bitcoin, but is anyone actually
>>>using it? are the bin files on bitcoin.org taken from that? or the
>>>travis ci builds? or some other place?
>>>4. Are there any things that people would love to have in the build
>>>that do not exist there today? perhaps I can help with that?
>>>
>>> Here is what I have now: http://btcdev.osherove.com:8111/
>>> It does not do the matrix build yet, but it's coming. I'm just wondering
>>> if all the platforms need to be supported, and if gitian is truly required
>>> to be used, or used in parallel, or at all..
>>>
>>> Feedback appreciated.
>>>
>>> --
>>> Thanks,
>>>
>>> Roy Osherove
>>>
>>>- @RoyOsherove
>>>- Read my new book "Notes to a Software Team Leader"
>>>- Or my new course about Beautiful Builds and Continuous Delivery
>>>- +1-201-256-5575
>>>
>>>
>>>
>
>
> --
> Thanks,
>
> Roy Osherove
>
>- @RoyOsherove
>- Read my new book "Notes to a Software Team Leader"
>- Or my new course about Beautiful Builds and Continuous Delivery
>- +1-201-256-5575
> - Timezone: Eastern Standard Time (New York)
>
>
>


Re: [bitcoin-dev] Scaling Bitcoin conference micro-report

2015-09-20 Thread Mark Friedenbach via bitcoin-dev
Replying to this specific email only because it is the most recent in my
mail client.

Does this conversation have to happen on-list? It seems to have wandered
incredibly far off-topic.

On Sun, Sep 20, 2015 at 5:25 AM, Mike Hearn via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Also, in the US, despite overwhelming resistance on a broad scale,
>> legislation continues to be presented which would violate the 2nd amendment
>> right to keep and bear arms.
>
>
> And yet the proposed legislation goes nowhere, and the USA continues to
> stand alone in having the first world's weakest gun control laws.
>
> You are just supporting my point with this example. Obama would like to
> restrict guns, but can't, because they are too popular (in the USA).
>
> The comparison to BitTorrent is likewise weak: governments hardly care
> about piracy. They care enough to pass laws occasionally, but not enough to
> put serious effort into enforcement. Wake me up when the USA establishes a
> Copyright Enforcement Administration with the same budget and powers as the
> DEA.
>
> Internet based black markets exist only because governments tolerate them
> (for now). A ban on Tor, Bitcoin or both would send them back to the
> pre-2011 state where they were virtually non-existent. Governments tolerate
> this sort of abuse only because they believe, I think correctly, that
> Bitcoin can have great benefits for their ordinary voters and for now are
> willing to let the tech industry experiment.
>
> But for that state of affairs to continue, the benefits must actually
> appear. That requires growth.
>
> I think there's a difference between natural growth and the kind of growth
>> that's being proposed by bank-backed start-ups and pro-censorship entities.
>>
>
> What difference? Are you saying the people who come to Bitcoin because of
> a startup are somehow less "natural" than other users?
>


Re: [bitcoin-dev] Fill-or-kill transaction

2015-09-17 Thread Mark Friedenbach via bitcoin-dev
Note that this violates present assumptions about transaction validity,
unless a constraint also exists that any output of such an expiring
transaction is not spent for at least 100 blocks.

Do you have a clean way of ensuring this?

On Thu, Sep 17, 2015 at 2:41 PM, jl2012 via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Fill-or-kill tx is not a new idea and is discussed in the Scaling Bitcoin
> workshop. In Satoshi's implementation of nLockTime, a huge range of
> timestamp (from 1970 to 2009) is wasted. By exploiting this unused range
> and with compromise in the time resolution, a fill-or-kill system could be
> built with a softfork.
>
> ---
> Two new parameters, nLockTime2 and nKillTime are defined:
>
> nLockTime2 (Range: 0-1,853,010)
> 0: Tx could be confirmed at or after block 420,000
> 1: Tx could be confirmed at or after block 420,004
> .
> .
> 719,999: Tx could be confirmed at or after block 3,299,996 (about 55 years
> from now)
> 720,000: Tx could be confirmed if the median time-past >= 1,474,562,048
> (2016-09-22)
> 720,001: Tx could be confirmed if the median time-past >= 1,474,564,096
> (2016-09-22)
> .
> .
> 1,853,010 (max): Tx could be confirmed if the median time-past >=
> 3,794,966,528 (2090-04-04)
>
> nKillTime (Range: 0-2047)
> if nLockTime2 < 720,000, the tx could be confirmed at or before block
> 420,000 + (nLockTime2 + nKillTime) * 4
> if nLockTime2 >= 720,000, the tx could be confirmed if the median
> time-past <= (nLockTime2 + nKillTime + 1) * 2048
>
> Finally, nLockTime = 500,000,000 + nKillTime + nLockTime2 * 2048
>
> Setting a bit flag in tx nVersion will activate the new rules.
>
> The resolution is 4 blocks or 2048s (34m)
> The maximum confirmation window is 8188 blocks (56.9 days) or 4,192,256s
> (48.5 days)
>
> For example:
> With nLockTime2 = 20 and nKillTime = 100, a tx could be confirmed only
> between block 420,080 and 420,480
> With nLockTime2 = 730,000 and nKillTime = 1000, a tx could be confirmed
> only between median time-past of 1,495,042,048 and 1,497,090,048
>
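A minimal sketch of the packing formula above (the helper name is invented,
not from the proposal):

    #include <cassert>
    #include <cstdint>

    // nLockTime = 500,000,000 + nKillTime + nLockTime2 * 2048
    uint32_t PackFillOrKill(uint32_t nLockTime2, uint32_t nKillTime) {
        assert(nLockTime2 <= 1853010 && nKillTime <= 2047);
        return 500000000u + nKillTime + nLockTime2 * 2048u;
    }
    // The extremes check out: PackFillOrKill(0, 0) == 500,000,000, the
    // legacy timestamp threshold, and PackFillOrKill(1853010, 2047) ==
    // 4,294,966,527, just under the 32-bit maximum quoted below.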
> 
> Why is this a softfork?
>
> Remember this formula: nLockTime = 500,000,000 + nKillTime + nLockTime2 *
> 2048
>
> For height based nLockTime2 (<= 719,999)
>
> For nLockTime2 = 0 and nKillTime = 0, nLockTime = 500,000,000, which means
> the tx could be confirmed after 1970-01-01 with the original lock time
> rule. As the new rule does not allow confirmation until block 420,000, it's
> clearly a softfork.
>
> It is not difficult to see that the growth of nLockTime will never catch
> up nLockTime2.
>
> At nLockTime2 = 719,999 and nKillTime = 2047, nLockTime = 1,974,559,999,
> which means 2016-09-22. However, the new rule will not allow confirmation
> until block 3,299,996 which is decades to go
>
>
>
> For time based nLockTime2 (> 720,000)
>
> For nLockTime2 = 720,000 and nKillTime = 0, nLockTime = 1,974,560,000,
> which means the tx could be confirmed after median time-past 1,474,560,000
> (assuming BIP113). However, the new rule will not allow confirmation until
> 1,474,562,048, therefore a soft fork.
>
> For nLockTime2 = 720,000 and nKillTime = 2047, nLockTime = 1,974,562,047,
> which could be confirmed at 1,474,562,047. Again, the new rule will not
> allow confirmation until 1,474,562,048. The 1 second difference makes it a
> soft fork.
>
> Actually, for every nLockTime2 value >= 720,000, the lock time with the
> new rule must be 1-2048 seconds later than the original rule.
>
> For nLockTime2 = 1,853,010 and nKillTime = 2047, nLockTime =
> 4,294,966,527, which is the highest possible value with the 32-bit nLockTime
>
> 
> User's perspective:
>
> A user wants his tx either filled or killed in about 3 hours. He will set
> a time-based nLockTime2 according to the current median time-past, and set
> nKillTime = 5
>
> A user wants his tx to get confirmed in block 630,000, the first block
> with reward below 10BTC. He is willing to pay a high fee but doesn't want
> it to get into another block. He will set nLockTime2 = 52,500 and nKillTime = 0
>
> 
> OP_CLTV
>
> Time-based OP_CLTV could be upgraded to support time-based nLockTime2.
> However, height-based OP_CLTV is not compatible with nLockTime2. To spend a
> height-based OP_CLTV output, user must use the original nLockTime.
>
> We may need a new OP_CLTV2 which could verify both nLockTime and nLockTime2
>
> 
> 55 years after?
>
> The height-based nLockTime2 will overflow in 55 years. It is very likely a
> hard fork will happen to implement a better fill-or-kill system. If not, we
> could reboot everything with another tx nVersion for another 55 years.
>
>

Re: [bitcoin-dev] Scaling Bitcoin conference micro-report

2015-09-17 Thread Mark Friedenbach via bitcoin-dev
Correction of a correction, in-line:

On Wed, Sep 16, 2015 at 5:51 PM, Matt Corallo via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> > - Many interested or at least willing to accept a "short term bump", a
> > hard fork to modify block size limit regime to be cost-based via
> > "net-utxo" rather than a simple static hard limit.  2-4-8 and 17%/year
> > were debated and seemed "in range" with what might work as a short term
> > bump - net after applying the new cost metric.
>
> I would be careful to point out that hard numbers were deliberately NOT
> discussed. Though some general things were thrown out, they were not
> extensively discussed nor agreed to. I personally think 2-4 is "in
> range", though 8 maybe not so much. Of course it depends on exactly how
> the non-blocksize limit accounting/adjusting is done.
>
> Still, the "greatest common denominator" agreement did not seem to be
> agreeing to an increase which continues over time, but which instead
> limits itself to a set, smooth increase for X time and then requires a
> second hardfork if there is agreement on a need for more blocksize at
> that point.
>

Perhaps it is accurate to say that there wasn't consensus at all except
that (1) we think we can work together on resolving this impasse (yay!),
and (2) it is conceivable that changing from block size to some other
metric might provide the basis for a compromise on near-term numbers.

As an example, I do not think the net-UTXO metric provides any benefit with
respect to scalability, and in some ways makes the situation worse (even
though it helpfully solves an unrelated problem of spammy dust outputs).
But there are other possible metrics and I maintain hope that data will
show the benefit of another metric or other metrics combined with net-UTXO
in a way that will allow us to reach consensus.

As a further example, I also am quite concerned about 2-4-8MB with either
block size or net-UTXO as the base metric. As you say, it depends on how
the non-blocksize limit accounting/adjusting is done... But if a metric
were chosen that addressed my concerns (worst case propagation and
validation time), then I could be in favor of an initial bump that allowed
a larger number of typical transactions in a block.

But where I really need to disagree is on the requirement for a 2nd hard
fork. I will go on record as being definitively against this. While being
conservative with respect to exponentials, I would very much like to make
sure that there is a long-term growth curve as part of any proposal. I am
willing to accept a hard-fork if the adopted plan is too conservative, but
I do not want to be kicking the can down the road to a scheduled 2nd hard
fork that absolutely must occur. That, I feel, could be a more dangerous
outcome than an exponential that outlasts conservative historical trends.

I commend Jeff for writing a Chatham-rules summary of the outcome of some
hallway conversations that occurred. On the whole I think his summary does
represent the majority view of the opinions expressed by core developers at
the workshop. I will caution though that on nearly every issue there were
those who expressed disagreement but did not press the issue, and those who
said nothing and left their opinions unpolled. Nevertheless this summary is
informative as it feeds forwards into the design of proposals that will be
made prior to the Hong Kong workshop in December, in order that they have a
higher likelihood of success.


Re: [bitcoin-dev] Named Bitcoin Addresses

2015-09-10 Thread Mark Friedenbach via bitcoin-dev
Are you aware of the payment protocol?
On Sep 10, 2015 2:12 PM, "essofluffy . via bitcoin-dev" <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Hi Everyone,
>
> An issue I'm sure everyone here is familiar with is the problem concerning
> the fact that Bitcoin addresses are too complex to memorize and share.
> Current Bitcoin addresses can be very intimidating to new users. As Bitcoin
> grows it's necessary to provide a much more user friendly experience to the
> end user. I think that having the capability to assign a unique name to a
> Bitcoin address is in the best interest of Bitcoin and it's users.
> I've recently come up with a method for assigning a unique name to a
> specific Bitcoin address. I'm looking to get some feedback/criticism on
> this method that I have detailed below.
>
> Let’s run through Bob and Alice transacting with a Named Bitcoin Address.
> Bob wants to collect a payment from Alice for a service/good he is
> selling, but Alice wants to pay from her home computer where she securely
> keeps all her Bitcoin. So now Bob needs to give Alice his Bitcoin address
> and because Bob is using a Named Bitcoin Address and a supported wallet he
> can give her an easy to memorize and hard to mess up address. Bob’s address
> is simply ‘SendBitcoinsToBob’ which can easily be written down or
> memorized. Now Alice can go home send the Bitcoin from her own supported
> wallet and be positive that she sent it to Bob.
>
> Let’s look at how Bob’s supported wallet made that address.
>
> First Bob lets his wallet know that he wants to create a new address. In
> response, his wallet simply asks him what he wants that address to be
> named. Bob then enters ‘SendBitcoinsToBob’ as his preferred address name.
> The wallet then lets Bob know if his preferred address name is available.
> If it’s available the name is broadcasted to the network and ready to use.
>
> Now let’s get a little more technical.
>
> When Bob inputs his preferred address name the client has to make sure
> this name hasn’t been taken or else who knows where Alice will be sending
> her Bitcoins. The client does this by referencing a downloaded “directory”
> of names chosen by people using this system. This directory of names are
> transactions sent to an address without a private key (but still viewable
> on the blockchain) with the name appended to the transactions as an
> OP_RETURN output. These transactions are downloaded or indexed, depending
> on whether or not the wallet contains the full Blockchain or is an SPV
> wallet. Because of such a large amount of possible address names a binary
> search method is used to search through all this data efficiently. The
> names could be sorted in two ways, the first being the first character and
> the second being the total length of the name (I will being exploring
> additional methods to make this process more efficient). So now that Bob’s
> client has verified that the name has not been taken and is valid (valid
> meaning it's under 35 bytes long and only using chars 0-9 and a-z) it sends
> a transaction of 1 satoshi and a small fee to the address without a private
> key as talked about earlier. The transaction's OP_RETURN output consists of
> two parts. The implementation version of this method (up to 8 characters)
> and the name itself (up to 32 characters). Once the transaction is
> broadcasted to the network and confirmed the name is ready to be used.
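A sketch of that validity check (the function name is invented; the
32-character cap is the name portion of the OP_RETURN payload described
above):

    #include <string>

    bool IsValidAddressName(const std::string& name) {
        if (name.empty() || name.size() > 32) return false;
        for (char c : name)
            if (!((c >= '0' && c <= '9') || (c >= 'a' && c <= 'z')))
                return false;
        return true;
    }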
>
> Let’s look at how Alice’s supported wallet sends her Bitcoin to Bob’s
> Named Bitcoin Address.
>
> When Alice enters in Bob’s address, ‘SendBitcoinsToBob’ Alice’s client
> references the same “directory” as Bob only on her device and searches for
> the OP_RETURN output of ‘SendBitcoinsToBob’ using a very similar binary
> search method as used for the verification of the availability of an
> address name. If a name isn’t found the client would simply return an
> error. If the name is found then the client will pull the information of
> that transaction and use the address it was sent from as the address to
> send the Bitcoin to.
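The lookup might be sketched as follows, assuming the client keeps the
registrations sorted by name so a standard binary search applies (all names
invented):

    #include <algorithm>
    #include <optional>
    #include <string>
    #include <vector>

    struct NameRecord {
        std::string name;            // from the OP_RETURN output
        std::string fundingAddress;  // address the registration was sent from
    };

    // 'directory' must be sorted by NameRecord::name.
    std::optional<std::string> ResolveName(
        const std::vector<NameRecord>& directory, const std::string& name) {
        auto it = std::lower_bound(
            directory.begin(), directory.end(), name,
            [](const NameRecord& r, const std::string& n) { return r.name < n; });
        if (it == directory.end() || it->name != name) return std::nullopt;
        return it->fundingAddress;  // pay to this address
    }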
>
> Essentially what this idea describes is a method to assign a name to a
> Bitcoin address in a way that is completely verifiable and independent of a
> third party.
>
> Please ask your questions and provide feedback.
>
> - Devin
>
>


Re: [bitcoin-dev] Consensus based block size retargeting algorithm (draft)

2015-08-28 Thread Mark Friedenbach via bitcoin-dev
It is in their individual interest when the larger block they are allowed
to create grants them more fees.
On Aug 28, 2015 4:35 PM, Chris Pacia via bitcoin-dev 
bitcoin-dev@lists.linuxfoundation.org wrote:

 When discussing this with Matt Whitlock earlier we basically concluded the
 block size will never increase under this proposal due to a collective
 action problem. If a miner votes for an increase and nobody else does, the
 blocksize will not increase yet he will still have to pay the difficulty
 penalty.

 It may be in everyone's collective interest to raise the block size but
 not their individual interest.
 On Aug 28, 2015 6:24 PM, Gavin via bitcoin-dev 
 bitcoin-dev@lists.linuxfoundation.org wrote:

 With this proposal, how much would it cost a miner to include an 'extra'
 500-byte transaction if the average block size is 900K and it costs the
 miner 20BTC in electricity/capital/etc to mine a block?

 If my understanding of the proposal is correct, it is:

 500/900000 * 20 = 0.011 BTC

 ... Or $2.50 at today's exchange rate.

 That seems excessive.

 --
 Gavin Andresen
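Gavin's figure can be checked directly; like his estimate, this assumes the
difficulty penalty scales linearly with the extra bytes over the current
average block size:

    double MarginalCostBTC(double extraBytes, double avgBlockBytes,
                           double blockCostBTC) {
        return extraBytes / avgBlockBytes * blockCostBTC;
    }
    // MarginalCostBTC(500, 900e3, 20) ~= 0.011 BTC, about $2.50 near $230/BTC.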


  On Aug 28, 2015, at 5:15 PM, Matt Whitlock via bitcoin-dev 
 bitcoin-dev@lists.linuxfoundation.org wrote:
 
  This is the best proposal I've seen yet. Allow me to summarize:
 
  • It addresses the problem, in Jeff Garzik's BIP 100, of miners selling
 their block-size votes.
  • It addresses the problem, in Gavin Andresen's BIP 101, of blindly
 trying to predict future market needs versus future technological
 capacities.
  • It avoids a large step discontinuity in the block-size limit by
 starting with a 1-MB limit.
  • It throttles changes to ±10% every 2016 blocks.
  • It imposes a tangible cost (higher difficulty) on miners who vote to
 raise the block-size limit.
  • It avoids incentivizing miners to vote to lower the block-size limit.
 
  However, this proposal currently fails to answer a very important
 question:
 
  • What is the mechanism for activation of the new consensus rule? Is it
 when a certain percentage of the blocks mined in a 2016-block retargeting
 period contain valid block-size votes?
 
 
  https://github.com/btcdrak/bips/blob/bip-cbbsra/bip-cbbrsa.mediawiki
 
 
  On Friday, 28 August 2015, at 9:28 pm, Btc Drak via bitcoin-dev wrote:
  Pull request: https://github.com/bitcoin/bips/pull/187


Re: [bitcoin-dev] Consensus based block size retargeting algorithm (draft)

2015-08-28 Thread Mark Friedenbach via bitcoin-dev
Ah, then my mistake. It seemed so similar to an idea that was proposed
before on this mailing list:

http://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-May/008033.html

that my mind just filled in the gaps. I concur -- having miners -- or any
group -- vote on block size is not an intrinsically good thing. The
original proposal due to Greg Maxwell et al was not a mechanism for
voting but rather a feedback control that made the maximum block size
that which generated the most fees.

On Fri, Aug 28, 2015 at 5:00 PM, Jorge Timón jti...@jtimon.cc wrote:

 On Sat, Aug 29, 2015 at 1:38 AM, Mark Friedenbach via bitcoin-dev
 bitcoin-dev@lists.linuxfoundation.org wrote:
  It is in their individual interests when the larger block that is allowed
  for them grants them more fees.

 I realize now that this is not what Greg Maxwell proposed (aka
 flexcap): this is just miner's voting on block size but paying with
 higher difficulty when they vote for bigger blocks.
 As I said several times in other places, miners should not decide on
 the consensus rule to limit mining centralization.
 People keep talking about miners voting on the block size or
 softforking the size down if we went too far. But what if the
 hashing majority is perfectly fine with the mining centralization at
 that point in time?
 Then a softfork won't be useful and we're talking about an anti-miner
 fork (see
 https://github.com/bitcoin/bips/pull/181/files#diff-e331b8631759a4ed6a4cfb4d10f473caR158
 and
 https://github.com/bitcoin/bips/pull/181/files#diff-e331b8631759a4ed6a4cfb4d10f473caR175
 ).

 I believe miners voting on the rule to limit mining centralization is
 a terrible idea.
 It sounds as bad as letting pharma companies write the regulations on
 new drugs safety, letting big food chains deciding on minimum food
 controls or car manufacturers deciding on indirect taxes for fuel.
 That's why I dislike both this proposal and BIP100.



Re: [bitcoin-dev] [BIP-draft] CHECKSEQUENCEVERIFY - An opcode for relative locktime

2015-08-27 Thread Mark Friedenbach via bitcoin-dev
So I've created 2 new repositories with changed rules regarding
sequencenumbers:

https://github.com/maaku/bitcoin/tree/sequencenumbers2

This repository inverts (un-inverts?) the sequence number. nSequence=1
means 1 block relative lock-height. nSequence=LOCKTIME_THRESHOLD means 1
second relative lock-height. nSequence=0x80000000 (most significant bit
set) is not interpreted as a relative lock-time.
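A sketch of that interpretation (LOCKTIME_THRESHOLD is Bitcoin Core's
existing 500,000,000 constant; the helper names are invented):

    static const uint32_t LOCKTIME_THRESHOLD = 500000000;

    bool IsRelativeLockTime(uint32_t nSequence) {
        return (nSequence & 0x80000000) == 0; // MSB set => not a lock-time
    }
    // Only meaningful when IsRelativeLockTime() returns true:
    bool IsTimeBasedLock(uint32_t nSequence) {
        return nSequence >= LOCKTIME_THRESHOLD; // else block-based
    }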

https://github.com/maaku/bitcoin/tree/sequencenumbers3

This repository not only inverts the sequence number, but also interprets
it as a fixed-point number. This allows up to 5 year relative lock times
using blocks as units, and saves 12 low-order bits for future use. Or, up
to about 2 year relative lock times using seconds as units, and saves 4
bits for future use without second-level granularity. More bits could be
recovered from time-based locktimes by choosing a higher granularity (a
soft-fork change if done correctly).

On Tue, Aug 25, 2015 at 3:08 PM, Mark Friedenbach m...@friedenbach.org
wrote:

 To follow up on this, let's say that you want to be able to have up to 1
 year relative lock-times. This choice is somewhat arbitrary and what I
 would like some input on, but I'll come back to this point.

  * 1 bit is necessary to enable/disable relative lock-time.

  * 1 bit is necessary to indicate whether seconds vs blocks as the unit of
 measurement.

  * 1 year of time with 1-second granularity requires 25 bits. However
 since blocks occur at approximately 10 minute intervals on average, having
 a relative lock-time significantly less than this interval doesn't make
 much sense. A granularity of 256 seconds would be greater than the Nyquist
 frequency and requires only 17 bits.

  * 1 year of blocks with 1-block granularity requires 16 bits.

 So time-based relative lock time requires about 19 bits, and block-based
 relative lock-time requires about 18 bits. That leaves 13 or 14 bits for
 other uses.

 Assuming a maximum of 1-year relative lock-times. But what is an
 appropriate maximum to choose? The use cases I have considered have only
 had lock times on the order of a few days to a month or so. However I would
 feel uncomfortable going less than a year for a hard maximum, and am having
 trouble thinking of any use case that would require more than a year of
 lock-time. Can anyone else think of a use case that requires 1yr relative
 lock-time?

 TL;DR

 On Sun, Aug 23, 2015 at 7:37 PM, Mark Friedenbach m...@friedenbach.org
 wrote:

 A power of 2 would be far more efficient here. The key question is how
 long of a relative block time do you need? Figure out what the maximum
 should be ( I don't know what that would be, any ideas?) and then see how
 many bits you have left over.
 On Aug 23, 2015 7:23 PM, Jorge Timón 
 bitcoin-dev@lists.linuxfoundation.org wrote:

 On Mon, Aug 24, 2015 at 3:01 AM, Gregory Maxwell via bitcoin-dev
 bitcoin-dev@lists.linuxfoundation.org wrote:
  Separately, to Mark and Btcdrak: Adding an extra wrinkle to the
  discussion, has any thought been given to representing one block with more
  than one increment?  This would leave additional space for future
  signaling, or allow, for example, higher resolution numbers for a
  sharechain commitment.

 No, I don't think anybody thought about this. I just explained this to
 Pieter using for example, 10 instead of 1.
 He suggested 600 increments so that it is more similar to timestamps.


Re: [bitcoin-dev] [BIP-draft] CHECKSEQUENCEVERIFY - An opcode for relative locktime

2015-08-25 Thread Mark Friedenbach via bitcoin-dev
To follow up on this, let's say that you want to be able to have up to 1
year relative lock-times. This choice is somewhat arbitrary and what I
would like some input on, but I'll come back to this point.

 * 1 bit is necessary to enable/disable relative lock-time.

 * 1 bit is necessary to indicate whether seconds vs blocks as the unit of
measurement.

 * 1 year of time with 1-second granularity requires 25 bits. However since
blocks occur at approximately 10 minute intervals on average, having a
relative lock-time significantly less than this interval doesn't make much
sense. A granularity of 256 seconds would be greater than the Nyquist
frequency and requires only 17 bits.

 * 1 year of blocks with 1-block granularity requires 16 bits.

So time-based relative lock time requires about 19 bits, and block-based
relative lock-time requires about 18 bits. That leaves 13 or 14 bits for
other uses.
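Spelling the bit budget out: one year is 31,536,000 seconds, which at
256-second granularity is about 123,188 units and fits in 17 bits (2^17 =
131,072); one year of blocks at 10-minute spacing is about 52,560 and fits
in 16 bits (2^16 = 65,536). With the enable and units flags, that is
1 + 1 + 17 = 19 bits for time and 1 + 1 + 16 = 18 bits for blocks, leaving
13 or 14 of the 32 bits free.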

Assuming a maximum of 1-year relative lock-times. But what is an
appropriate maximum to choose? The use cases I have considered have only
had lock times on the order of a few days to a month or so. However I would
feel uncomfortable going less than a year for a hard maximum, and am having
trouble thinking of any use case that would require more than a year of
lock-time. Can anyone else think of a use case that requires 1yr relative
lock-time?

TL;DR

On Sun, Aug 23, 2015 at 7:37 PM, Mark Friedenbach m...@friedenbach.org
wrote:

 A power of 2 would be far more efficient here. The key question is how
 long of a relative block time do you need? Figure out what the maximum
 should be ( I don't know what that would be, any ideas?) and then see how
 many bits you have left over.
 On Aug 23, 2015 7:23 PM, Jorge Timón 
 bitcoin-dev@lists.linuxfoundation.org wrote:

 On Mon, Aug 24, 2015 at 3:01 AM, Gregory Maxwell via bitcoin-dev
 bitcoin-dev@lists.linuxfoundation.org wrote:
  Separately, to Mark and Btcdrak: Adding an extra wrinkle to the
  discussion, has any thought been given to representing one block with more
  than one increment?  This would leave additional space for future
  signaling, or allow, for example, higher resolution numbers for a
  sharechain commitment.

 No, I don't think anybody thought about this. I just explained this to
 Pieter using for example, 10 instead of 1.
 He suggested 600 increments so that it is more similar to timestamps.


Re: [bitcoin-dev] [BIP-draft] CHECKSEQUENCEVERIFY - An opcode for relative locktime

2015-08-23 Thread Mark Friedenbach via bitcoin-dev
A power of 2 would be far more efficient here. The key question is how long
of a relative block time do you need? Figure out what the maximum should be
( I don't know what that would be, any ideas?) and then see how many bits
you have left over.
On Aug 23, 2015 7:23 PM, Jorge Timón 
bitcoin-dev@lists.linuxfoundation.org wrote:

 On Mon, Aug 24, 2015 at 3:01 AM, Gregory Maxwell via bitcoin-dev
 bitcoin-dev@lists.linuxfoundation.org wrote:
  Separately, to Mark and Btcdrak: Adding an extra wrinkle to the
  discussion, has any thought been given to representing one block with more
  than one increment?  This would leave additional space for future
  signaling, or allow, for example, higher resolution numbers for a
  sharechain commitment.

 No, I don't think anybody thought about this. I just explained this to
 Pieter using for example, 10 instead of 1.
 He suggested 600 increments so that it is more similar to timestamps.


Re: [bitcoin-dev] CLTV/CSV/etc. deployment considerations due to XT/Not-BitcoinXT miners

2015-08-20 Thread Mark Friedenbach via bitcoin-dev
No, the nVersion would be >= 4, so that we don't waste any version values.
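A sketch of the masked test under discussion (the helper name is invented):
the XT/Not-Bitcoin-XT bits 0x20000007 are cleared before the comparison, so
Bitcoin Core miners signal with nVersion=8 while XT blocks count as not
signaling.

    bool SignalsSoftFork(int32_t nVersion) {
        return (nVersion & ~0x20000007) >= 4;
    }
    // SignalsSoftFork(8)          == true  (Core miner, 0b1000)
    // SignalsSoftFork(0x20000007) == false (XT / Not-Bitcoin-XT)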

On Thu, Aug 20, 2015 at 10:32 AM, jl2012 via bitcoin-dev 
bitcoin-dev@lists.linuxfoundation.org wrote:

 Peter Todd via bitcoin-dev 於 2015-08-19 01:50 寫到:



 2) nVersion mask, with IsSuperMajority()

 In this option the nVersion bits set by XT/Not-Bitcoin-XT miners would
 be masked away, prior to applying standard IsSuperMajority() logic:

 block.nVersion & ~0x20000007

 This means that CLTV/CSV/etc. miners running Bitcoin Core would create
 blocks with nVersion=8, 0b1000. From the perspective of the
 CLTV/CSV/etc.  IsSuperMajority() test, XT/Not-Bitcoin-XT miners would be
 advertising blocks that do not trigger the soft-fork.

 For the perpose of soft-fork warnings, the highest known version can
 remain nVersion=8, which is triggered by both XT/Not-Bitcoin-XT blocks
 as well as a future nVersion bits implementation. Equally,
 XT/Not-Bitcoin-XT soft-fork warnings will be triggered, by having an
 unknown bit set.

 When nVersion bits is implemented by the Bitcoin protocol, the plan of
 setting the high bits to 0b001 still works. The three lowest bits will
 be unusable for some time, but will be eventually recoverable as
 XT/Not-Bitcoin-XT mining ceases.

 Equally, further IsSuperMajority() softforks can be accomplished with
 the same masking technique.

 This option does complicate the XT-coin protocol implementation in the
 future. But that's their problem, and anyway, the maintainers
 (Hearn/Andresen) has strenuously argued(5) against the use of soft-forks
 and/or appear to be in favor of a more centralized mandatory update
 schedule.(6)


 If you are going to mask bits, would you consider masking all bits except
 the 4th bit, so other fork proposals may use other bits for voting
 concurrently?

 And as I understand, the masking is applied only during the voting stage?
 After the softfork is fully enforced with 95% support, the nVersion will be
 simply >= 8, without any masking?



Re: [bitcoin-dev] CLTV/CSV/etc. deployment considerations due to XT/Not-BitcoinXT miners

2015-08-19 Thread Mark Friedenbach via bitcoin-dev
We can use nVersion & 0x8 to signal support, while keeping the consensus
rule as nVersion >= 4, right? That way we don't waste a bit after this all
clears up.
On Aug 18, 2015 10:50 PM, Peter Todd via bitcoin-dev 
bitcoin-dev@lists.linuxfoundation.org wrote:

 Deployment of the proposed CLTV, CSV, etc. soft-forks has been recently
 complicated by the existence of XT(1) and Not-Bitcoin-XT(2) miners. Both
 mine blocks with nVersion=0x20000007, which would falsely trigger the
 previously suggested implementation using the IsSuperMajority()
 mechanism and nVersion=4 blocks. Additionally while the
 XT/Not-Bitcoin-XT software claims to support Wuille/Todd/Maxwell's
 nVersion soft-fork mechanism(3) a key component of it - fork
 deadlines(3) - is not implemented.


 XT/Not-Bitcoin-XT behavior
 --

 Both implementations produce blocks with nVersion=0x20000007,
 or in binary: 0b001...111

 Neither implementation supports a fork deadline; both Not-Bitcoin-XT and
 XT will produce blocks with those bits set indefinitely under any
 circumstance, with the proviso that while XT has a hashing power
 majority, blocks it produces might not be part of the Bitcoin blockchain
 after Jan 11th 2016. (though this can flap back and forth if reorgs
 happen)

 Curiously the BIP101 draft was changed(4) at the last minute from using
 the nVersion bits compliant 0x20000004 block nVersion, to using two more
 bits unnecessarily. The rational for doing this is unknown; the git
 commit message associated with the change suggested compatibility
 concerns, but what the concerns actually were isn't specified. Equally
 even though implementing the fork deadline would be very each in the XT
 implementation, this was not done. (the XT codebase has had almost no
 peer review)


 Options for CLTV/CSV/etc. deployment
 

 1) Plain IsSuperMajority() with nVersion=4

 This option can be ruled out immediately due to the high risk of
 premature triggering, without genuine 95% miner support.


 2) nVersion mask, with IsSuperMajority()

 In this option the nVersion bits set by XT/Not-Bitcoin-XT miners would
 be masked away, prior to applying standard IsSuperMajority() logic:

 block.nVersion & ~0x20000007

 This means that CLTV/CSV/etc. miners running Bitcoin Core would create
 blocks with nVersion=8, 0b1000. From the perspective of the
 CLTV/CSV/etc.  IsSuperMajority() test, XT/Not-Bitcoin-XT miners would be
 advertising blocks that do not trigger the soft-fork.

 For the purpose of soft-fork warnings, the highest known version can
 remain nVersion=8, which is triggered by both XT/Not-Bitcoin-XT blocks
 as well as a future nVersion bits implementation. Equally,
 XT/Not-Bitcoin-XT soft-fork warnings will be triggered by having an
 unknown bit set.

 When nVersion bits is implemented by the Bitcoin protocol, the plan of
 setting the high bits to 0b001 still works. The three lowest bits will
 be unusable for some time, but will be eventually recoverable as
 XT/Not-Bitcoin-XT mining ceases.

 Equally, further IsSuperMajority() softforks can be accomplished with
 the same masking technique.

 This option does complicate the XT-coin protocol implementation in the
 future. But that's their problem, and anyway, the maintainers
 (Hearn/Andresen) have strenuously argued(5) against the use of soft-forks
 and/or appear to be in favor of a more centralized mandatory update
 schedule.(6)
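
A minimal sketch of the masked test, assuming the 0x20000007 mask above
(the helper name is hypothetical; IsSuperMajority() itself would be
unchanged, only the version value fed into it differs):

    // Sketch only: strip the XT/Not-Bitcoin-XT bits before comparing
    // against the soft-fork version threshold.
    #include <cstdint>

    static const int32_t XT_MASK = 0x20000007;

    bool CountsTowardSoftFork(int32_t nVersion)
    {
        // Bits 0-2 and bit 29 are masked away, so the lowest version that
        // can pass is 8 (0b1000) -- exactly what upgraded miners would set.
        return (nVersion & ~XT_MASK) >= 4;
    }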


 3) Full nVersion bits implementation

 The most complex option would be to deploy via a full nVersion bits
 implementation using flag bit #4 to trigger the fork. Compliant miners
 would advertise 0x20000008 initially, followed by 0x20000000 once the
 fork had triggered. The lowest three bits would be unusable for forks
 for some time, although they could be eventually recovered as
 XT/Not-Bitcoin-XT mining ceases.
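
A sketch of the corresponding checks under this option (helper names are
hypothetical; the bit values come from the paragraph above):

    // Sketch only: top three bits 001 mark an nVersion-bits block, and
    // flag bit #4 (value 0x8) signals the fork. Miners would advertise
    // 0x20000008 while signaling, then 0x20000000 once triggered.
    #include <cstdint>

    bool UsesVersionBits(int32_t v) { return (v & 0xE0000000) == 0x20000000; }
    bool SignalsForkBit(int32_t v)  { return UsesVersionBits(v) && (v & 0x8) != 0; }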

 The main disadvantage of this option is high initial complexity - the
 reason why IsSuperMajority() was suggested for CLTV/CSV in the first
 place. That said, much of the code required has been implemented in XT
 for the BIP101 hard-fork logic, although as mentioned above, the code
 has had very little peer review.


 References
 --

 1) https://github.com/bitcoinxt/bitcoinxt

 2) https://github.com/xtbit/notbitcoinxt

 3) Version bits proposal, Pieter Wuille, May 26th 2015,
 Bitcoin-development mailing list,
 http://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-May/008282.html
 https://gist.github.com/sipa/bf69659f43e763540550

 4)
 https://github.com/bitcoin/bips/commit/3248c9f67bd7fcd1d05b8db7c5c56e4788deebfe

 5) On consensus and forks - What is the difference between a hard and
 soft fork?, Mike Hearn, Aug 12th 2015,
 https://medium.com/@octskyward/on-consensus-and-forks-c6a050c792e7

 6) 2013 San Jose Bitcoin conference developer round-table

 --
 'peter'[:-1]@petertodd.org
 0402fe6fb9ad613c93e12bddfc6ec02a2bd92f002050594d

 

Re: [bitcoin-dev] BIP: Using Median time-past as endpoint for locktime calculations

2015-08-18 Thread Mark Friedenbach via bitcoin-dev
You are absolutely correct! My apologies for the oversight in editing. If
you could dig up the link though that would be really helpful.

On Tue, Aug 18, 2015 at 6:04 PM, Peter Todd via bitcoin-dev 
bitcoin-dev@lists.linuxfoundation.org wrote:

 On Tue, Aug 18, 2015 at 02:22:10AM +0100, Thomas Kerin via bitcoin-dev
 wrote:
  ==Acknowledgements==
 
  Mark Friedenbach for designing and authoring the reference
  implementation of this BIP.
 
  Thomas Kerin authored this BIP document.

 IIRC Gregory Maxwell came up with the actual idea in question, during a
 discussion between myself and Luke-Jr about incentives related to nTime
 on #bitcoin-dev or #bitcoin-wizards. He should get credit for it; I'll
 see if I can dig up a link to the discussion for historical interest.

 --
 'peter'[:-1]@petertodd.org
 0402fe6fb9ad613c93e12bddfc6ec02a2bd92f002050594d



Re: [bitcoin-dev] Minimum Block Size

2015-08-16 Thread Mark Friedenbach via bitcoin-dev
Levin, it is a complicated issue for which there isn't an easy answer. Part
of the issue is that block size doesn't actually measure resource usage
very reliably. It is possible to support a much higher volume of typical
usage transactions than transactions specifically constructed to cause DoS
issues. But if block size is the knob being tweaked, then you must design
for the DoS worst case, not the average/expected use case.

Additionally, there is an issue of time horizons and what presumed
improvements are made to the client. Bitcoin Core today can barely handle
1MB blocks, but that's an engineering limitation. So are we assuming fixes
that aren't actually deployed yet? Should we raise the block size before
that work is tested and its performance characteristics validated?

It's a complicated issue without easy answers, and that's why you're not
seeing straightforward statements of 2MB, 8MB, or 20MB from most of
the developers.

But that's not to say that people aren't doing anything. There is a
workshop being organized for September 12-13th that will cover many of
these complicated issues. There will be a follow-on workshop in the
Nov/Dec timeframe in which specific proposals will be discussed. I
encourage you to participate:

http://scalingbitcoin.org/

On Sun, Aug 16, 2015 at 9:41 AM, Levin Keller via bitcoin-dev 
bitcoin-dev@lists.linuxfoundation.org wrote:

 Hey everyone,

 as with the current max block size debate I was wondering: Is anyone
 here in favor of a minimum block size (say 2 MB or so)? If so I would be
 interested in an exchange (maybe off-list) of ideas. I am in favor of a
 lower limit and am giving it quite a bit of thought at the moment.

 Cheers

 Levin



Re: [bitcoin-dev] Bitcoin XT 0.11A

2015-08-15 Thread Mark Friedenbach via bitcoin-dev
I would like very much to know how it is that we're supposed to be making
money off of lightning, and therefore how it represents a conflict of
interest. Apparently there is tons of money to be made in releasing
open-source protocols! I would hate to miss out on that.

We are working on lightning because Mike of all people said, essentially,
"if you're so fond of micropayment channels, why aren't you working on
them?" And he was right! So we looked around and found the best proposal
and funded it.
On Aug 15, 2015 3:28 PM, Ken Friece via bitcoin-dev 
bitcoin-dev@lists.linuxfoundation.org wrote:

 I know full well who works for Blockstream and I know you're not one of
 those folks. The Blockstream core devs are very vocal against a reasonable
 blocksize increase (17% growth per year in Pieter's BIP is not what I
 consider reasonable because it doesn't come close to keeping up with
 technological increases). I think we can both agree that more on-chain
 space means less demand for lightning, and vice versa, which is a blatant
 conflict of interest.

 I'm also trying to figure out how things like lightning are not competing
 directly with miners for fees. More off-chain transactions means less
 blockchain demand, which would lower on-chain fees. I'm not sure what is
 controversial about that statement.

 The lightning network concept is actually a brilliant way to take fees
 away from miners without having to make any investment at all in SHA-256
 ASIC mining hardware.

 On Sat, Aug 15, 2015 at 6:16 PM, Eric Lombrozo elombr...@gmail.com
 wrote:


 On Aug 15, 2015, at 3:01 PM, Ken Friece via bitcoin-dev 
 bitcoin-dev@lists.linuxfoundation.org wrote:

 What are you so afraid of, Eric? If Mike's fork is successful, consensus
 is reached around larger blocks. If it is rejected, the status quo will
 remain for now. Network consensus, NOT CORE DEVELOPER CONSENSUS, is the
 only thing that matters, and those that go against network consensus will
 be severely punished with complete loss of income.


 I fully agree that core developers are not the only people who should
 have a say in this. But again, we’re not talking about merely forking some
 open source project - we’re talking about forking a ledger representing
 real assets that real people are holding…and I think it’s fair to say that
 the risk of permanent ledger forks far outweighs whatever benefits any
 change in the protocol might bring. And this would be true even if there
 were unanimous agreement that the change is good (which there clearly IS
 NOT in this case) but the deployment mechanism could still break things.

 If anything we should attempt a hard fork with a less contentious change
 first, just to test deployability.

 I'm not sure who appointed the core devs some sort of Bitcoin Gods that
 can hold up any change that they happen to disagree with. It seems like the
 core devs are scared to death that the bitcoin network may change without
 their blessing, so they go on and on about how terrible hard forks are.
 Hard forks are the only way to keep core devs in check.


 Again, let’s figure out a hard fork mechanism and test it with a far less
 contentious change first

 Despite significant past technical bitcoin achievements, two of the most
 vocal opponents to a reasonable blocksize increase work for a company
 (Blockstream) that stands to profit directly from artificially limiting the
 blocksize. The whole situation reeks. Because of such a blatant conflict of
 interest, the ethical thing to do would be for them to either resign from
 Blockstream or immediately withdraw themselves from the blocksize debate.
 This is the type of stuff that I hoped would end with Bitcoin, but alas, I
 guess human nature never changes.


 For the record, I do not work for Blockstream. Neither do a bunch of
 other people who have published a number of concerns. Very few of the
 concerns I’ve seen from the technical community seem to be motivated
 primarily by profit motives.

 It should also be pointed out that *not* making drastic changes is the
 default consensus policy…and the burden of justifying a change falls on
 those who want to make the change. Again, the risk of permanent ledger
 forks far outweighs whatever benefits protocol changes might bring.

 Personally, I think miners should give Bitcoin XT a serious look. Miners
 need to realize that they are in direct competition with the lightning
 network and sidechains for fees. Miners, ask yourselves if you think you'll
 earn more fees with 1 MB blocks and more off-chain transactions or with 8
 MB blocks and more on-chain transactions…


 Miners are NOT in direct competition with the lightning network and
 sidechains - these claims are patently false. I recommend you take a look
 at these ideas and understand them a little better before trying to make
 any such claims. Again, I do not work for Blockstream…and my agenda in this
 post is not to promote either of these ideas…but with all due respect, I do
 not think 

Re: [bitcoin-dev] Fees and the block-finding process

2015-08-11 Thread Mark Friedenbach via bitcoin-dev
On Mon, Aug 10, 2015 at 10:34 PM, Thomas Zander via bitcoin-dev 
bitcoin-dev@lists.linuxfoundation.org wrote:

 So, while LN is written, rolled out and tested, we need to respond with
 bigger
 blocks.  8Mb - 8Gb sounds good to me.


This is where things diverge. It's fine to pick a new limit or growth
trajectory. But defend it with data and reasoned analysis.

Can you at least understand the conservative position here? "1MB sounds
good to me" is how we got into this mess. We must make sure that we avoid
making the same mistakes again, creating more or worse problems than we are
solving.


Re: [bitcoin-dev] Fees and the block-finding process

2015-08-11 Thread Mark Friedenbach via bitcoin-dev
On Mon, Aug 10, 2015 at 11:31 PM, Thomas Zander via bitcoin-dev 
bitcoin-dev@lists.linuxfoundation.org wrote:

 On Monday 10. August 2015 23.03.39 Mark Friedenbach wrote:
  This is where things diverge. It's fine to pick a new limit or growth
  trajectory. But defend it with data and reasoned analysis.

 We currently serve about 0.007% of the world population sending maybe one
 transaction a month.
 This can only go up.

 There are about 20 currencies in the world that are unstable and showing
 early
 signs of hyperinflation. If even small percentage of these people cash-out
 and
 get Bitcoins for their savings you'd have the amount of people using
 Bitcoin
 as savings go from maybe half a million to 10 million in the space of a
 couple
 of months. Why so fast? Because all the world currencies are linked.
 Practically all currencies follow the USD, and while that one may stay
 robust
 and standing, the linkage has been shown in the past to cause
 chain-effects.

 It is impossible to predict how much uptake Bitcoin will take, but we have
 seen big rises in price as Cyprus had a bailin and then when Greece first
 showed bad signs again.
 Lets do our due diligence and agree that in the current world economy there
 are sure signs that people are considering Bitcoin on a big scale.

 Bigger amount of people holding Bitcoin savings won't make the transaction
 rate go up very much, but if you have feet on the ground you already see
 that
 people go back to barter in countries like Poland, Ireland, Greece etc.
 And Bitcoin will be an alternative too good to ignore.  Then transaction
 rates
 will go up. Dramatically.

 If you are asking for numbers, that is a bit tricky. Again; we are at
 0.007%... That's like a f-ing rounding error in the world economy. You can't
 reason from that. Its like using a float to do calculations that you should
 have done in a double and getting weird output.

 Bottom line is that a maximum size of 8Mb blocks is not that odd. Because
 a 20
 times increase is very common in a company that is about 6 years old.
 For instance Android was about that age when it started to get shipped by
 non-
 Google companies. There the increase was substantially bigger and the
 company
 backing it was definitely able to change direction faster than the Bitcoin
 oiltanker can change direction.

 ...

 Another metric to remember; if you follow hackernews (well, the incubator
 more
 than the linked articles) you'd be exposed to the thinking of these
 startups.
 Their only criteria is growth. and this is rather substantial growth. Like
 150% per month.  Naturally, most of these build on top of html or other
 existing technologies.  But the point is that exponential growth is
 expected
 in any startup.  They typically have a much much more aggressive timeline,
 though. Every month instead of every year.
 Having exponential growth in the blockchain is really not odd and even if
 we
 have LN or sidechains or the next changetip, this space will be used. And
 we
 will still have scarcity.


I'm sorry, I really don't want to sound like a jerk, but not a single word
of that mattered. Yes we all want Bitcoin to scale such that every person
in the world can use it without difficulty. However if that were all that
we cared about then I would be remiss if I did not point out that there are
plenty of better, faster, and cheaper solutions to finding global consensus
over a payment ledger than Bitcoin. Architectures which are algorithmically
superior in their scaling properties. Indeed they are already implemented
and you can use them today:

https://www.stellar.org/
http://opentransactions.org/

So why do I work on Bitcoin, and why do I care about the outcome of this
debate? Because Bitcoin offers one thing, and one thing only which
alternative architectures fundamentally lack: policy neutrality. It can't
be censored, it can't be shut down, and the rules cannot change from
underneath you. *That* is what Bitcoin offers that can't be replicated at
higher scale with a SQL database and an audit log.

It follows then, that if we make a decision now which destroys that
property, which makes it possible to censor bitcoin, to deny service, or to
pressure miners into changing rules contrary to user interests, then
Bitcoin is no longer interesting. We might as well get rid of mining at
that point and make Bitcoin look like Stellar or Open-Transactions because
at least then we'd scale even better and not be pumping millions of tons of
CO2 into the atmosphere from running all those ASICs.

On the other side, 3TB hard drives are sold, which take 8MB blocks without
 problems.


Straw man, storage is not an issue.


 You can buy broadband in every relevant country that easily supports the
 bandwidth we need. (remember we won't jump to 8Mb in a day, it will likely
 take at least 6 months).


Neither one of those assertions is clear. Keep in mind the goal is to have
Bitcoin survive active censorship. Presumably that means being able to run
a 

Re: [bitcoin-dev] What Lightning Is

2015-08-09 Thread Mark Friedenbach via bitcoin-dev
Tom, you appear to be misunderstanding how lightning network and
micropayment hub-and-spoke models in general work.

 But neither can Bob receive money, unless payment hub has
advanced it to the channel (or (2) below applies).  Nothing requires the
payment hub to do this.

On the contrary the funds were advanced by the hub on the creation of the
channel. There is no credit involved. If the funds aren't already available
for Bob to immediately claim his balance, the payment doesn't go through in
the first place.
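
A toy model of that constraint (illustrative only, not the actual Lightning
protocol; the structure and names are invented for exposition):

    // Sketch: the hub's capacity is locked up when the channel opens. A
    // payment to Bob either fits within that prefunded capacity or fails
    // outright -- at no point is anything advanced on trust.
    #include <cstdint>

    struct Channel {
        int64_t hub_sat; // funds the hub deposited at channel creation
        int64_t bob_sat; // funds currently claimable by Bob
    };

    bool Receive(Channel& c, int64_t amount) {
        if (amount > c.hub_sat) return false; // capacity exhausted: no credit
        c.hub_sat -= amount;
        c.bob_sat += amount;
        return true;
    }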

On Sun, Aug 9, 2015 at 11:46 AM, Tom Harding via bitcoin-dev 
bitcoin-dev@lists.linuxfoundation.org wrote:

 On 8/4/2015 4:27 AM, Pieter Wuille via bitcoin-dev wrote:

  Don't turn Bitcoin into something uninteresting, please.

 Consider how Bob will receive money using the Lightning Network.

 Bob receives a payment by applying a contract to his local payment
 channel, increasing the amount payable to him when the channel is closed.

 There are two possible sources of funding for Bob's increased claim.
 They can appear alone, or in combination:


 Funding Source (1)
 A deposit from Bob's payment hub

 Bob can receive funds, if his payment hub has made a deposit to the
 channel.  Another name for this is credit.

 This credit has no default risk: Bob cannot just take payment hub's
 deposit. But neither can Bob receive money, unless payment hub has
 advanced it to the channel (or (2) below applies).  Nothing requires the
 payment hub to do this.

 This is a 3rd-party dependency totally absent with plain old bitcoin.
 It will come with a fee and, in an important way, it is worse than the
 current banking system.  If a bank will not even open an account for Bob
 today, why would a payment hub lock up hard bitcoin to allow Bob to be
 paid through a Poon-Dryja channel?


 Funding Source (2)
 Bob's previous spends

 If Bob has previously spent from the channel, decreasing his claim on
 its funds (which he could have deposited himself), that claim can be
 re-increased.

 To avoid needing credit (1), Bob has an incentive to consolidate
 spending and income in the same payment channel, just as with today's
 banks.  This is at odds with the idea that Bob will have accounts with
 many payment hubs.  It is an incentive for centralization.


 With Lightning Network, Bob will need a powerful middleman to send and
 receive money effectively.  *That* is uninteresting to me.




Re: [bitcoin-dev] If you had a single chance to double the transactions/second Bitcoin allows...

2015-08-07 Thread Mark Friedenbach via bitcoin-dev
Then I would suggest working on payment channel networks. No decrease of
the interblock time will ever compete with the approximately instant time
it takes to validate a microchannel payment.

On Fri, Aug 7, 2015 at 4:08 PM, Sergio Demian Lerner 
sergio.d.ler...@gmail.com wrote:

 In some rare occasions in everyday life, what matters is seconds. Like
 when paying for parking in the car while some other cars are behind you in
 the line. You don't want them to get upset.

 It takes me tens of minutes to shop. But once you choose your merchandise
 and your payment starts processing, if the payment system allows several
 payments to be pending simultaneously and you're not blocking the next
 person to pay, then I don't care waiting some minutes for confirmation. But
 saving 10 minutes of confirmation is a lot.

 I argue that our time is mostly measured in minutes (but I don't have any
 sociological, cultural, genetic or anthropological evidence). It takes
 minutes to read an e-mail, minutes to correct a bug, minutes to have lunch,
 minutes to drive to the office, minutes to talk to your kids. A payment
 taking 1 minute is much better than a payment taking 10. If I could choose
 a single thing to change to Bitcoin, I would lower the payment time, even
 within the minute scale.

 Sergio



 On Fri, Aug 7, 2015 at 7:46 PM, Natanael natanae...@gmail.com wrote:

 On 7 Aug 2015 23:37, Sergio Demian Lerner via bitcoin-dev 
 bitcoin-dev@lists.linuxfoundation.org wrote:
 
  Mark,
  It took you 3 minutes to respond to my e-mail. And I responded to you 4
 minutes later. If you had responded to me in 10 minutes, I would be out of
 the office and we wouldn't have this dialogue. So 5 minutes is a lot of
 time.
 
  Obviously this is not a technical response to the technical issues you
 argue. But minutes is a time scale we humans use to measure time very
 often.

 But what's more likely to matter is seconds. What you need then is some
 variant of multisignature notaries (Greenaddress.it, lightning network),
 where the combination of economic incentives and legal liability gives you
 the assurance of doublespend protection from the time of publication of the
 transaction to the first block confirmation.





Re: [bitcoin-dev] If you had a single chance to double the transactions/second Bitcoin allows...

2015-08-07 Thread Mark Friedenbach via bitcoin-dev
Actually I gave a cached answer earlier which on further review may need
updating. (Bad Mark!)

I presume by "what's more likely to matter is seconds" you are referencing
point of sale. As you mention yourself, lightning network or green address
style payment escrow obviates the need for short inter-block times.

But with lightning there is a danger of channels being exhausted in the
time between blocks, causing the need for new channels to be established.
So lightning does in fact benefit from moderately shorter inter-block
times, although how much of an issue this will be is anyone's guess now.

Still, the first two points about larger SPV proofs and selfish mining
hold true, which sets the bar particularly high for justifying more
frequent blocks.
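
For intuition on the SPV-proof point: block headers are 80 bytes each, so
halving the block interval doubles header-chain growth. A rough
illustration (my numbers, not from the original message):

    // Back-of-envelope: header-chain growth per year for a block interval.
    #include <cstdio>

    double HeaderMBPerYear(double interval_minutes) {
        const double blocks_per_year = 365.25 * 24 * 60 / interval_minutes;
        return blocks_per_year * 80 / 1e6; // 80-byte headers
    }

    int main() {
        std::printf("10-min blocks: %.1f MB/yr\n", HeaderMBPerYear(10)); // ~4.2
        std::printf(" 1-min blocks: %.1f MB/yr\n", HeaderMBPerYear(1));  // ~42.1
    }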

On Fri, Aug 7, 2015 at 3:46 PM, Natanael natanae...@gmail.com wrote:

 On 7 Aug 2015 23:37, Sergio Demian Lerner via bitcoin-dev 
 bitcoin-dev@lists.linuxfoundation.org wrote:
 
  Mark,
  It took you 3 minutes to respond to my e-mail. And I responded to you 4
 minutes later. If you had responded to me in 10 minutes, I would be out of
 the office and we wouldn't have this dialogue. So 5 minutes is a lot of
 time.
 
  Obviously this is not a technical response to the technical issues you
 argue. But minutes is a time scale we humans use to measure time very
 often.

 But what's more likely to matter is seconds. What you need then is some
 variant of multisignature notaries (Greenaddress.it, lightning network),
 where the combination of economic incentives and legal liability gives you
 the assurance of doublespend protection from the time of publication of the
 transaction to the first block confirmation.



Re: [bitcoin-dev] Fees and the block-finding process

2015-08-07 Thread Mark Friedenbach via bitcoin-dev
Surely you have some sort of empirical measurement demonstrating the
validity of that statement? That is to say you've established some
technical criteria by which to determine how much centralization pressure
is too much, and shown that Pieter's proposal undercuts expected progress
in that area?

On Fri, Aug 7, 2015 at 12:07 PM, Ryan Butler rryanani...@gmail.com wrote:

 Clarification...

 These are not mutually exclusive.  We can design an increase to blocksize
 that increases available space on chain AND follow technological
 evolution.  Pieter's latest proposal is way too conservative on that front.

 And given Peter's assertion that demand is infinite there will still be a
 an ocean of off chain transactions for the likes of blockstream to address.
 On Aug 7, 2015 1:57 PM, Ryan Butler rryanani...@gmail.com wrote:

 Who said anything about scaling bitcoin to visa levels now?  We're
 talking about an increase now that scales into the future at a rate that is
 consistent with technological progress.

 Pieter himself said "So, I think the block size should follow
 technological evolution".

 The blocksize increase proposals have been modeled around this very
 thing.  It's reasonable to increase the blocksize to a point that a
 reasonable person, with reasonable equipment and internet access can run a
 node or even a miner with acceptable orphan rates.  Most miners are spv
 mining anyways.  The 8 or even 20 MB limits are within those parameters.

 These are not mutually exclusive.  We can design an increase to blocksize
 that addresses both demand exceeding the available space AND follow
 technological evolution.  Peter's latest proposal is way too conservative
 on that front.
 On Aug 7, 2015 1:25 PM, Mark Friedenbach m...@friedenbach.org wrote:

 Please don't put words into Pieter's mouth. I guarantee you everyone
 working on Bitcoin in their heart of hearts would prefer everyone in the
 world being able to use the Bitcoin ledger for whatever purpose, if there
 were no cost.

 But like any real world engineering issue, this is a matter of
 tradeoffs. At the extreme it is simply impossible to scale Bitcoin to the
 terrabyte sized blocks that would be necessary to service the entire
 world's financial transactions. Not without sacrificing entirely the
 protection of policy neutrality achieved through decentralization. And as
 that is Bitcoin's only advantage over traditional consensus systems, you
 would have to wonder what the point of such an endeavor would be.

 So *somewhere* you have to draw the line, and transactions below that
 level are simply pushed into higher level or off-chain protocols.

 The issue, as Pieter and Jorge have been pointing out, is that technical
 discussion over where that line should be has been missing from this debate.

 On Fri, Aug 7, 2015 at 10:47 AM, Ryan Butler via bitcoin-dev 
 bitcoin-dev@lists.linuxfoundation.org wrote:

 Interesting position there Pieter...you fear more people actually using
 bitcoin.  The less on chain transactions the lower the velocity and the
 lower the value of the network.  I would be careful what you ask for
 because you end up having nothing left to even root the security of these
 off chain transactions with and then neither will exist.

 Nobody ever said you wouldn't run out of capacity at any size.  It's
 quite the fallacy to draw the conclusion from that statement that block
 size should remain far below a capacity it can easily maintain which would
 bring more users/velocity/value to the system.  The outcomes of both of
 those scenarios are asymmetric.  A higher block size can support more users
 and volume.

 Raising the blocksize isn't out of fear.  It's the realization that we
 are at a point where we can raise it and support more users and
 transactions while keeping the downsides to a minimum (centralization etc).
 On Aug 7, 2015 11:28 AM, Pieter Wuille via bitcoin-dev 
 bitcoin-dev@lists.linuxfoundation.org wrote:

 On Fri, Aug 7, 2015 at 5:55 PM, Gavin Andresen 
 gavinandre...@gmail.com wrote:

 On Fri, Aug 7, 2015 at 11:16 AM, Pieter Wuille 
 pieter.wui...@gmail.com wrote:

 I guess my question (and perhaps that's what Jorge is after): do you
 feel that blocks should be increased in response to (or for fear of) 
 such a
 scenario.


 I think there are multiple reasons to raise the maximum block size,
 and yes, fear of Bad Things Happening as we run up against the 1MB limit 
 is
 one of the reasons.

 I take the opinion of smart engineers who actually do resource
 planning and have seen what happens when networks run out of capacity 
 very
 seriously.


 This is a fundamental disagreement then. I believe that the demand is
 infinite if you don't set a fee minimum (and I don't think we should), and
 it just takes time for the market to find a way to fill whatever is
 available - the rest goes into off-chain systems anyway. You will run out
 of capacity at any size, and acting out of fear of that reality does not
 improve the system. 

Re: [bitcoin-dev] Fees and the block-finding process

2015-08-07 Thread Mark Friedenbach via bitcoin-dev
Please don't put words into Pieter's mouth. I guarantee you everyone
working on Bitcoin in their heart of hearts would prefer everyone in the
world being able to use the Bitcoin ledger for whatever purpose, if there
were no cost.

But like any real world engineering issue, this is a matter of tradeoffs.
At the extreme it is simply impossible to scale Bitcoin to the
terabyte-sized blocks that would be necessary to service the entire world's
financial transactions. Not without sacrificing entirely the protection of
policy neutrality achieved through decentralization. And as that is
Bitcoin's only advantage over traditional consensus systems, you would have
to wonder what the point of such an endeavor would be.

So *somewhere* you have to draw the line, and transactions below that level
are simply pushed into higher level or off-chain protocols.

The issue, as Pieter and Jorge have been pointing out, is that technical
discussion over where that line should be has been missing from this debate.

On Fri, Aug 7, 2015 at 10:47 AM, Ryan Butler via bitcoin-dev 
bitcoin-dev@lists.linuxfoundation.org wrote:

 Interesting position there Pieter...you fear more people actually using
 bitcoin.  The less on chain transactions the lower the velocity and the
 lower the value of the network.  I would be careful what you ask for
 because you end up having nothing left to even root the security of these
 off chain transactions with and then neither will exist.

 Nobody ever said you wouldn't run out of capacity at any size.  It's quite
 the fallacy to draw the conclusion from that statement that block size
 should remain far below a capacity it can easily maintain which would bring
 more users/velocity/value to the system.  The outcomes of both of those
 scenarios are asymmetric.  A higher block size can support more users and
 volume.

 Raising the blocksize isn't out of fear.  It's the realization that we are
 at a point where we can raise it and support more users and transactions
 while keeping the downsides to a minimum (centralization etc).
 On Aug 7, 2015 11:28 AM, Pieter Wuille via bitcoin-dev 
 bitcoin-dev@lists.linuxfoundation.org wrote:

 On Fri, Aug 7, 2015 at 5:55 PM, Gavin Andresen gavinandre...@gmail.com
 wrote:

 On Fri, Aug 7, 2015 at 11:16 AM, Pieter Wuille pieter.wui...@gmail.com
 wrote:

 I guess my question (and perhaps that's what Jorge is after): do you
 feel that blocks should be increased in response to (or for fear of) such a
 scenario.


 I think there are multiple reasons to raise the maximum block size, and
 yes, fear of Bad Things Happening as we run up against the 1MB limit is one
 of the reasons.

 I take the opinion of smart engineers who actually do resource planning
 and have seen what happens when networks run out of capacity very seriously.


 This is a fundamental disagreement then. I believe that the demand is
 infinite if you don't set a fee minimum (and I don't think we should), and
 it just takes time for the market to find a way to fill whatever is
 available - the rest goes into off-chain systems anyway. You will run out
 of capacity at any size, and acting out of fear of that reality does not
 improve the system. Whatever size blocks are actually produced, I believe
 the result will either be something people consider too small to be
 competitive ("you mean Bitcoin can only do 24 transactions per second?"
 sounds almost the same as "you mean Bitcoin can only do 3 transactions per
 second?"), or something that is very centralized in practice, and likely
 both.
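
As a rough check of those figures, assuming an average transaction of about
550 bytes (my assumption, not a number from the message):

    // Back-of-envelope: transactions per second for a given block size,
    // with 10-minute blocks and ~550-byte average transactions.
    #include <cstdio>

    double TxPerSec(double block_bytes) {
        return block_bytes / 550.0 / 600.0;
    }

    int main() {
        std::printf("1 MB: %.1f tx/s\n", TxPerSec(1e6)); // ~3 tx/s
        std::printf("8 MB: %.1f tx/s\n", TxPerSec(8e6)); // ~24 tx/s
    }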


 And if so, if that is a reason for increase now, won't it be a reason
 for an increase later as well? It is my impression that your answer is yes,
 that this is why you want to increase the block size quickly and
 significantly, but correct me if I'm wrong.


 Sure, it might be a reason for an increase later. Here's my message to
 in-the-future Bitcoin engineers:  you should consider raising the maximum
 block size if needed and you think the benefits of doing so (like increased
 adoption or lower transaction fees or increased reliability) outweigh the
 costs (like higher operating costs for full-nodes or the disruption caused
 by ANY consensus rule change).


 In general that sounds reasonable, but it's a dangerous precedent to make
 technical decisions based on a fear of change of economics...

 --
 Pieter




Re: [bitcoin-dev] Why Satoshi's temporary anti-spam measure isn't temporary

2015-07-28 Thread Mark Friedenbach via bitcoin-dev
Does it matter even in the slightest why the block size limit was put in
place? It does not. Bitcoin is a decentralized payment network, and the
relationship between utility (block size) and decentralization is
empirical. Why the 1MB limit was put in place at the time might be a
historically interesting question, but it bears little relevance to the
present engineering issues.

On Tue, Jul 28, 2015 at 5:43 PM, Jean-Paul Kogelman via bitcoin-dev 
bitcoin-dev@lists.linuxfoundation.org wrote:


  Enter a “temporary” anti-spam measure - a one megabyte block size limit.
 Let’s test this out, then increase it once we see how things work. So far
 so good…
 

 The block size limit was put in place as an anti-DoS measure (monster
 blocks), not anti-spam. It was never intended to have any economic
 effect, not on spam and not on any future fee market.


 jp
