Re: [bitcoin-dev] Address expiration times should be added to BIP-173

2017-09-27 Thread Chris Priest via bitcoin-dev
A better solution is to just have the sending wallet check whether the
address you are about to send to has been used before. If it's a fresh
address, the transaction goes through without any popup alert. If the
address has history going back a certain amount of time, then a popup
notifies the sender that they are sending to a non-fresh address that
may no longer be controlled by the receiver.
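A minimal sketch of that wallet-side check, assuming a hypothetical `get_address_history(address)` helper (backed by a local index or a block explorer) that returns UNIX timestamps of past transactions involving the address:

```python
import time

def check_address_freshness(address, get_address_history, now=None):
    """Wallet-side check before sending.

    `get_address_history` is a hypothetical helper returning UNIX
    timestamps of past transactions involving `address`.
    """
    now = now if now is not None else time.time()
    history = get_address_history(address)
    if not history:
        return True, None  # fresh address: send with no popup
    age_days = (now - min(history)) / 86400
    return False, (
        "Warning: this address has %d prior transaction(s), the oldest "
        "%.0f days ago. It may no longer be controlled by the receiver."
        % (len(history), age_days)
    )
```

A wallet would send silently on the fresh path and show the warning (with a confirm/cancel prompt) only on the non-fresh path.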

Also, an even better idea is to set up an "address expiration service".
When you delete a wallet, you first send off an "expiration notice",
which is just a message (signed with the private key) saying "I am about
to delete this address; here is my new address". When someone tries to
send to that address, they first consult the address expiration service,
and the service will either tell them "this address is not expired,
proceed" or "this address has expired, please send to this other address
instead...". Basically a 301 redirect, but for addresses. I don't
think address expiration should be part of the protocol.
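A rough sketch of the notice and lookup flow (all names are illustrative; `sign` stands in for a signmessage-style wallet call and is stubbed out here, and a dict stands in for the service):

```python
import json

def make_expiration_notice(old_address, new_address, sign):
    """Build the signed "expiration notice" described above.

    `sign` is a hypothetical callable that signs a message with the
    private key controlling `old_address`; stubbed out in this sketch.
    """
    payload = json.dumps({"expired": old_address, "redirect": new_address},
                         sort_keys=True)
    return {"payload": payload, "signature": sign(payload)}

def lookup(service, address):
    """Consult the expiration service before sending, 301-redirect style."""
    notice = service.get(address)
    if notice is None:
        return "not expired, proceed", address
    return "expired, redirecting", json.loads(notice["payload"])["redirect"]

# In-memory stand-in for the service, keyed by the retired address.
service = {}
service["1OldAddrExample"] = make_expiration_notice(
    "1OldAddrExample", "1NewAddrExample", sign=lambda msg: "stub-signature")
```

A real service would of course verify the signature against the old address before accepting a notice, so only the key holder can set up the redirect.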

On Wed, Sep 27, 2017 at 10:06 AM, Peter Todd via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Re-use of old addresses is a major problem, not only for privacy, but also
> operationally: services like exchanges frequently have problems with users
> sending funds to addresses whose private keys have been lost or stolen;
> there
> are multiple examples of exchanges getting hacked, with users continuing to
> lose funds well after the actual hack has occurred due to continuing
> deposits.
> This also makes it difficult operationally to rotate private keys. I
> personally
> have even lost funds in the past due to people sending me BTC to addresses
> that
> I gave them long ago for different reasons, rather than asking me for a
> fresh one.
>
> To help combat this problem, I suggest that we add a UI-level expiration
> time
> to the new BIP173 address format. Wallets would be expected to consider
> addresses as invalid as a destination for funds after the expiration time
> is
> reached.
>
> Unfortunately, this proposal inevitably will raise a lot of UI and
> terminology
> questions. Notably, the entire notion of addresses is flawed from a user
> point
> of view: their experience with them should be more like "payment codes",
> with a
> code being valid for payment for a short period of time; wallets should
> not be
> displaying addresses as actually associated with specific funds. I suspect
> we'll see users thinking that an expired address risks the funds
> themselves;
> some thought needs to be put into terminology.
>
> Being just an expiration time, seconds-level resolution is unnecessary, and
> may give the wrong impression. I'd suggest either:
>
> 1) Hour resolution - 2^24 hours = 1914 years
> 2) Month resolution - 2^16 months = 5461 years
>
> Both options have the advantage of working well at the UI level regardless
> of
> timezone: the former is sufficiently short that UI's can simply display an
> "exact" time (though note different leap second interpretations), while the
> latter is long enough that rounding off to the nearest day in the local
> timezone is fine.
>
> Supporting hour-level (or just seconds) precision has the advantage of
> making
> it easy for services like exchanges to use addresses with relatively short
> validity periods, to reduce the risks of losses after a hack. Also, using
> at
> least hour-level ensures we don't have any year 2038 problems.
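The field widths quoted above are easy to sanity-check with a back-of-the-envelope calculation (using an average 365.25-day year):

```python
HOURS_PER_YEAR = 24 * 365.25   # average year, leap days included

hour_range_years = 2 ** 24 / HOURS_PER_YEAR   # option 1: 24-bit hour count
month_range_years = 2 ** 16 / 12              # option 2: 16-bit month count

print(round(hour_range_years))    # 1914
print(round(month_range_years))   # 5461
```

Either width comfortably outlives any plausible address, and a 24-bit hour counter sidesteps the 32-bit-seconds rollover in 2038.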
>
> Thoughts?
>
> --
> https://petertodd.org 'peter'[:-1]@petertodd.org
>


-- 
Chris Priest
786-531-5938
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] A Better MMR Definition

2017-02-23 Thread Chris Priest via bitcoin-dev
On 2/22/17, Peter Todd via bitcoin-dev
 wrote:
> Reposting something that came up recently in a private discussion with some
> academics:
>
> Concretely, let's define a prunable MMR with the following grammar. This
> definition is an improvement on what's in the python-proofmarshal by
> committing
> to the number of items in the tree implicitly; an obvious max-log2(n)-sized
> proof-of-tree-size can be obtained by following the right-most nodes:
>
> Maybe(T) := UNPRUNED <T> | PRUNED <Commitment(T)>
>
> FullNode(0) := <Value>
> FullNode(n) := <Maybe(FullNode(n-1))> <Maybe(FullNode(n-1))>
>
> PartialNode(0) := SOME <FullNode(0)> | NONE
> PartialNode(n) := <Maybe(PartialNode(n-1))> <Maybe(FullNode(n-1))>
>
> MMR := FULL <n> <FullNode(n)> | PARTIAL <n> <PartialNode(n)>
>
> Basically we define it in four parts. First we define Maybe(T) to represent
> pruned and unpruned (hash only) data. Secondly we define full nodes within
> 2^n
> sized trees. Third we define partial nodes. And finally we define the MMR
> itself as being either a full or partial node.
>
> First of all, with pruning we can define a rule that if any operation
> (other
> than checking commitment hashes) attempts to access pruned data, it should
> immediately fail. In particular, no operation should be able to determine
> if
> data is or isn't pruned. Equally, note how an implementation can keep track
> of
> what data was accessed during any given operation, and prune the rest,
> which
> means a proof is just the parts of the data structure accessed during one
> or
> more operations.
>
> With that, notice how proving the soundness of the proofs becomes trivial:
> if
> validation is deterministic, it is obviously impossible to construct two
> different proofs that prove contradictory statements, because a proof is
> simply
> part of the data structure itself. Contradiction would imply that the two
> proofs are different, but that's easily rejected by simply checking the hash
> of
> the data.
>
> --
> https://petertodd.org 'peter'[:-1]@petertodd.org
>


What problem does this try to solve, and what does it have to do with bitcoin?
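For what it's worth, the "track what an operation touches, prune the rest, and what remains is the proof" idea in the quoted post generalizes beyond MMRs. A minimal sketch of my own (an ordinary fixed-depth hash tree, not Peter's MMR definition) illustrating it:

```python
import hashlib

def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

class Node:
    """A hash-tree node: either unpruned (children/leaf kept) or pruned
    (commitment digest only). Any operation touching pruned data fails."""

    def __init__(self, digest, left=None, right=None, leaf=None):
        self.digest = digest              # commitment: always available
        self.left, self.right = left, right
        self.leaf = leaf
        self.accessed = False

    @classmethod
    def from_leaf(cls, data: bytes):
        return cls(h(data), leaf=data)

    @classmethod
    def from_children(cls, left, right):
        return cls(h(left.digest + right.digest), left, right)

    def get(self, path):
        """Fetch a leaf by bit-path, recording every node touched."""
        self.accessed = True
        if not path:
            if self.leaf is None:
                raise ValueError("operation touched pruned data")
            return self.leaf
        child = self.right if path[0] else self.left
        if child is None:
            raise ValueError("operation touched pruned data")
        return child.get(path[1:])

    def prune(self):
        """Keep only what prior operations accessed; the survivor is
        exactly the proof for those operations."""
        if not self.accessed:
            return Node(self.digest)                  # PRUNED stub
        if self.leaf is not None:
            return Node(self.digest, leaf=self.leaf)
        if self.left is None:       # stub that was merely touched
            return Node(self.digest)
        return Node(self.digest, self.left.prune(), self.right.prune())

# Build a 4-leaf tree, read one leaf, prune: the result is a classic
# Merkle inclusion proof, and re-running the same read succeeds on it.
tree = Node.from_children(
    Node.from_children(Node.from_leaf(b"a"), Node.from_leaf(b"b")),
    Node.from_children(Node.from_leaf(b"c"), Node.from_leaf(b"d")),
)
tree.get([0, 1])            # touch leaf "b"
proof = tree.prune()
```

Since validation is deterministic, two "proofs" of contradictory statements would have to differ somewhere, and the digest check rejects whichever one disagrees with the committed hash.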
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] [Pre-BIP] Community Consensus Voting System

2017-02-05 Thread Chris Priest via bitcoin-dev
Personally I think once the blocksize arguments are solved, there will
be no more contentious changes for this voting system to deal with.
What other contentious issues have come up in the past 3 years or so
that weren't blocksize/scaling related? Do other protocols like TCP/IP
and HTTP have developers arguing every day over issues no one can
agree on?

On 2/3/17, t. khan via bitcoin-dev
 wrote:
> Most of these are answered in the BIP, but for clarity:
>
>>Who decides on which companies are eligible?
> Preferably no one decides. The company would have to exist prior to the
> vote, and would need a public-facing website. In the event of contested
> votes (meaning someone finds evidence of a fake company), the admin could
> investigate and post results.
>
>>Is there some kind of centralized database that one registers?
> No.
>
>>Who administers this?
> I don't know. I'm happy to volunteer, if no one else wants to be
> responsible for it. The only task would be adding the confirmed votes to
> each respective BIP. From there, everything's public and can be confirmed
> by everyone.
>
>>What is to stop someone from creating a million fake companies to sway the
> voting?
> The logistics of doing that prevent it. But let's say 10 fake companies ...
> first, you'd need to register ten domain names, then host and customize each
> website, all before the vote and in a way that no one would notice.
>
>>How does a company make its vote?
> Someone at the company sends a very small transaction to the BIP's vote
> address. Someone at the company then posts what the vote was and its
> transaction ID on the company's blog/twitter, etc., and then emails the URL
> to the administrator.
>
>>How does one verify that the person voting on behalf of a company is
> actually the correct person?
> They post it on their company blog.
>
> On Fri, Feb 3, 2017 at 11:19 AM, alp alp via bitcoin-dev <
> bitcoin-dev@lists.linuxfoundation.org> wrote:
>
>> This proposal seems hopelessly broken.
>>
>> Who decides on which companies are eligible?  Is there some kind of
>> centralized database that one registers?  Who administers this?  What is
>> to
>> stop someone from creating a million fake companies to sway the voting?
>> How does a company make its vote?  How does one verify that the person
>> voting on behalf of a company is actually the correct person?
>>
>>
>>
>> On Thu, Feb 2, 2017 at 7:32 PM, Dave Scotese via bitcoin-dev <
>> bitcoin-dev@lists.linuxfoundation.org> wrote:
>>
>>> There are two ideas here for "on-chain" voting, both of which require
>>> changes to the software.  I agree with David that on-chain solutions
>>> complicate things.  Both proposals can be effected without any software
>>> changes:
>>>
>>> Those who wish to use proof of stake can provide a service for making
>>> vanity addresses containing some indicator of the proposal to be
>>> supported
>>> - 1bigblock or 12mbblk or whatever - based on a supporter-provided
>>> secret
>>> key, and then supporters can move their bitcoin into their own vanity
>>> address and then whoever wants to can create a website to display the
>>> matching addresses and explain that this is the financial power in the
>>> hands of supporters and how to add your "financial power vote."
>>>
>>> Those who simply want to "buy votes" can use their funds in marketing
>>> efforts to promote the proposal they support.
>>>
>>> This second method, of course, can be abused.  The first actually
>>> requires people to control bitcoin in order to represent support.
>>> Counting
>>> actual, real people is still a technology in its infancy, and I don't
>>> think
>>> I want to see it progress much. People are not units, but individuals,
>>> and
>>> their value only becomes correlated to their net worth after they've
>>> been
>>> alive for many years, and even then, some of the best people have died
>>> paupers. If bitcoin-discuss got more traffic, I think this discussion
>>> would
>>> be better had on that list.
>>>
>>> notplato
>>>
>>> On Thu, Feb 2, 2017 at 4:24 PM, Luke Dashjr via bitcoin-dev <
>>> bitcoin-dev@lists.linuxfoundation.org> wrote:
>>>
 Strongly disagree with buying "votes", or portraying open standards as
 a
 voting process. Also, this depends on address reuse, so it's
 fundamentally
 flawed in design.

 Some way for people to express their support weighed by coins (without
 losing/spending them), and possibly weighed by running a full node,
 might
 still be desirable. The most straightforward way to do this is to
 support
 message signatures somehow (ideally without using the same pubkey as
 spending), and some [inherently unreliable, but perhaps useful if the
 community "colludes" to not-cheat] way to sign with ones' full node.

 Note also that the BIP process already has BIP Comments for leaving
 textual
 opinions on the BIP unrelated to stake. See BIP 2 for details on that.

 Luke


 On Thu

Re: [bitcoin-dev] Anti-transaction replay in a hardfork

2017-01-26 Thread Chris Priest via bitcoin-dev
> If there is a split, it becomes 2 incompatible ledgers.

Not necessarily. When the BIP50 hard fork happened, it didn't create
two incompatible ledgers. It *could* have, but it didn't. If every
single transaction mined during that time had been "double spent" on
the other chain, it would have created a very bad situation: when
one side of the fork was abandoned, actual users would have lost
money. Since only one person was able to perform this double spend,
only the miners and that one double spender lost money when the one
side was abandoned. If there had been a significant number of users
who had value only on the chain that was eventually abandoned, that
chain would have had an incentive to not be abandoned, and that
*would* have resulted in a permanent incompatible split. It was
essentially the replay *effect* (not "attack") that allowed bitcoin to
survive that hard fork. BIP50 was written before the terms "replay
attack" and "replay effect" had been coined, so it doesn't say much
about how transactions replayed...
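The mechanics behind the replay effect are worth spelling out: a txid is just the double-SHA256 of the serialized transaction, with nothing network-specific in it, so an identical raw transaction is valid on every fork that shares the outputs it spends. A minimal illustration (placeholder bytes, not a real transaction):

```python
import hashlib

def txid(raw_tx: bytes) -> str:
    """Double-SHA256 of the serialized transaction, as hex (the customary
    byte-order reversal for display is omitted here)."""
    return hashlib.sha256(hashlib.sha256(raw_tx).digest()).hexdigest()

raw_tx = bytes.fromhex("0100000001aabbcc")   # placeholder, not a real tx

# Same bytes -> same txid on any fork sharing history: the replay effect.
assert txid(raw_tx) == txid(raw_tx)

# Distinct ledgers (e.g. Bitcoin vs Litecoin) force distinct inputs, hence
# distinct serializations, hence distinct txids: no cross-network replay.
assert txid(raw_tx) != txid(raw_tx + b"\x00")
```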

On 1/25/17, Johnson Lau  wrote:
> I don’t think this is how the blockchain consensus works. If there is a
> split, it becomes 2 incompatible ledgers. Bitcoin is not a trademark, and
> you don’t need a permission to hardfork it. And what you suggest is also
> technically infeasible, as the miners on the new chain may not have a
> consensus on what’s happening in the old chain.
>
>> On 26 Jan 2017, at 15:03, Chris Priest  wrote:
>>
>> I don't think the solution should be to "fix the replay attack", but
>> rather to "force the replay effect". The fact that transactions can be
>> replayed should be seen as a good thing, and not something that should
>> be fixed, or even called an "attack".
>>
>> The solution should be to create a "bridge" that replays all
>> transactions from one network over to the other, and vice-versa. A
>> fork should be transparent to the end-user. Forcing the user to choose
>> which network to use is bad, because 99% of people that use bitcoin
>> don't care about developer drama, and will only be confused by the
>> choice. When a user moves coins mined before the fork date, both
>> blockchains should record that transaction. Also a rule should be
>> introduced that prevents users "tainting" their prefork-mined coins
>> with coins mined after the fork. All pre-fork mined coins should
>> "belong" to the network with hashpower majority. No other networks
>> should be able to claim pre-forked coins as being part of their
>> issuance (and therefore part of market cap). Market cap may be
>> bullshit, but it is used a lot in the cryptosphere to compare coins to
>> each other.
>>
>> The advantage of pre-fork coins being recorded on both forks is that
>> if one fork goes extinct, no one loses any money. This setup
>> encourages the minority chain to die, and unity to return. If pre-fork
>> coins change hands on either fork (and not on the other), then holders
>> have an incentive to not let their chain die, and the networks will be
>> irreversibly split forever. The goal should be unity, not permanent
>> division.
>>
>> On 1/25/17, Matt Corallo via bitcoin-dev
>>  wrote:
>>> "A. For users on both existing and new fork, anti-replay is an option,
>>> not mandatory"
>>>
>>> To maximize fork divergence, it might make sense to require this. Any
>>> sensible proposal for a hard fork would include a change to the sighash
>>> anyway, so might as well make it required, no?
>>>
>>> Matt
>>>
>>> On 01/24/17 14:33, Johnson Lau via bitcoin-dev wrote:
 This is a pre-BIP. Just need some formatting to make it a formal BIP

 Motivation:

 In general, hardforks are consensus rule changes that make currently
 invalid transactions / blocks valid. They require a very high degree of
 consensus, with all economically active users migrating to the new rules
 at the same time. If a significant number of users refuse to follow, a
 permanent ledger split may happen, as demonstrated by Ethereum (“DAO
 hardfork"). In the design of the DAO hardfork, a permanent split was not
 anticipated and no precaution was taken to protect against
 transaction replay attacks, which led to significant financial loss for
 some users.

 A replay attack is an attempt to replay a transaction of one network on
 another network. It is normally impossible, for example between Bitcoin
 and Litecoin, as different networks have completely different ledgers.
 The txid as SHA256 hash guarantees that replay across network is
 impossible. In a blockchain split, however, since both forks share the
 same historical ledger, replay attack would be possible, unless some
 precautions are taken.

 Unfortunately, fixing problems in bitcoin is like repairing a flying
 plane. Preventing replay attack is constrained by the requirement of
 backward compatibility. This proposal has the following objectives:

 A. For users on both existing and new fork, anti-r

Re: [bitcoin-dev] Anti-transaction replay in a hardfork

2017-01-26 Thread Chris Priest via bitcoin-dev
I don't think the solution should be to "fix the replay attack", but
rather to "force the replay effect". The fact that transactions can be
replayed should be seen as a good thing, and not something that should
be fixed, or even called an "attack".

The solution should be to create a "bridge" that replays all
transactions from one network over to the other, and vice-versa. A
fork should be transparent to the end-user. Forcing the user to choose
which network to use is bad, because 99% of people that use bitcoin
don't care about developer drama, and will only be confused by the
choice. When a user moves coins mined before the fork date, both
blockchains should record that transaction. Also a rule should be
introduced that prevents users "tainting" their prefork-mined coins
with coins mined after the fork. All pre-fork mined coins should
"belong" to the network with hashpower majority. No other networks
should be able to claim pre-forked coins as being part of their
issuance (and therefore part of market cap). Market cap may be
bullshit, but it is used a lot in the cryptosphere to compare coins to
each other.

The advantage of pre-fork coins being recorded on both forks is that
if one fork goes extinct, no one loses any money. This setup
encourages the minority chain to die, and unity to return. If pre-fork
coins change hands on either fork (and not on the other), then holders
have an incentive to not let their chain die, and the networks will be
irreversibly split forever. The goal should be unity, not permanent
division.

On 1/25/17, Matt Corallo via bitcoin-dev
 wrote:
> "A. For users on both existing and new fork, anti-replay is an option,
> not mandatory"
>
> To maximize fork divergence, it might make sense to require this. Any
> sensible proposal for a hard fork would include a change to the sighash
> anyway, so might as well make it required, no?
>
> Matt
>
> On 01/24/17 14:33, Johnson Lau via bitcoin-dev wrote:
>> This is a pre-BIP. Just need some formatting to make it a formal BIP
>>
>> Motivation:
>>
>> In general, hardforks are consensus rule changes that make currently
>> invalid transactions / blocks valid. They require a very high degree of
>> consensus, with all economically active users migrating to the new rules
>> at the same time. If a significant number of users refuse to follow, a
>> permanent ledger split may happen, as demonstrated by Ethereum (“DAO
>> hardfork"). In the design of the DAO hardfork, a permanent split was not
>> anticipated and no precaution was taken to protect against
>> transaction replay attacks, which led to significant financial loss for
>> some users.
>>
>> A replay attack is an attempt to replay a transaction of one network on
>> another network. It is normally impossible, for example between Bitcoin
>> and Litecoin, as different networks have completely different ledgers.
>> The txid as SHA256 hash guarantees that replay across network is
>> impossible. In a blockchain split, however, since both forks share the
>> same historical ledger, replay attack would be possible, unless some
>> precautions are taken.
>>
>> Unfortunately, fixing problems in bitcoin is like repairing a flying
>> plane. Preventing replay attack is constrained by the requirement of
>> backward compatibility. This proposal has the following objectives:
>>
>> A. For users on both existing and new fork, anti-replay is an option,
>> not mandatory.
>>
>> B. For transactions created before this proposal is made, they are not
>> protected from anti-replay. The new fork has to accept these
>> transactions, as there is no guarantee that the existing fork would
>> survive nor maintain any value. People made time-locked transactions in
>> anticipation that they would be accepted later. In order to maximise the
>> value of such transactions, the only way is to make them accepted by any
>> potential hardforks.
>>
>> C. It doesn’t require any consensus changes in the existing network to
>> avoid unnecessary debate.
>>
>> D. As a beneficial side effect, the O(n^2) signature checking bug could
>> be fixed for non-segregated witness inputs, optionally.
>>
>> Definitions:
>>
>> “Network characteristic byte” is the most significant byte of the
>> nVersion field of a transaction. It is interpreted as a bit vector, and
>> denotes up to 8 networks sharing a common history.
>>
>> “Masked version” is the transaction nVersion with the network
>> characteristic byte masked.
>>
>> “Existing network” is the Bitcoin network with existing rules, before a
>> hardfork. “New network” is the Bitcoin network with hardfork rules. (In
>> the case of DAO hardfork, Ethereum Classic is the existing network, and
>> the now called Ethereum is the new network)
>>
>> “Existing network characteristic bit” is the lowest bit of network
>> characteristic byte
>>
>> “New network characteristic bit” is the second lowest bit of network
>> characteristic byte
>>
>> Rules in new network:
>>
>> 1. If the network characteristic byte is non-zero, and the new n
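The definitions in the quoted proposal reduce to simple bit operations on the 32-bit nVersion field. A sketch of my own, for illustration only:

```python
EXISTING_BIT = 1 << 0    # lowest bit: existing network
NEW_BIT = 1 << 1         # second-lowest bit: new network

def network_characteristic_byte(nversion: int) -> int:
    """Most significant byte of the 32-bit nVersion field."""
    return (nversion >> 24) & 0xFF

def masked_version(nversion: int) -> int:
    """nVersion with the network characteristic byte masked off."""
    return nversion & 0x00FFFFFF

# A version-2 transaction flagging the new network only:
nversion = (NEW_BIT << 24) | 2
assert network_characteristic_byte(nversion) == NEW_BIT
assert masked_version(nversion) == 2

# A pre-proposal transaction has a zero characteristic byte, so it sets
# neither flag and remains valid on both forks:
assert network_characteristic_byte(2) == 0
```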

Re: [bitcoin-dev] A BIP for partially-signed/not-signed raw transaction serialization; would it be useful?

2017-01-09 Thread Chris Priest via bitcoin-dev
I approve of this idea. Counterparty has the same problem. Their API
returns an unsigned transaction that is formed differently from other
unsigned transactions, which causes friction. Someone needs to
write up a standardized specification so that all unsigned
transactions are of the same form. Basically, the signature section
should contain all the information required to make the
signature, and it needs to be encoded in a way that the signing
application (a blockchain library like BitcoinJ or BitcoinJS) can tell
that it is unsigned.
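As a strawman only (every field name here is mine, not a proposed standard), such a serialization could bundle the raw transaction with per-input signing metadata, plus an explicit format tag so a library can recognize the data as unsigned:

```python
def make_signing_bundle(raw_tx_hex, inputs):
    """Wrap an unsigned/partially-signed transaction with everything a
    signer needs. All field names are illustrative, not a standard."""
    return {
        "format": "unsigned-tx-bundle/0",  # lets signers detect the format
        "raw_tx": raw_tx_hex,              # serialized tx, scriptSigs empty
        "inputs": inputs,                  # per-input signing metadata
    }

def is_fully_signed(bundle):
    return all(len(i["partial_sigs"]) >= i["sigs_required"]
               for i in bundle["inputs"])

bundle = make_signing_bundle(
    "01000000",                            # placeholder, not a real tx
    [{
        "redeem_script": "52ae",           # placeholder multisig script
        "sighash_type": "ALL",
        "sigs_required": 2,
        "partial_sigs": {"pubkey1": "3044"},   # 1 of 2 collected so far
    }],
)

# Any cooperating wallet can parse the bundle, see that a signature is
# missing, add its own, and pass the bundle along.
assert not is_fully_signed(bundle)
```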

On 1/9/17, 木ノ下じょな via bitcoin-dev  wrote:
> I have been seeing issues like the following many times on all the
> different projects that support multisig with users responsible for partial
> transaction transport.
>
> https://github.com/OutCast3k/coinbin/issues/73
>
> I would like to gather opinions before proposing a BIP, (like whether we
> need one or not) so please let me know what you think.
>
> Basically, Electrum, Copay, Coinb.in, Bitcoin Core, etc. all have different
> methodology for serializing partially signed multisig raw transactions, as
> well as not-signed raw transactions regardless of scriptPubkey.
>
> I think we should all be on the same page when serializing not-signed and
> partially signed transactions so that the person could look at the data
> alone and know what is necessary and how to manipulate it to sign and
> complete the transaction.
>
> - Jon
>
> --
> -BEGIN PGP PUBLIC KEY BLOCK-
> Comment: http://openpgpjs.org
>
> xsBNBFTmJ8oBB/9rd+7XLxZG/x/KnhkVK2WBG8ySx91fs+qQfHIK1JrakSV3
> x6x0cK3XLClASLLDomm7Od3Q/fMFzdwCEqj6z60T8wgKxsjWYSGL3mq8ucdv
> iBjC3wGauk5dQKtT7tkCFyQQbX/uMsBM4ccGBICoDmIJlwJIj7fAZVqGxGOM
> bO1RhYb4dbQA2qxYP7wSsHJ6/ZNAXyEphOj6blUzdqO0exAbCOZWWF+E/1SC
> EuKO4RmL7Imdep7uc2Qze1UpJCZx7ASHl2IZ4UD0G3Qr3pI6/jvNlaqCTa3U
> 3/YeJwEubFsd0AVy0zs809RcKKgX3W1q+hVDTeWinem9RiOG/vT+Eec/ABEB
> AAHNI2tpbm9zaGl0YSA8a2lub3NoaXRham9uYUBnbWFpbC5jb20+wsByBBAB
> CAAmBQJU5ifRBgsJCAcDAgkQRB9iZ30dlisEFQgCCgMWAgECGwMCHgEAAC6Z
> B/9otobf0ASHYdlUBeIPXdDopyjQhR2RiZGYaS0VZ5zzHYLDDMW6ZIYm5CjO
> Fc09ETLGKFxH2RcCOK2dzwz+KRU4xqOrt/l5gyd50cFE1nOhUN9+/XaPgrou
> WhyT9xLeGit7Xqhht93z2+VanTtJAG6lWbAZLIZAMGMuLX6sJDCO0GiO5zxa
> 02Q2D3kh5GL57A5+oVOna12JBRaIA5eBGKVCp3KToT/z48pxBe3WAmLo0zXr
> hEgTSzssfb2zTwtB3Ogoedj+cU2bHJvJ8upS/jMr3TcdguySmxJlGpocVC/e
> qxq12Njv+LiETOrD8atGmXCnA+nFNljBkz+l6ADl93jHzsBNBFTmJ9EBCACu
> Qq9ZnP+aLU/Rt6clAfiHfTFBsJvLKsdIKeE6qHzsU1E7A7bGQKTtLEnhCCQE
> W+OQP+sgbOWowIdH9PpwLJ3Op+NhvLlMxRvbT36LwCmBL0yD7bMqxxmmVj8n
> vlMMRSe4wDSIG19Oy7701imnHZPm/pnPlneg/Meu/UffpcDWYBbAFX8nrXPY
> vkVULcI/qTcCxW/+S9fwoXjQhWHaiJJ6y3cYOSitN31W9zgcMvLwLX3JgDxE
> flkwq/M+ZkfCYnS3GAPEt8GkVKy2eHtCJuNkGFlCAmKMX0yWzHRAkqOMN5KP
> LFbkKY2GQl13ztWp82QYJZpj5af6dmyUosurn6AZABEBAAHCwF8EGAEIABMF
> AlTmJ9QJEEQfYmd9HZYrAhsMAABKbgf/Ulu5JAk4fXgH0DtkMmdkFiKEFdkW
> 0Wkw7Vhd5eZ4NzeP9kOkD01OGweT9hqzwhfT2CNXCGxh4UnvEM1ZMFypIKdq
> 0XpLLJMrDOQO021UjAa56vHZPAVmAM01z5VzHJ7ekjgwrgMLmVkm0jWKEKaO
> n/MW7CyphG7QcZ6cJX2f6uJcekBlZRw9TNYRnojMjkutlOVhYJ3J78nc/k0p
> kcgV63GB6D7wHRF4TVe4xIBqKpbBhhN+ISwFN1z+gx3lfyRMSmiTSrGdKEQe
> XSIQKG8XZQZUDhLNkqPS+7EMV1g7+lOfT4GhLL68dUXDa1e9YxGH6zkpVECw
> Spe3vsHZr6CqFg==
> =/vUJ
> -END PGP PUBLIC KEY BLOCK-
>
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Bitcoin Classic 1.2.0 released

2017-01-07 Thread Chris Priest via bitcoin-dev
It's too bad you're not the one who decides what gets posted here.
If you don't like what's being discussed, then don't open those
emails.

On 1/7/17, Eric Lombrozo via bitcoin-dev
 wrote:
> Can you guys please take this discussion elsewhere? Perhaps to
> bitcoin-discuss? This is not the place to rehash discussions that have
> taken place a million times already. The behavior of the network under
> contentious hard forks has been discussed ad nauseum. This mailing list is
> for the discussion of new ideas and proposals.
>
> Much appreciated. Thanks.
>
> On Sat, Jan 7, 2017 at 3:49 PM, Eric Voskuil via bitcoin-dev <
> bitcoin-dev@lists.linuxfoundation.org> wrote:
>
>> On 01/07/2017 03:10 PM, Aymeric Vitte via bitcoin-dev wrote:
>> > Still wondering why you guys don't care about the ridiculous number of
>> > full nodes, no incentive to run one and what would happen if someone
>> > were to control a majority of full nodes
>>
>> The level of control over a majority of full nodes is irrelevant. If
>> this was truly a measure of control over Bitcoin someone would simply
>> spin up a bunch of nodes and take control at trivial cost.
>>
>> e
>>
>>
>
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Bitcoin Classic 1.2.0 released

2017-01-07 Thread Chris Priest via bitcoin-dev
Bitcoin Classic only changes the block format (by changing the rule
that blocks have to be 1MB or less). Miners are the only ones who make
blocks, so they are the only ones who matter when it comes to changing
block rules. Nodes, wallets and other software are not affected by
changing block rules. Unlike segwit, where *everybody* has to write
code to support the new transaction format.

Also, it doesn't matter that 75% of hashpower is made up of a dozen
people. That's how the system works; it's not a matter of opinion. If
you are just a node or just a wallet, and you want your voice to
matter, then you need to get hold of some hashpower.


On 1/7/17, David Vorick  wrote:
> No, Bitcoin classic only activates if 75% of the _miners_ adopt it. That
> says nothing about the broader network and indeed is much easier to achieve
> through politicking, bribery, coercion, and other tomfoolery as 75% of the
> hashrate is ultimately only a dozen people or so.
>
> You have plenty of channels through which you can make your announcements,
> this particular one is not okay.
>
> On Jan 7, 2017 3:12 PM, "Chris Priest via bitcoin-dev" <
> bitcoin-dev@lists.linuxfoundation.org> wrote:
>
>> Bitcoin Classic only activates if 75% of the network adopts it. That
>> is not irresponsible or dangerous. It would only be dangerous if it
>> activates at 50%, because that would create a situation where it's not
>> clear which side of the fork has the most proof of work.
>>
>> On 1/7/17, Eric Lombrozo via bitcoin-dev
>>  wrote:
>> > Your release announcement does not make it clear that Bitcoin Classic
>> > is
>> > incompatible with the current Bitcoin network and its consensus rules.
>> > It
>> > is a hard fork on mainnet with no safe activation as well as including
>> > other unsafe changes. There is also no BIP for the hard fork. There is
>> also
>> > no evidence of community wide consensus for such a hard fork. This is
>> > dangerous and irresponsible.
>> >
>> >
>> > It's wrong to announce software without correctly informing people
>> > about
>> > the contents or risks. Furthermore, there are no release notes in
>> > https://github.com/bitcoinclassic/bitcoinclassic/tree/v1.2.0/doc nor
>> > changelog. Without those, it is almost impossible for average users to
>> know
>> > what is under the hood or what has changed and time consuming for
>> > developers to assess.
>> >
>> > On Fri, Jan 6, 2017 at 2:16 AM, Tom Zander via bitcoin-dev <
>> > bitcoin-dev@lists.linuxfoundation.org> wrote:
>> >
>> >> Bitcoin Classic version 1.2.0 is now available from;
>> >>
>> >>  <https://bitcoinclassic.com/gettingstarted.html>
>> >>
>> >> This is a new major version release, including new features, various
>> >> bugfixes and performance improvements.
>> >>
>> >> This release marks a change in strategy for Bitcoin Classic, moving
>> >> from
>> >> the
>> >> very conservative block size proposal based on compromise to one where
>> >> Classic truly innovates and provides a long term solution for the
>> >> market
>> >> to
>> >> choose and leave behind the restrictions of the old.
>> >>
>> >> The most visible change in this version is the decentralised block
>> >> size
>> >> solution where node operators decide on the maximum size.
>> >>
>> >> Bitcoin Classic is focused on providing users a way to get onto the
>> >> Bitcoin
>> >> network using a high quality validating node for a large set of use
>> >> cases.
>> >> Classic presents top notch quality processes in this release, to help
>> >> anyone
>> >> running Bitcoin.
>> >>
>> >> We include in this release various projects with the beta label.
>> >> People
>> >> who
>> >> want to use the Classic node as an on-ramp to Bitcoin will find them
>> >> interesting. These projects will need to be enabled in the config by
>> >> those
>> >> that want to test them.
>> >>
>> >> More background information on this release and Classic can be seen in
>> >> this
>> >> video: https://vimeo.com/192789752
>> >> The full release notes are on github at
>> >> https://github.com/bitcoinclassic/bitcoinclassic/releases/tag/v1.2.0
>> >>
>> >> --
>> >> Tom Zander
>> >> Blog: https://zander.github.io
>> >> Vlog: https://vimeo.com/channels/tomscryptochannel
>>
>
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Bitcoin Classic 1.2.0 released

2017-01-07 Thread Chris Priest via bitcoin-dev
Bitcoin Classic only activates if 75% of the network adopts it. That
is not irresponsible or dangerous. It would only be dangerous if it
activates at 50%, because that would create a situation where it's not
clear which side of the fork has the most proof of work.

On 1/7/17, Eric Lombrozo via bitcoin-dev
 wrote:
> Your release announcement does not make it clear that Bitcoin Classic is
> incompatible with the current Bitcoin network and its consensus rules. It
> is a hard fork on mainnet with no safe activation as well as including
> other unsafe changes. There is also no BIP for the hard fork. There is also
> no evidence of community wide consensus for such a hard fork. This is
> dangerous and irresponsible.
>
>
> It's wrong to announce software without correctly informing people about
> the contents or risks. Furthermore, there are no release notes in
> https://github.com/bitcoinclassic/bitcoinclassic/tree/v1.2.0/doc nor
> changelog. Without those, it is almost impossible for average users to know
> what is under the hood or what has changed and time consuming for
> developers to assess.
>
> On Fri, Jan 6, 2017 at 2:16 AM, Tom Zander via bitcoin-dev <
> bitcoin-dev@lists.linuxfoundation.org> wrote:
>
>> Bitcoin Classic version 1.2.0 is now available from;
>>
>>  
>>
>> This is a new major version release, including new features, various
>> bugfixes and performance improvements.
>>
>> This release marks a change in strategy for Bitcoin Classic, moving from
>> the
>> very conservative block size proposal based on compromise to one where
>> Classic truly innovates and provides a long term solution for the market
>> to
>> choose and leave behind the restrictions of the old.
>>
>> The most visible change in this version is the decentralised block size
>> solution where node operators decide on the maximum size.
>>
>> Bitcoin Classic is focused on providing users a way to get onto the
>> Bitcoin
>> network using a high quality validating node for a large set of use
>> cases.
>> Classic presents top notch quality processes in this release, to help
>> anyone
>> running Bitcoin.
>>
>> We include in this release various projects with the beta label. People
>> who
>> want to use the Classic node as an on-ramp to Bitcoin will find them
>> interesting. These projects will need to be enabled in the config by
>> those
>> that want to test them.
>>
>> More background information on this release and Classic can be seen in
>> this
>> video: https://vimeo.com/192789752
>> The full release notes are on github at
>> https://github.com/bitcoinclassic/bitcoinclassic/releases/tag/v1.2.0
>>
>> --
>> Tom Zander
>> Blog: https://zander.github.io
>> Vlog: https://vimeo.com/channels/tomscryptochannel


Re: [bitcoin-dev] Committed bloom filters for improved wallet performance and SPV security

2017-01-06 Thread Chris Priest via bitcoin-dev
It's a method for determining the probability that a valid tx will be
mined in a block before that tx actually gets mined, which is useful
when accepting payments in situations where you can't wait for a full
confirmation. No one is saying all tx validation should be performed
by querying miners' mempools; that's ridiculous. Obviously, once the
tx gets its first confirmation, you go back to determining validity
the way you always have. There is no "security catastrophe".

Even if you're running a full node, you can't know for certain that
any given tx will make it into a future block. You can't be certain
that the miner who finally mines it will include your TXID rather than
another TXID that spends the same inputs to another address (a double
spend). The only way to actually know for certain is to query every
single large hashpower mempool.
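
The check described above can be sketched as follows. Note that the
is_in_mempool endpoints named in this thread are hypothetical (no pool
is known to expose such an API), so the HTTP call is injected rather
than implemented:

```python
# Hypothetical pool endpoints, following the URLs quoted in this thread.
HYPOTHETICAL_POOL_APIS = [
    "https://www.antpool.com/api/is_in_mempool",
    "https://www.f2pool.com/api/is_in_mempool",
]

def zero_conf_confidence(txid, query, pools=HYPOTHETICAL_POOL_APIS):
    """Return 'safe' only if every large pool reports the txid.

    `query(url, txid)` stands in for an HTTPS GET returning True/False.
    """
    answers = [query(url, txid) for url in pools]
    if all(answers):
        return "safe"   # very likely in the next block
    return "wait"       # a double spend may be floating around
```

This is only the optimistic pre-confirmation heuristic; after the first
confirmation the wallet falls back to normal chain validation.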

On 1/4/17, Eric Voskuil  wrote:
> On 01/04/2017 11:06 PM, Chris Priest via bitcoin-dev wrote:
>> On 1/3/17, Jonas Schnelli via bitcoin-dev
>>  wrote:
>>>
>>> There are plenty, more sane options. If you can't run your own full-node
>>> as a merchant (trivial), maybe co-use a wallet-service with centralized
>>> verification (maybe use two of them), I guess Copay would be one of
>>> those wallets (as an example). Use them in watch-only mode.
>>
>> The best way is to connect to the mempool of each miner and check to
>> see if they have your txid in their mempool.
>>
>> https://www.antpool.com/api/is_in_mempool?txid=334847bb...
>> https://www.f2pool.com/api/is_in_mempool?txid=334847bb...
>> https://bw.com/api/is_in_mempool?txid=334847bb...
>> https://bitfury.com/api/is_in_mempool?txid=334847bb...
>> https://btcc.com/api/is_in_mempool?txid=334847bb...
>>
>> If each of these services returns "True", and you know those services
>> do not engage in RBF, then you can assume with great confidence that
>> your transaction will be in the next block, or in a block very soon.
>> If any one of those services return "False", then you must assume that
>> it is possible that there is a double spend floating around, and that
>> you should wait to see if that tx gets confirmed. The problem is that
>> not every pool runs such a service to check the contents of their
>> mempool...
>>
>> This is an example of mining centralization increasing the security of
>> zero confirm.
>
> A world connected up to a few web services to determine payment validity
> is an example of a bitcoin security catastrophe.
>
> e
>
>


Re: [bitcoin-dev] Committed bloom filters for improved wallet performance and SPV security

2017-01-04 Thread Chris Priest via bitcoin-dev
On 1/3/17, Jonas Schnelli via bitcoin-dev
 wrote:
>
> There are plenty, more sane options. If you can't run your own full-node
> as a merchant (trivial), maybe co-use a wallet-service with centralized
> verification (maybe use two of them), I guess Copay would be one of
> those wallets (as an example). Use them in watch-only mode.

The best way is to connect to the mempool of each miner and check to
see if they have your txid in their mempool.

https://www.antpool.com/api/is_in_mempool?txid=334847bb...
https://www.f2pool.com/api/is_in_mempool?txid=334847bb...
https://bw.com/api/is_in_mempool?txid=334847bb...
https://bitfury.com/api/is_in_mempool?txid=334847bb...
https://btcc.com/api/is_in_mempool?txid=334847bb...

If each of these services returns "True", and you know those services
do not engage in RBF, then you can assume with great confidence that
your transaction will be in the next block, or in a block very soon.
If any one of those services returns "False", then you must assume
that a double spend may be floating around, and that you should wait
to see if that tx gets confirmed. The problem is that
not every pool runs such a service to check the contents of their
mempool...

This is an example of mining centralization increasing the security of
zero confirm. If more people mined, this method would not work as well
because it would require you to call the APIs of hundreds of different
potential block creators.


Re: [bitcoin-dev] On-going work: Coin Selection Simulation

2016-09-21 Thread Chris Priest via bitcoin-dev
From my experience working with coin selection algorithms, there are
three "goals" to it:

1. Minimize cost
2. Maximize privacy
3. Minimize UTXO footprint

You can build a coin selection algorithm that achieves 1 and 3, but
it will sacrifice 2. If you want coin selection to maximize your
privacy, it will happen at the expense of UTXO footprint and fees.
Minimizing cost usually also minimizes UTXO footprint, but not always.
To completely minimize UTXO footprint, you sacrifice a bit on cost and
a lot on privacy.
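
A toy sketch (not production wallet code) of the tension between goals
1 and 3. The per-input fee assumes ~148 vbytes per input at a made-up
50 sat/vbyte rate; both strategies ignore goal 2 (privacy) entirely:

```python
FEE_PER_INPUT = 148 * 50  # satoshis per input, at an assumed 50 sat/vbyte

def select_min_fee(utxos, target):
    """Fewest inputs: cheapest tx, but dust stays behind (bigger UTXO set)."""
    for u in sorted(utxos):
        if u >= target:
            return [u]          # a single covering input
    picked = []                 # fall back to largest-first accumulation
    for u in sorted(utxos, reverse=True):
        picked.append(u)
        if sum(picked) >= target:
            return picked
    raise ValueError("insufficient funds")

def select_min_footprint(utxos, target):
    """Smallest-first sweep: consolidates dust at the cost of extra fees."""
    picked = []
    for u in sorted(utxos):
        picked.append(u)
        if sum(picked) >= target:
            return picked
    raise ValueError("insufficient funds")
```

For `utxos = [1000, 2000, 50000]` and a 1500-satoshi target, the first
strategy spends one input while the second spends two, paying roughly
double the input fees to shrink the UTXO set.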

On 9/21/16, Andreas Schildbach via bitcoin-dev
 wrote:
> On 09/21/2016 02:58 PM, Murch via bitcoin-dev wrote:
>
>> Android Wallet for Bitcoin
>
> The correct name is Bitcoin Wallet, or Bitcoin Wallet for Android (if
> you want to refer to the Android version).
>
>


Re: [bitcoin-dev] Capital Efficient Honeypots w/ "Scorched Earth" Doublespending Protection

2016-08-24 Thread Chris Priest via bitcoin-dev
How does your system protect against insider attacks? How do you know
the money was stolen by someone who compromised server #4, and not by
the person who set up server #4? It is my understanding that most
attacks these days are inside jobs.

On 8/24/16, Peter Todd via bitcoin-dev
 wrote:
> On Thu, Aug 25, 2016 at 01:37:34AM +1000, Matthew Roberts wrote:
>> Really nice idea. So its like a smart contract that incentivizes
>> publication that a server has been hacked? I also really like how the
>> funding has been handled -- with all the coins stored in the same address
>> and then each server associated with a unique signature. That way, you
>> don't have to split up all the coins among every server and reduce the
>> incentive for an attacker yet you can still identify which server was
>> hacked.
>>
>> It would be nice if after the attacker broke into the server that they
>> were
>> also incentivized to act on the information as soon as possible
>> (revealing
>> early on when the server was compromised.) I suppose you could split up
>> the
>> coins into different outputs that could optimally be redeemed by the
>> owner
>> at different points in the future -- so they're incentivzed to act lest
>
> Remember that it's _always_ possible for the owner to redeem the coins at
> any
> time, and there's no way to prevent that.
>
> The incentive for the intruder to collect the honeypot in a timely manner
> is
> simple: once they've broken in, the moment the honeypot owner learns about
> the
> compromise they have every reason to attempt to recover the funds, so the
> intruder needs to act as fast as possible to maximize their chances of
> being
> rewarded.
>
> --
> https://petertodd.org 'peter'[:-1]@petertodd.org
>


[bitcoin-dev] *Changing* the blocksize limit

2016-08-06 Thread Chris Priest via bitcoin-dev
Because the blocksize limit is denominated in bytes, miners choose
transactions to add to a block based on the fee/byte ratio. This means
that if you make a transaction with a lot of inputs, your transaction
will be very big, and you'll have to pay a lot in fees to get it
included in a block.

For a long time I have been of the belief that it is a flaw in bitcoin
that you have to pay more to move coins that were sent to you via many
small-value UTXOs than coins sent to you through a single high-value
UTXO. There are many legitimate uses of bitcoin where you receive
money in very small increments (such as microtransactions). This is
the basis for my "wildcard inputs" proposal, now known as BIP131.
That BIP was rejected because it requires a database index, which
people thought would make bitcoin not scale, which I think is complete
malarkey, but it is what it is. A way to achieve the same effect
without the database index recently occurred to me.

If the blocksize limit were denominated in outputs, miners would
choose transactions based on maximum fee per output. This would
essentially make it free to include an input in a transaction.
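
A sketch of how the two metrics rank transactions differently. The
fee, size, and output numbers are made up for illustration: a
dust-sweeping consolidation (many inputs, one output) versus a typical
spend (one input, two outputs):

```python
# Illustrative transactions; all figures are invented.
consolidation = {"name": "sweep", "fee": 10_000, "vbytes": 15_000, "outputs": 1}
simple_spend = {"name": "spend", "fee": 5_000, "vbytes": 250, "outputs": 2}

def by_fee_per_byte(txs):
    """Today's rule: the input-heavy sweep ranks last despite its larger fee."""
    return sorted(txs, key=lambda t: t["fee"] / t["vbytes"], reverse=True)

def by_fee_per_output(txs):
    """Proposed rule: inputs are effectively free, so the sweep ranks first."""
    return sorted(txs, key=lambda t: t["fee"] / t["outputs"], reverse=True)
```

Under fee/byte the sweep scores ~0.67 sat/vbyte against the spend's 20,
so it waits; under fee/output the sweep's 10,000 sat/output beats the
spend's 2,500, which is exactly the consolidation incentive argued for
above.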

If the blocksize limit were removed and replaced with a "block output
limit", it would have multiple positive effects. First off, like I
said earlier, it would incentivize microtransactions. Secondly it
would serve to decrease the UTXO set. As I described in the text of
BIP131, as blocks fill up and fees rise, there is a "minimum
profitability to include an input to a transaction" which increases.
At the time I wrote BIP131, it was something like 2 cents: any UTXO
worth less than 2 cents was not economical to add to a transaction,
and therefore likely never to be spent (unless blocks get bigger and
fees drop). This contributes to the "UTXO bloat problem" which a lot
of people talk about being a big problem.

If the blocksize limit is to be changed to a block output limit, the
number the limit is set to should be roughly the number of outputs
found in 1MB blocks today. This way, the change should be considered
non-controversial. I think it's silly that some people consider it a
good thing to keep usage restricted, but again, it is what it is.

Blocks could be bigger than 1MB, but the extra data in a block would
not come from more people using bitcoin, but rather from existing
users spending inputs to decrease the UTXO set.

It would also bring about data that can be used to determine how to
scale bitcoin in the future. For instance, we have *no idea* how the
network will handle blocks bigger than 1MB, simply because the network
has never seen blocks bigger than 1MB. People have set up private
networks for testing bigger blocks, but that's not quite the same as
1MB+ blocks on the actual live network. This change will allow us to
see what actually happens when bigger blocks get published.

Why is this change a bad idea?


Re: [bitcoin-dev] BIP 151

2016-07-02 Thread Chris Priest via bitcoin-dev
On 6/30/16, Erik Aronesty via bitcoin-dev
 wrote:
>
> I would like to see a PGP-like "web of trust" proposal for both the
> security of the bitcoin network itself /and/ (eventually) of things like
> transmission of bitcoin addresses.
>
> Something where nodes of any kind (full, spv, mobile wallets) can
> /optionally/ accumulate trust over time and are capable of verifying the
> identity of other nodes in that web.
>

Isn't the system already functioning in this way? Bitcoin exchanges
and block explorers have a reputation earned over many years of
serving the community. Their HTTPS certificate acts like
a public/private key, and when your wallet makes a request to their
server, the keys are automatically checked for validity by the
underlying http library.
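
To make the "automatically checked" claim concrete, here is what that
check looks like with Python's standard library: the default SSL
context verifies the server certificate against the system CA store
and matches the hostname before any request data flows. This is an
illustration of the mechanism, not part of the original post:

```python
import socket
import ssl

def default_tls_context() -> ssl.SSLContext:
    """The stdlib default: CA-chain verification plus hostname matching.

    This is the 'automatic' validity check HTTP libraries perform.
    """
    # create_default_context() sets verify_mode=CERT_REQUIRED
    # and check_hostname=True.
    return ssl.create_default_context()

def fetch_peer_cert(host: str, port: int = 443) -> dict:
    """Connect and return the verified server certificate.

    A bad chain or mismatched hostname raises
    ssl.SSLCertVerificationError instead of returning.
    """
    with socket.create_connection((host, port), timeout=10) as sock:
        ctx = default_tls_context()
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert()
```

Note this is certificate-authority trust, not a web of trust: the
reputation lives in the CA hierarchy rather than accumulating between
peers, which is the distinction Erik's proposal is drawing.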


[bitcoin-dev] BIP draft: Memo server

2016-06-01 Thread Chris Priest via bitcoin-dev
I'm currently working on a wallet called multiexplorer. You can check
it at https://multiexplorer.com/wallet

It supports all the BIPs, including the ones that let you export and
import based on a 12-word mnemonic. This lets you easily import
addresses from one wallet to the next. For instance, you can
copy+paste your 12-word mnemonic from Coinbase or Copay into
Multiexplorer wallet and all of your address and transaction history
is imported (except Copay doesn't support altcoins, so it will just be
your BTC balance that shows up). It's actually pretty cool, but not
everything is transferred over.

For instance, some people like to add little notes such as "paid sally
for lunch at Taco Bell", or "Paid rent" to each transaction they make
through their wallet's UI. When you export and import into another
wallet these memos are lost, as there is no way for this data to be
encoded into the mnemonic.

For my next project, I want to make a standalone system for archiving
and serving these memos. After it's been built and every wallet
supports the system, you should be able to move from one wallet to
another by just copy+pasting the mnemonic, without losing your memos.
This will make it easier for people to move off of old wallets that
may not be safe anymore to more modern wallets with better security
features. Some people may want to switch wallets, but since it's much
harder to back up memos, they may feel stuck using a certain wallet.
This is bad because it creates lock-in.

I wrote up some details of how the system will work:

https://github.com/priestc/bips/blob/master/memo-server.mediawiki

Basically the memos are encrypted and then sent to a server where the
memo is stored. An API exists that allows wallets to get the memos
through an HTTPS interface. There isn't one single memo server, but
multiple memo servers all run by different people. These memo servers
share data amongst each other through a sync process.

The specifics of how the memos will be encrypted have not been set in
stone yet. The memos will be publicly propagated, so it is important
that they are encrypted strongly. I'm not a cryptography expert, so
someone else has to decide on the scheme that is appropriate.
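
A sketch of the wallet-side round trip the draft implies. Since the
encryption scheme is explicitly undecided, the cipher below (SHA-256
in counter mode, keyed from wallet material) is only a placeholder for
whatever the final spec picks, and the record shape is an assumption:

```python
import hashlib

def _keystream(key: bytes, length: int) -> bytes:
    """Placeholder cipher: SHA-256 in counter mode. NOT the final scheme."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt_memo(wallet_key: bytes, txid: str, memo: str) -> dict:
    """Build the record a wallet would POST to a memo server."""
    data = memo.encode()
    ks = _keystream(wallet_key + txid.encode(), len(data))
    return {"txid": txid, "memo": bytes(a ^ b for a, b in zip(data, ks)).hex()}

def decrypt_memo(wallet_key: bytes, record: dict) -> str:
    """What an importing wallet does after fetching the record over HTTPS."""
    ct = bytes.fromhex(record["memo"])
    ks = _keystream(wallet_key + record["txid"].encode(), len(ct))
    return bytes(a ^ b for a, b in zip(ct, ks)).decode()
```

The point of the sketch is the flow, not the cryptography: memos are
encrypted client-side with key material derived from the mnemonic, so
the publicly propagated records reveal nothing, and any wallet holding
the same mnemonic can decrypt them after an import.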


Re: [bitcoin-dev] Making UTXO Set Growth Irrelevant With Low-Latency Delayed TXO Commitments

2016-05-17 Thread Chris Priest via bitcoin-dev
On 5/17/16, Eric Lombrozo via bitcoin-dev
 wrote:
> Nice!
>
> We’ve been talking about doing this forever and it’s so desperately needed.
>

"So desperately needed"? How do you figure? The UTXO set is currently
1.5 GB. What kind of computer these days doesn't have 1.5 GB of
memory? Since you people insist on keeping the blocksize limit at 1MB,
the UTXO set is stuck growing at a tiny rate. Most consumer hardware
sold these days has 8GB or more RAM; it'll take decades before the
UTXO set comes close to not fitting into 8 GB of memory.

Maybe 30 or 40 years from now I can see this change being "so
desperately needed", when nodes are falling off because the UTXO set
is too large, but that day is not today.


Re: [bitcoin-dev] BIP proposal: Increase block size limit to 2 megabytes

2016-02-07 Thread Chris Priest via bitcoin-dev
Segwit requires work from exchanges, wallets and services in order for
adoption to happen. This is because segwit changes the rules regarding
the Transaction data structure. A blocksize increase does not change
the Transaction rules at all. The blocksize increase is a change to
the Block structure. Most wallets these days are Block agnostic.

Essentially, if a client has been built using a library that abstracts
away the block, then that client's *code* does not need to be updated
to handle this blocksize limit change. An example is any service using
the Bitcore JavaScript library. Any wallet built using Bitcore does
not need any changes to handle a blocksize upgrade. I have one project
that is live that was built using Bitcore. Before, during, and after
the fork, I do not need to lift a finger *codewise* to keep my project
still working. Same goes for projects that are built using
pybitcointools, as well as probably a few other libraries.

A wallet using Bitcore also has to work in tandem with a blockchain
API. Bitcore itself does not provide any blockchain data; you have to
get that somewhere else, such as a node API. That API has to be backed
by a node that is following the upgraded chain. My wallet, for
instance, is built on top of Bitpay Insight. If Bitpay doesn't upgrade
their node to follow the 2MB chain, then I must either...

1) Change my wallet to use my own Bitpay Insight. (Insight is open
source, so you can host your own using any node client you want.)
2) Switch to another API, such as Toshi, Blockr.io, or
Blockchain.info, or ... (there are dozens to choose from)
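
The backend swap described in options 1 and 2 can be sketched as a
one-line configuration change. The endpoint path follows Insight's
public `/api/addr/<address>/balance` convention; the hostnames are
placeholders, and the HTTP call is injected so the sketch stays
self-contained:

```python
class BlockchainAPI:
    """Minimal Insight-style client, parameterized by base URL."""

    def __init__(self, base_url, fetch):
        self.base_url = base_url.rstrip("/")
        self.fetch = fetch  # injected HTTP GET, e.g. built on urllib

    def balance(self, address):
        # Insight-style route; the wallet code never changes when the
        # base_url points at a different (upgraded) node.
        return self.fetch(f"{self.base_url}/api/addr/{address}/balance")

# Switching backends is configuration, not code (placeholder hosts):
#   api = BlockchainAPI("https://insight.bitpay.com", http_get)
#   api = BlockchainAPI("https://my-own-insight.example.com", http_get)
```

This is the sense in which Bitcore-based wallets are "block agnostic":
all chain-following logic lives behind the API host, so only the node
behind that host needs to understand 2MB blocks.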

A blockchain service such as a blockexplorer does need to be upgraded
to handle a blocksize hardfork. The only work required is updating
their node software so that the MAX_BLOCKSIZE parameter is set to 2MB.
This can be done by either changing the source code themselves, or by
installing an alternate client such as XT, Classic, or Unlimited.

On 2/6/16, Adam Back via bitcoin-dev
 wrote:
> Hi Gavin
>
> It would probably be a good idea to have a security considerations
> section, also, is there a list of which exchange, library, wallet,
> pool, stats server, hardware etc you have tested this change against?
>
> Do you have a rollback plan in the event the hard-fork triggers via
> false voting as seemed to be prevalent during XT?  (Or rollback just
> as contingency if something unforseen goes wrong).
>
> How do you plan to monitor and manage security through the hard-fork?
>
> Adam
>
> On 6 February 2016 at 16:37, Gavin Andresen via bitcoin-dev
>  wrote:
>> Responding to "28 days is not long enough" :
>>
>> I keep seeing this claim made with no evidence to back it up.  As I said,
>> I
>> surveyed several of the biggest infrastructure providers and the btcd
>> lead
>> developer and they all agree "28 days is plenty of time."
>>
>> For individuals... why would it take somebody longer than 28 days to
>> either
>> download and restart their bitcoind, or to patch and then re-run (the
>> patch
>> can be a one-line change MAX_BLOCK_SIZE from 100 to 200)?
>>
>> For the Bitcoin Core project:  I'm well aware of how long it takes to
>> roll
>> out new binaries, and 28 days is plenty of time.
>>
>> I suspect there ARE a significant percentage of un-maintained full
>> nodes--
>> probably 30 to 40%. Losing those nodes will not be a problem, for three
>> reasons:
>> 1) The network could shrink by 60% and it would still have plenty of open
>> connection slots
>> 2) People are committing to spinning up thousands of supports-2mb-nodes
>> during the grace period.
>> 3) We could wait a year and pick up maybe 10 or 20% more.
>>
>> I strongly disagree with the statement that there is no cost to a longer
>> grace period. There is broad agreement that a capacity increase is needed
>> NOW.
>>
>> To bring it back to bitcoin-dev territory:  are there any TECHNICAL
>> arguments why an upgrade would take a business or individual longer than
>> 28
>> days?
>>
>>
>> Responding to Luke's message:
>>
>>> On Sat, Feb 6, 2016 at 1:12 AM, Luke Dashjr via bitcoin-dev
>>>  wrote:
>>> > On Friday, February 05, 2016 8:51:08 PM Gavin Andresen via bitcoin-dev
>>> > wrote:
>>> >> Blog post on a couple of the constants chosen:
>>> >>   http://gavinandresen.ninja/seventyfive-twentyeight
>>> >
>>> > Can you put this in the BIP's Rationale section (which appears to be
>>> > mis-named
>>> > "Discussion" in the current draft)?
>>
>>
>> I'll rename the section and expand it a little. I think standards
>> documents
>> like BIPs should be concise, though (written for implementors), so I'm
>> not
>> going to recreate the entire blog post there.
>>
>>>
>>> >
>>> >> Signature operations in un-executed branches of a Script are not
>>> >> counted
>>> >> OP_CHECKMULTISIG evaluations are counted accurately; if the signature
>>> >> for a
>>> >> 1-of-20 OP_CHECKMULTISIG is satisified by the public key nearest the
>>> >> top
>>> >> of the execution stack, it is counted as one signature operation

Re: [bitcoin-dev] BIP proposal: Increase block size limit to 2 megabytes

2016-02-06 Thread Chris Priest via bitcoin-dev
Its mostly a problem for exchanges and miners. Those entities need to
be on the network 100% of the time because they are using the network
100% of the time. A normal wallet user isn't taking payments every few
minutes like the exchanges are. "Getting booted off the network" is
not something to worry about for normal wallet users.

If miners aren't up to date, that is the biggest problem. A sudden
drop in hashpower will affect the network for all users, including
normal wallet users (who will have to wait longer for confirmations).
Miners must not be booted off the network ever. Hashpower voting is
the best way to make sure this never happens.

On 2/6/16, Tom Zander via bitcoin-dev
 wrote:
> On Saturday, February 06, 2016 06:09:21 PM Jorge Timón via bitcoin-dev
> wrote:
>> None of the reasons you list say anything about the fact that "being
>> lost"
>> (kicked out of the network) is a problem for those node's users.
>
> That's because its not.
>
> If you have a node that is "old" your node will stop getting new blocks.
> The node will essentially just say "x-hours behind" with "x" getting larger
>
> every hour. Funds don't get confirmed. etc.
>
> After upgrading the software they will see the new reality of the network.
>
> Nobody said its a problem, because its not.
>
> --
> Tom Zander
>


[bitcoin-dev] "Hashpower liquidity" is more important than "mining centralization"

2015-12-25 Thread Chris Priest via bitcoin-dev
The term "mining centralization" is very common. It comes up in almost
every discussion relating to bitcoin these days. Some people say
"mining is already centralized" and other such things. I think this is
a very bad term, and people should stop saying those things. Let me
explain:

Under normal operations, if every single miner in the network were
under one roof, nothing would happen. If there were only one mining
pool that everyone had to use, this would have no effect on the system
whatsoever. The only time this would be a problem is if that one pool
were to censor transactions, or in any other way operate out of the
normal.

Right now, the network is in a period of peace. There are no
governments trying to coerce mining pools into censoring transactions
or otherwise disrupting the network. For all we know, the next 500
years of bitcoin's history could be filled with completely peaceful
operation with no government interference at all.

*If* for some reason a future government were to decide that it wants
to disrupt the bitcoin network, then all the hashpower being under one
control will be problematic if, and only if, hashpower liquidity is
very low. Hashpower liquidity is the measure of how easily hashpower
can move from one pool to another. If all the mining hardware on the
network is mining on one pool and **will never or can never switch to
another pool**, then hashpower liquidity is very low. If all the
hashpower on the network can very easily move to another pool, then
hashpower liquidity is very high.

If the one single mining pool were to start censoring transactions and
there were no other pool to move to, then hashpower liquidity is very
low, and that would be very bad for bitcoin. If there were dozens of
other pools in existence, and all the mining hardware owners could
switch to another pool easily, then hashpower liquidity is very high,
and the censorship attack would end as soon as the hashpower moves to
other pools.

My argument is that hashpower liquidity is a much more important
metric to think about than simply "mining centralization". The
difference between the two terms is that one describes a temporary
condition, while the other measures a more permanent condition. Both
are hard to measure in concrete terms.

Instead of saying "this change will increase mining centralization" we
should instead be thinking "will this change increase hashpower
liquidity?".

Hopefully people will understand this concept and the term "mining
centralization" will become archaic.


Re: [bitcoin-dev] We need to fix the block withholding attack

2015-12-19 Thread Chris Priest via bitcoin-dev
On 12/19/15, Emin Gün Sirer  wrote:
>
> Chris Priest is confusing these attacks with selfish mining, and further,
> his characterization of selfish mining is incorrect. Selfish Mining is
> guaranteed to yield profits for any pool over 33% (as a result, Nick
> Szabo has dubbed this the "34% attack") and it may pay off even
> below that point if the attacker is well-positioned in the network;
> or it may not, depending on the makeup of the rest of the pools
> as well as the network characteristics (the more centralized
> and bigger the other pools are, the less likely it is to pay off). There
> was a lot of noise in the community when the SM paper came out,
> so there are tons of incorrect response narrative out there. By now,
> everyone who seems to be Bitcoin competent sees SM as a
> concern, and Ethereum has already adopted our fix. I'd have hoped
> that a poster to this list would be better informed than to repeat the
> claim that "majority will protect Bitcoin" to refute a paper whose title
> is "majority is not enough."

http://www.coindesk.com/bitcoin-mining-network-vulnerability/

just sayin'...

But anyway, I agree with you on the rest of your email. This is only
really an attack from the perspective of the mining pool. From the
user's perspective, it's not an attack at all. Imagine your aunt who
has bitcoin in an SPV wallet on her iPhone. Does she care that two
mining pools are attacking each other? It has nothing to do with her,
or with most users of bitcoin either. From the bitcoin user's
perspective, the mining pool landscape *should* be constantly
changing. Fixing this "attack" is promoting mining pool statism.
Existing mining pools would have an advantage over up-and-coming
mining pools. That is not an advantage that is best for bitcoin from
the user's perspective.

Now, on the other hand, if this technique is used so much that too
many pools get shut down and the difficulty starts to decrease, *then*
maybe it might be time to start thinking about fixing this issue. The
difficulty dropping means the security of the network is decreased,
which *does* have an effect on every user.


Re: [bitcoin-dev] We need to fix the block withholding attack

2015-12-19 Thread Chris Priest via bitcoin-dev
On 12/19/15, jl2012  wrote:
> Chris Priest via bitcoin-dev wrote on 2015-12-19 22:34:
>> Block witholding attacks are only possible if you have a majority of
>> hashpower. If you only have 20% hashpower, you can't do this attack.
>> Currently, this attack is only a theoretical attack, as the ones with
>> all the hashpower today are not engaging in this behavior. Even if
>> someone who had a lot of hashpower decided to pull off this attack,
>> they wouldn't be able to disrupt much. Once that time comes, then I
>> think this problem should be solved, until then it should be a low
>> priority. There are more important things to work on in the meantime.
>>
>
> This is not true. For a pool with 5% total hash rate, an attacker only
> needs 0.5% of hash rate to sabotage 10% of their income. It's already
> enough to kill the pool
>
>

This raises the question: If this is such a devastating attack, then why
hasn't this attack brought down every pool in existence? As far as I'm
aware, there are many pools in operation despite this possibility.
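
For concreteness, here is the arithmetic behind jl2012's figures,
under a simplified model (steady hashrates, proportional share
payout). The attacker points 0.5% of the network hashrate at a pool
honestly controlling 5%, submits shares normally, but withholds any
valid blocks it finds:

```python
honest = 0.05      # pool's honest share of network hashrate
attacker = 0.005   # attacker hashrate joining the pool

# The pool still finds blocks at the honest rate (withheld blocks are
# simply lost), but rewards are now split across 5.5% worth of shares,
# so honest members lose the attacker's fraction of the pool.
income_ratio = honest / (honest + attacker)   # 10/11 of previous income
loss = 1 - income_ratio                       # ~9.1%, roughly the "10%" quoted
```

On thin pool margins a ~9% revenue cut is fatal, which is the sense in
which this tiny hashrate commitment "kills the pool".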


Re: [bitcoin-dev] We need to fix the block withholding attack

2015-12-19 Thread Chris Priest via bitcoin-dev
Then shouldn't this be something the pool deals with, not the bitcoin protocol?

On 12/19/15, Matt Corallo  wrote:
> Peter was referring to pool-block-withholding, not selfish mining.
>
> On December 19, 2015 7:34:26 PM PST, Chris Priest via bitcoin-dev
>  wrote:
>>Block witholding attacks are only possible if you have a majority of
>>hashpower. If you only have 20% hashpower, you can't do this attack.
>>Currently, this attack is only a theoretical attack, as the ones with
>>all the hashpower today are not engaging in this behavior. Even if
>>someone who had a lot of hashpower decided to pull off this attack,
>>they wouldn't be able to disrupt much. Once that time comes, then I
>>think this problem should be solved, until then it should be a low
>>priority. There are more important things to work on in the meantime.
>>
>>On 12/19/15, Peter Todd via bitcoin-dev
>> wrote:
>>> At the recent Scaling Bitcoin conference in Hong Kong we had a
>>chatham
>>> house rules workshop session attending by representitives of a super
>>> majority of the Bitcoin hashing power.
>>>
>>> One of the issues raised by the pools present was block withholding
>>> attacks, which they said are a real issue for them. In particular,
>>pools
>>> are receiving legitimate threats by bad actors threatening to use
>>block
>>> withholding attacks against them. Pools offering their services to
>>the
>>> general public without anti-privacy Know-Your-Customer have little
>>> defense against such attacks, which in turn is a threat to the
>>> decentralization of hashing power: without pools only fairly large
>>> hashing power installations are profitable as variance is a very real
>>> business expense. P2Pool is often brought up as a replacement for
>>pools,
>>> but it itself is still relatively vulnerable to block withholding,
>>and
>>> in any case has many other vulnerabilities and technical issues that
>>has
>>> prevented widespread adoption of P2Pool.
>>>
>>> Fixing block withholding is relatively simple, but (so far) requires
>>a
>>> SPV-visible hardfork. (Luke-Jr's two-stage target mechanism) We
>>should
>>> do this hard-fork in conjunction with any blocksize increase, which
>>will
>>> have the desirable side effect of clearly showing consent by the entire
>>> ecosystem, SPV clients included.
>>>
>>>
>>> Note that Ittay Eyal and Emin Gun Sirer have argued(1) that block
>>> withholding attacks are a good thing, as in their model they can be
>>used
>>> by small pools against larger pools, disincentivising large pools.
>>> However this argument is academic and not applicable to the real
>>world,
>>> as a much simpler defense against block withholding attacks is to use
>>> anti-privacy KYC and the legal system combined with the variety of
>>> withholding detection mechanisms only practical for large pools.
>>> Equally, large hashing power installations - a dangerous thing for
>>> decentralization - have no block withholding attack vulnerabilities.
>>>
>>> 1) http://hackingdistributed.com/2014/12/03/the-miners-dilemma/
>>>
>>> --
>>> 'peter'[:-1]@petertodd.org
>>> 0188b6321da7feae60d74c7b0becbdab3b1a0bd57f10947d
>>>
>
>


Re: [bitcoin-dev] Segregated witness softfork with moderate adoption has very small block size effect

2015-12-19 Thread Chris Priest via bitcoin-dev
By that same logic, any wallet that implemented P2SH is also voting
for an increased block size, since P2SH results in smaller
transactions, just as SW does.

On 12/19/15, Peter Todd via bitcoin-dev
 wrote:
> On Sat, Dec 19, 2015 at 11:49:25AM -0500, jl2012 via bitcoin-dev wrote:
>> I have done some calculation for the effect of a SW softfork on the
>> actual total block size.
>
> Note how the fact that segwit needs client-side adoption to enable an
> actual blocksize increase can be a good thing: it's a clear sign that
> the ecosystem as a whole has opted-into a blocksize increase.
>
> Not as good as a direct proof-of-stake vote, and somewhat coercive as a
> vote as you pay lower fees, but it's an interesting side-effect.
>
> --
> 'peter'[:-1]@petertodd.org
> 0188b6321da7feae60d74c7b0becbdab3b1a0bd57f10947d
>


Re: [bitcoin-dev] We need to fix the block withholding attack

2015-12-19 Thread Chris Priest via bitcoin-dev
Block withholding attacks are only possible if you have a majority of
hashpower. If you only have 20% hashpower, you can't do this attack.
Currently, this attack is only theoretical, as those with
all the hashpower today are not engaging in this behavior. Even if
someone with a lot of hashpower decided to pull off this attack,
they wouldn't be able to disrupt much. If that time ever comes, then I
think this problem should be solved; until then it should be a low
priority. There are more important things to work on in the meantime.

On 12/19/15, Peter Todd via bitcoin-dev
 wrote:
> At the recent Scaling Bitcoin conference in Hong Kong we had a Chatham
> House rules workshop session attended by representatives of a super
> majority of the Bitcoin hashing power.
>
> One of the issues raised by the pools present was block withholding
> attacks, which they said are a real issue for them. In particular, pools
> are receiving legitimate threats by bad actors threatening to use block
> withholding attacks against them. Pools offering their services to the
> general public without anti-privacy Know-Your-Customer have little
> defense against such attacks, which in turn is a threat to the
> decentralization of hashing power: without pools only fairly large
> hashing power installations are profitable as variance is a very real
> business expense. P2Pool is often brought up as a replacement for pools,
> but it itself is still relatively vulnerable to block withholding, and
> in any case has many other vulnerabilities and technical issues that has
> prevented widespread adoption of P2Pool.
>
> Fixing block withholding is relatively simple, but (so far) requires a
> SPV-visible hardfork. (Luke-Jr's two-stage target mechanism) We should
> do this hard-fork in conjunction with any blocksize increase, which will
> have the desirable side effect of clearly showing consent by the entire
> ecosystem, SPV clients included.
>
>
> Note that Ittay Eyal and Emin Gun Sirer have argued(1) that block
> withholding attacks are a good thing, as in their model they can be used
> by small pools against larger pools, disincentivising large pools.
> However this argument is academic and not applicable to the real world,
> as a much simpler defense against block withholding attacks is to use
> anti-privacy KYC and the legal system combined with the variety of
> withholding detection mechanisms only practical for large pools.
> Equally, large hashing power installations - a dangerous thing for
> decentralization - have no block withholding attack vulnerabilities.
>
> 1) http://hackingdistributed.com/2014/12/03/the-miners-dilemma/
>
> --
> 'peter'[:-1]@petertodd.org
> 0188b6321da7feae60d74c7b0becbdab3b1a0bd57f10947d
>
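For readers unfamiliar with the two-stage idea referenced above: the goal is that a hasher cannot tell which of its shares are valid blocks, because full validity depends on a value only the pool knows. The sketch below illustrates only that property; it is a toy, not Luke-Jr's actual construction, and is not compatible with Bitcoin's real proof of work.

```python
import hashlib

# Toy two-stage proof-of-work. Stage one is the public share target the
# hasher can check; stage two mixes in a secret held by the pool, so the
# hasher cannot distinguish a winning block from an ordinary share.

SHARE_TARGET = 2 ** 252      # easy: roughly 1 in 16 hashes is a share
BLOCK_TARGET = 2 ** 250      # harder second-stage condition

def h(data: bytes) -> int:
    return int.from_bytes(hashlib.sha256(data).digest(), "big")

def is_share(header: bytes) -> bool:
    return h(header) < SHARE_TARGET              # the hasher can check this

def is_block(header: bytes, pool_secret: bytes) -> bool:
    # Full validity needs the secret, which the hasher never sees.
    return is_share(header) and h(header + pool_secret) < BLOCK_TARGET

secret = b"pool-secret"
candidates = [n.to_bytes(4, "big") for n in range(50_000)]
shares = [c for c in candidates if is_share(c)]
blocks = [s for s in shares if is_block(s, secret)]
# A withholding miner sees only `shares`; without `secret` every share
# looks the same, so there is nothing to selectively withhold.
```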


Re: [bitcoin-dev] Forget dormant UTXOs without confiscating bitcoin

2015-12-13 Thread Chris Priest via bitcoin-dev
> In none of these cases do you lose anything.

Nor do you gain anything. Archive nodes will still need to exist
precisely because paper wallets don't include UTXO data. This is like
adding the ability to partially seed a movie with BitTorrent: you
still need someone who has the whole thing to be participating in
order for anyone to play the movie.

This isn't going to kill bitcoin, but it won't make it any better.
Every paper wallet would have to be re-printed with UTXO data
included. It doesn't even solve the core problem because someone can
still flood the network with lots of UTXOs, as long as they spend them
quickly.

On 12/13/15, Gregory Maxwell  wrote:
> On Sun, Dec 13, 2015 at 8:13 AM, Chris Priest  wrote:
>> Lets say it's 2050 and I want to sweep a paper wallet I created in
>> 2013. I can't just make the TX and send it to the network, I have to
>> first contact an "archive node" to get the UTXO data in order to make
>> the TX. How is this better than how the system works today?
>
> You already are in that boat. If your paper wallet has only the
> private key (as 100% of them do today), you'll have no idea what coins
> have been assigned to it, or what their TXids are. You'll need to
> contact a public index (which isn't a service existing nodes provide)
> or synchronize the full blockchain history to find it. Both are also
> sufficient for jl2012's (/Petertodd's STXO), they'd only be providing
> you with somewhat more data.  If instead, you insist that you'd
> already be running a full node and not have to wait for the sync, then
> again you'd also be your own archive. In none of these cases do you
> lose anything.
>


Re: [bitcoin-dev] Forget dormant UTXOs without confiscating bitcoin

2015-12-13 Thread Chris Priest via bitcoin-dev
I don't like this scheme at all. It doesn't seem to make bitcoin
better, it makes it worse.

Lets say it's 2050 and I want to sweep a paper wallet I created in
2013. I can't just make the TX and send it to the network, I have to
first contact an "archive node" to get the UTXO data in order to make
the TX. How is this better than how the system works today?

Since many people are going to be holding BTC long term (store of
value is a first-class feature of bitcoin), this scheme is going to
affect pretty much all users.

These archive nodes will be essential to network's operation. If there
are no running archive nodes, the effect on the network is the same as
the network today without any full nodes.

Anyways, UTXO size is a function of the number of users, rather than a
function of time. If tons of people join the network, the UTXO set will
still increase no matter what. All this change is going to do is make it
harder for people to use bitcoin. A person can still generate 1GB of
UTXO data, but as long as they spend those UTXOs quickly, they
are still using those resources.

IMO, wildcard inputs is still the best way to limit the UTXO set.


On 12/12/15, Gregory Maxwell via bitcoin-dev
 wrote:
> On Sun, Dec 13, 2015 at 1:00 AM, Vincent Truong via bitcoin-dev
>  wrote:
>> have run a node/kept their utxo before they were aware of this change and
>> then realise miners have discarded their utxo. Oops?
>
> I believe you have misunderstood jl2012's post.  His post does not
> cause the outputs to become discarded. They are still spendable,
> but the transactions must carry a membership proof to spend them.
> They don't have to have stored the data themselves, but they must
> get it from somewhere-- including archive nodes that serve this
> purpose rather than having every full node carry all that data forever.
>
> Please be conservative with the send button. The list loses its
> utility if every moderately complex idea is hit with reflexive
> opposition by people who don't understand it.
>
> Peter Todd has proposed something fairly similar with "STXO
> commitments". The primary argument against this kind of approach that
> I'm aware of is that the membership proofs get pretty big, and if too
> aggressive this trades bandwidth for storage, and storage is usually
> the cheaper resource. Though at least the membership proofs could be
> omitted when transmitting to a node which has signaled that it has
> kept the historical data anyways.
>
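The membership proofs discussed in the quote above can be sketched with an ordinary Merkle tree: validating nodes keep only a root commitment, and a spender (or an archive node on their behalf) supplies the path proving that an output is in the committed set. This is a minimal illustration of the general technique, not any specific TXO/STXO-commitment design.

```python
import hashlib

# Minimal Merkle membership proof: prove a UTXO belongs to a committed
# set without every node storing the set (illustrative only).

def H(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def merkle_root(leaves):
    layer = [H(l) for l in leaves]
    while len(layer) > 1:
        if len(layer) % 2: layer.append(layer[-1])   # duplicate odd tail
        layer = [H(layer[i] + layer[i + 1]) for i in range(0, len(layer), 2)]
    return layer[0]

def merkle_proof(leaves, index):
    layer, path = [H(l) for l in leaves], []
    while len(layer) > 1:
        if len(layer) % 2: layer.append(layer[-1])
        # Record the sibling hash and whether our node is the left child.
        path.append((layer[index ^ 1], index % 2 == 0))
        layer = [H(layer[i] + layer[i + 1]) for i in range(0, len(layer), 2)]
        index //= 2
    return path

def verify(leaf, path, root):
    node = H(leaf)
    for sibling, node_is_left in path:
        node = H(node + sibling) if node_is_left else H(sibling + node)
    return node == root

utxos = [b"utxo-%d" % i for i in range(7)]
root = merkle_root(utxos)      # this is all a pruned node would keep
proof = merkle_proof(utxos, 3) # this is what the spender must carry
```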


Re: [bitcoin-dev] BIP 9 style version bits for txns

2015-12-08 Thread Chris Priest via bitcoin-dev
A wallet doesn't receive transactions from other wallets. That is what
a node does. Wallets just make transactions and then send them to the
nodes. Nodes then send them to other nodes.

In the early days of bitcoin, all wallets were nodes, but now a lot of
wallets are just wallets without any specific node. For instance, SPV
wallets don't get their UTXO data from any one node that may or
may not support a feature. They get UTXO data from many nodes, some of
which could support said feature, while others may not.

Given the nature of the work that nodes perform, they *should* broadcast
what features they support. The only nodes that matter to the network
are nodes that produce blocks. Nodes that don't produce blocks are
kind of just there, serving whoever happens to connect... I guess
nodes could broadcast their supported features via the
version message that is part of the p2p handshake process...
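Bitcoin's p2p handshake does in fact carry a coarse capability bitfield, nServices, in the version message. A sketch of checking it, using service-bit assignments from BIPs 64, 111, and 144 (some of which were defined around or after the date of this thread):

```python
# nServices is a bitfield advertised in the p2p `version` message.
# Bit values below are the documented assignments (BIPs 64, 111, 144).

NODE_NETWORK = 1 << 0   # serves the full block chain
NODE_GETUTXO = 1 << 1   # BIP 64 UTXO query support
NODE_BLOOM   = 1 << 2   # BIP 111: BIP 37 bloom-filtered connections
NODE_WITNESS = 1 << 3   # BIP 144 witness serialization

def supports(n_services: int, flag: int) -> bool:
    """Check whether a peer's advertised service bits include `flag`."""
    return bool(n_services & flag)

# Example peer advertising full blocks plus witness support.
peer_services = NODE_NETWORK | NODE_WITNESS
```

This signals node capabilities, not wallet readiness, which is exactly the gap the thread is discussing.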

On 12/8/15, Vincent Truong  wrote:
> I suppose whether the wallet devs want to implement the soft fork or not is
> irrelevant. They only need to indicate if they are ready i.e. they've
> tested the new soft fork, hard fork or feature and validated that it
> doesn't break their nodes or wallet software.
> On Dec 9, 2015 6:40 AM, "Chris Priest"  wrote:
>
>> I proposed in my Wildcard Inputs BIP that the version field be split
>> in two. The first 4 bytes are version number (which in practice is
>> being used for script version), and the second 4 bits are used for
>> transaction type.
>>
>> I don't think the BIP9 mechanism really applies to transactions. A
>> block is essentially a collection of transactions, therefore voting on
>> the block applies to the many parties who have transactions in the
>> block. A transaction on the other hand only affects at most two
>> parties (the sender and the receiver). In other words, blocks are
>> "communal" data structures, transactions are individual data
>> structures. Also, the nature of soft forks are that wallets can choose
>> to implement a new feature or not. For instance, if no wallets
>> implement RBF or SW, then those features effectively don't exist,
>> regardless of how many nodes have upgraded to handle the feature.
>>
>> Any new transaction feature should get a new "type" number. A new
>> transaction feature can't happen until the nodes support it.
>>
>> On 12/8/15, Vincent Truong via bitcoin-dev
>>  wrote:
>> > So I have been told more than once that the version announcement in
>> blocks
>> > is not a vote, but a signal for readiness, used in isSupermajority().
>> > Basically, if soft forks (and hard forks) won't activate unless a
>> certain %
>> > of blocks have been flagged with the version up (or bit flipped when
>> > versionbits go live) to signal their readiness, that is a vote against
>> > implementation if they never follow up. I don't like this politically
>> > correct speech because in reality it is a vote... But I'm not here to
>> argue
>> > about that... I would like to see if there are any thoughts on
>> > extending/copying isSupermajority() for a new secondary/non-critical
>> > function to also look for a similar BIP 9 style version bit in txns.
>> > Apologies if already proposed, haven't heard of it anywhere.
>> >
>> > If we are looking for a signal of readiness, it is unfair to wallet
>> > developers and exchanges that they are unable to signal if they too are
>> > ready for a change. As more users are going into use SPV or SPV-like
>> > wallets, when a change occurs that makes them incompatible/in need of
>> > upgrade we need to make sure they aren't going to break or introduce
>> > security flaws for users.
>> >
>> > If a majority of transactions have been sent are flagged ready, we know
>> > that they're also good to go.
>> >
>> > Would you implement the same versionbits for txn's version field, using
>> > 3
>> > bits for versioning and 29 bits for flags? This indexing of every txn
>> might
>> > sound insane and computationally expensive for bitcoin Core to run, but
>> if
>> > it isn't critical to upgrade with soft forks, then it can be watched
>> > outside the network by enthusiasts. I believe this is the most
>> politically
>> > correct way to get wallet devs and exchanges involved. (If it were me I
>> > would absolutely try figure out a way to stick it in
>> > isSupermajority...)
>> >
>> > Miners can watch for readiness flagged by wallets before they
>> > themselves
>> > flag ready. We will have to trust miners to not jump the gun, but
>> > that's
>> > the trade off.
>> >
>> > Thoughts?
>> >
>>
>


Re: [bitcoin-dev] BIP 9 style version bits for txns

2015-12-08 Thread Chris Priest via bitcoin-dev
I proposed in my Wildcard Inputs BIP that the version field be split
in two. The first 4 bytes are version number (which in practice is
being used for script version), and the second 4 bits are used for
transaction type.

I don't think the BIP9 mechanism really applies to transactions. A
block is essentially a collection of transactions, therefore voting on
the block applies to the many parties who have transactions in the
block. A transaction, on the other hand, only affects at most two
parties (the sender and the receiver). In other words, blocks are
"communal" data structures, while transactions are individual data
structures. Also, the nature of soft forks is that wallets can choose
to implement a new feature or not. For instance, if no wallets
implement RBF or SW, then those features effectively don't exist,
regardless of how many nodes have upgraded to handle the feature.

Any new transaction feature should get a new "type" number. A new
transaction feature can't happen until the nodes support it.
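The split described above is stated ambiguously ("first 4 bytes ... second 4 bits" of what is a 4-byte field). A sketch of the general idea, under the assumption that the intent is two 16-bit halves of the 32-bit version field; the exact widths and the "wildcard" type number are illustrative, not part of any accepted proposal:

```python
# Pack/unpack a 32-bit transaction version field split into two halves:
# low 16 bits = script version, high 16 bits = transaction type.
# (The exact split is an assumption; the original text is ambiguous.)

def pack_version(script_version: int, tx_type: int) -> int:
    assert 0 <= script_version < (1 << 16) and 0 <= tx_type < (1 << 16)
    return (tx_type << 16) | script_version

def unpack_version(field: int):
    """Return (script_version, tx_type)."""
    return field & 0xFFFF, field >> 16

# Hypothetical: type 0 = ordinary, type 2 = "wildcard" transactions.
field = pack_version(1, 2)
```

Note that a plain legacy version-1 transaction (field value 1) decodes as script version 1, type 0, so old transactions remain a valid instance of the scheme.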

On 12/8/15, Vincent Truong via bitcoin-dev
 wrote:
> So I have been told more than once that the version announcement in blocks
> is not a vote, but a signal for readiness, used in isSupermajority().
> Basically, if soft forks (and hard forks) won't activate unless a certain %
> of blocks have been flagged with the version up (or bit flipped when
> versionbits go live) to signal their readiness, that is a vote against
> implementation if they never follow up. I don't like this politically
> correct speech because in reality it is a vote... But I'm not here to argue
> about that... I would like to see if there are any thoughts on
> extending/copying isSupermajority() for a new secondary/non-critical
> function to also look for a similar BIP 9 style version bit in txns.
> Apologies if already proposed, haven't heard of it anywhere.
>
> If we are looking for a signal of readiness, it is unfair to wallet
> developers and exchanges that they are unable to signal if they too are
> ready for a change. As more users are going into use SPV or SPV-like
> wallets, when a change occurs that makes them incompatible/in need of
> upgrade we need to make sure they aren't going to break or introduce
> security flaws for users.
>
> If a majority of transactions have been sent are flagged ready, we know
> that they're also good to go.
>
> Would you implement the same versionbits for txn's version field, using 3
> bits for versioning and 29 bits for flags? This indexing of every txn might
> sound insane and computationally expensive for bitcoin Core to run, but if
> it isn't critical to upgrade with soft forks, then it can be watched
> outside the network by enthusiasts. I believe this is the most politically
> correct way to get wallet devs and exchanges involved. (If it were me I
> would absolutely try figure out a way to stick it in isSupermajority...)
>
> Miners can watch for readiness flagged by wallets before they themselves
> flag ready. We will have to trust miners to not jump the gun, but that's
> the trade off.
>
> Thoughts?
>


[bitcoin-dev] Coalescing Transactions BIP Draft

2015-12-07 Thread Chris Priest via bitcoin-dev
I made a post a few days ago where I laid out a scheme for
implementing "coalescing transactions" using a new opcode. I have
since come to the realization that an opcode is not the best way to do
this. A much better approach I think is a new "transaction type" field
that is split off from the version field. Other uses can come out of
this type field, wildcard inputs is just the first one.

There are two unresolved issues. First, there might need to be a limit
on how many inputs are included in the "coalesce". Let's say you have
an address that has 100,000,000 inputs. If you were to coalesce them
all into one single input, every node would have to count all of
these 100,000,000 inputs, which could take a long time. But then
again, the total number of inputs a wildcard can cover is limited to
the actual number of UTXOs in the pool, which is very much a
finite/constrained number.

One solution is to limit all wildcard inputs to, say, 10,000 items. If
you have more inputs that you want coalesced, you have to do it in
chunks of 10,000, starting from the beginning. I want wildcard inputs to
look as much like normal inputs as possible to facilitate
implementation, so I don't think embedding a "max search" inside the
transaction is the best idea. I think if there is going to be a limit,
it should be implied.
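A sketch of the implied limit, assuming UTXOs are taken oldest-first ("starting from the beginning") and each wildcard covers at most the first 10,000 remaining; the limit value itself is only the suggestion made above:

```python
WILDCARD_LIMIT = 10_000  # the implied cap suggested above (an assumption)

def coalesce_batches(utxos):
    """Yield batches of UTXOs, oldest first, each small enough to be
    covered by one wildcard input under the implied limit."""
    for start in range(0, len(utxos), WILDCARD_LIMIT):
        yield utxos[start:start + WILDCARD_LIMIT]

# An address with 25,000 UTXOs needs three coalescing transactions.
batches = list(coalesce_batches(list(range(25_000))))
```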

The other issue is with limiting wildcard inputs to only inputs that
are confirmed to a fixed depth. Sort of like how a coinbase output
has to reach a certain age before it can be spent, maybe wildcard inputs
should only work on inputs older than a certain block age. Someone
brought up in the last thread that re-orgs can cause problems. I don't
quite see how that could happen, as re-orgs don't really affect
address balances, only block header values, which coalescing
transactions have nothing to do with.

Here is the draft:
https://github.com/priestc/bips/blob/master/bip-coalesc-wildcard.mediawiki


Re: [bitcoin-dev] OP_CHECKWILDCARDSIGVERIFY or "Wildcard Inputs" or "Coalescing Transactions"

2015-11-24 Thread Chris Priest via bitcoin-dev
1. Technically it is promoting address reuse, but in this case, I
think it's OK. The primary purpose of a coalescing transaction is to
clear out *all* funds associated with one address and send them to
another address (belonging to the same owner). If you coalesce the
inputs to the same address over and over again, you can do that, but
you'll run the risk of leaking your private key.

2. I see these transactions being broadcast in the background when the
user is not planning on sending or receiving any payments. By the time
the wallet user wants to spend funds from the address, the coalescing
transaction should be sufficiently deep in the blockchain to
avoid re-org tomfoolery. Exchanges and payment processors who take in
payments around the clock will probably never use these transactions,
at least not on "live" addresses.

3. I never thought of that, but thats a benefit too!

On 11/24/15, Jannes Faber via bitcoin-dev
 wrote:
> Few issues I can think of:
>
> 1. In its basic form this encourages address reuse. Unless the wildcard can
> be constructed such that it can match a whole branch of an HD  wallet.
> Although I guess that would tie all those addresses together making HD moot
> to begin with.
>
> 2. Sounds pretty dangerous during reorgs. Maybe such a transaction should
> include a block height which indicates the maximum block that any utxo can
> match. With the requirement that the specified block height is at least 100
> blocks in the past. Maybe add a minimum block height as well to prevent
> unnecessary scanning (with the requirement that at least one utxo must be
> in that minimum block).
>
> 3. Seems like a nice way to the reduce utxo set. But hard to say how
> effective it would really be.
> On 25 Nov 2015 12:48 a.m., "Chris Priest via bitcoin-dev" <
> bitcoin-dev@lists.linuxfoundation.org> wrote:
>
>> > This idea could be applied by having the wildcard signature apply to
>> > all
>> > UTXOs that are of a standard form and paid to a particular address, and
>> be
>> > a signature of some kind of message to that effect.
>>
>> I think this is true. Not *all* transactions will be able to match the
>> wildcard. For instance if someone sent some crazy smart contract tx to
>> your address, the script associated with that tx will be such that it
>> will not apply to the wildcard. Most "vanilla" utxos that I've seen
>> have the form "OP_DUP OP_HASH160 [a hash corresponding to your
>> address] OP_EQUALVERIFY OP_CHECKSIG". Only UTXOs in that form could
>> apply to the wildcard.
>>
>> On 11/24/15, Dave Scotese via bitcoin-dev
>>  wrote:
>> > What is required to spend bitcoin is that input be provided to the UTXO
>> > script that causes it to return true.  What Chris is proposing breaks
>> > the
>> > programmatic nature of the requirement, replacing it with a requirement
>> > that the secret be known.  Granted, the secret is the only requirement
>> > in
>> > most cases, but there is no built-in assumption that the script always
>> > requires only that secret.
>> >
>> > This idea could be applied by having the wildcard signature apply to
>> > all
>> > UTXOs that are of a standard form and paid to a particular address, and
>> be
>> > a signature of some kind of message to that effect.  I imagine the cost
>> of
>> > re-scanning the UTXO set to find them all would justify a special extra
>> > mining fee for any transaction that used this opcode.
>> >
>> > Please be blunt about any of my own misunderstandings that this email
>> makes
>> > clear.
>> >
>> > On Tue, Nov 24, 2015 at 1:51 PM, Bryan Bishop via bitcoin-dev <
>> > bitcoin-dev@lists.linuxfoundation.org> wrote:
>> >
>> >> On Tue, Nov 24, 2015 at 11:34 AM, Chris Priest via bitcoin-dev <
>> >> bitcoin-dev@lists.linuxfoundation.org> wrote:
>> >>
>> >>> **OP_CHECKWILDCARDSIGVERIFY**
>> >>
>> >>
>> >> Some (minor) discussion of this idea in -wizards earlier today
>> >> starting
>> >> near near "09:50" (apologies for having no anchor links):
>> >> http://gnusha.org/bitcoin-wizards/2015-11-24.log
>> >>
>> >> - Bryan
>> >> http://heybryan.org/
>> >> 1 512 203 0507
>> >>
>>

Re: [bitcoin-dev] OP_CHECKWILDCARDSIGVERIFY or "Wildcard Inputs" or "Coalescing Transactions"

2015-11-24 Thread Chris Priest via bitcoin-dev
> This idea could be applied by having the wildcard signature apply to all
> UTXOs that are of a standard form and paid to a particular address, and be
> a signature of some kind of message to that effect.

I think this is true. Not *all* transactions will be able to match the
wildcard. For instance, if someone sent some crazy smart contract tx to
your address, the script associated with that tx will be such that it
will not apply to the wildcard. Most "vanilla" utxos that I've seen
have the form "OP_DUP OP_HASH160 [a hash corresponding to your
address] OP_EQUALVERIFY OP_CHECKSIG". Only UTXOs in that form could
apply to the wildcard.
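The "vanilla" form above is the standard P2PKH output script, which serializes to exactly 25 bytes, so deciding whether a UTXO is wildcard-eligible reduces to a byte-pattern test:

```python
# Standard P2PKH scriptPubKey layout (25 bytes):
# OP_DUP (0x76) OP_HASH160 (0xa9) push-20 (0x14) <20-byte pubkey hash>
# OP_EQUALVERIFY (0x88) OP_CHECKSIG (0xac)

def is_p2pkh(script: bytes) -> bool:
    """True if `script` is exactly the vanilla P2PKH pattern."""
    return (len(script) == 25
            and script[0:3] == b"\x76\xa9\x14"
            and script[23:25] == b"\x88\xac")

def p2pkh_hash160(script: bytes) -> bytes:
    """Extract the 20-byte pubkey hash from a vanilla P2PKH script."""
    assert is_p2pkh(script)
    return script[3:23]

vanilla = b"\x76\xa9\x14" + bytes(20) + b"\x88\xac"
```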

On 11/24/15, Dave Scotese via bitcoin-dev
 wrote:
> What is required to spend bitcoin is that input be provided to the UTXO
> script that causes it to return true.  What Chris is proposing breaks the
> programmatic nature of the requirement, replacing it with a requirement
> that the secret be known.  Granted, the secret is the only requirement in
> most cases, but there is no built-in assumption that the script always
> requires only that secret.
>
> This idea could be applied by having the wildcard signature apply to all
> UTXOs that are of a standard form and paid to a particular address, and be
> a signature of some kind of message to that effect.  I imagine the cost of
> re-scanning the UTXO set to find them all would justify a special extra
> mining fee for any transaction that used this opcode.
>
> Please be blunt about any of my own misunderstandings that this email makes
> clear.
>
> On Tue, Nov 24, 2015 at 1:51 PM, Bryan Bishop via bitcoin-dev <
> bitcoin-dev@lists.linuxfoundation.org> wrote:
>
>> On Tue, Nov 24, 2015 at 11:34 AM, Chris Priest via bitcoin-dev <
>> bitcoin-dev@lists.linuxfoundation.org> wrote:
>>
>>> **OP_CHECKWILDCARDSIGVERIFY**
>>
>>
>> Some (minor) discussion of this idea in -wizards earlier today starting
>> near near "09:50" (apologies for having no anchor links):
>> http://gnusha.org/bitcoin-wizards/2015-11-24.log
>>
>> - Bryan
>> http://heybryan.org/
>> 1 512 203 0507
>>
>>
>>
>
>
> --
> I like to provide some work at no charge to prove my value. Do you need a
> techie?
> I own Litmocracy <http://www.litmocracy.com> and Meme Racing
> <http://www.memeracing.net> (in alpha).
> I'm the webmaster for The Voluntaryist <http://www.voluntaryist.com> which
> now accepts Bitcoin.
> I also code for The Dollar Vigilante <http://dollarvigilante.com/>.
> "He ought to find it more profitable to play by the rules" - Satoshi
> Nakamoto
>


Re: [bitcoin-dev] OP_CHECKWILDCARDSIGVERIFY or "Wildcard Inputs" or "Coalescing Transactions"

2015-11-24 Thread Chris Priest via bitcoin-dev
A coalescing transaction in my scheme is the same size as a normal
transaction. You only include one UTXO, the rest are implied based on
the presence of the OP_CHECKWILDCARDSIGVERIFY opcode.

The code that determines if a UTXO is spent or not will need to be
modified to include a check to see if any matching coalescing
transactions exist in any later block. Maybe there should be a
"coalescing pool" containing all coalescing transactions that make
such a check faster.

The part I'm not too sure about is the "wildcard signature". I'm not
versed enough in cryptography to know exactly how to pull this off, but I
think it should be simple.
You'd just have to somehow inject a flag into the signing process
that can be verified later.

I originally wanted the "wildcardness" of the transaction expressed by
the transaction version number:
basically, any input that exists within a "version 2 transaction" would be
viewed as a wildcard input. Then I realized: what's to stop someone from
modifying the transaction from version 1 to version 2 and stealing
someone's funds? The "wildcardness" must be expressed in the signature
so you know that the private key holder intended all inputs to be
included. Hence the need for a new opcode.
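One way to commit the flag inside the signature is loosely analogous to Bitcoin's sighash-type byte: include the flag in the signed digest, so a third party cannot flip it without invalidating the signature. A toy sketch only; HMAC stands in for ECDSA, and the flag value is hypothetical:

```python
import hashlib, hmac

SIGHASH_WILDCARD = 0x40   # hypothetical flag value (an assumption)

def sign(privkey: bytes, tx_digest: bytes, flag: int) -> bytes:
    """Commit the flag byte inside the signed message, then append it
    in the clear so verifiers know which mode was signed."""
    mac = hmac.new(privkey, tx_digest + bytes([flag]), hashlib.sha256)
    return mac.digest() + bytes([flag])

def verify(privkey: bytes, tx_digest: bytes, sig: bytes) -> bool:
    mac, flag = sig[:-1], sig[-1]
    expected = hmac.new(privkey, tx_digest + bytes([flag]), hashlib.sha256)
    return hmac.compare_digest(mac, expected.digest())

def is_wildcard(sig: bytes) -> bool:
    return bool(sig[-1] & SIGHASH_WILDCARD)

key = b"k" * 32
digest = hashlib.sha256(b"tx").digest()
sig = sign(key, digest, SIGHASH_WILDCARD)
tampered = sig[:-1] + bytes([0x00])   # stripping the flag breaks the sig
```

This mirrors the concern in the text: because the flag is inside the signed data, "wildcardness" cannot be added or removed after the fact.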

btw, this scheme is definitely in the 10x or higher gain. You could
potentially spend an unlimited number of UTXOs this way.

On 11/24/15, Gavin Andresen  wrote:
> On Tue, Nov 24, 2015 at 12:34 PM, Chris Priest via bitcoin-dev <
> bitcoin-dev@lists.linuxfoundation.org> wrote:
>
>> The technical reason for this is that you have to explicitly list each
>> UTXO individually when making bitcoin transactions. There is no way to
>> say "all the utxos". This post describes a way to achieve this. I'm
>> not yet a bitcoin master, so there are parts of this proposal that I
>> have not yet figured out entirely, but I'm sure other people who know
>> more could help out.
>>
>
> So every input has:
>  32-byte hash (transaction being spent)
>  4-byte output (output being spent)
>  4-byte sequence number
> ... plus the scriptSig. Which is as small as about 73 bytes if you're
> spending a raw OP_CHECKSIG (which you can't do as a bitcoin address, but
> could via the BIP70 payment protocol), and which is at least two serialized
> bytes.
>
> Best case for any scheme to coalesce scriptSigs would be to somehow make
> all-but-the-first scriptSig zero-length, so the inputs would be 42 bytes
> instead of 40+73 bytes -- the coalesce transaction would be about one-third
> the size, so instead of paying (say) $1 in transaction fees you'd pay 37
> cents.
>
> That's in the gray area of the "worth doing" threshold-- if it was a 10x
> improvement (pay 10 cents instead of $1) it'd be in my personal "definitely
> worth the trouble of doing" category.
>
> RE: the scheme:  an OP_RINGSIGVERIFY is probably the right way to do this:
>   https://en.wikipedia.org/wiki/Ring_signature
>
> The funding transactions would be: <pubkey1> <pubkey2> ... OP_RINGSIGVERIFY
> ... which might be redeemed with <ring signature> for one input and
> then... uhh... maybe just <index of the signed input> for the other
> inputs that are part of the same ring signature group (OP_0 if the first
> input has the signature that is good for all the other public keys, which
> would be the common case).
>
> --
> --
> Gavin Andresen
>
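[Editor's note] Gavin's size arithmetic above can be sketched numerically. The byte counts are the rough estimates from his message, not exact serialization rules:

```python
# Rough per-input byte counts, as estimated in the quoted message.
OUTPOINT = 32 + 4        # txid being spent + output index
SEQUENCE = 4
SCRIPTSIG = 73           # minimal scriptSig for a raw OP_CHECKSIG output
LEN_BYTES = 2            # ~two bytes of serialized length overhead

normal_input = OUTPOINT + SEQUENCE + SCRIPTSIG       # 113 bytes
coalesced_input = OUTPOINT + SEQUENCE + LEN_BYTES    # 42 bytes

ratio = coalesced_input / normal_input
print(f"coalesced inputs are ~{ratio:.0%} the size")  # ~37%, i.e. 37 cents per $1
```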
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


[bitcoin-dev] OP_CHECKWILDCARDSIGVERIFY or "Wildcard Inputs" or "Coalescing Transactions"

2015-11-24 Thread Chris Priest via bitcoin-dev
Here is the problem I'm trying to solve with this idea:

Let's say you create an address, publish the address on your blog, and
tell all your readers to donate $0.05 to that address if they like
your blog. Let's assume you receive 10,000 donations this way. This all
adds up to $500. The problem is that, because of the way bitcoin
transactions work, a large chunk of that money will go to fees.
If one person sent you a single donation of $500, you would be able to
spend most of the $500, but since you received this coin as many small
UTXOs, your wallet has to pay a much higher tx fee when spending this
coin.
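To put rough numbers on this, here is a sketch of the fee arithmetic. The ~113-byte input size, the 50 sat/byte fee rate, and the $400/BTC price are all assumptions for illustration, not figures from the original post:

```python
donations = 10_000
input_bytes = 113          # outpoint + sequence + typical scriptSig
fee_rate = 50              # satoshis per byte (assumed)
btc_usd = 400              # exchange rate (assumed)

fee_sats = donations * input_bytes * fee_rate
fee_usd = fee_sats / 1e8 * btc_usd
print(f"fee to sweep all inputs: ~${fee_usd:.0f} of the $500 donated")
```

Under these assumptions roughly $226 of the $500 is consumed by fees, which is the problem the proposal below tries to eliminate.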

The technical reason for this is that you have to explicitly list each
UTXO individually when making bitcoin transactions. There is no way to
say "all the utxos". This post describes a way to achieve this. I'm
not yet a bitcoin master, so there are parts of this proposal that I
have not yet figured out entirely, but I'm sure other people who know
more could help out.

**OP_CHECKWILDCARDSIGVERIFY**

First, I propose a new opcode. This opcode works exactly the same as
OP_CHECKSIGVERIFY, except it only evaluates true if the signature is a
"wildcard signature". What is a wildcard signature, you ask? This is
the part that I have not 100% figured out yet. It is basically a
signature that was created in such a way that it expresses the private
key owner's intent to make this input a *wildcard input*.

** wildcard inputs **

A wildcard input is defined as an input to a transaction that has been
signed so as to satisfy OP_CHECKWILDCARDSIGVERIFY. The difference between a
wildcard input and a regular input is that the regular input respects
the "value" or "amount" field, while the wildcard input ignores that
value and instead uses the combined value of *all UTXOs* with a matching
locking script.
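A toy sketch of the intended semantics (hypothetical data structures, not real consensus code): a wildcard input's value is the sum over every UTXO whose locking script matches, instead of the single listed outpoint's amount.

```python
def wildcard_input_value(utxo_set, locking_script):
    """Total value consumed by a wildcard input: every UTXO whose
    scriptPubKey matches the input's locking script is swept in."""
    return sum(value for script, value in utxo_set if script == locking_script)

# Three UTXOs, two of which pay to the same (abbreviated) script:
utxos = [
    ("OP_DUP OP_HASH160 ab.. OP_EQUALVERIFY OP_CHECKSIG", 500),
    ("OP_DUP OP_HASH160 ab.. OP_EQUALVERIFY OP_CHECKSIG", 700),
    ("OP_DUP OP_HASH160 cd.. OP_EQUALVERIFY OP_CHECKSIG", 900),
]
print(wildcard_input_value(utxos, "OP_DUP OP_HASH160 ab.. OP_EQUALVERIFY OP_CHECKSIG"))
```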

**coalescing transactions**

A bitcoin transaction that


Re: [bitcoin-dev] summarising security assumptions (re cost metrics)

2015-11-06 Thread Chris Priest via bitcoin-dev
> The bigger point however, which Erik explained, was: widespread use of
> APIs as a sole means of interfacing with the blockchain also
> contributes to reducing network security for everyone, because it
> erodes the consensus rule validation security described under
> "Validators" in the OP.

I completely disagree with this. You are implying that there is some
ideal ratio of full nodes to 'client only' nodes that the
network must maintain, and that if that ideal
ratio is somehow disrupted, then security suffers. My question
to you is: what is that ideal ratio, and what methodology did you use to
come up with it?

The way I see it, the security of the system is independent of the
ratio between full nodes and lightweight nodes.

In other words, if there are 100,000 lightweight wallets to 100 full
nodes, then you have the same security profile as one with 100,000
full nodes to 100 lightweight wallets.

I think most 'big blockers' think the same way I do, hence the rub
between the two camps.

Small-block people need to make a better case for how exactly the full
node ratio relates to network security (especially the 'for everyone'
part), because the link is not clear to me at all. They
seem to take this as self-evident, but I just don't see
it.

On 11/6/15, Adam Back  wrote:
> You're right that it is better that there be more APIs than fewer,
> that is less of a single point of failure.  It also depends what you
> mean by APIs: using an API to have a second cross-check of information
> is quite different to building a wallet or business that only
> interfaces with the blockchain via a 3rd party API.  There are
> different APIs also: some are additive eg they add a second signature
> via multisig, but even those models while better can still be a mixed
> story for security, if the user is not also checking their own
> full-node or checking SPV to make the first signature.
>
> Power users and businesses using APIs instead of running a full-node,
> or instead of doing SPV checks, should be clear about the API and what
> security they are delegating to a third party, and whether they have a
> reason to trust the governance and security competence of the third
> party: in the simplest case it can reduce their and their users
> security below SPV.
>
> The bigger point however, which Erik explained, was: widespread use of
> APIs as a sole means of interfacing with the blockchain also
> contributes to reducing network security for everyone, because it
> erodes the consensus rule validation security described under
> "Validators" in the OP.
>
> Adam
>
>
> On 6 November 2015 at 09:05, Chris Priest  wrote:
>> On 11/5/15, Eric Voskuil via bitcoin-dev
>>  wrote:
>>> On 11/05/2015 03:03 PM, Adam Back via bitcoin-dev wrote:
>>>> ...
>>>> Validators: Economically dependent full nodes are an important part of
>>>> Bitcoin's security model because they assure Bitcoin security by
>>>> enforcing consensus rules.  While full nodes do not have orphan
>>>> risk, we also dont want maliciously crafted blocks with pathological
>>>> validation cost to erode security by knocking reasonable spec full
>>>> nodes off the network on CPU (or bandwidth grounds).
>>>> ...
>>>> Validators vs Miner decentralisation balance:
>>>>
>>>> There is a tradeoff where we can tolerate weak miner decentralisation
>>>> if we can rely on good validator decentralisation or vice versa.  But
>>>> both being weak is risky.  Currently given mining centralisation
>>>> itself is weak, that makes validator decentralisation a critical
>>>> remaining defence - ie security depends more on validator
>>>> decentralisation than it would if mining decentralisation was in a
>>>> better shape.
>>>
>>> This side of the security model seems underappreciated, if not poorly
>>> understood. Weakening is not just occurring because of the proliferation
>>> of non-validating wallet software and centralized (web) wallets, but
>>> also centralized Bitcoin APIs.
>>>
>>> Over time developers tend to settle on a couple of API providers for a
>>> given problem. Bing and Google for search and mapping, for example. All
>>> applications and users of them, depending on an API service, reduce to a
>>> single validator. Imagine most Bitcoin applications built on the
>>> equivalent of Bing and Google.
>>>
>>> e
>>>
>>>
>>
>> I disagree. I think blockchain APIs are a good thing for
>> decentralization. There aren't just 3 or 4 blockexplorer APIs out
>> there, there are dozens. Each API returns essentially the same data,
>> so they are all interchangeable. Take a look at this python package:
>> https://github.com/priestc/moneywagon
>


Re: [bitcoin-dev] summarising security assumptions (re cost metrics)

2015-11-06 Thread Chris Priest via bitcoin-dev
On 11/5/15, Eric Voskuil via bitcoin-dev
 wrote:
> On 11/05/2015 03:03 PM, Adam Back via bitcoin-dev wrote:
>> ...
>> Validators: Economically dependent full nodes are an important part of
>> Bitcoin's security model because they assure Bitcoin security by
>> enforcing consensus rules.  While full nodes do not have orphan
>> risk, we also dont want maliciously crafted blocks with pathological
>> validation cost to erode security by knocking reasonable spec full
>> nodes off the network on CPU (or bandwidth grounds).
>> ...
>> Validators vs Miner decentralisation balance:
>>
>> There is a tradeoff where we can tolerate weak miner decentralisation
>> if we can rely on good validator decentralisation or vice versa.  But
>> both being weak is risky.  Currently given mining centralisation
>> itself is weak, that makes validator decentralisation a critical
>> remaining defence - ie security depends more on validator
>> decentralisation than it would if mining decentralisation was in a
>> better shape.
>
> This side of the security model seems underappreciated, if not poorly
> understood. Weakening is not just occurring because of the proliferation
> of non-validating wallet software and centralized (web) wallets, but
> also centralized Bitcoin APIs.
>
> Over time developers tend to settle on a couple of API providers for a
> given problem. Bing and Google for search and mapping, for example. All
> applications and users of them, depending on an API service, reduce to a
> single validator. Imagine most Bitcoin applications built on the
> equivalent of Bing and Google.
>
> e
>
>

I disagree. I think blockchain APIs are a good thing for
decentralization. There aren't just 3 or 4 blockexplorer APIs out
there, there are dozens. Each API returns essentially the same data,
so they are all interchangeable. Take a look at this python package:
https://github.com/priestc/moneywagon
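The interchangeability argument can be pushed further: a client can treat explorers as redundant, cross-checking sources rather than trusting any single one. A sketch of that idea follows; the fetcher functions are hypothetical stand-ins for real explorer clients, not any actual API:

```python
from collections import Counter

def consensus_balance(address, fetchers, quorum=2):
    """Query several interchangeable block-explorer APIs; accept a balance
    only when at least `quorum` independent services agree on it."""
    results = []
    for fetch in fetchers:
        try:
            results.append(fetch(address))
        except Exception:
            continue  # an unreachable service is simply skipped
    value, votes = Counter(results).most_common(1)[0] if results else (None, 0)
    if votes < quorum:
        raise RuntimeError("explorers disagree (or too few reachable)")
    return value

# Hypothetical fetchers: two services agree, one returns a stale figure.
fetchers = [lambda a: 150_000, lambda a: 150_000, lambda a: 149_000]
print(consensus_balance("1ExampleAddr", fetchers))  # 150000
```

With enough interchangeable services, no single provider becomes the lone validator the parent post worries about.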