Re: [Bitcoin-development] Hard fork via miner vote
On Saturday, 20 June 2015, at 8:11 pm, Pieter Wuille wrote:
> you want full nodes that have not noticed the fork to fail rather than see a slow but otherwise functional chain.

Isn't that what the Alert mechanism is for? If these nodes continue running despite an alert telling them they're outdated, then it must be intentional.

--
___
Bitcoin-development mailing list
Bitcoin-development@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bitcoin-development
Re: [Bitcoin-development] F2Pool has enabled full replace-by-fee
On Friday, 19 June 2015, at 9:18 am, Adrian Macneil wrote:
> If full-RBF sees any significant adoption by miners, then it will actively harm bitcoin adoption by reducing or removing the ability for online or POS merchants to accept bitcoin payments at all.

Retail POS merchants probably should not be accepting vanilla Bitcoin payments, as Bitcoin alone does not (and cannot) guarantee the irreversibility of a transaction until it has been buried several blocks deep in the chain. Retail merchants should be requiring a co-signature from a mutually trusted co-signer that vows never to sign a double-spend. The reason we don't yet see such technology permeating the ecosystem is that, to date, zero-conf transactions have been irreversible enough. But this has only been a happy accident; it was never promised, and it should not be relied upon.
Re: [Bitcoin-development] F2Pool has enabled full replace-by-fee
On Friday, 19 June 2015, at 3:53 pm, justusranv...@riseup.net wrote:
> I'd also like to note that prima facie doesn't mean always; it means the default assumption, unless proven otherwise.

Why would you automatically assume fraud by default? Shouldn't the null hypothesis be the default? Without any information one way or another, you ought to make *no assumption* about the fraudulence or non-fraudulence of any given double-spend.
Re: [Bitcoin-development] F2Pool has enabled full replace-by-fee
Even if you could prove intent to pay, this would be almost useless. I can sincerely intend to do a lot of things, but this doesn't mean I'll ever actually do them.

I am in favor of more zero-confirmation transactions being reversed/double-spent. Bitcoin users largely still believe that accepting zero-conf transactions is safe, and evidently it's going to take some harsh lessons in reality to correct this belief.

On Friday, 19 June 2015, at 9:42 am, Eric Lombrozo wrote:
> If we want a non-repudiation mechanism in the protocol, we should explicitly define one rather than relying on "prima facie" assumptions. Otherwise, I would recommend not relying on the existence of a signed transaction as proof of intent to pay…
>
> On Jun 19, 2015, at 9:36 AM, Matt Whitlock <b...@mattwhitlock.name> wrote:
> > On Friday, 19 June 2015, at 3:53 pm, justusranv...@riseup.net wrote:
> > > I'd also like to note that prima facie doesn't mean always; it means the default assumption, unless proven otherwise.
> > Why would you automatically assume fraud by default? Shouldn't the null hypothesis be the default? Without any information one way or another, you ought to make *no assumption* about the fraudulence or non-fraudulence of any given double-spend.
Re: [Bitcoin-development] User vote in blocksize through fees
On Friday, 12 June 2015, at 7:44 pm, Peter Todd wrote:
> On Fri, Jun 12, 2015 at 02:36:31PM -0400, Matt Whitlock wrote:
> > On Friday, 12 June 2015, at 7:34 pm, Peter Todd wrote:
> > > On Fri, Jun 12, 2015 at 02:22:36PM -0400, Matt Whitlock wrote:
> > > > Why should miners only be able to vote for double the limit or halve the limit? If you're going to use bits, I think you need to use two bits:
> > > >
> > > > 0 0 = no preference (wildcard vote)
> > > > 0 1 = vote for the limit to remain the same
> > > > 1 0 = vote for the limit to be halved
> > > > 1 1 = vote for the limit to be doubled
> > > >
> > > > User transactions would follow the same usage. In particular, a user vote of 0 0 (no preference) could be included in a block casting any vote, but a block voting 0 0 (no preference) could only contain transactions voting 0 0 as well.
> > > Sounds like a good encoding to me. Taking the median of the three options, and throwing away don't-care votes entirely, makes sense.
> > I hope you mean the *plurality* of the three options after throwing away the don't-cares, not the *median*.
> Median ensures that voting no change is meaningful. If double + no change = 66%-1, you'd expect the result to be no change, not halve. With a plurality vote, you'd end up with a halving that was supported by a minority.

Never mind. I think I've figured out what you're getting at, and you're right. We wouldn't want "halve" to win on a plurality just because the remaining majority of the vote was split between "double" and "remain the same." Good catch. :)
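The two-bit encoding and median tally discussed above are easy to sketch. The Python below is illustrative only: the constant names, the "no change" result when every vote is a wildcard, and the use of the low median (so the result is always an actually-cast option) are my assumptions, not part of any agreed protocol.

```python
import statistics

# Two-bit vote encoding from the thread (constant names are illustrative)
NO_PREFERENCE = 0b00   # wildcard vote; discarded before tallying
KEEP_SAME     = 0b01   # vote for the limit to remain the same
HALVE         = 0b10   # vote for the limit to be halved
DOUBLE        = 0b11   # vote for the limit to be doubled

# Map the options onto an ordered scale so that a median is meaningful.
_SCALE = {HALVE: -1, KEEP_SAME: 0, DOUBLE: +1}

def tally(votes):
    """Return 'halve', 'same', or 'double': take the median of the cast
    votes, throwing away no-preference (wildcard) votes entirely."""
    cast = sorted(_SCALE[v] for v in votes if v != NO_PREFERENCE)
    if not cast:
        return 'same'  # nothing but wildcards: assume no change
    m = statistics.median_low(cast)  # always one of -1, 0, +1
    return {-1: 'halve', 0: 'same', +1: 'double'}[m]
```

This reproduces Peter Todd's point: with 40% halve, 30% same, and 30% double, a plurality rule would pick "halve", but the median lands on "same", so a minority cannot force a halving past a majority that opposes it.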
Re: [Bitcoin-development] User vote in blocksize through fees
On Friday, 12 June 2015, at 11:20 am, Mark Friedenbach wrote:
> Peter, it's not clear to me that your described protocol is free of miner influence over the vote, by artificially generating transactions which they claim in their own blocks.

Miners could fill their blocks with garbage transactions that agree with their vote, but this wouldn't bring them any real income, as they'd be paying their own money as fees to themselves. To get real income, miners would have to vote in accordance with real users.
Re: [Bitcoin-development] Max Block Size: Simple Voting Procedure
Why do it as an OP_RETURN output? It could be a simple token in the coinbase input script, similar to how support for P2SH was signaled among miners.

And why should there be an explicit token for voting for the status quo? Simply omitting any indication should be an implicit vote for the status quo. A miner would only need to insert an indicator into their block if they wished for a larger block.

That said, proposals of this type have been discussed before, and the objection is always that miners would want larger blocks than the rest of the network could bear. Unless you want Bitcoin to become centralized in the hands of a few large mining pools, you shouldn't hand control over the block size limit to the miners.

On Sunday, 31 May 2015, at 3:04 pm, Stephen Morse wrote:
> This is likely very similar to other proposals, but I want to bring voting procedures back into the discussion. The goal here is to create a voting procedure that is as simple as possible to increase the block size limit.
>
> Votes are aggregated over each 2016-block period. Each coinbase transaction may have an output at tx.vout[0] with OP_RETURN data in it of the format: OP_RETURN {OP_1 or OP_2}. OP_2 means the miner votes to increase the block size limit. OP_1 means the miner votes to not increase the block size limit. *Not including such a vote is equivalent to voting to NOT increase the block size.* I first thought that not voting should mean that you vote with your block size, but then decided that it would be too gameable by others broadcasting transactions to affect your block size.
>
> If in a 2016-block round there were more than 1008 blocks that voted to increase the block size limit, then the max block size increases by 500 kB. The votes can start when there is a supermajority of miners signaling support for the voting procedure.
>
> A few important properties of this simple voting:
> - It's not gameable via broadcasting transactions (assuming miners don't set their votes to be automatic, based on the size of recent blocks).
> - Miners don't have to bloat their blocks artificially just to place a vote for larger block sizes, and, similarly, don't need to exclude transactions even when they think the block size does not need to be raised.
> - The chain up until the point that this goes into effect may be interpreted as just lacking votes to increase the block size.
>
> We can't trust all miners, but we have to trust that 50% of them are honest for the system to work. This system makes it so that altering the maximum block size requires 50% of miners (hash power) to vote to increase the consensus limit.
>
> Thanks for your time. I think this is an important time in Bitcoin's history. I'm not married to this proposal, but I think it would work. I think a lot of the proposals mentioned on this mailing list would work. I think it's time we just pick one and run with it. Please let me know your thoughts. I will start working on a pull request if this receives any support from miners/core devs/community members, unless someone with more experience volunteers.
>
> Best,
> Stephen
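Stephen's tally rule is simple enough to sketch in a few lines of Python. The in-memory vote representation here is hypothetical (True/False/None per block); the 2016-block period, the strictly-more-than-1008 threshold, and the 500 kB step come from the proposal as quoted.

```python
PERIOD = 2016          # blocks per voting round
THRESHOLD = 1008       # strictly more than half must vote to increase
INCREMENT = 500_000    # 500 kB step, per the proposal

def next_limit(current_limit, period_votes):
    """One voting round. Each element of period_votes is True (OP_2,
    increase), False (OP_1, don't increase), or None (no vote, which
    the proposal counts as a vote NOT to increase)."""
    assert len(period_votes) == PERIOD
    yes = sum(1 for v in period_votes if v is True)
    if yes > THRESHOLD:
        return current_limit + INCREMENT
    return current_limit
```

Note that an exact 50/50 split (1008 yes votes) leaves the limit unchanged, which matches the "more than 1008 blocks" wording.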
Re: [Bitcoin-development] Cost savings by using replace-by-fee, 30-90%
On Tuesday, 26 May 2015, at 11:22 am, Danny Thorpe wrote:
> What prevents RBF from being used for fraudulent payment reversals? Pay 1 BTC to Alice for hard goods, then after you receive the goods, broadcast a double-spend of that transaction to pay Alice nothing? Your only cost is the higher network fee of the 2nd tx.

The First-Seen-Safe replace-by-fee presently being discussed on this list disallows fraudulent payment reversals, as it disallows a replacing transaction that pays less to any output script than the replaced transaction paid.
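The first-seen-safe rule as stated above — a replacement may not pay less to any output script than the replaced transaction paid — can be sketched as follows. The (script, value) tuple representation is illustrative, not the actual serialization.

```python
from collections import defaultdict

def fss_rbf_ok(original_outputs, replacement_outputs):
    """First-seen-safe check (sketch): the replacement must pay at
    least as much to every output script as the transaction it
    replaces. Outputs are (script, value) pairs, summed per script."""
    def by_script(outputs):
        totals = defaultdict(int)
        for script, value in outputs:
            totals[script] += value
        return totals
    orig = by_script(original_outputs)
    repl = by_script(replacement_outputs)
    return all(repl.get(script, 0) >= value
               for script, value in orig.items())
```

Under this rule a fee bump has to come from adding inputs or outputs, never from shaving an existing output, which is exactly why Danny's reversal attack is excluded.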
Re: [Bitcoin-development] A suggestion for reducing the size of the UTXO database
On Monday, 25 May 2015, at 8:41 pm, Mike Hearn wrote:
> > some wallets (e.g., Andreas Schildbach's wallet) don't even allow it - you can only spend confirmed UTXOs. I can't tell you how aggravating it is to have to tell a friend, "Oh, oops, I can't pay you yet. I have to wait for the last transaction I did to confirm first." All the more aggravating because I know, if I have multiple UTXOs in my wallet, I can make multiple spends within the same block.
> Andreas' wallet hasn't done that for years. Are you repeating this from some very old memory, or do you actually see this issue in reality? The only time you're forced to wait for confirmations is when you have an unconfirmed inbound transaction, and thus the sender is unknown.

I see this behavior all the time. I am using the latest release, as far as I know: version 4.30. The same behavior occurs in the Testnet3 variant of the app. Go in there with an empty wallet and receive one payment and wait for it to confirm. Then send a payment and, before it confirms, try to send another one. The wallet won't let you send the second payment. It'll say something like, "You need x.xx more bitcoins to make this payment." But if you wait for your first payment to confirm, then you'll be able to make the second payment.

If it matters, I configure the app to connect only to my own trusted Bitcoin node, so I only ever have one active connection at most. I notice that outgoing payments never show as "Sent" until they appear in a block, presumably because the app never sees the transaction come in over any connection.
Re: [Bitcoin-development] Zero-Conf for Full Node Discovery
On Tuesday, 26 May 2015, at 1:15 am, Peter Todd wrote:
> On Tue, May 26, 2015 at 12:52:07AM -0400, Matt Whitlock wrote:
> > On Monday, 25 May 2015, at 11:48 pm, Jim Phillips wrote:
> > > Do any wallets actually do this yet?
> > Not that I know of, but they do seed their address database via DNS, which you can poison if you control the LAN's DNS resolver. I did this for a Bitcoin-only Wi-Fi network I operated at a remote festival. We had well over a hundred lightweight wallets, all trying to connect to the Bitcoin P2P network over a very bandwidth-constrained Internet link, so I poisoned the DNS and rejected all outbound connection attempts on port 8333, to force all the wallets to connect to a single local full node, which had connectivity to a single remote node over the Internet. Thus, all the lightweight wallets at the festival had Bitcoin network connectivity, but we only needed to backhaul the Bitcoin network's transaction traffic once.
> Interesting! What festival was this?

The Porcupine Freedom Festival (PorcFest) in New Hampshire last summer. I strongly suspect that it's the largest gathering of Bitcoin users at any event that is not specifically Bitcoin-themed. There's a lot of overlap between the Bitcoin and liberty communities. PorcFest draws somewhere around 1000-2000 attendees, a solid quarter of whom have Bitcoin wallets on their mobile devices.

The backhaul was a 3G cellular Internet connection, and the local Bitcoin node and network router were hosted on a Raspberry Pi with some Netfilter tricks to restrict connectivity. The net result was that all Bitcoin nodes (lightweight and heavyweight) on the local Wi-Fi network were unable to connect to any Bitcoin nodes except for the local node, which they discovered via DNS. I also had provisions in place to allow outbound connectivity to the API servers for Mycelium, Blockchain, and Coinbase wallets, by feeding the DNS resolver's results in real time into a whitelisting Netfilter rule utilizing IP sets.

For your amusement, here's the graphic for the banner that I had made to advertise the network at the festival (*chuckle*): http://www.mattwhitlock.com/bitcoin_wifi.png
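For illustration, a captive setup along the lines described might look roughly like this. Everything here is an assumption: the seed hostnames, the 192.168.0.1 node address, and the rule ordering are examples, not the actual festival configuration (which is not shown in the thread).

```shell
# --- dnsmasq config fragment (e.g. in /etc/dnsmasq.conf) ---
# Answer the wallets' DNS-seed lookups with the LAN full node's address.
# address=/seed.bitcoin.sipa.be/192.168.0.1
# address=/dnsseed.bluematt.me/192.168.0.1

# --- Netfilter rules on the router ---
# Refuse direct P2P egress so every wallet falls back to the local node,
# which is the only machine allowed to reach a remote peer on 8333.
iptables -A FORWARD -p tcp --dport 8333 -j REJECT

# Whitelist wallet API servers resolved in real time, via an IP set
# that the DNS resolver feeds as it answers queries.
ipset create wallet-apis hash:ip
iptables -I FORWARD -p tcp -m set --match-set wallet-apis dst -j ACCEPT
```

The IP-set approach matches the "feeding the DNS resolver's results in real time into a whitelisting Netfilter rule" description; dnsmasq's `ipset=` option can do that feeding directly.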
Re: [Bitcoin-development] Zero-Conf for Full Node Discovery
Who would be performing a Sybil attack against themselves? We're talking about a LAN here. All the nodes would be under the control of the same entity. In that case, you actually want them all connecting solely to a central hub node on the LAN, and the hub node should connect to diverse and unpredictable other nodes on the Bitcoin network.

On Monday, 25 May 2015, at 9:46 pm, Kevin Greene wrote:
> This is something you actually don't want. In order to make it as difficult as possible for an attacker to perform a Sybil attack, you want to choose a set of peers that is as diverse and unpredictable as possible.
>
> On Mon, May 25, 2015 at 9:37 PM, Matt Whitlock <b...@mattwhitlock.name> wrote:
> > This is very simple to do. Just ping the "all nodes" address (ff02::1) and try connecting to TCP port 8333 of each node that responds. Shouldn't take more than a few milliseconds on any but the most densely populated LANs.
> >
> > On Monday, 25 May 2015, at 11:06 pm, Jim Phillips wrote:
> > > Is there any work being done on using some kind of zero-conf service discovery protocol so that lightweight clients can find a full node on the same LAN to peer with rather than having to tie up WAN bandwidth? I envision a future where lightweight devices within a home use SPV over WiFi to connect with a home server which in turn relays the transactions they create out to the larger and faster relays on the Internet. In a situation where there are hundreds or thousands of small SPV devices in a single home (if 21, Inc. is successful) monitoring the blockchain, this could result in lower traffic across the slow WAN connection. And yes, I realize it could potentially take a LOT of these devices before the total bandwidth is greater than downloading a full copy of the blockchain, but there's other reasons to host your own full node -- trust being one.
> > >
> > > -- *James G. Phillips IV* https://plus.google.com/u/0/113107039501292625391/posts http://www.linkedin.com/in/ergophobe
> > > *"Don't bunt. Aim out of the ball park. Aim for the company of immortals." -- David Ogilvy*
> > > *This message was created with 100% recycled electrons. Please think twice before printing.*
Re: [Bitcoin-development] Zero-Conf for Full Node Discovery
This is very simple to do. Just ping the "all nodes" address (ff02::1) and try connecting to TCP port 8333 of each node that responds. Shouldn't take more than a few milliseconds on any but the most densely populated LANs.

On Monday, 25 May 2015, at 11:06 pm, Jim Phillips wrote:
> Is there any work being done on using some kind of zero-conf service discovery protocol so that lightweight clients can find a full node on the same LAN to peer with rather than having to tie up WAN bandwidth? I envision a future where lightweight devices within a home use SPV over WiFi to connect with a home server which in turn relays the transactions they create out to the larger and faster relays on the Internet. In a situation where there are hundreds or thousands of small SPV devices in a single home (if 21, Inc. is successful) monitoring the blockchain, this could result in lower traffic across the slow WAN connection. And yes, I realize it could potentially take a LOT of these devices before the total bandwidth is greater than downloading a full copy of the blockchain, but there's other reasons to host your own full node -- trust being one.
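The procedure described above (ping ff02::1, then probe TCP 8333 on each responder) might look roughly like this in Python. The interface name, the reliance on a `ping6` binary, and the parsing of its output are assumptions about a typical Linux environment, not a portable implementation.

```python
import socket
import subprocess

BITCOIN_PORT = 8333

def port_open(host, port, timeout=0.25):
    """Return True if a TCP connection to (host, port) succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def discover_lan_nodes(interface="eth0"):
    """Ping the IPv6 all-nodes multicast address and probe each
    responder on the Bitcoin P2P port. Assumes a Linux-style ping6
    whose replies look like '64 bytes from fe80::...: icmp_seq=1'."""
    out = subprocess.run(
        ["ping6", "-c", "2", f"ff02::1%{interface}"],
        capture_output=True, text=True).stdout
    responders = {line.split()[3].rstrip(":")
                  for line in out.splitlines() if "bytes from" in line}
    return [addr for addr in sorted(responders)
            if port_open(f"{addr}%{interface}", BITCOIN_PORT)]
```

A wallet would then pick one of the returned link-local addresses as its sole peer instead of dialing out over the WAN.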
Re: [Bitcoin-development] Zero-Conf for Full Node Discovery
On Monday, 25 May 2015, at 11:48 pm, Jim Phillips wrote:
> Do any wallets actually do this yet?

Not that I know of, but they do seed their address database via DNS, which you can poison if you control the LAN's DNS resolver. I did this for a Bitcoin-only Wi-Fi network I operated at a remote festival. We had well over a hundred lightweight wallets, all trying to connect to the Bitcoin P2P network over a very bandwidth-constrained Internet link, so I poisoned the DNS and rejected all outbound connection attempts on port 8333, to force all the wallets to connect to a single local full node, which had connectivity to a single remote node over the Internet. Thus, all the lightweight wallets at the festival had Bitcoin network connectivity, but we only needed to backhaul the Bitcoin network's transaction traffic once.

On May 25, 2015 11:37 PM, Matt Whitlock <b...@mattwhitlock.name> wrote:
> This is very simple to do. Just ping the "all nodes" address (ff02::1) and try connecting to TCP port 8333 of each node that responds. Shouldn't take more than a few milliseconds on any but the most densely populated LANs.
>
> On Monday, 25 May 2015, at 11:06 pm, Jim Phillips wrote:
> > Is there any work being done on using some kind of zero-conf service discovery protocol so that lightweight clients can find a full node on the same LAN to peer with rather than having to tie up WAN bandwidth? I envision a future where lightweight devices within a home use SPV over WiFi to connect with a home server which in turn relays the transactions they create out to the larger and faster relays on the Internet. In a situation where there are hundreds or thousands of small SPV devices in a single home (if 21, Inc. is successful) monitoring the blockchain, this could result in lower traffic across the slow WAN connection. And yes, I realize it could potentially take a LOT of these devices before the total bandwidth is greater than downloading a full copy of the blockchain, but there's other reasons to host your own full node -- trust being one.
Re: [Bitcoin-development] A suggestion for reducing the size of the UTXO database
Minimizing the number of UTXOs in a wallet is sometimes not in the best interests of the user. In fact, quite often I've wished for a configuration option like "Try to maintain _[number]_ UTXOs in the wallet." This is because I often want to make multiple spends from my wallet within one block, but spends of unconfirmed inputs are less reliable than spends of confirmed inputs, and some wallets (e.g., Andreas Schildbach's wallet) don't even allow it - you can only spend confirmed UTXOs. I can't tell you how aggravating it is to have to tell a friend, "Oh, oops, I can't pay you yet. I have to wait for the last transaction I did to confirm first." All the more aggravating because I know, if I have multiple UTXOs in my wallet, I can make multiple spends within the same block.

On Saturday, 9 May 2015, at 12:09 pm, Jim Phillips wrote:
> Forgive me if this idea has been suggested before, but I made this suggestion on reddit and I got some feedback recommending I also bring it to this list -- so here goes.
>
> I wonder if there isn't perhaps a simpler way of dealing with UTXO growth. What if, rather than deal with the issue at the protocol level, we deal with it at the source of the problem -- the wallets. Right now, the typical wallet selects only the minimum number of unspent outputs when building a transaction. The goal is to keep the transaction size to a minimum so that the fee stays low. Consequently, lots of unspent outputs just don't get used and are left lying around until some point in the future.
>
> What if we started designing wallets to consolidate unspent outputs? When selecting unspent outputs for a transaction, rather than choosing just the minimum number from a particular address, why not select them ALL? Take all of the UTXOs from a particular address or wallet, send however much needs to be spent to the payee, and send the rest back to the same address or a change address as a single output?
>
> Through this method, we should wind up shrinking the UTXO database over time rather than growing it with each transaction. Obviously, as Bitcoin gains wider adoption, the UTXO database will grow, simply because there are 7 billion people in the world, and eventually a good percentage of them will have one or more wallets with spendable bitcoin. But this idea could limit the growth at least.
>
> The vast majority of users are running one of a handful of different wallet apps: Core, Electrum, Armory, Mycelium, Breadwallet, Coinbase, Circle, Blockchain.info, and maybe a few others. The developers of all these wallets have a vested interest in the continued usefulness of Bitcoin, and so should not be opposed to changing their UTXO selection algorithms to one that reduces the UTXO database instead of growing it.
>
> From the miners' perspective, even though these types of transactions would be larger, the fee could stay low. Miners actually benefit from them in that they reduce the amount of storage miners need to dedicate to holding the UTXO set. So miners are incentivized to mine these types of transactions with a higher priority despite a low fee.
>
> Relays could also get in on the action and enforce this type of behavior by refusing to relay, or deprioritizing the relay of, transactions that don't use all of the available UTXOs from the addresses used as inputs. Relays are not only the ones who benefit the most from a reduction of the UTXO database; they're also in the best position to promote good behavior.
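The consolidating selection Jim proposes — spend every available UTXO, pay the payee, and return the remainder as a single change output — is trivial to sketch. The tuple representation and the flat fee parameter are illustrative; real coin selection would also account for the fee growing with input count.

```python
def consolidating_spend(utxos, payment_amount, payee, change_addr, fee):
    """Build a transaction that consumes ALL available UTXOs, pays the
    payee, and returns the remainder as one change output, leaving the
    wallet with at most a single UTXO afterward.
    utxos: list of (outpoint, value) pairs; amounts in satoshis."""
    total = sum(value for _, value in utxos)
    change = total - payment_amount - fee
    if change < 0:
        raise ValueError("insufficient funds")
    inputs = list(utxos)                       # consume every UTXO
    outputs = [(payee, payment_amount)]
    if change > 0:
        outputs.append((change_addr, change))  # one consolidated output
    return inputs, outputs
```

Each such spend turns N wallet UTXOs into at most one, which is exactly the net-shrinking effect the proposal is after.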
[Bitcoin-development] Proposed alternatives to the 20MB step function
Between all the flames on this list, several ideas were raised that did not get much attention. I hereby resubmit these ideas for consideration and discussion.

- Perhaps the hard block size limit should be a function of the actual block sizes over some trailing sampling period. For example, take the median block size among the most recent 2016 blocks and multiply it by 1.5. This allows Bitcoin to scale up gradually and organically, rather than having human beings guess at what is an appropriate limit.

- Perhaps the hard block size limit should be determined by a vote of the miners. Each miner could embed a desired block size limit in the coinbase transactions of the blocks it publishes. The effective hard block size limit would be that size having the greatest number of votes within a sliding window of most recent blocks.

- Perhaps the hard block size limit should be a function of block-chain length, so that it can scale up smoothly rather than jumping immediately to 20 MB. This function could be linear (anticipating a breakdown of Moore's Law) or quadratic.

I would be in support of any of the above, but I do not support Mike Hearn's proposed jump to 20 MB. Hearn's proposal kicks the can down the road without actually solving the problem, and it does so in a controversial (step-function) way.
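The first idea above can be stated in a few lines. The 1 MB floor here is my assumption for illustration (so the limit can never ratchet below the then-current constant); the 2016-block window and 1.5x multiplier are from the proposal.

```python
import statistics

def next_hard_limit(recent_block_sizes, multiplier=1.5, floor=1_000_000):
    """Trailing-median proposal: the hard limit is 1.5x the median size
    of the most recent 2016 blocks, never dropping below a floor
    (the floor value is an assumption, not part of the proposal)."""
    window = recent_block_sizes[-2016:]
    median = statistics.median(window)
    return max(int(median * multiplier), floor)
```

Because the median ignores outliers, a handful of miners stuffing maximum-size blocks cannot drag the limit up on their own; at least half the network has to actually produce bigger blocks.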
Re: [Bitcoin-development] Proposed alternatives to the 20MB step function
On Friday, 8 May 2015, at 3:32 pm, Joel Joonatan Kaartinen wrote:
> It seems you missed my suggestion about basing the maximum block size on the bitcoin days destroyed in transactions that are included in the block. I think it has potential for both scaling as well as keeping up a constant fee pressure. If tuned properly, it should both stop spamming and increase the block size maximum when there are a lot of real transactions waiting for inclusion.

I saw it. I apologize for not including it in my list. I should have, for the sake of discussion, even though I have a problem with it. My problem with it is that "bitcoin days destroyed" is not a measure of demand for space in the block chain. In the distant future, when Bitcoin is the predominant global currency, bitcoins will have such high velocity that the number of bitcoin days destroyed in each block will be much lower than at present. Does this mean that the block size limit should be lower in the future than it is now? Clearly this would be incorrect. Perhaps I am misunderstanding your proposal. Could you describe it more explicitly?
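For reference, "bitcoin days destroyed" for a transaction is conventionally the sum over its inputs of coin value times coin age. A toy version makes the velocity objection concrete: the same value moving daily destroys far fewer coin-days than value that sat still.

```python
def bitcoin_days_destroyed(spent_inputs):
    """Toy BDD metric: for each spent input, the value in BTC times the
    number of days since that coin last moved, summed over all inputs.
    Inputs are (value_btc, age_days) pairs (representation is mine)."""
    return sum(value_btc * age_days for value_btc, age_days in spent_inputs)
```

Spending 10 BTC that sat for 30 days destroys 300 coin-days; the same 10 BTC re-spent after one day destroys only 10, even though the demand for block space is identical — which is the core of the objection above.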
Re: [Bitcoin-development] Block Size Increase
I'm not so much opposed to a block size increase as I am opposed to a hard fork. My problem with a hard fork is that everyone and their brother wants to seize the opportunity of a hard fork to insert their own pet feature, and such a mad rush of lightly considered, obscure feature additions would be extremely risky for Bitcoin. If it could be guaranteed that raising the block size limit would be the only incompatible change introduced in the hard fork, then I would support it, but I strongly fear that the hard fork itself will become an excuse to change other aspects of the system in ways that will have unintended and possibly disastrous consequences.
Re: [Bitcoin-development] network disruption as a service and proof of local storage
I agree that someone could do this, but why is that a problem? Isn't the goal of this exercise to ensure more full nodes on the network? In order to be able to answer the challenges, an entity would need to be running a full node somewhere. Thus, they have contributed at least one additional full node to the network. I could certainly see a case for a company to host hundreds of lightweight (e.g., EC2) servers all backed by a single copy of the block chain. Why force every single machine to have its own copy? All you really need to require is that each agency/participant have its own copy. On Friday, 27 March 2015, at 2:32 pm, Robert McKay wrote: Basically the problem with that is that someone could setup a single full node that has the blockchain and can answer those challenges and then a bunch of other non-full nodes that just proxy any such challenges to the single full node. Rob On 2015-03-26 23:04, Matt Whitlock wrote: Maybe I'm overlooking something, but I've been watching this thread with increasing skepticism at the complexity of the offered solution. I don't understand why it needs to be so complex. I'd like to offer an alternative for your consideration... Challenge: Send me: SHA256(SHA256(concatenation of N pseudo-randomly selected bytes from the block chain)). Choose N such that it would be infeasible for the responding node to fetch all of the needed blocks in a short amount of time. In other words, assume that a node can seek to a given byte in a block stored on local disk much faster than it can download the entire block from a remote peer. This is almost certainly a safe assumption. For example, choose N = 1024. Then the proving node needs to perform 1024 random reads from local disk. On spinning media, this is likely to take somewhere on the order of 15 seconds. Assuming blocks are averaging 500 KiB each, then 1024 blocks would comprise 500 MiB of data. Can 500 MiB be downloaded in 15 seconds? This data transfer rate is 280 Mbps. 
Almost certainly not possible. And if it is, just increase N. The challenge also becomes more difficult as average block size increases. This challenge-response protocol relies on the lack of a partial getdata command in the Bitcoin protocol: a node cannot ask for only part of a block; it must ask for an entire block. Furthermore, nodes could ban other nodes for making too many random requests for blocks. On Thursday, 26 March 2015, at 7:09 pm, Sergio Lerner wrote: If I understand correctly, transforming raw blocks to keyed blocks takes 512x longer than transforming keyed blocks back to raw. The key is public, like the IP, or some other value which perhaps changes less frequently. Yes. I was thinking that the IP could be part of a first layer of encryption done to the blockchain data prior to the asymmetric operation. That way the asymmetric operation can be the same for all users (no different primes for different IPs, and then the verifier does not have to verify that a particular p is actually a pseudo-prime suitable for P.H.) and the public exponent can be just 3. Two protocols can be performed to prove local possession: 1. (prover and verifier pay a small cost) The verifier sends a seed to derive some n random indexes, and the prover must respond with the hash of the decrypted blocks within a certain time bound. Suppose that decryption of n blocks takes 100 msec (+-100 msec of network jitter). Then an attacker must have a computer 50× faster to be able to consistently cheat. The last 50 blocks should not be part of the list, to allow nodes to catch up and encrypt the blocks in the background. Can you clarify: the prover is hashing random blocks of *decrypted*, as-in raw, blockchain data? What does this prove other than, perhaps, fast random IO of the blockchain? (which is useful in its own right, e.g. as a way to ensure only full-node IO-bound mining if baked into the PoW) How is the verifier validating the response without possession of the full blockchain? 
You're right, it is incorrect. It is not the decrypted blocks that must be sent, but the encrypted blocks. The correct protocol is this: 1. (prover and verifier pay a small cost) The verifier sends a seed to derive some n random indexes, and the prover must respond with the encrypted blocks within a certain time bound. The verifier decrypts those blocks to check if they are part of the block-chain. But then there is this improvement which allows the verifier to detect non-full-nodes with much less computation: 3. (prover pays a small cost, verifier a smaller cost) The verifier asks the prover to send a Merkle tree root of hashes of encrypted blocks with N indexes selected by a pseudo-random function seeded by a challenge value, where each encrypted-block is previously prefixed with the seed before being hashed (e.g. N=100).
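The byte-sampling challenge proposed at the start of this message can be sketched in a few lines. This is an illustrative toy only: the function name, the use of Python's `random` module for index derivation, and the in-memory stand-in for the chain data are all assumptions, not part of any proposal.

```python
import hashlib
import random

def challenge_response(chain: bytes, seed: bytes, n: int = 1024) -> bytes:
    # Derive n pseudo-random byte offsets from the agreed seed, read those
    # bytes from the locally stored chain data, and return
    # SHA256(SHA256(concatenation of the selected bytes)).
    rng = random.Random(seed)
    picked = bytes(chain[rng.randrange(len(chain))] for _ in range(n))
    return hashlib.sha256(hashlib.sha256(picked).digest()).digest()

# The verifier, holding its own copy of the chain, recomputes the digest
# and compares it with the prover's answer.
chain = bytes(range(256)) * 2048   # stand-in for block chain data
assert challenge_response(chain, b"seed-1") == challenge_response(chain, b"seed-1")
assert challenge_response(chain, b"seed-1") != challenge_response(chain, b"seed-2")
```

The security argument rests entirely on timing: a node without local data must fetch whole blocks over the network, which is assumed to be much slower than n local seeks.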
Re: [Bitcoin-development] network disruption as a service and proof of local storage
On Friday, 27 March 2015, at 4:57 pm, Wladimir J. van der Laan wrote: On Fri, Mar 27, 2015 at 11:16:43AM -0400, Matt Whitlock wrote: I agree that someone could do this, but why is that a problem? Isn't the goal of this exercise to ensure more full nodes on the network? In order to be able to answer the challenges, an entity would need to be running a full node somewhere. Thus, they have contributed at least one additional full node to the network. I could certainly see a case for a company to host hundreds of lightweight (e.g., EC2) servers all backed by a single copy of the block chain. Why force every single machine to have its own copy? All you really need to require is that each agency/participant have its own copy. They would not even have to run one. It could just pass the query to a random other node, and forward its result :) D'oh. Of course. Thanks. :/ The suggestion about encrypting blocks with a key tied to IP address seems like a bad idea, though. Lots of nodes are on dynamic IP addresses. It wouldn't really be practical to re-encrypt the entire block chain every time a node's IP address changes. -- ___ Bitcoin-development mailing list Bitcoin-development@lists.sourceforge.net https://lists.sourceforge.net/lists/listinfo/bitcoin-development
Re: [Bitcoin-development] network disruption as a service and proof of local storage
On Friday, 27 March 2015, at 4:57 pm, Wladimir J. van der Laan wrote: On Fri, Mar 27, 2015 at 11:16:43AM -0400, Matt Whitlock wrote: I agree that someone could do this, but why is that a problem? Isn't the goal of this exercise to ensure more full nodes on the network? In order to be able to answer the challenges, an entity would need to be running a full node somewhere. Thus, they have contributed at least one additional full node to the network. I could certainly see a case for a company to host hundreds of lightweight (e.g., EC2) servers all backed by a single copy of the block chain. Why force every single machine to have its own copy? All you really need to require is that each agency/participant have its own copy. They would not even have to run one. It could just pass the query to a random other node, and forward its result :) Ah, easy way to fix that. In fact, in my first draft of my suggestion, I had the answer, but I removed it because I thought it was superfluous. Challenge: Send me: SHA256(SHA256(concatenation of N pseudo-randomly selected bytes from the block chain | prover's nonce | verifier's nonce)). The nonces are from the version messages exchanged at connection startup. A node can't pass the buck because it can't control the nonce that a random other node chooses. -- ___ Bitcoin-development mailing list Bitcoin-development@lists.sourceforge.net https://lists.sourceforge.net/lists/listinfo/bitcoin-development
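The nonce-bound variant can be sketched the same way. The sampling details are illustrative assumptions as before; the point being demonstrated is only the nonce binding: a node that relays the challenge to some other full node gets an answer bound to *that* connection's nonces, which will not match what the original verifier expects.

```python
import hashlib
import random

def respond(chain: bytes, seed: bytes, n: int,
            prover_nonce: bytes, verifier_nonce: bytes) -> bytes:
    # Same pseudo-random byte sampling as before, but the connection
    # nonces are appended before hashing, binding the answer to this
    # particular peer link.
    rng = random.Random(seed)
    picked = bytes(chain[rng.randrange(len(chain))] for _ in range(n))
    preimage = picked + prover_nonce + verifier_nonce
    return hashlib.sha256(hashlib.sha256(preimage).digest()).digest()

chain = bytes(range(256)) * 64   # stand-in for block chain data

# Honest answer on the verifier's connection vs. an answer relayed from
# a different connection (different prover nonce): they do not match.
honest = respond(chain, b"seed", 256, b"nonceA", b"nonceB")
relayed = respond(chain, b"seed", 256, b"nonceC", b"nonceB")
assert honest != relayed
```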
Re: [Bitcoin-development] network disruption as a service and proof of local storage
Maybe I'm overlooking something, but I've been watching this thread with increasing skepticism at the complexity of the offered solution. I don't understand why it needs to be so complex. I'd like to offer an alternative for your consideration... Challenge: Send me: SHA256(SHA256(concatenation of N pseudo-randomly selected bytes from the block chain)). Choose N such that it would be infeasible for the responding node to fetch all of the needed blocks in a short amount of time. In other words, assume that a node can seek to a given byte in a block stored on local disk much faster than it can download the entire block from a remote peer. This is almost certainly a safe assumption. For example, choose N = 1024. Then the proving node needs to perform 1024 random reads from local disk. On spinning media, this is likely to take somewhere on the order of 15 seconds. Assuming blocks are averaging 500 KiB each, then 1024 blocks would comprise 500 MiB of data. Can 500 MiB be downloaded in 15 seconds? This data transfer rate is 280 Mbps. Almost certainly not possible. And if it is, just increase N. The challenge also becomes more difficult as average block size increases. This challenge-response protocol relies on the lack of a partial getdata command in the Bitcoin protocol: a node cannot ask for only part of a block; it must ask for an entire block. Furthermore, nodes could ban other nodes for making too many random requests for blocks. On Thursday, 26 March 2015, at 7:09 pm, Sergio Lerner wrote: If I understand correctly, transforming raw blocks to keyed blocks takes 512x longer than transforming keyed blocks back to raw. The key is public, like the IP, or some other value which perhaps changes less frequently. Yes. I was thinking that the IP could be part of a first layer of encryption done to the blockchain data prior to the asymmetric operation. 
That way the asymmetric operation can be the same for all users (no different primes for different IPs, and then the verifier does not have to verify that a particular p is actually a pseudo-prime suitable for P.H.) and the public exponent can be just 3. Two protocols can be performed to prove local possession: 1. (prover and verifier pay a small cost) The verifier sends a seed to derive some n random indexes, and the prover must respond with the hash of the decrypted blocks within a certain time bound. Suppose that decryption of n blocks takes 100 msec (+-100 msec of network jitter). Then an attacker must have a computer 50× faster to be able to consistently cheat. The last 50 blocks should not be part of the list, to allow nodes to catch up and encrypt the blocks in the background. Can you clarify: the prover is hashing random blocks of *decrypted*, as-in raw, blockchain data? What does this prove other than, perhaps, fast random IO of the blockchain? (which is useful in its own right, e.g. as a way to ensure only full-node IO-bound mining if baked into the PoW) How is the verifier validating the response without possession of the full blockchain? You're right, it is incorrect. It is not the decrypted blocks that must be sent, but the encrypted blocks. The correct protocol is this: 1. (prover and verifier pay a small cost) The verifier sends a seed to derive some n random indexes, and the prover must respond with the encrypted blocks within a certain time bound. The verifier decrypts those blocks to check if they are part of the block-chain. But then there is this improvement which allows the verifier to detect non-full-nodes with much less computation: 3. (prover pays a small cost, verifier a smaller cost) The verifier asks the prover to send a Merkle tree root of hashes of encrypted blocks with N indexes selected by a pseudo-random function seeded by a challenge value, where each encrypted-block is previously prefixed with the seed before being hashed (e.g. N=100). 
The verifier receives the Merkle root and performs a statistical test on the received information. From the N hashed blocks, it chooses M < N (e.g. M = 20), and asks the prover for the blocks at these indexes. The prover sends the blocks; the verifier validates the blocks by decrypting them and also verifies that the Merkle tree was well constructed for those block nodes. This proves with high probability that the Merkle tree was built on-the-fly and specifically for this challenge-response protocol. I also wonder about the effect of spinning disk versus SSD. Seek time for 1,000 random reads is either nearly zero or dominating, depending on which of the two you have. I wonder if a sequential read from a random index is a possible trade-off; it doesn't prove possession of the whole chain nearly as well, but at least iowait converges significantly. Then again, that presupposes a specific ordering on disk which might not exist. In X years it will all be solid-state, so eventually it's moot. Good idea.
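The Merkle spot-check step above can be sketched as follows. This is a simplified model, not the proposed protocol: the "encrypted blocks" are stand-in byte strings, the tree duplicates the last node on odd-sized levels, and all names are hypothetical.

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root_and_proof(leaves, index):
    # Return (root, proof): proof is the list of sibling hashes from
    # leaf `index` up to the root.
    level, proof, idx = list(leaves), [], index
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])      # duplicate last node on odd levels
        proof.append(level[idx ^ 1])     # sibling of the current node
        level = [sha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
        idx //= 2
    return level[0], proof

def verify_leaf(root, leaf, index, proof):
    # Recompute the path from the leaf to the root using the proof.
    node = leaf
    for sibling in proof:
        node = sha256(node + sibling) if index % 2 == 0 else sha256(sibling + node)
        index //= 2
    return node == root

# Prover: leaves are hashes of seed-prefixed "encrypted blocks".
seed = b"challenge-seed"
blocks = [bytes([i]) * 32 for i in range(8)]   # stand-in encrypted blocks
leaves = [sha256(seed + blk) for blk in blocks]
root, proof = merkle_root_and_proof(leaves, 5)

# Verifier: commits to the root, then spot-checks block 5 against it.
assert verify_leaf(root, sha256(seed + blocks[5]), 5, proof)
```

Because the seed prefixes every leaf, the tree cannot be precomputed before the challenge arrives, which is what makes the spot-check evidence that the prover held the data at challenge time.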
Re: [Bitcoin-development] Address Expiration to Prevent Reuse
On Tuesday, 24 March 2015, at 6:57 pm, Tom Harding wrote: It appears that a limited-lifetime address, such as the fanciful address = 4HB5ld0FzFVj8ALj6mfBsbifRoD4miY36v_349366 where 349366 is the last valid block for a transaction paying this address, could be made reuse-proof with bounded resource requirements, The core devs seem not to like ideas such as this because a transaction that was once valid can become invalid due to a chain reorganization. -- ___ Bitcoin-development mailing list Bitcoin-development@lists.sourceforge.net https://lists.sourceforge.net/lists/listinfo/bitcoin-development
Re: [Bitcoin-development] alternate proposal opt-in miner takes double-spend (Re: replace-by-fee v0.10.0rc4)
On Sunday, 22 February 2015, at 2:29 pm, Natanael wrote: In other words, you are unprotected and potentially at greater risk if you create a transaction depending on another zero-confirmation transaction. This happened to one of the merchants at the Bitcoin 2013 conference in San Jose. They sold some T-shirts and accepted zero-confirmation transactions. The transactions depended on other unconfirmed transactions, which never confirmed, so this merchant never got their money. I keep telling people not to accept transactions with zero confirmations, but no one listens. -- ___ Bitcoin-development mailing list Bitcoin-development@lists.sourceforge.net https://lists.sourceforge.net/lists/listinfo/bitcoin-development
Re: [Bitcoin-development] BIP70: why Google Protocol Buffers for encoding?
On Wednesday, 28 January 2015, at 5:19 pm, Giuseppe Mazzotta wrote: On 28-01-15 16:42, Mike Hearn wrote: Just as a reminder, there is no obligation to use the OS root store. You can (and quite possibly should) take a snapshot of the Mozilla/Apple/MSFT etc stores and load it in your app. We do this in bitcoinj by default to avoid cases where BIP70 requests work on some platforms and not others, although the developer can easily override this and use the OS root store instead. Except that Mozilla/Apple/MSFT will update these certificate stores - according to their policies - and your snapshot/collection might get outdated at a different pace than the OS-provided certificates, depending on how you (or the package maintainer) are rolling out updates. I'm frankly _horrified_ to learn that BitcoinJ ships its own root CA certificates bundle. This means that, if a root CA gets breached and a certificate gets revoked, all BitcoinJ-using software will be vulnerable until BitcoinJ ships an update *and* the software in question pulls in the new BitcoinJ update and releases its own update. That might never happen. -- ___ Bitcoin-development mailing list Bitcoin-development@lists.sourceforge.net https://lists.sourceforge.net/lists/listinfo/bitcoin-development
Re: [Bitcoin-development] [softfork proposal] Strict DER signatures
To be more in the C++ spirit, I would suggest changing the (const std::vector<unsigned char>& sig, size_t& off) parameters to (std::vector<unsigned char>::const_iterator& itr, std::vector<unsigned char>::const_iterator end). Example:

bool ConsumeNumber(std::vector<unsigned char>::const_iterator& itr, std::vector<unsigned char>::const_iterator end, unsigned int len)
{
    // Length of number should be within signature.
    if (itr + len >= end) return false;
    // Negative numbers are not allowed.
    if (*itr & 0x80) return false;
    // Zero bytes at the start are not allowed, unless it would
    // otherwise be interpreted as a negative number.
    if (len > 1 && (*itr == 0x00) && !(*(itr + 1) & 0x80)) return false;
    // Consume number itself.
    itr += len;
    return true;
}

On Thursday, 22 January 2015, at 11:02 am, Rusty Russell wrote: Pieter Wuille pieter.wui...@gmail.com writes: Hello everyone, We've been aware of the risk of depending on OpenSSL for consensus rules for a while, and were trying to get rid of this as part of BIP 62 (malleability protection), which was however postponed due to unforeseen complexities. The recent events (see the thread titled OpenSSL 1.0.0p / 1.0.1k incompatible, causes blockchain rejection. on this mailing list) have made it clear that the problem is very real, however, and I would prefer to have a fundamental solution for it sooner rather than later. OK, I worked up a clearer (but more verbose) version with fewer magic numbers. More importantly, feel free to steal the test cases. One weirdness is the restriction on maximum total length, rather than a 32 byte (33 with 0-prepad) limit on signatures themselves. Apologies for my babytalk C++. Am sure there's a neater way.

/* Licensed under Creative Commons zero (public domain). */

#include <vector>
#include <cstdlib>
#include <cassert>

#ifdef CLARIFY
bool ConsumeByte(const std::vector<unsigned char>& sig, size_t& off, unsigned int& val)
{
    if (off >= sig.size()) return false;
    val = sig[off++];
    return true;
}

bool ConsumeTypeByte(const std::vector<unsigned char>& sig, size_t& off, unsigned int t)
{
    unsigned int type;
    if (!ConsumeByte(sig, off, type)) return false;
    return (type == t);
}

bool ConsumeNonZeroLength(const std::vector<unsigned char>& sig, size_t& off, unsigned int& len)
{
    if (!ConsumeByte(sig, off, len)) return false;
    // Zero-length integers are not allowed.
    return (len != 0);
}

bool ConsumeNumber(const std::vector<unsigned char>& sig, size_t& off, unsigned int len)
{
    // Length of number should be within signature.
    if (off + len > sig.size()) return false;
    // Negative numbers are not allowed.
    if (sig[off] & 0x80) return false;
    // Zero bytes at the start are not allowed, unless it would
    // otherwise be interpreted as a negative number.
    if (len > 1 && (sig[off] == 0x00) && !(sig[off+1] & 0x80)) return false;
    // Consume number itself.
    off += len;
    return true;
}

// Consume a DER encoded integer, update off if successful.
bool ConsumeDERInteger(const std::vector<unsigned char>& sig, size_t& off)
{
    unsigned int len;
    // Type byte must be integer
    if (!ConsumeTypeByte(sig, off, 0x02)) return false;
    if (!ConsumeNonZeroLength(sig, off, len)) return false;
    // Now the BE encoded value itself.
    if (!ConsumeNumber(sig, off, len)) return false;
    return true;
}

bool IsValidSignatureEncoding(const std::vector<unsigned char>& sig)
{
    // Format: 0x30 [total-length] 0x02 [R-length] [R] 0x02 [S-length] [S] [sighash]
    // * total-length: 1-byte length descriptor of everything that follows,
    //   excluding the sighash byte.
    // * R-length: 1-byte length descriptor of the R value that follows.
    // * R: arbitrary-length big-endian encoded R value. It cannot start with any
    //   null bytes, unless the first byte that follows is 0x80 or higher, in which
    //   case a single null byte is required.
    // * S-length: 1-byte length descriptor of the S value that follows.
    // * S: arbitrary-length big-endian encoded S value. The same rules apply.
    // * sighash: 1-byte value indicating what data is hashed.

    // Accept empty signature as correctly encoded (but invalid) signature,
    // even though it is not strictly DER.
    if (sig.size() == 0) return true;

    // Maximum size constraint.
    if (sig.size() > 73) return false;

    size_t off = 0;

    // A signature is of type compound.
    if (!ConsumeTypeByte(sig, off, 0x30)) return false;

    unsigned int len;
    if (!ConsumeNonZeroLength(sig, off, len)) return false;

    // Make sure the length covers the rest (except sighash).
    if (len + 1 != sig.size() - off) return false;

    // Check R value.
    if
Re: [Bitcoin-development] The legal risks of auto-updating wallet software; custodial relationships
On Tuesday, 20 January 2015, at 12:40 pm, Peter Todd wrote: On Tue, Jan 20, 2015 at 12:23:14PM -0500, Matt Whitlock wrote: If you have the private keys for your users' bitcoins, then you are every bit as much the owner of those bitcoins as your users are. There is no custodial relationship, as you have both the ability and the right to spend those bitcoins. Possession of a private key is equivalent to ownership of the bitcoins controlled by that private key. Possessing a private key certainly does not give you an automatic legal right to anything. As an example, I could sign an agreement with you that promised I would manage some BTC on your behalf. That agreement without any doubt takes away any legal right I had to your BTC, even though I may have the technical ability to spend them. This is the very reason why the law has the notion of a custodial relationship in the first place. I never signed any kind of agreement with Andreas Schildbach. I keep my bitcoins in his wallet with the full knowledge that an auto-update could clean me out. (I only hold walking around amounts of money in my mobile wallet for exactly this reason.) I would love it if Andreas offered me an agreement not to spend my bitcoins without my consent, but I doubt he'd legally be allowed to offer such an agreement, as that would indeed set up a custodial relationship, which would put him into all sorts of regulatory headache. -- ___ Bitcoin-development mailing list Bitcoin-development@lists.sourceforge.net https://lists.sourceforge.net/lists/listinfo/bitcoin-development
Re: [Bitcoin-development] The legal risks of auto-updating wallet software; custodial relationships
On Tuesday, 20 January 2015, at 10:46 am, Peter Todd wrote: I was talking to a lawyer with a background in finance law the other day and we came to a somewhat worrying conclusion: authors of Bitcoin wallet software probably have a custodial relationship with their users, especially if they use auto-update mechanisms. Unfortunately this has potential legal implications as custodial relationships tend to be pretty highly regulated. Why is this? Well, in most jurisdictions' financial laws, a custodial relationship is defined as having the ability, but not the right, to dispose of an asset. If you have the private keys for your users' bitcoins - e.g. an exchange or online wallet - you clearly have the ability to spend those bitcoins, thus you have a custodial relationship. If you have the private keys for your users' bitcoins, then you are every bit as much the owner of those bitcoins as your users are. There is no custodial relationship, as you have both the ability and the right to spend those bitcoins. Possession of a private key is equivalent to ownership of the bitcoins controlled by that private key. -- ___ Bitcoin-development mailing list Bitcoin-development@lists.sourceforge.net https://lists.sourceforge.net/lists/listinfo/bitcoin-development
Re: [Bitcoin-development] The legal risks of auto-updating wallet software; custodial relationships
On Tuesday, 20 January 2015, at 6:44 pm, Tamas Blummer wrote: Knowing the private key and owning the linked coins is not necessarily the same in front of a court. At least in German law there is a difference between 'Eigentum', which means ownership, and 'Besitz', which means the ability to deal with it. Being able to deal with an asset does not make you the owner. So what we're telling the newbies in /r/bitcoin is plain wrong. Bitcoins *do* have an owner independent from the parties who have access to the private keys that control their disposition. That's pretty difficult to reconcile from a technological perspective. -- ___ Bitcoin-development mailing list Bitcoin-development@lists.sourceforge.net https://lists.sourceforge.net/lists/listinfo/bitcoin-development
Re: [Bitcoin-development] BIP70: why Google Protocol Buffers for encoding?
Even if a compact binary encoding is a high priority, there are more standard choices than Google Protocol Buffers. For example, ASN.1 is a very rigorously defined standard that has been around for decades, and ASN.1 even has an XML encoding (XER) that is directly convertible to/from the binary encoding (BER/DER), given the schema. In practice, I'm mostly agnostic about what encoding is actually used in BIP70, and I wouldn't fault BIP70 for choosing Google Protocol Buffers, but the very existence of Protobuf perplexes me, as it apparently re-solves a problem that was solved 40 years ago by ASN.1. It's as though the engineers at Google weren't aware that ASN.1 existed. On Monday, 19 January 2015, at 7:07 pm, Richard Brady wrote: Hi Gavin, Mike and co Is there a strong driver behind the choice of Google Protocol Buffers for payment request encoding in BIP-0070? Performance doesn't feel that relevant when you think that: 1. Payment requests are not broadcast, this is a request / response flow, much more akin to a web request. 2. One would be cramming this data into a binary format just so you can then attach it to a no-so-binary format such as HTTP. Some great things about protocols/encodings such as HTTP/JSON/XML are: 1. They are human readable on-the-wire. No Wireshark plugin required, tcpdump or ngrep will do. 2. There are tons of great open source libraries and API for parsing / manipulating / generating. 3. It's really easy to hand-craft a test message for debugging. 4. The standards are much easier to read and write. They don't need to contain code like BIP-0070 currently does and they can contain examples, which BIP70 does not. 5. They are thoroughly specified by independent standards bodies such as the IETF. Gotta love a bit of MUST / SHOULD / MAY in a standard. 6. They're a family ;-) Keen to hear your thoughts on this and very keen to watch the payment protocol grow regardless of encoding choice! 
My background is SIP / VoIP and I think that could be a fascinating use case for this protocol which I'm hoping to do some work on. Best, Richard -- ___ Bitcoin-development mailing list Bitcoin-development@lists.sourceforge.net https://lists.sourceforge.net/lists/listinfo/bitcoin-development
Re: [Bitcoin-development] convention/standard for sorting public keys for p2sh multisig transactions
On Wednesday, 14 January 2015, at 3:53 pm, Eric Lombrozo wrote: Internally, pubkeys are DER-encoded integers. I thought pubkeys were represented as raw integers (i.e., they're embedded in Script as a push operation whose payload is the raw bytes of the big-endian representation of the integer). As far as I know, DER encoding is only used for signatures. Am I mistaken? -- ___ Bitcoin-development mailing list Bitcoin-development@lists.sourceforge.net https://lists.sourceforge.net/lists/listinfo/bitcoin-development
Re: [Bitcoin-development] [BIP draft] CHECKLOCKTIMEVERIFY - Prevent a txout from being spent until an expiration time
Is there a reason why we can't have the new opcode simply replace the top stack item with the block height of the txout being redeemed? Then arbitrary logic could be implemented, including output cannot be spent until a certain time and also output can ONLY be spent until a certain time, as well as complex logic with alternative key groups with differing time constraints. OP_CHECKLOCKTIMEVERIFY, as conceived, seems too limited, IMHO. On Thursday, 2 October 2014, at 4:05 pm, Flavien Charlon wrote: Very good, I like the proposal. A question I have: can it be used to do the opposite, i.e. build a script that can only be spent up until block X? On Thu, Oct 2, 2014 at 2:09 AM, Peter Todd p...@petertodd.org wrote: On 1 October 2014 17:55:36 GMT-07:00, Luke Dashjr l...@dashjr.org wrote: On Thursday, October 02, 2014 12:05:15 AM Peter Todd wrote: On 1 October 2014 11:23:55 GMT-07:00, Luke Dashjr l...@dashjr.org wrote: Thoughts on some way to have the stack item be incremented by the height at which the scriptPubKey was in a block? Better to create a GET-TXIN-BLOCK-(TIME/HEIGHT)-EQUALVERIFY operator. scriptPubKey would be: GET-TXIN-BLOCKHEIGHT-EQUALVERIFY (fails unless top stack item is equal to the txin block height) delta height ADD (top stack item is now txin height + delta height) CHECKLOCKTIMEVERIFY This sounds do-able, although it doesn't address using timestamps. For timestamps replace height with time in the above example; the minimum block time rule will prevent gaming it. You'd want these sacrifices to unlock years into the future to thoroughly exceed any reasonable business cycle; that's so far into the future that miners are almost certain to just mine them and collect the fees. For many use cases, short maturity periods are just as appropriate IMO. Very easy to incentivise mining centralisation with short maturities. 
I personally think just destroying coins is better, but it doesn't sit well with people so this is the next best thing. -- ___ Bitcoin-development mailing list Bitcoin-development@lists.sourceforge.net https://lists.sourceforge.net/lists/listinfo/bitcoin-development
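The opcode sequence quoted in this thread (GET-TXIN-BLOCKHEIGHT-EQUALVERIFY, then delta ADD, then CHECKLOCKTIMEVERIFY) can be modeled as a toy stack evaluation. This is an illustrative assumption of the semantics, heights only and greatly simplified, not consensus code:

```python
def relative_lock_satisfied(txin_height: int, delta: int, nlocktime: int) -> bool:
    # GET-TXIN-BLOCKHEIGHT(-EQUALVERIFY): the height of the block
    # containing the txin ends up on the stack.
    stack = [txin_height]
    # <delta> ADD: top of stack is now txin height + delta.
    stack.append(delta)
    stack.append(stack.pop() + stack.pop())
    # CHECKLOCKTIMEVERIFY: fail unless the spending transaction's
    # nLockTime is at least the value on top of the stack.
    return nlocktime >= stack.pop()

assert relative_lock_satisfied(350000, 100, 350100)      # spendable at target
assert not relative_lock_satisfied(350000, 100, 350099)  # one block too early
```

This illustrates why the construction gives a *relative* delay: the required nLockTime moves with the height at which the output was confirmed, rather than being fixed in the script.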
Re: [Bitcoin-development] [BIP draft] CHECKLOCKTIMEVERIFY - Prevent a txout from being spent until an expiration time
Oops, sorry. I meant: replace the top stack item with the block height of the txin doing the redeeming. (So the script can calculate the current time to some reference time embedded in the script.) On Friday, 3 October 2014, at 10:28 am, Matt Whitlock wrote: Is there a reason why we can't have the new opcode simply replace the top stack item with the block height of the txout being redeemed? Then arbitrary logic could be implemented, including output cannot be spent until a certain time and also output can ONLY be spent until a certain time, as well as complex logic with alternative key groups with differing time constraints. OP_CHECKLOCKTIMEVERIFY, as conceived, seems too limited, IMHO. On Thursday, 2 October 2014, at 4:05 pm, Flavien Charlon wrote: Very good, I like the proposal. A question I have: can it be used to do the opposite, i.e. build a script that can only be spent up until block X? On Thu, Oct 2, 2014 at 2:09 AM, Peter Todd p...@petertodd.org wrote: On 1 October 2014 17:55:36 GMT-07:00, Luke Dashjr l...@dashjr.org wrote: On Thursday, October 02, 2014 12:05:15 AM Peter Todd wrote: On 1 October 2014 11:23:55 GMT-07:00, Luke Dashjr l...@dashjr.org wrote: Thoughts on some way to have the stack item be incremented by the height at which the scriptPubKey was in a block? Better to create a GET-TXIN-BLOCK-(TIME/HEIGHT)-EQUALVERIFY operator. scriptPubKey would be: GET-TXIN-BLOCKHEIGHT-EQUALVERIFY (fails unless top stack item is equal to the txin block height) delta height ADD (top stack item is now txin height + delta height) CHECKLOCKTIMEVERIFY This sounds do-able, although it doesn't address using timestamps. For timestamps replace height with time in the above example; the minimum block time rule will prevent gaming it. 
You'd want these sacrifices to unlock years into the future to thoroughly exceed any reasonable business cycle; that's so far into the future that miners are almost certain to just mine them and collect the fees. For many use cases, short maturity periods are just as appropriate IMO. Very easy to incentivise mining centralisation with short maturities. I personally think just destroying coins is better, but it doesn't sit well with people so this is the next best thing. 
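The GET-TXIN-BLOCKHEIGHT / ADD / CHECKLOCKTIMEVERIFY sequence discussed above can be sketched on a toy stack machine. This is a hypothetical illustration, not real Bitcoin Script: the opcode names and the function below are assumptions for the sake of example, assuming CHECKLOCKTIMEVERIFY-style semantics where the check fails when nLockTime is below the top stack item.

```python
def eval_relative_locktime(txin_height, delta, nlocktime):
    """Toy evaluation of: GET-TXIN-BLOCKHEIGHT <delta> ADD CHECKLOCKTIMEVERIFY.

    Returns True if a tx with the given nLockTime may redeem an output
    whose txin was confirmed at txin_height, i.e. nLockTime >= height + delta.
    """
    stack = []
    stack.append(txin_height)                 # GET-TXIN-BLOCKHEIGHT: push txin's block height
    stack.append(delta)                       # push the delta constant embedded in the script
    stack.append(stack.pop() + stack.pop())   # ADD: top of stack is now height + delta
    required = stack[-1]                      # CHECKLOCKTIMEVERIFY compares nLockTime to this
    return nlocktime >= required

# An output whose txin confirmed at height 300000 with delta 100 could only
# be redeemed by a transaction whose nLockTime is at least 300100.
```

For timestamps, the same sketch applies with block times in place of heights; as noted above, the minimum-block-time rule would prevent gaming it.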
Re: [Bitcoin-development] SPV clients and relaying double spends
Probably the first double-spend attempt (i.e., the second transaction to spend the same output(s) as another tx already in the mempool) would still need to be relayed. A simple double-spend alert wouldn't work because it could be forged. But after there have been two attempts to spend the same output, no further transactions spending that same output should be relayed, in order to prevent flooding the network. On Thursday, 25 September 2014, at 7:12 pm, Aaron Voisine wrote: Something like that would be a great help for SPV clients that can't detect double spends on their own. (still limited of course to sybil attack concerns) Aaron Voisine breadwallet.com On Thu, Sep 25, 2014 at 7:07 PM, Matt Whitlock b...@mattwhitlock.name wrote: What's to stop an attacker from broadcasting millions of spends of the same output(s) and overwhelming nodes with slower connections? Might it be a better strategy not to relay the actual transactions (after the first) but rather only propagate (once) some kind of double-spend alert? On Thursday, 25 September 2014, at 7:02 pm, Aaron Voisine wrote: There was some discussion of having nodes relay double-spends in order to alert the network about double spend attempts. A lot more users will be using SPV wallets in the future, and one of the techniques SPV clients use to judge how likely a transaction is to be confirmed is if it propagates across the network. I wonder if and when double-spend relaying is introduced, if nodes should also send BIP61 reject messages or something along those lines to indicate which transactions those nodes believe to be invalid, but are relaying anyway. This would be subject to sybil attacks, as is monitoring propagation, however it does still increase the cost of performing a 0 confirmation double spend attack on an SPV client above just relaying double-spends without indicating if a node believes the transaction to be valid. 
Aaron Voisine breadwallet.com
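The relay policy proposed above (relay the first double-spend attempt so the network learns of it, but suppress further conflicting spends to prevent flooding) can be sketched as follows. This is an assumption-laden illustration, not actual Bitcoin Core mempool code; the function and data-structure names are invented for the example.

```python
def make_relay_policy():
    """Relay the first tx spending each outpoint and the first conflicting
    (double-spend) tx, but drop any further conflicts to limit flooding."""
    spenders = {}  # outpoint -> set of txids seen spending it

    def should_relay(txid, outpoints):
        relay = True
        for op in outpoints:
            seen = spenders.setdefault(op, set())
            if txid in seen:
                relay = False   # exact tx already relayed once
            elif len(seen) >= 2:
                relay = False   # first double-spend already relayed; drop the rest
            seen.add(txid)
        return relay

    return should_relay
```

Under this sketch, an attacker broadcasting millions of conflicting spends of the same output costs the network at most two relayed transactions per outpoint.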
Re: [Bitcoin-development] SPV clients and relaying double spends
What's to stop an attacker from broadcasting millions of spends of the same output(s) and overwhelming nodes with slower connections? Might it be a better strategy not to relay the actual transactions (after the first) but rather only propagate (once) some kind of double-spend alert? On Thursday, 25 September 2014, at 7:02 pm, Aaron Voisine wrote: There was some discussion of having nodes relay double-spends in order to alert the network about double spend attempts. A lot more users will be using SPV wallets in the future, and one of the techniques SPV clients use to judge how likely a transaction is to be confirmed is if it propagates across the network. I wonder if and when double-spend relaying is introduced, if nodes should also send BIP61 reject messages or something along those lines to indicate which transactions those nodes believe to be invalid, but are relaying anyway. This would be subject to sybil attacks, as is monitoring propagation, however it does still increase the cost of performing a 0 confirmation double spend attack on an SPV client above just relaying double-spends without indicating if a node believes the transaction to be valid. Aaron Voisine breadwallet.com
Re: [Bitcoin-development] deterministic transaction expiration
It would make more sense to introduce a new script opcode that pushes the current block height onto the operand stack. Then you could implement arbitrary logic about which blocks the transaction can be valid in. This would require that the client revalidate all transactions in its mempool (really, only those making use of this opcode) whenever the chain tip changes. On Thursday, 31 July 2014, at 5:58 pm, Kaz Wesley wrote: There is currently little in place for managing transaction lifetime in the network's mempools (see discussion in github in #3722 mempool transaction expiration, and it seems to be a major factor blocking some mempool exchange, see #1833/1918, #3721). Expiry per-node a certain amount of wall time after receipt has been proposed, but that's a fragile mechanism -- a single node could keep all relayable transactions alive forever by remembering transactions until most nodes have dropped them and then releasing them back into the wild. I have a proposal for a way to add finite and predictable lifespans to transactions in mempools: we d̶e̶s̶t̶r̶o̶y̶ ̶t̶h̶e̶ ̶r̶e̶s̶u̶r̶r̶e̶c̶t̶i̶o̶n̶ ̶h̶u̶b̶ use nLockTime and a new standardness rule. It could be done in stages, would not necessarily require even a soft fork, and does not cause problems with reorgs like the proposal in #3509: 1. start setting nLockTime to the current height by default in newly created transactions (or slightly below the current height, for reorg-friendliness) 2. once users have had some time to upgrade to clients that set nLockTime, start discouraging transactions without nLockTime -- possibly with a slightly higher fee required for relay 3. start rate-limiting relay of transactions without an nLockTime (maybe this alone could be used to achieve [2]) 4. 
add a new IsStandard rule rejecting transactions with an nLockTime more than N blocks behind the current tip (for some fixed value N, to be determined) Transactions would stop being relayed and drop out of mempools a fixed number of blocks from their creation; once that window had passed, the sender's wallet could begin to expect the transaction would not be confirmed. In case a reorg displaces a transaction until after its expiry height, a miner can still put it back in the blockchain; the expiry height is just a relay rule. Also, a user who needed to get their original expired transaction confirmed could still do so by submitting it directly to a miner with suitable policies.
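The proposed IsStandard rule (step 4 above) can be sketched in miniature. This is a toy illustration under the proposal's assumption that wallets set nLockTime to roughly the current height at creation time; the window constant is a placeholder, since the proposal leaves N to be determined.

```python
EXPIRY_WINDOW_N = 100  # placeholder: blocks a tx stays relayable past its nLockTime

def is_standard_locktime(nlocktime, tip_height, window=EXPIRY_WINDOW_N):
    """Sketch of the proposed relay rule: reject (return False) any
    transaction whose nLockTime is more than `window` blocks behind the
    current tip. Expiry is a relay policy only; a miner may still mine
    an 'expired' transaction, e.g. after a reorg."""
    return nlocktime >= tip_height - window
```

Note this is purely a mempool/relay policy in the sketch, matching the proposal's point that expiry does not affect block validity.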
Re: [Bitcoin-development] deterministic transaction expiration
On Thursday, 31 July 2014, at 7:28 pm, Gregory Maxwell wrote: On Thu, Jul 31, 2014 at 6:38 PM, Matt Whitlock b...@mattwhitlock.name wrote: It would make more sense to introduce a new script opcode that pushes the current block height onto the operand stack. Then you could implement arbitrary logic about which blocks the transaction can be valid in. This would require that the client revalidate all transactions in its mempool (really, only those making use of this opcode) whenever the chain tip changes. Transactions that become invalid later have pretty severe consequences, because they might mean that, completely in the absence of fraud, transactions are forever precluded due to an otherwise harmless reorg. I understand what you're saying, but I don't understand why it's a problem. Transactions shouldn't be considered final until a reasonable number of confirmations anyway, so the possibility that an accepted transaction could become invalid due to a chain reorganization is not a new danger. Ordinary transactions can similarly become invalid due to chain reorganizations, due to inputs already having been spent in the new branch.
Re: [Bitcoin-development] Self-dependency transaction question...
On Sunday, 13 July 2014, at 7:32 pm, Richard Moore wrote: P.S. If it is valid, another question; what would happen if a transaction was self-referencing? I realize it would be very difficult to find one, but if I could find a transaction X whose input was X and had an output Y, would Y be a new valid utxo, without being a generation transaction input? Even if you could find such a transaction that contained its own digest, and even if such a transaction were valid, it still couldn't conjure new coins into existence. The sum of the outputs must be less than or equal to the sum of the inputs (except in the case of a coinbase transaction). If a transaction were to spend its own output, then the input would be completely used up by the output, leaving no balance for a second output.
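The value-conservation argument above can be stated in one line: for any non-coinbase transaction, the sum of output values may not exceed the sum of input values, so even a self-referencing transaction could not inflate the supply. A minimal sketch (illustrative names, not consensus code):

```python
def outputs_do_not_exceed_inputs(input_values, output_values):
    """Non-coinbase validity sketch: a transaction cannot create value.
    Any shortfall (inputs minus outputs) is the implied miner fee."""
    return sum(output_values) <= sum(input_values)

# A hypothetical self-referencing tx spending its own 5-coin output could
# at most re-emit those same 5 coins; it cannot output more than it takes in.
```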
Re: [Bitcoin-development] Bitcoin Protocol Specification
Is anyone working on a similar specification document for Satoshi's P2P protocol? I know how blocks and transactions are structured and verified, but I'm interested in knowing how they're communicated over the network.
Re: [Bitcoin-development] Incentivizing the running of full nodes
How can there be any kind of lottery that doesn't involve proof of work or proof of stake? Without some resource-limiting factor, there is no way to limit the number of lottery tickets any given individual could acquire. The very process of Bitcoin mining was invented specifically to overcome the Sybil problem, which had plagued computer scientists for decades, and now you're proposing a system that suffers from the same problem. Or am I wrong about this? On Monday, 16 June 2014, at 1:12 am, Odinn Cyberguerrilla wrote: I have been noticing for some time the problem which Mike H. identified as how we are bleeding nodes ~ losing nodes over time. This link was referenced in the coindesk article of May 9, 2014: http://sourceforge.net/p/bitcoin/mailman/bitcoin-development/thread/CANEZrP2rgiQHpekEpFviJ22QsiV%2Bs-F2pqosaZOA5WrRtJx5pg%40mail.gmail.com/#msg32196023 (coindesk article for reference: http://www.coindesk.com/bitcoin-nodes-need/) The proposed solution is noted here as a portion of an issue at: https://github.com/bitcoin/bitcoin/issues/4079 Essentially that part which has to do with helping reduce the loss of nodes is as follows: a feature similar to that suggested by @gmaxwell that would process small change and tiny txouts to user specified donation targets, in an incentivized process. Those running full nodes (Bitcoin Core all the time), processing their change and txouts through Core, would be provided incentives in the form of a 'decentralizing lottery' such that all participants who are running nodes and donating no matter how infrequently (and no matter who they donate to) will be entered in the 'decentralizing lottery,' the 'award amounts' (which would be distinct from 'block rewards' for any mining) would vary from small to large bitcoin amounts depending on how many participants are involved in the donations process. This would help incentivize individuals to run full nodes as well as encouraging giving and microdonations. 
The option could be expressed in the transactions area to contribute to help bitcoin core development for those that are setting up change and txouts for donations, regarding the microdonation portion (which has also been expressed conceptually at abis.io). This addresses the issue of how to incentivize more interested individuals to run full nodes (Bitcoin Core). The lottery concept (which would be applicable to anyone running the full node regardless of whether or not they are mining) is attractive from the point of view that it will complement the block reward concept already in place which serves those who mine, but more attractive to the individual who doesn't feel the urge to mine, but would like to have the chance of being compensated for the effort they put into the system. I hope that this leads to additional development discussion on these concepts regarding incentivizing giving. This may also involve a process BIP. I look forward to your remarks. Respect, Odinn
Re: [Bitcoin-development] Incentivizing the running of full nodes
On Monday, 16 June 2014, at 5:07 pm, Justus Ranvier wrote: On 06/16/2014 04:25 PM, Matt Whitlock wrote: How can there be any kind of lottery that doesn't involve proof of work or proof of stake? Without some resource-limiting factor, there is no way to limit the number of lottery tickets any given individual could acquire. The very process of Bitcoin mining was invented specifically to overcome the Sybil problem, which had plagued computer scientists for decades, and now you're proposing a system that suffers from the same problem. Or am I wrong about this? If you allow the solution set to include pay-to-play networks, and not just free P2P networks, then it's easier to find a solution. Imagine every node is competing with its peers in terms of relevancy. Relevancy is established by delivering newly-seen transactions first. Each node keeps track of which of its peers send it transactions that it hadn't seen and forwarded to them yet (making sure that the transactions do make it into a block) and uses that information to determine whether or not it should be paying that peer, or if that peer should be paying it, or if they are of equal relevancy and no net payment is required. Once any given pair of nodes can establish who, if anyone, should be paying, they could use micropayment channels to handle payments. Nodes that are well connected, and with high uptimes would end up being net recipients of payments. Mobile nodes and other low-uptime nodes would be net payers. Now that you've established a market for the service of delivering transaction information, you can rely on price signals to properly match supply and demand. People who hate market-based solutions could always run these nodes and configure them to refuse to pay anyone, and to charge nothing to their peers, if that's what they wanted. This is a cool idea, but doesn't it generate some perverse incentives? 
If I'm running a full node and I want to pay CheapAir for some plane tickets, I'll want to pay in the greatest number of individual transactions possible, to maximize the rewards that I'll receive from my connected peers. This maybe would not be a problem if transaction fees were required on all transactions, but as it is (e.g., while fee-free transactions can be accepted into blocks if they have high enough priority), I can preload my wallet with hundreds of small-ish outputs, let them sit there for a few months to accumulate coin age, and then spend each little piece in a separate transaction when it comes time to pay for a big-ticket purchase. It's more lucrative for me to pay for my plane ticket in 100 separate, low-value transactions than in one high-value transaction. So you're incentivizing greater consumption of bandwidth and storage.
Re: [Bitcoin-development] Incentivizing the running of full nodes
On Monday, 16 June 2014, at 7:59 pm, Mike Hearn wrote: This is a cool idea, but doesn't it generate some perverse incentives? If I'm running a full node and I want to pay CheapAir for some plane tickets, I'll want to pay in the greatest number of individual transactions possible Peers can calculate rewards based on number of inputs or total kb used: you're paying for kilobytes with either coin age or fees no matter what. So I think in practice it's not a big deal. So effectively, if you pay for your bandwidth/storage usage via fees, then the reward system is constrained by proof of burn, and if you pay for your usage via coin age, then the reward system is constrained by proof of stake. Now another concern: won't this proposal increase the likelihood of a network split? The free-market capitalist nodes will want to charge their peers and will kick and ban peers that don't pay up (and will pay their peers to avoid being kicked and banned themselves), whereas the socialist nodes will want all of their peers to feed them transactions out of the goodness of their hearts and will thus necessarily be relegated to connecting only to other altruistic peers. Thus, the network will comprise two incompatible ideological camps, whose nodes won't interconnect.
Re: [Bitcoin-development] Going to tag 0.9.2 final
On Friday, 13 June 2014, at 9:24 pm, xor wrote: On Friday, June 13, 2014 12:18:37 PM Wladimir wrote: If I do not hear anything, I will do a last-minute language import High risk projects as Bitcoin should NOT see ANY changes between release candidates and releases. You can cause severe havoc with ANY changes. Humans make mistakes! Please do not do such things! Agreed. Does Bitcoin Core not have a release cycle policy? Typically mission-critical projects will enter a code and resource freeze prior to tagging a release candidate, after which point only critical bugfixes are allowed into the release branch. A language translation update does not qualify as a critical bugfix and should be merged during the next release cycle.
Re: [Bitcoin-development] Paper Currency
On Saturday, 17 May 2014, at 11:31 am, Jerry Felix wrote: I picked some BIP numbers myself that seem to be available. I'm quite certain you're explicitly *NOT* supposed to do this.
Re: [Bitcoin-development] DNS seeds unstable
Is Peter Todd's server actually up? The Google public DNS resolver at 8.8.8.8 can't resolve testnet-seed.bitcoin.petertodd.org either (SERVFAIL). On Friday, 16 May 2014, at 6:34 pm, Andreas Schildbach wrote: Apparently British Telecom also cannot speak to Peter Todd's server. That another very large ISP in Europe. On 05/15/2014 01:50 PM, Andreas Schildbach wrote: I'm bringing this issue up again. The current Bitcoin DNS seed infrastructure is unstable. I assume this is because of we're using a custom DNS implementation which is not 100% compatible. There have been bugs in the past, like a case sensitive match for the domain name. Current state (seeds taken from bitcoinj): mainnet: seed.bitcoin.sipa.beOK dnsseed.bluematt.me OK dnsseed.bitcoin.dashjr.org SERVFAIL, tried multiple ISPs seed.bitcoinstats.com OK testnet: testnet-seed.bitcoin.petertodd.org SERVFAIL, just from Telefonica testnet-seed.bluematt.meOK (but only returns one node) Note: Telefonica is one of Europe's largest ISPs. I would try to improve DNS myself, but I'm not capable of writing C. My fix would be to reimplement everything in Java -- I doubt you guys would be happy with that. 
Re: [Bitcoin-development] Prenumbered BIP naming
On Monday, 12 May 2014, at 9:53 am, Gregory Maxwell wrote: I've noticed some folks struggling to attach labels to their yet to be numbered BIPs. I'd recommend people call them draft-main author name-what it does like draft-maxwell-coinburning in the style of pre-WG IETF drafts. Why is there such a high bar to getting a number assigned to a BIP anyway? BIP 1 seems to suggest that getting a BIP number assigned is no big deal, but the reality seems to betray that casual notion. Even proposals with hours of work put into them are not getting BIP numbers. It's not exactly as though there's a shortage of integers. Are numbers assigned only to proposals that are well liked? Isn't the point of assigning numbers so that we can have organized discussions about all proposals, even ones we don't like?
Re: [Bitcoin-development] Presenting a BIP for Shamir's Secret Sharing of Bitcoin private keys
On Tuesday, 22 April 2014, at 10:06 am, Jan Møller wrote: This is a very useful BIP, and I am very much looking forward to implementing it in Mycelium, in particular for bip32 wallets. To me this is not about whether to use SSS instead of multisig transactions. In the end you want to protect a secret (be it a HD master seed or a private key) in such a way that you can recover it in case of partial theft/loss. Whether I'll use the master seed to generate keys that are going to be used for multisig transactions is another discussion IMO. A few suggestions: - I think it is very useful to define different prefixes for testnet keys/seeds. As a developer I use the testnet every day, and many of our users use it for trying out new functionality. Mixing up keys meant for testnet and mainnet is bad. A fair point. I'll add some prefixes for testnet. - Please allow M=1. From a usability point of view it makes sense to allow the user to select 1 share if that is what he wants. How does that make sense? Decomposing a key/seed into 1 share is functionally equivalent to dispensing with the secret sharing scheme entirely. I have no strong opinions of whether to use GF(2^8) over Shamir's Secret Sharing, but the simplicity of GF(2^8) is appealing. I'll welcome forks of my draft BIP. I don't really have the inclination to research GF(2^8) secret sharing schemes and write an implementation at the present time, but if someone wants to take my BIP in that direction, then okay.
Re: [Bitcoin-development] Presenting a BIP for Shamir's Secret Sharing of Bitcoin private keys
On Tuesday, 22 April 2014, at 10:39 am, Tamas Blummer wrote: Extra encoding for testnet is quite useless complexity in face of many alt chains. BIPS should be chain agnostic. I would argue that Bitcoin should be altcoin-agnostic. :) I have no interest in catering to altcoins. But that's just me.
Re: [Bitcoin-development] Presenting a BIP for Shamir's Secret Sharing of Bitcoin private keys
On Tuesday, 22 April 2014, at 10:39 am, Jan Møller wrote: On Tue, Apr 22, 2014 at 10:29 AM, Matt Whitlock b...@mattwhitlock.namewrote: On Tuesday, 22 April 2014, at 10:27 am, Jan Møller wrote: - Please allow M=1. From a usability point of view it makes sense to allow the user to select 1 share if that is what he wants. How does that make sense? Decomposing a key/seed into 1 share is functionally equivalent to dispensing with the secret sharing scheme entirely. I agree that it may look silly to have just one-of-one share from a technical point of view, but from an end-user point of view there could be reasons for just having one piece of paper to manage. If M can be 1 then the software/hardware doesn't have to support multiple formats, import/export paths + UI (one for SIPA keys in one share, one for HD seeds in one share, one for SIPA keys + HD seeds in multiple shares). Less complexity more freedom of choice. Alright. It's a fair argument. Do you agree with encoding M using a bias of -1 so that M up to and including 256 can be encoded in one byte? Necessary Shares = M+1, not a problem I would probably encode N-of-M in 1 byte as I don't see good use cases with more than 17 shares. Anyway, I am fine with it as it is. Encoding bias of M changed to -1, and test vectors updated: https://github.com/whitslack/btctool/blob/bip/bip-.mediawiki
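The "bias of -1" encoding agreed above amounts to storing M-1 in a single byte, which lets M range over 1..256 inclusive. A minimal sketch (function names are illustrative, not taken from the BIP draft):

```python
def encode_m(m):
    """Encode the share parameter M as one byte with a bias of -1,
    so M may be any value from 1 through 256."""
    if not 1 <= m <= 256:
        raise ValueError("M must be in 1..256 for one-byte biased encoding")
    return bytes([m - 1])

def decode_m(b):
    """Recover M from its one-byte biased encoding."""
    return b[0] + 1
```

Without the bias, a raw byte could only represent M up to 255 and would waste the M=0 code point, which is meaningless for secret sharing.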
Re: [Bitcoin-development] Double-spending unconfirmed transactions is a lot easier than most people realise
On Tuesday, 22 April 2014, at 8:45 pm, Tom Harding wrote: A network where transaction submitters consider their (final) transactions to be unchangeable the moment they are transmitted, and where the network's goal is to confirm only transactions all of whose UTXO's have not yet been seen in a final transaction's input, has a chance to be such a network. Respectfully, this is not the goal of miners. The goal of miners is to maximize profits. Always will be. If they can do that by enabling replace-by-fee (and they can), then they will. Altruism does not factor into business.
[Bitcoin-development] Bug in 2-of-3 transaction signing in Bitcoind?
For the life of me, I cannot figure out what's wrong with this. It seems like Bitcoind has lost its mind. I'm trying to redeem a 2-of-3 multisig P2SH output using a raw transaction. Here's the address that the P2SH output was sent to:

$ bitcoind createmultisig 2 '["03566474f987a012a69a08097253394ebd681454df29c3f1fb0495a5b45490f928", "03927407ca158155d0d30366395ca9cdc7d93cfa0a5b22181374431c15aae7b358", "02cff98aba464f5d4ebac5e6417f142326235f5a0a59708ba6231471cce4ee0747"]'
{
  "address" : "33snuCcVUmn9iBG345keJRzMfVwz7Uo87C",
  "redeemScript" : "522103566474f987a012a69a08097253394ebd681454df29c3f1fb0495a5b45490f9282103927407ca158155d0d30366395ca9cdc7d93cfa0a5b22181374431c15aae7b3582102cff98aba464f5d4ebac5e6417f142326235f5a0a59708ba6231471cce4ee074753ae"
}

The transaction containing the output is ec7d985ae265a3a79c68d852e0e52cf4177c3362d7a25fb718be82f980f39285. It's the second output. So I ask Bitcoind to create a raw transaction to spend the output:

$ bitcoind createrawtransaction '[{"txid":"ec7d985ae265a3a79c68d852e0e52cf4177c3362d7a25fb718be82f980f39285", "vout":1}]' '{"19YNEu4ZqX3nU9rJMuMcDy3pzFhactZPmk":0.0005, "1J2qtR7HBbE4rkNAgZCo4hZUEd2Z4jtSgz":0.0004}'
0100018592f380f982be18b75fa2d762337c17f42ce5e052d8689ca7a365e25a987dec010250c31976a9145dafa18ab21debe3d20f2c39e88d630f822bd29e88ac409c1976a914bad35cd767b657daa4a735b32e3d1f1dab52872d88ac

And here is the decoded transaction, for completeness:

$ bitcoind decoderawtransaction 0100018592f380f982be18b75fa2d762337c17f42ce5e052d8689ca7a365e25a987dec010250c31976a9145dafa18ab21debe3d20f2c39e88d630f822bd29e88ac409c1976a914bad35cd767b657daa4a735b32e3d1f1dab52872d88ac
{ "txid" : "8d731e6e333d805f6c8b569e1a608d14127d61d3123b699355133b2c757c16fb", "version" : 1, "locktime" : 0, "vin" : [ { "txid" : "ec7d985ae265a3a79c68d852e0e52cf4177c3362d7a25fb718be82f980f39285", "vout" : 1, "scriptSig" : { "asm" : "", "hex" : "" }, "sequence" : 4294967295 } ], "vout" : [ { "value" : 0.0005, "n" : 0, "scriptPubKey" : { "asm" : "OP_DUP OP_HASH160 5dafa18ab21debe3d20f2c39e88d630f822bd29e OP_EQUALVERIFY OP_CHECKSIG", "hex" : "76a9145dafa18ab21debe3d20f2c39e88d630f822bd29e88ac", "reqSigs" : 1, "type" : "pubkeyhash", "addresses" : [ "19YNEu4ZqX3nU9rJMuMcDy3pzFhactZPmk" ] } }, { "value" : 0.0004, "n" : 1, "scriptPubKey" : { "asm" : "OP_DUP OP_HASH160 bad35cd767b657daa4a735b32e3d1f1dab52872d OP_EQUALVERIFY OP_CHECKSIG", "hex" : "76a914bad35cd767b657daa4a735b32e3d1f1dab52872d88ac", "reqSigs" : 1, "type" : "pubkeyhash", "addresses" : [ "1J2qtR7HBbE4rkNAgZCo4hZUEd2Z4jtSgz" ] } } ] }

Now I'll sign the transaction with 2 of 3 keys:

$ bitcoind signrawtransaction 0100018592f380f982be18b75fa2d762337c17f42ce5e052d8689ca7a365e25a987dec010250c31976a9145dafa18ab21debe3d20f2c39e88d630f822bd29e88ac409c1976a914bad35cd767b657daa4a735b32e3d1f1dab52872d88ac '[{"txid":"ec7d985ae265a3a79c68d852e0e52cf4177c3362d7a25fb718be82f980f39285", "vout":1, "scriptPubKey":"a91417f9f4ba5c2f2b9334805f91bbbf90a19aaa3d5687", "redeemScript":"522103566474f987a012a69a08097253394ebd681454df29c3f1fb0495a5b45490f9282103927407ca158155d0d30366395ca9cdc7d93cfa0a5b22181374431c15aae7b3582102cff98aba464f5d4ebac5e6417f142326235f5a0a59708ba6231471cce4ee074753ae"}]' '["Ky7EQeg71YHeftLc31tt8AoNSezFEgUCbvwYak1eKksg6gQww6FF", "KxAXrjTMZJN1Egqkckdz9TXyB2kyJ68wu7CiJk6Rygmr9zv2nScG"]'
{
  "hex" : "0100018592f380f982be18b75fa2d762337c17f42ce5e052d8689ca7a365e25a987dec0100fc004730440220781ae7e3e309289f53cc2c4016adfb5a1d0081157d4366b9f77f0358b7aeccbb022009c7297f60088b1815d6970c8e246e6b516ff8fce5e85de209004d8cc29e460201473044022018a23405ca72c5577f78c2356bdb8ba36259edb1320b90e2c31188e6317602201972db07bf5ef8e30221d3707ce6eb7ab748527ec8e7ca14241350920f03257f014c69522103566474f987a012a69a08097253394ebd681454df29c3f1fb0495a5b45490f9282103927407ca158155d0d30366395ca9cdc7d93cfa0a5b22181374431c15aae7b3582102cff98aba464f5d4ebac5e6417f142326235f5a0a59708ba6231471cce4ee074753ae0250c31976a9145dafa18ab21debe3d20f2c39e88d630f822bd29e88ac409c1976a914bad35cd767b657daa4a735b32e3d1f1dab52872d88ac",
  "complete" : true
}

And here's the decode of the signed transaction:

$ bitcoind decoderawtransaction
Re: [Bitcoin-development] Bug in 2-of-3 transaction signing in Bitcoind?
Thanks for the quick reply to both of you, Mike and Pieter. I feel foolish for posting to this list, because the debug.log does indeed say inputs already spent. That's so weird, though, because we haven't been able to get anything to accept the transaction, seemingly, and yet it was accepted into the block chain 15 blocks ago. Anyway, I'm sorry for the noise. On Tuesday, 15 April 2014, at 5:11 pm, Pieter Wuille wrote: The first input seems to be already spent by another transaction (which looks very similar). 0.9 should report a more detailed reason for rejection, by the way. On Tue, Apr 15, 2014 at 5:05 PM, Mike Hearn m...@plan99.net wrote: Check debug.log to find out the reason it was rejected.
Re: [Bitcoin-development] Bug in 2-of-3 transaction signing in Bitcoind?
On Tuesday, 15 April 2014, at 5:30 pm, Mike Hearn wrote: That's so weird, though, because we haven't been able to get anything to accept the transaction, seemingly, and yet it was accepted into the block chain 15 blocks ago. If the tx is already in the block chain then it won't be accepted again, because it would be double spending itself! Haha, yes, I know that. But we had been trying to get a 2-of-3 to be accepted by something for hours, and everything was rejecting it: Coinb.in, our local Bitcoind, the Eligius tx push form. Evidently something did accept it and we didn't notice. We're starting over again now and trying to reproduce the success (or failure).
Re: [Bitcoin-development] Bug in 2-of-3 transaction signing in Bitcoind?
On Tuesday, 15 April 2014, at 8:47 am, Mike Belshe wrote: For what it is worth, I found btcd (the go implementation of bitcoind) has much better error/diagnostics messages. It would have given you more than -22 TX Rejected. I used it to debug my own multi-sig transactions and it was very helpful. I'll have to check that out. A follow-up on my initial post... I did just successfully create, sign, and transmit another 2-of-3 transaction, so once again, I'm sorry I bothered this list. But since I did (and am now doing so again), I'll give a little more background on what we've been up to. It's not quite as simple as what I've shared thus far. We have built a tool from scratch in C++ that is kind of a Swiss Army knife of Bitcoin. It does all sorts of key and address conversions, hash functions, encoding and decoding, script disassembly, BIP38 encryption/decryption, the Shamir Secret Sharing that I've posted about here on this list before, and transaction building and signing. It has its own wallet and its own UTXO cache that contains only TXOs that are relevant to the objects in its wallet. It synchronizes its cache by scanning bitcoind's block data files. (It memory-maps them and can do a full scan of the entire block chain in about a minute!) The wallet can contain keys, seeds, and multi-signature aggregates (which in turn can comprise keys and seeds). What we've been testing is deriving sequences of multi-sig P2SH addresses from a set of public seeds, sending bitcoins to those addresses, then using our tool to find those outputs in the block chain and to create transactions that redeem them, and then signing those transactions by supplying the private seeds to the tool. Our tool is quite a bit easier to use than Bitcoind. (I was frankly appalled at the command-line syntax that was necessary to get Bitcoind to sign a P2SH multi-sig transaction.)
$ ./btctool privkey < /dev/random > privseed1
$ ./btctool privkey < /dev/random > privseed2
$ ./btctool privkey < /dev/random > privseed3
$ pubseed1=$(./btctool pubkey < privseed1)
$ pubseed2=$(./btctool pubkey < privseed2)
$ pubseed3=$(./btctool pubkey < privseed3)
$ ./chaintool init
$ ./chaintool add demo 2 :${pubseed1} :${pubseed2} :${pubseed3}
$ ./chaintool ls
demo 2 :036447c7edc861b9f41fa0f611d81784f19ce692f37e8772b55c37c743cd526b49 :03c831711ea65decc06b0f3ccb4b9f1ba1a99a6933e520f6e7e4c3dbb4f015b701 :0347f2a0a346f21538fc451b95a600bc64ce5d2d28b89bf547697f3a77195d8dd1
$ ./btctool addresses 1 2 ${pubseed1} ${pubseed2} ${pubseed3}
3GQd1tosFCE7Vo4TAiDHEKTaBgoyZTeL6R
$ bitcoind sendtoaddress 3GQd1tosFCE7Vo4TAiDHEKTaBgoyZTeL6R 0.01
6a9538f496f4c2d7f50c342fa6f6f76904a3b19f55f3a54a0003fc00b327d81b
(I waited here for the tx to get into a block)
$ ./chaintool sync /var/lib/bitcoin/.bitcoin/blocks 2> /dev/null
$ ./chaintool listunspent
[ { "txid": "6a9538f496f4c2d7f50c342fa6f6f76904a3b19f55f3a54a0003fc00b327d81b", "vout": 1, "address": "3GQd1tosFCE7Vo4TAiDHEKTaBgoyZTeL6R", "scriptPubKey": "a914a1701be36532f05a74511fca89afce180c58189587", "amount": 100, "confirmations": 1 } ]
$ cat > outputs << EOF
13QAKNuh9uFcEiNAsct6LSF1qWQR6HLarT 5 1FV4Fm3VCXfWy7BAXzT8t5qqTvEKZSad9v
EOF
$ tx=$(./chaintool createtx 1 demo < outputs)
(I manually edited ${tx} at this point to add an OP_RETURN output. We're currently working toward using OP_RETURN in a provable solvency scheme.)
$ signedtx1=$(./chaintool signtx ${tx} privseed1)
input #0: need 1 of [:03c831711ea65decc06b0f3ccb4b9f1ba1a99a6933e520f6e7e4c3dbb4f015b701, :0347f2a0a346f21538fc451b95a600bc64ce5d2d28b89bf547697f3a77195d8dd1]
$ signedtx2=$(./chaintool signtx ${signedtx1} privseed2)
$ bitcoind sendrawtransaction ${signedtx2}
b485b185c77d803f75e1ccfee1b5072846c9e0728f4c955ca40dce82263f8f16
$ exit
:-)
Re: [Bitcoin-development] Bug in 2-of-3 transaction signing in Bitcoind?
On Tuesday, 15 April 2014, at 6:39 pm, Chris Beams wrote: Looks interesting. Is the source available? The intent is to open-source it. We will do so when I'm confident that we have all the kinks worked out. Here's what it can do presently:

$ ./btctool
usage: ./btctool function [args]
  encode16                      Encode stdin to hex.
  decode16 [hex]                Decode hex from stdin or string.
  encode64 [hex]                Encode stdin or octets to Base64.
  decode64 [base64]             Decode Base64 from stdin or string.
  encode58 version [hex]        Encode stdin or octets to Base58Check.
  decode58 [base58]             Decode Base58Check from stdin or string.
  disassemble [script]          Disassemble hex script.
  sha256 [hex]                  Hash stdin or octets using SHA-256.
  rmd160 [hex]                  Hash stdin or octets using RIPEMD-160.
  privkey [hex]                 Derive private key from stdin or octets.
  pubkey [privkey]              Derive public key from private key.
  address [pubkey]              Derive address from public key.
  address m [pubkey...]         Derive m-of-n P2SH address from public keys.
  encrypt [privkey]             Encrypt private key per BIP38.
  decrypt [privkey]             Decrypt private key per BIP38.
  shares m n [privkey]          Distribute private key into m-of-n shares.
  join [share...]               Join shares to reconstitute private key.
  privkeys k [privseed]         Derive k private keys from private seed.
  pubkeys k [pubseed]           Derive k public keys from public seed.
  addresses k [pubseed]         Derive k addresses from public seed.
  addresses k m [pubseed...]    Derive k m-of-n P2SH addresses from public seeds.
$ ./chaintool
usage: ./chaintool function [args]
  init                              Initialize a new cache file.
  add label pubkey                  Add a public key.
  add label :pubseed                Add a public seed.
  add label m {pubkey|:pubseed}...  Add public keys/seeds for m-of-n P2SH.
  rm label                          Remove a public key or seed.
  ls                                List public keys and seeds.
  sync blocksdir                    Synchronize with block chain.
  tip                               Print hash of block at tip of main chain.
  getbalance [label...]             Get available balance.
  listunspent [label...]            List unspent outputs in JSON.
  createtx [fee] [label...]         Create transaction from address+amount pairs on stdin.
  signtx tx [{privkey|privseed}...] Sign transaction with private key(s)/seed(s).
Re: [Bitcoin-development] Presenting a BIP for Shamir's Secret Sharing of Bitcoin private keys
On Monday, 7 April 2014, at 7:07 pm, Gregory Maxwell wrote: On Mon, Apr 7, 2014 at 6:46 PM, Matt Whitlock b...@mattwhitlock.name wrote: On Monday, 7 April 2014, at 5:38 pm, Gregory Maxwell wrote: On Mon, Apr 7, 2014 at 5:33 PM, Nikita Schmidt nik...@megiontechnologies.com wrote: Regarding the choice of fields, any implementation of this BIP will need big integer arithmetic to do base-58 anyway. Nah, it doesn't. E.g. https://gitorious.org/bitcoin/libblkmaker/source/eb33f9c8e441ffef457a79d76ceed1ea20ab3059:base58.c That only *decodes* Base58Check. It has no encode function, which would require biginteger division. Yes, that's only a decode, but the same process (long division with manual carries) works just fine the other way. There is absolutely no need to use big integers for this. What do you think a big-integer division by a word-sized divisor *is*? Obviously rolling your own is always an option. Are you just saying that Base58 encoding and decoding is easier than Shamir's Secret Sharing because the divisors are small?
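Maxwell's point, that base58 encoding (like decoding) needs only schoolbook long division by a word-sized divisor with manual carries and no general bignum type, can be sketched as follows (an illustrative sketch, not the libblkmaker code):

```python
B58_ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def base58_encode(data: bytes) -> str:
    """Base58-encode by repeatedly long-dividing the byte string by 58,
    carrying remainders by hand -- no big-integer type required."""
    digits = []
    num = list(data)
    while any(num):
        remainder = 0
        quotient = []
        for byte in num:  # one pass of long division, most significant limb first
            acc = remainder * 256 + byte
            quotient.append(acc // 58)
            remainder = acc % 58
        digits.append(B58_ALPHABET[remainder])
        # drop leading zero limbs so the loop terminates
        while quotient and quotient[0] == 0:
            quotient.pop(0)
        num = quotient
    # each leading 0x00 byte encodes as a leading '1'
    pad = 0
    for byte in data:
        if byte != 0:
            break
        pad += 1
    return "1" * pad + "".join(reversed(digits))

print(base58_encode(b"\x01\x02\x03"))  # Ldp
```

Every intermediate value fits in a machine word, which is exactly the "long division with manual carries" the quoted message describes; the same per-limb loop run in the other direction (multiply by 58, add a digit) gives the decode.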
Re: [Bitcoin-development] have there been complains about network congestion? (router crashes, slow internet when running Bitcoin nodes)
On Tuesday, 8 April 2014, at 12:13 pm, Angel Leon wrote: I was wondering if we have or expect to have these issues in the future, perhaps uTP could help greatly the performance of the entire network at some point. Or people could simply learn to configure their routers correctly. The only time I ever notice that Bitcoind is saturating my upstream link is when I try to transfer a file using SCP from a computer on my home network to a computer out on the Internet somewhere. SCP sets the maximize throughput flag in the IP type of service field, and my router interprets that as meaning low priority, and so those SCP transfers get stalled behind Bitcoind. But mostly everything else (e.g., email, web browsing, instant messaging, SSH) shows no degradation whatsoever regardless of what Bitcoind is doing. The key is to move the packet queue from the cable modem into the router, where intelligent decisions about packet priority and reordering can be enacted. µTP pretty much reinvents the wheel, and it does so in userspace, where the overhead is greater. There's no need for it if proper QoS is in effect.
Re: [Bitcoin-development] Presenting a BIP for Shamir's Secret Sharing of Bitcoin private keys
On Monday, 7 April 2014, at 5:38 pm, Gregory Maxwell wrote: On Mon, Apr 7, 2014 at 5:33 PM, Nikita Schmidt nik...@megiontechnologies.com wrote: Regarding the choice of fields, any implementation of this BIP will need big integer arithmetic to do base-58 anyway. Nah, it doesn't. E.g. https://gitorious.org/bitcoin/libblkmaker/source/eb33f9c8e441ffef457a79d76ceed1ea20ab3059:base58.c That only *decodes* Base58Check. It has no encode function, which would require biginteger division.
Re: [Bitcoin-development] Finite monetary supply for Bitcoin
On Saturday, 5 April 2014, at 12:21 pm, Jorge Timón wrote: I like both DD-MM-YYYY and YYYY-MM-DD. I just dislike MM-DD-YYYY and YYYY-DD-MM. Your preferences reflect a cultural bias. The only entirely numeric date format that is unambiguous across all cultures is YYYY-MM-DD. (No culture uses YYYY-DD-MM, or at least the ISO seems to think so.)
Re: [Bitcoin-development] Presenting a BIP for Shamir's Secret Sharing of Bitcoin private keys
On Friday, 4 April 2014, at 5:51 pm, Nikita Schmidt wrote: On 4 April 2014 01:42, Matt Whitlock b...@mattwhitlock.name wrote: The fingerprint field, Hash16(K), is presently specified as a 16-bit field. Rationale: There is no need to consume 4 bytes just to allow shares to be grouped together. And if someone has more than 100 different secrets, they probably have a good system for managing their shares and won't need the hash anyway. Right, of course. Sorry, I didn't notice there was an update. Two bytes are plenty. I'm worried however about the dependency on SHA-512, which may be stretching it for a tiny embedded application. The other uses of HashL can be avoided. We are balancing here between consistency with the rest of this proposal, where everything is done via HashL, and consistency with the general practice of generating fingerprints with SHA-256, like in Base58Check. I'd be fine with changing the key fingerprint algorithm to something else. Do you like CRC16? Speaking of encoding, is it not wasteful to allocate three different application/version bytes just for the sake of always starting with 'SS'? It would be OK if it were accepted as a BIP, but merely as a de-facto standard it should aim at minimising future chances of collision. I agree on principle, however I think the more user-acceptable behavior is for all base58-encoded Shamir shares to begin with a common prefix, such as SS. Users are accustomed to relying on the prefix of the base58 encoding to understand what the object is: 1 for mainnet pubkey hash, 3 for mainnet script hash, 5 for uncompressed private key, P for passphrase-protected private key, etc. Yes, 5 for uncompressed private key and K or L for compressed private key. One A/VB and three prefixes in base58. Am I the only one to see this as a counter-example? However, thinking about this, I can find logic in wanting to stabilise text prefixes at a cost of six A/V bytes (as per the latest spec). 
There are only 58 first characters versus 256 AVBs, so we should rather be saving the former. The type of a base58-encoded object is determined not only by the application/version byte but by the payload length as well. For example, a base58-encoded object with an application/version byte of 0x80 but a payload length of 16 bytes would not be mistakable for a Bitcoin private key, even though AVB 0x80 does denote a Bitcoin private key when the payload length is 32 or 33 bytes. So it's not as simple as saying that this proposal costs 6 AVBs. It really costs one AVB for 18-byte payloads, one AVB for 21-byte payloads, one AVB for 34-byte payloads, one AVB for 37-byte payloads, one AVB for 66-byte payloads, and one AVB for 69-byte payloads. What about using the same P256 prime as for the elliptic curve? Just for consistency's sake. The initial draft of this BIP used the cyclic order (n) of the generator point on the secp256k1 elliptic curve as the modulus. The change to the present scheme was actually done for consistency's sake, so all sizes of secret can use a consistently defined modulus. Fair enough. Although I would have chosen the field order (p) simply because that's how all arithmetic already works in bitcoin. One field for everybody. It's also very close to 2^256, although still smaller than your maximum prime. Now of course with different bit lengths we have to pick one consistency over others. As Gregory Maxwell pointed out, you can't use p when you're dealing with private keys, as that is the order of the finite field over which the elliptic curve is defined, but private keys are not points on that curve; a private key is a scalar number of times to multiply the generator point. That means you have to use the order of the generator point as the modulus when working with private keys. Also, I'm somewhat inclined towards using the actual x instead of j in the encoding. I find it more direct and straightforward to encode the pair (x, y).
And x=0 can denote a special case for future extensions. There is no technical reason behind this, it's just for (subjective) clarity and consistency. There is a technical reason for encoding j rather than x[j]: it allows for the first 256 shares to be encoded, rather than only the first 255 shares. Wow, big deal. It's hard to imagine anyone needing exactly 256 shares, but who knows. And with j = x (starting from 1) we'd get user-friendly share numbering and simpler formulas in the spec and possibly in the implementation, with no off-by-one stuff. And M instead of M-2... It's common for implementation limits to be powers of two. I don't foresee any off-by-one errors, as the spec is clear on the value of each byte in the encoding.
Re: [Bitcoin-development] Presenting a BIP for Shamir's Secret Sharing of Bitcoin private keys
On Friday, 4 April 2014, at 7:14 am, Gregory Maxwell wrote: I still repeat my concern that any private key secret sharing scheme really ought to be compatible with threshold ECDSA, otherwise we're just going to have another redundant specification. I have that concern too, but then how can we support secrets of sizes other than 256 bits? A likely use case for this BIP (even more likely than using it to decompose Bitcoin private keys) is using it to decompose BIP32 master seeds, which can be 512 bits in size. We can't use secp256k1_n as the modulus there.
Re: [Bitcoin-development] Presenting a BIP for Shamir's Secret Sharing of Bitcoin private keys
On Friday, 4 April 2014, at 9:25 am, Gregory Maxwell wrote: On Fri, Apr 4, 2014 at 9:05 AM, Matt Whitlock b...@mattwhitlock.name wrote: On Friday, 4 April 2014, at 7:14 am, Gregory Maxwell wrote: I still repeat my concern that any private key secret sharing scheme really ought to be compatible with threshold ECDSA, otherwise we're just going to have another redundant specification. I have that concern too, but then how can we support secrets of sizes other than 256 bits? A likely use case for this BIP (even more likely than using it to decompose Bitcoin private keys) is using it to decompose BIP32 master seeds, which can be 512 bits in size. We can't use secp256k1_n as the modulus there. Well, if you're not doing anything homomorphic with the result the computation should probably be over a small field (for computational efficiency and implementation simplicity reasons) and the data split up, this also makes it easier to deal with many different data sizes, since the various sizes will more efficiently divide into the small field. The field only needs to be large enough to handle the number of distinct shares you wish to issue, so even an 8 bit field would probably be adequate (and yields some very simple table based implementations). Are you proposing to switch from prime fields to a binary field? Because if you're going to break up a secret into little pieces, you can't assume that every piece of the secret will be strictly less than some 8-bit prime modulus. And if you're going to do a base conversion, then you have to do arbitrary-precision integer math anyway, so I don't see that the small field really saves you any code. If that route is taken, rather than encoding BIP32 master keys, it would probably be prudent to encode the encryption optional version https://bitcointalk.org/index.php?topic=258678.0 ...
and if we're talking about a new armored private key format then perhaps we should be talking about Mark Friedenbach's error correcting capable scheme: https://gist.github.com/maaku/8996338#file-bip-ecc32-mediawiki (though it would be nicer if we could find a decoding scheme that supported list decoding without increasing the complexity of a basic implementation, since an advanced recovery tool could make good use of a list decode) Weren't you just clamoring for implementation *simplicity* in your previous paragraph? :)
Re: [Bitcoin-development] Presenting a BIP for Shamir's Secret Sharing of Bitcoin private keys
On Friday, 4 April 2014, at 10:08 am, Gregory Maxwell wrote: On Fri, Apr 4, 2014 at 9:36 AM, Matt Whitlock b...@mattwhitlock.name wrote: Are you proposing to switch from prime fields to a binary field? Because if you're going to break up a secret into little pieces, you can't assume that every piece of the secret will be strictly less than some 8-bit prime modulus. And if you're going to do a base conversion, then you have to do arbitrary-precision integer math anyway, so I don't see that the small field really saves you any code. Yes, I'm proposing using the binary extension field of GF(2^8). There are many secret sharing and data integrity applications available already operating over GF(2^8) so you can go compare implementation approaches without having to try them out yourself. Obviously anything efficiently encoded as bytes will efficiently encode over GF(2^8). Honestly, that sounds a lot more complicated than what I have now. I made my current implementation because I just wanted something simple that would let me divide a private key into shares for purposes of dissemination to my next of kin et al. Weren't you just clamoring for implementation *simplicity* in your previous paragraph? :) I do think there is a material difference in complexity that comes in layers rather than at a single point. It's much easier to implement a complex thing that has many individually testable parts than a single complex part. (Implementing arithmetic mod some huge P is quite a bit of work unless you're using some very high level language with integrated bignums— and are comfortable hoping that their bignums are sufficiently consistent with the spec). I already have a fairly polished implementation of my BIP, and it's not written in a very high-level language; it's C++, and the parts that do the big-integer arithmetic are basically C. I'm using the GMP library: very straightforward, very reliable, very fast.
Do you have a use case in mind that would benefit from byte-wise operations rather than big-integer operations? I mean, I guess if you were trying to implement this BIP on a PIC microcontroller, it might be nice to process the secret in smaller bites. (No pun intended.) But I get this feeling that you're only pushing me away from the present incarnation of my proposal because you think it's too similar (but not quite similar enough) to a threshold ECDSA key scheme.
Re: [Bitcoin-development] Presenting a BIP for Shamir's Secret Sharing of Bitcoin private keys
On Friday, 4 April 2014, at 10:51 am, Gregory Maxwell wrote: On Fri, Apr 4, 2014 at 10:16 AM, Matt Whitlock b...@mattwhitlock.name wrote: Honestly, that sounds a lot more complicated than what I have now. I made my current implementation because I just wanted something simple that would let me divide a private key into shares for purposes of dissemination to my next of kin et al. I suggest you go look at some of the other secret sharing implementations that use GF(2^8), they end up just being a couple of dozen lines of code. Pretty simple stuff, and they work efficiently for all sizes of data, there are implementations in a multitude of languages. There are a whole bunch of these. Okay, I will. Do you have a use case in mind that would benefit from byte-wise operations rather than big-integer operations? I mean, I guess if you were trying to implement this BIP on a PIC microcontroller, it might be nice to process the secret in smaller bites. (No pun intended.) But I get this feeling that you're only pushing me away from the present incarnation of my proposal because you think it's too similar (but not quite similar enough) to a threshold ECDSA key scheme. It lets you efficiently scale to any size data being encoded without extra overhead or having additional primes. It can be compactly implemented in Javascript (there are several implementations you can find if you google), it shouldn't be burdensome to implement on a device like a trezor (much less a real microcontroller). Those are fair points. And yea, sure, it's distinct from the implementation you'd use for threshold signing. A threshold signing one would lack the size agility or the ease of implementation on limited devices. So I do think that if there is to be two it would be good to gain the advantages that can't be achieved in an threshold ECDSA compatible approach. I agree. I'll look into secret sharing in GF(2^8), but it may take me a few days.
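The "couple of dozen lines" GF(2^8) split/join Maxwell describes might look like the following sketch. It shares each byte of the secret independently, uses the AES reduction polynomial 0x11B (an assumption on my part; the thread does not fix a polynomial), and interpolates at x = 0 to rejoin:

```python
import os

def gf_mul(a: int, b: int) -> int:
    """Multiply in GF(2^8) modulo x^8 + x^4 + x^3 + x + 1 (0x11B)."""
    p = 0
    while b:
        if b & 1:
            p ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11B
        b >>= 1
    return p

def gf_inv(a: int) -> int:
    # a^254 = a^-1 in GF(2^8), since a^255 = 1 for nonzero a
    r = 1
    for _ in range(254):
        r = gf_mul(r, a)
    return r

def poly_eval(coeffs, x):
    # coefficients low-to-high; Horner's rule from the top coefficient down
    acc = 0
    for c in reversed(coeffs):
        acc = gf_mul(acc, x) ^ c
    return acc

def split(secret: bytes, m: int, n: int):
    """Split into n shares (x, y-bytes); any m of them recover the secret."""
    polys = [[s] + list(os.urandom(m - 1)) for s in secret]
    return [(x, bytes(poly_eval(p, x) for p in polys)) for x in range(1, n + 1)]

def join(shares):
    """Lagrange-interpolate each byte position at x = 0."""
    xs = [x for x, _ in shares]
    out = []
    for i in range(len(shares[0][1])):
        acc = 0
        for j, (xj, yj) in enumerate(shares):
            num = den = 1
            for k, xk in enumerate(xs):
                if k != j:
                    num = gf_mul(num, xk)       # (0 - xk) = xk in characteristic 2
                    den = gf_mul(den, xj ^ xk)  # (xj - xk) = xj XOR xk
            acc ^= gf_mul(yj[i], gf_mul(num, gf_inv(den)))
        out.append(acc)
    return bytes(out)
```

Everything stays byte-sized (no carries between byte positions), which is the size agility being claimed: a 256-bit key and a 512-bit BIP32 seed go through the identical code path, just with longer share payloads.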
Re: [Bitcoin-development] Presenting a BIP for Shamir's Secret Sharing of Bitcoin private keys
On Thursday, 3 April 2014, at 4:41 pm, Nikita Schmidt wrote: I agree with the recently mentioned suggestion to make non-essential metadata, namely key fingerprint and degree (M), optional. Their 4-byte and 1-byte fields can be added individually at an implementation's discretion. During decoding, the total length will determine which fields are included. The fingerprint field, Hash16(K), is presently specified as a 16-bit field. Rationale: There is no need to consume 4 bytes just to allow shares to be grouped together. And if someone has more than 100 different secrets, they probably have a good system for managing their shares and won't need the hash anyway. Encoding for the testnet is not specified. Hmm, is that actually needed? Speaking of encoding, is it not wasteful to allocate three different application/version bytes just for the sake of always starting with 'SS'? It would be OK if it were accepted as a BIP, but merely as a de-facto standard it should aim at minimising future chances of collision. I agree on principle, however I think the more user-acceptable behavior is for all base58-encoded Shamir shares to begin with a common prefix, such as SS. Users are accustomed to relying on the prefix of the base58 encoding to understand what the object is: 1 for mainnet pubkey hash, 3 for mainnet script hash, 5 for uncompressed private key, P for passphrase-protected private key, etc. I'd add a clause allowing the use of random coefficients instead of deterministic, as long as the implementation guarantees to never make another set of shares for the same private key or master seed. I'm not sure that's necessary, as this is an Informational BIP. Implementations are free to ignore it. Shares with randomly selected coefficients would work just fine in a share joiner that conforms to the BIP, so I would expect implementors to feel free to ignore the deterministic formula and use randomly selected coefficients. 
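The deterministic-versus-random coefficient question above can be made concrete with a sketch. The draft BIP's actual formula is not quoted in this thread, so the derivation below is purely illustrative (hash-derived coefficients, a stand-in prime modulus); it only shows the shape of the idea, namely that splitting the same secret twice yields the same polynomial:

```python
import hashlib

# Stand-in prime modulus for 256-bit secrets (assumption, not the BIP's value).
Q = 2**256 - 189

def coefficients(secret: int, m: int) -> list:
    """Derive the m polynomial coefficients deterministically from the secret:
    the constant term is the secret itself, and each higher coefficient is a
    hash of the secret and its index, reduced mod Q."""
    coeffs = [secret % Q]
    for i in range(1, m):
        digest = hashlib.sha512(secret.to_bytes(32, "big") + bytes([i])).digest()
        coeffs.append(int.from_bytes(digest, "big") % Q)
    return coeffs
```

The point of determinism is that re-running the tool cannot issue a second, independent share set for the same key: with two different random polynomials for one secret, an attacker holding fewer than M shares from each set can still accumulate enough points across both to solve for the shared constant term. Randomly chosen coefficients join identically, which is why such shares interoperate with a conforming joiner, as the message above notes.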
> What about using the same P256 prime as for the elliptic curve? Just for consistency's sake.

The initial draft of this BIP used the cyclic order (n) of the generator point on the secp256k1 elliptic curve as the modulus. The change to the present scheme was actually done for consistency's sake, so all sizes of secret can use a consistently defined modulus.

> Also, I'm somewhat inclined towards using the actual x instead of j in the encoding. I find it more direct and straightforward to encode the pair (x, y). And x=0 can denote a special case for future extensions. There is no technical reason behind this, it's just for (subjective) clarity and consistency.

There is a technical reason for encoding j rather than x[j]: it allows for the first 256 shares to be encoded, rather than only the first 255 shares. If you want a sentinel value reserved for future extensions, then you might take notice that 0x is an invalid key fingerprint, along with several other values, and also that 0xFF is an unusable value of M−2, as that would imply M=257, but the scheme can only encode up to 256 shares, so one would never have enough shares to meet the threshold. I considered having the two optional fields be mandatory and allowing 0x and 0xFF as redacted field values, but I like allowing the shares to be shorter if the optional fields are omitted. (Imagine engraving Shamir secret shares onto metal bars by hand with an engraving tool. Fewer characters is better!)
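The index-vs-abscissa point, and why M=257 can serve as a sentinel, can be sketched in a few lines. The field encodings here (j in one byte with x = j + 1, and a degree byte storing M−2) follow the description above but are written out as an illustrative assumption:

```python
# Storing the index j (0..255) instead of the abscissa x lets all 256
# possible shares fit in one byte, since x = j + 1 ranges over 1..256.
def x_from_j(j: int) -> int:
    if not 0 <= j <= 255:
        raise ValueError("j must fit in one byte")
    return j + 1

# Storing M-2 in one byte covers thresholds 2..257, but M = 257 (field
# value 0xFF) can never be met with only 256 encodable shares, so that
# value is free to reserve as a sentinel.
def m_from_field(b: int) -> int:
    m = b + 2
    if m > 256:
        raise ValueError("threshold exceeds the 256 encodable shares")
    return m
```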
Re: [Bitcoin-development] Finite monetary supply for Bitcoin
The creation date in your BIP header has the wrong format. It should be 01-04-2014, per BIP 1. :-)

On Tuesday, 1 April 2014, at 9:00 pm, Pieter Wuille wrote:
> Hi all, I understand this is a controversial proposal, but bear with me please. I believe we cannot accept the current subsidy schedule anymore, so I wrote a small draft BIP with a proposal to turn Bitcoin into a limited-supply currency. Dogecoin has already shown how easy such changes are, so I consider this a worthwhile idea to be explored. The text can be found here: https://gist.github.com/sipa/9920696 Please comment! Thanks, -- Pieter
Re: [Bitcoin-development] Presenting a BIP for Shamir's Secret Sharing of Bitcoin private keys
On Saturday, 29 March 2014, at 9:44 am, Tamas Blummer wrote:
> I used Shamir's Secret Sharing to decompose a seed for a BIP32 master key, that is I think more future relevant than a single key. Therefore suggest to adapt the BIP for a length used there typically 16 or 32 bytes and have a magic code to indicate its use as key vs. seed.

Master keys of 32 bytes would work as-is, as ordinary private keys are also 32 bytes. Secrets of other lengths could be supported if the function that generates a[i] from a[i-1] (which is presently SHA-256) were replaced with a function having parameterized output length, such as scrypt. Base58Check encodings of shares for secrets of lengths other than 32 bytes would have prefixes other than "SS", but that's not a huge concern. I suspect 32 bytes would be the most common secret length anyway, wouldn't you?
Re: [Bitcoin-development] Presenting a BIP for Shamir's Secret Sharing of Bitcoin private keys
On Saturday, 29 March 2014, at 10:08 am, Chris Beams wrote:
> Matt, could you expand on use cases for which you see Shamir's Secret Sharing Scheme as the best tool for the job? In particular, when do you see that it would be superior to simply going with multisig in the first place? Perhaps you see these as complementary approaches, toward defense-in-depth? In any case, the Motivation and Rationale sections of the BIP in its current form are silent on these questions.

Okay, yes, I will address these questions.
Re: [Bitcoin-development] Presenting a BIP for Shamir's Secret Sharing of Bitcoin private keys
On Saturday, 29 March 2014, at 10:08 am, Chris Beams wrote:
> Matt, could you expand on use cases for which you see Shamir's Secret Sharing Scheme as the best tool for the job? In particular, when do you see that it would be superior to simply going with multisig in the first place? Perhaps you see these as complementary approaches, toward defense-in-depth? In any case, the Motivation and Rationale sections of the BIP in its current form are silent on these questions.

I have added two new sections to address your questions. https://github.com/whitslack/btctool/blob/bip/bip-.mediawiki
Re: [Bitcoin-development] Presenting a BIP for Shamir's Secret Sharing of Bitcoin private keys
On Saturday, 29 March 2014, at 2:36 pm, Mike Hearn wrote:
> Right - the explanation in the BIP about the board of directors is IMO a little misleading. The problem with splitting a private key is that at some point, *someone* has to get the full private key back, and they can then just remember the private key to undo the system. CHECKMULTISIG avoids this.

The implication is that every director would want to retain the board's private key for himself but also would want to prevent every other director from successfully retaining the private key for himself, leading to a perpetual stalemate in which no director ever gets to retain the private key.

> I can imagine that there may be occasional uses for splitting a wallet seed like this, like for higher security cold wallets, but I suspect an ongoing shared account like a corporate account is still best off using CHECKMULTISIG or the n-of-m ECDSA threshold scheme proposed by Ali et al.

Multisig does not allow for the topology I described. Say the board has seven directors, meaning the majority threshold is four. This means the organization needs the consent of six individuals in order to sign a transaction: the president, the CFO, and any four of the board members. A 6-of-9 multisig would not accomplish the same policy, as then any six board members could successfully sign a transaction without the consent of the president or CFO. Of course the multi-signature scheme could be expanded to allow for hierarchical threshold topologies, or Shamir's Secret Sharing can be used to distribute keys at the second level (and further, if desired).
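The two-level topology described above (president AND CFO AND any 4 of 7 directors) can be sketched with a toy nested Shamir split. The field modulus, function names, and random-coefficient construction here are illustrative assumptions only, not the BIP's deterministic scheme:

```python
import secrets

P = 2**256 - 189  # the largest prime below 2**256; an assumed toy modulus

def split(secret, m, n):
    """Random-coefficient Shamir split: any m of the n shares recover secret."""
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(m - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def join(shares):
    """Lagrange interpolation at x = 0 recovers the constant term."""
    total = 0
    for xi, yi in shares:
        num = den = 1
        for xj, _ in shares:
            if xj != xi:
                num = num * -xj % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

key = secrets.randbelow(P)
top = split(key, 3, 3)            # one share each: president, CFO, the board
board_x, board_y = top[2]
directors = split(board_y, 4, 7)  # the board's share, re-split 4-of-7

# Recovery requires the president, the CFO, and any four directors.
recovered_board_y = join(directors[:4])
assert join([top[0], top[1], (board_x, recovered_board_y)]) == key
```

Any four directors can reconstitute only the board's second-level share, which is useless without the president's and CFO's first-level shares; this is the policy that a flat 6-of-9 multisig cannot express.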
Re: [Bitcoin-development] Presenting a BIP for Shamir's Secret Sharing of Bitcoin private keys
On Saturday, 29 March 2014, at 10:19 am, Jeff Garzik wrote:
> On Sat, Mar 29, 2014 at 10:10 AM, Matt Whitlock b...@mattwhitlock.name wrote:
>> Multisig does not allow for the topology I described. Say the board has seven directors, meaning the majority threshold is four. This means the organization needs the consent of six individuals in order to sign a transaction: the president, the CFO, and any four of the board members. A 6-of-9 multisig would not accomplish the same policy, as then any six board members could successfully sign a transaction without the consent of the president or CFO. Of course the multi-signature scheme could be expanded to allow for hierarchical threshold topologies, or Shamir's Secret Sharing can be used to distribute keys at the second level (and further, if desired).
> Disagree with "does not allow". Review bitcoin's script language. Bitcoin script can handle the use case you describe. Add conditionals to the bitcoin script, OP_IF etc. You can do 'multisig AND multisig' type boolean logic entirely in script, and be far more flexible than a single CHECKMULTISIG affords.

Depends on your definition of "can". Bitcoin's scripting language is awesome, but it's mostly useless due to the requirement that scripts match one of a select few standard templates in order to be allowed to propagate across the network and be mined into blocks. I really hate IsStandard and wish it would die.
Re: [Bitcoin-development] Presenting a BIP for Shamir's Secret Sharing of Bitcoin private keys
On Saturday, 29 March 2014, at 7:36 am, Gregory Maxwell wrote:
> On Sat, Mar 29, 2014 at 7:28 AM, Watson Ladd w...@uchicago.edu wrote:
>> This is not the case: one can use MPC techniques to compute a signature from shares without reconstructing the private key. There is a paper on this for bitcoin, but I don't know where it is.
> Practically speaking you cannot unless the technique used is one carefully selected to make it possible. This proposal isn't such a scheme, I believe; however, I think I'd strongly prefer that we BIP standardize a formulation which also has this property.

I too would prefer that, but I do not believe there exists a method for computing a traditional signature from decomposed private key shares. Unless I'm mistaken, the composed signature has a different formula and requires a different verification algorithm from the ECDSA signatures we're using today. Thus, such a scheme would require a change to the Bitcoin scripting language. I specifically did not want to address that in my BIP because changes like that take too long. I am aiming to be useful in the present.
Re: [Bitcoin-development] Presenting a BIP for Shamir's Secret Sharing of Bitcoin private keys
On Saturday, 29 March 2014, at 9:44 am, Tamas Blummer wrote:
> I used Shamir's Secret Sharing to decompose a seed for a BIP32 master key, that is I think more future relevant than a single key. Therefore suggest to adapt the BIP for a length used there typically 16 or 32 bytes and have a magic code to indicate its use as key vs. seed.

I have expanded the BIP so that it additionally applies to BIP32 master seeds of sizes 128, 256, and 512 bits. https://github.com/whitslack/btctool/blob/bip/bip-.mediawiki

The most significant change versus the previous version is how the coefficients of the polynomials are constructed. Previously they were SHA-256 digests. Now they are SHA-512 digests, modulo a prime number that is selected depending on the size of the secret.
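The revised coefficient derivation described above could look roughly like this. The specific primes and chaining rule here are assumptions for illustration, not the BIP's normative values:

```python
import hashlib

# Assumed per-secret-size moduli: the largest prime below each power of two.
ASSUMED_PRIMES = {
    16: 2**128 - 159,   # for 128-bit BIP32 seeds
    32: 2**256 - 189,   # for 256-bit seeds and ordinary private keys
    64: 2**512 - 569,   # for 512-bit seeds
}

def next_coefficient(prev: bytes, secret_len: int) -> int:
    """Derive the next polynomial coefficient deterministically from the
    previous one: a SHA-512 digest reduced modulo the size-selected prime."""
    p = ASSUMED_PRIMES[secret_len]
    return int.from_bytes(hashlib.sha512(prev).digest(), "big") % p
```

Because the coefficients are a deterministic function of their predecessors, re-running the algorithm with the same M reproduces the same shares, which is what makes later extension from M-of-N to M-of-(N+1) possible.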
Re: [Bitcoin-development] Presenting a BIP for Shamir's Secret Sharing of Bitcoin private keys
On Saturday, 29 March 2014, at 12:59 pm, Alan Reiner wrote:
> I won't lie, there's a lot of work that goes into making an interface that makes this feature usable. The user needs clear ways to identify which fragments are associated with which wallet, and which fragments are compatible with each other.

The same is true of the multiple private keys involved in a multi-signature address.

> They need a way to save some fragments to file, print them, or simply write them down.

I proposed a share encoding scheme for exactly this purpose.

> They need a way to re-enter fragments, reject duplicates, identify errors, etc. Without it, the math fails silently, and you end up restoring a different wallet.

I intentionally omitted the parameter M (minimum subset size) from the shares because including it would give an adversary a vital piece of information. Likewise, including any kind of information that would allow a determination of whether the secret has been correctly reconstituted would give an adversary too much information. Failing silently when given incorrect shares or an insufficient number of shares is intentional.

> Also I put the secret in the highest-order coefficient of the polynomial,

Does it make any difference which coefficient holds the secret? It's convenient to put it in the lowest-order coefficient to simplify the recovery code.

> and made sure that the other coefficients were deterministic. This meant that if I print out an M-of-N wallet, I can later print out an M-of-(N+1) wallet and the first N fragments will be the same. I'm not sure how many users would trust this, but we felt it was important in case a user needs to export some fragments, even if they don't increase N.

My BIP likewise deterministically chooses the coefficients so that the shares of a secret are consistent across all runs of the algorithm having the same M.
As I'm sure you're aware, N (the number of shares to output) plays no part in the calculation and merely controls how many times the outermost loop is executed. My BIP doesn't even mention this parameter.
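On the question above of which coefficient should hold the secret: with the secret in the constant term, recovery is plain Lagrange interpolation at x = 0, whereas the highest-order coefficient is recovered as the top divided difference. A toy-field comparison (the field size and polynomial here are purely illustrative):

```python
P = 2**31 - 1  # a small Mersenne prime, big enough for a demo

def eval_poly(x, coeffs):
    return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P

coeffs = [111, 222, 333]                       # f(x) = 111 + 222x + 333x^2
shares = [(x, eval_poly(x, coeffs)) for x in (1, 2, 3)]

def constant_term(shares):
    """Secret in lowest order: evaluate the interpolant at x = 0."""
    total = 0
    for xi, yi in shares:
        num = den = 1
        for xj, _ in shares:
            if xj != xi:
                num = num * -xj % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

def leading_coeff(shares):
    """Secret in highest order: sum of y_i / prod_{j != i} (x_i - x_j)."""
    total = 0
    for xi, yi in shares:
        den = 1
        for xj, _ in shares:
            if xj != xi:
                den = den * (xi - xj) % P
        total = (total + yi * pow(den, -1, P)) % P
    return total

assert constant_term(shares) == 111
assert leading_coeff(shares) == 333
```

Both recoveries are a few lines over a prime field; the constant-term form is just the more familiar one, which is the "convenience" being claimed above.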
Re: [Bitcoin-development] Presenting a BIP for Shamir's Secret Sharing of Bitcoin private keys
On Saturday, 29 March 2014, at 10:25 am, Dev Random wrote:
> On Sat, 2014-03-29 at 11:44 -0400, Matt Whitlock wrote:
>> On Saturday, 29 March 2014, at 11:08 am, Watson Ladd wrote:
>>> https://freedom-to-tinker.com/blog/stevenag/new-research-better-wallet-security-for-bitcoin/
>> Thanks. This is great, although it makes some critical references to an ACM paper for which no URL is provided, and thus I cannot implement it. A distributed ECDSA notwithstanding, we still need a way to decompose a BIP32 master seed into shares. I am envisioning a scenario in which I
> It would seem that threshold ECDSA with keys derived from separate seeds has better security properties than one seed that is then split up. The main thing is that there is no single point of attack in the generation or signing.

No contest here. But can threshold ECDSA work with BIP32? In other words, can a threshold ECDSA public key be generated from separate, precomputed private keys, or can it only be generated interactively? Maybe the BIP32 master seeds have to be generated interactively, and then all sets of corresponding derived keys are valid signing groups? Threshold ECDSA certainly sounds nice, but is anyone working on a BIP for it? I would take it on myself, but I don't understand it well enough yet, and publicly available information on it seems lacking. I proposed this Shamir Secret Sharing BIP as an easily understood, easily implemented measure that we can use today, with no changes to existing Bitcoin software. It's low-hanging fruit.
Re: [Bitcoin-development] Presenting a BIP for Shamir's Secret Sharing of Bitcoin private keys
On Saturday, 29 March 2014, at 10:48 am, devrandom wrote:
> On Sat, 2014-03-29 at 13:38 -0400, Matt Whitlock wrote:
>> Threshold ECDSA certainly sounds nice, but is anyone working on a BIP for it? I would take it on myself, but I don't understand it well enough yet, and publicly available information on it seems lacking. I proposed this Shamir Secret Sharing BIP as an easily understood, easily implemented measure that we can use today, with no changes to existing Bitcoin software. It's low-hanging fruit.
> Good points, although multisig is catching on quickly in the ecosystem. AFAIK, all production wallets can send to p2sh addresses.

As far as I know, Blockchain.info wallets still can't send to P2SH addresses. This was a *major* roadblock in the Bitcoin project that I've been working on for the past several months, and it was the impetus for my creating this Shamir Secret Sharing implementation in the first place.
Re: [Bitcoin-development] Presenting a BIP for Shamir's Secret Sharing of Bitcoin private keys
On Saturday, 29 March 2014, at 2:08 pm, Alan Reiner wrote:
> Regardless of how does it, I believe that obfuscating that information is bad news from a usability perspective. Undoubtedly, users will make lots of backups of lots of wallets and think they remember the M-parameter but don't. They will accidentally mix in some 3-of-5 fragments with their 2-of-4, not realizing they are incompatible or not able to distinguish them. Or they'll distribute too many thinking the threshold is higher and end up insecure, or possibly not have enough fragments to restore their wallet thinking the M-value was lower than it actually was. I just don't see the value in adding such complexity for the benefit of obfuscating information an attacker might be able to figure out anyway (most backups will be 2-of-N or 3-of-N) and can't act on anyway (because he doesn't know where the other frags are and they are actually in safe-deposit boxes).

Okay, you've convinced me. However, it looks like the consensus here is that my BIP is unneeded, so I'm not sure it would be worth the effort for me to improve it with your suggestions.