Re: [Bitcoin-development] Merged mining a side chain with proof of burn on parent chain
Burn mining side chains might be one of the foundation ideas for that ecosystem, enabling permission-less merge mining for anyone with an interest in a side chain. I am puzzled by the lack of feedback on the idea.

Tamas Blummer
Bits of Proof

___
Bitcoin-development mailing list
Bitcoin-development@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bitcoin-development
Re: [Bitcoin-development] Merged mining a side chain with proof of burn on parent chain
On Mon, Dec 15, 2014 at 11:21:01AM +0100, Tamas Blummer wrote:
> Burn mining side chains might be one of the foundation ideas for that
> ecosystem, enabling permission-less merge mining for anyone with interest
> in a side chain. I am puzzled by the lack of feedback on the idea.

It's not a new idea actually - I outlined a system I eventually called zookeyv¹ based on the notion of sacrificing Bitcoins to achieve consensus a year and a half ago on #bitcoin-wizards. The discussion started here and continued for a few days:

https://download.wpsoftware.net/bitcoin/wizards/2013/05/13-05-29.log

I later wrote up the idea in the context of adding Zerocoin to Bitcoin:

http://www.mail-archive.com/bitcoin-development@lists.sourceforge.net/msg02472.html

For key-value mapping I eventually decided that the system didn't necessarily need to be a strict linear blockchain - a directed acyclic graph of commitments had advantages, as less synchronization was needed between transactions. This also means the graph doesn't necessarily need to be revealed directly in the blockchain, which would expose it to miner censorship. OTOH, revealing it makes it easy to determine whether an attacker larger than you exists.

These days I'd suggest using timelock crypto to defeat miner censorship, while ensuring that in principle consensus over all possible parts of the chain can eventually be reached.²

Proof-of-sacrifice for consensus has a few weird properties. For instance, you can defeat attackers after the fact by simply sacrificing more than the attacker - not unlike having an infinite amount of mining equipment available, with the only constraint being funds to pay for the electricity. (Which *is* semi-true with altcoins!)
As for your specific example, you can improve its censorship resistance by having the transactions commit to a nonce in advance, in some way indistinguishable from normal transactions, and then making the selection criterion some function of H(nonce | blockhash) - for instance, highest wins. So long as chain selection is based on cumulative difficulty against a fixed target - as is the case in Bitcoin proper - you should get a proper incentive to publish blocks, as well as the total-work information ratchet effect Bitcoin has against attackers.

1) In honor of Zooko's triangle.

2) This doesn't necessarily take as much work as you might expect: you can work backwards from the most recent block(s) if the scheme requires block B_i to include the decryption key for block B_{i-1}.

-- 
'peter'[:-1]@petertodd.org
18d439a78581d2a9ecd762a2b37dd5b403a82beb58dcbc7c
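The highest-wins selection rule Peter describes can be illustrated with a toy Python model. This is a sketch only: the choice of SHA-256, the (txid, nonce) layout, and the function names are all assumptions made up for illustration, not anything from an actual implementation.

```python
import hashlib

def selection_score(nonce: bytes, blockhash: bytes) -> int:
    # Score a pre-committed nonce against the later block hash. Because the
    # nonce was committed before the block hash was known, participants
    # cannot grind for a high score after the fact.
    return int.from_bytes(hashlib.sha256(nonce + blockhash).digest(), "big")

def pick_winner(commitments, blockhash):
    # commitments: iterable of (txid, nonce) pairs revealed for this block;
    # "highest wins", per the selection criterion described above.
    return max(commitments, key=lambda c: selection_score(c[1], blockhash))
```

The commit-then-reveal ordering is what lets the commitment stay indistinguishable from a normal transaction until the reveal, which is the censorship-resistance property being sought.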
[Bitcoin-development] Recent EvalScript() changes mean CHECKLOCKTIMEVERIFY can't be merged
BtcDrak was working on rebasing my CHECKLOCKTIMEVERIFY¹ patch to master a few days ago and found a fairly large design change that makes merging it currently impossible.

Pull-req #4890², specifically commit c7829ea7, changed the EvalScript() function to take an abstract SignatureChecker object, removing the txTo and nIn arguments that used to contain the transaction the script was in and the txin # respectively. CHECKLOCKTIMEVERIFY needs txTo to obtain the nLockTime field of the transaction, and it needs nIn to obtain the nSequence of the txin. We need to fix this if CHECKLOCKTIMEVERIFY is to be merged.

Secondly, that this change was made, and the manner in which it was made, is I think indicative of a development process that has been taking significant risks with regard to refactoring the consensus-critical codebase. I know I personally have had a hard time keeping up with the very large volume of code being moved and changed for the v0.10 release, and I know BtcDrak - who is keeping Viacoin up to date with v0.10 - has also had a hard time giving the changes reasonable review. The #4890 pull-req in question had no ACKs at all, and only two untested utACKs, which I find worrying for something that made significant consensus-critical code changes.

While it would be nice to have a library encapsulating the consensus code, this shouldn't come at the cost of safety, especially when the actual users of that library and their needs are still uncertain. This is, after all, a multi-billion-dollar project where a simple fork will cost miners alone tens of thousands of dollars an hour; easily much more if it results in users being defrauded. That's also not taking into account the significant negative PR impact and loss of trust. I personally would recommend *not* upgrading to v0.10 due to these issues.
A much safer approach would be to keep the code changes required for a consensus library to only simple movements of code for this release, accept that the interface to that library won't be ideal, and wait until we have feedback from multiple open-source projects with publicly evaluatable code on where to go next with the API.

1) https://github.com/bitcoin/bips/blob/master/bip-0065.mediawiki
2) https://github.com/bitcoin/bitcoin/pull/4890

-- 
'peter'[:-1]@petertodd.org
1b18a596ecadd07c0e49620fb71b16f9e41131df9fc52fa6
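The shape of the conflict, and one possible fix (exposing the needed transaction fields through the checker object rather than re-adding txTo/nIn to EvalScript()), can be sketched in Python. The real code is C++, and apart from the names SignatureChecker, nLockTime, and nSequence taken from the email above, every class and method name here is hypothetical; the lock-time check is also simplified (the real BIP65 rules distinguish height-based from time-based locks).

```python
from dataclasses import dataclass, field

@dataclass
class TxIn:
    n_sequence: int = 0xFFFFFFFF

@dataclass
class Tx:
    n_lock_time: int = 0
    vin: list = field(default_factory=list)

class SignatureChecker:
    # Abstract interface in the spirit of PR #4890: EvalScript() sees only
    # this object, never the transaction itself.
    def check_lock_time(self, required: int) -> bool:
        return False  # a checker without transaction context can't satisfy CLTV

class TxSignatureChecker(SignatureChecker):
    # A checker that carries txTo/nIn internally, so a CHECKLOCKTIMEVERIFY
    # opcode can reach nLockTime and nSequence without changing the
    # EvalScript() signature back.
    def __init__(self, tx: Tx, n_in: int):
        self.tx, self.n_in = tx, n_in

    def check_lock_time(self, required: int) -> bool:
        if self.tx.n_lock_time < required:
            return False
        # A final nSequence disables nLockTime, so CLTV must reject it.
        return self.tx.vin[self.n_in].n_sequence != 0xFFFFFFFF
```

The design point is that the abstraction need not be reverted: it only has to grow one method for the new opcode to become mergeable again.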
Re: [Bitcoin-development] Merged mining a side chain with proof of burn on parent chain
Peter,

Thanks for the research. I am glad that the idea that proof-of-burn can "transfer" proof-of-work was discussed earlier, as those discussions give some attack vectors that I can re-evaluate in a new context, that is, side chains.

I think that the lottery component I suggested makes it much more resilient to an "outspend" attack, since the attacker not only needs to outspend but also to win the lottery for a reorg. This raises the cost of the attack by magnitudes above the regular miner burn cost. In addition, I suggest the burn transaction include the Bitcoin block height, thereby disabling re-use of a burn for a later reorg.

Tamas Blummer
Bits of Proof

On Dec 15, 2014, at 1:39 PM, Peter Todd p...@petertodd.org wrote:
> It's not a new idea actually - I outlined a system I eventually called
> zookeyv¹ based on the notion of sacrificing Bitcoins to achieve consensus
> a year and a half ago on #bitcoin-wizards.
> [...]
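The height-commitment idea Tamas describes can be sketched minimally in Python. The payload layout, the "BURN" tag, and the validity window are all invented here for illustration; the point is only that binding a burn to a parent-chain height makes a stale burn unusable for funding a later reorg.

```python
import struct

BURN_TAG = b"BURN"  # hypothetical marker for an OP_RETURN-style burn output

def burn_commitment(side_block_hash: bytes, btc_height: int) -> bytes:
    # Commit to the Bitcoin block height at burn time.
    return BURN_TAG + struct.pack("<I", btc_height) + side_block_hash

def burn_valid_at(payload: bytes, current_height: int, window: int = 6) -> bool:
    # Reject burns committed too far in the past (or in the future): an old
    # burn cannot be replayed later to power a reorg.
    if payload[:4] != BURN_TAG:
        return False
    (committed,) = struct.unpack("<I", payload[4:8])
    return 0 <= current_height - committed <= window
```

A window of 6 blocks is an arbitrary illustrative choice; the security argument only needs the window to be small relative to the reorg depths being defended against.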
Re: [Bitcoin-development] Deanonymisation of clients in Bitcoin P2P network paper
> [...] I'm confused by this, I run quite a few nodes exclusively on tor
> and chart their connectivity and have seen no such connection dropping
> behaviour. In my experience the problem has always been getting
> bootstrapped. Most nodes hardly give any hidden service nodes in their
> getaddr. (This has been improved in master by including a set of hidden
> service seed nodes.) But this assumes -onlynet=tor. Tor with exit nodes
> should be less problematic, unless someone managed to DoSban all the
> exit nodes as described in the paper (but I've never seen such an attack
> myself).

When referring to getting bootstrapped, do you only mean collecting node addresses, or also syncing blocks? If you're saying the drops in connection counts are likely not caused by a DoSban attack on the exit nodes, what could be other reasons to look into? Can you tell me more about how you measured this?

> [As an aside, I agree that there are lots of things to improve here, but
> the fact that users can in theory be forced off of tor via DoS attacks
> is not immediately concerning to me, because it is a conscious choice
> users would make to abandon their privacy (and the behaviour of the
> system here is known and intentional). There are other mechanisms
> available for people to relay their transactions than connecting
> directly to the bitcoin network; so their choice isn't just abandon
> privacy or don't use bitcoin at all.]
>
> Right, there's something to be said for splitting your own transaction
> submission from normal P2P networking and transaction relay. (Especially
> for non-SPV wallets, which don't inherently leak any information about
> their addresses.) There was a pull request about this for Bitcoin Core
> once, maybe I closed it unfairly:
> https://github.com/bitcoin/bitcoin/issues/4564

Great! I find it very interesting to research options for splitting communication between Tor and non-Tor, as long as the introduced information leakage between both connection methods can be proved to be nonexistent or negligible.
In the case of Bitcoin, this makes me wonder about an attack that could look approximately like this:

* Node A connects to Bitcoin using Tor (for submitting transactions) and IPv4 (for all other communication).
* Node B strives for direct IPv4 connections with node A.
* Node B uses the direct IPv4 connections to supply node A with additional peer addresses not supplied to any other nodes.
* Node B observes these additional peer addresses.

For transactions submitted to the additional peer addresses supplied by node B, how do we prevent the probability of these originating from node A from being higher than for other nodes?

For handling block propagation over non-Tor connections, it's probably harder to create information leaks, as the logic for handling disagreement about blocks is pretty well researched, meaning that it matters less where the blocks come from.

Best regards,
Isidor
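The question above can be made concrete with a toy Monte-Carlo sketch. All parameters and the uniform peer-selection assumption are invented for illustration: if node B's tagged addresses form a noticeable fraction of node A's address book, a transaction first seen at a tagged address is statistical evidence it originated at A.

```python
import random

def tagged_first_seen_rate(n_public=100, n_tagged=5, trials=2000, seed=7):
    # Fraction of node A's submissions that land first on an address only
    # A was given -- each such event links the transaction to A.
    rng = random.Random(seed)
    hits = sum(
        1 for _ in range(trials)
        if rng.randrange(n_public + n_tagged) >= n_public  # a tagged peer
    )
    return hits / trials
```

Even this crude model shows the attacker gains a statistical signal rather than certainty; a countermeasure would need to decouple address-book contents from the transaction submission path, which is exactly what routing submission over a separate Tor circuit attempts.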
Re: [Bitcoin-development] Merged mining a side chain with proof of burn on parent chain
The goal is to have an opportunity cost to breaking the rules, and proof-of-burn makes following the rules a real cost. For any participant trying to judge whether that cost is adequate, there is no direct difference between proof-of-burn and approaches that keep the Bitcoins inside the set of outputs that can still participate. So any notion which differentiates between the two must make some assumption about what defines the interest of the Bitcoin nodes as a community.

Best regards,
Isidor
Re: [Bitcoin-development] Recent EvalScript() changes mean CHECKLOCKTIMEVERIFY can't be merged
This is a pretty good example of refactoring discipline as well as premature/over-optimisation. We all want to see more modular code, but the first steps should just be to relocate blocks of code so everything is more logically organised in smaller files (especially for consensus-critical code). Refactoring should come in a second wave, preferably after a stable release, and should be refactoring in the pure sense: optimising code with absolutely no change in behaviour.

When it comes to actual API changes, I think we need to be a lot more careful; they should be treated as feature requests and get a lot more scrutiny, as we are essentially breaking backwards compatibility. #4890 was pretty much merged with no discussion or thought, yet other really simple and uncontroversial PRs remain unmerged for months. A key question in the case of EvalScript() would have been: why are we passing txTo and nIn here, and are there any future use cases that might require them? Why should this be removed from the API and the entire method signature changed? BC breaks always need strong justification.

So I've expressed my concern a few times about the speed and frequency of refactoring and also the way it's being done. I am not alone, as others not directly connected with the Bitcoin Core project have also expressed concerns about the number of refactorings for the sake of refactoring, especially of consensus-critical code. Careful as we may be, we know from history that small edge-case bugs can creep in very easily and cause a lot of unforeseen problems.

BtcDrak

On Mon, Dec 15, 2014 at 12:47 PM, Peter Todd p...@petertodd.org wrote:
> BtcDrak was working on rebasing my CHECKLOCKTIMEVERIFY¹ patch to master
> a few days ago and found a fairly large design change that makes merging
> it currently impossible.
> [...]

-- 
'peter'[:-1]@petertodd.org
1b18a596ecadd07c0e49620fb71b16f9e41131df9fc52fa6
Re: [Bitcoin-development] Recent EvalScript() changes mean CHECKLOCKTIMEVERIFY can't be merged
On Mon, Dec 15, 2014 at 9:57 AM, Btc Drak btcd...@gmail.com wrote:
> We all want to see more modular code, but the first steps should just be
> to relocate blocks of code so everything is more logically organised in
> smaller files (especially for consensus critical code). Refactoring
> should come in a second wave preferably after a stable release.

This is my opinion as well.

In the Linux kernel, we were often faced with a situation where you have a One Big File driver with 1MB of source code. The first step was -always- raw code movement, a brain-dead breaking up of code into logical source code files. Refactoring of data structures comes after that. While not always money-critical, these drivers Had To Keep Working. We had several situations where we had active users, but zero hardware access for debugging, and zero access to the vendor knowledge (hardware documentation, engineers). Failure was not an option. ;p

Performing the dumb Break Up Files step first means that future, more invasive data structure changes are easier to review, logically segregated, and not obscured by further code movement changes down the line.

In code such as Bitcoin Core, it is important to think about the _patch stream_ and how to optimize for reviewer bandwidth. The current stream of refactoring is really a turn-off in terms of reviewing, sapping reviewer bandwidth by IMO being reviewer-unfriendly. It is a seemingly never-ending series of tiny refactor-and-then-stuff-in-a-class-and-make-it-pretty-and-do-all-the-work changes.

Some change is in order, gentlemen.

-- 
Jeff Garzik
Bitcoin core developer and open source evangelist
BitPay, Inc. https://bitpay.com/
Re: [Bitcoin-development] Recent EvalScript() changes mean CHECKLOCKTIMEVERIFY can't be merged
On Mon, Dec 15, 2014 at 12:47 PM, Peter Todd p...@petertodd.org wrote:
> [snip]
> Pull-req #4890², specifically commit c7829ea7, changed the

This change was authored more than three months ago and merged more than two months ago. [And also, AFAICT, prior to you authoring BIP65.]

I didn't participate in that pull-req, though I saw it... it had five other contributors working on it, and I try to have minimal opinions on code organization and formatting. But the idea sounded (and still sounds) reasonable to me. Of course, anything could still be backed out if it turned out to be ill-advised (even post-0.10, though I think now we've had months of testing with this code in place and removing it may be more risky)... but your comments here are really not timely. Everyone has limited resources, which is understandable, but the concerns you raise here are ones that didn't require looking at the code, and would have been better, process-wise, raised earlier.

> We need to fix this if CHECKLOCKTIMEVERIFY is to be merged.

I don't see why you conclude this. Rather than violating the layering by re-parsing the transaction at the lower level, just make the additional information that is needed available. Yes, it does mean that rebasing an altcoin that made modifications here will take more effort and understanding of the code than a purely mechanical change.

> Secondly, that this change was made, and the manner in which is was
> made, is I think indicative of a development process that has been
> taking significant risks with regard to refactoring the consensus
> critical codebase.

I don't agree. The character of this change is fairly narrow. We have moderately good test coverage here, and there were five participants on the PR.

> While it would be nice to have a library encapsulating the consensus
> code, this shouldn't come at the cost of safety, especially when the
> actual users of that

This is all true, but it doesn't follow from it that any particular change was especially risky.
Beyond the general "things were changed in a way that made rebasing an altcoin take more work", do you have a specific concern here? Other than traveling back in time three months and doing something differently, do you have any suggestions to ameliorate that concern? E.g. are there additional tests we don't already have that you think would increase your confidence with respect to specific safety concerns?

> A much safer approach would be to keep the code changes required for a
> consensus library to only simple movements of code for this release,
> accept that the interface to that library won't be ideal, and wait until
> we have feedback from multiple opensource projects with publicly
> evaluatable code on where to go next with the API.

There won't be any public users of the library until there can actually _be_ a library. PR #4890's primary objective was disentangling the script validation from the node state introduced by the signature-caching changes a couple of years ago, making it possible to build the consensus components without application-specific threading logic... and it makes it possible to have a plain script-evaluator call without having to replicate all of bitcoind's threading, signature cache, etc. logic. Without a change like this you can't invoke the script engine without having a much larger chunk of bitcoind running.

0.10 is a major release, not a maintenance release. It's specifically in major releases that we make changes which are not purely code motion and narrow bugfixes (though many of the changes in 0.10 were nicely factored to separate pure code-motion changes from behavioral changes). There are many very important, even critical, behavioural changes in 0.10. That these changes have their own risks is part of why they aren't in 0.9.x.
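The point above about a plain script-evaluator call can be made concrete: a script evaluator can be a pure function of the script and its inputs, needing no threading, signature cache, or node state. Here is a deliberately tiny sketch in Python (the real engine is C++): the three opcode byte values are the real Bitcoin ones, but everything else, including the function shape, is a toy invented for illustration.

```python
# Real Bitcoin opcode values for the three ops this toy supports.
OP_DUP, OP_EQUAL, OP_VERIFY = 0x76, 0x87, 0x69

def eval_script(stack, script: bytes) -> bool:
    # A pure function of its inputs -- exactly the property that
    # decoupling EvalScript() from node state works toward.
    i = 0
    while i < len(script):
        op = script[i]; i += 1
        if 1 <= op <= 75:           # direct push of `op` bytes
            stack.append(script[i:i + op]); i += op
        elif op == OP_DUP:
            stack.append(stack[-1])
        elif op == OP_EQUAL:
            b, a = stack.pop(), stack.pop()
            stack.append(b"\x01" if a == b else b"")
        elif op == OP_VERIFY:
            if not stack.pop():
                return False
        else:
            return False            # everything else unimplemented here
    return bool(stack) and bool(stack[-1])
```

Because nothing here touches global state, such a function can live in a self-contained library and be called by any application without dragging in the rest of the node, which is the stated motivation for the refactor.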
Re: [Bitcoin-development] Recent EvalScript() changes mean CHECKLOCKTIMEVERIFY can't be merged
> While it would be nice to have a library encapsulating the consensus
> code, this shouldn't come at the cost of safety, especially when the
> actual users of that library or their needs is still uncertain.

While I agree that it shouldn't come at unreasonable risk, my whole reason for prioritizing the consensus library is that it is the first step toward the goal of isolating the consensus code completely. As soon as it exists in a repository by itself, it is easier to enforce a different regime of change control there, or even freeze it completely over time. To keep track of consensus changes one would only have to follow that repository, instead of filtering them out from tons of GUI, RPC or utility commits.

IMO, having the consensus code isolated into a portable, self-contained library is the most important goal of the Bitcoin Core project at this point. I've tried to keep the amount of unnecessary refactoring down, but some is unfortunately unavoidable.

I'm sure we can find a way to rebase CHECKLOCKTIMEVERIFY so that it can land in 0.11.

Wladimir
Re: [Bitcoin-development] Recent EvalScript() changes mean CHECKLOCKTIMEVERIFY can't be merged
On Mon, Dec 15, 2014 at 10:20 AM, Jeff Garzik jgar...@bitpay.com wrote:
> This is my opinion as well.
>
> In the Linux kernel, we often were faced with a situation where you have
> a One Big File driver with 1MB of source code. The first step was
> -always- raw code movement, a brain-dead breaking up of code into
> logical source code files. Refactoring of data structures comes after
> that.
> [...]

That's exactly what happened during the modularization process, with the exception that the code movement and refactors happened in parallel rather than in series. But they _were_ done in separate logical chunks for the sake of easier review.
The commit tag MOVEONLY developed organically out of this process, and a grep of the 0.10 branch for MOVEONLY is a testament to exactly how much code moved 1:1 out of huge files and into logically separated places and/or new files. Perhaps it's worth making MOVEONLY (which, as the name implies, means that code has been copied 1:1 to a new location) an official dev guideline for use in future refactors.

Cory
Re: [Bitcoin-development] Recent EvalScript() changes mean CHECKLOCKTIMEVERIFY can't be merged
On Mon, Dec 15, 2014 at 7:47 AM, Peter Todd p...@petertodd.org wrote: BtcDrak was working on rebasing my CHECKLOCKTIMEVERIFY¹ patch to master a few days ago and found a fairly large design change that makes merging it currently impossible. Pull-req #4890², specifically commit c7829ea7, changed the EvalScript() function to take an abstract SignatureChecker object, removing the txTo and nIn arguments that used to contain the transaction the script was in and the txin # respectively. CHECKLOCKTIMEVERIFY needs txTo to obtain the nLockTime field of the transaction, and it needs nIn to obtain the nSequence of the txin. We need to fix this if CHECKLOCKTIMEVERIFY is to be merged. Secondly, that this change was made, and the manner in which is was made, is I think indicative of a development process that has been taking significant risks with regard to refactoring the consensus critical codebase. I know I personally have had a hard time keeping up with the very large volume of code being moved and changed for the v0.10 release, and I know BtcDrak - who is keeping Viacoin up to date with v0.10 - has also had a hard time giving the changes reasonable review. The #4890 pull-req in question had no ACKs at all, and only two untested utACKS, which I find worrying for something that made significant consensus critical code changes. While it would be nice to have a library encapsulating the consensus code, this shouldn't come at the cost of safety, especially when the actual users of that library or their needs is still uncertain. This is after all a multi-billion project where a simple fork will cost miners alone tens of thousands of dollars an hour; easily much more if it results in users being defrauded. That's also not taking into account the significant negative PR impact and loss of trust. I personally would recommend *not* upgrading to v0.10 due to these issues. 
A much safer approach would be to keep the code changes required for a consensus library to only simple movements of code for this release, accept that the interface to that library won't be ideal, and wait until we have feedback from multiple open-source projects with publicly evaluatable code on where to go next with the API.

1) https://github.com/bitcoin/bips/blob/master/bip-0065.mediawiki
2) https://github.com/bitcoin/bitcoin/pull/4890

--
'peter'[:-1]@petertodd.org
1b18a596ecadd07c0e49620fb71b16f9e41131df9fc52fa6

It would appear as though you're trying to drum up controversy here, but the argument is quite a stretch, and contrary to some other arguments you're making in parallel. There seem to be three themes in your above complaint, so I'd like to address them individually.

1. PR #4890 specifically. The argument seems to be that this was not properly reviewed/tested, and that it is an unnecessary risk to the consensus codebase. Looking at the PR on GitHub, while I certainly don't agree with those conclusions, I suppose I can understand where they're coming from. There's plenty of context missing, as well as sidebar discussions on IRC and in other PRs. To an outside observer, these changes may look under-tested and unnecessary. The context that's missing is the flurry of work that was going on in parallel to modularize this (and surrounding) code. #4890 was one of the first pieces necessary for that, so some of the discussion about it was happening in dependent pull requests. You can point to a lack of ACKs in one place for that PR, but that doesn't mean that the changes weren't tested/reviewed/necessary. You could also argue that ACKs should've been mirrored on the PR in question for posterity, which would be a perfectly reasonable argument that I would agree with.

2. These changes conflict with a rebased version of your CHECKLOCKTIMEVERIFY changes. OK? You have a tree that's a few months old, and you find that you have conflicts when rebasing to master.
It happens to all of us. Do as the rest of us do and update your changes to fit. If you missed the review of #4890 and think it should be reverted, then call for a revert. But please give a concrete reason other than "I've picked this commit series for a crusade because it gave me merge conflicts." What is the conspiracy here? There's a signature cache that is implementation-specific, and in a parallel universe you might be arguing that we should rip it out because it adds unnecessary complexity to the consensus code. The PR provides a path around that complexity. For some reason, your reaction is to cry foul months later because you missed reviewing it at the time, rather than cheering for the reduced complexity.

3. You seem to think that 1. and 2. point to a systemic failure of the review process, because modularization shouldn't come at the cost of safety. I agree that it shouldn't come at the cost of safety, but I see no failure here. There has been a HUGE effort to modularize the code with a combination of
Re: [Bitcoin-development] Recent EvalScript() changes mean CHECKLOCKTIMEVERIFY can't be merged
If code movement is not compressed into a tight time window, code movement becomes a constant stream during development. A constant stream of code movement is a constant stream of patch breakage, for any patch or project not yet in-tree. The result is to increase the work and cost for any contributor whose patches are not immediately merged.

For the record, since this is trending on reddit, I __do__ support the end result of the 0.10 refactoring, the work towards the consensus lib. My criticism is of a merge flow which _unintentionally_ rewards only certain types of patches, and creates disincentives for working on other types of patches.

On Mon, Dec 15, 2014 at 4:19 PM, Cory Fields li...@coryfields.com wrote:

On Mon, Dec 15, 2014 at 2:35 PM, Jeff Garzik jgar...@bitpay.com wrote:

On Mon, Dec 15, 2014 at 1:42 PM, Cory Fields li...@coryfields.com wrote:

That's exactly what happened during the modularization process, with the exception that the code movement and refactors happened in parallel rather than in series. But they _were_ done in separate logical chunks for the sake of easier review.

That's exactly what was done, except it wasn't. Yes, in micro, at the pull request level, this happened:

* Code movement
* Refactor

At a macro level, that cycle was repeated many times, leading to the opposite end result: a lot of tiny movement/refactor/movement/refactor, producing the review and patch annoyances described. It produces a blizzard of new files and new data structures, breaking a bunch of out-of-tree patches and complicating review quite a bit. If the vast majority of code movement is up front, followed by algebraic simplifications, followed by data structure work, further patches are easy to review/apply with less impact on unrelated code.

I won't argue that at all because it's perfectly logical, but in practice that doesn't translate from the macro level to the micro level very well.
At the micro level, minor code changes are almost always needed to accommodate useful code movement. Even if they're not required, it's often hard to justify code movement for the sake of code movement with the promise that it will be useful later. Rather than arguing hypotheticals, let's use a real example: https://github.com/bitcoin/bitcoin/pull/5118 . That one's pretty simple. The point of the PR was to unchain our openssl wrapper so that key operations could be performed by the consensus lib without dragging in bitcoind's structures. The first commit severs the dependencies. The second commit does the code movement from the now-freed wrapper. I'm having a hard time coming up with a workflow that would handle these two changes as _separate_ events while making review easier.

Note that I'm not attempting to argue with you here; rather, I'm genuinely curious as to how you'd rather see this specific example (which is representative of most of my other code movement for the libbitcoinconsensus work, I believe) handled. Using your model above, I suppose we'd do the code movement first, with the dependencies still intact, as a pull request. At some later date, we'd sever the dependencies in the new files. I suppose you'd also prefer that I group a bunch of code-movement changes together into a single PR which needs little scrutiny, only verification that it's move-only. Once the code-movement PRs are merged, I can begin the cleanups which actually fix something.

In practice, though, that'd be a massive headache for different reasons. Lots in flux with seemingly no benefits until some later date. My PRs can't depend on each other because they don't actually fix issues in a linear fashion. That means that other devs can't depend on my PRs either, for the same reason. And what have we gained? Do you find that assessment unreasonable?

Cory

--
Jeff Garzik
Bitcoin core developer and open source evangelist
BitPay, Inc.
https://bitpay.com/