Re: [bitcoin-dev] Revising BIP 2 to expand editorial authority
Perhaps having authors consent to certain types of changes when they submit their BIP?

> On Sep 27, 2017, at 1:20 PM, Sjors Provoost via bitcoin-dev wrote:
>
>> On Sep 27, 2017, at 22:01, Bryan Bishop via bitcoin-dev wrote:
>>
>>> On Wed, Sep 27, 2017 at 1:56 PM, Luke Dashjr via bitcoin-dev wrote:
>>>
>>> What do people think about modifying BIP 2 to allow editors to merge these
>>> kinds of changes without involving the Authors? Strictly speaking, BIP 2
>>> shouldn't be changed now that it is Active, but for such a minor revision, I
>>> think an exception is reasonable.
>>
>> Even minor revisions must not change the meaning of the text. Changing a
>> single word can often have a strange impact on the meaning of the text.
>> Some amount of care should be exercised here. Maybe it would be acceptable
>> as long as edits are mentioned in the changelog at the bottom of each
>> document, or a note is added that the primary authors have not reviewed the
>> suggested changes; otherwise the reader might not know to check the
>> revision history to see what's going on.
>
> Perhaps it's enough to @mention authors in the PR and give them a week to
> object before merging?
>
> Sjors

___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
Re: [bitcoin-dev] Hypothetical 2 MB hardfork to follow BIP148
That would invalidate any pre-signed transactions that are currently out there. You can't just change the rules out from under people.

> On May 30, 2017, at 4:50 PM, James MacWhyte via bitcoin-dev wrote:
>
>> The 1MB classic block size prevents quadratic hashing
>> problems from being any worse than they are today.
>
> Add a transaction-size limit of, say, 10kB and the quadratic hashing problem
> is a non-issue. Donezo.

___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
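As an aside, the quadratic-hashing point above can be made concrete with a toy model (the numbers are illustrative, and the 148-byte input size is an assumed typical P2PKH input): under the legacy sighash scheme, each input re-hashes roughly the whole transaction, so total hashing work grows with the square of transaction size — which is why a per-transaction size cap bounds the worst case even if total block size grows.

```python
# Toy model of legacy (pre-segwit) sighash cost: each input re-hashes
# roughly the full transaction, so total work scales ~ size^2.
def sighash_bytes(tx_size: int, input_size: int = 148) -> int:
    """Approximate bytes hashed to verify all inputs of one transaction."""
    n_inputs = max(1, tx_size // input_size)  # assumed ~148 B per input
    return n_inputs * tx_size                 # each input hashes ~whole tx

# One 1 MB transaction vs. 100 transactions capped at 10 kB each:
one_big = sighash_bytes(1_000_000)
many_small = 100 * sighash_bytes(10_000)
print(one_big // many_small)  # the single big tx hashes ~100x more data
```

Under this (simplified) model, the same megabyte of transaction data costs about two orders of magnitude more hashing when packed into one giant transaction than when split into size-capped ones.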
[bitcoin-dev] Why Satoshi's temporary anti-spam measure isn't temporary
Forgot to include the list.

From: Jean-Paul Kogelman jeanpaulkogel...@me.com
Date: July 31, 2015 at 4:02:20 PM PDT
To: Jorge Timón jti...@jtimon.cc
Cc: mi...@bitcoins.info
Subject: Re: [bitcoin-dev] Why Satoshi's temporary anti-spam measure isn't temporary

I wrote about this earlier this month:
http://www.mail-archive.com/bitcoin-dev@lists.linuxfoundation.org/msg00383.html

You basically want 3 things:

- A minimum specification of hardware: the lowest hardware configuration Bitcoin Core will run on at maximum capacity.
- A theoretical model that takes into account all of the components in Bitcoin Core and how they affect min spec.
- A benchmark tool to measure how changes affect min spec (and for users to see how their hardware measures up to min spec).

jp

On Jul 31, 2015, at 02:31 PM, Jorge Timón via bitcoin-dev wrote:

On Fri, Jul 31, 2015 at 2:15 AM, Milly Bitcoin via bitcoin-dev wrote:

These are the types of things I have been discussing in relation to a process:

- A list of metrics.
- A risk analysis of the baseline system: Bitcoin as it is now.
- Mitigation strategies for each risk.
- A set of goals.
- A road map for each goal that lists the changes or possible avenues to achieve that goal.

Proposed changes would be measured against the same metrics, and a risk analysis done, so they can be compared with the baseline. For example, the block size debate would be discussed in the context of a road map related to the goal of increased scaling. One of the metrics would be a decentralization metric (a framework for a decentralization metric is at http://www.hks.harvard.edu/fs/pnorris/Acrobat/stm103%20articles/Schneider_Decentralization.pdf). Cost would be one aspect of the decentralization metric.

All this sounds very reasonable and useful. And if a formal organization owns this process, that's fine as well. I still think hardforks need to be uncontroversial (using the vague "I will know it when I see it" definition) and no individual or organization can be an ultimate decider, or otherwise Bitcoin loses all its p2p nature (and this seems to be the point where you, Milly, and I disagree). But metrics and data tend to help when it comes to "I will know it when I see it" and evidence. So, yes, by all means, let's have an imperfect decentralization metric rather than nothing to compare proposals against. Competing decentralization metrics can appear later: we need a first one first. I would add that we should have sets of simulations being used to calculate some of those metrics, but maybe I'm just going too deep into details.

___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
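For illustration, here is one possible (deliberately simple) decentralization metric — not the framework from the paper linked above, just a normalized Herfindahl index over mining-pool hash-rate shares. It is exactly the kind of imperfect-but-usable first metric the message argues for:

```python
# Illustrative decentralization metric (a toy example, not the cited
# framework): 1 minus the Herfindahl index over hash-rate shares.
# 1.0 = perfectly dispersed hash rate, 0.0 = one miner has everything.
def decentralization(shares: list[float]) -> float:
    total = sum(shares)
    hhi = sum((s / total) ** 2 for s in shares)  # Herfindahl index
    return 1.0 - hhi

print(decentralization([1, 1, 1, 1]))  # 0.75: four equal pools
print(decentralization([100]))         # 0.0: complete centralization
```

A proposal could then be scored by simulating its effect on hash-rate (or node) distribution and comparing this number before and after, giving the "metrics and data" baseline the thread asks for.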
Re: [bitcoin-dev] Bitcoin Core and hard forks
Miners could include their fee tiers in the coinbase, but this is obviously open to manipulation, with little recourse (unless they are a pool and miners move away because of it).

In any event, I think that trying out a solution that is both simple and involves the least number of parties necessary is preferable. Have miners set their tiers, have users select the level of quality they want, ignore the block size. Miners will adapt their tiers depending on how many transactions actually end up in them. If, for example, they set the first tier at $1 to be included in the current block and no user chooses that level of service, they've obviously priced themselves out of the market. The opposite is also true; if a tier is popular, they can choose to increase the cost of that tier.

jp

On Jul 24, 2015, at 9:28 AM, Eric Lombrozo elombr...@gmail.com wrote:

I suppose if you use a timelocked output that is spendable by anyone, you could go somewhat in this direction… the thing is, it still means the wallet must make fee estimations rather than being able to get a quick quote.

On Jul 23, 2015, at 6:25 PM, Jean-Paul Kogelman jeanpaulkogel...@me.com wrote:

I think implicit QoS is far simpler to implement, requires fewer parties, and is closer to what Bitcoin started out as: a peer-to-peer digital cash system, not a peer-to-let-me-handle-that-for-you-to-peer system.

jp

On Jul 24, 2015, at 9:08 AM, Eric Lombrozo elombr...@gmail.com wrote:

By using third parties, separate from individual miners, that do bidding on your behalf, you get a mechanism that allows QoS guarantees while shifting the complexity and risk from the wallet, with its little computational resources, to a service with an abundance of them. Using timelocked contracts it's possible to enforce the guarantees. Negotiating directly with miners via smart contracts seems difficult at best.

On Jul 23, 2015, at 6:03 PM, Jean-Paul Kogelman via bitcoin-dev wrote:

Doesn't matter. It's not going to be perfect given the block time variance, among other factors, but it's far more workable than guessing whether or not your transaction is going to end up in a block at all.

jp

On Jul 24, 2015, at 8:53 AM, Peter Todd p...@petertodd.org wrote:

On 23 July 2015 20:49:20 GMT-04:00, Jean-Paul Kogelman via bitcoin-dev wrote:

And it's obvious how a size cap would interfere with such a QoS scheme. Miners wouldn't be able to deliver the below guarantees if they have to start excluding transactions.

As mining is a random, Poisson process, obviously giving guarantees without a majority of hashing power isn't possible.

___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
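A minimal sketch of the tier scheme proposed above, with entirely invented numbers: a miner advertises fee tiers, a wallet picks the cheapest advertised tier that meets its confirmation deadline, and unpopular tiers would get repriced over time.

```python
# Sketch of miner-advertised fee tiers (all numbers hypothetical):
# (fee in satoshis/byte, target: included within N blocks).
TIERS = [
    (50, 1),   # premium: next block
    (20, 3),   # standard
    (5, 12),   # economy: ~2 hours
]

def pick_tier(max_blocks: int) -> tuple[int, int]:
    """Cheapest advertised tier whose target meets the deadline."""
    ok = [t for t in TIERS if t[1] <= max_blocks]
    if not ok:
        raise ValueError("no advertised tier meets the requested deadline")
    return min(ok, key=lambda t: t[0])

print(pick_tier(6))  # (20, 3): standard is the cheapest within 6 blocks
```

The wallet-side logic stays trivial — no fee estimation, just a lookup against the quoted tiers — which is the "quick quote" property the thread contrasts with estimation-based fees.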
Re: [bitcoin-dev] Bitcoin Core and hard forks
Doesn't matter. It's not going to be perfect given the block time variance, among other factors, but it's far more workable than guessing whether or not your transaction is going to end up in a block at all.

jp

On Jul 24, 2015, at 8:53 AM, Peter Todd p...@petertodd.org wrote:

On 23 July 2015 20:49:20 GMT-04:00, Jean-Paul Kogelman via bitcoin-dev wrote:

And it's obvious how a size cap would interfere with such a QoS scheme. Miners wouldn't be able to deliver the below guarantees if they have to start excluding transactions.

As mining is a random, Poisson process, obviously giving guarantees without a majority of hashing power isn't possible.

___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
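Peter Todd's Poisson point can be illustrated with a quick back-of-the-envelope calculation (a toy model, not anything from the thread): a miner with hash-rate share p wins each block independently with probability roughly p, so no minority miner can promise inclusion by a given block.

```python
# Chance that a miner with hash-rate share `share` mines at least one
# of the next `n_blocks` blocks (treating block discovery as a
# memoryless Poisson process, per-block win probability = share).
def p_at_least_one(share: float, n_blocks: int) -> float:
    return 1.0 - (1.0 - share) ** n_blocks

# A 10% pool cannot promise next-block inclusion...
print(round(p_at_least_one(0.10, 1), 2))   # 0.1
# ...and even over 6 blocks (~1 hour) it misses ~47% of the time.
print(round(p_at_least_one(0.10, 6), 2))   # 0.47
```

Only at a majority of hashing power (or via cooperation among miners) does the miss probability shrink fast enough to back anything resembling a guarantee, which is exactly the objection raised above.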
Re: [bitcoin-dev] Defining a min spec
I get it. :) Being able to run Bitcoin Core on open hardware is a noble (and important) goal! I hope that once we've figured out what the current requirements are, we can adjust them (if needed) to include certain open hardware platforms. But that's the next step. Bitcoin Core is a project in flight; let's first see where we're at.

What are the critical wall-time requirements? As discussed earlier, block propagation times are very important to keep orphan rates low. This sounds like one of the inputs that can be used to model bandwidth and CPU requirements. Other inputs could be as simple as the minimum number of connected nodes (a multiplier on outbound bandwidth, but not CPU), etc. It wouldn't surprise me if many of the real-world requirements center around Bitcoin Core's ability to talk to the outside world.

jp

On Jul 2, 2015, at 10:33 PM, Jeremy Rubin jeremy.l.rubin.tra...@gmail.com wrote:

Jean-Paul, I think you're missing what I'm saying -- the point of my suggestion to make Rocket a min spec is more along the lines of saying that the Rocket serves as a fixed point: Bitcoin Core performance must be acceptable on that platform, though it can be lower elsewhere. Yes, there are conversion factors, and different architectures will perform differently. However, there still must be some baseline, a point at which we can say processors below it are no longer supported. I am saying that line should never be set so high as to exclude presently available open hardware. Ultimately, this ends up making an odd, but nice, goal for Bitcoin development: if Bitcoin Core needs more MIPS, the community must ensure the availability of open hardware that it can run on.

Jeff, Moxie looks fantastic! The reason I thought RISC-V was a good selection is the very active development community, which is pushing the performance of the ISA implementations forward. Can you speak to the health of Moxie development? Ultimately, ensuring support for many open architectures would be preferable. Are there other reasonable open-source processors that you are aware of? I would be willing to work on the design of a Bitcoin-specific open-hardware processor, up to the FPGA bound, if this would be useful for this goal.

On Fri, Jul 3, 2015 at 12:19 PM, Jean-Paul Kogelman jeanpaulkogel...@me.com wrote:

Ideally, the metrics that we settle on would be architecture agnostic, with some sort of conversion factor to map them onto any specific architecture. An Intel-based architecture is going to perform vastly differently from an ARM-based one, for example. Simple example: the PS3 PPE and Xbox 360 CPU are RISC processors that run at 3.2 GHz, but their non-vector performance is rather poor. You'd be lucky to get about 33% effective utilization out of them (up to 50%, tops, but that's really pushing it), so if you were to map this onto another architecture, you'd have at least a 3x conversion factor from this end alone (the other end could also have a scaling factor).

Ultimately, how these values are expressed isn't the important part; it's the ability to measure the impact of a change that's important. If some metric changes by, say, 5%, then it doesn't really matter whether it's expressed in MIPS, INTOPS, MB or GB. The fact that it changed is what matters, and what the effect is on the baseline (which could ultimately be expressed as a specific hardware configuration). It would probably be practical to have a number of comparable concrete min spec configurations, and even more ideal would be if people in the community had these systems up and running to do actual on-target performance benchmarks.

jp

On Jul 2, 2015, at 8:13 PM, Jeremy Rubin jeremy.l.rubin.tra...@gmail.com wrote:

Might I suggest that the min spec, if developed, target the RISC-V Rocket architecture (running on FPGA, I suppose) as a reference point for performance? This may be much lower performance than desirable; however, it means that we don't lock people into using large-vendor chipsets which have unknown, or known to be bad, security properties, such as Intel AMT. In general, targeting open hardware seems to me to be more critical than performance metrics for the long-term health of Bitcoin; however, performance is still important. Does anyone know how the RISC-V FPGA performance stacks up to, say, a Raspberry Pi?

On Thu, Jul 2, 2015 at 10:52 PM, Owen Gunden ogun...@phauna.org wrote:

I'm also a user who runs a full node, and I also like this idea. I think Gavin has done some back-of-the-envelope calculations around this stuff, but nothing so clearly defined as what you propose.

On 07/02/2015 08:33 AM, Mistr Bigs wrote:

I'm an end user running a full node on an aging laptop. I think this is a great suggestion! I'd love to know what system requirements are needed for running Bitcoin Core.
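The conversion-factor idea discussed above can be sketched as follows (all numbers invented for illustration; the 33% utilization figure echoes the PS3/Xbox example): express the CPU budget in architecture-agnostic units, then map it to a concrete chip through an effective-utilization factor.

```python
# Toy sketch of architecture conversion factors (all values invented):
# an abstract CPU budget, mapped to real chips via effective utilization.
ABSTRACT_CPU_BUDGET = 1000.0   # hypothetical architecture-agnostic units

UTILIZATION = {                # fraction of peak a chip actually achieves
    "x86_desktop": 0.80,
    "arm_soc": 0.60,
    "ps3_ppe": 0.33,           # the ~33% figure from the example above
}

def required_peak(arch: str) -> float:
    """Peak units a given architecture must provide to meet the budget."""
    return ABSTRACT_CPU_BUDGET / UTILIZATION[arch]

print(round(required_peak("ps3_ppe")))  # ~3030: the ~3x conversion factor
```

The point of the sketch is the one made in the email: the absolute units don't matter, only that a change to the budget (or to a utilization factor) is measurable against the same baseline.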
Re: [bitcoin-dev] Defining a min spec
In the case of Bitcoin Core, for a starting point, you basically have to work backwards from what we have right now. We know some of the bounds already. Block size already tells you a lot about your bandwidth requirements, and Pieter's simulations give you even more information when you take orphan rates into account. There's also a hard cap on the number of SigOps, if I recall correctly, so that's probably a good starting point for a MIPS metric, etc.

Memory is probably the hardest to pin down, since some memory structures, like (from what I understand, correct me if I'm wrong) the UTXO database, live fully in memory and are basically unbounded. Perhaps this can somehow be capped at a certain size, moving all the really old UTXOs that are unlikely to move to disk, and just taking the CPU / disk hit in the unlikely event that they are referenced by a new block. Has the address database been capped yet? The mempool? I realize that it's probably debatable whether or not this behaviour should be independent of available memory, since any bugs here could affect consensus (especially the UTXO db).

Ultimately, what comes out of it is a list of numbers: A Mbit of network I/O, B MIPS, C MB of memory, D GB of disk space, etc. At that point you can debate whether or not such a machine can be considered an entry-point Bitcoin full node. You round up the numbers that are not really available anymore in off-the-shelf hardware (like disk space) and you round down the numbers that seem too high.

For all we know, we're already over budget on some of the metrics that we decide to track as min spec -- network I/O, for example. At that point you can start focussed research into bringing Bitcoin Core back within budget on those metrics. The discussion then moves from "it's probably too high" to "we're X% over budget". The most valuable thing that could come out of this is some kind of formulation of how all the different levers in Bitcoin Core affect the min spec, and ideally a benchmark tool. For example, we could settle on a min spec that would exclude the Raspberry Pi 1 on MIPS, but when secp256k1 is enabled for validation, the MIPS requirement could drop significantly, allowing us to adjust the min spec downward to include the Raspberry Pi 1 again (again, just an example).

Ideally, some people would have the actual min spec machine built and running. The cost of that shouldn't be too high (it's the min spec, after all) and I'm sure people would be happy to chip in a couple of bits for this. Remember, the min spec should be able to handle Bitcoin Core running under full load; that's maxed-out blocks with maxed-out SigOps, etc.

jp

On Jul 2, 2015, at 12:18 AM, Henning Kopp henning.k...@uni-ulm.de wrote:

Hi Jean-Paul,

That's a very interesting point of view, and I have never thought about it this way, since I have a totally different background. How would you go about defining a min spec? Is this done by testing the software on different hardware configurations, or are you looking at the requirements a priori?

Best regards
Henning

On Wed, Jul 01, 2015 at 09:04:19PM -0700, Jean-Paul Kogelman wrote:

Hi folks,

I'm a game developer. I write time-critical code for a living and have to deal with memory, CPU, GPU and I/O budgets on a daily basis. These budgets are based on what we call a minimum specification (of hardware); min spec for short. In most cases the min spec is based on entry-model machines that are available at launch, and will give the user an enjoyable experience when playing our games. Obviously, we can turn on a number of bells and whistles for people with faster machines, but that's not the point of this mail. The point is: can we define a min spec for Bitcoin Core? The number one reason for this is: if you know how your changes affect your available budgets, then the risk of breaking something due to capacity problems is reduced to practically zero.

One way of doing so is to work backwards from what we have right now: block size (network / disk I/O), SigOps/block (CPU), UTXO size (memory), etc. Then there's Pieter's analysis of network bottlenecks and how they affect orphan rates, which could be used to set some form of cap on what transfer time + verification time should be to keep the orphan rate at an acceptable level.

So taking all of the above (and more) into account, what configuration would be the bare minimum to comfortably run Bitcoin Core at maximum load, and can it reasonably be expected to still be out there in the field running Bitcoin Core? Also, can the parameters that were used to determine this min spec be codified in some way, so that they can later be used if Bitcoin Core is optimized (or extended with new functionality) to see how it affects the min spec? Basically, with any reasonably big change, one of the first questions could be: "How does this change affect min spec?" For example, currently OpenSSL is used to verify the signatures in the transactions.
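The "we're X% over budget" framing above could look something like this in a benchmark tool (a hypothetical sketch; every budget number and metric name here is invented for illustration):

```python
# Hypothetical min-spec budget checker: compare measured resource use
# against per-metric ceilings and report percent over/under budget.
BUDGET = {                   # illustrative min-spec ceilings
    "bandwidth_mbit": 10.0,
    "cpu_mips": 2000.0,
    "memory_mb": 1024.0,
}

def budget_report(measured: dict[str, float]) -> dict[str, float]:
    """Percent over (+) or under (-) budget for each tracked metric."""
    return {k: round(100.0 * (measured[k] / BUDGET[k] - 1.0), 1)
            for k in BUDGET}

report = budget_report(
    {"bandwidth_mbit": 12.0, "cpu_mips": 1500.0, "memory_mb": 1024.0})
print(report)  # {'bandwidth_mbit': 20.0, 'cpu_mips': -25.0, 'memory_mb': 0.0}
```

Run before and after a proposed change, the two reports quantify exactly which levers the change moved and by how much, replacing "it's probably too high" with a number.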
Re: [bitcoin-dev] Defining a min spec
Ideally, the metrics that we settle on would be architecture agnostic, with some sort of conversion factor to map them onto any specific architecture. An Intel-based architecture is going to perform vastly differently from an ARM-based one, for example. Simple example: the PS3 PPE and Xbox 360 CPU are RISC processors that run at 3.2 GHz, but their non-vector performance is rather poor. You'd be lucky to get about 33% effective utilization out of them (up to 50%, tops, but that's really pushing it), so if you were to map this onto another architecture, you'd have at least a 3x conversion factor from this end alone (the other end could also have a scaling factor).

Ultimately, how these values are expressed isn't the important part; it's the ability to measure the impact of a change that's important. If some metric changes by, say, 5%, then it doesn't really matter whether it's expressed in MIPS, INTOPS, MB or GB. The fact that it changed is what matters, and what the effect is on the baseline (which could ultimately be expressed as a specific hardware configuration). It would probably be practical to have a number of comparable concrete min spec configurations, and even more ideal would be if people in the community had these systems up and running to do actual on-target performance benchmarks.

jp

On Jul 2, 2015, at 8:13 PM, Jeremy Rubin jeremy.l.rubin.tra...@gmail.com wrote:

Might I suggest that the min spec, if developed, target the RISC-V Rocket architecture (running on FPGA, I suppose) as a reference point for performance? This may be much lower performance than desirable; however, it means that we don't lock people into using large-vendor chipsets which have unknown, or known to be bad, security properties, such as Intel AMT. In general, targeting open hardware seems to me to be more critical than performance metrics for the long-term health of Bitcoin; however, performance is still important. Does anyone know how the RISC-V FPGA performance stacks up to, say, a Raspberry Pi?

On Thu, Jul 2, 2015 at 10:52 PM, Owen Gunden ogun...@phauna.org wrote:

I'm also a user who runs a full node, and I also like this idea. I think Gavin has done some back-of-the-envelope calculations around this stuff, but nothing so clearly defined as what you propose.

On 07/02/2015 08:33 AM, Mistr Bigs wrote:

I'm an end user running a full node on an aging laptop. I think this is a great suggestion! I'd love to know what system requirements are needed for running Bitcoin Core.

On Thu, Jul 2, 2015 at 6:04 AM, Jean-Paul Kogelman jeanpaulkogel...@me.com wrote:

I'm a game developer. I write time-critical code for a living and have to deal with memory, CPU, GPU and I/O budgets on a daily basis. These budgets are based on what we call a minimum specification (of hardware); min spec for short. In most cases the min spec is based on entry-model machines that are available at launch, and will give the user an enjoyable experience when playing our games. Obviously, we can turn on a number of bells and whistles for people with faster machines, but that's not the point of this mail. The point is: can we define a min spec for Bitcoin Core? The number one reason for this is: if you know how your changes affect your available budgets, then the risk of breaking something due to capacity problems is reduced to practically zero.
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
[bitcoin-dev] Defining a min spec
Hi folks,

I'm a game developer. I write time-critical code for a living and have to deal with memory, CPU, GPU and I/O budgets on a daily basis. These budgets are based on what we call a minimum specification (of hardware); min spec for short. In most cases the min spec is based on entry-model machines that are available at launch, and will give the user an enjoyable experience when playing our games. Obviously, we can turn on a number of bells and whistles for people with faster machines, but that's not the point of this mail.

The point is: can we define a min spec for Bitcoin Core? The number one reason for this is: if you know how your changes affect your available budgets, then the risk of breaking something due to capacity problems is reduced to practically zero.

One way of doing so is to work backwards from what we have right now: block size (network / disk I/O), SigOps/block (CPU), UTXO size (memory), etc. Then there's Pieter's analysis of network bottlenecks and how they affect orphan rates, which could be used to set some form of cap on what transfer time + verification time should be to keep the orphan rate at an acceptable level.

So taking all of the above (and more) into account, what configuration would be the bare minimum to comfortably run Bitcoin Core at maximum load, and can it reasonably be expected to still be out there in the field running Bitcoin Core? Also, can the parameters that were used to determine this min spec be codified in some way, so that they can later be used if Bitcoin Core is optimized (or extended with new functionality) to see how it affects the min spec? Basically, with any reasonably big change, one of the first questions could be: "How does this change affect min spec?"

For example, currently OpenSSL is used to verify the signatures in the transactions. The new secp256k1 implementation is several times faster than OpenSSL's implementation (depending on CPU architecture, I'm sure), so it would result in faster verification times. This could then result in the following: either network I/O and CPU requirements are adjusted downward in the min spec (you can run the new Bitcoin Core on a cheaper configuration), or other parameters can be adjusted upwards (number of SigOps per transaction, block size?), through a proper rollout obviously. Since we know how the min spec is affected by these changes, they should be non-controversial by default; nobody running min spec is going to be affected by them.

Every change that has a positive effect on min spec (doing more on the same hardware) basically pushes back the need to start following any of the curve laws (Nielsen, Moore). No need for miners / node operators to upgrade. Once we hit what we call SOL (Speed Of Light: the fastest something can go on a specific platform), it's time to start looking at periodically adjusting the min spec upwards, or by that time maybe it's possible to use conservative plots of the curve laws as a basis.

Lastly, a benchmark test could be developed that tells everyone running Bitcoin Core how their setup compares to the min spec and how long they can expect to run on that setup.

What do you guys think?

jp

___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
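A benchmark of the kind proposed here might, in its simplest form, measure raw hashing throughput (a stand-in for validation work) and compare it to a min-spec reference number. This is a toy sketch; the reference throughput is invented, and a real tool would measure signature verification, disk and network I/O as well.

```python
import hashlib
import time

# Toy "how does my machine compare to min spec" benchmark: SHA-256
# throughput vs. an invented min-spec reference figure.
MIN_SPEC_MB_PER_S = 50.0  # hypothetical reference throughput

def sha256_throughput(total_mb: int = 8) -> float:
    """Measure SHA-256 hashing throughput in MB/s."""
    chunk = b"\x00" * (1 << 20)  # 1 MB of data per hash
    start = time.perf_counter()
    for _ in range(total_mb):
        hashlib.sha256(chunk).digest()
    return total_mb / (time.perf_counter() - start)

score = sha256_throughput()
print(f"{score / MIN_SPEC_MB_PER_S:.1f}x min spec")
```

A user whose machine reports well below 1.0x would know their setup is under the min spec, answering the "how does my hardware measure up" question from earlier in the thread.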