On 06/02/2015 04:03 AM, Mike Hearn wrote:
(...)
>
> If you really believe that decentralisation is, itself, the end,
> then why not go use an "ASIC resistant" alt coin with no SPV or web
> wallets which resembles Bitcoin at the end of 2009? That'd b
On 06/02/2015 04:03 AM, Mike Hearn wrote:
>
> 1,000 *people* in control vs. 10 is two orders of
> magnitude more decentralized.
>
Yet Bitcoin has got worse by all these metrics: there was a time before
mining pools when there were ~thousands of people mining with their local
CPUs and GPUs. Now the number of full nodes that matter for block
On 06/01/2015 08:55 AM, Mike Hearn wrote:
>> Decentralization is the core of Bitcoin's security model and thus
>> that's what gives Bitcoin its value.
> No. Usage is what gives Bitcoin value.
Nonsense.
Visa, Dollar, Euro, Yuan, Peso have usage.
The value in Bitcoin is *despite* its far lesser usage
The overlapping consensus mechanisms between the Core Developers, the
miners, the block chain based businesses, and the end users make it such
that the very definition of Bitcoin is not just what any single one of
those groups comes to a consensus about. We must ALL be in consensus about
just what
>
> It's surprising to see a core dev going to the public to defend a proposal
> most other core devs disagree on, and then lobbying the Bitcoin ecosystem.
>
I agree that it is a waste of time. Many agree. The Bitcoin ecosystem
doesn't really need lobbying - my experience from talking to businesses
RE: going to the public:
I started pushing privately for SOMETHING, ANYTHING to be done, or at the
very least for there to be some coherent plan besides "wait and see" back
in February.
As for it being unhealthy for me to write the code that I think should be
written and asking people to run it:
Agree with everything you said. Spot on observations on all counts.
Thank you for speaking up.
Adam
On 1 June 2015 at 13:45, Jérôme Legoupil wrote:
>What do other people think?
>
>
>If we can't come to an agreement soon, then I'll ask for help
>reviewing/submitting patches to Mike's Bitcoin-Xt project that implement a
>big increase now that grows over time so we may never have to go through
>all this rancor and debate again."
>
>
>I'll then as
> Let the block size limit be a function of the number of current transactions
in the mempool.
There is no single mempool whose transactions could be counted, and there
is no consensus about the average number of unconfirmed transactions.
2015-06-01 13:30 GMT+02:00 Ricardo Filipe :
> I've been fo
I've been following the discussion of the block size limit and IMO it
is clear that any constant block size limit is, as many have said
before, just kicking the can down the road.
My problem with the dynamic lower limit solution based on past blocks
is that it doesn't account for usage spikes. I wo
> or achieving less than great DOS protection
Right now a bunch of redditors can DOS the network at the cost of a few
thousand dollars per day, shared between them. Since the cost of validating
transactions is far lower than current minimum relay fees, increasing
the block size increases the
First off, I am glad that the idea of dynamic block size adjustment is
gaining some attention, in particular the model that I proposed.
I wanted to take some time and explain some of the philosophy of how,
and why, I proposed this particular model.
When Bitcoin was first made, there was a 32
On Fri, May 29, 2015 at 5:39 AM, Gavin Andresen
wrote:
> What do other people think?
>
>
> If we can't come to an agreement soon, then I'll ask for help
> reviewing/submitting patches to Mike's Bitcoin-Xt project that implement a
> big increase now that grows over time so we may never have to go
What about trying the dynamic scaling method within the 20MB range + 1 year
with a 40% increase of that cap? Until a way to dynamically scale is
found, the cap will only continue to be an issue. With 20 MB + 40% yoy,
we're either imposing an arbitrary cap later, or achieving less than great
DOS p
> miners would definitely be squeezing out transactions / putting pressure
to increase transaction fees
I'd just like to re-iterate that transactions getting "squeezed out"
(failure after a lengthy period of uncertainty) is a radical change from
the current behavior of the network. There are plent
>
> And looking at the version (aka user-agent) strings of publicly reachable
> nodes on the network.
> (e.g. see the count at https://getaddr.bitnodes.io/nodes/ )
>
Yeah, though FYI Luke informed me last week that I somehow managed to take
out the change to the user-agent string in Bitcoin XT, p
On Fri, May 29, 2015 at 3:09 PM, Tier Nolan wrote:
>
>
> On Fri, May 29, 2015 at 1:39 PM, Gavin Andresen
> wrote:
>
>> But if there is still no consensus among developers but the "bigger
>> blocks now" movement is successful, I'll ask for help getting big miners to
>> do the same, and use the so
>
> The measure is miner consensus. How do you intend to measure
> exchange/merchant acceptance?
>
Asking them.
In fact, we already have. I have been talking to well known people and CEOs
in the Bitcoin community for some time now. *All* of them support bigger
blocks, this includes:
- Every
On Fri, May 29, 2015 at 10:09 AM, Tier Nolan wrote:
> How do you intend to measure exchange/merchant acceptance?
>
Public statements saying "we're running software that is ready for bigger
blocks."
And looking at the version (aka user-agent) strings of publicly reachable
nodes on the network.
How is this being pigheaded? In my opinion, this is leadership. If
*something* isn't implemented soon, the network is going to have some real
problems, right at the
time when adoption is starting to accelerate. I've been seeing nothing but
navel-gazing and circlejerks on this issue for weeks now.
On Fri, May 29, 2015 at 1:39 PM, Gavin Andresen
wrote:
> But if there is still no consensus among developers but the "bigger blocks
> now" movement is successful, I'll ask for help getting big miners to do the
> same, and use the soft-fork block version voting mechanism to (hopefully)
> get a maj
Are you really that pigheaded that you are going to try and blow up the
entire system just to get your way? A bunch of ignorant redditors do not
make consensus, mercifully.
On 2015-05-29 12:39, Gavin Andresen wrote:
> What do other people think?
>
> If we can't come to an agreement soon, then
What do other people think?
If we can't come to an agreement soon, then I'll ask for help
reviewing/submitting patches to Mike's Bitcoin-Xt project that implement a
big increase now that grows over time so we may never have to go through
all this rancor and debate again.
I'll then ask for help l
>
> If the plan is a fix once and for all, then that should be changed too.
> It could be set so that it is at least some multiple of the max block size
> allowed.
>
Well, but RAM is not infinite :-) Effectively what these caps are doing is
setting the minimum hardware requirements for running a B
On Fri, May 29, 2015 at 12:26 PM, Mike Hearn wrote:
> IMO it's not even clear there needs to be a size limit at all. Currently
> the 32mb message cap imposes one anyway
>
If the plan is a fix once and for all, then that should be changed too. It
could be set so that it is at least some multiple
>
> By the time a hard fork can happen, I expect average block size will be
> above 500K.
>
Yes, possibly.
> Would you support a rule that was "larger of 1MB or 2x average size" ?
> That is strictly better than the situation we're in today.
>
It is, but only by a trivial amount - hitting the li
Can we hold off on bike-shedding the particular choice of parameters until
people have a chance to weigh in on whether or not there is SOME set of
dynamic parameters they would support right now?
--
Gavin Andresen
My understanding, which is very likely wrong in one way or another, is
that transaction size and block size are two slightly different things, but
perhaps the difference is so negligible that block size is a fine stand-in
for total transaction throughput.
Potentially doubling the block size every day is frankly imprudent
On Thu, May 28, 2015 at 1:34 PM, Mike Hearn wrote:
> As noted, many miners just accept the defaults. With your proposed change
>> their target would effectively *drop* from 1mb to 800kb today, which
>> seems crazy. That's the exact opposite of what is needed right now.
>>
>
> I am very skeptical
On Thu, May 28, 2015 at 01:19:44PM -0400, Gavin Andresen wrote:
> As for whether there "should" be fee pressure now or not: I have no
> opinion, besides "we should make block propagation faster so there is no
> technical reason for miners to produce tiny blocks." I don't think us
> developers shoul
>
> Twenty is scary.
>
To whom? The only justification for the max size is DoS attacks, right?
Back when Bitcoin had an average block size of 10kb, the max block size was
100x the average. Things worked fine, nobody was scared.
The max block size is really a limit set by hardware capability, whic
> until we have size-independent new block propagation
I don't really believe that is possible. I'll argue why below. To be clear,
this is not an argument against increasing the block size, only against
using the assumption of size-independent propagation.
There are several significant improvemen
On Thu, May 28, 2015 at 1:05 PM, Mike Hearn wrote:
> Isn't that a step backwards, then? I see no reason for fee pressure to
>> exist at the moment. All it's doing is turning away users for no purpose:
>> mining isn't supported by fees, and the tiny fees we use right now seem to
>> be good enough
On 28/05/2015 17:53, Gavin Andresen wrote:
>
> So my straw-man proposal would be: max size 2x average size over last 144
> blocks, calculated at every block.
>
I like that idea.
Average is a better choice than median. The median is not well defined
on discrete sets, as shown in your example
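The straw-man rule above can be sketched in a few lines of Python. This is not code from the thread; the 1 MB floor is an assumption taken from the "larger of 1MB or 2x average size" variant discussed elsewhere in this debate:

```python
def next_max_block_size(recent_sizes, floor=1_000_000):
    # Dynamic cap per the straw-man rule: twice the average size (bytes)
    # of the previous 144 blocks, recalculated at every block.
    # The 1 MB floor is an assumption, not part of the quoted rule.
    average = sum(recent_sizes) / len(recent_sizes)
    return max(floor, int(2 * average))
```

With blocks averaging 400 KB the cap sits at the floor; once the average passes 500 KB the cap starts tracking usage.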
>
> Even a 2x rule (implying 800K max blocks) would, today, be squeezing out
> transactions / putting pressure to increase fees .
>
> So my straw-man proposal would be: max size 2x average size over last 144
> blocks, calculated at every block.
>
Isn't that a step backwards, then? I see no re
I would support a dynamic block size increase as outlined. I have a few
questions though.
Is scaling by average block size the best and easiest method? Why not scale
by transactions confirmed instead? Anyone can write and relay a
transaction, and those are what we want to scale for, why not measur
On Fri, May 8, 2015 at 3:20 AM, Matt Whitlock wrote:
> Between all the flames on this list, several ideas were raised that did
> not get much attention. I hereby resubmit these ideas for consideration and
> discussion.
>
> - Perhaps the hard block size limit should be a function of the actual
> b
On Mon, May 18, 2015 at 2:42 AM, Rusty Russell
wrote:
> OK. Be nice if these were cleaned up, but I guess it's a sunk cost.
>
Yeah.
On the plus side, as people spend their money, old UTXOs would be used up
and then they would be included in the cost function. It is only people
who are storing
Tier Nolan writes:
> On Sat, May 16, 2015 at 1:22 AM, Rusty Russell
> wrote:
>> 3) ... or maybe not, if any consumed UTXO was generated before the soft
>>fork (reducing Tier's perverse incentive).
>
> The incentive problem can be fixed by excluding UTXOs from blocks before a
> certain count.
On Sat, May 16, 2015 at 1:22 AM, Rusty Russell
wrote:
> Some tweaks:
>
> 1) Nomenclature: call tx_size "tx_cost" and real_size "tx_bytes"?
>
Fair enough.
>
> 2) If we have a reasonable hard *byte* limit, I don't think that we need
>the MAX(). In fact, it's probably OK to go negative.
>
I
Tier Nolan writes:
> On Sat, May 9, 2015 at 4:36 AM, Gregory Maxwell wrote:
>
>> An example would
>> be tx_size = MAX( real_size >> 1, real_size + 4*utxo_created_size -
>> 3*utxo_consumed_size).
>
>
> This could be implemented as a soft fork too.
>
> * 1MB hard size limit
> * 900kB soft limit
I
On Sat, May 9, 2015 at 4:36 AM, Gregory Maxwell wrote:
> An example would
> be tx_size = MAX( real_size >> 1, real_size + 4*utxo_created_size -
> 3*utxo_consumed_size).
This could be implemented as a soft fork too.
* 1MB hard size limit
* 900kB soft limit
S = block size
U = UTXO_adjusted_siz
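The quoted cost function is simple enough to sketch directly. A minimal Python rendering, using the formula exactly as quoted (the parameter names follow the quote; all sizes in bytes):

```python
def tx_cost(real_size, utxo_created_size, utxo_consumed_size):
    # Gregory Maxwell's example cost function:
    #   tx_size = MAX(real_size >> 1,
    #                 real_size + 4*utxo_created_size - 3*utxo_consumed_size)
    # Creating UTXOs costs extra; consuming them earns a discount,
    # floored at half the transaction's real byte size.
    return max(real_size >> 1,
               real_size + 4 * utxo_created_size - 3 * utxo_consumed_size)
```

A transaction that shrinks the UTXO set counts for less than its byte size, but never less than half of it, so the discount cannot be gamed into a free block.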
> How much will that cost me?
> The network is hashing at 310PetaHash/sec right now.
> Takes 600 seconds to find a block, so 186,000PH per block
> 186,000 * 0.00038 = 70 extra PH
>
> If it takes 186,000 PH to find a block, and a block is worth 25.13 BTC
> (reward plus fees), that 70 PH costs:
> (2
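The arithmetic in the quote can be checked directly. A small Python sketch using only the figures given in the quote (the 0.00038 is taken as the marginal work fraction from the truncated text above):

```python
hashrate_ph_per_s = 310            # network hashrate, PH/s (from the quote)
block_interval_s = 600             # average seconds per block
work_per_block = hashrate_ph_per_s * block_interval_s   # 186,000 PH
extra_fraction = 0.00038           # marginal fraction from the quote
extra_work_ph = work_per_block * extra_fraction         # ~70 extra PH
block_value_btc = 25.13            # reward plus fees (from the quote)
cost_btc = block_value_btc * extra_fraction             # proportional cost
```

So the 70 PH of extra work corresponds to roughly 0.0095 BTC per block at the quoted reward.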
On 11/05/2015 00:31, Mark Friedenbach wrote:
> I'm on my phone today so I'm somewhat constrained in my reply, but the key
> takeaway is that the proposal is a mechanism for miners to trade subsidy
> for the increased fees of a larger block. Necessarily it only makes sense
> to do so when the m
I'm on my phone today so I'm somewhat constrained in my reply, but the key
takeaway is that the proposal is a mechanism for miners to trade subsidy
for the increased fees of a larger block. Necessarily it only makes sense
to do so when the marginal fee per KB exceeds the subsidy fee per KB. It
corr
On 08/05/2015 22:33, Mark Friedenbach wrote:
> * For each block, the miner is allowed to select a different difficulty
> (nBits) within a certain range, e.g. +/- 25% of the expected difficulty,
> and this miner-selected difficulty is used for the proof of work check. In
> addition to adjustin
On Sun, May 10, 2015 at 9:21 PM, Gavin Andresen wrote:
> a while I think any algorithm that ties difficulty to block size is just a
> complicated way of dictating minimum fees.
Thats not the long term effect or the motivation-- what you're seeing
is that the subsidy gets in the way here. Conside
Let me make sure I understand this proposal:
On Fri, May 8, 2015 at 11:36 PM, Gregory Maxwell wrote:
> (*) I believe my currently favored formulation of general dynamic control
> idea is that each miner expresses in their coinbase a preferred size
> between some minimum (e.g. 500k) and the miner
Micropayment channels are not pie in the sky proposals. They work today on
Bitcoin as it is deployed without any changes. People just need to start
using them.
On May 10, 2015 11:03, "Owen Gunden" wrote:
> On 05/08/2015 11:36 PM, Gregory Maxwell wrote:
> > Another related point which has been ten
On 05/08/2015 11:36 PM, Gregory Maxwell wrote:
> Another related point which has been tendered before but seems to have
> been ignored is that changing how the size limit is computed can help
> better align incentives and thus reduce risk. E.g. a major cost to the
> network is the UTXO impact of t
On Sat, May 09, 2015 at 01:36:56AM +0300, Joel Joonatan Kaartinen wrote:
> such a contract is a possibility, but why would big owners give an
> exclusive right to such pools? It seems to me it'd make sense to offer
> those for any miner as long as they get paid a little for it. Especially
> when it'
On Sat, May 9, 2015 at 12:58 PM, Gavin Andresen
wrote:
> RE: fixing sigop counting, and building in UTXO cost: great idea! One of
> the problems with this debate is it is easy for great ideas to get lost in all
> the noise.
>
If the UTXO set cost is built in, UTXO database entries suddenly are wort
RE: fixing sigop counting, and building in UTXO cost: great idea! One of
the problems with this debate is it is easy for great ideas to get lost in all
the noise.
RE: a hard upper limit, with a dynamic limit under it:
I like that idea. Can we drill down on the hard upper limit?
There are lots of pe
On Fri, May 8, 2015 at 8:33 PM, Mark Friedenbach wrote:
> These rules create an incentive environment where raising the block size has
> a real cost associated with it: a more difficult hashcash target for the
> same subsidy reward. For rational miners that cost must be counter-balanced
> by addit
It seems to me all this would do is encourage 0-transaction blocks, crippling
the network. Individual blocks don't have a "maximum" block size, they have an
actual block size. Rational miners would pick blocks to minimize difficulty,
lowering the "effective" maximum block size as defined by th
In a fee-dominated future, replace-by-fee is not an opt-in feature. When
you create a transaction, the wallet presents a range of fees that it
expects you might pay. It then signs copies of the transaction with spaced
fees from this interval and broadcasts the lowest fee first. In the user
interfac
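The fee-laddering behavior described above can be sketched simply. A hypothetical helper (not breadwallet code) that produces the evenly spaced fee levels a wallet would pre-sign:

```python
def fee_ladder(min_fee, max_fee, steps):
    # Evenly spaced fee levels (e.g. satoshis) for pre-signed replacement
    # copies of one transaction: the wallet broadcasts the cheapest copy
    # first and replaces it with the next level if it lingers unconfirmed.
    step = (max_fee - min_fee) / (steps - 1)
    return [round(min_fee + i * step) for i in range(steps)]
```

The spacing scheme (even steps) is an assumption; a real wallet might weight the ladder toward the low end.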
That's fair, and we've implemented child-pays-for-parent for spending
unconfirmed inputs in breadwallet. But what should the behavior be when
those options aren't understood/implemented/used?
My argument is that the less risky, more conservative default fallback
behavior should be either non-propa
On Fri, May 8, 2015 at 3:43 PM, Aaron Voisine wrote:
> This is a clever way to tie block size to fees.
>
> I would just like to point out though that it still fundamentally is using
> hard block size limits to enforce scarcity. Transactions with below market
> fees will hang in limbo for days and
This is a clever way to tie block size to fees.
I would just like to point out though that it still fundamentally is using
hard block size limits to enforce scarcity. Transactions with below market
fees will hang in limbo for days and fail, instead of failing immediately
by not propagating, or see
such a contract is a possibility, but why would big owners give an
exclusive right to such pools? It seems to me it'd make sense to offer
those for any miner as long as they get paid a little for it. Especially
when it's as simple as offering an incomplete transaction with the
appropriate SIGHASH flags
It is my professional opinion that raising the block size by merely
adjusting a constant without any sort of feedback mechanism would be a
dangerous and foolhardy thing to do. We are custodians of a multi-billion
dollar asset, and it falls upon us to weigh the consequences of our own
actions agains
On Fri, May 8, 2015 at 2:20 AM, Matt Whitlock wrote:
> - Perhaps the hard block size limit should be a function of the actual block
> sizes over some
> trailing sampling period. For example, take the median block size among the
> most recent
> 2016 blocks and multiply it by 1.5. This allows Bitc
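The median-based rule quoted above is easy to express directly. A minimal Python sketch, using the 2016-block window and 1.5 multiplier from the quote:

```python
import statistics

def hard_limit(trailing_sizes, multiplier=1.5):
    # Hard cap as a function of actual usage: the median block size
    # (bytes) over the trailing sample (2016 blocks in the proposal),
    # multiplied by 1.5. Median resists manipulation by a few outlier
    # blocks better than the mean.
    return int(statistics.median(trailing_sizes) * multiplier)
```

With a 600 KB median, the cap would be 900 KB for the next period.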
On Fri, May 08, 2015 at 03:32:00PM +0300, Joel Joonatan Kaartinen wrote:
> Matt,
>
> It seems you missed my suggestion about basing the maximum block size on
> the bitcoin days destroyed in transactions that are included in the block.
> I think it has potential for both scaling as well as keeping
Adaptive schedules, i.e. those where block size limit depends not only on
block height, but on other parameters as well, are surely attractive in the
sense that the system can adapt to the actual use, but they also open a
possibility of a manipulation.
E.g. one of mining companies might try to ban
Block size scaling should be as transparent and simple as possible, like
pegging it to total transactions per difficulty change.
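One way to read "pegging it to total transactions per difficulty change" as code. This sketch is illustrative only: the average-transaction-size and headroom constants are hypothetical, not values proposed in the email:

```python
def cap_from_confirmed_txs(txs_last_period, avg_tx_bytes=500, headroom=2.0):
    # Peg the next cap to transactions confirmed over the last difficulty
    # period (2016 blocks). avg_tx_bytes and headroom are hypothetical
    # tuning constants; the email names only the pegging idea itself.
    per_block = txs_last_period / 2016
    return int(per_block * avg_tx_bytes * headroom)
```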
On Friday, 8 May 2015, at 8:48 am, Matt Whitlock wrote:
> On Friday, 8 May 2015, at 3:32 pm, Joel Joonatan Kaartinen wrote:
> > It seems you missed my suggestion about basing the maximum block size on
> > the bitcoin days destroyed in transactions that are included in the block.
> > I think it has
On Friday, 8 May 2015, at 3:32 pm, Joel Joonatan Kaartinen wrote:
> It seems you missed my suggestion about basing the maximum block size on
> the bitcoin days destroyed in transactions that are included in the block.
> I think it has potential for both scaling as well as keeping up a constant
> fe
I like the bitcoin days destroyed idea.
I like lots of the ideas that have been presented here, on the bitcointalk
forums, etc etc etc.
It is easy to make a proposal, it is hard to wade through all of the
proposals. I'm going to balance that equation by completely ignoring any
proposal that isn't
Matt,
It seems you missed my suggestion about basing the maximum block size on
the bitcoin days destroyed in transactions that are included in the block.
I think it has potential for both scaling as well as keeping up a constant
fee pressure. If tuned properly, it should both stop spamming and inc
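The bitcoin-days-destroyed idea can be sketched as follows. This is not code from the thread; the base limit and bytes-per-day rate below are hypothetical tuning constants, chosen only to show the shape of the rule:

```python
def bitcoin_days_destroyed(inputs):
    # Sum over a block's inputs of (value moved in BTC) * (days since that
    # coin last moved). Old, large coins destroy many days; spam that
    # churns freshly received coins destroys almost none.
    return sum(value_btc * age_days for value_btc, age_days in inputs)

def max_block_size(days_destroyed, base=1_000_000, bytes_per_day=10):
    # Hypothetical tuning: base limit plus extra bytes per day destroyed.
    # Neither constant is proposed in the email; both are illustrative.
    return base + int(bytes_per_day * days_destroyed)
```

Under this shape, a block full of economically meaningful transfers earns more room than one stuffed with self-churn.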
Matt: I think proposal #1 and #3 are a lot better than #2, and #1 is my
favorite.
I see two problems with proposal #2.
The first problem with proposal #2 is that, as we see in democracies,
there is often a mismatch between people's conscious votes and those same
people's behavior.
Relying on an
There are certainly arguments to be made for and against all of these
proposals.
The fixed 20mb cap isn't actually my proposal at all, it is from Gavin. I
am supporting it because anything is better than nothing. Gavin originally
proposed the block size be a function of time. That got dropped, I s