Re: [bitcoin-dev] [BIP/Draft] BIP Acceptance Process

2015-09-09 Thread Andy Chase via bitcoin-dev
Thanks for your response BTC Drak, I will attempt to summarize your
points and respond to them:

* Some BIPs are not consensus critical -- True, see my response to Luke
* BIPs do not imply usage -- This I covered in my paper.
* Acceptance can be defined by actual use -- That's one way of doing it

> Getting back to your specific proposal. It seems to focus more on
> getting BIPs accepted in the sense of published

Wildly incorrect. My BIP had nothing to do with getting published. The
first words you can read in my proposal are as follows:

> The current process for accepting a BIP is not clearly defined. While 
> BIP-0001 defines the process for writing and submitting a Bitcoin Improvement 
> Proposal to the community it does not specify the precise method for which 
> BIPs are considered accepted or rejected. This proposal sets up a method for 
> determining BIP acceptance.

* but the proposal is "complete" when the proposer is happy with the final text.

This would be a cool inclusion. That is the intent of my "Submit for
comments" process.

---

Overall your post seemed to miss the point of my proposal, but that's
likely my fault for poor wording. I'm trying to develop a process for
coming to "consensus", i.e. gathering feedback and reducing opinions
down to a yes/no: should this BIP happen, or should we find a better
solution?

Importantly, it's not client specific. It's just a way of saying "hey
everyone, here's a problem and solution that a lot of people agree on"
or "hey everyone, here's a problem and solution that has a few
problems with it".

It's true that even if a "BIP" is "accepted" by my proposal it still
may not actually happen (this is mentioned in my proposal), and I
believe that's healthy. We can't force a change on anyone nor should
we.

---

Since so many people are missing the actual problem I'm solving,
here's another way of wording it: a BIP is proposed and goes through
the process. A PR that matches the BIP perfectly is submitted and
vetted. Should wladimir merge it?

My process isn't a perfect solution that would let us replace wladimir
with a wladBot. But it's a tool we can use for gathering meaningful
information to help guide that decision. Waiting on all objections to
be handled has worked okay so far, but it won't work forever.


On Mon, Sep 7, 2015 at 12:37 PM, Btc Drak  wrote:
>
> Sorry not to reply earlier. I have a rather long post. I've split it
> into two sections, one explaining the background and secondly talking
> very specifically about your proposal and possible areas to look at.
>
> I think there's a key misunderstanding about BIPs and "who decides
> what in Bitcoin". A BIP usually defines some problem and a solutions
> or helps communicate proposals to the technical community. They are
> sort of mini white papers on specific topics often with reference
> implementations attached. They may be consensus critical, or not. The
> process for getting a BIP published is fairly loose in that it really
> just requires some discussion and relevance to Bitcoin regardless of
> whether the proposal is something that would be accepted or used by
> others in the ecosystem. The BIP editor is obviously going to filter
> out obvious nonsense, which shouldn't be controversial but is obvious
> when you see it.
>
> You need to separate out the idea of BIPs as is, and implementations
> of BIPs in Bitcoin software (like Bitcoin Core).
>
> Take BIP64 for example. It's a proposal that adds a service to nodes
> allowing anyone to query the UTXO set on the p2p network. Bitcoin Core
> as a project has not implemented it; instead it was implemented in XT
> and is utilised by Lighthouse. So the BIP specification is there in
> the BIPs repository. As far as the bitcoin ecosystem goes, only
> Bitcoin XT and Lighthouse utilise it so far.
>
> BIP101 is another example, but one of a consensus critical proposal
> that would change the Bitcoin protocol (i.e. requires a hard fork). It
> was adopted by only the XT project and so far no other software. At
> the time of writing miners have chosen not to run implementations of
> BIP101.
>
> You can see the BIPs authoring and publishing process is a separate
> issue entirely to the implementation and acceptance by the Bitcoin
> ecosystem.
>
> For non-consensus critical proposals like BIP64, or maybe one relating
> to privacy (how to order transaction outputs, for example), you could
> judge acceptance of the proposal by the number of software projects
> that implement the proposal, and the number of users it impacts. If a
> proposal is utilised by many projects, but not the few projects that
> have the majority of users, one could not claim wide acceptance.
>
> For consensus critical proposals like BIP66 (Strict DER encoding) this
> BIP was implemented in at least two bitcoin software implementations.
> Over 95% of miners adopted the proposal over a 4.5 month period. The
> BIP became de facto accepted, and in fact, once 95% 

[bitcoin-dev] Yet another blocklimit proposal / compromise

2015-09-09 Thread Marcel Jamin via bitcoin-dev
I propose to:

a) assess what blocklimit is currently technically possible without driving
up the costs of running a node too much. Most systems currently running a
fullnode probably have some capacity left.

b) set the determined blocklimit at the next reward halving

c) then double the blocksize limit at every halving thereafter up to a
hardlimit of 8GB.

Reasoning:

Doubling every four years will stay within expected technological growth.
Cisco's VNI forecast predicts a 2.1-fold increase in global average fixed
broadband speed from 2014 to 2019. Nielsen's law, which looks more at the
power user (probably more fitting), is even more optimistic with +50% per
year.

This proposal can be considered a compromise between Pieter's and Gavin's
proposals. While the growth rate is more or less what Pieter proposes, it
includes an initial increase to kickstart the growth. If we start with 8MB,
which seems to be popular among miners, we would reach 8GB in 2056 (as
opposed to 2036 in BIP101). The start date (ca. mid 2016) is also a
compromise between Pieter's 01/2017 and Gavin's 01/2016.

It's simple, predictable and IMHO elegant -- block subsidy halves,
blocksize limit doubles.

It might make sense to update the limit more often in between, though.
Either completely linearly based on a block's timestamp like in BIP101, or
for example for each difficulty period.
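The schedule in (a)-(c) can be sketched as a simple function of block
height. This is an illustration only: the 8 MB starting value, the
mid-2016 start at block 420,000, and the function name are assumptions
taken from the text above, not part of any specification.

```python
HALVING_INTERVAL = 210_000       # blocks between subsidy halvings
FIRST_INCREASE_HEIGHT = 420_000  # next halving (ca. mid-2016), assumed start
INITIAL_LIMIT = 8_000_000        # 8 MB initial limit (popular among miners)
HARD_LIMIT = 8_000_000_000       # 8 GB hard limit

def block_size_limit(height: int) -> int:
    """Blocksize limit doubles at every halving after the initial bump,
    capped at the 8 GB hard limit."""
    if height < FIRST_INCREASE_HEIGHT:
        return 1_000_000  # current 1 MB limit
    doublings = (height - FIRST_INCREASE_HEIGHT) // HALVING_INTERVAL
    return min(INITIAL_LIMIT * 2 ** doublings, HARD_LIMIT)
```

Ten doublings of 8 MB overshoot 8 GB, so the cap is reached at the tenth
halving after the bump: ten four-year periods from 2016, i.e. 2056, as
stated above.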
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Dynamic limit to the block size - BIP draft discussion

2015-09-09 Thread Washington Sanchez via bitcoin-dev
Errata + clarity (in bold):
>
>
>- So in my proposal, if 2000+ *blocks *have a size >= 60% *of the
>current limit*, this is an indication that real transaction volume has
>increased and we're approaching a time where blocks could be filled to
>capacity without an increase. The block size increase, 10%, is triggered.
>
>
On Wed, Sep 9, 2015 at 9:11 AM, Washington Sanchez <
washington.sanc...@gmail.com> wrote:

> If you want me to take your proposal seriously, you need to justify why
>> 60% full is a good answer
>>
>
> Sure thing Gavin.
>
> If you want blocks to be at least 60% full...
>
>
> First off, I do not want blocks to be at least 60% full, so let me try
> and explain where I got this number from:
>
>- The idea of this parameter is to set a *triggering level* for an
>increase in the block size
>- The triggering level is the point where a reasonable medium-term
>trend can be observed. That trend is an increase in the transaction volume
>that, left unchecked, would fill up blocks.
>- Determining the appropriate triggering level is difficult, and it
>consists of 3 parameters:
>   1. Evaluation period
>  - *Period of time where you check to see if the conditions to
>  trigger a block size increase are true*
>  - Ideally you want an increase to occur in response to a real
>  increase of transaction volume from the market, and not some
>  short-term spam attack.
>  - Too short, and spam attacks can be used to trigger multiple
>  increases (at least early on). Too long, and the block size doesn't
>  increase fast enough to meet transaction demand.
>  - I selected a period of *4032 blocks*
>  2. Capacity
>  - *The capacity level that a majority of blocks
>  would demonstrate in order to trigger a block size increase*
>  - The capacity level, in tandem with the evaluation period and
>  threshold level, needs to reflect an underlying trend towards filling
>  blocks.
>  - If the capacity level is too low, block size increases can be
>  triggered prematurely. If the capacity level is too high, the
>  network could be unnecessarily jammed with transactions before an
>  increase can kick in.
>  - I selected a capacity level of *60%*.
>   3. Threshold
>  - *The number of blocks during the evaluation period that are
>  above the capacity level in order to trigger a block size increase.*
>  - If blocks are getting larger than 60% of the limit over a
>  4032-block period, how many reflect a market-driven increase in
>  transaction volume?
>  - If the threshold is too low, increases could be triggered
>  artificially or prematurely. If the threshold is too high, it gets
>  easier for 1-2 mining pools to prevent any increases in the block
>  size, or the block size doesn't respond fast enough to a real
>  increase in transaction volume.
>  - I selected a threshold of *2000 blocks or ~50%*.
>   - So in my proposal, if 2000+ nodes have a block size >= 60%, this
>is an indication that real transaction volume has increased and we're
>approaching a time where blocks could be filled to capacity without an
>increase. The block size increase, 10%, is triggered.
>
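The three parameters above combine into a single trigger check. A
minimal sketch (the function name and the list-of-block-sizes interface
are illustrative, not part of the proposal):

```python
EVALUATION_PERIOD = 4032  # blocks in the evaluation window (~4 weeks)
CAPACITY = 0.60           # fraction of the current limit a block must reach
THRESHOLD = 2000          # blocks above capacity needed to trigger (~50%)
INCREASE = 1.10           # 10% block size limit increase

def next_limit(current_limit: int, block_sizes: list) -> int:
    """Raise the limit by 10% if enough blocks in the evaluation
    period are at or above 60% of the current limit."""
    assert len(block_sizes) == EVALUATION_PERIOD
    full_enough = sum(1 for s in block_sizes if s >= CAPACITY * current_limit)
    if full_enough >= THRESHOLD:
        return int(current_limit * INCREASE)
    return current_limit
```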
> A centralized decision, presumably by Satoshi, was made on the parameters
> that alter the target difficulty, rather than attempt to forecast hash
> rates based on his CPU power. He allowed the system to scale to a level
> where real market demand would take it. I believe the same approach should
> be replicated for the block size. The trick of course is settling on the
> right variables. I hope this proposal is a good way to do that.
>
> *Some additional calculations*
>
> Block sizes for each year are *theoretical maximums* if ALL trigger
> points are activated in my proposal (unlikely, but anyway).
> These calculations assume zero transactions are taken off-chain by third
> party processors or the LN, and no efficiency improvements.
>
>- 2015
>   - 1 MB/block
>   - 2 tps (conservative factor, also carried on below)
>   - 0.17 million tx/day
>- 2016
>   - 3.45 MB/block
>   - 7 tps
>   - 0.6 million tx/day
>- 2017
>   - 12 MB/block
>   - 24 tps
>   - 2 million tx/day
>- 2018
>   - 41 MB/block
>   - 82 tps
>   - 7 million tx/day
>- 2019
>   - 142 MB/block
>   - 284 tps
>   - 25 million tx/day
>- 2020
>   - 490 MB/block
>   - 980 tps
>   - 85 million tx/day
>
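The yearly maximums above follow from compounding: at roughly 52,560
blocks per year there are about 13 evaluation periods of 4032 blocks,
so if every trigger fires the limit grows by a factor of about
1.1^13 ≈ 3.45 per year. A quick sketch of that arithmetic (the helper
name is hypothetical):

```python
BLOCKS_PER_YEAR = 52_560            # ~144 blocks/day * 365 days
PERIODS_PER_YEAR = BLOCKS_PER_YEAR // 4032  # ~13 evaluation periods

def max_limit_after(years: int, start_mb: float = 1.0) -> float:
    """Theoretical maximum block size limit (in MB) if every
    10% trigger fires in every evaluation period."""
    return start_mb * 1.10 ** (PERIODS_PER_YEAR * years)
```

This reproduces the table: ~3.45 MB after one year, ~12 MB after two,
~490 MB after five.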
> By way of comparison, Alipay (payment processor for the Alibaba Group's
> ecosystem) processes 30 million escrow transactions per day. This gives us
> at least 4-5 years to reach the present day transaction processing capacity
> of 1 corporation... in reality it will take a little longer as I doubt all
> block size triggers 

Re: [bitcoin-dev] Adjusted difficulty depending on relative blocksize

2015-09-09 Thread Tom Harding via bitcoin-dev

There is another concern regarding "flexcap" that was not discussed.

A change to difficulty in response to anything BUT observed block
production rate unavoidably changes the money supply schedule, unless
you also change the reward, and in that case you've still changed the
timing even if not the average rate.


On 8/14/2015 8:14 AM, Jakob Rönnbäck via bitcoin-dev wrote:
> Ah, there we go. I should have dug deeper into the mailing list
>
> Thanks
>
> /jakob
>
>> 14 aug 2015 kl. 17:03 skrev Adam Back :
>>
>> There is a proposal that relates to this, see the flexcap proposal by
>> Greg Maxwell & Mark Friedenbach, it was discussed on the list back in
>> May:
>>
>> http://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-May/008017.html
>>
>> and 
>> http://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-May/008038.html
>>
>> Adam
>



Re: [bitcoin-dev] Adjusted difficulty depending on relative blocksize

2015-09-09 Thread Tom Harding via bitcoin-dev


Well let's see.  All else being equal, if everybody uses difficulty to
buy big blocks during retarget interval 0, block production and therefore
money issuance is slower during that interval.  Then, the retargeting
causes it to be faster during interval 1.  Subsidy got shifted from the
calendar period corresponding to interval 0, to interval 1.


If you change the reward, you can lower the time-frame of the effect to 
the order of a single block interval, but there is still an effect.


These schemes do not avoid the need for a hard cap, and there are new 
rules for the size of the allowed adjustment, in addition to the main 
rule relating difficulty to block size.  So it seems they generally have 
more complexity than the other blocksize schemes being considered.
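A toy model of that shift (illustrative names and numbers only; it
assumes miners pay a uniform extra-difficulty multiplier for larger
blocks over exactly one 2016-block retarget interval, then stop):

```python
TARGET_INTERVAL = 2016 * 600  # seconds per retarget period at 10 min/block

def interval_durations(difficulty_multiplier: float):
    """Toy model of the subsidy shift: interval 0 runs slow by the
    multiplier miners pay for big blocks; the retarget then divides
    base difficulty by the same factor, so interval 1 (mined at normal
    difficulty again) runs fast by that factor."""
    slow = TARGET_INTERVAL * difficulty_multiplier  # interval 0 duration
    fast = TARGET_INTERVAL / difficulty_multiplier  # interval 1 duration
    return slow, fast
```

With a 1.2x multiplier, interval 0 stretches to ~16.8 days and interval
1 shrinks to ~11.7 days: the same coins are issued, but their calendar
timing has moved, which is the concern stated above.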



On 9/9/2015 11:59 AM, Warren Togami Jr. via bitcoin-dev wrote:
> Does it really change the schedule when the next difficulty retarget
> readjusts to an average of 10 minutes again?


On Tue, Sep 8, 2015 at 8:27 PM, Tom Harding via bitcoin-dev wrote:

> There is another concern regarding "flexcap" that was not discussed.
>
> A change to difficulty in response to anything BUT observed block
> production rate unavoidably changes the money supply schedule, unless
> you also change the reward, and in that case you've still changed the
> timing even if not the average rate.


