Re: [bitcoin-dev] Block size following technological growth

2015-08-06 Thread Gavin Andresen via bitcoin-dev
On Wed, Aug 5, 2015 at 9:26 PM, Jorge Timón <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> This is a much more reasonable position. I wish this had been starting
> point of this discussion instead of "the block size limit must be
> increased as soon as possible or bitcoin will fail".
>

It REALLY doesn't help the debate when you say patently false statements
like that.

My first blog post on this issue is here:
  http://gavinandresen.ninja/why-increasing-the-max-block-size-is-urgent

... and I NEVER say "Bitcoin will fail".  I say:

"If the number of transactions waiting gets large enough, the end result
will be an over-saturated network, busy doing nothing productive. I don’t
think that is likely– it is more likely people just stop using Bitcoin
because transaction confirmation becomes increasingly unreliable."

Mike sketched out the worst-case here:
  https://medium.com/@octskyward/crash-landing-f5cc19908e32

... and concludes:

"I believe there are no situations in which Bitcoin can enter an overload
situation and come out with its reputation and user base intact. Both would
suffer heavily and as Bitcoin is the founder of the cryptocurrency concept,
the idea itself would inevitably suffer some kind of negative
repercussions."




So please stop with the over-the-top claims about what "the other side"
believe, there are enough of those (on both sides of the debate) on reddit.
I'd really like to focus on how to move forward, and how best to resolve
difficult questions like this in the future.

-- 
--
Gavin Andresen
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Block size following technological growth

2015-08-06 Thread Pieter Wuille via bitcoin-dev
On Thu, Aug 6, 2015 at 3:40 PM, Gavin Andresen via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> On Wed, Aug 5, 2015 at 9:26 PM, Jorge Timón <
> bitcoin-dev@lists.linuxfoundation.org> wrote:
>
>> This is a much more reasonable position. I wish this had been starting
>> point of this discussion instead of "the block size limit must be
>> increased as soon as possible or bitcoin will fail".
>>
>
> It REALLY doesn't help the debate when you say patently false statements
> like that.
>
> My first blog post on this issue is here:
>   http://gavinandresen.ninja/why-increasing-the-max-block-size-is-urgent
>
> ... and I NEVER say "Bitcoin will fail".  I say:
>
> "If the number of transactions waiting gets large enough, the end result
> will be an over-saturated network, busy doing nothing productive. I don’t
> think that is likely– it is more likely people just stop using Bitcoin
> because transaction confirmation becomes increasingly unreliable."
>

But you seem to consider that a bad thing. Maybe saying that you're
claiming that this equals Bitcoin failing is an exaggeration, but you do
believe that evolving towards an ecosystem where there is competition for
block space is a bad thing, right?

I don't agree that "Not everyone is able to use the block chain for every
use case" is the same thing as "People stop using Bitcoin". People are
already not using it for every use case.

Here is what my proposed BIP says: "No hard forking change that relaxes the
block size limit can be guaranteed to provide enough space for every
possible demand - or even any particular demand - unless strong
centralization of the mining ecosystem is expected. Because of that, the
development of a fee market and the evolution towards an ecosystem that is
able to cope with block space competition should be considered healthy."

-- 
Pieter
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Block size following technological growth

2015-08-06 Thread Gavin Andresen via bitcoin-dev
On Thu, Aug 6, 2015 at 10:06 AM, Pieter Wuille 
wrote:

> But you seem to consider that a bad thing. Maybe saying that you're
> claiming that this equals Bitcoin failing is an exaggeration, but you do
> believe that evolving towards an ecosystem where there is competition for
> block space is a bad thing, right?
>

No, competition for block space is good.

What is bad is artificially limiting or centrally controlling the supply of
that space.

-- 
--
Gavin Andresen
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Block size following technological growth

2015-08-06 Thread Pieter Wuille via bitcoin-dev
On Thu, Aug 6, 2015 at 4:21 PM, Gavin Andresen 
wrote:

> On Thu, Aug 6, 2015 at 10:06 AM, Pieter Wuille 
> wrote:
>
>> But you seem to consider that a bad thing. Maybe saying that you're
>> claiming that this equals Bitcoin failing is an exaggeration, but you do
>> believe that evolving towards an ecosystem where there is competition for
>> block space is a bad thing, right?
>>
>
> No, competition for block space is good.
>

So if we would have 8 MB blocks, and there is a sudden influx of users (or
settlement systems, who serve much more users) who want to pay high fees
(let's say 20 transactions per second) making the block chain inaccessible
for low fee transactions, and unreliable for medium fee transactions (for
any value of low, medium, and high), would you be ok with that? If so, why
is 8 MB good but 1 MB not? To me, they're a small constant factor that does
not fundamentally improve the scale of the system. I dislike the outlook of
"being forever locked at the same scale" while technology evolves, so my
proposal tries to address that part. It intentionally does not try to
improve a small factor, because I don't think it is valuable.

What is bad is artificially limiting or centrally controlling the supply of
> that space.
>

It's exactly as centrally limited as the finite supply of BTC - by
consensus. You and I may agree that a finite supply is a good thing, and
may disagree about whether a consensus rule about the block size is a good
idea (and if so, at what level), but it's a choice we make as a community
about the rules of the system we want to use.

-- 
Pieter
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


[bitcoin-dev] Fwd: Block size following technological growth

2015-08-06 Thread Gavin Andresen via bitcoin-dev
On Thu, Aug 6, 2015 at 10:53 AM, Pieter Wuille 
wrote:

> So if we would have 8 MB blocks, and there is a sudden influx of users (or
> settlement systems, who serve much more users) who want to pay high fees
> (let's say 20 transactions per second) making the block chain inaccessible
> for low fee transactions, and unreliable for medium fee transactions (for
> any value of low, medium, and high), would you be ok with that?


Yes, that's fine. If the network cannot handle the transaction volume that
people want to pay for, then the marginal transactions are priced out. That
is true today (otherwise ChangeTip would be operating on-blockchain), and
will be true forever.


> If so, why is 8 MB good but 1 MB not? To me, they're a small constant
> factor that does not fundamentally improve the scale of the system.


"better is better" -- I applaud efforts to fundamentally improve the
scalability of the system, but I am an old, cranky, pragmatic engineer who
has seen that successful companies tackle problems that arise and are
willing to deploy not-so-perfect solutions if they help whatever short-term
problem they're facing.


> I dislike the outlook of "being forever locked at the same scale" while
> technology evolves, so my proposal tries to address that part. It
> intentionally does not try to improve a small factor, because I don't think
> it is valuable.


I think consensus is against you on that point.

-- 
--
Gavin Andresen
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Block size following technological growth

2015-08-06 Thread Jorge Timón via bitcoin-dev
On Thu, Aug 6, 2015 at 3:40 PM, Gavin Andresen  wrote:
> On Wed, Aug 5, 2015 at 9:26 PM, Jorge Timón
>  wrote:
>>
>> This is a much more reasonable position. I wish this had been starting
>> point of this discussion instead of "the block size limit must be
>> increased as soon as possible or bitcoin will fail".
>
>
> It REALLY doesn't help the debate when you say patently false statements
> like that.

I'm pretty sure that I can quote Mike Hearn with a sentence extremely
similar to that in this forum or in some of his blog posts (not in
https://medium.com/@octskyward/crash-landing-f5cc19908e32 at a first
glance...).
But yeah, what people said in the past is not very important: people
change their minds (they even acknowledge their mistake some times).
What interests me more it's what people think now.

I don't want to put words in your mouth and you are more than welcome
to correct what I think you think with what you really think.
All I'm trying to do is framing your fears properly.
If I say "all fears related to not raising the block size limit in the
short term can be summarized as a fear of fees rising in the short
term".
Am I correct? Am I missing some other argument?
Of course, problems that need to be solved regardless of the block
size (like an unbounded mempool) should not be considered for this
discussion.

> My first blog post on this issue is here:
>   http://gavinandresen.ninja/why-increasing-the-max-block-size-is-urgent
>
> ... and I NEVER say "Bitcoin will fail".  I say:
>
> "If the number of transactions waiting gets large enough, the end result
> will be an over-saturated network, busy doing nothing productive. I don’t
> think that is likely– it is more likely people just stop using Bitcoin
> because transaction confirmation becomes increasingly unreliable."

If you pay high enough fees your transactions will be likely mined in
the next block.
So this seems to be reducible to the "fees rising" concern unless I am
missing something.

> So please stop with the over-the-top claims about what "the other side"
> believe, there are enough of those (on both sides of the debate) on reddit.
> I'd really like to focus on how to move forward, and how best to resolve
> difficult questions like this in the future.

I think I would have a much better understanding of what "the other
side" thinks if I ever got an answer to a couple of very simple
questions I have been repeating ad nausea:

1) If "not now" when will it be a good time to let fees rise above zero?

2) When will you consider a size to be too dangerous for centralization?
In other words, why 20 GB would have been safe but 21 GB wouldn't have
been (or the respective maximums and respective +1 for each block
increase proposal)?

On Thu, Aug 6, 2015 at 4:21 PM, Gavin Andresen  wrote:
> What is bad is artificially limiting or centrally controlling the supply of
> that space.

3) Does this mean that you would be in favor of completely removing
the consensus rule that limits mining centralization by imposing an
artificial (like any other consensus rule) block size maximum?

I've been insistently repeating this question too.
Admittedly, it would be a great disappointment if your answer to this
question is "yes": that can only mean that either you don't understand
how the consensus rule limits mining centralization or that you don't
care about mining centralization at all.

If you really want things to move forward, please, prove it by
answering these questions so that we don't have to imagine what the
answers are (because what we imagine is probably much worse than your
actual answers).
I'm more than willing to stop trying to imagine what "big block
advocates" think, but I need your answers from the "big block
advocates".

Asking repeatedly doesn't seem to be effective. So I will answer the
questions myself in the worse possible way I think a "big block
advocate" could answer them.
Feel free to replace my stupid answers with your own:

--- (FICTION ANSWERS [given the lack of real answers])

3) Does this mean that you would be in favor of completely removing
the consensus rule that limits mining centralization by imposing an
artificial (like any other consensus rule) block size maximum?

Yes, I would remove the rule because I don't care about mining centralization.

2) When will you consider a size to be too dangerous for centralization?
In other words, why 20 GB would have been safe but 21 GB wouldn't have
been (or the respective maximums and respective +1 for each block
increase proposal)?

Never, as said I don't care about mining centralization.
I thought users and Bitcoin companies would agree with a 20 GB limit
hardfork with proper lobbying, but I certainly prefer 21 GB.
From 1 MB to infinity, the bigger the better, always.

1) If "not now" when will it be a good time to let fees rise above zero?

Never. Fees are just an excuse, the real goal is making Bitcoin centralized.



I'm quite confident that y

Re: [bitcoin-dev] Fwd: Block size following technological growth

2015-08-06 Thread Pieter Wuille via bitcoin-dev
On Thu, Aug 6, 2015 at 5:06 PM, Gavin Andresen 
wrote:

> On Thu, Aug 6, 2015 at 10:53 AM, Pieter Wuille 
> wrote:
>
>> So if we would have 8 MB blocks, and there is a sudden influx of users
>> (or settlement systems, who serve much more users) who want to pay high
>> fees (let's say 20 transactions per second) making the block chain
>> inaccessible for low fee transactions, and unreliable for medium fee
>> transactions (for any value of low, medium, and high), would you be ok with
>> that?
>
>
> Yes, that's fine. If the network cannot handle the transaction volume that
> people want to pay for, then the marginal transactions are priced out. That
> is true today (otherwise ChangeTip would be operating on-blockchain), and
> will be true forever.
>

The network can "handle" any size. I believe that if a majority of miners
forms SPV mining agreements, then they are no longer affected by the block
size, and benefit from making their blocks slow to validate for others (as
long as the fee is negligable compared to the subsidy). I'll try to find
the time to implement that in my simulator. Some hardware for full nodes
will always be able to validate and index the chain, so nobody needs to run
a pesky full node anymore and they can just use a web API to validate
payments.

Being able the "handle" a particular rate is not a boolean question. It's a
question of how much security, centralization, and risk for systemic error
we're willing to tolerate. These are not things you can just observe, so
let's keep talking about the risks, and find a solution that we agree on.


>
>> If so, why is 8 MB good but 1 MB not? To me, they're a small constant
>> factor that does not fundamentally improve the scale of the system.
>
>
> "better is better" -- I applaud efforts to fundamentally improve the
> scalability of the system, but I am an old, cranky, pragmatic engineer who
> has seen that successful companies tackle problems that arise and are
> willing to deploy not-so-perfect solutions if they help whatever short-term
> problem they're facing.
>

I don't believe there is a short-term problem. If there is one now, there
will be one too at 8 MB blocks (or whatever actual size blocks are
produced).


>
>
>> I dislike the outlook of "being forever locked at the same scale" while
>> technology evolves, so my proposal tries to address that part. It
>> intentionally does not try to improve a small factor, because I don't think
>> it is valuable.
>
>
> I think consensus is against you on that point.
>

Maybe. But I believe that it is essential to not take unnecessary risks,
and find a non-controversial solution.

-- 
Pieter
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Block size following technological growth

2015-08-06 Thread Gavin Andresen via bitcoin-dev
On Thu, Aug 6, 2015 at 11:25 AM, Jorge Timón  wrote:

> 1) If "not now" when will it be a good time to let fees rise above zero?
>

Fees are already above zero. See
http://gavinandresen.ninja/the-myth-of-not-full-blocks


> 2) When will you consider a size to be too dangerous for centralization?
> In other words, why 20 GB would have been safe but 21 GB wouldn't have
> been (or the respective maximums and respective +1 for each block
> increase proposal)?
>

http://gavinandresen.ninja/does-more-transactions-necessarily-mean-more-centralized

3) Does this mean that you would be in favor of completely removing
> the consensus rule that limits mining centralization by imposing an
> artificial (like any other consensus rule) block size maximum?
>

I don't believe that the maximum block size has much at all to do with
mining centralization, so I don't accept the premise of the question.

-- 
--
Gavin Andresen
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Block size following technological growth

2015-08-06 Thread Mike Hearn via bitcoin-dev
Whilst 1mb to 8mb might seem irrelevant from a pure computer science
perspective payment demand is not really infinite, at least not if by
"payment" we mean something resembling how current Bitcoin users use the
network.

If we define "payment" to mean the kind of thing that Bitcoin users and
enthusiasts have been doing up until now, then suddenly 1mb to 8mb makes a
ton of sense and doesn't really seem that small: we'd have to increase
usage by nearly an order of magnitude before it becomes an issue again!

If we think of Bitcoin as a business that serves customers, growing our
user base by an order of magnitude would be a great and celebration worthy
achievement! Not at all a small constant factor :)

And keeping the current user base happy and buying things is extremely
interesting, both to me and Gavin. Without users Bitcoin is nothing at all.
Not a settlement network, not anything.

It's actually going to be quite hard to grow that much. As the white paper
says, "the system works well enough for most transactions". And despite a
lot of effort by many people, killer apps that use Bitcoin's unique
features are still hit and miss. Perhaps Streamium, Lighthouse, ChangeTip,
some distributed exchange or something else will stimulate huge new demand
for transactions in future . but if so we're not there yet.
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Block size following technological growth

2015-08-06 Thread Tom Harding via bitcoin-dev
On 8/6/2015 7:53 AM, Pieter Wuille via bitcoin-dev wrote:
>
> So if we would have 8 MB blocks, and there is a sudden influx of users
> (or settlement systems, who serve much more users) who want to pay
> high fees (let's say 20 transactions per second) making the block
> chain inaccessible for low fee transactions, and unreliable for medium
> fee transactions (for any value of low, medium, and high), would you
> be ok with that?

Gavin has answered this question in the clearest way possible -- in
tested C++ code, which increases capacity only on a precise schedule for
20 years, then stops increasing.


___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


[bitcoin-dev] Analysis paralysis and the blocksize debate

2015-08-06 Thread Ken Friece via bitcoin-dev
I am a long time Bitcoin user, miner, investor, full node operator, and
distributed real-time system software engineer.

Out of the all currently proposed blocksize increase solutions, I support
BIP101 (or something like it) and find the current blocksize debate very
frustrating, so I will try to systematically debunk some common arguments
against BIP101 below.

   1.

   We should not increase the blocksize because we'll just end up hitting
   the limit again at a future time.


   -

  If a reasonable blocksize increase like BIP101 that scales with
  technology is implemented, it's not a given that we will hit the
blocksize
  limit (consistently full blocks over an extended period of time) again in
  the future. Stating that we will definitely hit the blocksize limit again
  in the future is pure speculation about future bitcoin transaction volume
  that can't possibly be known at this time, and is nothing but conjecture.
  If technology increases faster than bitcoin transaction volume, simply
  scaling the Bitcoin blocksize could keep up with future demand.
If Bitcoin
  is successful beyond our wildest dreams and future transaction volume
  outstrips future blockchain space, alternative solutions like
sidechains or
  the lightning network will have more time to be implemented.


   1.

   The full node count will go down if we increase the blocksize because it
   will be more costly to run a full node.


   -

  I'm not sure this is true. I currently run a full node on a higher
  end consumer PC with a higher end home network connection, but if the
  blocksize limit is not raised and transaction fees increase such
that it is
  no longer economical for me to access the bitcoin blockchain directly, I
  will no longer run a full node to support the network. Bitcoin
is no longer
  interesting to me if it gets to the point where the average user
is priced
  off the blockchain. Users that have direct access to the blockchain are
  more likely to run full nodes. If Bitcoin ever becomes purely a limited,
  small settlement network, I will no longer support it and move onto
  something else.


   1.

   The blocksize can only be increased if there is developer “consensus”
   and the change is not “controversial”.


   -

  Any blocksize increase will the deemed “controversial” and lack
  “consensus”, but doing nothing in the face of consistently full
blocks and
  rising transaction fees is also “controversial” and will lack
“consensus”.
  Inaction, maintaining the status quo, and blocking a blocksize
increase in
  the face of consistently full blocks and rising transaction fees
is just as
  controversial as increasing the blocksize.


   1.

   Don't increase the blocksize now with all the promising work going on
   with sidechains and the lightning network.


   -

  KISS (keep it simple, stupid). When dealing with a highly complex,
  mission critical system, there needs to be a very compelling reason to
  choose a much more complex solution over a simple solution like BIP101.
  Sidechains and the lightning network are much more complex than
BIP101 and
  introduce new points of failure. It's hard enough to get folks
to trust the
  Bitcoin network after 7 years, so it will likely take years until
  sidechains and the lightning network are proven enough to be trusted by
  users.


   1.

   Some miners will be at a disadvantage with larger blocks.


   -

  As folks have pointed out multiple times, network speed is just one
  of many variables that impact mining profitability. I don't believe for a
  second that every miner in China (57% of the network) would be negatively
  impacted by larger blocks because some of them either have decent network
  connections or connect to mining pools with decent network connections.
  Miners will be given ample notice of a blocksize increase before
it occurs,
  so they will have time to make plans to deal with larger blocks. More
  transactions per block should also increase miner profitability because
  more transactions equals more users equals more transaction fees!

In conclusion, I think the next year is a make or break year for Bitcoin
and hope the developers do everything they can to come up with a reasonable
long term growth plan to put Bitcoin in the best possible position to be
successful.
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Block size following technological growth

2015-08-06 Thread Jorge Timón via bitcoin-dev
First of all, thank you very much for answering the questions, and
apologies for not having formulated them properly (fortunately that's
not an irreparable mistake).

On Thu, Aug 6, 2015 at 6:03 PM, Gavin Andresen  wrote:
> On Thu, Aug 6, 2015 at 11:25 AM, Jorge Timón  wrote:
>>
>> 1) If "not now" when will it be a good time to let fees rise above zero?
>
>
> Fees are already above zero. See
> http://gavinandresen.ninja/the-myth-of-not-full-blocks

When we talk about "fees" we're talking about different things. I
should have been more specific.
Average fees are greatly influenced by wallet and policy defaults, and
they also include extra fees that are included for fast confirmation.
I'm not talking about fast confirmation transactions, but about
non-urgent transactions.

What is the market minimum fee for miners to mine a transaction?

That's currently zero.
If you don't want to directly look at what blocks contain, we can also
use a fee estimator and define a "non-urgent period", say 1 week worth
of blocks (1008 blocks).
The chart in your link doesn't include a 1008 blocks line, but the 15
blocks (about 2.5 hours) line seems to already show zero fees:

http://img.svbtle.com/p4x3s7fn52sz9a.png

So I reformulate the question:

1) If "not now", when will it be a good time to let the "market
minimum fee for miners to mine a transaction" rise above zero?

>> 2) When will you consider a size to be too dangerous for centralization?
>> In other words, why 20 GB would have been safe but 21 GB wouldn't have
>> been (or the respective maximums and respective +1 for each block
>> increase proposal)?
>
>
> http://gavinandresen.ninja/does-more-transactions-necessarily-mean-more-centralized

This just shows where the 20 GB come from, not why you would reject 21 GB.
Let me rephrase.

2) Do you have any criterion (automatic or not) that can result in you
saying "no, this is too much" for any proposed size?
Since you don't think the consensus block size maximum limits mining
centralization (as you later say), it must be based on something else.
In any case, if you lack a criterion that's fine as well: it's never
too late to have one.
Would you agree that blocksize increase proposals should have such a
criterion/test?

>> 3) Does this mean that you would be in favor of completely removing
>> the consensus rule that limits mining centralization by imposing an
>> artificial (like any other consensus rule) block size maximum?
>
>
> I don't believe that the maximum block size has much at all to do with
> mining centralization, so I don't accept the premise of the question.

Ok, this is an enormous step forward in the discussion, thank you.
In my opinion all discussions will be sterile while we can't even
agree on what are the positive effects of the consensus rule that
supposedly needs to be changed.

It's not that you don't care about centralization, it's that you don't
believe that a consensus block size maximum limits centralization at
all.
This means that if I can convince you that the consensus block size
maximum does in fact limit centralization in any way, you may change
your views about the whole blocksize consensus rule change, you may
even take back or change your own proposal.
But let's leave that aside that for now.

Regardless of the history of the consensus rule (which I couldn't care
less about), I believe the only function that the maximum block size
rule currently serves is limiting centralization.
Since you deny that function, do you think the (artificial) consensus
rule is currently serving any other purpose that I'm missing?

If the answer is something along the lines of "not really, it's just
technical debt", then I think you should be honest and consequent, and
directly advocate for the complete removal of the consensus rule.

I really think conversations can't really advance until we clarify the
different positions about the discussed consensus rule current
purpose.
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Idea: Efficient bitcoin block propagation

2015-08-06 Thread Sergio Demian Lerner via bitcoin-dev
Is there any up to date documentation about TheBlueMatt relay network
including what kind of block compression it is currently doing? (apart from
the source code)

Regards, Sergio.

On Wed, Aug 5, 2015 at 7:14 PM, Gregory Maxwell via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> On Wed, Aug 5, 2015 at 9:19 PM, Arnoud Kouwenhoven - Pukaki Corp
>  wrote:
> > Thanks for this (direct) feedback. It would make sense that if blocks
> can be
> > submitted using ~5kb packets, that no further optimizations would be
> needed
> > at this point. I will look into the relay network transmission protocol
> to
> > understand how it works!
> >
> > I hear that you are saying that this network solves speed of transmission
> > and thereby (technical) block size issues. Presumably it would solve
> speed
> > of block validation too by prevalidating transactions.
>
>
> Correct. Bitcoin Core has cached validation for many years now... if
> not for that and other optimizations, things would be really broken
> right now. :)
>
> > Assuming this is all
> > true, and I have no reason to doubt that at this point, I do not
> understand
> > why there is any discussion at all about the (technical) impact of large
> > blocks, why there are large numbers of miners building on invalid blocks
> > (SPV mining, https://bitcoin.org/en/alert/2015-07-04-spv-mining), or why
> > there is any discussion about the speed of block validation (cpu
> processing
> > time to verify blocks and transactions in blocks being a limitation).
>
> I'm also mystified by a lot of the large block discussion, much of it
> is completely divorced from the technology as deployed; much less what
> we-- in industry-- know to be possible. I don't blame you or anyone in
> particular on this; it's a new area and we don't yet know what we need
> to know to know what we need to know; or to the extent that we do it
> hasn't had time to get effectively communicated.
>
> The technical/security implications of larger blocks are related to
> other things than propagation time, if you assume people are using the
> available efficient relay protocol (or better).
>
> SPV mining is a bit of a misnomer (If I coined the term, I'm sorry).
> What these parties are actually doing is blinding mining on top of
> other pools' stratum work. You can think of it as sub-pooling with
> hopping onto whatever pool has the highest block (I'll call it VFSSP
> in this post-- validation free stratum subpooling).  It's very easy to
> implement, and there are other considerations.
>
> It was initially deployed at a time when a single pool in Europe has
> amassed more than half of the hashrate. This pool had propagation
> problems and a very high orphan rate, it may have (perhaps
> unintentionally) been performing a selfish mining attack; mining off
> their stratum work was an easy fix which massively cut down the orphan
> rates for anyone who did it.  This was before the relay network
> protocol existed (the fact that all the hashpower was consolidating on
> a single pool was a major motivation for creating it).
>
> VFSSP also cuts through a number of practical issues miners have had:
> Miners that run their own bitcoin nodes in far away colocation
> (>100ms) due to local bandwidth or connectivity issues (censored
> internet); relay network hubs not being anywhere near by due to
> strange internet routing (e.g. japan to china going via the US for ...
> reasons...); the CreateNewBlock() function being very slow and
> unoptimized, etc.   There are many other things like this-- and VFSSP
> avoids them causing delays even when you don't understand them or know
> about them. So even when they're easily fixed the VFSSP is a more
> general workaround.
>
> Mining operations are also usually operated in a largely fire and
> forget manner. There is a long history in (esp pooled) mining where
> someone sets up an operation and then hardly maintains it after the
> fact... so some of the use of VFSSP appears to just be inertia-- we
> have better solutions now, but they they work to deploy and changing
> things involves risk (which is heightened by a lack of good
> monitoring-- participants learn they are too latent by observing
> orphaned blocks at a cost of 25 BTC each).
>
> One of the frustrating things about incentives in this space is that
> bad outcomes are possible even when they're not necessary. E.g. if a
> miner can lower their orphan rate by deploying a new protocol (or
> simply fixing some faulty hardware in their infrastructure, like
> Bitcoin nodes running on cheap VPSes with remote storage)  OR they can
> lower their orphan rate by pointing their hashpower at a free
> centeralized pool, they're likely to do the latter because it takes
> less effort.
> ___
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>
___
bitcoin-dev ma

Re: [bitcoin-dev] Idea: Efficient bitcoin block propagation

2015-08-06 Thread Olaoluwa Osuntokun via bitcoin-dev
Other than the source code, the best documentation I've come across is a few
lines on IRC explaining the high-level design of the protocol:
https://botbot.me/freenode/bitcoin-wizards/2015-07-10/?msg=44146764&page=2

On Thu, Aug 6, 2015 at 10:18 AM Sergio Demian Lerner via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Is there any up to date documentation about TheBlueMatt relay network
> including what kind of block compression it is currently doing? (apart from
> the source code)
>
> Regards, Sergio.
>
> On Wed, Aug 5, 2015 at 7:14 PM, Gregory Maxwell via bitcoin-dev <
> bitcoin-dev@lists.linuxfoundation.org> wrote:
>
>> On Wed, Aug 5, 2015 at 9:19 PM, Arnoud Kouwenhoven - Pukaki Corp
>>  wrote:
>> > Thanks for this (direct) feedback. It would make sense that if blocks
>> can be
>> > submitted using ~5kb packets, that no further optimizations would be
>> needed
>> > at this point. I will look into the relay network transmission protocol
>> to
>> > understand how it works!
>> >
>> > I hear that you are saying that this network solves speed of
>> transmission
>> > and thereby (technical) block size issues. Presumably it would solve
>> speed
>> > of block validation too by prevalidating transactions.
>>
>>
>> Correct. Bitcoin Core has cached validation for many years now... if
>> not for that and other optimizations, things would be really broken
>> right now. :)
>>
>> > Assuming this is all
>> > true, and I have no reason to doubt that at this point, I do not
>> understand
>> > why there is any discussion at all about the (technical) impact of large
>> > blocks, why there are large numbers of miners building on invalid blocks
>> > (SPV mining, https://bitcoin.org/en/alert/2015-07-04-spv-mining), or
>> why
>> > there is any discussion about the speed of block validation (cpu
>> processing
>> > time to verify blocks and transactions in blocks being a limitation).
>>
>> I'm also mystified by a lot of the large block discussion, much of it
>> is completely divorced from the technology as deployed; much less what
>> we-- in industry-- know to be possible. I don't blame you or anyone in
>> particular on this; it's a new area and we don't yet know what we need
>> to know to know what we need to know; or to the extent that we do it
>> hasn't had time to get effectively communicated.
>>
>> The technical/security implications of larger blocks are related to
>> other things than propagation time, if you assume people are using the
>> available efficient relay protocol (or better).
>>
>> SPV mining is a bit of a misnomer (If I coined the term, I'm sorry).
>> What these parties are actually doing is blinding mining on top of
>> other pools' stratum work. You can think of it as sub-pooling with
>> hopping onto whatever pool has the highest block (I'll call it VFSSP
>> in this post-- validation free stratum subpooling).  It's very easy to
>> implement, and there are other considerations.
>>
>> It was initially deployed at a time when a single pool in Europe has
>> amassed more than half of the hashrate. This pool had propagation
>> problems and a very high orphan rate, it may have (perhaps
>> unintentionally) been performing a selfish mining attack; mining off
>> their stratum work was an easy fix which massively cut down the orphan
>> rates for anyone who did it.  This was before the relay network
>> protocol existed (the fact that all the hashpower was consolidating on
>> a single pool was a major motivation for creating it).
>>
>> VFSSP also cuts through a number of practical issues miners have had:
>> Miners that run their own bitcoin nodes in far away colocation
>> (>100ms) due to local bandwidth or connectivity issues (censored
>> internet); relay network hubs not being anywhere near by due to
>> strange internet routing (e.g. japan to china going via the US for ...
>> reasons...); the CreateNewBlock() function being very slow and
>> unoptimized, etc.   There are many other things like this-- and VFSSP
>> avoids them causing delays even when you don't understand them or know
>> about them. So even when they're easily fixed the VFSSP is a more
>> general workaround.
>>
>> Mining operations are also usually operated in a largely fire and
>> forget manner. There is a long history in (esp pooled) mining where
>> someone sets up an operation and then hardly maintains it after the
>> fact... so some of the use of VFSSP appears to just be inertia-- we
>> have better solutions now, but they they work to deploy and changing
>> things involves risk (which is heightened by a lack of good
>> monitoring-- participants learn they are too latent by observing
>> orphaned blocks at a cost of 25 BTC each).
>>
>> One of the frustrating things about incentives in this space is that
>> bad outcomes are possible even when they're not necessary. E.g. if a
>> miner can lower their orphan rate by deploying a new protocol (or
>> simply fixing some faulty hardware in their infrastructure, like
>> Bitcoin nodes running on chea

Re: [bitcoin-dev] Idea: Efficient bitcoin block propagation

2015-08-06 Thread Tom Harding via bitcoin-dev
On 8/6/2015 10:16 AM, Sergio Demian Lerner via bitcoin-dev wrote:
> Is there any up to date documentation about TheBlueMatt relay network
> including what kind of block compression it is currently doing? (apart
> from the source code)
>

Another question.

Did the "relay network" relay
009cc829aa25b40b2cd4eb83dd498c12ad0d26d90c439d99, the
BTC Nuggets block that was invalid post-softfork?  If so,

 - Is there reason to believe that by so doing, it contributed to the
growth of the 2015-07-04 fork?
 - Will the relay network at least validate block version numbers in the
future?

___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Idea: Efficient bitcoin block propagation

2015-08-06 Thread Gregory Maxwell via bitcoin-dev
On Thu, Aug 6, 2015 at 6:17 PM, Tom Harding via bitcoin-dev
 wrote:
>  - Will the relay network at least validate block version numbers in the
> future?

It already validates block version numbers.

It only relays valid transactions.

Although, the block relaying itself is explicitly "unvalidated" and
the software client can only usefully be used with a mempool
maintaining full node (otherwise it doesn't provide much value,
because the node must wait to validate the things). ... but that
doesn't actually mean no validation at all is performed, many
stateless checks are performed.

On Thu, Aug 6, 2015 at 5:16 PM, Sergio Demian Lerner via bitcoin-dev
 wrote:
> Is there any up to date documentation about TheBlueMatt relay network
> including what kind of block compression it is currently doing? (apart from
> the source code)

I don't know if Matt has an extensive writeup. But the basic
optimization it performs is trivial.  I wouldn't call it compression,
though it does have some analog to RTP "header compression".

All it does is relay transactions verified by a local node and keeps a
FIFO of the relayed transactions in both directions, which is
synchronous on each side.

When a block is recieved on either side, it replaces transactions with
their indexes in the FIFO and relays it along. Transactions not in the
fifo are escaped and sent whole. On the other side the block is
reconstructed using the stored data and handed to the node (where the
preforwarded transactions would have also been pre-validated).

There is some more than basic elaboration for resource management
(e.g. multiple queues for different transaction sizes)-- and more
recently using block templates to learn transaction priority be a bit
more immune to spam attacks, but its fairly simple.

Much better could be done about intelligently managing the queues or
efficiently transmitting the membership sets, etc.  It's just
basically the simplest thing that isn't completely stupid.
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Fwd: Block size following technological growth

2015-08-06 Thread Michael Naber via bitcoin-dev
How many nodes are necessary to ensure sufficient network reliability? Ten,
a hundred, a thousand? At what point do we hit the point of diminishing
returns, where adding extra nodes starts to have negligible impact on the
overall reliability of the system?





On Thu, Aug 6, 2015 at 10:26 AM, Pieter Wuille via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> On Thu, Aug 6, 2015 at 5:06 PM, Gavin Andresen 
> wrote:
>
>> On Thu, Aug 6, 2015 at 10:53 AM, Pieter Wuille 
>> wrote:
>>
>>> So if we would have 8 MB blocks, and there is a sudden influx of users
>>> (or settlement systems, who serve much more users) who want to pay high
>>> fees (let's say 20 transactions per second) making the block chain
>>> inaccessible for low fee transactions, and unreliable for medium fee
>>> transactions (for any value of low, medium, and high), would you be ok with
>>> that?
>>
>>
>> Yes, that's fine. If the network cannot handle the transaction volume
>> that people want to pay for, then the marginal transactions are priced out.
>> That is true today (otherwise ChangeTip would be operating on-blockchain),
>> and will be true forever.
>>
>
> The network can "handle" any size. I believe that if a majority of miners
> forms SPV mining agreements, then they are no longer affected by the block
> size, and benefit from making their blocks slow to validate for others (as
> long as the fee is negligable compared to the subsidy). I'll try to find
> the time to implement that in my simulator. Some hardware for full nodes
> will always be able to validate and index the chain, so nobody needs to run
> a pesky full node anymore and they can just use a web API to validate
> payments.
>
> Being able the "handle" a particular rate is not a boolean question. It's
> a question of how much security, centralization, and risk for systemic
> error we're willing to tolerate. These are not things you can just observe,
> so let's keep talking about the risks, and find a solution that we agree on.
>
>
>>
>>> If so, why is 8 MB good but 1 MB not? To me, they're a small constant
>>> factor that does not fundamentally improve the scale of the system.
>>
>>
>> "better is better" -- I applaud efforts to fundamentally improve the
>> scalability of the system, but I am an old, cranky, pragmatic engineer who
>> has seen that successful companies tackle problems that arise and are
>> willing to deploy not-so-perfect solutions if they help whatever short-term
>> problem they're facing.
>>
>
> I don't believe there is a short-term problem. If there is one now, there
> will be one too at 8 MB blocks (or whatever actual size blocks are
> produced).
>
>
>>
>>
>>> I dislike the outlook of "being forever locked at the same scale" while
>>> technology evolves, so my proposal tries to address that part. It
>>> intentionally does not try to improve a small factor, because I don't think
>>> it is valuable.
>>
>>
>> I think consensus is against you on that point.
>>
>
> Maybe. But I believe that it is essential to not take unnecessary risks,
> and find a non-controversial solution.
>
> --
> Pieter
>
>
> ___
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>
>
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Fwd: Block size following technological growth

2015-08-06 Thread Pieter Wuille via bitcoin-dev
On Thu, Aug 6, 2015 at 8:43 PM, Michael Naber  wrote:

> How many nodes are necessary to ensure sufficient network reliability?
> Ten, a hundred, a thousand? At what point do we hit the point of
> diminishing returns, where adding extra nodes starts to have negligible
> impact on the overall reliability of the system?
>

It's not about reliability. There are plenty of nodes currently for
synchronization and other network functions.

It's about reduction of trust. Running a full node and using it verify your
transactions is how you get personal assurance that everyone on the network
is following the rules. And if you don't do so yourself, the knowledge that
others are using full nodes and relying on them is valuable. Someone just
running 1000 nodes in a data center and not using them for anything does
not do anything for this, it's adding network capacity without use.

That doesn't mean that the full node count (or the reachable full node
count even) are meaningless numbers. They are an indication of how hard it
is (for various reasons) to run/use a full node, and thus provide feedback.
But they are not the goal, just an indicator.

-- 
Pieter
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Block size following technological growth

2015-08-06 Thread Gavin Andresen via bitcoin-dev
On Thu, Aug 6, 2015 at 1:15 PM, Jorge Timón  wrote:

> So I reformulate the question:
>
> 1) If "not now", when will it be a good time to let the "market
> minimum fee for miners to mine a transaction" rise above zero?


Two answers:

1. If you are willing to wait an infinite amount of time, I think the
minimum fee will always be zero or very close to zero, so I think it's a
silly question.

2. The "market minimum fee" should be determined by the market. It should
not be up to us to decide "when is a good time."


> 2) Do you have any criterion (automatic or not) that can result in you
> saying "no, this is too much" for any proposed size?
>

Sure, if keeping up with transaction volume requires a cluster of computers
or more than "pretty good" broadband bandwidth I think that's too far.
That's where original 20MB limit comes from, otherwise I'd have proposed a
much higher limit.


> Would you agree that blocksize increase proposals should have such a
> criterion/test?


Although I've been very clear with my criterion, no, I don't think all
blocksize increase proposals should have to justify "why this size" or "why
this rate of increase." Part of my frustration with this whole debate is
we're talking about a sanity-check upper-limit; as long as it doesn't open
up some terrible new DoS possibility I don't think it really matters much
what the exact number is.



> Regardless of the history of the consensus rule (which I couldn't care
> less about), I believe the only function that the maximum block size
> rule currently serves is limiting centralization.
> Since you deny that function, do you think the (artificial) consensus
> rule is currently serving any other purpose that I'm missing?
>

It prevents trivial denial-of-service attacks (e.g. I promise to send you a
1 Terabyte block, then fill up your memory or disk...).

And please read what I wrote: I said that the block limit has LITTLE effect
on MINING centralization.  Not "no effect on any type of centralization."

If the limit was removed entirely, it is certainly possible we'd end up
with very few organizations (and perhaps zero individuals) running full
nodes.

-- 
--
Gavin Andresen
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Block size following technological growth

2015-08-06 Thread Pieter Wuille via bitcoin-dev
On Aug 6, 2015 9:42 PM, "Gavin Andresen via bitcoin-dev" <
bitcoin-dev@lists.linuxfoundation.org> wrote:
2. The "market minimum fee" should be determined by the market. It should
not be up to us to decide "when is a good time."

I partially agree. The community should decide what risks it is willing to
take, and set limits accordingly. Let the market decide how that space is
best used.

>
>>
>> Would you agree that blocksize increase proposals should have such a
>> criterion/test?
>
>
> Although I've been very clear with my criterion, no, I don't think all
blocksize increase proposals should have to justify "why this size" or "why
this rate of increase." Part of my frustration with this whole debate is
we're talking about a sanity-check upper-limit; as long as it doesn't open
up some terrible new DoS possibility I don't think it really matters much
what the exact number is.

It is only a DoS protection limit if you want to rely on trusting miners. I
prefer a system where I don't have to do that.

But I agree the numbers don't matter much, for a different reason: the
market will fill up whatever space is available, and we'll have the same
discussion when the new limit doesn't seem enough anymore.

-- 
Pieter
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Idea: Efficient bitcoin block propagation

2015-08-06 Thread Matt Corallo via bitcoin-dev
No, don't think so, the protocol is, essentially, relay transactions, when you 
get a block, send header, iterate over transactions, for each, either use two 
bytes for nth-recent-transaction-relayed, use 
0x-3-byte-length-transaction-data. There are quite a few implementation 
details, and lots of things could be improved, but that is pretty much how it 
works.

Matt

On August 6, 2015 7:16:56 PM GMT+02:00, Sergio Demian Lerner via bitcoin-dev 
 wrote:
>Is there any up to date documentation about TheBlueMatt relay network
>including what kind of block compression it is currently doing? (apart
>from
>the source code)
>
>Regards, Sergio.
>
>On Wed, Aug 5, 2015 at 7:14 PM, Gregory Maxwell via bitcoin-dev <
>bitcoin-dev@lists.linuxfoundation.org> wrote:
>
>> On Wed, Aug 5, 2015 at 9:19 PM, Arnoud Kouwenhoven - Pukaki Corp
>>  wrote:
>> > Thanks for this (direct) feedback. It would make sense that if
>blocks
>> can be
>> > submitted using ~5kb packets, that no further optimizations would
>be
>> needed
>> > at this point. I will look into the relay network transmission
>protocol
>> to
>> > understand how it works!
>> >
>> > I hear that you are saying that this network solves speed of
>transmission
>> > and thereby (technical) block size issues. Presumably it would
>solve
>> speed
>> > of block validation too by prevalidating transactions.
>>
>>
>> Correct. Bitcoin Core has cached validation for many years now... if
>> not for that and other optimizations, things would be really broken
>> right now. :)
>>
>> > Assuming this is all
>> > true, and I have no reason to doubt that at this point, I do not
>> understand
>> > why there is any discussion at all about the (technical) impact of
>large
>> > blocks, why there are large numbers of miners building on invalid
>blocks
>> > (SPV mining, https://bitcoin.org/en/alert/2015-07-04-spv-mining),
>or why
>> > there is any discussion about the speed of block validation (cpu
>> processing
>> > time to verify blocks and transactions in blocks being a
>limitation).
>>
>> I'm also mystified by a lot of the large block discussion, much of it
>> is completely divorced from the technology as deployed; much less
>what
>> we-- in industry-- know to be possible. I don't blame you or anyone
>in
>> particular on this; it's a new area and we don't yet know what we
>need
>> to know to know what we need to know; or to the extent that we do it
>> hasn't had time to get effectively communicated.
>>
>> The technical/security implications of larger blocks are related to
>> other things than propagation time, if you assume people are using
>the
>> available efficient relay protocol (or better).
>>
>> SPV mining is a bit of a misnomer (If I coined the term, I'm sorry).
>> What these parties are actually doing is blinding mining on top of
>> other pools' stratum work. You can think of it as sub-pooling with
>> hopping onto whatever pool has the highest block (I'll call it VFSSP
>> in this post-- validation free stratum subpooling).  It's very easy
>to
>> implement, and there are other considerations.
>>
>> It was initially deployed at a time when a single pool in Europe has
>> amassed more than half of the hashrate. This pool had propagation
>> problems and a very high orphan rate, it may have (perhaps
>> unintentionally) been performing a selfish mining attack; mining off
>> their stratum work was an easy fix which massively cut down the
>orphan
>> rates for anyone who did it.  This was before the relay network
>> protocol existed (the fact that all the hashpower was consolidating
>on
>> a single pool was a major motivation for creating it).
>>
>> VFSSP also cuts through a number of practical issues miners have had:
>> Miners that run their own bitcoin nodes in far away colocation
>> (>100ms) due to local bandwidth or connectivity issues (censored
>> internet); relay network hubs not being anywhere near by due to
>> strange internet routing (e.g. japan to china going via the US for
>...
>> reasons...); the CreateNewBlock() function being very slow and
>> unoptimized, etc.   There are many other things like this-- and VFSSP
>> avoids them causing delays even when you don't understand them or
>know
>> about them. So even when they're easily fixed the VFSSP is a more
>> general workaround.
>>
>> Mining operations are also usually operated in a largely fire and
>> forget manner. There is a long history in (esp pooled) mining where
>> someone sets up an operation and then hardly maintains it after the
>> fact... so some of the use of VFSSP appears to just be inertia-- we
>> have better solutions now, but they they work to deploy and changing
>> things involves risk (which is heightened by a lack of good
>> monitoring-- participants learn they are too latent by observing
>> orphaned blocks at a cost of 25 BTC each).
>>
>> One of the frustrating things about incentives in this space is that
>> bad outcomes are possible even when they're not necessary. E.g. if a
>> miner can lower their orphan ra

Re: [bitcoin-dev] Idea: Efficient bitcoin block propagation

2015-08-06 Thread Matt Corallo via bitcoin-dev


On August 6, 2015 8:42:38 PM GMT+02:00, Gregory Maxwell via bitcoin-dev 
 wrote:
>On Thu, Aug 6, 2015 at 6:17 PM, Tom Harding via bitcoin-dev
> wrote:
>>  - Will the relay network at least validate block version numbers in
>the
>> future?
>
>It already validates block version numbers.
>
>It only relays valid transactions.
>
>Although, the block relaying itself is explicitly "unvalidated" and
>the software client can only usefully be used with a mempool
>maintaining full node (otherwise it doesn't provide much value,
>because the node must wait to validate the things). ... but that
>doesn't actually mean no validation at all is performed, many
>stateless checks are performed.
>
>On Thu, Aug 6, 2015 at 5:16 PM, Sergio Demian Lerner via bitcoin-dev
> wrote:
>> Is there any up to date documentation about TheBlueMatt relay network
>> including what kind of block compression it is currently doing?
>(apart from
>> the source code)
>
>I don't know if Matt has an extensive writeup. But the basic
>optimization it performs is trivial.  I wouldn't call it compression,
>though it does have some analog to RTP "header compression".
>
>All it does is relay transactions verified by a local node and keeps a
>FIFO of the relayed transactions in both directions, which is
>synchronous on each side.
>
>When a block is recieved on either side, it replaces transactions with
>their indexes in the FIFO and relays it along. Transactions not in the
>fifo are escaped and sent whole. On the other side the block is
>reconstructed using the stored data and handed to the node (where the
>preforwarded transactions would have also been pre-validated).
>
>There is some more than basic elaboration for resource management
>(e.g. multiple queues for different transaction sizes)-- and more

No, just one queue, but it has a count-of-oversize-txn-limit, in addition to a 
size.

>recently using block templates to learn transaction priority be a bit
>more immune to spam attacks, but its fairly simple.

Except it doesn't really work :( (see 
https://github.com/TheBlueMatt/RelayNode/issues/12#issuecomment-128234446)

>Much better could be done about intelligently managing the queues or
>efficiently transmitting the membership sets, etc.  It's just
>basically the simplest thing that isn't completely stupid.

Patches welcome :) (read the issues list first... Rewriting the protocol from 
scratch is by far not the biggest win here).

Matt
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Idea: Efficient bitcoin block propagation

2015-08-06 Thread Matt Corallo via bitcoin-dev


On August 6, 2015 8:17:35 PM GMT+02:00, Tom Harding via bitcoin-dev 
 wrote:
>On 8/6/2015 10:16 AM, Sergio Demian Lerner via bitcoin-dev wrote:
>> Is there any up to date documentation about TheBlueMatt relay network
>> including what kind of block compression it is currently doing?
>(apart
>> from the source code)
>>
>
>Another question.
>
>Did the "relay network" relay
>009cc829aa25b40b2cd4eb83dd498c12ad0d26d90c439d99, the
>BTC Nuggets block that was invalid post-softfork?  If so,

The version check was only added hours after the initial fork, so it should 
have (assuming BTC Nuggets or anyone who accepted it is running a client)

> - Is there reason to believe that by so doing, it contributed to the
>growth of the 2015-07-04 fork?

The reason other miners mined on that fork is because they were watching each 
other's stratum servers, so the relay network should not have had a significant 
effect. Still, even in a different fork, miners already aggressively relay 
around the network/between each other, so I'm not so worried.

>- Will the relay network at least validate block version numbers in the
>future?
>
>___
>bitcoin-dev mailing list
>bitcoin-dev@lists.linuxfoundation.org
>https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev

___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Block size following technological growth

2015-08-06 Thread Jorge Timón via bitcoin-dev
Really, thanks again for replying and not getting mad when I get your
thoughts wrong.
I believe that I've learned more about your position on the subject
today than in months of discussion and blogs (that's not a critique to
your blog post, it's just that they didn't answer to some questions
that I personally needed responded).

On Thu, Aug 6, 2015 at 9:42 PM, Gavin Andresen  wrote:
> On Thu, Aug 6, 2015 at 1:15 PM, Jorge Timón  wrote:
>>
>> So I reformulate the question:
>>
>> 1) If "not now", when will it be a good time to let the "market
>> minimum fee for miners to mine a transaction" rise above zero?
>
>
> Two answers:
>
> 1. If you are willing to wait an infinite amount of time, I think the
> minimum fee will always be zero or very close to zero, so I think it's a
> silly question.

I'm very happy to have made the stupid question then. It has revealed
another big difference in the fundamental assumptions we're using.

My assumption is that for any reasonable size, free transactions will
eventually disappear (assuming Bitcoin doesn't "fail" for some other
reason).
Maybe I'm being too optimistic about the demand side of the market in
the long term.

In contrast, your assumption seems to be (and please correct me on
anything I get wrong) that...

"The limit will always be big enough so that free transactions are
mined forever. Therefore fees just allow users to prioritize their
urgent transactions and relay policies to protect their nodes against
DoS attacks.
Well, obviously, they also serve to pay for mining in a low-subsidy
future, but even with the presence of free transactions, fees will be
enough to cover mining costs, or a new mechanisms will be developed to
make a low-total-reward blockchain safe or expensive proof of work
will be replaced or complemented with something else that's cheaper.
The main point is that fees are not a mechanism to decide what gets
priced out of the blockchain, because advancements in technology will
always give as enough room for free transactions."
- jtimon putting words in Gavin's mouth, with the only intention to
understand him better.

I'm using "free transactions" even though you said "zero or very close to zero".
To you, "zero or very close to zero" may be the same thing, but to me
zero and very close to zero are like...different galaxies.
To me, entering the "very close to zero galaxy" is a huge step in the
development of the fee market.
I've been always assuming that moving from zero to 1 satoshi was
precisely what "big block advocates" wanted to avoid.
What they meant by "Bitcoin is going to become a high-value only
network" and similar things.
Knowing that for "big block advocates" zero and "very close to zero"
are equally acceptable changes things.

> 2. The "market minimum fee" should be determined by the market. It should
> not be up to us to decide "when is a good time."

I completely agree, but the block size limit is a consensus rule that
doesn't adapt to the market. The market will adapt to whatever limit
is chosen by the consensus rules.

>> 2) Do you have any criterion (automatic or not) that can result in you
>> saying "no, this is too much" for any proposed size?
>
>
> Sure, if keeping up with transaction volume requires a cluster of computers
> or more than "pretty good" broadband bandwidth I think that's too far.
> That's where original 20MB limit comes from, otherwise I'd have proposed a
> much higher limit.
>
>> Would you agree that blocksize increase proposals should have such a
>> criterion/test?
>
>
> Although I've been very clear with my criterion, no, I don't think all
> blocksize increase proposals should have to justify "why this size" or "why
> this rate of increase."

I would really like a more formal criterion, ideally automatic (like
any other test, the parameters can be modified as technology
advances).
But fair enough, even though your criterion is too vague or not
future-proof enough, I guess it is still a criterion.
It seems that this is a matter of disagreements and ideal ways of
doing things and not really a disagreement on fundamental assumptions.
So it seems this question wasn't so interesting after all.

> Part of my frustration with this whole debate is
> we're talking about a sanity-check upper-limit; as long as it doesn't open
> up some terrible new DoS possibility I don't think it really matters much
> what the exact number is.

That's what you think you are discussing, but I (and probably some
other people) think we are discussing something entirely different.
Because we have a fundamentally different assumption on what the block
size limit is about.
I really hope that identifying these "fundamental assumption
discrepancies" (FAD from now own) will help us avoid circular
discussions so that everything is less frustrating and more productive
for everyone.

>> Regardless of the history of the consensus rule (which I couldn't care
>> less about), I believe the only function that the maximum block size
>> rule currently serv

Re: [bitcoin-dev] Block size following technological growth

2015-08-06 Thread Peter Todd via bitcoin-dev
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256



On 6 August 2015 10:21:54 GMT-04:00, Gavin Andresen via bitcoin-dev 
 wrote:
>On Thu, Aug 6, 2015 at 10:06 AM, Pieter Wuille
>
>wrote:
>
>> But you seem to consider that a bad thing. Maybe saying that you're
>> claiming that this equals Bitcoin failing is an exaggeration, but you
>do
>> believe that evolving towards an ecosystem where there is competition
>for
>> block space is a bad thing, right?
>>
>
>No, competition for block space is good.
>
>What is bad is artificially limiting or centrally controlling the
>supply of
>that space.

Incidentally, why is that competition good? What specific design goal is that 
competition achieving?

-BEGIN PGP SIGNATURE-

iQE9BAEBCAAnIBxQZXRlciBUb2RkIDxwZXRlQHBldGVydG9kZC5vcmc+BQJVw9fn
AAoJEMCF8hzn9Lnc47AH/idUy2rGlUCBTTU/jDpNjMy5VGYYRawx50lrnGBufvIJ
8ZbFleI+gbnFCaJiaPF9ZN0mTjFWv7YcFzlwoPam11UfhEYI2Cl1aGha+R7g/18t
+1256i4Ykg0uEqrX9ITpYyzoBsVMaqsaOGBbJbUUtHoD1V1GCYBYi5JAl1msGjH/
2o+/Gh7gBB1Ll6SPtgeM1cCudRXA7PJr3WTjkLy8oGKY7lmVsPUfQ7h3OBJMTwa5
B+i1KTpSWdWyciWk0a3z7cxNfaajd7Pj3jZYoeCzKJdZja7lnB7FzUnaPE3y0wse
Bby6w48R4VeYsVhM+GolvRDVVSQN9XNfRSiRjuMW4Eg=
=wzhz
-END PGP SIGNATURE-

___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Block size following technological growth

2015-08-06 Thread Elliot Olds via bitcoin-dev
On Wed, Aug 5, 2015 at 6:26 PM, Jorge Timón  wrote:
>
>
> Given that for any non-absurdly-big size some transactions will
> eventually be priced out, and that the consensus rule serves for
> limiting mining centralization (and more indirectly centralization in
> general) and not about trying to set a given average transaction fee,
> I think the current level of mining centralization will always be more
> relevant than the current fee level when discussing any change to the
> consensus rule to limit centralization (at any point in time).
> In other words, the question "can we change this without important
> risks of destroying the decentralized properties of the system in the
> short or long run?" should be always more important than "is there a
> concerning rise in fees to motivate this change at all?".
>

I agree with you that decentralization is the most important feature of
Bitcoin, but I also think we need to think probabilistically and concretely
about when risks to decentralization are worthwhile.

Decentralization is not infinitely valuable in relation to low fees, just
like being alive is not infinitely valuable in relation to having money.
For instance, people will generally not accept a 100% probability of death
in exchange for any amount of money. However anyone who understands
probability and has the preferences of a normal person would play a game
where they accept a one in a billion chance of instant death to win one
billion dollars if they don't die.

Similarly we shouldn't accept a 100% probability of Bitcoin being
controlled by a single entity for any guarantee of cheap tx fees no matter
how low they are, but there should be some minuscule risk of
decentralization that we'd be willing to accept (like raising the block
size to 1.01 MB) if it somehow allowed us to dramatically increase
usability. (Imagine something like the Lightning Network but even better
was developed, but it could only work with 1.01 MB blocks).



> > Jorge, if a fee equilibrium developed at 1MB of $5/tx, and you somehow
> knew
> > with certainty that increasing to 4MB would result in a 20 cent/tx
> > equilibrium that would last for a year (otherwise fees would stay around
> $5
> > for that year), would you be in favor of an increase to 4MB?
>
> As said, I would always consider the centralization risks first: I'd
> rather have a $5/tx decentralized Bitcoin than a Bitcoin with free
> transactions but effectively validated (when they validate blocks they
> mine on top of) by around 10 miners, specially if only 3 of them could
> easily collude to censor transactions [orphaning any block that
> doesn't censor in the same manner]. Sadly I have no choice, the later
> is what we have right now. And reducing the block size can't guarantee
> that the situation will get better or even that fees could rise to
> $5/tx (we just don't have that demand, not that it is a goal for
> anyone). All I know is that increasing the block size *could*
> (conditional, not necessarily, I don't know in which cases, I don't
> think anybody does) make things even worse.
>

I agree that we don't have good data about what exactly a 4 MB increase
would do. It sounds like you think the risks are too great / uncertain to
move from 1 MB to 4 MB blocks in the situation I described. I'm not clear
though on which specific risks you'd be most worried about at 4 MB, and if
there are any risks that you think don't matter at 4 MB but that you would
be worried about at higher block size levels. I also don't know if we have
similar ideas about the benefits of low tx fees. If we discussed exactly
how we were evaluating this scenario, maybe we'd discover that something I
thought was a huge benefit of low tx fees is actually not that compelling,
or maybe we'd discover that our entire disagreement boiled down to our
estimate of one specific risk.

For the record, I think these are the main harms of $5 tx fees, along with
the main risks I see from moving to 4 MB:

Fees of $5/tx would:
(a) Prevent a lot of people who could otherwise benefit from Bitcoin's
decentralization from having an opportunity to reap those benefits.
Especially people in poor countries with corrupt governments who could get
immediate benefit from it now.
(b) Prevent developers from experimenting with new Bitcoin use-cases, which
might eventually lead to helpful services.
(c) Prevent regular people from using Bitcoin and experimenting with it and
getting involved, because they think it's unusable for txns under many
hundreds of dollars in value, so it doesn't interest them. Not having the
public on our side could make us more vulnerable to regulators.

Changing the block size to 4 MB would:
(1) Probably reduce the number of full nodes by around 5%. Most of the drop
in full nodes over the past few years is probably due to Bitcoin Core being
used by fewer regular users for convenience reasons, but the extra HD space
required and extra bandwidth would probably make some existing people
running full nod

[bitcoin-dev] Wrapping up the block size debate with voting

2015-08-06 Thread Dave Scotese via bitcoin-dev
"Miners can do this unilaterally" maybe, if they are a closed group, based
on the 51% rule. But aren't they using full nodes for propagation?  In this
sense, anyone can vote by coding.

If and when we need to vote, a pair-wise runoff ("condorcet method") will
find an option that is championed by a majority over each other option.
There may not be any such option, in which case no change would make the
most sense.

The voting proposal has several appeals to authority (which no one has)
like "People with certain amount of contribution" and "Exchanges operated
for at least 1 year with 100,000BTC 30-day volume may also apply": who
decided what amount or whether or not the application is approved?  It also
doesn't specify how many btc equates to one vote.





> On Aug 4, 2015, at 12:50 AM, jl2012 via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org > wrote:
>
> As now we have some concrete proposals (
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-July/009808.html),
I think we should wrap up the endless debate with voting by different
stakeholder groups.
>
> -
> Candidate proposals
>
> Candidate proposals must be complete BIPs with reference implementation
which are ready to merge immediately. They must first go through the usual
peer review process and get approved by the developers in a technical
standpoint, without political or philosophical considerations. Any fine
tune of a candidate proposal may not become an independent candidate,
unless it introduces some “real” difference. “No change” is also one of the
voting options.
> -
> Voter groups
>
> There will be several voter groups and their votes will be counted
independently. (The time frames mentioned below are just for example.)
>
> Miners: miners of blocks with timestamp between 1 to 30 Sept 2015 are
eligible to vote. One block one vote. Miners will cast their votes by
signing with the bitcoin address in coinbase. If there are multiple
coinbase outputs, the vote is discounted by output value / total coinbase
output value.
> Many well-known pools are reusing addresses and they may not need to
digitally sign their votes. In case there is any dispute, the digitally
signed vote will be counted.
>
> Bitcoin holders: People with bitcoin in the UTXO at block 372500 (around
early September) are eligible to vote. The total “balance” of each
scriptPubKey is calculated and this is the weight of the vote. People will
cast their votes by digital signature.
> Special output types:
> Multi-sig: vote must be signed according to the setting of the multi-sig.
> P2SH: the serialized script must be provided
> Publicly known private key: not eligible to vote
> Non-standard script according to latest Bitcoin Core rules: not eligible
to vote in general. May be judged case-by-case
>
> Developers: People with certain amount of contribution in the past year
in Bitcoin Core or other open sources wallet / alternative implementations.
One person one vote.
>
> Exchanges: Centralized exchanges listed on Coindesk Bitcoin Index,
Winkdex, or NYSE Bitcoin index, with 30 days volume >100,000BTC are
invited. This includes Bitfinex, BTC China, BitStamp, BTC-E, itBit, OKCoin,
Huobi, Coinbase. Exchanges operated for at least 1 year with 100,000BTC
30-day volume may also apply to be a voter in this category. One exchange
one vote.
>
> Merchants and service providers: This category includes all bitcoin
accepting business that is not centralized fiat-currency exchange, e.g.
virtual or physical stores, gambling sites, online wallet service, payment
processors like Bitpay, decentralized exchange like Localbitcoin, ETF
operators like Secondmarket Bitcoin Investment Trust. They must directly
process bitcoin without relying on third party. They should process at
least 100BTC in the last 30-days. One merchant one vote.
>
> Full nodes operators: People operating full nodes for at least 168 hours
(1 week) in July 2015 are eligible to vote, determined by the log of
Bitnodes. Time is set in the past to avoid manipulation. One IP address one
vote. Vote must be sent from the node’s IP address.
>
> 
> Voting system
>
> Single transferable vote is applied. (
https://en.wikipedia.org/wiki/Single_transferable_vote). Voters are
required to rank their preference with “1”, “2”, “3”, etc, or use “N” to
indicate rejection of a candidate.
> Vote counting starts with every voter’s first choice. The candidate with
fewest votes is eliminated and those votes are transferred according to
their second choice. This process repeats until only one candidate is left,
which is the most popular candidate. The result is presented as the
approval rate: final votes for the most popular candidate / all valid votes
>
> After the most popular candidate is determined, the whole counting
process is repeated by eliminating this candidate, which will find the
approval rate for the second most popular candidate. The process repeats
until a

Re: [bitcoin-dev] Wrapping up the block size debate with voting

2015-08-06 Thread Pieter Wuille via bitcoin-dev
On Fri, Aug 7, 2015 at 1:26 AM, Dave Scotese via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

"Miners can do this unilaterally" maybe, if they are a closed group, based
> on the 51% rule. But aren't they using full nodes for propagation?  In this
> sense, anyone can vote by coding.
>

They don't need to use full nodes for propagation. Miners don't care when
other full nodes hear about their blocks, only whether they (eventually)
accept them.

And yes, full nodes can change what blocks they accept. That's called a
hard fork :)

-- 
Pieter
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


[bitcoin-dev] Block size implementation using Game Theory

2015-08-06 Thread Wes Green via bitcoin-dev
Bitcoin is built on game theory. Somehow we seem to have forgotten that and
are trying to fix our "block size issue" with magic numbers, projected
percentage growth of bandwidth speeds, time limits, etc... There are
instances where these types of solutions make sense, but this doesn't
appear to be one of them. Lets return to game theory.

Proposal: Allow any miner to, up to, double the block size at any given
time - but penalize them. Using the normal block reward, whatever
percentage increase the miner makes over the previous limit is taken from
both the normal reward and fees. The left over is rewarded to the next
miner that finds a block.

If blocks stay smaller for an extended period of time, it goes back down to
the previous limit/ x amount decrease/% decrease  (up for debate)

Why would this work?: Miners only have incentive to do raise the limit when
they feel there is organic growth in the network. Spam attacks, block bloat
etc would have to be dealt with as it is currently. There is no incentive
to raise the size for spam because it will subside and the penalty will
have been for nothing when the attack ends and block size goes back down.

I believe it would have the nice side effect of forcing miners to hold the
whole block chain. I believe SPV does not allow you to see all the
transactions in a block and be able to calculate if you should be adding
more to your reward transaction if the last miner made the blocks bigger.
Because of this, the miners would also have an eye on blockchain size and
wont want it getting huge too fast (outsize of Moore's law of Nielsen's
Law). Adding to the gamification.

This system would encourage block size growth due to organic growth and the
penalty would encourage it to be slow as to still keep reward high and
preserve ROE.

What this would look like: The miners start seeing what looks like natural
network growth, and make the decision (or program an algorithm, the beauty
is it leaves the "how" up to the miners) to increase the blocksize. They
think that, in the long run, having larger blocks will increase their
revenue and its worth taking the hit now for more fees later. They increase
the size to 1.25 MB. As a result, they reward would be 18.75 (75%). The
miner fees were .5BTC. The miner fees are also reduced to .375BTC. Everyone
who receives that block can easily calculate 1) if the previous miner gave
themselves the proper reward 2) what the next reward should be if they win
it. Miners now start building blocks with a 31.25 reward transaction and
miner fee + .125.
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Wrapping up the block size debate with voting

2015-08-06 Thread Will via bitcoin-dev
I think the key is comity and humility in terms of being honest about our 
inability to predict future trends in a meaningful way while passing through 
scrutiny coming from divergent perspectives.  8MB + 40% annually (at whatever 
increase frequency is preferred, at coinbase halvings seems most ideal) is the 
proposal to use.

What if 40% is too fast?   
If 40% turns out to be excessive, miners have a built in incentive to cap their 
blocks at a lower amount that meets the supply / demand equilibrium vs. the 
price of bitcoin trading on the free market.   The key here is to understand 
“price of bitcoin on the free market”.  Most developers don’t understand free 
market economics beyond gambling, which is a good bit of the problem here.  

Bottom line is, miners directly benefit from higher bitcoin prices when 
denominated in other currencies.  This fact will naturally limit excessive 
growth of blk*.dat. size, because if the storage requirements for bitcoin grow 
out of reach of amateurs, it will lead to excessive centralization and capture 
by powerful interests, which would threaten to convert bitcoin to a form of 
sovereign currency and kill free demand for bitcoin (and tank the price).  
Miners don’t want that without some other body paying them to make econonically 
distorted decisions.  They will limit the size themselves if they see this as 
an emerging threat.  It’s in their best interests and will keep them in 
business longer.

Now, is there a risk that some excessively well funded entity could 
artificially inflate the price of bitcoin while this bribing miners to let 
blk*.dat size grow out of hand (distorting miner economics) in some sort of 
“subsidies for increased liquidity” corruption scheme?  No there isn’t, because 
we are going to have a cap on the size that is reasonable enough that we know 
it won’t force out any amateurs for at least 5 years, and likely longer.

What if 40% is too slow?
That’s a problem anyone who actually owns bitcoin would like to have.  We’ll 
gladly support an increase in the rate if demand supports it later with a 
subsequent change.  We’ll have to do this eventually anyway when SHA256 and 
RIPEMD160 exhibit collisions, or some other non-self imposed existential threat 
rears its head.

Long game
Now, this entire scheme is predicated on the price going higher over the 
extended term.  I would argue that if we are doing a good job, the price should 
go higher.  Isn’t that the best barometer of performance?  Demand for the 
scarce units inside the protocol denominated in other currencies?

8MB is 1MB + 40% a year from January 2009 to today.  40% a year is as good as 
we are going to get in terms of our extrapolated estimation of future ability 
to host full nodes.  Anything else is overengineering something we can’t 
predict anyway.  Any arguments against this setting and rate of growth that 
aren’t exclusively focused on the computer science side of the debate are 
misguided at best, and corrupted by competing incentives at worst.  This is 
because it’s not possible to predict the future any better than using an 
extrapolation of the past, which is exactly what 8MB + 40% represents.

So TLDR?  Go with 8MB + 40% annually and we will cross any (likely imaginary) 
bridges when we come to them down the road.

Will

From: Pieter Wuille via bitcoin-dev 
Reply: Pieter Wuille >
Date: August 6, 2015 at 5:32:20 PM
To: Dave Scotese >
Cc: Bitcoin Dev >
Subject:  Re: [bitcoin-dev] Wrapping up the block size debate with voting  

On Fri, Aug 7, 2015 at 1:26 AM, Dave Scotese via bitcoin-dev 
 wrote:

"Miners can do this unilaterally" maybe, if they are a closed group, based on 
the 51% rule. But aren't they using full nodes for propagation?  In this sense, 
anyone can vote by coding.

They don't need to use full nodes for propagation. Miners don't care when other 
full nodes hear about their blocks, only whether they (eventually) accept them.

And yes, full nodes can change what blocks they accept. That's called a hard 
fork :)

--
Pieter
 

___  
bitcoin-dev mailing list  
bitcoin-dev@lists.linuxfoundation.org  
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev  
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Block size implementation using Game Theory

2015-08-06 Thread Anthony Towns via bitcoin-dev
On 7 August 2015 at 09:52, Wes Green via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:​

> Bitcoin is built on game theory. Somehow we seem to have forgotten that
> and are trying to fix our "block size issue" with magic numbers, projected
> percentage growth of bandwidth speeds, time limits, etc... There are
> instances where these types of solutions make sense, but this doesn't
> appear to be one of them. Lets return to game theory.
>
> Proposal: Allow any miner to, up to, double the block size at any given
> time - but penalize them. Using the normal block reward, whatev
> ​H​
> er percentage increase the miner makes over the previous limit is taken
> from both the normal reward and fees. The left over is rewarded to the next
> miner that finds a block.
> ​​
>
> What this would look like: The miners start seeing what looks like natural
> network growth, and make the decision (or program an algorithm, the beauty
> is it leaves the "how" up to the miners) to increase the blocksize. They
> think that, in the long run, having larger blocks will increase their
> revenue and its worth taking the hit now for more fees later. They increase
> the size to 1.25 MB. As a result, they reward would be 18.75 (75%). The
> miner fees were .5BTC. The miner fees are also reduced to .375BTC.
>

The equilibrium for that game is just keeping the same block size, isn't
it? Assume there's a huge backlog of fee-paying transactions, such that you
can trivially fill 1MB, and easily fill 1.25MB for the forseeable future;
fee per MB is roughly stable at "f". Then at any point in time, a miner has
the choice between receiving 25 + f btc, or increasing the blocksize to 1+p
MB and earning (25+f+pf) * (1-p) = f-ppf = 25(1-p) + f(1-pp) < 25+f. Even
if you think bigger blocks are good long term, wouldn't you reason that
other people will think so too, so why pay the price for it yourself,
instead of waiting for someone else to pay it and just reaping the benefit?


An idea I've been pondering is having the block size adjustable in
proportion to fees being paid. Something like "a block is invalid if
a(B-c)*B > F" where B is the block's size, F is the total fees for the
block, and a and c are scaling parameters -- either hardcoded in bitcoin,
or dynamically adjusted by miner votes. ATM, bitcoin's behavour is
effectively the same as setting c=1MB, a>=21M BTC/byte.

Having a more reasonable value for a would make it much easier to produce a
fee market for bitcoin transactions -- if the blocksize is currently around
some specific "B", then the average cost per byte of a transaction is just
"a(B-c)". If you pay more than that, then a miner will include your txn
sooner, increasing the blocksize beyond average if necessary; if you pay
less, you may have to wait for a lull in transactions so that the blocksize
ends up being smaller than average and miners can afford to include your
transaction (or someone might send an unnecessarily high fee paying txn
through, and yours might get swept along with it).

To provide some real numbers, you need to make some assumptions on fee
levels. If you're happy with:

 - 1 MB blocks are fine, even if no one's paying any fees
 - if people are paying 0.1 mBTC / kB (=0.1 BTC/MB) in fees on average then
8MB is okay

then a(1-c)=0, so c=1MB, and a(8-1)=0.1, so a=0.0143 and the scaling works
out like:

 - 1MB blocks: free transactions, no fees
 - 2MB blocks: 0.0143 mBTC/kB, 0.02 btc in fees/block
 - 4MB blocks: 0.043 mBTC/kB, 0.17 btc in fees/block
 - 8MB blocks: 0.1 mBTC/kB, 0.8 btc in fees/block
 - 20MB blocks: 0.27 mBTC/kB, 5.4 btc in fees/block
 - 40MB blocks: 0.56 mBTC/kB, 22 btc in fees/block

In the short term, miners can just maximise fees for a block -- ie, add the
highest fee/byte txns in order until adding the next one would invalidate
the block.

Over the long term, you'd still want to be able to adjust a and c -- as the
price of bitcoin in USD/real terms goes up, a should decrease
proportionally; as hardware/software improve, a should decrease and/or c
should increase.  Essentially miners would want to choose a,c such that the
market for block space clears at a price of some $x/byte, where $x is
determined by their costs -- ie, hardware/software constraints. If they set
a too high, or c too low, then they'll be unable to accept some
transactions offering $x/byte, and thus lose out. If they set a too low or
c too high, they'll be mining bigger blocks for less reward, and lose out
that way too. At the moment, I think it's a bit of both problems -- c is
too low (meaning some transactions get dropped), but a is too high (meaning
fees are too low to pay for the effort of bigger blocks).

Note that, as described, miners could try cheating this plan by making a
high fee transaction to themselves but not publishing it -- they'll collect
the fee anyway, and now they can mine arbitrarily large blocks. You could
mitigate this by having a(B-c) set the /minimum/ fee/byte of every
transaction in a block,

Re: [bitcoin-dev] Block size implementation using Game Theory

2015-08-06 Thread jl2012 via bitcoin-dev
It won't work as you thought. If a miner has 95% of hashing power, he 
would have 95% of chance to find the next block and collect the penalty. 
In long term, he only needs to pay 5% penalty. It's clearly biased 
against small miners.


Instead, you should require the miners to burn the penalty. Whether this 
is a good idea is another issue.


Wes Green via bitcoin-dev 於 2015-08-06 19:52 寫到:

Bitcoin is built on game theory. Somehow we seem to have forgotten
that and are trying to fix our "block size issue" with magic numbers,
projected percentage growth of bandwidth speeds, time limits, etc...
There are instances where these types of solutions make sense, but
this doesn't appear to be one of them. Lets return to game theory.

Proposal: Allow any miner to, up to, double the block size at any
given time - but penalize them. Using the normal block reward,
whatever percentage increase the miner makes over the previous limit
is taken from both the normal reward and fees. The left over is
rewarded to the next miner that finds a block.

If blocks stay smaller for an extended period of time, it goes back
down to the previous limit/ x amount decrease/% decrease  (up for
debate)

Why would this work?: Miners only have incentive to do raise the limit
when they feel there is organic growth in the network. Spam attacks,
block bloat etc would have to be dealt with as it is currently. There
is no incentive to raise the size for spam because it will subside and
the penalty will have been for nothing when the attack ends and block
size goes back down.

I believe it would have the nice side effect of forcing miners to hold
the whole block chain. I believe SPV does not allow you to see all the
transactions in a block and be able to calculate if you should be
adding more to your reward transaction if the last miner made the
blocks bigger. Because of this, the miners would also have an eye on
blockchain size and wont want it getting huge too fast (outsize of
Moore's law of Nielsen's Law). Adding to the gamification.

This system would encourage block size growth due to organic growth
and the penalty would encourage it to be slow as to still keep reward
high and preserve ROE.

What this would look like: The miners start seeing what looks like
natural network growth, and make the decision (or program an
algorithm, the beauty is it leaves the "how" up to the miners) to
increase the blocksize. They think that, in the long run, having
larger blocks will increase their revenue and its worth taking the hit
now for more fees later. They increase the size to 1.25 MB. As a
result, they reward would be 18.75 (75%). The miner fees were .5BTC.
The miner fees are also reduced to .375BTC. Everyone who receives that
block can easily calculate 1) if the previous miner gave themselves
the proper reward 2) what the next reward should be if they win it.
Miners now start building blocks with a 31.25 reward transaction and
miner fee + .125.


___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev