> Pruned nodes are not the default configuration, if it was the default
configuration then I think you would see far more users running a pruned
node.

Default configurations aren't a big enough deal to factor into the critical
discussion of node costs versus transaction fee costs.  Default
configurations can be changed, and if nodes are negatively affected by a
default configuration, there will be an abundance of information about how
to correct that effect by turning on pruning.  Bitcoin can't be designed
around the assumption that people can't google - if we wanted to cater to
that population right now, we'd need 100x the blocksize at least.

> But that would also substantially increase the burden on archive nodes.

This is already a big problem from the measurements I've been looking at.
There are alternatives that need to be considered there as well.  If we
limit ourselves to not changing the syncing process for most users, the
blocksize limit debate changes drastically.  Hard drive costs, CPU costs,
propagation times... none of those things matter, because the cost of sync
bandwidth is so incredibly high even now ($130ish per month, see other
email).  Even if we didn't increase the blocksize any more than segwit,
we're already seeing sync costs being shifted onto fewer nodes - Luke Jr's
scan finds ~50k nodes online, but only ~7k of those show up on sites like
bitnodes.21.co, meaning that smaller set of reachable nodes is doing the
serving.  Segwit will shift the load further, until the few nodes still
serving sync data limit their upload speeds and/or max out their
connections, leaving no fully-sync'd nodes for a new node to connect to.
Then wallet providers / node software will offer a solution - a bundled
UTXO checkpoint that removes the need to sync.  That slightly increases
centralization, and increases it more if Core were to adopt the same
approach.
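
For anyone who wants to sanity-check the shape of that bandwidth claim, the
arithmetic is simple - here is a rough sketch, where the chain size, syncs
served, and bandwidth price are placeholder assumptions of mine, not the
figures from the other email:

    # Back-of-the-envelope estimate of what serving initial syncs costs a
    # single archival node.  Every input here is an illustrative assumption,
    # not a measured value.
    chain_size_gb = 110          # assumed size of the full chain
    syncs_served_per_month = 30  # assumed number of fresh nodes this peer serves
    usd_per_gb_upload = 0.04     # assumed bandwidth price

    monthly_upload_gb = chain_size_gb * syncs_served_per_month
    monthly_cost_usd = monthly_upload_gb * usd_per_gb_upload
    print(round(monthly_cost_usd, 2))  # 132.0 with these made-up inputs

The exact inputs matter less than the structure: the cost scales linearly
with chain size and with how many new nodes each serving peer ends up
carrying.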

The advantage would be tremendous for such a simple solution - node costs
would drop by a full order of magnitude for full nodes even today, and by
more once archival nodes are more constrained, history is bigger, and
segwit blocksizes are in effect.  Blocksizes could then be safely increased
by nearly the same order of magnitude, increasing the utility of bitcoin
and the number of people who can effectively use it.
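
To make the checkpoint idea concrete, here is a rough sketch of the shape
it could take - nothing like this ships in Core today, and the height,
fields, and names below are hypothetical:

    # Hypothetical sketch of a bundled UTXO checkpoint, not an existing
    # feature: the node software ships a block height plus a commitment to
    # the UTXO set at that height, and a new node verifies a downloaded
    # snapshot against that commitment instead of replaying all history.
    import hashlib

    SHIPPED_CHECKPOINT = {
        "height": 460000,                                    # assumed checkpoint height
        "utxo_set_hash": "<hash shipped with the software>", # placeholder value
    }

    def snapshot_is_valid(snapshot_bytes, checkpoint=SHIPPED_CHECKPOINT):
        # Accept the snapshot only if it hashes to the shipped commitment.
        digest = hashlib.sha256(snapshot_bytes).hexdigest()
        return digest == checkpoint["utxo_set_hash"]

Whoever produces that shipped hash becomes the new point of trust, which is
exactly the centralization cost described above.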

Another, much more complicated option is for the node sync process to
function like a Tor network.  A very small number of seed nodes could send
data on only to other nodes with the highest available bandwidth (and a
good retention policy, i.e. not tightly pruning as they sync), who would
then spread it out further, and so on.  That's complicated though, because
as far as I know the syncing process today has no way to swap out a selfish
syncing peer for a high-performing one.  I'm not even sure - will a syncing
node opt to sync from a different node that, itself, isn't fully sync'd but
is farther ahead?
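
As a rough illustration of what the peer selection might look like (again,
purely hypothetical - the peer attributes below don't exist in the current
P2P code):

    # Hypothetical tiered-sync peer selection, nothing like this exists in
    # today's sync code: upload historical blocks only to peers that
    # advertise high bandwidth and a good retention policy, and let them fan
    # the data out to everyone else.
    from collections import namedtuple

    Peer = namedtuple("Peer", "addr upload_mbps prunes_while_syncing")

    def pick_fanout_peers(peers, max_targets=8, min_mbps=50):
        eligible = [p for p in peers
                    if p.upload_mbps >= min_mbps and not p.prunes_while_syncing]
        # Prefer the fastest peers so the first hop of the tree is wide.
        eligible.sort(key=lambda p: p.upload_mbps, reverse=True)
        return eligible[:max_targets]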

At any rate, syncing bandwidth usage is a critical problem for future
growth, but it is solvable, and the upsides of fixing it are huge.

On Wed, Mar 29, 2017 at 9:25 AM, David Vorick via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

>
> On Mar 29, 2017 12:20 PM, "Andrew Johnson" <andrew.johnso...@gmail.com>
> wrote:
>
> What's stopping these users from running a pruned node?  Not every node
> needs to store a complete copy of the blockchain.
>
>
> Pruned nodes are not the default configuration, if it was the default
> configuration then I think you would see far more users running a pruned
> node.
>
> But that would also substantially increase the burden on archive nodes.
>
>
> Further discussion about disk space requirements should be taken to
> another thread.
>
_______________________________________________
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
