> On 9 Mar 2016, at 20:21, Bob McElrath <bob_bitc...@mcelrath.org> wrote:
> 
> Dave Hudson [d...@hashingit.com] wrote:
>> A damping-based design would seem like the obvious choice (I can think of a
>> few variations on a theme here, but most are found in the realms of control
>> theory somewhere).  The problem, though, is working working out a timeframe
>> over which to run the derivative calculations.
> 
> From a measurement theory perspective this is straightforward.  Each block
> is a measurement, and error propagation can be performed to derive an error
> on the derivatives.

Sure, but I think there are 2 problems:

1) My guess is that the error bars over anything but a long period are 
probably too large for the estimate to be very useful.

2) We don't have a strong notion of time that's part of the consensus.  Sure, 
blocks have timestamps, but they're very loosely controlled (they can't be 
more than 2 hours ahead of what any validating node thinks the time might 
be).  Difficulty can't be calculated from anything that's not part of the 
consensus data.
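
To put a rough number on (1): for a Poisson process the relative standard 
error of a rate estimate built from N inter-block intervals scales as roughly 
1/sqrt(N).  A quick back-of-envelope sketch (Python; the window sizes are 
just illustrative, not anything canonical):

    import math

    # Relative standard error of a hash rate estimate made from N
    # exponential inter-block intervals: approximately 1/sqrt(N).
    def relative_error(num_blocks):
        return 1.0 / math.sqrt(num_blocks)

    for blocks, label in [(144, "1 day"), (1008, "1 week"), (2016, "2 weeks")]:
        print("%8s: ~%.1f%%" % (label, 100.0 * relative_error(blocks)))

That's about 8% noise over a day and still about 2% over a full two-week 
window, so a short window is mostly measuring noise.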

> The statistical theory of Bitcoin's block timing is known as a Poisson Point
> Process: https://en.wikipedia.org/wiki/Poisson_point_process or temporal point
> process.  If you google those plus "estimation" you'll find a metric
> shit-ton of literature on how to handle this.

Strictly, it's a non-homogeneous Poisson process (NHPP), but I'm pretty 
familiar with the concept (Google threw one of my own blog posts back at me: 
http://hashingit.com/analysis/27-hash-rate-headaches, but I actually prefer 
this one: http://hashingit.com/analysis/30-finding-2016-blocks because most 
people seem to find it easier to visualize).
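
For anyone who wants to play with this, the constant-rate (homogeneous) case 
is trivial to simulate: just draw exponential inter-arrival times with a 
600-second mean.  A minimal sketch, assuming a fixed hash rate so the rate 
parameter never changes:

    import random

    TARGET_SPACING = 600.0  # nominal 10-minute blocks, in seconds

    def simulate_window(num_blocks, spacing=TARGET_SPACING, rng=random):
        # Homogeneous Poisson process: exponential inter-arrival times
        # with mean `spacing`; returns the elapsed time for the window.
        return sum(rng.expovariate(1.0 / spacing) for _ in range(num_blocks))

    elapsed = simulate_window(2016)
    print("2016 blocks took %.2f days (expected 14.00)" % (elapsed / 86400.0))

The non-homogeneous case just makes the spacing a function of time, but even 
this simplest version shows the variance we're up against.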

>> The problem is the measurement of the hashrate, which is pretty inaccurate
>> at best because even 2016 events isn't really enough (with a completely
>> constant hash rate running indefinitely we'd see difficulty swings of up to
>> +/- 5% even with the current algorithm).  In order to meaningfully react to
>> a major loss of hashing we'd still need to be considering a window of
>> probably 2 weeks.
> 
> You don't want to assume it's constant in order to get a better measurement.
> The assumption is clearly false.  But, errors can be calculated, and
> retargeting can take errors into account, because no matter what we'll
> always be dealing with a finite sample.

Agreed, it's a thought experiment I ran in May 2014 
(http://hashingit.com/analysis/28-reach-for-the-ear-defenders).  I found that 
many people's intuition is that there would be little or no difficulty change 
in such a scenario, but that intuition isn't reliable.  Given a static hash 
rate, the NHPP behaviour introduces a surprisingly large amount of noise 
(often much larger than any signal over a period of even weeks).  Any 
measurement over even a few days has so much noise that it's practically 
unusable.  I just realized that unlike some of my other sims this one didn't 
make it to github; I'll fix that later this week.
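
In the meantime, here's a rough stand-in for that sim (a from-scratch sketch, 
not the original code): hold the hash rate constant, draw 2016 exponential 
inter-block times per retarget window, and apply the usual correction of 
target time over actual time.

    import random

    BLOCKS = 2016
    TARGET = BLOCKS * 600.0  # two weeks of 10-minute blocks, in seconds

    def adjustments(num_windows, rng=random):
        # Each window: 2016 exponential inter-block times at a constant
        # hash rate, then the retarget factor target_time / actual_time.
        for _ in range(num_windows):
            actual = sum(rng.expovariate(1.0 / 600.0) for _ in range(BLOCKS))
            yield TARGET / actual

    adjs = list(adjustments(1000))
    print("min %.3f  max %.3f" % (min(adjs), max(adjs)))

With no hash rate change at all, the extremes land several percent either 
side of 1.0, which is the +/- 5% swing I mentioned above.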


Cheers,
Dave