It follows from the above that Shannon's entropy model is a
simplified, idealized model of information: it pretends that the
algorithms involved have zero length and thus no entropy, and that
sender and receiver can share a codebook as if by telepathy, without
ever actually transmitting it.

If you set all these details aside and pretend they do not exist,
then in this highly simplistic, idealized model you can indeed get an
entropy of zero, but only in a *single*, special corner case: a source
that emits one symbol with probability 1. Even within Shannon's
simplified model, entropy is nonzero in virtually every other case.
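
To make that corner case concrete, here is a minimal sketch in Python
(my own illustration, not from any of the cited sources): Shannon
entropy comes out as zero only when the whole probability mass sits on
a single symbol.

    import math

    def shannon_entropy(probs):
        """H = -sum p * log2(p), ignoring zero-probability symbols."""
        return -sum(p * math.log2(p) for p in probs if p > 0)

    # Degenerate source: one symbol with probability 1 -> H = 0 bits/symbol.
    print(shannon_entropy([1.0]))            # 0.0

    # Any uncertainty at all gives nonzero entropy.
    print(shannon_entropy([0.99, 0.01]))     # ~0.081 bits/symbol
    print(shannon_entropy([0.5, 0.5]))       # 1.0 bit/symbol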

Later models of information, such as Kolmogorov's, measure the amount
of information not by statistical probability but by the minimal
length of a program needed to produce that information; the two
definitions turn out to be nearly equivalent.
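
Kolmogorov complexity itself is uncomputable, but a general-purpose
compressor gives a crude upper bound on it. A rough sketch (again my
own illustration, using Python's zlib): a highly regular signal admits
a far shorter description than a random one of the same length.

    import os
    import zlib

    N = 100_000
    regular = b"\x00" * N      # trivially describable: "N zero bytes"
    random_ = os.urandom(N)    # incompressible with overwhelming probability

    # Compressed length is an upper bound (plus format overhead) on the
    # minimal program length needed to reproduce the data.
    print(len(zlib.compress(regular, 9)))  # on the order of a hundred bytes
    print(len(zlib.compress(random_, 9)))  # about N bytes, no real saving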

Quote from [1]:

"Firstly, we have to point out the relation between Kolmogorov
complexity and Shannon's entropy. Briefly, classic information theory
says a random variable X distributed according to P(X=x) has entropy
or complexity of a statistical form, where the interpretation is that
H(X) bits are on the average sufficient to describe an outcome x.
Computational or algorithmic complexity says that an object x has
complexity C(x)=the minimum length of a binary program for x. It is
very interesting to remark that these two notions turn out to be much
the same. Thus, the intended interpretation of complexity C(x) as a
measure of the information content of an individual object x is
supported by a tight quantitative relationship to Shannon's
probabilistic notion. In particular, the entropy H=-SUM P(x)*logP(x)
of the distribution p is asymptotically equal to the expected
complexity SUM P(x)*C(x)."
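
One way to see why this holds (a back-of-the-envelope illustration of
my own, not taken from [1]): for the uniform distribution over n-bit
strings, H = n bits, and a simple counting argument shows that almost
no string can have a much shorter program, so the expected program
length must also be close to n.

    # At most 2**(n - c) - 1 programs are shorter than n - c bits, and each
    # program outputs at most one string, so at most that many of the 2**n
    # strings can be compressed by more than c bits.
    n = 64
    for c in (1, 8, 16, 32):
        fraction = (2 ** (n - c) - 1) / 2 ** n
        print(f"compressible by more than {c:2d} bits: "
              f"at most {fraction:.10f} of all strings")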

According to [2]:

"Algorithmic entropy is closely related to statistically defined
entropy, the statistical entropy of an ensemble being, for any
concisely describable ensemble, very nearly equal to the ensemble
average of the algorithmic entropy of its members."

According to [3]:

"Nevertheless, once allowance is made for the units used, the
expectation value of the algorithmic entropy for a set of strings
belonging to an ensemble is virtually the same as the Shannon entropy,
or the Boltzmann or Gibbs entropies derived for a set of states."

According to these sources, the algorithmic entropy is "asymptotically
equal to", "very nearly equal to", or "virtually the same as" the
probabilistic entropy. Therefore, any signal of nonzero length needs a
nonzero-length program to encode it, and hence has nonzero entropy.
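
To make that last point concrete (just a sketch, with a real
compressor standing in for the ideal shortest program): even the most
redundant possible signal never encodes to zero length.

    import zlib

    for n in (1, 1_000, 1_000_000):
        data = b"\x00" * n                     # maximally redundant signal
        encoded = len(zlib.compress(data, 9))  # upper bound on description length
        print(n, "->", encoded, "bytes")       # always > 0, and it grows with n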

[1] 2004, Crisan, On Shannon, Fisher, and Algorithmic Entropy in Cognitive Systems
http://uni-obuda.hu/conferences/saci04/Crisan.pdf

[2] 2003, Niels Henrik Gregersen, From Complexity to Life
https://en.wiktionary.org/wiki/algorithmic_entropy

[3] 2009, Sean Devine, The Insights of Algorithmic Entropy
http://www.mdpi.com/1099-4300/11/1/85/pdf