On Mon, 13 Oct 2014, Peter S wrote:
So I found that the number of transitions correlates with randomness
(statistically).
Ok. By eyeball:
101010101010101010101010101              not very random
0101001001110110111001100100011001101101 very random :-)
On 2014-10-20, Ethan Duni wrote:
It isn't like one even needs the concept of "entropy," nor any named
"framework," to express this simple fact that "The number of bits
required to specify one of N values is at most log2(N)." That single,
obvious statement - which is fully expressed by Shannon
On 20 October 2014, Max Little wrote:
> Many times in the past I've found very unexpected applications to
> audio DSP of mathematical concepts which initially seemed entirely
> unrelated to DSP. Particularly, this happens in nonlinear DSP, which
> is a very broad area. So I don't, as a rule, discoun
It would probably sound like a noise gate, and perhaps not terribly exciting.
On the next level up, frequency domain and/or variable filter adaptive noise
cancellation is a specialised but well-understood subset of DSP lore. (Try any
cell phone, VOIP service, or video chat system for a demo.)
B
Many times in the past I've found very unexpected applications to
audio DSP of mathematical concepts which initially seemed entirely
unrelated to DSP. Particularly, this happens in nonlinear DSP, which
is a very broad area. So I don't, as a rule, discount any mathematics
prejudicially, because you
On Mon, Oct 20, 2014 at 10:00:13AM -0700, Ethan Duni wrote:
> Meanwhile, I'll point out that it's been a long time since anybody on this
> thread has even attempted to say anything even tangentially related to
> music dsp.
The first thing that came to my mind after seeing Peter's image
processi
>>You can apply Hartley to any distribution, it doesn't have to be uniform.
>
> You don't "apply Hartley" to a "distribution." You apply it to a *random
> variable*,
Since when? You apply the Hartley formula to the distribution, as with
all entropy-like formulas, such as Shannon's formula. These a
>You can apply Hartley to any distribution, it doesn't have to be uniform.
You don't "apply Hartley" to a "distribution." You apply it to a *random
variable*, and in doing so you *ignore* its actual distribution (and,
equivalently, instead assume it is uniform). Which is the whole reason that
Hart
> Shannon entropy generalizes Hartley entropy. Renyi entropy generalizes
> Shannon entropy. By transitivity, then, Renyi also generalizes Hartley.
Well, Renyi directly generalizes Hartley.
You can apply Hartley to any distribution, it doesn't have to be uniform.
That's if we agree on the formula
Shannon entropy generalizes Hartley entropy. Renyi entropy generalizes
Shannon entropy. By transitivity, then, Renyi also generalizes Hartley.
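[A minimal reference sketch, not quoted from any message in the thread; standard Rényi definition:]

    H_\alpha(X) = \frac{1}{1-\alpha}\,\log_2\!\Big(\sum_i p_i^{\alpha}\Big),
    \qquad
    \lim_{\alpha\to 0} H_\alpha(X) = \log_2\bigl|\{\,i : p_i > 0\,\}\bigr| \ \text{(Hartley)},
    \qquad
    \lim_{\alpha\to 1} H_\alpha(X) = -\sum_i p_i \log_2 p_i \ \text{(Shannon)}.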
Meanwhile, I'll point out that it's been a long time since anybody on this
thread has even attempted to say anything even tangentially related to
music dsp
>>I might find myself in the situation where I am given a nonuniform
>>distribution. Then Shannon and Hartley formulas would give a different
>>answer.
>
> And the Hartley formula would be inapplicable, since it assumes a uniform
> distribution.
Just to be clear, what formula are you referring to?
>I might find myself in the situation where I am given a nonuniform
>distribution. Then Shannon and Hartley formulas would give a different
>answer.
And the Hartley formula would be inapplicable, since it assumes a uniform
distribution. So you'd have to use the Shannon framework, both to calculate
On 20/10/2014, Max Little wrote:
> I might find myself in the situation where I am given a nonuniform
> distribution. Then Shannon and Hartley formulas would give a different
> answer.
Basically the Hartley formula always gives you an 'upper bound', or an
estimate of 'maximum' possible entropy (th
> Then just do Shannon's definition over a space with equidistributed
probability. Define it as so, and you have the precise same fundamental
framework. Seriously, there is no difference.
I might find myself in the situation where I am given a nonuniform
distribution. Then Shannon and Hartley form
On 2014-10-14, Max Little wrote:
Still, I might find myself finding a use for Hartley's 'entropy',
maybe someday. I don't discount any maths really, I don't have any
prejudices.
Then just do Shannon's definition over a space with equidistributed
probability. Define it as so, and you have the
Hi Everyone,
Instead of long and boring discussions:
let me show you something interesting.
"Entropy estimation in 2D sampled signals"
"Original image"
http://scp.web.elte.hu/entropy/think.jpg
"Estimated entropy distribution"
http://scp.web.elte.hu/entropy/entropy.gif
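[A minimal illustrative sketch, not necessarily how that GIF was made: one way a per-block "entropy map" could be computed with the thread's transition-counting idea; the threshold of 128, the block size of 8, and the function name are assumptions.]

    import numpy as np

    def transition_entropy_map(gray, block=8):
        """Per-block fraction of horizontally adjacent pixels that differ
        after thresholding: 0 = flat region, 1 = maximally 'busy'."""
        bits = gray >= 128                     # assumes 8-bit grayscale input
        diff = bits[:, 1:] != bits[:, :-1]     # True where neighbours differ
        h, w = diff.shape
        h, w = h - h % block, w - w % block
        tiles = diff[:h, :w].reshape(h // block, block, w // block, block)
        return tiles.mean(axis=(1, 3))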
Doesn't this give you som
On 16/10/2014, B.J. Buchalter wrote:
> Well, I see one thing that could be an issue.
>
> If you send your algorithm the following sequence:
>
> 010101010101010101010…
>
> It will say that there is one bit of information for each bit it
> receives. But in this case, that is not true; since it
On Oct 16, 2014, at 12:55 PM, Peter S wrote:
> Basically I just wanted some 'critique' on this theory, in other
> words, a logical analysis - maybe someone notices something that I
> missed. That's all, thanks for your comments.
Well, I see one thing that could be an issue.
If you send your al
On 16/10/2014, Thomas Strathmann wrote:
> But that's not additive is it? For a sequence of length one the
> algorithm always yields entropy = 0. But for a sequence of length
> greater than one this need not be the case.
Yep, good observation. The number of correlations between N bits is
always on
On 16/10/2014, Phil Burk wrote:
> On 10/16/14, 3:43 AM, Peter S wrote:
>> Quantifying information is not something that can be discussed in
>> depth in only a dozen messages in a single weekend
>
> Very true. Have you considered writing a book on entropy? You clearly
> can generate a lot of co
Peter,
On 16 Oct 2014, at 17:29, Peter S wrote:
> http://media-cache-ec0.pinimg.com/736x/57/e4/6e/57e46edcd3f18dd405db3dd756b6dca0.jpg
>
> What I say will only make sense for those people who "think". In other
> words, my words will make sense for only about 2% of the population.
>
> The fact
On 16/10/2014, STEFFAN DIEDRICHSEN wrote:
>
> How should this work? To categorize data you need categories.
I already told you the two 'categories' that my algorithm distinguishes:
1) noise
2) non-noise
Where you put the 'threshold' between noise and non-noise, in other
words, how you separate/segment your
On 16 Oct 2014, at 12:36, Peter S wrote:
> Is that _all_ you can care about?
>
> I'm talking about a potential way of categorizing arbitrary data,
How should this work? To categorize data you need categories. Therefore you
need to understand / interpret data. Otherwise you just get something
On 10/16/14, 3:43 AM, Peter S wrote:
Quantifying information is not something that can be discussed in
depth in only a dozen messages in a single weekend
Very true. Have you considered writing a book on entropy? You clearly
can generate a lot of content on a daily basis and could easily
On 16/10/2014, Alberto di Bene wrote:
> Just set a filter in my Thunderbird...
>
> From now on, all messages on this list having "Peter S"
> as originator are directly discarded into the trash bin.
http://media-cache-ec0.pinimg.com/736x/57/e4/6e/57e46edcd3f18dd405db3dd756b6dca0.jpg
What I say w
Just set a filter in my Thunderbird...
From now on, all messages on this list having "Peter S"
as originator are directly discarded into the trash bin.
73 Alberto I2PHD
Saying that we cannot *precisely* estimate entropy because we cannot
calculate the correlation between all the bits is like saying we
cannot *precisely* reconstruct a digital signal because the sinc
function we need to convolve it with is infinite in both directions,
and thus, would need infinite
On 16/10/2014, Peter S wrote:
> Let me make this clear distinction:
>
> "Entropy" does *NOT* mean "information".
> "Entropy" means "information density".
>
> ...which is just another way of phrasing: "probability of new
> information", if you prefer to use Shannon's term instead. If all the
> near
Let me make this clear distinction:
"Entropy" does *NOT* mean "information".
"Entropy" means "information density".
...which is just another way of phrasing: "probability of new
information", if you prefer to use Shannon's term instead. If all the
nearby symbols are correlated, then the "probabil
Look, when I go into a subject, I like to go into it in detail. We
barely even scratched the surface of the topics relevant to 'entropy'.
All that we've been talking about so far is just the basics.
Quantifying information is not something that can be discussed in
depth in only a dozen messages in
Is that _all_ you can care about?
I'm talking about a potential way of categorizing arbitrary data, and
all you argue about is the number of messages. How is that even
remotely relevant to the topic?
Hi Peter,
I’m really glad to see some new faces on this list from time to time. You told
us a long introductory story of how bad people treated you on the old channel.
Just out of curiosity and to get the whole picture right, how many messages per
day did you drop off there on average? Did some
If for some reason, _all_ that you can think of is _separate_ messages
and probabilities, then let me translate this for you into simple
Shannonian terms so that you can understand it clearly:
Let's imagine that you have a message, which is essentially a string
of bits. What we're trying to estim
On 10/16/2014 09:22 AM, Peter S wrote:
entropy = 0
state = b(1)
for i = 2 to N
    if state != b(i) then
        entropy = entropy + 1
    end if
    state = b(i)
end for
But that's not additive is it? For a sequence of length one the
algorithm always yields entropy = 0. But for a sequence of
...and what I'm proposing is a simple, first-order decorrelation
approximator, which in my belief, roughly approximates the *expected*
entropy of an arbitrary message.
For better and less biased decorrelation approximators, feel free to
consult the scientific literature list I posted.
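[A minimal Python sketch of the transition counter quoted above; the function name and test inputs are illustrative only, not from the thread.]

    def transition_count(bits):
        # Count positions where consecutive bits differ -- the thread's
        # first-order "decorrelation" estimate, an upper-bound-style proxy
        # rather than the true Shannon entropy of the source.
        return sum(1 for prev, cur in zip(bits, bits[1:]) if prev != cur)

    print(transition_count([1, 0] * 8))                # 15: periodic, yet scores maximally
    print(transition_count([0, 0, 0, 1, 1, 0, 1, 0]))  # 4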
In other words, what I'm saying:
"The *maximum* possible amount of information your message can contain
inversely correlates with how much your symbols are correlated. By
approximating the decorrelation between your symbols, I can
approximate the *maximum* possible amount of information in your
me
Let me show one further way of looking at what the Hartley entropy
(also called the "max" entropy) means:
It just means, what is the *maximum* amount of information your
message can _possibly_ contain.
To turn it into a simple, real-world example, let's imagine that you
send me a single bit. In that case,
On Wed, 15 Oct 2014, Alan Wolfe wrote:
> For some reason, all I'm seeing are your emails, Peter. not sure who you
> are chatting to or what they are saying in response :P
Good point, I observed the same.
Laszlo Toth
Hungarian Academy of Sciences
Research Group
On 16/10/2014, Paul Stoffregen wrote:
> In the 130 messages you've posted since your angry complaint regarding
> banishment from an IRC channel nearly 2 weeks ago, I do not recall
> seeing any source code, nor any pseudo-code, equations or description
> that appeared to be "practical" and "working
Sorry for the low entropy message I sent.
Paul,
We never had any filters on this list and I think, that's good. I simply delete
most of this thread without reading it. The risk of missing something is quite
low. I liked the link to xkcd, that was a practical take away.
Best,
Steffan
PS: l
Sent from my iPhone
> On 16.10.2014 at 02:16, Paul Stoffregen wrote:
>
>> On 10/15/2014 12:45 PM, Peter S wrote:
>> I gave you a practical, working *algorithm*, that does *something*.
>
> In the 130 messages you've posted since your angry complaint regarding
> banishment from an IRC
On 10/15/2014 12:45 PM, Peter S wrote:
I gave you a practical, working *algorithm*, that does *something*.
In the 130 messages you've posted since your angry complaint regarding
banishment from an IRC channel nearly 2 weeks ago, I do not recall
seeing any source code, nor any pseudo-code, equ
Nevermind. Was just trying to find out how we could characterize some
arbitrary data.
Apparently all that some guys see from this, is that "n! that
does NOT fit into my world view!!"
For some reason, all I'm seeing are your emails, Peter. not sure who you
are chatting to or what they are saying in response :P
On Wed, Oct 15, 2014 at 2:18 PM, Peter S
wrote:
> Academic person: "There is no way you could do it! Impossibru!!"
>
> Practical person: "Hmm... what if I used a simple
Academic person: "There is no way you could do it! Impossibru!!"
Practical person: "Hmm... what if I used a simple upper-bound
approximation instead?"
Notice that in my system of symbol "A" and symbol "B", I can still
quantify the amount of information based on the size of the symbol
space using the Hartley entropy H_0 without needing to know the
_actual_ probability distributions, because I can estimate the
*expected* entropy based on the size of th
...and we didn't even go into 'entropy of algorithms' and other fun topics...
...couldn't you throw those information theory books aside for a few
moments, and start thinking about information theory concepts with a
fresh mind and an "out of the box" approach?
Again, I'm not here to argue about whose "religion" is the best.
Rather, I'm trying to quantify i
On 15/10/2014, Peter S wrote:
> So it seems, I'm not the only one on this planet, who thinks _exactly_
> this way. Therefore, I think your argument is invalid, or all the
> other people who wrote those scientific entropy estimation papers are
> _all_ also "crackpots". (*)
(*) ... and it would fol
On 15/10/2014, r...@audioimagination.com wrote:
> sorry, Peter, but we be unimpressed.
I gave you a practical, working *algorithm*, that does *something*.
In my opinion, it (roughly) approximates 'expected entropy', and I
found various practical real-world uses of this algorithm for
categorizing
Let me show another way why the total amount of "information" in a
system is expected to correlate with the total amount of
"decorrelation" in the system.
Let's imagine we have a simple information system, with two pieces of
information. Say, information "A", and information "B". Let's imagine,
t
�"Peter S" wrote:
> Okay, let's phrase it this way - what I essentially showed is that the
> 'Shannon entropy' problem can be turned into a 'symbol space search'
> problem, where the entropy inversely correlates with the probability
> of finding the solution in the problem space.
https:/
On 15/10/2014, Phil Burk wrote:
> That would take 16 questions. But instead of asking those 16 questions,
> why not ask:
>
> Is the 1st bit a 1?
> Is the 2nd bit a 1?
> Is the 3rd bit a 1?
> Is the 4th bit a 1?
Good question! In a practical context, that's impossible; normally we
only have a one-wa
Hello Peter,
I'm trying to understand this entropy discussion.
On 10/15/14, 2:08 AM, Peter S wrote:
> Let's imagine that your message is 4 bits long,
> If we take the minimal number of 'yes/no' questions I need to guess
> your message with a 100% probability, and take the base 2 logarithm of
> t
Okay, let's phrase it this way - what I essentially showed is that the
'Shannon entropy' problem can be turned into a 'symbol space search'
problem, where the entropy inversely correlates with the probability
of finding the solution in the problem space.
Often, we don't care *precisely* what the p
On 15/10/2014, Theo Verelst wrote:
> Like why would professionally self-respecting
> scientists need to worry about colleagues as to use 20 character
> passwords based on analog random data?
FYI: When I communicate with my bank, before logging in, I have to
move my mouse randomly for about a half
On 15/10/2014, Theo Verelst wrote:
> Like why would professionally self-respecting
> scientists need to worry about colleagues as to use 20 character
> passwords based on analog random data?
Once all your money from your bank account gets stolen and goes up in
smoke, you'll realize the importance
Let me express the Hartley entropy another way:
The Hartley entropy gives the size of the symbol space, so it is a
good approximator and upper bound for the actual entropy. If the
symbols are fully decorrelated, then the _maximum_ amount of time it
takes to search through the entire symbol space i
Before I would seemingly agree with some follies going on here: I
believe, like I've written for solid reasons, that the normal
Information Theory that led to a theoretical underpinning of various
interesting EE activities since long ago, is solidly understood by its
makers, and when rightly ap
It also follows that if the symbol space is binary (0 or 1), then
assuming a fully decorrelated and uniformly distributed sequence of bits,
the entropy per symbol (bit) is precisely log2(2) = 1.
From that, it logically follows that an N bit long decorrelated and
uniform sequence of bits (= "white no
On 10/15/2014 01:01 PM, Peter S wrote:
Let me show you the relevance of Hartley entropy another way:
Here's xkcd's take on password strength.
http://xkcd.com/936/
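[A back-of-the-envelope sketch, not taken from either message; the 62-character alphabet, the 2048-word list, and the helper name are assumptions.]

    from math import log2

    def hartley_bits(symbol_space, length):
        # Hartley entropy H_0 = log2(size of the symbol space) per symbol,
        # times the number of independently chosen symbols.
        return length * log2(symbol_space)

    print(hartley_bits(62, 10))   # ~59.5 bits: 10 chars from [A-Za-z0-9]
    print(hartley_bits(2048, 4))  # 44.0 bits: 4 words from a 2048-word list,
                                  # roughly the figure cited in the comic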
Let me show you the relevance of Hartley entropy another way:
Which of the following 10 passwords contains the most entropy, and is
thus best for protecting your account?
a) DrhQv7LMbP
b) PHGF4V7uod
c) ndSk4YrEls
d) C38ysVOEDh
e) 3XfFmMT13Y
f) ayuyR9azD8
g) zuvptYRa1m
h) ssptl9pOGt
i) KDY2vwqYnV
j) qU
On 14/10/2014, Max Little wrote:
> I haven't seen Hartley entropy used anywhere practical.
Hartley entropy is routinely used in cryptography, and usually implies
'equal probability'.
This is why I recommended some of you guys take a few basic lessons in
cryptography, to have at least some bare min
On 14/10/2014, Ethan Duni wrote:
>>Well, I merely said it's interesting that you can define a measure of
>>information without probabilities at all, if desired.
>
> That's a measure of *entropy*, not a measure of information.
How you define 'information' is entirely subjective, with several
poss
On 14/10/2014, Peter S wrote:
> Again, the minimal number of 'yes/no' questions needed to guess your
> message with 100% probability is _precisely_ the Shannon entropy of
> the message:
Let me demonstrate this using a simple real-world example.
Let's imagine that your message is 4 bits long, and
The relevant limit here is:
lim x*log(x) = 0
x->0
It's pretty standard to introduce a convention of "0*log(0) = 0" early on
in information theory texts, since it avoids a lot of messy delta/epsilon
stuff in the later exposition (and since the results cease to make sense
without it, with empty por
�"Max Little" wrote:
> OK yes, 0^0 = 1.
�
depends on how you do the limit.
�
�� lim x^x� = 1
�� x->0
�
i imagine this might come out different
�
�� lim (1 - x^x)^(x^(1/2))
��
x->0
�
dunno.� too lazy to L'Hopital it.
�
r b-j
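[Not part of the original exchange; a quick sketch of both limits without L'Hopital, worth double-checking:]

    \lim_{x\to 0^+} x^x = \lim_{x\to 0^+} e^{x\ln x} = e^0 = 1,
    \quad\text{since } x\ln x \to 0.

    \text{For the second limit: } 1 - x^x = 1 - e^{x\ln x} \sim -x\ln x, \text{ so}
    \ln\bigl[(1 - x^x)^{\sqrt{x}}\bigr] = \sqrt{x}\,\ln(1 - x^x)
      \sim \sqrt{x}\,\bigl(\ln x + \ln\ln(1/x)\bigr) \to 0,
    \text{ hence } (1 - x^x)^{\sqrt{x}} \to 1 \text{ as well.}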
> If you look at the real audio signals out there, which statistic would you
> expect them to follow under the Shannonian framework? A flat one? Or
> alternatively, what precise good would it do to your analysis, or your code,
> if you went with the equidistributed, earlier, Hartley framework? Woul
On 2014-10-14, Max Little wrote:
Maths is really just patterns, lots of them are interesting to me,
regardless of whether there is any other extrinsic 'meaning' to those
patterns.
In that vein, it might even be the most humanistic of sciences. Moreso
even than poetry:
https://en.wikipedia.o
Yes, don't have time for a long answer, but all elegantly put.
I'm just reiterating the formula. And saying it's interesting. Maths
is really just patterns, lots of them are interesting to me,
regardless of whether there is any other extrinsic 'meaning' to those
patterns.
M.
On 14 October 2014
OK yes, 0^0 = 1. Delete the bit about probabilities needing to be
non-zero I guess!
Think you're taking what I said too seriously, I just said it's an
interesting formula! Kolmogorov seemed to think so too.
M.
On 14 October 2014 18:37, Ethan Duni wrote:
>> The Hartley entropy
>>is invariant to
> The Hartley entropy
>is invariant to the actual distribution (provided all the
>probabilities are non-zero, and the sample space remains unchanged).
No, the sample space does not require that any probabilities are nonzero.
It's defined up-front, independently of any probability distribution.
Ind
Max Little wrote:
...
Well, we'd probably have to be clearer about that. The Hartley entropy
is invariant to the actual distribution
Without going into the comparison of wanting to be able to influence the
lottery to achieve a higher winning chance, I looked up the
Shannon/Hartley theorem, w
On 2014-10-14, Max Little wrote:
Hmm .. don't shoot the messenger! I merely said, it's interesting that
you don't actually have to specify the distribution of a random
variable to compute the Hartley entropy. No idea if that's useful.
Math always has this precise tradeoff: more general but le
>>> Hartley entropy doesn't "avoid" any use of probability, it simply
>>> introduces the assumption that all probabilities are uniform which greatly
>>> simplifies all of the calculations.
>>
>>
>> How so? It's defined as the log cardinality of the sample space. It is
>> independent of the actual d
> Right, and that is exactly equivalent to using Shannon entropy under the
> assumption that the distribution is uniform.
Well, we'd probably have to be clearer about that. The Hartley entropy
is invariant to the actual distribution (provided all the
probabilities are non-zero, and the sample spac
On 2014-10-14, Max Little wrote:
Hartley entropy doesn't "avoid" any use of probability, it simply
introduces the assumption that all probabilities are uniform which
greatly simplifies all of the calculations.
How so? It's defined as the log cardinality of the sample space. It is
independent
>How so? It's defined as the log cardinality of the sample space. It is
>independent of the actual distribution of the random variable.
Right, and that is exactly equivalent to using Shannon entropy under the
assumption that the distribution is uniform. That's why it's also called
"maxentropy," si
> Hartley entropy doesn't "avoid" any use of probability, it simply
> introduces the assumption that all probabilities are uniform which greatly
> simplifies all of the calculations.
How so? It's defined as the log cardinality of the sample space. It is
independent of the actual distribution of th
>Although, it's interesting to me that you might be able to get some
>surprising value out of information theory while avoiding any use of
>probability ...
Hartley entropy doesn't "avoid" any use of probability, it simply
introduces the assumption that all probabilities are uniform which greatly
> If you look back, I already mentioned Kolmogorov complexity explicitly as an
> "iffy" subject, and someone (Ethan?) implicitly mentioned even
> Renyi/min-entropy, by bringing up randomness extractors as an extant theory.
>
> We do know this stuff. We already took the red pill, *ages* ago. Peter's
On 14/10/2014, Sampo Syreeni wrote:
> We do know this stuff. We already took the red pill, *ages* ago. Peter's
> problem appears to be that he's hesitant to take the plunge into the
> math, proper. Starting with the basics
Didn't you recently tell us that you have no clue of 'entropy estimation'
On 2014-10-14, Max Little wrote:
Prescient. Apparently, Kolmogorov sort of, perhaps, agreed:
"Discussions of information theory do not usually go into this
combinatorial approach [that is, the Hartley function] at any length,
but I consider it important to emphasize its logical independence of
Prescient. Apparently, Kolmogorov sort of, perhaps, agreed:
"Discussions of information theory do not usually go into this
combinatorial approach [that is, the Hartley function] at any length,
but I consider it important to emphasize its logical independence of
probabilistic assumptions", from "Thr
On 14/10/2014, Max Little wrote:
> Well, it just says that there is a measure of information for which
> the actual distribution of symbols is (effectively) irrelevant. Which
> is interesting in its own right ...
Feel free to think "outside the box".
Welcome to the "real world", Neo.
Well, it just says that there is a measure of information for which
the actual distribution of symbols is (effectively) irrelevant. Which
is interesting in its own right ...
Max
On 14 October 2014 11:59, Peter S wrote:
> On 14/10/2014, Max Little wrote:
>> Some might find it amusing and releva
Which is another way of saying: "a fully decorrelated sequence of bits
has the maximum amount of entropy".
So if we try to estimate the 'decorrelation' (randomness) in the
signal, then we can estimate 'entropy'.
On 14/10/2014, Max Little wrote:
> Some might find it amusing and relevant to this discussion that the
> 'Hartley entropy' H_0 is defined as the base 2 log of the cardinality
> of the sample space of the random variable ...
Which implies that if the symbol space is binary (0 or 1), then a
fully
On 14/10/2014, Max Little wrote:
> P.S. Any chance people could go offline for this thread now please?
> It's really jamming up my inbox and I don't want to unsubscribe ...
Any chance your mailbox has the possibility of setting up a filter
that moves messages with the substring '[music-dsp]' in i
So, instead of academic hocus-pocus and arguing about formalisms, what
I'm rather concerned about is:
- "What are the real-world implications of the Shannon entropy problem?"
- "How could we possibly use this to categorize arbitrary data?"
Longest discussion thread so far I think!
The discussion reminded me of more general measures of entropy than
Shannon's; examples are the Renyi entropies:
http://en.wikipedia.org/wiki/R%C3%A9nyi_entropy
Some might find it amusing and relevant to this discussion that the
'Hartley entropy' H_0 is d
Another way of expressing what my algorithm does: it estimates
'decorrelation' in the message by doing a simple first-order
approximation of decorrelation between bits. The more "random" a
message is, the more decorrelated its bits are. Otherwise, if the
bits are correlated, that is not random an
Again, the minimal number of 'yes/no' questions needed to guess your
message with 100% probability is _precisely_ the Shannon entropy of
the message:
"For the case of equal probabilities (i.e. each message is equally
probable), the Shannon entropy (in bits) is just the number of yes/no
questions n
On 14/10/2014, ro...@khitchdee.com wrote:
> Peter,
> How would you characterize the impact of your posts on the entropy of this
> mailing list, starting with the symbol space that gets defined by the
> different perspectives on entropy :-)
I merely showed that:
1) 'entropy' correlates with 'ran
Peter,
How would you characterize the impact of your posts on the entropy of this
mailing list, starting with the symbol space that gets defined by the
different perspectives on entropy :-)
On 13/10/2014, Peter S wrote:
> When we try to estimate the entropy of your secret, we cannot
> _precisely_ know it. (*)
(*) ... except when none of your symbols are correlated, for example
when they come from a cryptographically secure random number
generator. In that case, the entropy of your me
...which implies, that the Shannon entropy problem is - on one level -
a "guessing game" problem - "What is the minimal amount of 'yes/no'
questions to guess your data with 100% probability?"
The more "random" your data is, the harder it is to "guess".
Let's imagine that I'm a banker, and you open a savings account in my
bank, to put your life savings into my bank. So you send your life
savings to me; say you send me $1,000,000.
When you want to access your money, we communicate via messages. You
have a secret password, and you send it t
On 13/10/2014, r...@audioimagination.com wrote:
>> What was claimed:
>> "number of binary transitions _correlates_ with entropy" (statistically)
>
> it's a mistaken claim, Peter. in case you hadn't gotten it, you're getting
> a bit outa your league. there are some very sharp and productive peopl