Sounds like a fun project, Scott.
One question though:
>Sample rate is approximately 44.6 kHz.
What's with the non-standard sampling rate?
E
On Sun, Oct 12, 2014 at 5:25 PM, Scott Gravenhorst wrote:
>
> I've been working on a MIDI Karplus-Strong synthesizer using a Microchip
> dsPIC33F (price
�"Peter S" wrote"
> On 12/10/2014, Peter S wrote:
>> Correction: no 'information theory' model was proposed, and no form of
>> 'immunity' was claimed.
>
> What was claimed:
> "number of binary transitions _correlates_ with entropy" (statistically)
>
it's a mistaken claim, Peter.  in
I've been working on a MIDI Karplus-Strong synthesizer using a Microchip
dsPIC33F (priced around $5.50 for tiny quantities). The synthesizer is
polyphonic with 12 voices and has both pots and MIDI CC inputs to control
its timbre in real time. It also supports pitch bend. The code (all
assembly
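For anyone unfamiliar with the technique Scott mentions, here is a minimal
sketch of the basic Karplus-Strong loop in C. It is a generic illustration
only, not Scott's dsPIC33F assembly; the buffer length, the two-point
averaging filter, and the noise "pluck" are the usual textbook choices,
assumed here rather than taken from his code.

#include <stdlib.h>

/* Minimal Karplus-Strong pluck (generic illustration only, not the
   dsPIC33F assembly implementation described above).
   A delay line is filled with noise, then repeatedly low-pass filtered
   (two-point average) and fed back on itself; the decaying loop sounds
   like a plucked string. Pitch is roughly sample_rate / len. */

#define KS_MAX_LEN 1024

static float delay[KS_MAX_LEN];   /* circular delay line */

void ks_pluck(int len)
{
    /* "Pluck": fill the delay line with white noise. */
    for (int i = 0; i < len; i++)
        delay[i] = 2.0f * (float)rand() / (float)RAND_MAX - 1.0f;
}

float ks_next_sample(int len, int *pos)
{
    int cur = *pos;
    int nxt = (cur + 1) % len;
    float out = 0.5f * (delay[cur] + delay[nxt]);  /* averaging = damping */
    delay[cur] = out;                              /* feed back into the loop */
    *pos = nxt;
    return out;
}

Since pitch comes out at roughly sample_rate / len, exact tuning in a real
synthesizer needs fractional delay or allpass interpolation on top of this.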
For advanced topics, also feel free to consult:
Marek Lesniewicz (2014), "Expected Entropy as a Measure and Criterion
of Randomness of Binary Sequences", Przegląd Elektrotechniczny,
Volume 90, pp. 42–46.
Dinh-Tuan Pham (2004), "Fast algorithms for mutual information based
independent component analysis".
On 12/10/2014, Sampo Syreeni wrote:
> On 2014-10-12, Peter S wrote:
>
>> Rather, please go and read some cryptography papers about entropy
>> estimation. Then come back, and we can talk further.
>
> PLONK.
This could be a good start for you:
https://en.wikipedia.org/wiki/Entropy_estimation
On 12/10/2014, Theo Verelst wrote:
> But the measure of entropy is still a statistical measure, based on a
> distribution which is a *given* prob. dist., i.e. either *you* are
> saying something with it by having one or more possible "givens" that
> you never make explicit, or you searc
On 2014-10-12, Peter S wrote:
Rather, please go and read some cryptography papers about entropy
estimation. Then come back, and we can talk further.
PLONK.
--
Sampo Syreeni, aka decoy - de...@iki.fi, http://decoy.iki.fi/front
+358-40-3255353, 025E D175 ABE5 027C 9494 EEB0 E090 8BA9 0509 85C2
On 12/10/2014, Peter S wrote:
> Correction: no 'information theory' model was proposed, and no form of
> 'immunity' was claimed.
What was claimed:
"number of binary transitions _correlates_ with entropy" (statistically)
Was NOT claimed:
"number of binary transitions _precisely_ equal entropy"
...and let me point out that you yourself admitted you have no clue about
the topic:
On 12/10/2014, Sampo Syreeni wrote:
> As for entropy estimators, [...] I too once thought
> that I had a hang of it, purely by intuition, but fuck no; the live
> researchers at cryptography -list taught me well bett
On 12/10/2014, Sampo Syreeni wrote:
> Very much my point: Shannon's definition of information is fully immune
> to ROT13. Yours is not.
Correction: no 'information theory' model was proposed, and no form of
'immunity' was claimed. My algorithm is just an approximation, and
even a very crude one,
Peter S wrote:
...
... say, when you're a cryptographer, and want to decide if a certain
stream of bits would be safe enough ...
But the measure of entropy is still a statistical measure, based on a
distribution which is a *given* prob. dist., i.e. either *you* are
saying something with it by
On 2014-10-12, Peter S wrote:
When you're trying to approximate entropy of some arbitrary signal,
there is no such context.
Of course there is. Each and every one of the classical, dynamically
updated probability models in text compression has one, too. The best
ones even have papers behind
On 12/10/2014, Peter S wrote:
> On 12/10/2014, Peter S wrote:
>> When you're trying to approximate entropy of some arbitrary signal,
>> there is no such context.
>
> ... say, when you're a cryptographer, and want to decide if a certain
> stream of bits would be safe enough to protect your bank ac
On 12/10/2014, Peter S wrote:
> When you're trying to approximate entropy of some arbitrary signal,
> there is no such context.
... say, when you're a cryptographer, and want to decide if a certain
stream of bits would be safe enough to protect your bank account
access from hackers (knowing what
On 12/10/2014, Sampo Syreeni wrote:
> Now you're finally getting it. It's about that "context" at least as
> much as it is about the signal sent. Everything that is relevant about
> the context can then also be quantified in one form of probabilistic
> distribution of the source or another. That's
On 2014-10-12, Peter S wrote:
Because I assumed it to be. Let's say I sent it to you and just made
it into a deterministic generator. Without telling you how it was
generated. Where is the information, from *your* viewpoint?
In what context? You sent me a long stream of '0101010101...' So?
W
On 12/10/2014, Sampo Syreeni wrote:
> On 2014-10-12, Peter S wrote:
>
>>> 010101010101010101010101...
>>
>> How do you know that that signal is 'fully deterministic', and not a
>> result of coin flips?
>
> Because I assumed it to be. Let's say I sent it to you and just made it
> into a determinist
On 12/10/2014, Peter S wrote:
> On 12/10/2014, Paul Stoffregen wrote:
>> As long as you produce only chatter on mail lists, but no working
>> implementation, I really don't think there's much cause for anyone to be
>> concerned.
>
> I have several working implementations, and I'll post one if you
On 12/10/2014, Paul Stoffregen wrote:
> As long as you produce only chatter on mail lists, but no working
> implementation, I really don't think there's much cause for anyone to be
> concerned.
I have several working implementations, and I'll post one if you're a
bit patient.
On 10/12/2014 10:17 AM, Peter S wrote:
Maybe I'm one of those cryptographers you're afraid of ;)
As long as you produce only chatter on mail lists, but no working
implementation, I really don't think there's much cause for anyone to be
concerned.
Annoyed, perhaps, but certainly not afraid.
On 2014-10-12, Peter S wrote:
010101010101010101010101...
How do you know that that signal is 'fully deterministic', and not a
result of coin flips?
Because I assumed it to be. Let's say I sent it to you and just made it
into a deterministic generator. Without telling you how it was
gener
On 12/10/2014, Sampo Syreeni wrote:
> So by your metric a fully deterministic, binary source which always
> changes state has the maximum entropy?
>
> 010101010101010101010101...
How do you know that that signal is 'fully deterministic', and not a
result of coin flips?
> As for entropy estimator
On 2014-10-12, Peter S wrote:
Define "random".
As I said, my randomness estimation metric is: "number of binary state
transitions".
It is a very good indicator of randomness; feel free to test it on
real-world data or pseudorandom number generators.
So by your metric a fully deterministic, bin
On 12/10/2014, Sampo Syreeni wrote:
>
> Your message is lost in those fifty self-reflective, little posts of
> yours. Which is precisely why you were already told to dial it back a
> bit. I'd also urge you to take up that basic information theory textbook
> I already linked for you, shut up for a
On 12/10/2014, Sampo Syreeni wrote:
>
> Define "random".
As I said, my randomness estimation metric is: "number of binary state
transitions".
It is a very good indicator of randomness; feel free to test it on
real-world data or pseudorandom number generators.
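As far as this thread states it, the metric is simply a count of adjacent-bit
changes. Below is a minimal sketch of that idea in C; the function names, the
example strings and the side-by-side Bernoulli entropy figure are my own
additions for illustration, not Peter's code.

#include <stdio.h>
#include <math.h>
#include <string.h>

/* Count 0->1 and 1->0 transitions in an ASCII bit string. The claim in
   this thread is only that this count correlates with entropy, not that
   it equals it. */
static int count_transitions(const char *bits)
{
    int n = 0;
    if (bits[0] == '\0')
        return 0;
    for (size_t i = 1; bits[i] != '\0'; i++)
        if (bits[i] != bits[i - 1])
            n++;
    return n;
}

/* Entropy (bits/symbol) of an i.i.d. binary source with P(1) = p, shown
   only as a reference point for the correlation claim. */
static double bernoulli_entropy(double p)
{
    if (p <= 0.0 || p >= 1.0)
        return 0.0;
    return -p * log2(p) - (1.0 - p) * log2(1.0 - p);
}

int main(void)
{
    const char *examples[] = { "0000000000000000",
                               "0101010101010101",
                               "0001001011001101" };
    for (int k = 0; k < 3; k++) {
        const char *s = examples[k];
        size_t len = strlen(s);
        int ones = 0;
        for (size_t i = 0; i < len; i++)
            ones += (s[i] == '1');
        printf("%s  transitions=%d  H(p)=%.3f bits/bit\n",
               s, count_transitions(s),
               bernoulli_entropy((double)ones / (double)len));
    }
    return 0;
}

Note that the alternating pattern maximizes the transition count even though
it is completely predictable; that gap between "many transitions" and "hard
to predict" is exactly the objection raised elsewhere in this thread.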
On 2014-10-12, Peter S wrote:
Again, it seems my message was lost in translation somewhere...
Your message is lost in those fifty self-reflective, little posts of
yours. Which is precisely why you were already told to dial it back a
bit. I'd also urge you to take up that basic information th
About the "hidden information" in presumed noise: *Given* that there is
a hidden "generator" of the noise (like a standard software pseudo
random generator), or some other form of noise pattern, *THEN* you could
try to find it, and for that, it may be equally hard as to crack the key
out of a 1
On 2014-10-12, Peter S wrote:
Well, if you prefer, you can call my algorithm 'randomness estimator'
or 'noise estimator' instead. Personally I prefer to call it 'entropy
estimator', because the more random a message is, the more information
(=entropy) it contains.
Define "random".
I fail t
Also "randomness" correlates with "surprise", so if you treat entropy
as "how likely are we to get surprises", then "randomness" correlates
with "entropy".
But this is just another way of saying "a more random message contains
more information (=entropy)".
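For reference, the textbook quantities behind this "surprise" wording
(standard definitions written in LaTeX notation, nothing specific to the
estimator under discussion) are the surprisal of an outcome and its
expectation, the Shannon entropy:

\[
I(x) = -\log_2 p(x), \qquad H(X) = \mathbb{E}[I(X)] = -\sum_{x} p(x)\,\log_2 p(x)
\]

A rarer, more surprising outcome carries more bits, and a source whose
outcomes are more surprising on average has higher entropy; that is the
correlation being described.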
On 12/10/2014, Sampo Syreeni wrote:
> I hear you well and clear: you seem to think information exists separate
> from someone's ability to decode it unambiguously.
Absolutely not! See my earlier post (the 'philosophical' post) where I
handled precisely this topic in depth, posted on 11 October 20
Well, if you prefer, you can call my algorithm 'randomness estimator'
or 'noise estimator' instead. Personally I prefer to call it 'entropy
estimator', because the more random a message is, the more information
(=entropy) it contains.
I fail to see why you guys don't realize this trivial correlation.
On 2014-10-12, Peter S wrote:
There is no way that could be possible. And I never claimed that.
Maybe my message was lost in translation.
I hear you well and clear: you seem to think information exists separate
from someone's ability to decode it unambiguously. That is not how
information th
On 2014-10-12, Rohit Agarwal wrote:
Typically, arithmetic coding does not buy you much and was not used
much in the past because IBM had some patents on it. I heard it only
buys you 5-10% over Huffman.
That much is typical, and it can go down arbitrarily much. Though it
varies, and arithmet
And yes of course, if you have your original entropy and add an
external entropy like a noise source, then of course my algorithm
cannot differentiate between the original entropy and the entropy from
the noise. Exactly.
Which was never claimed in the first place.
On 12/10/2014, Sampo Syreeni wrote:
> Your earlier algorithm just "segments" bitstrings. It doesn't tell you
> how to assemble those segments back into a code which can be understood
> unambiguously by any receiver.
There is no way that could be possible.
And I never claimed that. Maybe my messag
On 12/10/2014, Peter S wrote:
> On 12/10/2014, Richard Wentk wrote:
>> Yes, great. Now how many bits does a noisy channel need to flip before
>> your
>> scheme produces gibberish?
>
> Those flipped noise bits add entropy to the message, precisely.
>
> Which my algorithm detects, correctly, since
On 2014-10-12, Peter S wrote:
Those flipped noise bits add entropy to the message, precisely.
No, they do not, if they just follow the same statistics as the original
message.
Which my algorithm detects, correctly, since your noise is an entropy
source.
If your algorithm can detect any a
On 12/10/2014, Richard Wentk wrote:
> Yes, great. Now how many bits does a noisy channel need to flip before your
> scheme produces gibberish?
Those flipped noise bits add entropy to the message, precisely.
Which my algorithm detects, correctly, since your noise is an entropy source.
An academic method to deal with ill-founded theories concerning statistics
and logic would be to turn the unstated assumption around, and see what
happens with the theory: suppose you make the ones zeros and vice versa,
what then becomes of the meaning of "equal probability". One step beyond, what
if yo
Yes, great. Now how many bits does a noisy channel need to flip before your
scheme produces gibberish?
Richard
> On 12 Oct 2014, at 12:36, Peter S wrote:
>
> So, for more clarity, my algorithm would segment the following bit pattern
>
> 0001001011001101
Typically, arithmetic coding does not buy you much and was not used
much in the past because IBM had some patents on it. I heard it only buys
you 5-10% over Huffman. The art of compression lies in taking your source
symbol space and transforming it into another space with a more desirable
proba
On 2014-10-12, Rohit Agarwal wrote:
You need to show an exclusive 1:1 mapping between your source symbol
space and your encoded symbol space. Then you can determine output
bitrate based on the probabilities of your source symbols and the
lengths of your encoded symbols. One way to do this is w
You need to show an exclusive 1:1 mapping between your source symbol
space and your encoded symbol space. Then you can determine output bitrate
based on the probabilities of your source symbols and the lengths of your
encoded symbols. One way to do this is with a Huffman Code.
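To make that calculation concrete, here is a small sketch in C; the symbol
probabilities and code lengths are invented example values (they happen to be
the lengths a Huffman code would assign), not anything from the original
post. With a 1:1 prefix code, the expected output rate is the
probability-weighted sum of the code lengths, to be compared against the
source entropy.

#include <stdio.h>
#include <math.h>

/* Expected bits/symbol of a prefix code versus the source entropy.
   Probabilities and code lengths below are made-up example values. */
int main(void)
{
    const double p[]   = { 0.5, 0.25, 0.125, 0.125 };  /* symbol probabilities */
    const int    len[] = { 1,   2,    3,     3     };  /* prefix-code lengths  */
    const int n = 4;

    double rate = 0.0, entropy = 0.0;
    for (int i = 0; i < n; i++) {
        rate    += p[i] * len[i];
        entropy -= p[i] * log2(p[i]);
    }
    printf("expected rate = %.3f bits/symbol, entropy = %.3f bits/symbol\n",
           rate, entropy);
    return 0;
}

With dyadic probabilities like these the rate equals the entropy exactly; in
general Huffman coding only guarantees staying within one bit per symbol of
the entropy, which is part of why arithmetic coding's advantage over it is
often only a few percent, as noted elsewhere in this thread.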
On 10/12/2014 04:36 AM, Peter S wrote:
So, for more clarity, my algorithm would segment the following bit pattern
Perhaps for better clarity, you could provide a reference implementation
in C, C++, Python or any other widely used programming language?
00010010110
On 2014-10-12, Peter S wrote:
In my model, I assume equal probability (just as I pointed out at
least a few times).
Equal, equidistributed? I.e. repeated flips of a fair coin?
Then any first year student of information theory and/or compression
algorithms can tell you each of the bits is a i
On 12/10/2014, Sampo Syreeni wrote:
>> To demonstrate this through an example - could you point out to
>> _where_ the most amount of information is located in the following
>> message?
>>
>> 00010010110011010
>
> There is absolutely no way of knowing
On 2014-10-12, Peter S wrote:
To demonstrate this through an example - could you point out to
_where_ the most amount of information is located in the following
message?
00010010110011010
There is absolutely no way of knowing that unless you h
So, for more clarity, my algorithm would segment the following bit pattern
00010010110011010
...into this:
000 ---> log2(27) = ~4.754
1 ---> 1
00 ---> 1
1 ---> 1
0 ---> 1
11 ---> 1
00 ---> 1
11 ---> 1
0 ---> 1
1 ---> 1
0
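Since a reference implementation was requested elsewhere in the thread, here
is a minimal sketch in C of the segmentation step as this post describes it:
cut the bit string at every 0->1 or 1->0 transition, yielding maximal runs of
identical bits. The per-segment scores are not reproduced, because the post
is truncated before they are fully specified.

#include <stdio.h>
#include <string.h>

/* Split an ASCII bit string into maximal runs of identical bits, i.e.
   cut at every transition, and print each run on its own line. */
static void segment_runs(const char *bits)
{
    size_t n = strlen(bits);
    size_t start = 0;
    for (size_t i = 1; i <= n; i++) {
        if (i == n || bits[i] != bits[i - 1]) {
            printf("%.*s\n", (int)(i - start), bits + start);
            start = i;
        }
    }
}

int main(void)
{
    segment_runs("00010010110011010");
    /* prints the runs 000 1 00 1 0 11 00 11 0 1 0, one per line */
    return 0;
}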
On 12/10/2014, Richard Dobson wrote:
> An idle question: what makes the message 1001011001101 and not, say,
> 00100101100110100?
There's no transition there at the edges. I segment the symbols at
transitions (algorithmically). The outer two zeros are contained in
the constant segment, so they're
On 12/10/2014, Richard Wentk wrote:
> Relying on human pattern recognition skills to say 'oh look, here's a
> repeating bit pattern' says nothing useful about Shannon entropy.
>
> The whole point of Shannon analysis is that it's explicit, completely
> defined, robust, and algorithmic.
It seems y
This is covered by Quine and Shannon, although I cannot cite you chapter and
verse.
Basically, you are correct. A message alone is only half of the story: its
entropy is relative to some pre-agreed decoder matrix defining the lowest
entropy. (Contrast with chemical entropy, which has a fixed baseline set by
the physics of t
None at all, because Shannon only makes sense if you define your symbols first,
or define the explicit algorithm used to specify symbols.
Relying on human pattern recognition skills to say 'oh look, here's a repeating
bit pattern' says nothing useful about Shannon entropy.
The whole point of
On 12/10/2014 11:31, Peter S wrote:
On 12/10/2014, Peter S wrote:
To me, this message can be clearly separated into three distinct parts:
000 -> almost no information, all zeros
1001011001101 -> lots of information (lots of entropy)
0 -> almost n
On 12/10/2014, Peter S wrote:
> To me, this message can be clearly separated into three distinct parts:
>
> 000 -> almost no information, all zeros
> 1001011001101 -> lots of information (lots of entropy)
> 0 -> almost no information, all zeros
Basi
On 12/10/2014, Peter S wrote:
> What I'm trying to find out is:
> - What is the "entropy distribution" (information distribution) of the
> message?
> - Where _exactly_ is the entropy (information) located in the message?
> - Could that entropy be extracted or estimated somehow?
To demonstrate thi
On 11/10/2014, r...@audioimagination.com wrote:
> all "decompression" is is decoding. you have tokens (usually binary bits or
> a collection of bits) and a code book (this is something that you need to
> understand regarding Huffman or entropy coding), you take the token and look
> it up in the c
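As a toy illustration of the token/code-book lookup described above, here is
a short sketch in C; the code book itself is an invented example (the same
set of lengths used in the rate sketch earlier in this digest), not anything
from the quoted post.

#include <stdio.h>
#include <string.h>

/* Toy prefix-code decoder: walk the bit string, extend the current token
   one bit at a time, and look it up in a small code book.
   The code book is an invented example (0->A, 10->B, 110->C, 111->D). */
static const char *codes[]   = { "0", "10", "110", "111" };
static const char  symbols[] = { 'A', 'B', 'C', 'D' };

int main(void)
{
    const char *bits = "0100110111";  /* encodes A B A C D */
    char token[8] = "";
    size_t tlen = 0;

    for (size_t i = 0; bits[i] != '\0'; i++) {
        if (tlen >= sizeof token - 1)
            break;                         /* input not valid for this code book */
        token[tlen++] = bits[i];
        token[tlen] = '\0';
        for (int k = 0; k < 4; k++) {
            if (strcmp(token, codes[k]) == 0) {  /* token found in code book */
                putchar(symbols[k]);
                tlen = 0;
                token[0] = '\0';
                break;
            }
        }
    }
    putchar('\n');   /* prints: ABACD */
    return 0;
}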