Re: [music-dsp] about entropy encoding

2015-07-17 Thread Peter S
On 18/07/2015, Peter S wrote: > In the tests that gave good results for *some* test material, I > simply grouped adjacent bits. (*) (*) ...and of course, building histograms from characters or samples also means 'grouping adjacent bits', since a character means "8 adjacent bits", a sample means "
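[Editor's note: a minimal Python sketch of the "grouping adjacent bits" idea described above, i.e. packing a flat bit sequence into non-overlapping k-bit symbols before building a histogram. The function name and example values are my own, not from the thread; a byte-oriented histogram is just the k = 8 case.]

```python
def group_bits(bits, k):
    """Pack a flat bit sequence into non-overlapping k-bit symbols.

    Trailing bits that do not fill a whole symbol are dropped.
    """
    symbols = []
    for i in range(0, len(bits) - len(bits) % k, k):
        word = 0
        for b in bits[i:i + k]:
            word = (word << 1) | b  # shift in the next bit, MSB first
        symbols.append(word)
    return symbols

# "A character means 8 adjacent bits": k = 8 recovers byte values.
bits = [1,0,1,0,1,0,1,0, 0,0,0,0,1,1,1,1]
print(group_bits(bits, 8))  # [170, 15]
```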

Re: [music-dsp] about entropy encoding

2015-07-17 Thread Peter S
On 18/07/2015, robert bristow-johnson wrote: > > listen, one thing i have to remind myself here (because if i don't, i'm > gonna get embarrassed) is to not underestimate either the level of > scholarship or the level of practical experience doing things related > to music (or at least audio) and

Re: [music-dsp] about entropy encoding

2015-07-17 Thread robert bristow-johnson
On 7/17/15 2:28 AM, Peter S wrote: Dear Ethan, You suggested that I be short and concise. My kind recommendation to you: 1) Read "A Mathematical Theory of Communication". 2) Try to understand Theorem 2. 3) Try to see that when p_i != 1, then H != 0. I hope this exercise will help you grasp this to
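[Editor's note: a small Python illustration of the point being argued, using Shannon's entropy H = -SUM p_i * log2(p_i). Only a degenerate source with a single symbol of probability 1 has H = 0; as soon as some p_i != 1, H is strictly positive. The function name is my own.]

```python
import math

def shannon_entropy(probs):
    """H = -SUM p_i * log2(p_i), skipping zero-probability terms."""
    return sum(-p * math.log2(p) for p in probs if p > 0)

print(shannon_entropy([1.0]))       # degenerate source: 0.0
print(shannon_entropy([0.9, 0.1]))  # any p_i != 1 forces H > 0 (~0.469 bits)
```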

Re: [music-dsp] about entropy encoding

2015-07-17 Thread Peter S
I tested a simple, first-order histogram-based entropy estimate idea on various 8-bit signed waveforms (message=sample, no correlations analyzed). Only trivial (non-bandlimited) waveforms were analyzed. Method: 1) Signal is trivially turned into a histogram. 2) Probabilities assumed based on histo
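[Editor's note: a minimal Python sketch of the first-order histogram-based entropy estimate described in the message above (message = sample, no correlations analyzed): build a histogram of sample values, treat relative frequencies as probabilities, and sum -p * log2(p). The function name and the test waveform are my own.]

```python
import math
from collections import Counter

def entropy_estimate(samples):
    """First-order entropy estimate in bits per sample.

    1) Turn the signal into a histogram of sample values.
    2) Assume probabilities from the relative frequencies.
    3) Sum -p * log2(p) over the histogram bins.
    """
    hist = Counter(samples)
    n = len(samples)
    return sum(-(c / n) * math.log2(c / n) for c in hist.values())

# Non-bandlimited 8-bit square wave: two equiprobable levels,
# so the first-order estimate is exactly 1 bit/sample.
square = [127, -128] * 100
print(entropy_estimate(square))  # 1.0
```

Note that, as discussed elsewhere in the thread, this first-order estimate ignores all correlation between successive samples, so it overestimates the entropy rate of any signal with temporal structure.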

Re: [music-dsp] about entropy encoding

2015-07-17 Thread Peter S
A linear predictor[1] tries to "predict" the next sample as the linear combination of previous samples as x'[n] = SUM [i=1..k] a_i * x[n-i] where x'[n] is the predicted sample, and a_1, a_2 ... a_k are the prediction coefficients (weights). This is often called linear predictive codin
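[Editor's note: a minimal Python sketch of the linear predictor defined above, x'[n] = SUM [i=1..k] a_i * x[n-i], returning the prediction residuals x[n] - x'[n] that an LPC-style coder would then entropy-code. The function name and example are my own.]

```python
def lpc_predict(x, coeffs):
    """Predict x'[n] = sum_{i=1..k} a_i * x[n-i] for each n >= k.

    coeffs = [a_1, a_2, ..., a_k]; returns the residuals x[n] - x'[n].
    """
    k = len(coeffs)
    residuals = []
    for n in range(k, len(x)):
        pred = sum(coeffs[i] * x[n - 1 - i] for i in range(k))
        residuals.append(x[n] - pred)
    return residuals

# First-order predictor a_1 = 1 ("predict the previous sample"):
# on a ramp the residual is a constant, which compresses well.
ramp = [0, 1, 2, 3, 4, 5]
print(lpc_predict(ramp, [1.0]))  # [1.0, 1.0, 1.0, 1.0, 1.0]
```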

Re: [music-dsp] about entropy encoding

2015-07-17 Thread Peter S
On 17/07/2015, robert bristow-johnson wrote: > On 7/17/15 1:26 AM, Peter S wrote: >> On 17/07/2015, robert bristow-johnson wrote: >>> in your model, is one sample (from the DSP semantic) the same as a >>> "message" (from the Information Theory semantic)? >> A "message" can be anything - it can be

Re: [music-dsp] about entropy encoding

2015-07-17 Thread robert bristow-johnson
On 7/17/15 1:26 AM, Peter S wrote: On 17/07/2015, robert bristow-johnson wrote: in your model, is one sample (from the DSP semantic) the same as a "message" (from the Information Theory semantic)? A "message" can be anything - it can be a sample, a bit, a combination of samples or bits, a set