[music-dsp] the time it takes to design a reverberator and related

2020-05-27 Thread Theo Verelst

My first home-studio reverberation constructions, both analog and digital, date from the
80s, and I've always been impressed with those wonderful high-grade musical products,
with their preparation for the speakers and the listening room, and those great
soundscapes coming from all kinds of professional tools.

Only recently have I been able to create some of that myself, but it isn't based on
endless tuning of basic reverb source code, so I can't give you information on that.

T.V.
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp


[music-dsp] the time it takes to design a reverberator and related

2020-05-26 Thread Theo Verelst

"I need some possibly quotable real world opinions and experiences on how
long stuff can take to design or develop"

Sounds like that's interesting. But why? Project management, funding, hobby scheduling,
historical insight, or...?

There's rather a different angle depending on whether you just want to start up Yoshimi /
ZynAddSubFX (an interesting Free and Open Source software synthesizer running on any
Linux), crank open the gverb send control and enjoy the reverberation making your playing
more grand and lively, or whether you are intrigued by the wonderful reverberation on top
records and want to build a signal path around, for instance, a Lexicon reverb, where you
strive for excellence.

There are a lot of technical/scientific reasons for different reverb designs, from echo
chambers to combined analog and digital tools on fast computers and DSPs. The basics
seem easy, but, like you indicated, it's actually a very complicated subject, especially
when you factor in music knowledge, but also if you consider sampling issues (for digital
reverb designs), signal compression/limiting/gating effects, mastering issues, the
differences between various reverb setups, etc.

For a live reverb, you're going to want something that your mikes respond well to. And
probably part of the design should be how the artificial reverb unit is going to
interact with the naturally present reverberation. Possibly (a common principle in
pro setups) you want to strive for a partial countering of the effect of the natural
reverberation, which calls for a bit more analysis than trivial filter rows and some
tuning.

Theo V.
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp


[music-dsp] FIR blog post & interactive demo

2020-03-19 Thread Theo Verelst

Maybe a side remark, but interesting nevertheless: filtering in the digital domain,
as compared with the good ol' analog electronic filters, isn't the same under any of the
important interpretations of sampled signals being put through any regular digital to
analog converter, by and large regardless of the sampled data and its known properties
offered to the digital filter.

So, reconstructing the digital simulation of an analog filter into an electronic
signal, through either a (theoretically, or nearly) perfect reconstruction DAC or an
ordinary DAC with any of the widely used limited-time-interval, oversampled FIR or IIR
simplified "reconstruction" filters, isn't going to yield a perfect equivalent of a
normal, phase-shift-based electronic (or mechanical) filter. Maybe unfortunately,
but it's only an approximation, and no theoretically pleasing-sounding mathematical
derivation of filter properties is going to change that.

It is possible to construct digital signals with hard-known givens about the signal,
which on a given DAC will 'reconstruct' to, or simply result in, an output signal that
approaches a certain engineered ideal to any degree of accuracy. In general, though, the
signal between samples can only be known through perfect reconstruction filtering
(taking infinite time and resources), and DACs that are used in studio and consumer
equipment should be thoroughly prepared for by pre-conditioning the digital signal
feeding them, such that their very limited reconstruction filtering is used in a way
that approximates certain output signal ideals to the required degree of accuracy.

Including even a modest filter in that picture isn't easy!

Theo V.
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



[music-dsp] An example video: C to FPGA programming

2020-01-11 Thread Theo Verelst

Thanks for the correction, I hope you guys weren't too disappointed by the
music demo video, indeed the corrected links are the ones I intended to give.

Hi Scott!

The complicated subject of a Silicon Compiler, in the sense of C to a working
co-routine in FPGA, is what interests me, and I wouldn't do much with FPGAs unless
I had to if it weren't for that possibility. It's quite possible to out-compute,
not just out-logic (so to speak), a modern full-blown desktop processor like an i7
with such a humble, few-watt FPGA chip. And then there's clever use of logic
on top of that, and if you want to take a look at, for instance, a cloud node like
the well-known AWS F1 compute instances, there's also the possibility of huge
FPGA-connected memory and, say, a hundred times more power for use with a C compiler.

My point is that the path actually works in practice, with a $99 board
(the Parallella) and the free Vivado (or Vitis) tools. The power achievable that
way can be used from normal C functions; it's quite advanced. Just like with C,
a systems programmer may get way more mileage out of knowing how to write an efficient
DSP procedure for a certain architecture, and even more so with FPGA logic: it's
quite interesting to see what parallel/pipelining constructions can be made
to work with the correct factoring.

Theo V.
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



[music-dsp] An example video: C to FPGA programming

2020-01-10 Thread Theo Verelst

Hi all

Maybe it's not everybody's cup of tea, but I recall some here are (like me) interested
in music applications of FPGA-based signal processing. I made a video showing a real-time
"Silicon Compile" and test program run on a Zynq board, using Xilinx's Vivado HLS to
create an FPGA bit file that initializes a 64k short-integer fixed-point sine lookup
table (-pi/2 .. pi/2), which can be used like a C function with argument passing by a
simple test program running on the ARM processors.
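
For those who haven't seen the HLS flow: below is a minimal plain-C sketch of the kind of
function involved (my own reconstruction, not the code from the video; the name
sine_lookup and the run-time init guard are mine). In the actual FPGA flow the table
would typically be filled at build time and the function body mapped to a block-RAM read.

    #include <math.h>

    #define TBL_BITS 16
    #define TBL_SIZE (1 << TBL_BITS)   /* 64k entries of 16-bit fixed point */
    #define PI 3.14159265358979323846

    static short sine_tbl[TBL_SIZE];
    static int   tbl_ready = 0;

    /* Fill the table once: index 0 maps to -pi/2, the last index to +pi/2,
     * values scaled to the full signed 16-bit range. */
    static void init_table(void)
    {
        for (int i = 0; i < TBL_SIZE; i++) {
            double phase = -PI / 2.0 + PI * (double)i / (double)(TBL_SIZE - 1);
            sine_tbl[i] = (short)lrint(32767.0 * sin(phase));
        }
        tbl_ready = 1;
    }

    /* The entry point: plain argument passing, one table read per call. */
    short sine_lookup(unsigned short index)
    {
        if (!tbl_ready)
            init_table();
        return sine_tbl[index];
    }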

The main point is the power that C compilation can provide the FPGA with, and to show the
use of the latest 2019.2 tools at work with the board; some may find that useful or
entertaining:

   https://youtu.be/Nel6QAvmGcs

Theo V
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] Sound Analysis

2019-01-14 Thread Theo Verelst

About the final accuracy of an FFT applied to recorded note sample files: if you have a
number of waves in your FFT interval, they will seldom wrap around exactly at sample
boundaries, and if you want an accurate match in your FFT, the FFT interval should match
a whole number of periods of the lowest frequency in your tone as closely as possible.

So you'd have to find out the number of samples per wave, and take a multiple of that
number of waves from your sampled file until the corresponding number of samples (matched
as accurately as possible in floating point) becomes close to an integer.

Then adjust the bin size, and hope that transients (there's all kinds of electronic
coupling going on) and disturbances (which do not necessarily fit in the same harmonic
analysis interval as the repeating waves) aren't too much of a problem.
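
As an illustration of the idea (my own sketch; the function name and the brute-force
search are mine, not from any particular library), picking the number of whole periods
whose sample count is closest to an integer could look like this in C:

    #include <math.h>
    #include <stdio.h>

    /* Return the number of whole periods k (1..max_periods) for which
     * k * fs / f0 samples is closest to an integer; *length_out gets that
     * rounded sample count, usable as the FFT interval. */
    static int best_period_count(double fs, double f0, int max_periods,
                                 int *length_out)
    {
        int    best_k   = 1;
        double best_err = 1.0;
        for (int k = 1; k <= max_periods; k++) {
            double n   = k * fs / f0;              /* samples in k periods   */
            double err = fabs(n - floor(n + 0.5)); /* distance to an integer */
            if (err < best_err) {
                best_err    = err;
                best_k      = k;
                *length_out = (int)floor(n + 0.5);
            }
        }
        return best_k;
    }

    int main(void)
    {
        int len = 0;
        int k = best_period_count(44100.0, 440.0, 64, &len);
        printf("use %d periods = %d samples as the FFT interval\n", k, len);
        return 0;
    }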

T.V.
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] 2-point DFT Matrix for subbands Re: FFT for realtime synthesis?

2018-11-08 Thread Theo Verelst

No matter how you go about this, the Fast Fourier transform will in almost every case
act as some sort of ensemble measurement over its length, and maybe do some
filtering between consecutive transform steps. Even if you continuously average in the
frequency domain, using per-sample sliding FFT frames, you're still measuring which
"bins" of the FFT respond to the samples in your signal, no more and no less.

Usually, if the math/algebra doesn't prove anything conclusive, no high-spun
expectations about the mathematical relevance of the result are in order...


T.V.
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] two fundamental questions Re: FFT for realtime synthesis?

2018-11-03 Thread Theo Verelst

It's a complicated subject when you fill in all the boundary conditions 
properly, isn't it?

Lots of frequency considerations look a bit alike but aren't mathematically equivalent.

Human (and I suppose animal) hearing is very sensitive to a lot of these issues in all
kinds of convoluted combinations, so finding a heuristic for some problem easily becomes
food for yet another theory getting disproven. Some theories (like Fourier) only hold up
mathematically, to close-to-infinite accuracy, when applied properly, and this is usually
where the problem lies.

It might help to understand why, in this case, you'd choose the computation according to
an IFFT scheme for synthesis. Is it for complementary processing steps, efficiency,
because you have data that fits the practical method in terms of granularity, theoretical
interest, example software, some sort of juxtaposition of alternatives, or maybe a
well-known engineering example where this appears logical?


T.V.
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] Antialiased OSC

2018-09-01 Thread Theo Verelst

Content-wise, you also need to consider what the terminology "anti-aliased"
means; technically, it means you prevent the double interpretation of wave partials,
which is usually associated with AD or DA conversion, in the sense that high-frequency
components put into an AD converter will be "aliased" because they mirror around
the Nyquist frequency and become indistinguishable from a lower, in-band frequency in the
input signal.
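
As a small illustration of that mirroring, here's my own sketch (not anything from a
library) of where a component of frequency f ends up after sampling at rate fs:

    #include <math.h>

    /* Fold frequency f into the 0..fs/2 band the sampled signal can represent. */
    static double aliased_frequency(double f, double fs)
    {
        double r = fmod(f, fs);                  /* fold into [0, fs)      */
        return (r <= fs / 2.0) ? r : fs - r;     /* mirror around Nyquist  */
    }

For example, aliased_frequency(30000.0, 44100.0) gives 14100 Hz, squarely in the audible
band.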

A more common thing to think about in software waveform synthesis, apart
from this principle (but then applied to the "virtual" sampling of a waveform), is
to consider the error of an actual DAC (digital-to-analog converter, like your
sound card has) as compared with a "perfect reconstructor", which would take
your properly bandwidth-limited signal (to prevent aliasing) and (given a very
long latency) turn it into a perfect output signal from your sound card.

The DAC in your sound card will not do this job perfectly, even if you're
perfectly anti-aliasing or bandwidth-limiting your digital signal. That's because
the sampling reconstruction theorem requires a very long filter, while
an actual DAC has a very short reconstruction filter.

One effect of this limitation is probably the most important to consider
for musical instruments producing sound which will be amplified to the higher decibel
domains: mid-frequency blare. Especially in highly resonant spaces, like those with
undamped parallel reflective walls, certain sound wave patterns tend to be amplified
through reverberation, causing a lot of clutter in the sensitive range of human
hearing, the middle frequencies (let's say 1000 through 4000 Hz). This "blare" becomes
louder because of various digital processing and DAC reconstruction ensemble effects,
and preferably should be controlled.

So, especially for serious "live" music reproduction, measures ought to be in place to
control the amount (and kind) of blare your software instrument produces, probably with
higher priority than the exact type and amount of "anti-aliasing" you provide.

Theo V.
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] Antialiased OSC

2018-08-22 Thread Theo Verelst

Kevin Chi wrote:

Hi,

Is there such a thing as today's standard for softSynth antialiased oscillators?



It's easier to know that for Open Source products, which use a variety of methods as
far as I checked, and nothing out there I've heard (probably including some of the
best-known packages) is "standard-worthy" in terms of deep understanding of the
underlying theory, or even made with quantitative error bounds in mind. Parts of some
hardware-based oscillator designs probably excepted.

There are a lot of ways to look at wavetables and their use, for instance the way I used
one quite a few years ago in a (for the time) advanced enough Open Source hardware
analog synthesizer simulation, mostly dealing with signal aliasing/reconstruction errors
(don't confuse the two) and the possibility of storing accurate waveforms that are hard
to compute in real time.

The handling of the harmonic distortion from interpolating in the wavetable, and its
subsequent effects on the rest of the signal chain, is often done in such a way as to
take a few parts of the relevant theory while ignoring the others, leading to a rather
dead and bland sound in many commercial products, simply because preparing the samples
for DA conversion is hard to do, even relatively simple interpolations cost work,
and it isn't easy to make a scheme based on wavetables that overall gives guaranteed good
results, including pitch bends, fast and slow modulations and some sort of consistent
sound feel.
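
To make concrete what "interpolating in the wavetable" means here, this is my own minimal
sketch of the simplest (linear) case; the residual error of exactly this step is one
source of the harmonic distortion mentioned above:

    #include <stddef.h>

    /* table: one cycle of the waveform, table_len entries.
     * phase: position in cycles [0,1), advanced by f0/fs per output sample. */
    static float wavetable_read(const float *table, size_t table_len, double phase)
    {
        double pos  = phase * (double)table_len;
        size_t i0   = (size_t)pos % table_len;
        size_t i1   = (i0 + 1) % table_len;          /* wrap at the table end */
        double frac = pos - (double)(size_t)pos;
        return (float)((1.0 - frac) * table[i0] + frac * table[i1]);
    }

Higher-order interpolation, or oversampling the table, reduces that error at the price of
more work per sample.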

It's possible to make a real design that tries to eliminate serious distortion and
works for a number of musical applications with a lot better sound, in terms of
high fidelity, but thus far that's above the reach of the people in this group,
to my knowledge.


T.V.

___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] Creating new sound synthesis equipment

2018-08-08 Thread Theo Verelst

pa...@synth.net wrote:

I think what we can all agree on is;

1) right tool for the right job
2) right level of knowledge to use the tool



Disagree; in many cases designing bullets is science, but there are competitions,
competing design criteria, and in fact, in some cases, appropriate "silver bullets".

For instance, when an FPGA board, cheaper than the CPU of a PC, beats the PC
in a practical sense, there's every reason to prefer that solution, especially
if the tools are getting more advanced than C compilers on a moderately
functioning PC multitasking platform.

Also, there are security issues, as has been rightfully mentioned; for instance,
a programmable device could have a hardware decryption unit (these have been around
for decades).

My point has been, in case that's not clear, that the trustworthiness of dedicated
solutions can be a lot higher, also in terms of forcing designers not
to take some standard software solution off the shelf, but to analyze
the music synthesizer properly, and learn to use the best, most advanced,
and educationally valid tools, when possible.

TV

___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] Creating new sound synthesis equipment

2018-07-26 Thread Theo Verelst



The FPGA could do a faster version of MIDI with accurate time stamping, which CPUs don't
always do easily, and the hard-coded FPGA stuff would never be interrupted by kernel
activity, etc.

Also, FPGAs can be really quite fast for important computations; I've used a $100 Zynq
board to demo that here a while ago, where it beats a decent i7 in double-precision
trigonometric computations flat out.

With a modern environment like the one I've tried, Vivado + Vivado HLS, which is free
to use, compiling from C code directly to FPGA blocks is possible, and there are
(I happen to have used Xilinx, but there are others) reasonably fast communication
primitives available to communicate with ARM programs running on Linux.

I wouldn't say, from trying out some past versions of all this, that it works perfectly
and easily, but the potential of it, especially for computations involving fine-grained
parallelism and synchronization patterns, is quite good and leads to a block approach
that can be powerful in comparison with semi- or truly parallel programming.

T.V.
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



[music-dsp] Creating new sound synthesis equipment

2018-07-22 Thread Theo Verelst

Hi DSPers,

I would like to reflect a bit on creating (primarily music) synthesis machines,
or possibly software, as a sort of dream that some people have had
since, let's say, the first (mainly analog!) Moogs in the 60s. What is that idea of
creating a nice piece of electronic equipment to create blips and mieaauuws,
thundering basses, imitation instruments, and, as has recently been "relived", all kinds
of more or less exciting new sounds that maybe have never been used in music before?
For some it's like a designer's dream to create exactly the sound they have in
mind for a piece of musique concrète; for others it's maybe to compensate for their
lack of compositional skills or musical instrument training, so that somehow, through
the use of one of those cool synthetic sounds, they may express something which
otherwise would be doomed to stay hidden, and unknown.

Digital computer programs for sound synthesis are in some sense thought to take
over from the analog devices and from digital sound synthesis machines like "ROMplers"
and analog synthesizer simulations. It's not true that this has become the decisive
reality thus far: there's quite a renewed interest in those wonderful analog synthesis
sounds, various manufacturers recreate old ones, and some advanced ones make new ones,
too. Even though it is realistic that most folks at home will undoubtedly listen most of
the time to digital music sources, at the same time there's still a lot of effort in the
analog domain, and obviously a lot of attempts at processing digital sound in order
to achieve a certain target quality or coolness of sound, or something else.

Recently there have been a number of interesting combinations of analog and
digital processing, as well as specific digital simulation machines (of the analog
type of sound synthesis) like the Prophets (DSI) and the Valkyrie (the Waldorf "Kyra",
IIRC), based on FPGA high-sampling-frequency digital waveform synthesis, and some
others.

Myself, I did an Open Source hardware AND software digital synthesizer design based on
a DSP ( http://www.theover.org/Synth ) over a decade ago, before all this was considered
hip, and I have to say there's still good reason for hardware over software synthesis,
while I can of course understand that it is likely computers will get better and
better at producing quality synthesis in software. At the time I made my design, I
liked to try out the limits that mattered to me as a musician, such as extremely low and
very stable latency (one audio sample, with accurately timed MIDI message reading in
programmable logic) and a straight signal path (no "xruns" ever, no missed samples or
resampling ever, no multiprocessing quirks, etc.). My experience is that a lot of people
just want to mess around with audio synthesizers in a box! They like sounds and turning
some knobs, and if a special chip gives better sound, for instance because of higher
processing potential than a standard processor, they like that too, as well as the
absence of strange software sound- and control-interface latency.

I'm very sure there are a lot of corners being cut in many digital-processing-based
synthesis products, even if the makers aren't too aware of it, for instance related to
sample reconstruction reality as compared with idealized design theories, as well as a
hope for congruency between the Z transform and a proper Hilbert transform, which is
unfortunately a fairy tale. It is possible to create much better sounding synthesis in
the digital domain, but it's still going to demand a lot of processing power, so people
interested in FPGA acceleration, parallel software, supercomputing, etc., might well have
a hobby for quite a while to come, in spite of all kinds of ads about music software
suggesting perfection is within reach!

Theo V
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] sound transcription knowledge

2018-07-10 Thread Theo Verelst



For artistic purposes, there are a number of professional signal recognition
tracks, but they are too complicated, and successful "light organ" applications
or Rock Show lighting plans are directly correlated with the artistic intentions
in the music production.

Technology students with some experience on an audio platform of your choice should
be able to create most of the main tasks you're asking about; some of them are
standard things. You could take a look at some of the Open Source Linux applications
such as oscilloscopes, VU meters and spectrum meters; they all exist in various
forms, and the advantage of Open Source is that you can modify them.

T


___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] Playing a Square Wave

2018-06-17 Thread Theo Verelst



First, let me agree with the notion that, through interference of the ultrasonic
elements, audible artifacts can come into existence through reverberation. The early
reverb can easily transfer some energy into the audio band, and the (partially
non-linear) reverb tail of any non-dead listening space will even out frequencies
(including the ultrasonic components) and will easily concentrate mid-length reverb
components in standing waves, which can be driven by higher-than-20kHz harmonics as well.

Moreover, the speaker will have a big impact on the sound of the square wave: the long
horizontal parts will be strongly influenced by how low your subwoofer goes and how
flat it is, and the rise time will be influenced by the amplifier, the impedance of the
tweeter and its physical properties. In my case, for instance, I use a ribbon tweeter
with a fast enough rise time and a very neutral amplifier, with a crossover filter at
8kHz, so that it essentially reproduces only the top octave of the audible frequencies.
The mids must be very flat to make the rest of a "perfect" input square come out of your
speakers neutrally, which is hard to do, especially digitally (the reconstruction filter
will at the very least show phase lag in the mid-frequency range), unless you know how
to prevent that.

Limiting the frequency components in order to prevent aliasing is more or less required
per the definition of reconstruction theory; *how* you do that is of course another
discussion. For instance, you could add all harmonics with high mathematical accuracy
according to the curve of a perfect 2nd-order filter, and then cut off all the harmonics
above a little under half the sampling frequency (so including natural-looking phase
shift above a certain frequency), which might look more natural.
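
As a minimal sketch of that suggestion (entirely my own illustration; the cutoff at 0.45
times the sample rate and the function name are arbitrary assumptions), one sample of
such an additively built, 2nd-order-filtered square could be computed like this:

    #include <math.h>

    #define BLSQ_PI 3.14159265358979323846

    /* One output sample at time t (seconds) of a square wave of fundamental f0,
     * built from its odd harmonics, each shaped by the magnitude and phase of a
     * 2nd-order low pass with cutoff fc and quality factor q, and truncated a
     * little below the Nyquist frequency of sample rate fs. */
    static double bl_square_sample(double t, double f0, double fs,
                                   double fc, double q)
    {
        double y = 0.0;
        for (int n = 1; n * f0 < 0.45 * fs; n += 2) {   /* odd harmonics only   */
            double r   = (n * f0) / fc;                 /* normalized frequency */
            double re  = 1.0 - r * r;                   /* 2nd-order low pass   */
            double im  = r / q;
            double mag = 1.0 / sqrt(re * re + im * im);
            double ph  = -atan2(im, re);
            y += (4.0 / (BLSQ_PI * n)) * mag *
                 sin(2.0 * BLSQ_PI * n * f0 * t + ph);
        }
        return y;
    }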


*How* you're going to make sure, let's say right at the output of your chosen DAC, that
your digital square plus DAC preparation is going to be as close to a "normally pure"
square as possible isn't really the issue, but if you do nothing but a simple curve, my
statement is that you might not get harmonic distortion down as far as I would deem
needed for a HiFi response.

T.
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



[music-dsp] Playing a Square Wave

2018-06-13 Thread Theo Verelst



There's this preoccupation I've had since the advent of going "digital", let's say since
I heard music being played on CD in the early 80s. I grew up with access to electronics
equipment that would generate "square waves" in some sort of analog fashion, including
originally "digital" chips, even driven from frequency-stable crystals and so on. In fact
I built my own organ/synthesizer based on a top-octave synthesizer chip around 1980, to
which I added CMOS divider chips to get well-symmetrical, pure and pretty undistorted
square waves feeding an analog mixing rail construction, and I must say (I was a
teenager) I recall quite well the different sounds, the feel if you like, of all those
different square waves by themselves and of some of the filter and modulation constructs
I made.

Now, like everybody else, I'm used to listening to a lot of audio in some form of digital
source format, ending up at one of the varying types of digital-to-analog converters, to
enjoy digital music on, for instance, a smartphone, an HDMI-based digital stream
converted by a TV/monitor, a very high quality DIY-kit-based converter setup, standard
computer and Blu-ray player outputs (both not bad), known-brand studio quality USB
ADC/DAC units (Lexicon, Yamaha, and a Burr-Brown/TI chip based DIY kit) and, finally,
some variety of digital music synthesizers (among others a Kurzweil and a Yamaha).

The simple question that has often forced itself on me, as I'm sure some can relate,
after having been used to all those early signal sources, including a host of analog
synthesizers I had in the past, and a lot of music in various analog forms from standard
pop to G. Duke and Rose Royce, to mention a few of my favorites from an earlier era, is:
how can it be that such a simple wave as the square wave, just two signal levels with a
near-instantaneous jump between them, can be so hard to reproduce digitally, if you
listen with a HiFi system and some normal musical signal discernment?


The answer is relatively simple: a digital square wave for musical application comes out
of all current standard DACs with imperfections that I recognize and have an immediate
form of musical dislike about. Not that a software synth can't be turned on, played and
provide some fun with square waves; I'm sure it can, to some degree, be fun and played
with in some music, but for sound enthusiasts, all that digital signal processing often
comes across as same-sounding and by far not as musical as I remember it can be.

Is it possible to do something about that? I'm a university EE, so in my official
background knowledge there's enough to easily understand some of the reasons for these
sound limitations. Solving all of them will prove to be very hard, given standard DSP and
normal current DACs, so there is that. To begin with the understanding of *why* such a
simple "digital" square wave doesn't sound warm and nicely flutey from a digital system
in many cases: the wave has to be "rounded" to fit the sample timing, and the DAC
essentially doesn't necessarily "know" how to create those up and down signal edges with
accurate timing. So, for instance, a standard 1kHz square wave coming out of a CD-rate
(44.1e3 samples per second) DAC will have maximum up and down edge timing errors on the
order of 1000/44100 * 100% ~= a few percent. That doesn't sound like much, but all the
harmonics might be involved, and for a High Fidelity system, an error of 1/10 of a
percent, nowadays just as in the early days of tube HiFi, is considered quite noticeable
or even unacceptable.


Can a DAC do a better job? Yes, but not by just feeding it a pure square wave rounded to
the samples. One could make use of serious oversampling and a much higher-rate DAC; for
instance, I've tested a very high quality DAC with an adjustable type of built-in
"oversampling" filter (low-pass or short, hard window reconstruction) at almost 10 times
CD rate (384k s/s), and surely this makes the sound more acceptable. The monitoring and
pre-amplification, as well as the analog (electronics-based) DAC filtering, will matter
for the sound too.

Now, recently I've worked on quite a different type of problem, not important for this
posting at the moment, which as one of its (complicated) side effects can add components
to a digital signal that try to make use of the (limited) DAC filtering, usually some
internal up-sampling ("oversampling") with either a built-in DSP FIR (some short impulse
with at least some low-pass qualities) or IIR (some standard low-pass response), to
create a purer-sounding square wave approximation from a frequency-limited digital wave
source.

Has anyone else worked on this to some extent?

T.V.
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] Real-time pitch shifting?

2018-05-22 Thread Theo Verelst
Interesting topic, since it became a bit of a hype when computer audio processing
programs started to include "stretching" as an option.

Two main thoughts I haven't heard yet: the real-time matter, which in the normal
digital-domain sampling frequency ranges poses the question of the length of the
resampling kernel, and the impossibility of "getting it right" in terms of waves becoming
fundamentally cut short (or long) when changing pitch. That is, the resampling filter has
an accuracy which is fundamentally limited by the length of the sinc(-like) "perfect"
resampling kernel, and the required delay for accurate resampling might be considerable!

The time stretch operator isn't usually a bijection, because for many commonly occurring
stretch parameters there's going to be rounding of the number of waves being used for
certain harmonics. Simply put, if you shift a sine wave of 100 full waves to 1/3 of its
length, you're either getting a huge transient at the end, or you do some sort of
smoothing. That's fine, but it's a fundamental matter, not a matter of the inaccuracy of
the method.

Also, for programs meant for the human voice, there might be issues because those
programs might estimate the voice parameters in order to change pitch, for instance so as
to preserve the perceived size of the vocal tract (to prevent a Mickey Mouse sound). Or,
when used on a sound with, let's say, FM components, like a piano tone, you might want to
analyze that as well, to include it in the re-synthesis of the tone(s). Of course that
would seem hard.


TV.
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] bandsplitting strategies (frequencies) ?

2018-03-27 Thread Theo Verelst

gm wrote:

What are good frequencies for band splits? (2-5 bands)


For standard mastering applications there are norms, for binaural and Equal Loudness
Curve related reasons. The well-known PC software probably doesn't get there, but you may
want to tune those frequencies based on the following criteria:

  - the type of filter (FIR/IIR/FFT, resonant or not, congruent with standard linear
    (analog) filter constructions or not) and the associated impulse response length
  - the properties of the filter impulse response, and of combinations of them, during
    standard signal reconstruction (at the DAC) or up/down sampling
  - the energy distribution for white/pink noise or for standard signals in your
    specific application
  - the function of the application, in terms of being somewhere on the line from slight
    clipping prevention for High Fidelity, via radio mastering with significant
    compression, to a wild tool used as a very significant signal alteration device


Theo V.


___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] Reading a buffer at variable speed

2018-03-08 Thread Theo Verelst

I cannot tell from the problem definition whether it's a matter of
understanding the frequency at a certain point of the resampling
in mathematical form, presumably some solution for the constants in a
standard exponential formula like a*e^(b*t+c) with two points on the graph
of such a sweep given, or whether it's about some form of implementation
problem, like how the number of samples comes out exactly, what the
integral of the function is, or rounding issues.
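
If it's the first reading, a minimal sketch (my own; the function and variable names are
arbitrary) of solving the constants from two given points is below. Note that a and c are
not independent: only A = a*e^c and b can be recovered from two points.

    #include <math.h>
    #include <stdio.h>

    /* Given two points (t1,y1) and (t2,y2) on y(t) = A * exp(b*t),
     * recover A and b. */
    static void solve_exp_sweep(double t1, double y1, double t2, double y2,
                                double *A, double *b)
    {
        *b = log(y2 / y1) / (t2 - t1);
        *A = y1 * exp(-(*b) * t1);
    }

    int main(void)
    {
        double A, b;
        /* example: a sweep passing 100 Hz at t = 0 s and 1000 Hz at t = 2 s */
        solve_exp_sweep(0.0, 100.0, 2.0, 1000.0, &A, &b);
        printf("y(t) = %.3f * exp(%.4f * t)\n", A, b);
        return 0;
    }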

T.V.
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] What happened to that FPGA for DSP idea

2017-10-10 Thread Theo Verelst
What was that about? I did chip design elements back in the 90s; what does that have to
do with making an FPGA design that at a high level verifies as interesting because it
even seems to out-compute a Pentium? I've done things with both DSPs and FPGAs, and it
strikes me that the way to go for intelligent algorithms can well include FPGAs nowadays,
because it's becoming easier to "compile to silicon".

I was reading about High Bandwidth Memory stacked on FPGAs, making even PC memory
bandwidth look pale in comparison; I think it's an interesting development that can be
practically experimented with already.


T.V.
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] What happened to that FPGA for DSP idea

2017-09-23 Thread Theo Verelst
Glad to see there's good interest in the subject; I'm sure it will become more important
with time, because the traditional computer architectures are relatively ineffective for
a lot of the stuff FPGAs can do.

Now, I've put the Vivado high-level design tools (free) on the fastest computer I use,
even on an SSD for testing, and looked at the examples some more to "get it". There's a
lot to get still, even for simple things like making a C function with an array lookup,
and it's clear the software is still being built up: for instance, such a C function will
work, and can be pipelined to a lookup with 2 cycles of delay and a 1-cycle repeat
interval simply by using traditional C, but as soon as I changed the example's source
code from a "256" short array to any other memory size, the implementation would take
like 256 or even 1000 clock cycles :) .
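
For context, the kind of function meant is no more than this (my own reconstruction, not
the shipped example; the pragma spelling is from the Vivado HLS documentation as I recall
it, so treat it as an assumption):

    /* Ordinary C that the HLS tool can map to a block-RAM read. */
    static const short table[256] = { 0 /* ... filled with real data ... */ };

    short table_lookup(unsigned char index)
    {
    #pragma HLS PIPELINE II=1   /* ask for one new lookup per clock cycle */
        return table[index];
    }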


Also, I feel like looking a bit at stuff that (out of necessity) interested me as a
student, like whether it makes a difference to write a standard formula, like
exp(x)*sin(x), with separate function blocks for the two library computations. In
single-precision floating point, a design comes through the C-to-Verilog-plus-dedicated-blocks
flow easily enough (though it takes a bit longer than simpler examples) and fits
my small Zynq 7010, but now it would require a clock a little lower than 100MHz (which
never happened with the other, even more complicated examples, which usually could run
higher).

I had hoped to put my Bwise->Maxima->Fortran->C waveform formulas straight through the
compiler, which is really big to try out, and that's hopeless unless I make intelligent
coprocessors for the basic function elements and somehow make my own connections and
schedules. It's still cool, though, that on more or less these chips (mine was just a bit
too small for this software version's output) I could run double-precision trigonometric
functions faster than a 3GHz i7 with 4 running cores could keep up with, using the same C
code (if I didn't err in the data somewhere, but I don't think so).


T.
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



[music-dsp] What happened to that FPGA for DSP idea

2017-09-14 Thread Theo Verelst

Hi all

Well, somehow it's alive and kicking, and for good reason: lots of digital signal
processing can be done efficiently on an FPGA, and it has at least one advantage, which
is that the digital signal path design is possibly more sensible (the structure is fixed,
maybe like dedicated microprocessors could be) and less influenced by computer properties
like caches and unknowns (memory access latency, parallelism constructs, dynamic pipeline
issues).

Some people (I recall there are at least some here, too) also find it just fun, and
actually, in my opinion, it deserves a place because it's also very efficient, in ways
that computer and DSP chips seldom are: from bit-level fast responsiveness to pipelined
architectures on programmable logic ("Field Programmable Gate Arrays") that outperform
heavy computers or DSP machines.

So I looked at a great tool, which I think I've mentioned here, to make use of an
accessible and readily available FPGA technology (Xilinx Zynq chips, for instance on the
$99 Parallella board I bought from Kickstarter years ago): the latest Vivado HLS (from
Xilinx; I'm not affiliated with them, just like I'm not with Intel when using PCs), a
design environment which allows you to quickly (...) design an FPGA co-processor for the
Linux ARM-type processors on the same Zynq chip, FROM C CODE.

Lots of catches right there, especially about "fast" and "easy", which is only true for
some problems and after you've done all the work, as is often the case when implementing
some processing algorithm on a (new) technology. It's fun, though, that in principle you
can take a C function and turn it into a processing block on the cheap FPGA, usually
running at at least a 100 MHz clock frequency. Too much to go into now, but it's not
overly hard to get an example started if you're familiar with most of the concepts; I'll
make a video about that for people who like that. But now, there's the question of *why*
anyone would be interested in such technology?


In short, there's good motivation when a little board can out-compute a PC on essential
computations, like a double (!) precision cosine computation of given good accuracy, with
a short pipeline. Personally, I also like to make things like a DAC connection and a
(high-speed) keyboard connection lightning fast, which isn't really easy on a PC with USB
and so on, but for the moment I just downloaded the freely available (but not Open
Source) "Vivado" design environment on a Linux computer (running Fedora) and played
around with some of the examples to see what computations for musical instruments could
be done with it.

I think it is insightful for interested persons to have a little impression of some of
the criteria and practical achievements involved in the latest wave of possible FPGA
music-related computations (FPGAs have been mainstream EE tools for decades already,
probably present in your digital mixer and synthesizers, etc.).

For basic waveforms and all kinds of other musical computations, one could use a sine
wave computation, so that's a good example to start from, and I found out the following,
after noting that a simple FPGA (like I said, on boards of far less than $200) can easily
do serious math with an instantiated multiplier, adder, etc., with this kind of tool,
including in serious floating point formats. So I made an example where I took a
double-precision cosine function, connected it over a (not blazing fast) interface with
the ARM Linux cores on a Zynq 7010, and could at least create an actual FPGA programming
file that could be FTP-ed to the little board from the PC running Vivado and subsequently
(without rebooting) be loaded into the FPGA fabric. It worked accurately (closer than
10e-16 compared with double-precision C computations on the Linux cores) and reliably, in
practice. I've reported this on Parallella.org (the site is down at this moment, but it
can be searched there, if you have a login), including the necessary design files to try
it yourself.

So there was an upgrade to the design environment, and suddenly it reported a much faster
"double cos(double x);" function mapping to FPGA, with a pipeline length of under 30
clock cycles and full (one result every clock cycle) pipeline activity at over 100MHz,
which means a double-precision cosine computation, with the accuracy of a computer, every
10 ns. That's faster than most PCs can do, so for such a humble, low-power chip, that's
cool. There's a catch, though: it didn't fit in my chip. Argh.

Here's some data I gathered (this is exploring the 1st pass of the design software; the
actual implementation could differ, but the design I actually tested on the chip matched
OK), without the numbers, because I seem to recall there's some form of "no benchmark"
clause in the use of the software: only what ideas are possible to think about when
contemplating that "organ on a chip" or "1000 free-running harmonic oscillators with

Re: [music-dsp] Sampling theory "best" explanation

2017-09-08 Thread Theo Verelst

So when is a "system", or better put, its mathematical model, a linear system?

In a way the scalar high-school definition suffices: when the inputs are added, the
result must be the same as when the inputs are processed separately and the outputs
added, and a multiplied input gives a multiplied output. Simple enough, though I'd prefer
the integral definition, because then it would be a proper definition for the question at
hand.
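
A trivial numerical check of those two properties, purely as my own illustration with a
made-up stand-in for the system under test:

    #include <math.h>
    #include <stdio.h>

    static double process(double x) { return 0.5 * x; }   /* stand-in "system" */

    int main(void)
    {
        double x1 = 0.3, x2 = -0.7, k = 2.5;
        double additivity  = process(x1 + x2) - (process(x1) + process(x2));
        double homogeneity = process(k * x1)  - k * process(x1);
        printf("additivity error : %g\n", fabs(additivity));
        printf("homogeneity error: %g\n", fabs(homogeneity));
        return 0;
    }

For a real system with memory, the same check has to be done on whole signals rather than
on single samples, which is why the integral definition is the proper one.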


The problem is: is your resampling and filtering going to be "linear" when the samples
have shifted a bit? If you resample by having a second AD converter running on the same
input signal, but shifted by, say, half a sample, there isn't an easy way to correlate
that sampled signal with the first, unless you do a lengthy sinc-based resampling
(preferably with error analysis).

From that point it becomes a difficult process of explaining theory, it seems. Then
there's the idea of linearity in the digital signal processing domain: do two filters
applied to a set of samples satisfy the "law" for LTI?


T.V.
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] Sampling theory "best" explanation

2017-08-28 Thread Theo Verelst

Nigel Redmon wrote:

Well, it’s quiet here, why not…



Right, a good reiteration never hurts! I quickly read through it and find your
explanation fine; it's not right to expect everybody to be theoretically sound and solid
up to the level of mathematical proof, but I'm a strong proponent of preventing the main
errors:

   - the main definitions and theorems must be solid and phrased unambiguously
   - the understanding created should not lead people astray in divergent directions

It's like this: it's good to know there's a mathematical theory that uniquely, like a
proper bijection, relates samples to the analog signal, under the proper and necessary
conditions.

Then there's the practical side: can an existing ADC create a file with samples that can
be enjoyed back in their analog form at a high level of fidelity, and if so:

   - how
   - at which computational/electronic cost
   - for which (sub-)class of "CDs" or other consumer digital signal forms
   - with qualifiable and quantifiable error?

That's for engineers who want to really work in the subject, too, but a bit of an idea of
it is something a lot of people will appreciate.


Theo
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



[music-dsp] Linux audio sync and conversion

2017-08-17 Thread Theo Verelst

Hi all,

A little venting today about something I've worked on as a sideline, while doing an
interesting test with phaser/filter banks and different-sample-rate multi-compression for
extracting much better reconstructable audio from A-grade music.

As most will know, I like, to some extent, running digital audio processing prototype
setups on Linux, using Ladspa plugins and some more programs, which are available even on
the humble Raspberry Pi and such, with "Jack" (the jackd daemon) streaming audio
processing, connected to the ALSA "sound system". This works well, in that you can take
for instance a USB Audio Class 2.0 audio device (in my case something like a Yamaha
MGxxXU series mixer, or an XMOS-based high quality, ground-separated DIY audio device),
connect the Jack server to the ALSA device with the proper settings (sample rate, buffer
size and number of channels), and you'll be able to use Jack-based processing (for
instance with "jack-rack") where you have perfect sync with the audio card/device, and no
sample conversions or sample clock issues.

This way, you will also know (in most cases), while pushing the processing limits such
that the processor gets hot with the amount of audio processing, when so-called "x-runs"
are taking place, indicating that your computer is starting not to keep up with the
amount of audio work. So far, so good.

Now, I have a need for multiple sampling domains, in the sense that I want parts of my
processing to take place at a 96kHz sampling rate, some at 48.0/44.1kHz and some at 192
or even 384 (the highest I've tried, and even an I-3930 at 4.5GHz runs out of juice at
some point that way), with specific rate conversion types in between. Now, for instance,
there's a tool (for those interested, called alsa_out/alsa_in) that allows you to connect
from a Jack domain running at a certain rate to an ALSA sound device running at another
rate, and do a sort of PID-controlled, stretch-based conversion of quality 1 through 4
(heavy single-core load). For connecting a sound card to a lower-frequency sampling
domain that works fine.

Now, I've used a virtual "Loopback" device with this tool as well, and audio
player/playback with streaming conversion, sync-based and other offline file conversion,
and even analog methods (quality DAC out, possibly through a filter, to quality ADC in),
but I'd like to easily set up a few Jack-based sampling domains, with exactly specified
live conversions (back and forth) in between, all synced to one (1) specific clock,
either a software clock or an hpet clock or (preferably as well) a single high quality
USB audio clock. That appears not to be provided for in the given software...


T. verelst
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] tracking drum partials

2017-08-02 Thread Theo Verelst

Thomas Rehaag wrote:

Dear DSP Experts,

can anybody tell me how to track drum partials? Is it even possible?
What I'd like to have are the frequency & amplitude envelopes of the partials 
so that I
can rebuild the drum sounds with additive synthesis.



Hi

In theory, a little like using MPEG compression on a drum track, nothing is in the way:
you can do a frequency/amplitude encoding of "partials" from drum sounds.

I recall that when I was 'shopping around' for an ((Electrical) Network Theory) MSc
subject in the 80s, there was some work in the literature about such things being done or
considered, because sample space in drum computers was limited and expensive. The idea
that some of the sound path of a decomposition into partials would also magically
coincide with interesting moments of actual or electronic drum sounds appears not to have
been very true. So it wasn't a great idea after all, most likely.

Of course it's fun to make something of a digital percussion machine, I concur with that
desire, but from my experience with sound design and studio effects I know it's a very
complicated thing to project a working, decent and pleasant percussion sound out of your
digital monitors, so it would be interesting to do some matching of sine components and
exponential dampings, but exceeding that nice electronic organ drum sound from the 60s
might be quite some work!


Theo V.

___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



[music-dsp] A 1/3 octave multi filter limiting based noise reduction setup

2017-06-20 Thread Theo Verelst

Hi all,

Recently I've gotten some work done on an experimental noise reduction setup using the
Free and Open Source Jack/Ladspa tools under Linux, at 192kHz and with good enough
quality results that I think some other people might want to try it out.

The idea is that when using microphones, for instance, in an environment where there are
background noises of varying frequencies, either for recording or for sound
reinforcement, I don't want to have to use the various standardly available "noise gate"
effects to filter out disturbances or keep the signal quiet in silent passages. Why?
Because usually a noise gate is bad news for the sound quality, doesn't remove noise once
the gate has opened, cannot be set with a very low gate threshold in the face of serious
background noise levels, and it soon ends up having a detrimental pumping effect on
subsequent studio processing tools, as well as creating an envelope-response repetition
effect that gets amplified and soon becomes boring as well.

After having tried out (in the past; I might have reported about that here)
multi-compression and multi-gating approaches at up to 1/3-octave resolution, I thought
I'd try a side-compensation approach that doesn't require sending the main signal through
a filter bank, because that would give unacceptable phase distortion for my serious
recording needs.

So the approach I prototyped last week has 30 frequency band filters, automatically
created to cover the audio spectrum (example Tcl scripts are in the download below:
"createbands.tcl" creates a previous version of the set of band filters for jack-rack,
and "start30.tcl" starts them up automatically with a version of jack-rack I made that
allows scripted window placement), which I fitted with a look-ahead hard limiter and a
trivial network. The network is: an input rack (empty at the moment, maybe add volume
control), an output rack (output volume control/limiting, additional gating?), a delayed
signal path (sample-for-sample delay, no signal change, except for a 30kHz 1st-order low
pass which matches the combined limiter outputs' low pass), a 30-band fan-out input
driver (essentially a 60dB signal boost) going to the individual filter/limiter racks,
and a gatherer/summer of their outputs which contains a 60dB attenuator and, obviously, a
polarity inverter.

So for low signals, the limiters remain inactive, and the filters, combined with their
delays and the parallel main path delay, damp the signal at all frequencies, such that
effectively, at the output, there is a lot less noise. As soon as a limiter starts to
limit a particular filter's signal, the difference between the side path and the main
(accurately delayed) signal path lets the surplus of the signal in that frequency band
pass through.
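
In condensed, per-sample C the compensation principle looks like this (entirely my own
sketch: the stand-in band "filters", the single shared threshold and the omission of the
per-band delays and the 60dB boost/attenuation are simplifications of the real jack-rack
network described above):

    #define NBANDS 30

    /* Trivial stand-ins so the sketch compiles; the real racks hold Ladspa
     * band filters and look-ahead hard limiters. */
    static float band_filter(int band, float x) { (void)band; return x / NBANDS; }
    static float hard_limit(float x, float thresh)
    {
        if (x >  thresh) return  thresh;
        if (x < -thresh) return -thresh;
        return x;
    }

    /* One output sample of the side-compensation network. */
    float denoise_sample(float in, float limit_threshold)
    {
        float side = 0.0f;
        for (int b = 0; b < NBANDS; b++)
            side += hard_limit(band_filter(b, in), limit_threshold);

        /* Below the thresholds the side path reproduces the (delayed) input,
         * so subtracting it cancels the low-level noise; above threshold the
         * limited side path stops growing and the surplus passes through. */
        return in - side;
    }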


I needed to individually set the delays for the frequency+limiter bands to compensate
for the differences in effective filter delay per frequency, for which I used a little
measurement script with a band noise generator, which is included in the download zip.

An example of one of the 30 jack-rack bands:
http://www.theover.org/Dsp/Multilimitdenoise/scrdmp_d15.gif

This is the setup I used to successfully record some stuff using a Yamaha MG16Xu mixer's
audio interface, with much less pronounced noise as a result, and no serious audible
signal degradation. Additions to the approach are clearly possible: maybe I need
individual gain controls, the filters could be analyzed more accurately, and of course a
side-chain gate with an inverted control signal could be used at higher volumes to
squelch the filter bank compensation signal altogether.

Also, I think it would be possible to use more logical reasoning about which filters,
which compensation signal paths and which limiter control signals could be used. Might be
nice for an FPGA version...

Anyhow, for those interested in having a look or trying this on their Linux audio
workstation, here's a zip file with the files required to start my experimental setup up:


   http://www.theover.org/Dsp/Multilimitdenoise/mulimdenoise1.zip

I've gathered up some main tips for working with command line controlled Jack/Ladspa audio 
tools here: http://www.theover.org/Dsp/Multilimitdenoise/linuxtricks.html .


Greetings,

 T. verelst
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



[music-dsp] A practical note on using the Ladspa "plugins"

2017-06-02 Thread Theo Verelst

Hi

While designing and testing an automatically created filter/multi-compressor
bank of 30 "jack-rack" holders of (Linux) Ladspa DSP elements, I was reminded of
something I've noticed before. Some audio effects use a lot of CPU resources 
when
idle, or in other words when no signal input is present.

The way I work is to have Jack run its standard 32-bit floating point audio, and having
a lot of these racks suddenly use half a processor thread of the CPU apiece has two
negative side effects: the CPU gets hot (when the clock is maxed out, even too hot), and
there are x-runs (audio processing graph run failures) which, even when the signal
returns, take a while to disappear before normal processing resumes.

I tested one solution to this problem (there's another: check out the (Open) Source code
and fix the problem there), namely: inject a tiny -160dB noise signal into the signal
graph, which prevents it from going into overload mode. So I made another jack-rack with
a noise generator, adjusted it to -80dB, limited the signal and attenuated it another
80dB, so that none of the signal processing I work with has serious trouble with the
added noise. This appears to work: the processor load remains constant in the absence of
signal.
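
The same trick in plain C, outside jack-rack, would look roughly like this (my own
sketch; a common explanation for this kind of idle CPU load is recursive filters decaying
into the very slow "denormal" float range, which the tiny noise floor prevents):

    #include <stdlib.h>

    #define TINY_NOISE_GAIN 1e-8f    /* roughly -160 dBFS */

    /* Mix an inaudible noise floor into a block of samples before processing. */
    static void add_noise_floor(float *buf, int nframes)
    {
        for (int i = 0; i < nframes; i++) {
            float r = (float)rand() / (float)RAND_MAX * 2.0f - 1.0f;
            buf[i] += TINY_NOISE_GAIN * r;
        }
    }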


Theo V.
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



[music-dsp] Curves from a step function

2017-03-13 Thread Theo Verelst
A little thought to consider in the context of a lot of digital algorithms for audio: if
one takes a step function (it doesn't matter exactly how, but say a single, unipolar one
for the sake of simple argumentation), like "0" before sample time == 0 and 1 after, how
does it get turned into a curvaceous signal in commonly occurring DSP situations?

In a neat oscillator design of some kind, it might get bandwidth-limited to a bit under
the Nyquist frequency to make sure the signal in the digital domain can be reconstructed
into a continuous signal properly, either theoretically or practically after
digital-to-analog conversion. That filtering, or signal construction using
bandwidth-limited components, is bound to make it curvy in one way or another.


Obviously a lot of processing steps that involve some form of IIR or FIR filtering are 
going to make our step function (harmonic frequency components limited or not) into a 
"signal curve", all the more smooth when the filtering frequencies are lower and/or the 
number of simulated poles is larger.


Then there are the simulated coupling capacitors (actually filters), as well as real
ones in your DAC, preamp and monitor amplifiers; possibly some up-sampling filters in the
signal path will also introduce a nice curviness here and there around the step point.

Finally, the DAC's anti-aliasing filtering (probably best understood as analog, real-time
electronic low-pass filters with a small amount of resonance) and built-in reconstruction
filtering (digital and oversampling: ranging from very little, over short IIR, to
medium-short FIR and possibly others) is going to make some sort of curve from your step;
in fact, inverting such filters isn't easy to do with high accuracy.

I'm sure there are more commonly occurring curvature-inducing processing steps, besides
the obvious graphics-related tools that have been around for a long time, like splines
and interpolation polynomials, which however aren't related to sampling or filtering
congruence with the analog domain, but are usually based on mechanical and geometric
considerations.


Theo V.
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] ± 45° Hilbert transformer using pair of IIR APFs

2017-02-09 Thread Theo Verelst
Thinking about it, I recall there was some from of transform used for frequency/time 
analysis for instance for radar problems (maybe books from before WWII, or more recent 
frequency/time analyzers) and without checking though it was in popular DSP speak 
something like the Hilbert transform, but it seems that's just about the idea of 
transforming repeating waveforms for the purpose of analyzing harmonics.


It's possible to make a decent short filter that convolves with a random or prepared
digital signal so as to create an indication of frequency. From that, creating out-of-phase
synthesized sine tones shouldn't be a problem either, though I think the accurate
phase responses required for the binaural character of human hearing are very subtle, and the
results coming from the quickly browsed, limited Hilbert transform as the
projection onto sine and cosine components seem nothing particularly suited to any effect
I know of at the moment.
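
For reference, the textbook route to that "projection onto sine and cosine components" is the
analytic signal; the sketch below uses scipy's FFT-based Hilbert transform (not the ±45° IIR
all-pass pair this thread is about) and reads an instantaneous-frequency estimate from the
phase increments. The test tone parameters are made up.

  import numpy as np
  from scipy.signal import hilbert

  fs = 48000.0
  t = np.arange(4096) / fs
  x = 0.7 * np.sin(2 * np.pi * 1000.0 * t + 0.3)    # assumed test tone

  z = hilbert(x)                                    # analytic signal x + j*H{x}
  phase = np.unwrap(np.angle(z))
  inst_freq = np.diff(phase) * fs / (2.0 * np.pi)   # instantaneous frequency in Hz

  print(np.median(inst_freq))                       # ~1000 Hz away from the block edges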


I repeat the main problem (unless you'd have prepared and limited the domain of input
signals available and can make credible that you can do more accurate matching that way):
unless you do significant upsampling or high-frequency sampling, the signal, or its
integral usable for convolution, is between the samples only properly reconstructed by a
long sinc interpolator. Any other filter, by mathematical incongruence and therefore
strictly logically speaking incorrect, doesn't do a perfect job, no matter how long you
search for alternatives.


The idea of estimating a single sine wave's frequency, amplitude and phase with as short and
easy a filter as possible appeals to me though. Either there's the possibility of tuning an
interval to the wavelength, or of using some form of filter that outputs an estimate.


T.
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] Recognizing Frequency Components

2017-01-28 Thread Theo Verelst


Always fun to extrapolate into the blue, huh? Not so much: while it's interesting to look
at FFTs and try to form an opinion on using some (maybe readily available) form of it to
analyze a frequency, and maybe even interesting parallel/pipelined versions are available as
FOSS, it's not just a lot of computation (which your DSP maybe doesn't have time for) for
the purpose, it's also not the best averaging function for determining precise
frequencies. Apart from errors stemming from the fact that the FFT interval often won't
match the length of the waveform (like our sine wave) being measured, any further
reasoning on the basis of the used FFT's main interval will be biased, in that it is
assumed the actual waveform is being transformed. This is not true: there is no inherent
reconstruction of the waveform going on in the FFT, and "averaging" batches or streaming
measurements doesn't accurately remedy this. The Hilbert transform of the required
integral of sinc functions isn't a constant; it can not be ultimately accurate unless in
this example we measure smarter, or perform (costly and high latency) reconstruction
computations.


Having used the correct hypothesis that our sampled sine has the form

  A*sin(f*x+p)

it should be possible to check samples across the proposed 10 seconds (e.g. 44,100 * 10
samples) of "measurement" and arrive at pretty stunning accuracy! Using various types of
measurement might be interesting to prevent the unavoidable accumulation of errors from
getting a standard type of bias that would give future audio computations based on
the results a character that is hard to remove once it has entered. It is (I know from
experience) easy to add (and find) components in audio signals that come up
relatively clearly in FFT-type computations, and such a component can send that peak up and
down and a bin back and forth easily. Other filters have their own character as well. Arriving
at an overall as-neutral-as-possible digital signal path, for instance in the sense of sensibly
taking care that errors stay statistically independent and/or don't easily accumulate into the
sound of certain modern audio products, is a real challenge!
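
One concrete way to "check samples across the whole 10 seconds" is a linear least-squares fit:
for a trial frequency, amplitude and phase drop out of a fit onto sin/cos columns, and scanning
or refining the trial frequency for minimum residual gives the frequency far more precisely
than one FFT bin. A minimal sketch, with made-up names and no noise handling beyond the fit itself:

  import numpy as np

  def fit_sine(x, fs, f_trial):
      """Amplitude, phase and residual energy of x modelled as A*sin(2*pi*f_trial*t + p)."""
      t = np.arange(len(x)) / fs
      basis = np.column_stack([np.sin(2 * np.pi * f_trial * t),
                               np.cos(2 * np.pi * f_trial * t)])
      coeff, *_ = np.linalg.lstsq(basis, x, rcond=None)
      a, b = coeff
      amp, phase = np.hypot(a, b), np.arctan2(b, a)
      resid = x - basis @ coeff
      return amp, phase, float(resid @ resid)

  # scan f_trial over a coarse grid, then refine around the frequency with the smallest residual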


I have only read a part of the RBJ "From zero to first order principal component synthesis"
article yet, but am aware that, just like with some other generalizations, drawing from general
mathematics of the last century all too enthusiastically while making a number of possibly
incorrect assumptions will not necessarily create a strong or mathematically sound
set of theorems. Wavelets, (semi-)orthogonal filter banks, and wonderful Gaussian
summings are no exception, even though it is an interesting and challenging subject.


T.V.
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] Recognizing Frequency Components

2017-01-27 Thread Theo Verelst

Some serious articles !

From a dry but hopefully somewhat interesting scientific point of view, I'd think you need
only 4 samples of the original example (accurate quantization, perfectly equal sample
distance, noise- and distortion-free sine samples), because you need to estimate amplitude,
frequency, phase and offset, or 3 if you allow the latter to be zero. Three unknowns with not
too wild non-linear equations should be a solvable system with 3 or just a few more data
points (the few more for possible coinciding data points as a consequence of getting a
repeat or phase problem wrong, I didn't think that through). Every doubling of data length
from there on should double vertical resolution (presuming computations way more
accurate than the sample data). In a new-fashioned system, maybe very fast intelligent
associative memory could be used to simply look up wave patterns from a small database in
parallel to find a close match without computations, but with low vertical match
accuracy.
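
A sketch of the "3 samples are enough" claim, using the identity x[n-1] + x[n+1] = 2*cos(w)*x[n]
for x[n] = A*sin(w*n + p): three noise-free consecutive samples (with the middle one non-zero)
pin down the frequency exactly, after which amplitude and phase follow from any two of them.
The numbers in the quick check are arbitrary.

  import numpy as np

  def three_sample_freq(x0, x1, x2, fs):
      """Angular frequency (rad/sample) and Hz from three consecutive noise-free samples."""
      w = np.arccos((x0 + x2) / (2.0 * x1))   # assumes x1 != 0 and |argument| <= 1
      return w, w * fs / (2.0 * np.pi)

  fs, f, A, p = 48000.0, 1234.5, 0.8, 0.4
  n = np.arange(3)
  x = A * np.sin(2 * np.pi * f / fs * n + p)
  print(three_sample_freq(x[0], x[1], x[2], fs))   # ~1234.5 Hz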


Many DSP methods can apply; I'm sure something can be done with an FFT, certainly a
(convergent?) variable-length one, or a Discrete Fourier Transform with a length which
isn't related to the sample frequency in some way. Fourier transforms have the advantage of the
orthogonal basis, such that pure harmonics can be instantly and uniquely quantified as well
as the fundamental, which is the only tone present in my example.


An octave filter bank, just like a single low-cut filter, either analog (if we had
an analog signal source to work with) or digital, could give a pretty fast estimate of the
octave the tone is in. An iterating, calibrated digital resonant filter, or a "parallel
running" sine wave generator algorithm subtracting from the signal with iterated
parameters, might work well. It's not easy to get, say, a 1 cent (1 percent of a semitone
interval) frequency measurement resolution, but using a long sample of good purity it
shouldn't be overly problematic.


A fun solution IMO would be to use some interesting (streaming, special-chip-enhanced)
method to upsample the example to a much higher sampling frequency with a really accurate sinc
filter, say for the middle 3 seconds with a filter width of 3 seconds (or more!), fill a
significant portion of a modern PC's main memory with the result, and imitate fast
frequency recognition methods from the analog (measurement) domain in their equivalent
digital form.
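
A rough sketch of the upsample-with-a-sinc-filter part (nowhere near a 3-second kernel; the
factor and tap count here are modest placeholders): zero-stuff, low-pass with a windowed sinc,
and you get a dense version of the record to run analog-style measurement tricks on.

  import numpy as np
  from scipy.signal import firwin, upfirdn

  def upsample_sinc(x, factor=8, taps=2049):
      # windowed-sinc low-pass at the original Nyquist; the gain of 'factor'
      # compensates the amplitude loss from zero-stuffing
      h = firwin(taps, 1.0 / factor) * factor
      return upfirdn(h, x, up=factor)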


Theo V.

___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



[music-dsp] Recognizing Frequency Components

2017-01-26 Thread Theo Verelst

Hello DSP list

It's one of those subjects that interest me, even though there are lots of solutions 
around, some partial, some inaccurate, some elaborate, etc etc.


Suppose it's given that we have a properly sampled sound file of 1 (one) sinusoidal
component with a frequency above 20 Hz and significantly under the Nyquist frequency.
The samples are of given and significant vertical resolution, and contain no noticeable noise
or other impure components.


As a first main question, which may seem a bit overly boring: how do we determine, or measure,
the frequency of this component, and, as accurately as possible or to a certain good-enough
error bound, the initial phase and amplitude?


This one can have various answers; I know at least a few and don't mind making a tiny
program to test some of them. Of course it gets a bit more difficult when
practical considerations come into play, such as guaranteed upper error bounds (probably
familiar territory for engineering types), error-free-ness of the method (i.e. will you
find only one solution, and not a multiple or something), or of course what happens with
more elaborate methods like various-length FFT sets which might measure multiple components.


Say the sample length is long enough for any purpose, like 10 seconds.
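
For anyone who wants to play along, a minimal way to make such a test signal in memory
(frequency, amplitude and start phase chosen arbitrarily here, not part of the question):

  import numpy as np

  fs = 44100
  t = np.arange(10 * fs) / fs
  x = 0.5 * np.sin(2 * np.pi * 997.0 * t + 1.0)   # 10 s, 997 Hz, amplitude 0.5, phase 1 rad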

Theo V.
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



[music-dsp] Using an actual database for storage of DSP settings

2016-12-27 Thread Theo Verelst
Not everybody here will have this problem, but maybe some will recognize the issue, and 
maybe even some use Linux sound tools.


What I propose to do is take the settings of a certain (set of) streaming signal
processing graph(s) (based on jackd) with many parameters (thousands, or at least
hundreds), and store these parameters in an SQL database. I've used postgresql in
the past, but now I've installed mysql. So I want to do things like "get the gain settings
of the bands of the 30-band equalizer that I used last week to process song so-and-so", or
"create a set of plugin parameter files that combines graph part this with graph part that".


Did anyone here do any previous work on such a subject? I mean, I don't expect someone to come
up and say: "sure! here's a tool that automatically does that for all Ladspas" or
something, but maybe people have ideas...


T.V.
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] Can anyone figure out this simple, but apparently wrong, mixing technique?

2016-12-22 Thread Theo Verelst

robert bristow-johnson wrote:


..
From: "Theo Verelst" <theo...@theover.org>
..
>
> To me it seems the preoccupation of maximizing the mix output isn't wrong, but the digital
> domain problems usually has other handles. The choir example of adding say a thousand
> voices and needing 10 more bits

you would need 10 more bits only if there was much of a possibility of all 1000 voices
singing a synchronized-phase tone, a coherent waveform like an acoustic laser beam.  you
wouldn't need 10 more bits otherwise (assuming each of the 1000 has the same power as 1).
every bit gains you 6dB of headroom and every time you double the power, you lose 3 dB
of headroom.
...


For a normal choir it's a game of reflections, I suppose. Every source will bounce off the
walls and form a somewhat diffuse background wave after a few bounces, which adds to the direct
waves and probably averages out to less than max-phase addition with respect to a
certain listening point. Though in principle, when you consider a nice coherent incident
wave front coming together at a certain listening spot, it could be that the "Space
Odyssey" choir could put a few hundred voices coherently into the reverberation, too; that
would be scary!
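
The arithmetic behind the two cases, taking the quoted numbers (6 dB of headroom per bit,
3 dB per doubling of power) and assuming 1000 equal-power voices:

  import math

  N = 1000
  coherent_db = 20 * math.log10(N)   # in-phase peak growth: ~60 dB, about 10 extra bits
  diffuse_db = 10 * math.log10(N)    # uncorrelated power growth: ~30 dB, about 5 extra bits
  print(coherent_db / 6.02, diffuse_db / 6.02)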


Dynamics for mixing weak sources are probably governed by an Equal Loudness curve where the mid
frequency range is all that can be perceived unless the amp is turned up for a soft
passage. What the voices should in such a case do with respect to each other is maybe making
sure the (normal, additive) interferences (crests and troughs) sound comely in the 2.5-4 kHz
range instead of the, for a choir, nice few-hundred-Hertz range.


T.

___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] Can anyone figure out this simple, but apparently wrong, mixing technique?

2016-12-20 Thread Theo Verelst



To me it seems the preoccupation with maximizing the mix output isn't wrong, but the digital
domain problems usually have other handles. The choir example of adding say a thousand
voices and needing 10 more bits to capture the highest amplitude of the combined tones at
a point where they all arrive at their extreme value simultaneously is fine, like it is
for a synthesizer, but for a choir with people creating tones with their bodies in a
reverberant space, the addition of the acoustics doesn't necessarily exhibit the same
amplification factor. On the other hand the reverberation can add more amplification than
you'd compute from the average statistics of the choir elements.


Usually in a mix for loudspeakers you want to include various acoustical preparations that
send waves into the listening space and its walls that are nice to listen to and create a
sound field that, at least in the sweet spot, is reminiscent of the style the recording is
in. That's called producing and mixing in the traditional language, terms that have since
received new meaning from computer and "new audio" people.


Say you want to record some sine tones that are analog, or even recorded from flutes in an
acoustically undead space, and you put the signals into the digital domain properly enough
(very few harmonics above the Nyquist frequency in the input signal, taking very steadily
clocked, instantaneous, many-bit vertical resolution samples): what can you tell about
combining these flute recordings in the digital domain? Most people will agree that simply
mixing them together by adding corresponding time samples will give the listener, after DA
conversion, the impression that the flute recordings resound together. Not entirely, because
the infinite-order delays in the reverberation and non-linear aspects of air movement can
make separate recordings in the same space come out sounding different than combined ones,
but that's not my point.


The samples in the files of the example flute recordings can safely be time transposed,
also sub-sample, and linearly added without upsetting the audio DNA components, for pretty
sure. Now, if the combined recordings exceed a certain amplitude, found out by trying, or
even by demanding that the (close to perfect or actual) reconstructed waves from the output
samples remain smaller than some maximum, then outside of shifting the flutes a little bit
in time (if that's permissible) or making some softer, so that the sum becomes a bit
smaller, there's nothing more "neat" to do than put down the volume, i.e. multiply the
output samples by a factor smaller than 1.
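
A bare-bones version of "add corresponding time samples, then just scale" (equal-length,
equal-rate tracks assumed; this looks only at the sample peak, not the reconstructed
inter-sample peak discussed elsewhere in this thread):

  import numpy as np

  def mix_and_trim(tracks, full_scale=1.0):
      mix = np.sum(tracks, axis=0)
      peak = np.max(np.abs(mix))
      if peak > full_scale:
          mix *= full_scale / peak      # one overall factor smaller than 1
      return mix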


It might be interesting to know what the sampled flute files look like, and whether besides
reasonable forms of (normal, multi- or sideband) compression there are other ways to make
the combined flutes sound at natural volume, or to trick for instance CD customers into
accepting a more interesting maximum loudness scheme. Adding harmonics isn't nice but
could work. Compression creates "wibbles" on the output that can dance around the
listening space nicely or nastily, but honestly, digitally that's a mess when precautions
aren't taken. You could add attack-wave loudness, intelligently analyze the partials in
the tones and do something with that, or you could even try to determine the model and
make of the flutes and microphones used, as well as invert the space the flutes sound in,
determine accurate perception parameters from that, and synthesize a mix which includes
these findings as some choice of pepping up the sound or creating a realistic listener
feel for those exact criteria.


That is hard. Even getting accurate momentary frequencies to within a cent isn't easy
to do digitally, and every added digital filter, and certainly envelope trackers and curve
functions in compressors, add their own audio DNA to the "mix" result.


T. Verelst



___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



[music-dsp] A game on equal loudness

2016-12-10 Thread Theo Verelst

Hi all,

I was working on a (by now) very complicated signal processing graph for studio purposes,
concerning short and mid signal self convolution, encoding sample reconstruction information in
digital music streams, acoustic preparation of various frequency bands, as well as
safeguarding against excessive mid frequencies that might be dangerous to the hearing of the
listener, especially in a listening space that accumulates energy in "blare" ways. Also,
the 96k/32bit/CH streaming processing setup (on Linux I7 machines), which involves
specific analog processing as well and takes at the very least two powerful PCs in
networked audio mode, is unhiding limiting/compression information in most high quality
recordings, as well as some other processing I presume to work, firstly on existing mixed
materials, but, by in a sense reversing the sense of their operation, also on high quality
recordings I made.


My intention is to use my "BWise" graph connection program to represent the processing
blocks running under Jack, maybe at a few levels of aggregation, and store the parameters of
all the Ladspa and other "plugins" in SQL. Until then, unless someone would be
specifically interested, I won't present my current work on the internet yet!


Now, the Loudness processing I mention here is a moderate-complexity addition I've
recently made that fits into the processing I do at a specific point, which is unlike anything
I've heard other people work on (except that the famous A-grade studio recordings apparently all
contain the information I presume, which for me is no longer a hypothesis), so I won't
bother to try to get accurate about that until after I have some decent internet
representation of all the blocks and a graphically appealing and explanatory summary.


In short, I've looked at the well known Equal Loudness Curves like those in the wikipedia (
https://en.wikipedia.org/wiki/Equal-loudness_contour ), and thought: what would happen if
I detect the amplitude of say 10 bands at various frequencies from these curves, and
pretend that the louder the band, the more I move the response of a processing network
from enforcing a 40 dB Equal Loudness Contour to a 60 dB ELC or to an 80 dB ELC.


I did this by taking some bands (in the case of my first test, adjusted by hand using
sliders of filter plugins in "jack-rack": all Free and Open Source Linux tools), adding
them together to cover the whole audio range, adding an unchanged pass of the input signal
in parallel, and adding a compressor to each filtered band which subtracts its
output from the mix output, but, because of the compression effect, above a volume set in
correspondence with an equal loudness curve point starts to subtract relatively less and less.


In a way, the design I'm after includes serious overall sampling considerations about
frequency bands getting reconstructed into analog signals in a standardized way, such as to
make certain acoustical results pleasing and free from a number of often-present
artifacts. That's really hard, and becomes the game I'm after when dynamical signals are
analyzed for changes of a number of parameters in the processing. Here, I was mainly after
changing the feel of the spectrum according to Loudness Perception, which isn't all
perfect, nor intended to be, because the eventual tuning includes eigenfunctions that have
completely different properties than the direct analogy between the continuous-time
loudness perception per frequency getting translated into a digital signal processing
graph. Even though that's fun, too.


So let's say we pay attention to the curves at hand and align them (sorry for the crude 
graphical processing, it was late when I did this) at the 1kHz gauge point to compare the 
difference in shape between the various curves that make up the graph presented in the 
above mentioned wikipedia page:


   http://www.theover.org/Musicdspexas/Eloudness/Lindos1_406080.png

Now we can see that the green and the blue curves, which I translated to the level of the
yellow 60 dB curve, indicate the particular changes that take place when going from 40 dB
loudness over 60 dB to 80 dB. What most will know is clear: lower frequencies require relatively
more power at 40 dB and less at 80 dB, idem for the high frequencies, while at the most
sensitive frequencies around 3.5 kHz the situation is the other way around: less
sensitivity at more volume.


So my processing blocks use that main idea to change the relative loudness of the input
frequencies, and they do so by, at very low volume, subtracting the chosen frequency bands
(pretty much the whole audio range in more or less first-order per-octave parts, except for
1 kHz in this case) from the pass-through signal, and the closer the volume comes to
the 60 dB ELC, the more the compressors will double-subtract the signal until the output of
the filters is compensated and there's a situation where mainly the pass-through is audible.
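
Not the subtract-through-a-compressor network described above, just the bare underlying idea
in a few lines as a sketch: per band, measure the level and crossfade the band gain between a
"quiet" weighting and a "loud" weighting, which in a real design would be taken from the
equal-loudness contours. All thresholds, band edges and gains below are placeholders.

  import numpy as np
  from scipy.signal import butter, sosfilt

  def band_gain(level_db, gain_quiet_db, gain_loud_db, lo=-60.0, hi=-20.0):
      """Crossfade a band's gain as its level moves from 'lo' dBFS to 'hi' dBFS."""
      t = np.clip((level_db - lo) / (hi - lo), 0.0, 1.0)
      return 10 ** (((1 - t) * gain_quiet_db + t * gain_loud_db) / 20.0)

  def process_band(x, fs, f_lo, f_hi, gain_quiet_db, gain_loud_db):
      sos = butter(2, [f_lo, f_hi], btype="bandpass", fs=fs, output="sos")
      band = sosfilt(sos, x)
      level_db = 20 * np.log10(np.sqrt(np.mean(band ** 2)) + 1e-12)
      return band_gain(level_db, gain_quiet_db, gain_loud_db) * band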



[music-dsp] How much bandwidth ?

2016-09-27 Thread Theo Verelst

Hi DSPers,

searching for interesting computer hardware I ended up at one of the important junctions
of DSP machine design: apart from available memory size and MIPS/FLOPS, how much bandwidth
is available to communicate with the various memories involved in for instance a virtual
synthesizer design, or between DSP cores? How much data and control can be communicated
per second, and, as a secondary question, how big must the communication chunks be?


In PC design for music applications, I'm sure it can pay to involve the various bandwidths 
in the machine in a design, including the free bandwidth like after considering the 
instruction pre-fetcher running into SDRAM instead of some level of cache memory for 
instance. Probably (usually automatic) cache management and granularity (use big buffers) 
are important factors in getting good bandwidth between various computation parts.


For DIY projects, I used to count on say about 50 MHz or something in that range max for a
normal wire. Of course a lot of chips, certainly memory chips, can communicate a lot
faster than that with FPGA or DSP IO pins, but communication bandwidth of an
appreciable number of hundreds of megabits or more per connected wire may not be easy to
get to work reliably, maybe unless you're making a pro-grade printed circuit board for a
project. My Blackfin DSP experiments of a decade ago (multi delay, analog simulation
synth), connecting up an FPGA/CPLD with the system bus of the DSP running at 500/600 MHz, led
to a bus connection speed of about 100M/s IIRC, but a lot of starter kits, development boards
and available memory and DSP chips cannot easily be connected up to escape, let's call it,
microprocessor hobby speeds...


Then there's the latest thing I've worked on, the Zynq ARM/FPGA, which has wonderful
communication bandwidth between the (in my case 2) ARM cores and the programmable logic,
and which can connect to the outside world with a lot of external FPGA IO pins. But: not at
gigabit level (that requires a more expensive FPGA). And as it is I can easily program and
connect up an AXI-lite ARM-bus interface to for instance communicate sample data from the giga
memory to the FPGA processing structures, but the disadvantage is that this gives about 12
MByte/s bandwidth, which isn't very much compared to the harder to achieve (and there
aren't examples I've found easy enough to follow) DMA based bandwidths the bus and memory
interfaces promise.


How is your experience, any better ?

T. V.
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] Lakh MIDI Dataset v0.1 released

2016-08-29 Thread Theo Verelst

Colin Raffel wrote:

Hi all,

I'm pleased to announce the release of the Lakh MIDI Dataset v0.1


I haven't gone through the reports/site, but aren't there copyright issues with collecting 
MIDIs for professional purposes ?


T.V.

___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] Choosing the right DSP, what things to look out for?

2016-08-25 Thread Theo Verelst




How important do you reckon FFT hardware acceleration


A good question might be to think about what that acceleration is composed of, and what
you want to use it for. I can easily come up with professional uses of certain FFT
variations, including straight, medium long, and fully overlapping ones, but if you don't
know what you are going to do with it, who should?


There are lines of DSPs, and similarly there are lines of general purpose (and even graphics)
processors, such as short-instruction-time RISC and long-instruction CISC architectures.
It is possible, on various register/compute architecture DSPs of the reduced instruction
set type, that there are no specific architecturally recognizable FFT support
characteristics, because the manufacturer relies on their C compiler and optimizer for
that. Alternatively, a complex instruction set DSP, with a lot of pipelining and support
for parallel instruction execution, may well not care what type of computations you're
doing with it, which can include certain optimized variations of FFT-like transforms and
inverse transforms, like the PC Intel processors tend toward (but aren't specifically
optimized for).


So if you have a DSP and/or its compiler, you could try to take FFT code you like (like
FFTW3 for the PC) and see how optimal it turns out and whether you can figure out a way to
optimize it.


Software radio will tend to do FFT types of filtering in FPGA, and there are of course a
lot of criteria for choosing a certain processor or technology for a project, including
competitiveness, and not limited to being a power-match with the project at hand.


T.V.
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] Anyone think about using FPGA for Midi/audio sync ?

2016-08-22 Thread Theo Verelst

Hi, B.

In all honesty, I've looked at some of the supplied materials in the Vivado_hls software;
the free web-version has got a number of examples that can work on my very cheap
Parallella board. It's the cheapest Zynq board on the market, which of course might not be
the best way to start an Artix-7 project with large DSP computation blocks, partial
reconfiguration and commercial Intellectual Property and commercial Vivado modules like DSP
designer. So I wasn't trying to sound like a blue-shirted, perfectly correct and intent
sales-person; rather I was sharing enthusiasm about this technology and the relatively
powerful tool to get your DSP code rolling from a C program.


Of course, "nothing is as simple as it at first sight appears to be" is some sort of way
for engineers who want interesting projects to get respected by their fellow designers, but
I have tested some code (one small but not simple example I've shared here, IIRC) made with
my free Linux Xilinx software, ftp-ed and device-loaded into the Zynq board's FPGA, and it
worked perfectly. That was using a 100 MHz clock. The Parallella board is able to run at
333 MHz, and I seem to recall some of the standard design to make the Zynq work with the
additional chip on that little board runs even faster, but I didn't check.


As I see it, the result of the optimized C-to-Verilog effort is to end up with a netlist
with ports and (in this case) Xilinx IP like RAMs, DSP slices, etc., so that indeed it's a
matter of loading the resulting IP "project" into the normal Vivado to compile the DSP
function into a function that in some cases can be coupled with the AXI interface, so that
the Zynq ARM cores can talk to the function you've made (at about 3 mega 32-bit-word r/w
accesses per second, the way I did it). So it could well be that if vivado_hlx says the
timing for the chip involved is 3 nanoseconds per clock, all kinds of factors make the
final Verilog compile decide it should be less. Also, that might well be true only for
relatively simple computations, like multiplies or something.


Having used the example that is supplied with Vivado (I used a 2015 one and now the latest
2016.2), there's a 1-clock-cycle optimized FPGA design coming from the C-to-Verilog
compile, but only after about 6 steps of optimization, which include a lot of "#pragma"s
or parallel Tcl code per C program to get a matrix multiplication to that point!! It's
pretty smart at optimizing, but it won't do parallel generation of blocks to improve
pipeline start-up time or do core-skewed pipelining, to my knowledge. The example, and a
course to run it, is in application note


  ug871-vivado-high-level-synthesis-tutorial.pdf

from Xilinx (should be easy to find on the web if you would like to have a look 
at it).

T. Verelst
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] Anyone think about using FPGA for Midi/audio sync ?

2016-08-16 Thread Theo Verelst
Thanks all for your non-scarce replies to my question and a lot of practical 
considerations. I suppose mostly it's always nice to know what people are working on.


About the reasons for my inquiry: mostly, the whole of the modern-day PC as a Complex
Instruction Set Computer has become so convoluted that dealing with its pipelines,
(instruction) (pre-)fetch, memory access delay (depending on address- and data-flow access
times, memory page reuse, clock frequency matching between (frequency governed/turbo
boosted) cores/threads and the memory access and DMA control units, etc.), the cache
filling, write-through and hierarchical access times, and the contention of threads/cores
accessing memory and devices, is pretty hard, and looks almost non-determinable for
software engineers.


Add to that the hardware- and network-related multilevel buffering and the
difficulty of executing kernel activities like memory segment and bank assignment,
process/thread scheduling/check-pointing/virtual multiprocessing state preservation, and it's
hard to know what constitutes efficient programming and what gives reliable and repeatable
interaction times, unless you trim the number of processes, run a real-time kernel Linux (which
of course isn't a scientifically RT finite state machine yet), use a machine with little
variation in its load, stay way under the full load of the CPU and memory such that
they hardly rise in temperature as they would when you'd use a significant portion of the
actual processing power, or maybe resort to a simpler processor with simpler heat
management and build your own OS+software from the ground up without a claim to general
usefulness.


In practice, task switching, which can be related to thread instructions, as well as memory
management, can be in the way of the fine-grained real-time responsiveness you may want, and
the many pipelines in the modern PC (say an I7 machine) as well as the many caches, in
combination with access granularity to the main memory, can very much be in the way of even
deciding which small computation should follow the other and then efficiently executing a
small number of computations.


An FPGA like the cheap but powerful Zynq 7010 I use can, when running at 1/3 of a GHz,
compute fast logical sequences very efficiently, and for instance theoretically can run
certain filters at up to 10 giga-ops per second, which isn't necessarily easy on a PC, but
it then still can connect up signal parts with almost no buffering in between and very
little pipelining. Of course, in case you want to make good use of your virtual CPU's ALU
or even FPU, you need to run more samples through it than simply one per clock cycle. For
cases of straightforward logic resulting from optimized silicon compiling of a C program with
the latest Xilinx Vivado HLx, it is possible to run computations in 1 (one) clock cycle of
the 333 MHz FPGA fabric. That means at CD rate you should use 333,000 kHz / 44.1 kHz ~ 7,551
samples per computation unit to make full use of that hardware instance's abilities.
I for myself regularly use a "jackd" (Linux/alsa) process framesize of 8192, as this makes
the system very stable when a lot of computations are to be done by various pipelines of
various cores while running 192 kHz audio; that is an empirical given, and not on an
optimized machine (for instance it still doubles as a web server and TV, and I prefer to run
Firefox as well).


Anyhow, it's an interesting subject which I as very advanced musician like when it becomes 
more accurate and responsive !


Theo V.

___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



[music-dsp] Anyone think about using FPGA for Midi/audio sync ?

2016-08-13 Thread Theo Verelst

Hi all,

This is one of those things I've been toying in thought about lately (again): everything 
on PCs and probably phones and pads as well that does input/output, be it in the family of 
proper thread management or actual devices like Midi and Audio interfaces (and certainly 
over networks as well), has weak timing accuracy associated with it.


When software that does digital signal processing includes musical synthesis and, unless it is
an off-line music reader (like for Midi files), gets Midi-like note and control messages, the
DSP needs to follow real-time messages, and the current timing accuracy can be limited.
For a class of applications where you would want at least sample-accurate control
messages, buffering for efficient pipeline and cache use dictates some form of delay
control, which IMHO should be such that from Midi event to note coming out of the DAC
there is always an accurately fixed delay.


Timing midi messages isn't yet necessarily very accurate; for instance, here is some Linux
info:



http://wiki.linuxaudio.org/faq/start?redirect=1#qwhat_is_the_difference_between_jack-midi_and_alsa-midi

That also has to do with most devices using buffering at a low machine level, and thread
switching isn't going to be brilliantly fast and instantaneous all by itself, either. So I
thought it might be a good idea to time stamp Midi messages with an FPGA (I use a Xilinx
Zynq), and build some form of timing scheduler into the FPGA to help the kernel. I'm not
talking about a hardware Linux "select()" call as kernel accelerator or single-sample-delay
FPGA DSP, or upgrading to dozens of FPGA pins at a hundred times Midi clock rate
doing clock-edge-accurate timing, but an interesting beginning point for the next
generation of accurate DSP software and interfaces.


If you have ideas or facts to mention about such projects, please respond.

T.V.
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



[music-dsp] Intellectual Property management in popular Digital Signal Processing

2016-08-07 Thread Theo Verelst

Hi all,

What might be thought to contain some political dimension, IP management, in fact, the way
I see it, is a pretty neutral intellectual subject a lot of audio engineers have occupied
themselves with at some point of their lives. In the broad field of Electrical
Engineering, the principles often quoted here were part of teaching history since at least
the (19)60s, and the idea of "inventing" an algorithm, a computer method, or even a piece
of art-work is of course as old as the beta sciences are.


Sometimes I have the idea that people hobbying around in contemporary "music" are interested
in things like "maximum loudness", management of Public Address power audio systems,
popularity of the circuit of their friends, and other more or less normal subjects for
technical support persons for music makers. Of course there is a lot of software on the
market and available as Open Source products that contains an amount of science and
invention in the field of synthesis/filtering of sound, musical processing and analysis, and
since I was one of the first on the internet to publish some (humble) results of
simulating classic synthesizer parts on a computer, I have wondered what the official and
desirable behavior around these subjects should be. I mean, it's not like Dr. Moog (who
was still very alive and kicking at the time) let me know to immediately quit simulating
his circuit diagrams from service manuals from the 60s!


It's a bit like a working group at uni I recall from the late 80s where the subject was to
determine what should be considered the right method for managing (computer) software
rights from a moral, economic and legal point of view. I did some DSP programming at the
time (like FM synthesis on a 68000 processor) which I considered a hobby, and maybe good
for teaching purposes, but I also made commercial software for synthesizer sound
management (with signal processing considerations for musicians' use) and I sure was
concerned with commercial IP management of my hard programming work, which was
interesting and partly, but certainly not overall, successful.


Some people seem to occupy themselves a bit more with obfuscating certain principles in 
(theoretical) DSP, and evil minds could (mis-?)construe that as attempts to steal 
intellectual property of others, which in a friendly hobby setting of course shouldn't 
need to be talked about, and in a proper science setting would be immediately noticed by 
"peers".


Anyone care to comment ?

Regards,

  Ir. T. Verelst
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] BW limited peak computation?

2016-08-01 Thread Theo Verelst

Paul Stoffregen wrote:

Does anyone have any suggestions or references for an efficient algorithm to find the peak
of a bandwidth limited signal?



Hi,

I think, without getting lost in quadratic algebra or endless searches for a holy grail
that doesn't exist (which I don't take part in), you've answered the main theoretical
question yourself: to know the signal between the samples, you need perfect reconstruction
of the actual signal, and then analyze that.


Of course, like the "Fast Lookahead Limiter" from Ladspa or LV2, which I use regularly,
does, you could up-sample to a reasonably high sampling frequency with the best tools
you've got, hope for the best from a tool that up-samples another 8 times (IIRC), and leave
it at that: if you're using decent input signals in your sampling path, there aren't
a great many signals that actually mirror around the Nyquist frequency, so a tool like
that will do a reasonable flattening job.


Of course it's possible there's one peak in your signal at 1/4*PI of the way between two
samples, such that no matter what rational fraction between samples you compute, you could
never find it with infinite accuracy... I suppose however that in most practical cases you can
have a pre-conditioned situation where you know which possible up-samplers are going to be used
on your decent digital signal product, like wide-window sinc, standard short interpolation
and a couple of other methods (FIR/IIR approximations, wave shape approximations,
multi-band approaches, and for the pros: average-based frequency components with
multi-limited computations in the 96 kHz or higher sampling domain). If you know what the
customers are going to use as up-sampler, and once, with the final product, you make a
reasonable-quality wide-window sinc up-sampled test run to see if there are any special
cases to tend to, you could work with that, unless people enjoy endless (but not
particularly useful) discussions on heuristics.
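
A rough sketch of the upsample-and-look approach discussed here: estimate the inter-sample
peak by resampling a block 8x with a polyphase windowed-sinc filter and taking the maximum of
the dense version. Longer filters and higher factors get closer to the true band-limited peak;
this is an estimate, not a guarantee, and the oversampling factor is an arbitrary choice.

  import numpy as np
  from scipy.signal import resample_poly

  def estimated_true_peak(x, oversample=8):
      dense = resample_poly(x, up=oversample, down=1)
      return np.max(np.abs(dense))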


Theo V.
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] up to 11

2016-06-22 Thread Theo Verelst
About this usually dreadful subject I know a few easy and some harder suggestions, that 
might be useful to some.


It's not necessarily hard to get loud in this time of technological advancement. Remember
those hand-held personal alarm units, or more contemporary smoke alarms, cooking timers,
etc., that work on a small battery but produce quite a tone when heard by human beings or pets.


Maybe it's another thing to try to beat a decent rock concert and create a sound that is
loud enough to be heard in such a situation; on the other hand amplifier and also speaker
power isn't hard to come by, and pretty cheap: an amplifier of a few hundred watts is no
more than a few hundred euros/dollars, and a speaker system capable of producing sound
pressures up to the human threshold of acceptability of loudness isn't hard to get or
use either.


So what's the point of the question? It could be asked from a certain context, which
would be necessary to know! And frankly, with so much ability for (at least most Western)
people to create loud sound on the cheap, a more important question should be how to
control all that power into a system that is safe for the hearing and pleasant to use.


DAC reconstruction filters are usually in the way of easily reproducing certain types of
musical "loudness" that analog equipment was known to create; that's a technical issue.


Perception-wise there's a lot to deal with when thinking about waves that do not make
people feel disoriented, give the intended impression of proximity (close or far), and
contain properly constructed properties that either do or do not warn the animal instincts
of the listener. So an alarm unit or megaphone blaring at you with an alarming tone isn't
the same as a nicely amplified snare drum coming from a big P.A., because the latter
should feel good to the audience, and not intrude or upset.


Acoustically, you'd need to analyze what your digital waves, through the DAC, amplifier and
speaker, are going to effectuate in the way of actual audio waves, based on your digital
signal. Sounds trivial, but lots of these types of components enforce a pretty clear
character on the produced audio beams reaching the listener through his or her listening
space.


The tricks of limiting, (multi-band) compression and square-wave types of signals usually in
this time lack a proper analysis of the actual signal coming out of the DAC, and of how this
signal (perhaps through small and somewhat hyped speakers) is going to form beams,
reflections and resonances that make you form your opinion in the listening spot.


My personal opinion is that achieving proper, neutrally sounding power is way harder than
trying to get loud at any cost. And human binaural hearing capacity demands way deeper
knowledge to get a well-working guitar amp on batteries or something, which is proven by a
lot of digital products I've heard fail in that respect.


T.V.

___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



[music-dsp] Creating and maintaining digital signal processing graphs

2016-05-09 Thread Theo Verelst

Hi all.

Just a little thought about something I regularly encounter and that interests me.

Working with Linux and Jack/Ladspa (usually at 192kHz/32bit) I make moderately complicated,
let's call them "patch graphs", of software acting on streaming audio data, connected
together with a flow structure where Jack transfers audio buffer data between application
parts. Typically there are tens or even hundreds of audio connections that connect up for
instance audio "racks" of plugins so as to achieve digital processing like multi-band
compression and gating, mastering FFT-related dynamics and frequency-dependent music
sub-bands, basic stuff like equalization, delaying of signals, input/output from and to
music players, mixer input/output, DACs; probably something most people deal with.


So I make a graph of racks connected up to do some mixing job I like to perform on my own
music, or for DSP purposes acting on existing tracks, and as long as my I7s aren't getting
too hot, I combine certain types of processing, so that I can make a mix sound I like from
various building blocks, often in an additive way, in a limited interpretation of the
idea, regularly using various ADC/DACs and intermediate analog and digital processing by
external devices.


Most people probably put tracks in DAWs, but I like to have several processing branches in
parallel, of which the results are delay compensated, and which "live" can connect to
external sources, and drive each other sequentially in other cases. So I start, say, a rack
with an equalizer that I feel works well as compensation for the sound of a certain CD,
then I want to extract a certain frequency band, do processing on that, dynamics-process
the subband, and add the result to the output of the Eq rack unit and monitor it.


Ideally that means that if I want a certain number of racks with interconnections to be called
up, I want to be able to do so while other processing jobs remain active, and as easily as
possible interconnect the existing digital streaming processing blocks with the newly
started ones, save the result to go further the next day, and keep track of all associated
files etc.: standard engineering tasks. Practically speaking, using Open Source software, which
is a good blessing when doing audio DSP because then you at least have the option of
knowing what you're using in the software and have the possibility to change things you
don't like, I run "patchage" on Linux, which automatically makes a nicely laid-out graph of
all the processing nodes in the "jack" audio domain and their connections (with named
ports). Because I start my commands from a Unix shell or script, I can find out which
processing nodes I've started with "ps -f", which gives me the exact Unix commands I've
used. A while ago I started using "Carla" for storing the connections between the
digital audio processing blocks, and for restoring them once I've started the various
racks and tools up again.


Now, if there's already a number of nodes running audio, starting up new processing blocks
is easy, and Carla allows connecting them up according to some pre-saved pattern.
Alternatively, Qjackctl or Patchage can be used to hand-draw the patch wires to arrive at
the desired processing graph, which can be tedious when there are dozens of connections!
So a pre-saved macro-"block" is handier. Carla also allows connecting Jack blocks which are
only a subset of the processing going on, and incrementally too, because if I'm not
mistaken it will leave block names that aren't in its saved network alone, and it never
removes existing connections.


That leaves ideas open to "play" with patching virtual audio racks, and for some tool that
intelligently can do more than add connections to fixed block names: like, what if you want
to keep certain blocks running and change their interaction patterns, implying some
connections need to be broken, or how can you rename connection graphs, for instance for
repetition.


I thought that might be some interesting food for thought.

T.V.
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] High quality really broad bandwidth pinknoise (ideally more than 32 octaves)

2016-04-14 Thread Theo Verelst

HI,

Talking about "perfect noise", you may want to consider these theoretics:

 - what do you do near the Nyquist frequency? Or more practically: noise that gets near
the NF will probably cause strange effects in practical DACs, and when the digital signal
is to be interpreted as "perfectly re-constructable" there's probably a lot of trouble in
the high frequency range


 - "perfect noise" is also uncorrelated for most peoples' understanding, which creates a 
problem when using filters: all FIR responses or digital quasi poles and zeros you use 
show up as correlation at the output of the noise generator.


T.V.
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



[music-dsp] A DAC improvement experiment I did

2016-04-11 Thread Theo Verelst

Hi all,

There's been a bit of discussion here at times about Digital To Analog Conversion that 
inevitably follows a lot of DSP experiments, and the lack of complete accuracy that might 
make music sound less good than intended. Depending on the POV and varying opinions of 
course, but it's my opinion things are far from perfect and nice, as yet.


So I did this experiment, and for good types of music recordings (like HDTracks high
resolution audio from well known artists) I get a lot more pleasant results when I turn
my large and good-quality audio system up loud.


So if you're interested, here's a page on my server about what I did:

   http://www.theover.org/Dsp/Simpleundistort/

Enjoy, and discuss if you feel like it, though this is not part of an attempt to start a
deep science discussion about the subject, even though I'd probably prefer it if that were
possible; it's a practical experiment, and there's a download containing all the DSP blocks.


Theo V.
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] Delays: sampling rate modulation vs. buffer size modulation

2016-03-25 Thread Theo Verelst

Matthias Puech wrote:

Hello,

...

I am trying to understand what is the observable relationship between these two scenarios
when modulated ...


The main thing to remember when thinking of the maybe easier to grasp example of a tape
delay is that the difference in tape speed at the moment of recording and playback
determines the amount of pitch shift. So if the tape runs at one speed, there's no pitch
shift, only delay. If the tape, or a film camera, runs faster at recording time than at
playback time, the relatively speaking lower playback speed will make the music or the film
go slower (like for a slow motion scene, you need high speed cameras). If playing the
"echo tape" back goes much faster than the recording, you get a very fast Donald Duck
sound. Neither variable "step size" digital delays, nor variable sample rate Bucket
Brigade electronic delay devices, can escape that main idea.
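
A toy digital version of that tape picture, as a sketch (linear interpolation only, no
band-limiting): a delay line read at a fractional, time-varying delay. While the delay is
ramping, the read "tape speed" differs from the write speed and the output is pitch-shifted;
with a constant delay it is only delayed.

  import numpy as np

  def variable_delay(x, delay_samples):
      """delay_samples: array, same length as x, giving the delay for each output sample."""
      y = np.zeros_like(x, dtype=float)
      for n in range(len(x)):
          pos = n - delay_samples[n]
          i = int(np.floor(pos))
          frac = pos - i
          if 0 <= i and i + 1 < len(x):
              y[n] = (1 - frac) * x[i] + frac * x[i + 1]   # linear interpolation
      return y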


The integral I don't know, but there's a simple limit: the tape can run too long, such that the
effect becomes slow, or you can speed up playback too much, so you run out of delayed
sound; those are hard limits, regardless of the method used.


The main difference, apart from the historical difference (digital, i.e. amplitude-sampling
delay lines were harder to make and more expensive at first, but in fact can also
have variable sampling rates, though nowadays those aren't made anymore to my knowledge, I
used to use one in the 80s; and BBDs, even though they don't quantize amplitude, are noisy
and limited to maybe a few thousand samples), lies in the way the sampling errors will sound.


You cannot escape the idea that a delay line with normally low sampling rates (in the
range of the CD sampling rate), irrespective of it being fixed- or varying-clock digital or
varying-clock "analog" BBD, will introduce signal distortion because it deals with a
signal that is chopped up into samples and only somewhat accurately put back into an analog
signal when it comes out of the delay unit.


The difference between the "sound" of inter-sample interpolation at a fixed sample rate and the
different sound the errors create when you use a variable sample rate digital delay
line (with fixed delay length) lies in how the interpolation or the sample rate change is
going to change the signal at the output of the delay line. If you idealize them, the
output should be the same. So if you'd be able to use a perfect resample algorithm, and a
perfect digital-to-analog reconstruction filter at the output of your delay line (and a
"perfect" AD converter of course), you'd get an idealized tape delay, i.e. the sound would
be the same as a perfect tape machine. It isn't possible to do perfect signal
reconstruction in the Digital to Analog converter in zero time, so that's a problem, and a
similar reasoning holds for resampling.


The "analog" BBD based delays which time-sample but not amplitude-sample, cannot 
"resample" and therefor need a varying sample clock, do not have a reconstruction filter, 
not even the very limited ones that are in pretty much all DACs, so they sound very ragged 
and with major sound artifacts, but not like the standard distortion (limited analog 
signal reconstruction ) of all audio DACs.


T.

___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] Is Beamforming really required in my Speakerphone setup?

2016-03-22 Thread Theo Verelst

Hi,

I suppose the question implies a concern about the necessity of the beamforming, or maybe
alternatives. It isn't too difficult to do a little estimation of what is going to
happen with your setup when signals come from various sides and angles, assuming a sample
delay corresponds to about 7 millimeters of "wavelength" at 48 kHz. For the higher
frequencies, when summing over the two 15 cm apart sides, a little angle will create a 7
or 14 mm delay difference, which will lead to cancellation and amplification of high
frequencies. How bad that is depends; in the worst case, with somebody talking to the device
(or whatever the application is) from the side, at 15 cm spacing there will be a cancellation
at a few kilohertz, which could be tricky.


To prevent "frost" or other types of intelligent beam forming (might be a fun project, 
though!) you could consider adding the somewhat lower frequencies from all 4 mics, and 
using only 1 for the highest frequencies.
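
That alternative in sketch form, with a crossover frequency and filter order that are guesses
rather than a validated design: sum all four mics below the crossover (where the spacing hardly
matters) and keep a single mic above it, so the highs can't comb-filter against each other.

  import numpy as np
  from scipy.signal import butter, sosfilt

  def combine_mics(mics, fs, fc=1000.0):
      """mics: array of shape (4, n_samples)."""
      lp = butter(4, fc, btype="lowpass", fs=fs, output="sos")
      hp = butter(4, fc, btype="highpass", fs=fs, output="sos")
      low = sosfilt(lp, np.mean(mics, axis=0))   # average of all mics, lows only
      high = sosfilt(hp, mics[0])                # one mic for the highs
      return low + high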


T.V.
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] Changing Biquad filter coefficients on-the-fly, how to handle filter state?

2016-03-02 Thread Theo Verelst

Paul Stoffregen wrote:

Does anyone have any suggestions or publications or references to best practices for what
to do with the state variables of a biquad filter when changing the coefficients?
...


I am not directly familiar with the programming of this particular biquad filter variation,
but with some others, and like many have said, I suspect that varying the external cutoff
and/or resonance control, computing the effects on the various filter coefficients, and
essentially not making any changes across some sort of singularity (if there is one), nor
overly rapid changes, should leave the audio running through fine.
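
One way to make "no overly rapid changes" concrete, as a sketch (this is a plain direct-form-I
biquad, not the library code the question refers to, and the smoothing constant is arbitrary):
keep the filter's state variables as they are, but move the coefficients toward their new
target values a little every block.

  import numpy as np

  class SmoothedBiquad:
      def __init__(self, b, a, smooth=0.1):
          self.b = np.array(b, float)            # current coefficients (b0, b1, b2)
          self.a = np.array(a, float)            # current coefficients (1, a1, a2)
          self.tb, self.ta = self.b.copy(), self.a.copy()   # targets
          self.x1 = self.x2 = self.y1 = self.y2 = 0.0       # state is kept across changes
          self.smooth = smooth

      def set_target(self, b, a):
          self.tb, self.ta = np.array(b, float), np.array(a, float)

      def process_block(self, x):
          # nudge coefficients toward the target once per block
          self.b += self.smooth * (self.tb - self.b)
          self.a += self.smooth * (self.ta - self.a)
          b0, b1, b2 = self.b
          _, a1, a2 = self.a                     # a0 assumed normalized to 1
          y = np.empty(len(x), dtype=float)
          for n, xn in enumerate(x):
              yn = b0 * xn + b1 * self.x1 + b2 * self.x2 - a1 * self.y1 - a2 * self.y2
              self.x2, self.x1 = self.x1, xn
              self.y2, self.y1 = self.y1, yn
              y[n] = yn
          return y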


The state variable filter theory and its origins in the use of audio equipment like the early
analog synthesizers aren't exactly the same as the digital implementations, so there are
things to consider more accurately if you want those wonderful sweeping resonant filter
sounds on a sound source: there are non-linearities, for instance, in a Moog ladder filter,
and the "state" which is remembered in an analog filter in the capacitors gets warped
when the voltage controls change. A mostly linearized digital filter simulation will
probably not automatically exhibit the same behavior as the well known analog filters.
Also, all kinds of signal subtleties are lost in the approximation of sampling the time axis,
which may or may not be a problem.


I looked at the code quickly, couldn't find the "definition" data structure in
the .h file and didn't look much further, but I suppose you should use the part where you
initially compute all the (5?) biquad coefficients again to gradually change the coefficients
of the actual filter code. I do not know what the relation is between the biquad coefficients
(at sufficiently high sampling frequency) and the equivalent analog "state" of the filter.
Maybe someone else knows how the biquads behave in that comparison.


T.

___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] Tip for an audio resampler library doing cubic interpolation

2016-02-28 Thread Theo Verelst

Hi

I can't offer you a solution to the programming assignment, and in fact there are 2
reasons it might also not become a great general-purpose solution. I do however have some
affiliation with the interpolation function field and the particular types mentioned in
this thread. With interest I've looked at the pdf about short FIR filters based on the
well known mathematical/computer graphics interpolators, which for a part came from
mechanical/industrial engineering fields quite a while ago.


I struck me there appears to be no particular reason for wanting any of these well known 
interpolation polynomials for audip processing purposes on the the basis of their 
mathematical and graphical merits. The interpolation curves (or surfaces in more 
dimensions) of these kinds have certain properties, which sets them apart from any 
n-degree polynomial that via a number of equations and their solution could be directed to 
pass through a number of given data points. For instance, the Bezier curve in its 
quadratic form will pass through the data points at the extremes of its control polygon, 
and there are possibilities to match the direction (the first order derivative) of the 
curve at one end. You could want to use that, or maybe the generally averaging properties 
of a Spline function to do some amount of curve matching with given wave shapes.


However, just putting the curve control points at the samples of an input signal, and 
doing nothing but interpreting the interpolating function as a very short FIR function 
getting sampled in the target (re-) sample domain, does not mean all too much to me. Also, 
I suppose you are aware there is only one "perfect" reconstruction function you can use to 
compute the "amplitude value" or what you want to call it, between samples, and that's an 
infinite sum of sinc functions, which unfortunately for all of us doesn't have a simple 
substitute and diminishes rather slowly, so you need a pretty long, or long delaying, 
resample filter to become accurate. Might not be a problem for for instance a gaming 
application.
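
To make concrete what interpreting the interpolating function as a very short FIR amounts 
to, here is the textbook 4-point Catmull-Rom read-out of a buffer at a fractional position 
(my own illustration, not a recommendation over a long windowed-sinc resampler):

/* Sketch: 4-point Catmull-Rom interpolation of buf[] at fractional index pos.
   Equivalent to a 4-tap FIR whose taps depend on the fractional part. */
float catmull_rom(const float *buf, double pos) {
    int   i = (int)pos;              /* assumes 1 <= i <= len-3 */
    float t = (float)(pos - i);
    float ym1 = buf[i - 1], y0 = buf[i], y1 = buf[i + 1], y2 = buf[i + 2];

    float c0 = y0;
    float c1 = 0.5f * (y1 - ym1);
    float c2 = ym1 - 2.5f * y0 + 2.0f * y1 - 0.5f * y2;
    float c3 = 0.5f * (y2 - ym1) + 1.5f * (y0 - y1);

    return ((c3 * t + c2) * t + c1) * t + c0;    /* Horner evaluation */
}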


The reason the interface and the implementation of a general resample function might not be 
as readily found, and not as trivial as some may want, might be that some issues 
aren't easily resolved in the area, even if you've found the type of interpolator you 
like. From a physics or signal point of view, you might need information about the mutual 
phase relationship of sampling domain 1 (the original sample data) and 2 (the resampled 
data). Also, you may want to know the actual frequency relationship between the two 
(virtual) sampling clocks, not just the idealized one. Finally, in practical cases (I don't 
know what your example was actually for) the sampling clocks might not be completely 
stable, so you could have to factor in what happens to all kinds of signals in 
extreme cases when there are slight variations in the clock frequency of (one of the) sampling 
domains; possibly you'd need some buffer space to solve those issues. But if you want to 
define the practical "resample" function, you might have to take these factors into 
account for higher quality cases, and somehow quantify the issues you're solving, maybe 
including signal distortion/noise as well as the prevention of potentially dangerous 
spurious errors and the change of temporal power in the output signal. Of course depending 
on your application.


For sample rate conversion between various AD/DA convertor domains I use "alsa_out" and 
"alsa_in" from the Linux Jack/Alsa/Ladspa toolkit, which can be given various quality 
ratings and allows some control over the estimation of the various clock rates and the 
response of a sort of control loop for the real time resample computation. Also I 
frequently use Mplayer (Linux), which can resample nicely enough in integer ratios, and can 
deal with small differences between the input clock of an external (web) audio(/video) source, 
the real-time clock, and the output Digital-to-Analog converter sampling clock. The alsa_ tools 
aren't very big in the source code sense; the Mplayer resampler is possibly in a directory of its 
own, both are Open Source.


T.V.
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] Cheap spectral centroid recipe

2016-02-25 Thread Theo Verelst

Evan Balster wrote:

...

To that end:  A handy, cheap algorithm for approximating the power-weighted 
spectral
centroid -- a signal's "mean frequency" -- which is a good heuristic for 
perceived sound
brightness .  In 
spite of
its simplicity, ...

Hi,

Always interesting to learn a few more tricks, and thanks to Ethan's explanation I get 
there are certain statistical ideas involved. I wonder however where those ideas in 
practice lead to, because of a number of assumptions, like the "statistical variance" of a 
signal. I get that a self correlation of a signal in some normal definition gives an idea 
of the power, and that you could take it that you compute power per frequency band. But 
what does it mean when you talk about variance? Mind you, I know the general theory fine, up 
to the quantum mechanics that worked on these subjects long ago, but I wonder what 
the understanding here is?


Some have remarked, about the analysis of a signal into fundamental frequency and harmonics, 
that it might be hard to summarize and make an ordinal measure for "brightness" as a one 
dimensional quantity. I mean, if you look at a number of peaks in a frequency graph, how do 
you sum up the frequency of the signal, if there is one, and the meaning of the various 
harmonics in the spectrum, if they are to be taken as a measure of the brightness? So a 
trick is fine, though I do not completely understand the meaning of a brightness measure 
for frequency analysis.
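
For reference, the plain FFT-based form of the measure being discussed is just the 
power-weighted mean of the bin frequencies; a sketch, assuming a magnitude spectrum has 
already been computed somewhere else:

/* Sketch: power-weighted spectral centroid from FFT magnitudes mag[0..nbins-1].
   Returns the "mean frequency" in Hz; sr is the sample rate, nfft the FFT size. */
double spectral_centroid(const double *mag, int nbins, int nfft, double sr) {
    double num = 0.0, den = 0.0;
    for (int k = 0; k < nbins; k++) {
        double p = mag[k] * mag[k];        /* power of bin k */
        num += p * (k * sr / nfft);        /* bin frequency weighted by its power */
        den += p;
    }
    return den > 0.0 ? num / den : 0.0;
}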


Of course, to determine a statistical measure about a spectrum, either based on sampled 
signals or (where the analysis comes from, and is only generally correct for a signal from 
-inf to +inf) on a continuous signal, and based either on a Fourier integral/summation or a 
Fast Fourier analysis (with a certain analysis length and frequency bin accuracy), you could 
use the general law of large numbers and presume there's a mean and a variance. It would be 
nice to at least make credible why this is an ok analysis, because a lot of signals are 
far from Gaussian distributed in the sense of the frequency spectrum.


T.


___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] MIDI Synth Design Advice Please

2016-02-23 Thread Theo Verelst

Ross Bencina wrote:

..
http://epickrram.blogspot.co.uk/2015/09/reducing-system-jitter.html
...


Interesting read, centered around Java and the idea of calling the overhead of virtual 
multiprocessing "jitter".


Probably some form of thread control is up for attention to optimize complex computations, 
maybe an area where FPGAs can help the kernel, for instance with parallel prioritizing. But 
as it is, the descriptions and commands in the "blog" give an idea of how the threading on a 
modern Linux kernel can be made more quickly responsive and to an extent can be controlled.
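
On the user-space side, the usual handle for this, without touching kernel sources, is the 
POSIX real-time scheduling call; a minimal sketch, assuming suitable privileges and an 
arbitrary priority value:

/* Sketch: ask Linux for real-time FIFO scheduling for the calling thread.
   Requires CAP_SYS_NICE or a suitable rtprio entry in /etc/security/limits.conf. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int make_realtime(int priority) {
    struct sched_param sp = { .sched_priority = priority };  /* e.g. 70 */
    if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0) {
        perror("sched_setscheduler");
        return -1;
    }
    return 0;
}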


Interesting, but of course a number of subjects at hand here, where the idea is to run 
audio computations (say a function or a macro) fast and reliably timed, aren't part of that: 
the memory contention and caching behavior, the access to the hardware ports of 
some kind for getting data (from and) to a DAC (ADC), the memory paging approach, the 
memory bank and RAS/CAS-equivalent access speeds, and the scheduling by the multicore 
processors of the prioritization of accessing memory locations from one core or the 
other. Memory segments/paging is probably in some way controlled by the kernel, memory 
banks follow from the hardware schematics and parts information, but the various access 
times for memory rows and columns, and potentially the various numbers that define the 
memory properties for burst access, refresh delays, etc., are a different ball game.


Probably it's best to control the number of processes and their memory access behavior 
that run on the core that's running Linux, and make sure the IO structure and competing 
patterns for DMA hardware access are known and not competing between cores and potential 
FPGA DMA access cycles.


T.V.

___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] MIDI Synth Design Advice Please

2016-02-05 Thread Theo Verelst

Scott Gravenhorst wrote:


to: Theo

I saw your suggestion regarding JACK and I'm wondering "why?".  ... ...  Using 
the example program pcm.c
(modified to use threads), I've been able to run the direct write loop on an 
isolated core
and got 45 "voices" of sine production with a period size of 15 frames at SR 
44100 (I
calculated some 1/3 millisecond of latency).

That's pretty neat. The Jack suggestion is because it checks for "xruns", i.e. it reports when 
there have been samples missing in the computations, and it seems to keep the alsa settings 
such that my accurate-clock DAC gets followed exactly as the audio clock, whereas alsa 
playback can do all kinds of things when computations aren't ready exactly in time for the 
audio buffer. Of course there's overhead on the RPI in using all those Jack buffers and 
scheduling rules, so I imagine you chose not to use it.
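
For reference, the shape Jack imposes on a client is roughly the following minimal sketch 
(client and port names are placeholders); jackd owns the period size and reports an xrun 
whenever a callback is late:

/* Sketch: a minimal JACK pass-through client; jackd decides the buffer size
   and calls process() once per period in its own real-time thread. */
#include <jack/jack.h>
#include <string.h>
#include <unistd.h>

static jack_port_t *in_port, *out_port;

static int process(jack_nframes_t nframes, void *arg) {
    float *in  = jack_port_get_buffer(in_port,  nframes);
    float *out = jack_port_get_buffer(out_port, nframes);
    memcpy(out, in, nframes * sizeof(float));   /* pass audio straight through */
    return 0;
}

int main(void) {
    jack_client_t *c = jack_client_open("passthru", JackNullOption, NULL);
    if (!c) return 1;
    in_port  = jack_port_register(c, "in",  JACK_DEFAULT_AUDIO_TYPE, JackPortIsInput,  0);
    out_port = jack_port_register(c, "out", JACK_DEFAULT_AUDIO_TYPE, JackPortIsOutput, 0);
    jack_set_process_callback(c, process, NULL);
    jack_activate(c);
    for (;;) sleep(1);                           /* audio runs in JACK's thread */
}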


T.


___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] MIDI Synth Design Advice Please

2016-02-04 Thread Theo Verelst

Paul Stoffregen wrote:

...
First is the one I've been working on... the Teensy Audio Library.  The main 
advantage of
this way is you can integrate with almost all stuff designed for Arduino, such 
as the MIDI
library and USB MIDI, and of course lots of hardware.

http://www.pjrc.com/teensy/td_libs_Audio.html
...


Heh, I looked at the application on your website, the one with the blocks and curved cords 
to create program code from a self editable network, that's cool, I like it !


Theo V.

___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] MIDI Synth Design Advice Please

2016-02-01 Thread Theo Verelst
About caching: modern Linuxes, apart from having access to kernel sources and making your 
own changes, IIRC allow for certain multiprocessing parameters to be influenced, such as 
locking memory ranges to threads/cores/processes.
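
A sketch of two of those knobs as seen from user space; the core number and the required 
privileges are assumptions for illustration:

/* Sketch: pin the calling process to CPU core 2 and lock its memory so the
   pager can't introduce jitter.  Core number and privileges are assumptions. */
#define _GNU_SOURCE
#include <sched.h>
#include <sys/mman.h>
#include <stdio.h>

int pin_and_lock(void) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(2, &set);                                   /* run only on core 2 */
    if (sched_setaffinity(0, sizeof(set), &set) != 0) { perror("affinity"); return -1; }
    if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0)        { perror("mlockall"); return -1; }
    return 0;
}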


Not overly disciplined thoughts about such a semi-pro project (I suppose), but the Raspberry 
isn't a PC, and for those who have interest: if you connect up a dynamic memory like DDR3 
to something that can read and write to it, like an FPGA or a PC processor or an ARM 
processor like on the Raspberry, then regardless of the cache settings of the processors, 
you will in pretty much all cases get the full memory bandwidth that results from the clock 
speed and data width of your memory at your disposal. For PC memory banks, it might be hard to 
predict the actual processor and memory clock speed at any given moment, including the 
settings of the cache, but except for hard to predict clock alignment issues (which cause 
only minimal timing differences, normally) the whole digital machinery is strictly 
deterministic, and can be predicted when you know the settings of all parameters (which a 
decent kernel does).


So say you connect up a memory with a bandwidth of 100 Mega Bytes per second, and you 
factor in the time it takes to do RAS/CAS equivalent addressing modes, you can know 
exactly what latency and bandwidth you have available. Of course caches can increase the 
perceived average memory bandwidth over some time, and you could want to make use of fast 
cache memory for critical program parts, and of course you could want to make sure those 
program and data elements stay in cache level 1 while you want it. Actual and simulated 
multiprocessing as in Unix since the 70s at least (and I recall HP real time Unix from the 
80s which was cool) do not really directly dictate what caching, memory paging (and 
potential swapping), memory bank choice and data structures are going to be used, and how 
the cache of let's say the processor in a windows PC is going to respond to that. So if 
(like on Linux) you do not have the opportunity to bind a process to a core, and a piece 
of memory, and assign certain communication resources (like the Zynq can) via the kernel 
and the operating system to your program, you're going to get some response times on 
average that depend on contention for memory space and cache use of other processes, and 
the complete load of the machine, with a lot of seeming randomness.


A PC running a very far from real-time OS can better do more than a few samples of audio 
buffering to realistically put its cores to work on DSP computations (sometimes I run 
Linux sound programs with 4k (!) buffers and hundreds of plugins at 192kHz with hundreds 
of connection over Jack, which makes that process run quite stable and usable, except with 
a little delay, of course). PCs aren't really modeled after small microprocessors, but 
rather after some elements of supercomputers from the past, which requires a whole 
different programming style to actually get some of the real potential in terms of 
MIPS/Flops and such going, and raise the temperature of the i7 (or what have you) 
significantly to know it's number crunching. Of course if you know how to use your cache 
entries (and keep write-throughs from slowing you down) there's a lot more power in a PC 
than is easily harnessed for DSP; for instance, memory tests on my very fast PC say I can 
have cache access for levels 1..3 of 136 GByte/s down to 40 MB/s in sizes from 32k to 122 
Megabyte, which is quite a bit more than the suggestion in this thread. So it may well be 
worth hooking up an OS where all cache can be used for programs and data, and maybe paging 
control information that allows a programmer to use that for where it is handy, and have 
faith in the repeatability of the memory setup.


The ARMs in the RPi have smaller caches and the ARMs are more of a RISC than a CISC 
architecture, so they are more suitable to do embedded control tasks with short turnaround 
times.


Oh, and just as a practical side remark: prefer to use Jack and a stable sample clock over 
Alsa, which is more fiddly and may not be stable and might even try to re-sample and such 
misery (for what I suppose is your purpose here).


T.V.

___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] MIDI Synth Design Advice Please

2016-02-01 Thread Theo Verelst

Scott Gravenhorst wrote:

I'm looking for advice regarding the design of a MIDI synthesizer.
...

Advice regarding this endeavor would be appreciated.  Questions include which 
of the
transfer methods will come closest to my goal of low latency (from time to MIDI 
message
receipt to ..


Hi Scott,

Good to hear you're working with the new PI, it seems you've answered a lot of your own 
questions already!


I haven't looked into everything about the time slicer, memory management (+ associated 
kernel sources about page and segment management) and the connections of the various 
hardware controls with the RPI cores and the various memory bus and other kinds of 
contentions going on, but I do recall from the Parallella forums that someone was using a 
Linux boot command alteration to make the 2-core Zynq into essentially a 1-core Linux + 1 
free core; if you want I could look that up for you.


My experiments with cores and Midi (like my long-ago DSO based design 
http://www.theover.org/Synth ) are very good in the sense of getting low latency going, so 
I'd have some interest myself in connecting the graphics-accelerated, 4-USB-port RPi 
sensibly to (FPGA based) synthesis modules. It seems to me, given that any response speed 
up to per-sample accurate (i.e. one audio sample delay between receipt of message and 
starting a tone) could be your target, the following setup might be interesting to think 
about (not necessarily to implement).


A midi message (or more), or a message coming from a relatively simple piece of FPGA 
that scans a musical keyboard much quicker than MIDI does (I have some old keyboards lying 
around that I wouldn't mind turbo charging), could be time stamped by the FPGA, sent 
relatively quickly (but not super fast needed) to for instance the RPI (or a Zynq based 
linux process, or like in my case a classic version 1 RPI), processed including the time 
stamp, sent back to the FPGA or sent to a software module, and then WITH FIXED DELAY played 
into the chosen audio stream. That way, latency can be small, but not near zero, which in a 
real time OS is harder, but constant.


Theo V.
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



[music-dsp] Anyone using Chebyshev polynomials to approximate trigonometric functions in FPGA DSP

2016-01-19 Thread Theo Verelst

Hi all,

Maybe a bit forward, but hey, there are PhDs here, too, so here it goes: I've played a 
little with the latest Vivado HLx design tools for Xilinx FPGAs and the cheap Zynq 
implementation I use (a Parallella board), and I was looking for interesting examples to 
put in the C-to-chip compiler that can be connected over the AXI bus to a Linux program 
running on the ARM cores in the Zynq chip.


In other words, computations and manipulations with additions, multiplies and other 
logical operations (say of 32 bits) that compile nicely to, for instance, the computation of 
y=sin(t), in such a form that the Silicon Compiler can have a go at it and produce a nice, 
relatively low-latency FPGA block to connect up with other blocks to do nice (and very low 
latency) DSP with.
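
As an example of the kind of block I mean (my own toy, not any Xilinx IP): an odd polynomial 
for sin() written as plain multiplies and adds, which an HLS tool can unroll into a pipelined 
datapath. The coefficients below are just the truncated Taylor series, so accuracy near +/-pi 
is modest; a Chebyshev or minimax fit would use the same structure with better constants.

/* Sketch: y ~= sin(x) for x in [-pi, pi], as adds and multiplies only, so a
   C-to-gates compiler can turn it into a small pipelined block.
   Coefficients are the truncated Taylor series (illustrative accuracy only). */
float sin_poly(float x) {
    const float c3 = -1.0f / 6.0f;
    const float c5 =  1.0f / 120.0f;
    const float c7 = -1.0f / 5040.0f;
    float x2 = x * x;
    /* Horner form: x * (1 + x2*(c3 + x2*(c5 + x2*c7))) */
    return x * (1.0f + x2 * (c3 + x2 * (c5 + x2 * c7)));
}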


Regards,

 Theo V.
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



[music-dsp] Memory Bandwidth and DSP

2015-12-03 Thread Theo Verelst

Hi list

Thinking a bit about my Zynq board now having a working Linux-supported AXI connection 
with its FPGA, which can be used for (thus far simple) examples with Vivado 2015.3, I would 
like to plan using the right niche or corner or "edge" when making some examples that use 
FPGA acceleration for music digital signal processing I have in mind. The current 
connection that works properly and Open Source between the ARM cores and the decent FPGA 
isn't very fast (little over 10 Mega Byte per second I suppose) but it's possible to run 
quite some data amounts over the AXI bus, like the working (also FOS) HDMI interface 
interrupt/DMA example (from the Parallella project/ADI), and possibly special DSP 
functions that can make use of the (small) on chip memory bank.


It used to be so that a number of interesting DSP functions, just like other functions 
including mathematics on supercomputers and computer graphics, are more memory bandwidth 
limited than CPU/DSP power limited, which today is a bit of an obfuscated subject because 
you have to take pipelined memory access into account as well as average and worst case 
cache performance.


I was wondering what the devices currently in use in various DSP setups have as essential 
memory bandwidth. I know my very fast PC has a dynamic memory access bandwidth of about 80 
Giga byte/sec (4 banks), and big graphics cards with DDR5 memory get what, say a few 
hundred GB/s easily enough. Of course graphics cards aren't often used for audio DSP and 
in the case of the PCs memory banks, we have to probably count considerable latency for 
single, random-bank accesses.
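
For a rough user-space number on a given machine (as opposed to data sheet figures), timing 
a large copy already gives the order of magnitude; a crude sketch, with the buffer size an 
arbitrary choice meant to defeat the caches:

/* Sketch: crude sustained-bandwidth estimate by timing a large memcpy.
   256 MiB buffers are an arbitrary choice, big enough to spill every cache. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

int main(void) {
    size_t n = 256u * 1024 * 1024;
    char *src = malloc(n), *dst = malloc(n);
    if (!src || !dst) return 1;
    memset(src, 1, n);                           /* touch the pages first */

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    memcpy(dst, src, n);
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double s = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) * 1e-9;
    printf("~%.1f GB/s (one read + one write stream)\n", n / s / 1e9);
    return 0;
}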


Then there are the dedicated DSPs; I know about some fast multi-core TI DSPs and have used 
some Analog Devices ones, which essentially use dynamic RAM chips for main memory of their 
core(s), and there are dedicated ASIC/FPGA chip designs with internal memories up to a 
part of or a few megabytes, and some had external fast static memories, but that appears 
to be not fashionable.


What's the range of bandwidth people compute to use for serious DSP applications, or is 
there even an objective measure ? The big dream of computing for instance live FFTs for 
studio quality audio is achieved for PCs depending on what you want but where is the 
analysis ?


Theo V.
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] how to derive spectrum of random sample-and-hold noise?

2015-11-18 Thread Theo Verelst
I don't recall the various chapters of Information Theory and practical digital 
electronics or something about these kinds of power estimates, so I'm not putting this 
thread in some direction, except two small considerations.


When the power of a signal is concerned, which in the continuous case for white noise has 
an interesting statistical convergence, you can try to square it, and do a frequency 
analysis. For a blocky signal like a sequence of amplitude values, that interpretation can 
go quite a bit wrong when those are samples of a real (analog) signal or a theoretical 
signal where you don't distinguish the possible meaning of a spectrum. If the signal is 
white noise coming from an actual white noise signal being sampled, there's going to be 
aliasing that shows up statistically, and/or there's an issue with the band-limiting 
filtering. Not so important, you might think, but how much "energy" of the signal is 
actually producing aliasing (and what is the mapping that then takes place into the 
frequency measurement ?). For the EE interested persons in computing acoustic power in the 
digital domain, you might even in some cases need to know the relation in phase sense of 
the voltage (the value of the sample) and the corresponding current, or there could be 
quite noticeable discrepancies.


Of course you could say, I take a random generator and N samples from the same, and 
compute an FFT. Sure, but theoretically that's not a very clear definition, and you might 
be surprised that practice is a bit different. Also, when the signal is sort of sampled 
(as for instance in HF systems), taking the samples as indicative of the power can give 
you errors because the summation of the squared signal samples isn't the same as the 
integral of the original analog signal! Also, there are interesting little complex 
functions that you can do funny iterations with, and that seem perfectly decent and easy 
to take for instance an average or a frequency analysis of, while in fact that isn't so 
trivial, so I'm not always convinced that a simplified statistical analysis is actually 
close to the real analysis.


T.V.
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] how to derive spectrum of random sample-and-hold noise?

2015-11-12 Thread Theo Verelst
Everywhere in the exact sciences there's the dualism between statistical analysis and 
deterministic engineering tools, since the major break through in quantum physics at the 
beginning of the 20th century. Whether that's some sort of diabolical duality or, as it 
actually is at the higher levels of mathematics, some natural state of affairs with on one 
side theoretical science that properly decorrelates what's not connected and the better 
beta scientists construct working theories and machinery on the basis of deterministic 
("analytic" or number based) hypotheses depends in my opinion on the nature of the beast.


In physics, the strong and hard mathematical foundation of the main solutions for the 
quantum mechanical equations of name comes primarily from physical observations: nature 
appears to play a lot of dice at some level, whether we like it or not! That's a real 
given, not a lack of high frequency measurements or lack of practical knowledge about 
electromagnetics, waveguides and linear and non-linear electronic networks, but as of a 
century ago until this day, because of physics laws that in incredible accuracy appear to 
be based on pure statistics, and hard givens about "causality".


Electronics in the higher frequency ranges, since the beginning, are usually designed in 
terms of networks (oscillators, mixers, amplifiers, cables), EM field considerations 
(antennas, waveguides) and a quantum mechanics at the level of transistor design. There 
are many fields around communications obviously in progress the last decades, including 
better measurement equipment and better high speed digital processing tools, as well as 
design software for creating (digital) transmitters and receivers.


Recently I've witnessed Agilent software for mobile phones' and other applications' digital 
transmitters and other circuitry, and had some hands-on experience with Keysight 
technology oscilloscopes in the many tens of giga Hertz range. Pretty interesting to 
actually being able to sample signals of 10s of GHz into a computer memory and for 
instance do eye-based analysis on digital signals, or play with the various statistics 
modules in such a device.


I heard the story that some of the latest Xilinx high speed FPGAs with their 28Gb 
transceiver links, when connected over a back-plane, create working "eye" diagrams, i.e. the 
communication works well, but measurement equipment fails to acknowledge this by proper 
measurement. That's an interesting EE design dilemma right there: is the measurement 
equipment better than the design at hand, or: do you need a bigger and faster computer 
than the target computer system you're designing, etc.


So the statistics being discussed come mainly I think from electronics about information 
theory, and some people, as is normal in inf. th. find it fun to take out some singular 
(simpler) components like basic statistical signal considerations, in the hope of easily 
designing some competing digital communication protocols. Scientific relevance: close to 
zero, unless maybe you'd get lucky.


With respect to musical signal analysis, it could be fun to theorize a bit about corner 
cases that exist since a long time, like a noise source feeding a sample and hold circuit, 
and making interesting tones and processing with that. Like an S&H unit from a classical 
60's modular Moog synthesizer, which probably can be clocked with a varying clock, and 
feedback signals. The prime objective at the time was probably more related to finding out 
the deep effects of sampling on signals, and encoding small signal corrections in the analog 
signal for when they were going to be on CD. My guess...


T.V.
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] how to derive spectrum of random sample-and-hold noise?

2015-11-10 Thread Theo Verelst
In the course of these discussions, let's not forget the difference between a convolution 
with 1/(Pi*t) (a Hilbert transform kernel) and the inversion of the transfer function of a 
linear system.
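
For reference, the sampled version of that kernel (taps 2/(pi*n) for odd n, zero for even n, 
times some window) can be tabulated like the sketch below; it is the analytic-signal filter, 
not an inverse of anything. Length and window choice are arbitrary here.

/* Sketch: windowed ideal discrete Hilbert transformer taps, h[n] = 2/(pi*n)
   for odd n, 0 for even n, Hann-windowed; len should be odd, centre tap = 0. */
#include <math.h>

void hilbert_taps(float *h, int len) {
    int mid = len / 2;
    for (int n = 0; n < len; n++) {
        int k = n - mid;
        double ideal = (k % 2 != 0) ? 2.0 / (M_PI * k) : 0.0;
        double win   = 0.5 - 0.5 * cos(2.0 * M_PI * n / (len - 1));   /* Hann */
        h[n] = (float)(ideal * win);
    }
}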


T.
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] how to derive spectrum of random sample-and-hold noise?

2015-11-04 Thread Theo Verelst

Ross Bencina wrote:

Hi Everyone,

Suppose that I generate a time series x[n] as follows:

 >>>
P is a constant value between 0 and 1

At each time step n (n is an integer):

r[n] = uniform_random(0, 1)
x[n] = (r[n] <= P) ? uniform_random(-1, 1) : x[n-1]

Where "(a) ? b : c" is the C ternary operator that takes on the value b if a is 
true, and
c otherwise.
<<<

What would be a good way to derive a closed-form expression for the spectrum of 
x?
(Assuming that the series is infinite.)

...


Hi, from me at the moment only some generalities that usually appear difficult (at any level): 
if you want the Fourier transform of a "signal", in this case a sequence of numbers between 
-1 and 1 (inclusive), the interpretations are important: is this coming from a physical 
process you sampled (without much regard for Shannon), or is this some sort of discrete 
Markov chain output, where you interpret the sequence of samples, zeroth-order interpolated, 
as a continuous signal that you take the FT of? In the last case, you'll get a spectrum that 
shows clear multiples of the "sampling frequency" and that is highly irregular because of 
the randomness, and I don't know if the FT's infinite integral of this signal converges and 
is unambiguous; you might have to prove that first.


Statistically, and this is often a problem, this sequence of numbers is like two experiments 
in sequence, one depending on the other. The randomness of the P-invoked choice still easily 
works with the normal "big numbers" approximation, clearly, but the second one, and therefore 
the result of the function prescription, has a **dependency** which makes normal statistical 
shortcuts invalid. I don't know a way to give a proper and correct statistical analysis of 
the number sequence, and I am not even sure there is a proper, infinite-summation based DC 
average computable. Two statistical variables with an inter-dependency require the use of 
proper summations or maybe Poisson sequence analysis, I don't recall exactly, but the 
dependency makes it hard to do an "easy" analysis.


It could be a problem from an electronics circuit for switched supplies or something, or 
maybe in a more restricted form it's an antenna signal processor step or something, 
usually there are more givens in these cases, analog or digital, that you might want to 
know before a proper statistical analysis can be in order, but anyhow, you could write a 
simple program and do some very large sum computations, as separate experiments a number 
of times with different random seeds or generators and see what happens, for instance if 
simulation results soon give the impression of a fixed signal average.
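
A sketch of such an experiment, as a plain C translation of Ross's recipe, with rand() just 
standing in for whatever uniform generator you prefer, and P and N as arbitrary choices:

/* Sketch: simulate x[n] = (r[n] <= P) ? uniform(-1,1) : x[n-1] and print the
   sample mean for a few seeds, to see how quickly (if at all) it settles. */
#include <stdio.h>
#include <stdlib.h>

static double uni01(void) { return rand() / (RAND_MAX + 1.0); }

int main(void) {
    const double P = 0.1;
    const long   N = 1000000;
    for (unsigned seed = 1; seed <= 5; seed++) {
        srand(seed);
        double x = 0.0, sum = 0.0;
        for (long n = 0; n < N; n++) {
            if (uni01() <= P) x = 2.0 * uni01() - 1.0;   /* draw a new level */
            /* else: hold the previous value */
            sum += x;
        }
        printf("seed %u: mean = %+.6f\n", seed, sum / N);
    }
    return 0;
}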


T.V.
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Fourier and its negative exponent

2015-10-05 Thread Theo Verelst
Think of the Fast Fourier Transform as computing the inner product of a piece of signal 
(the length of the transform) with all kinds of basis functions: the various frequencies 
that can "fit" in the interval. Without going into engineering basics, you can take a sine 
and a cosine as basis functions for each frequency that "fits" in the chosen interval (so a 
sine of exactly the length of the interval, two sines that exactly fit the chosen interval, 
3 full sines, etc., until you have a sine that isn't distinguishable anymore because its 
frequency is too high, i.e. its peaks appear closer together than the elements of your time 
vector). As it appears, it is a good idea to take sines and cosines, because then you can 
prove/make credible that there is one, precise, and only one FFT-transformed signal that can 
be associated with your time "sample" or signal data point vector, which given certain 
accuracies is also a bijective mapping. The idea of measuring the sine and cosine 
correlation is an ancient EE trick (probably in mechanics and physics before that), 
connected with the idea that each signal (given certain conditions) can be seen as an 
addition of sine waves, where there's a phase associated with each component. Writing that 
as a complex number is a trick; it has in this case not really to do with the transform, 
it's a short way of writing things. I assure you there's a lot of mathematical hassle with 
complex numbers and matrices possible that you might not want to get into, because it is 
often not very insightful.
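
Spelled out as code rather than complex notation, the inner product picture is simply the 
following direct DFT (O(N^2), purely for illustration; the FFT computes the same numbers 
faster):

/* Sketch: direct DFT as correlations with cosine and sine basis functions.
   re[k], im[k] are the inner products of x[] with cos and -sin at k cycles
   per frame; an FFT produces the same result in O(N log N). */
#include <math.h>

void dft(const float *x, float *re, float *im, int N) {
    for (int k = 0; k < N; k++) {
        double sr = 0.0, si = 0.0;
        for (int n = 0; n < N; n++) {
            double w = 2.0 * M_PI * k * n / N;
            sr += x[n] * cos(w);
            si -= x[n] * sin(w);
        }
        re[k] = (float)sr;
        im[k] = (float)si;
    }
}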


T.
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] What would you do with a fl.pt. 1024 FFT transform per 10 microseconds ?

2015-10-01 Thread Theo Verelst

Matthias Meyer wrote:

Hey Theo,

it is nice to see interest in FPGAs here. ...

..., or the calculation of the power spectrum can be done on
the FPGA with only little impact on latency.

Best regards,
Matthias



Maybe this board can be interesting for people wanting to get going a bit, I didn't yet 
deeply look into it, but with the supplied software and examples it might be possible to 
turn this into a co-processor board with ethernet audio connection, maybe connect to a 
good DAC (like this one for instance: 
http://www.diyinhk.com/shop/audio-kits/68-768khz32bit-ak4490eq-dac-i2sdsd-input.html ), 
and get going with these types of computation blocks, with a few less hindrances because 
of the microblaze processor being supported by Xilinx.


I think it is interesting to couple these DSP means with the proper analog computations, 
such as a windowed power computation, and that that computation can be updated and 
recomputed per sample, with under a sample of latency. That's maybe not sublime for the 
ultimate network theoretician's dreams, but it sure gets a lot closer to fulfilling all 
kinds of promises about network theoretical theorems, and even simple basic electronics, 
finding their way into musical (instrument) applications, obviously coupled with computers.
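
The windowed power computation I mean is nothing more exotic than a running sum of squares 
updated once per sample; a sketch, with the window length an arbitrary choice and the struct 
assumed to be zero-initialised before use:

/* Sketch: per-sample windowed power via a running sum of squares over the
   last WINLEN samples (rectangular window; WINLEN is an arbitrary choice). */
#define WINLEN 256

typedef struct { float buf[WINLEN]; int pos; double sum; } PowerWin;

float power_update(PowerWin *w, float x) {
    float old = w->buf[w->pos];
    w->sum += (double)x * x - (double)old * old;   /* add newest, drop oldest */
    w->buf[w->pos] = x;
    w->pos = (w->pos + 1) % WINLEN;
    return (float)(w->sum / WINLEN);               /* mean-square over the window */
}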


Theo V.
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] What would you do with a fl.pt. 1024 FFT transform per 10 microseconds ?

2015-09-29 Thread Theo Verelst

Allen Downey wrote:

On a related note, how about laying out 2 1024-sample FFTs so you can alternate
overlapping windows on a live stream?



Hi Allen,

I'm not feeling entirely sure about what your suggestion means and why, but I presume you 
want to take another go at the averaging of bins before IFFT-ing back to audio ?


I've made use of "jamin", an open source tool that does an amount of cosine (IIRC) weighted 
averaging after the FFT-->EQ step. It sounds ok, but I use the averaging effect that 
creates only as an effect, and only in a tiny amount of dynamically mixed in audio, with a 
time shifted direct audio stream, which is great for preventing ugly loud signals in 
reverberation and other processing.


Your second suggestion is understood.

Theo V.
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] What would you do with a fl.pt. 1024 FFT transform per 10 microseconds ?

2015-09-23 Thread Theo Verelst

Matthias Meyer wrote:

Hey Theo,

it is nice to see interest in FPGAs here. I have worked with FPGAs for
some years now and always wanted to use it for audio processing. Because
the FFT is indeed really fast I thought about using it for real-time
pitch detection. The Zynq platform is ideal because you can use Linux
and the FPGA. I implemented some basic (multi)pitch detection algorithms
on it using an accelerated FFT on the FPGA.
...


I've done a number of things with programmable logic in connection with audio and 
synthesis, let me mention here the synthesizers that Scott Gravenhorst made for the 
Spartan 3E board which are cool.


Compared with fast DSPs, the advantage of the FPGA fabric and available IP blocks is that 
the instruction set of a DSP is limited, and the instructions and data must all the time 
come, one instruction at a time, from a few special registers or the main memory plus 
caches; there's not very much parallel data for the heavy arithmetic computing elements. 
So in this case, there's a smart set of basic DSP blocks at hundreds of megahertz internal 
clock frequency, with short communication times, connected over a lot of fast FPGA internal 
wires with special (small) FPGA internal memories, and control logic created by the design 
software such that the whole operation is very quick on the, what is it, Virtex-6 or so 
chip fabric.


Before you'd have the whole thing working though, it's an amount of engineering that I 
didn't completely look into, in order to do the preferably stereo processing on a real 
time 96kHz/24bits audio stream, which would fit in the smallest Zynq (which I have). I can 
communicate over the AXI bus to do DSP connected with the ARM cores, but getting that 
communication working reliably for my type of processing may well be some work, and 
feeding and using the output of the FFT block I looked at may be somewhat hard as well, I 
can't tell up front.


It would allow, at least at 96kHz (I'd prefer 192kHz, because that's in line with what 
I use on fast PCs), to do a *WHOLE* FFT of 1024 points in this case *within 1 sample 
time*! That's quite fun. That makes it possible without additional hassle to impose a 
weighting function and get a completely averaged iFFT (didn't look into that) perfectly per 
sample. Maybe that can be done more efficiently when decomposing the FFT computations, but 
that would make a good audio processing mastering effect, even more neatly averaged than 
the Linux "Jamin" (which I think averages a number of FFT->Filter-IFFT output datapoints, 
but not a completely sliding window, IIRC).
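
The decomposition hinted at is the sliding DFT: instead of redoing all 1024 points each 
sample, every bin is updated recursively from the sample entering and the sample leaving the 
window. A sketch for a single bin (in practice it needs periodic re-initialisation or a 
slight damping factor for numerical stability):

/* Sketch: sliding DFT update of bin k for an N-point window, once per sample.
   Xk <- (Xk + x[n] - x[n-N]) * exp(j*2*pi*k/N).  Needs the last N samples. */
#include <complex.h>
#include <math.h>

typedef struct {
    double complex Xk, twiddle;   /* current bin value and e^{j 2 pi k / N} */
    double *delay;                /* circular buffer of the last N samples  */
    int N, pos;
} SlidingBin;

void sbin_init(SlidingBin *b, int k, int N, double *delaybuf) {
    b->Xk = 0.0; b->N = N; b->pos = 0; b->delay = delaybuf;
    for (int i = 0; i < N; i++) b->delay[i] = 0.0;
    b->twiddle = cexp(I * 2.0 * M_PI * k / N);
}

double complex sbin_update(SlidingBin *b, double x) {
    double oldest = b->delay[b->pos];
    b->delay[b->pos] = x;
    b->pos = (b->pos + 1) % b->N;
    b->Xk = (b->Xk + x - oldest) * b->twiddle;
    return b->Xk;                 /* DFT bin k of the most recent N samples */
}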


Very good for mid-frequency power control/measurement.

Anyhow, of course there are more uses, like measuring, extrapolating different audio frame 
processing outcomes, etc.


T.V.

___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp


[music-dsp] What would you do with a fl.pt. 1024 FFT transform per 10 microseconds ?

2015-09-21 Thread Theo Verelst
I've tried to compose this mail from scratch, so there should be no strange (in my mail 
program) invisible tags left.


Anyhow, I was playing around a bit with an FPGA programmer tool called Vivado (from 
Xilinx), for use with the cheap ($99 and up) Zynq based Parallella board, and looked at the 
available freely usable IP blocks that can be automatically designed with it, and found 
this FFT computation block:


   http://www.theover.org/Musicdspexas/Screenshot_from_2015-09-21_16:57:02c.gif

which in the parameters on the other pages is set to 32 bit floating point as input format 
(it automatically picks 24 or 25 bits for phase data) and at a clock frequency available 
on the cheap little boards should have a 1024 sized FFT computation done in under 10 uS, 
which is pretty fast.


Now, these blocks aren't very easy to interface, but the fun of the (freely downloadable 
and usable) setup in this design package is that you can simply route wires between 
corresponding-size inputs and outputs, so you could also make an audio frequency flow 
graph that can then be hardware compiled to fit in the FPGA part of the Zynq chip, 
addressable by the 2 ARM cores.


What would you do with that ?

T.V.
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp


[music-dsp] DSP connections with basic acoustics

2015-09-18 Thread Theo Verelst

Hi all

For those inclined to have an interest in some of the signal processing I have involved 
myself with (the averaging ideas), and for general interest, I've worked a bit with the 
(demo version) Comsol 5.1 physics simulation software and in particular one example: the 
Standing Wave Computations.


Here's a picture showing "standing waves" at a frequency of 90.6 Hz (the lowest mode in 
this example was 74 Hz) in a room with some furniture of 4x5x2.5 meter, and the sound 
pressure level in dBs on the walls, floor and furniture with a color legend:


   http://www.theover.org/Musicdspexas/comsol5cm.jpg   (59 kilo Byte)
   http://www.theover.org/Musicdspexas/comsol5c.png(217 kB, original size)

What's the connection with DSP ? Well, if you take it you work at CD sample rate, the 
corresponding "size" of a sample is appr. 340m/s / 44100 Hz ~~ 7.7 millimeter. Suppose we 
do some form of filtering with a FIR length of 256 samples (a small FFT, a tabled 
convolution FIR, or a sample reconstruction reconstruction filter), the associated 
"length" of that sample train of 256 samples becomes an acoustic wave of about 2 meters.


The shown computation of the scientific solver for the eigen-modes shows little errors in 
the low pressure parts of the standing waves, where the "S" figures on the wall indicate 
nodes of the self-resonance of the room, around a band of say 30 cm.


What are the connections ? Well driving the eigen-modes isn't usually what people want to 
do, but it happens automatically, and everything that comes out of your speakers will 
excite a resonance in the room. Of course, you may also want to compute early reflections, 
for instance at the listener "sweet spot", or the whole sound transient that comes from an 
excitation like a balloon pop, etc., but this is still interesting to think about, at the 
very least concerning where to put damping for monitoring DSP experiments, and what the 
combined error of a digital processing and D to A chain does in a listening space!


T.V.
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp


[music-dsp] How do you, or would you want to, schedule audio data flow and processing in multiple audio stream software

2015-09-13 Thread Theo Verelst
I raise this question out of some form of curiosity about how people deal with the idea of 
putting their and other people's "plugins" or processing blocks or software pieces 
together to form a whole.


Say you wrote a filter, and you want to add a source and a sink. Once you've chosen your 
operating system and audio interface of choice, you implement your audio filter, say in 
44.1/16, and connect it to a source, maybe non-real-time from a file, maybe demand or 
supply driven in some form of flow, and subsequently, or semi-parallel connect the output 
of your filter to a sink, maybe a sound card.


You could do that all by hand, you could use existing streams (like Unix sockets), OO 
classes to in some way implement this game, or you could use the Steinberg plugin-kit (I 
forgot what it's called, even though I've downloaded one recently), or some other 
pre-fixed audio streaming regime.


Myself, I often use the Linux Jack/Ladspa Free and Open Source tools, which offer realtime 
streams without much of an upper bound (at least a few hundred streams aren't a problem on 
a good computer) between audio callback routines in one or multiple processes/threads, 
where Jack does a smart schedule such that the illusion remains throughout the audio 
processing graph that callback routines work on a given buffer size, aren't running for 
nothing, and always if there's enough compute power, servicing the parts of a larger flow 
remains correct and predictable. Now, I've been into PhD level stuff around these kinds of 
subjects, so I know the main underpinnings of Unix streams and solving schedules for 
functional decomposition with intermediate result re-use, so I'm not fishing for 
solutions, just wondering what people here think about this subject, or would like to work 
on, etc.


Ir. (M.Sc.) T. Verelst

___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Announcement: libsoundio 1.0.0 released

2015-09-09 Thread Theo Verelst
A short note on the Linux sound APIs (or whatever they should be called) like 
pulseaudio/alsa/jack. The difference, as far as I know, is that pulseaudio tried to be a 
general interface that would work for all applications, intended for general use. Alsa is 
a relatively low level interface that doesn't do much, such as dealing with multiple 
applications and resampling, which pulseaudio can do I think, not necessarily what people 
want.


Jack is the most rigid interface in the sense that it can time strictly and stay 
synchronized with a certain audio interface or real time clock (using "dummy" driver). 
Also, jack can handle complicated and big graphs of audio connections with (globally, at 
start up) adjustable buffering, which it syncs to 1 (one) audio interface, and if there 
are no "xruns" reported, it thinks all buffers and processing by active jack clients have 
happened on time. It's great that in that case all streaming works completely transparently. 
So you can connect an audio source to, say, 5 programs to process it, and an inverter; if 
you then put the 5 programs in "bypass" mode, meaning they process audio but actually 
don't change a bit of information, you can (automatically in Jack, by connecting to the 
same sink/input) mix the inverted with the 5 times neutrally processed outputs, and you'll 
nicely get zero (i.e. the inverted signal cancels out the other passing streams).


The stdout output might be related to keeping the process scheduler module in the 
Linux/OSX kernel (or whatever such is called under Windows) aware of the audio process/thread.


T.V.
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Mails with images?

2015-09-01 Thread Theo Verelst
Good idea to have images and typesetting. Bandwidth can soon become a hurdle, and maybe there 
are still people with older browsers and mail clients. I could see the test image in the 
first mail of this thread, but I don't mind the method I used myself to just put a link in 
mail to a stable server with images. A Tex to image converter is no prob for me, to create 
an image of a formula for instance.


Maybe there could be a way to include images (from other or specific servers) on the web 
server for the archives, so that if a mail attracts attention, the clicking on images can 
be avoided by looking the mail up in the archives.


T.V.
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] 20k

2015-08-31 Thread Theo Verelst

Scott Gravenhorst wrote:

music-dsp@music.columbia.edu wrote:
 >On 2015-08-30, Scott Gravenhorst wrote:
 >
 >> This amounted to using a microphone to sample the effects of an
 >> impulse (starter's pistol or some such) on some audio environment like
 >> a church or concert hall,

-- ScottG


I haven't browsed over all the answers yet, but yeah, that's fun, making and playing with 
sampled impulses. Of course, for the proper linear system theory to work, as in to give you 
all correct reverb responses to all possible excitations, with long delays being part of 
the system to be modeled, proper theory essentially says you get problems because the 
alarm-shot sound has made a single snapshot, which you can only apply once: for the second 
sample of your own input to the linear system, represented by the convolution of the 
measured response with your own input signal, the air mass is no longer "at rest", or in 
other words there are different initial conditions, so linear system theory is broken. 
Also, linear systems with delays have infinitely many poles/zeros, so there's that.


It's still fun, but it sounds decidedly flat and boring on musical input signals, in my 
experience, and yes, most likely there's a difference between the Free and Open Source 
implementations of "brute force" and FFT-based long convolver programs. Probably the 
FFT ones will show a certain dimensionality in their computations, as a consequence of the 
length of the chosen FFT. Might not be important for some applications, but when I make 
audio productions that must work professionally, it could well be a first-grade issue. I 
didn't decode the whole source code (I think Fons, who offered the FOS "jconvolve" I've 
tried, was actually a graduate from the same university as I), nor run any direct 
comparisons between the two convolution variations, which could be an interesting thing to 
do, but I'm too busy programming some actual current digital synthesizers I work on. Also, 
it's important to include shift invariance in the considerations about computations with 
impulses.
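
For comparison purposes, the "brute force" variant is easy to state exactly (it is what the 
FFT-partitioned convolvers approximate block-wise); a sketch, with the impulse response 
assumed to be already captured in ir[]:

/* Sketch: brute-force convolution of input x[] (length nx) with a measured
   impulse response ir[] (length nir) into y[] (length nx + nir - 1).
   O(nx*nir): fine offline, too slow live for long IRs, hence FFT partitioning. */
void convolve(const float *x, int nx, const float *ir, int nir, float *y) {
    for (int n = 0; n < nx + nir - 1; n++) y[n] = 0.0f;
    for (int n = 0; n < nx; n++)
        for (int k = 0; k < nir; k++)
            y[n + k] += x[n] * ir[k];
}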


Regards

T.
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-30 Thread Theo Verelst

Interesting story about the interpolation noise from very highly oversampled signal 
approximations. I tend to think that if it doesn't concern an actual sinc function of 
significant width and accuracy, then the up-sampling is wrong, unless the signal is 
prepared for it.

I can imagine in sample processing machines and software that you could
do effect processing in an oversampled domain which is arrived at by
having knowledge of the sample signals and possibly their detune filters,
and that it is possible to compute sample sets which allow oversampling
by relatively simple or cheap DSP operations to anyway fulfill certain
accuracy criteria, such as low noise, frequency accuracy, or effect
accuracy (like the number of fractional sampling accuracy in a phaser
effect for instance).

The linear interpolation I used a decade ago for chorusing on a moderately strong DSP of 
the time was ok because of the signal properties and the tunings of the chorus I 
programmed; of course there's a lot to be said for using more than zero order 
interpolations in general. I've looked at Taylor expansions for the sinc function and its 
possible accuracies, for instance. In mechanical design, it was one of the early computer 
math issues to use all kinds of interpolation schemes for a variety of purposes, with some 
terminology I suppose from the early days of the industrial revolution. However, a good 
understanding of these should be based on an understanding of what they are for. Some 
interpolations are for minimal stress, some for minimal distance given a certain curvature, 
others are statistically neutral in some sense, etc. More interesting is to look at 
higher-dimensional curves and surfaces, or to try these out in functional analysis or 
computations, which is far outside the scope here, and not very useful in normal audio 
subjects.

Unfortunately the averaging and continuity considerations of the various interpolation 
curves and their mathematical properties aren't very well correlated with audio signals, 
and certainly not necessarily with sampling issues. So think about what some suggested 
here: what is the filter kernel that you're putting over your signal when using them, and 
what does the sampled nature add in terms of misery on the side?
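
To make the filter kernel question concrete: whichever interpolator you pick is equivalent 
to some kernel placed over the samples, and the reference point is a windowed sinc. A sketch 
of building such a kernel for integer-factor upsampling; the length, the window and the 
usual polyphase/gain bookkeeping are left as assumptions here.

/* Sketch: windowed-sinc interpolation kernel for upsampling by factor L.
   Taps are the sinc at the original rate, Hann-windowed; ntaps should be odd.
   For zero-stuffed upsampling the kernel gain is usually scaled by L afterwards. */
#include <math.h>

void make_sinc_kernel(float *h, int ntaps, int L) {
    int mid = ntaps / 2;
    for (int n = 0; n < ntaps; n++) {
        double t = (double)(n - mid) / L;            /* in original-sample units */
        double s = (t == 0.0) ? 1.0 : sin(M_PI * t) / (M_PI * t);
        double w = 0.5 - 0.5 * cos(2.0 * M_PI * n / (ntaps - 1));   /* Hann */
        h[n] = (float)(s * w);
    }
}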

T.V.
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-20 Thread Theo Verelst

Hi,

A suggestion for those working on practical implementations, and to lighten up the tone of 
the discussion: some people here, I know, worked on all kinds of (semi-)pro implementations 
when I wasn't even into more than basic DSP yet.


The tradeoffs about engineering and implementing on a platform with 
given limitations (or for advanced people making filters: possibly even 
trading off the computation properties required for a self-designed DSP 
unit) including memory use, required clock speed, and heat build-up (not 
so important nowadays for simple filters) can be more accurately met by 
being specific about the requirements in terms of the quality and the 
quantification of the error bounds, as in this case how much high 
frequency loss can I prevent, at which engineering (or research) cost, 
and how many extra clock cycles of my DSP/CPU.


In some cases, it can pay to make the extra effort of separating your audio frequency range 
into a couple of bands: say you make one interpolator for low frequencies (e.g. simple 
zero-order), one for mid frequencies (with some attention to artifacts in the oh-so-sensitive 
3 kHz range), and one for, for instance, frequencies above 10 kHz, where you can then pay 
more attention to the way the damping of the higher frequencies comes across than to the 
exact accuracy of the short-time convolution filter you use. Such an, in this case limited, 
multi-band approach costs a few filters and a little thinking about how those bands will 
later add back up properly to a decent signal, but it can make audio quality higher without 
requiring extreme resources.


T.V.
___
music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Non-linearity or filtering

2015-08-20 Thread Theo Verelst
Thanks to all the participants in this thread, I hope it was at least a little educational, 
except maybe for some that seem to take everything as a test of their imaginations of 
themselves being little computers, and not human beings with normal associations and lasting 
affections for serious subjects in the field of digital signal processing.


When I first got on the World Wide Web, it must have been with Mosaic in about '93, 
after having used email for half a decade and enjoying participating on 
bulletin boards and so on, it required a serious scientific workstation 
and a serious network connection to be able to actually browse the 
information on ftp and early http sites, and I counted myself lucky to 
have those facilities at my disposal.


In the now, a list like this can be done on an old computer with an 
un-optimal modem connection, and you're fine! Lots have changed about 
what people are used to, and what people get sued over on Twitter, etc., 
but some things remain the same: science isn't a dirty subject on the 
big internet, and being able to download and view all kinds of 
information has a democratizing effect, in the west, the 2nd world, and 
hopefully even the third world.


It still seems hard to distinguish, though, which people would for instance 
sum up a scientific subject properly when history seems to call for it, 
and win a Nobel Prize (like a Dutchman in the 90s for Physics), and which 
people are only good for, say a government job to teach first year 
technical studies, or who deserve to run a multinational or become a 
famous professor in say the humanities, and I feel that that is because 
a lot of fundamental science questions have been put in certain corners 
since the Information Technology worlds has become more popular than 
most of it's constituents deserved, and somewhat that applies here as 
well, which is a pity, because good teachers valuable, leading 
professors that come out right are scarce in the field of computers and 
computer design, and frankly, the digital audio world isn't as much a 
success as many people hope for!


Anyhow, nothing much to personally interest me in the subject of this 
thread, except I'm more than average interested (and capable) in proper 
scientific theories, and it happens to be an essential subject for 
instance for first year (academic and otherwise, I'm sure) technical 
science students, and for good reason. How are those FIR filters to be 
considered non-linear, and how are those FFT effects going to filter 
like a linear system ? Just saying.


Ir (M.Sc.) T. Verelst

___
music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-19 Thread Theo Verelst
Sometimes I feel the personal integrity about these undergrad level 
scientific quests is nowhere to be found with some people, and that's a 
shame.


Working on a decent subject like these mathematical approximations in 
the digital signal processing should be accompanied with at least some 
self-respect in the treatment of subjects one involves oneself in, 
obviously apart from chatter and stories and so on, because otherwise 
people might feel hurt to be contributing only as it were to feed da 
Man or something of that nature, and that's not cool in my opinion.


T.V.
___
music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-17 Thread Theo Verelst


For people, including the scientifically oriented, it always surprises me how 
little actual science is involved in this talk about tradeoffs.


First, what is it you want to achieve by preserving high frequencies 
(which of course I'm all for)? Second, is it really only at the level of 
first order interpolations? And if so, isn't the compensation 
interpolation much more expensive than a solution that tries to qualify, 
and preferably quantify, the errors involved?


Using least squares and error estimates is a bit too easy for our 
sampling issues, because of at least the mid and high frequencies getting 
interpreted by the DAC reconstruction filter, subsequent digital signal 
processing, or, as I prefer, the perfect reconstruction interpretation of 
the resulting digital signal streams.


IMO, high frequencies will be most served by leaving them alone as much 
as possible, and honoring the studio and post processing that has 
checked them out and pre-averaged them for normal sound reproduction. 
However, no one here besides RBJ and a few brave souls seems to even 
care much about real subjects.


Now I get it: everyone has a sound card and endless supplies of digital 
material, and from my student efforts I recall it is fun to understand 
the theory of interpolation curves (and (hyper-)surfaces), but 
unfortunately they correlate only very loosely with useful sampled-signal 
theory, unless you want an effort for a particular niche.


T.V.

___
music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] about entropy encoding

2015-08-12 Thread Theo Verelst
Just to make clear where I stand on this: I've remarked, without wasting 
seas and seas of bandwidth, that it isn't true that a general estimate 
of the entropy of a row of bits can simply be defined or 
given. Some people like to pretend some sort of half-academic 
discourse about this is interesting, but I prefer to stay cool and 
observe the ground rules, which, once more, are in my case simply part of 
undergrad courses that were being taught, not researched.


If you take it that a digital signal, defined as either a row of 
bits or as messages over a stream with, for instance, a fixed bit rate, has 
entropy in the common meaning, you want to make clear what it is that 
you mean by that, or you can take any additional *given* and prove 
yourself wrong. Simply put, I could define a protocol (or, after more 
thought than I apply here, a more expensive mathematical word for the 
exact occasion) that states that every next bit in a communication 
stream is by default to be inverted. Or I could say that a message 
only becomes meaningful after 1000 bytes of 0xFF or 0x00 or whatever.


So a random file, or a streamed word sequence, can be assigned a meaningful 
measure of information only after you've decided on the *givens* 
involved. Otherwise it becomes tedious and pretty soon pointless.


Now, there are also formulas involved for serious mathematicians, which 
are, I'm pretty sure, undergrad formulas, and it is well known that 
these given-probability formulas are the usual stumbling block for 
people working with statistics who aren't used to them.


Just some givens.

T.V.
___
music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] about entropy encoding

2015-08-12 Thread Theo Verelst

Peter S wrote:

...
Shannon defines entropy as the minimum average length of a message in
bits when you want to transmit messages of a probabilistic ensemble.
...


From Wikipedia (should be readable for non-academics, non-engineers, 
etc., too): "In information theory, entropy (more specifically, Shannon 
entropy) is the expected value (average) of the information contained in 
each message received."


To explain the (in many cases, I think, first-year) statistics involved: 
suppose we put 3 balls in a vase, with colors psycho-red, schizo-green, 
and socio-yellow (please, a little joke, nothing personal), we close 
our eyes, and we draw two ping-pong (or billiard?) balls from the vase; 
the chance for the remaining one to be red is officially 1/3. If we 
repeat this experiment N times, then for N large the red ball remains in 
the vase about 1/3 of the time. Boring, sure; now let's talk about the 
chances of a row of bits with a certain pattern, shall we?
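
A hedged little sketch (my own addition, not part of the original mail) that 
just checks the vase example numerically by simulation; the colors and the 1/3 
figure are the ones stated above.

import random

colors = ["red", "green", "yellow"]   # the three balls from the example
N = 100_000
red_left = 0
for _ in range(N):
    vase = colors[:]
    random.shuffle(vase)
    vase.pop(); vase.pop()            # blindly draw two balls
    red_left += (vase[0] == "red")    # is the remaining ball red?

print(red_left / N)                   # converges to ~1/3 for large N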


If we presume perfect messages of a certain bit size, e.g. 2 bits, a 
perfectly encoded stream will use them such that if we send N 
messages, for N large, we have perfect white noise and dissociated 
messages, with all messages occurring equally often. So we can use the 2-bit 
words optimally, for instance to transfer a large file (if desired even 
repeatedly for different large files), when the correlation between the 
messages is zero?
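
As a sketch of the Shannon-entropy arithmetic being alluded to (my own 
illustration, not from the original post): with the four 2-bit messages equally 
likely and uncorrelated, the entropy per message reaches its maximum of 2 bits, 
and any skew in the distribution lowers it.

import numpy as np

def entropy_bits(p):
    """Shannon entropy H = -sum p*log2(p), ignoring zero-probability symbols."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

uniform = [0.25, 0.25, 0.25, 0.25]    # the "perfect white noise" case
skewed  = [0.70, 0.10, 0.10, 0.10]    # a biased source wastes capacity

print(entropy_bits(uniform))  # 2.0 bits per 2-bit message (the maximum)
print(entropy_bits(skewed))   # about 1.36 bits per message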


Now, where is the assumption I mean, and what's wrong with all those 
perverted derivations and phrasings I tried to correct here? (There's a 
real answer, and if someone can't find it I'll be happy to give 
it on request. Hint: white noise over a channel is optimal from the 
standpoint of maximum bits of information per second.) Some of 
these measures, like correlating the channel noise properties with 
signals, and all kinds of difficult analog and digital computations, for 
instance in a cable modem, aren't my target here, because I feel that 
most here are lost in the associated engineering (or, as I would prefer, 
basic theoretical physics) mathematics, which is fine, but I prefer to 
arrive at some basic agreed-on normalities, which is how I perceive most 
people on the list work, too. Clearly some first-year engineering school 
assignments would be desirable in this case, or the engineering schools 
at hand (with certainly no academics yet) would go to hell in a handbasket.


Anyhow, that's it for now; I won't interfere anymore. Even though I 
thoroughly appreciate an (actual) theoretical discussion, including my 
responsibility in it, I refuse to fake a catharsis when obviously the 
ranting going on is at best babble about peripherals.


T

___
music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Non-linearity or filtering

2015-08-10 Thread Theo Verelst

robert bristow-johnson wrote:

On 8/9/15 6:23 PM, Sampo Syreeni wrote:

1) a dithered sigma-delta converter is typically better quality than
one without dithering


Correct.

there is and always had been **some** discussion and controversy about
that every time i seen it discussed at an AES convention.  i remember
hearing (and talking with) Stan Lipshitz about it.



I'm glad some people might not hold the gospel, but at least keep some 
tabs on the subject.


Now a serious concern: it has come to my attention that some people deem 
it appropriate to talk about their favorite subject without any 
self-scrutiny concerning which communication thread they abuse for that. 
It should be clear to even reasonably intelligent people that trying to 
convince everybody that all DSP is ancient message encoding and some 
privatized form of file compression is not just a sign of weakness and 
lack of intelligence, or perhaps of knowledge about the subject at hand, but 
also a nuisance to people who might discuss subjects like reasonable 
adults instead of sophist zealots. It might even be that people miss the 
point of actually interesting subjects being brought forward, and that 
cannot be the general intention of internet communication in the free 
West, IMO.


The main point I was trying to make is that I appreciate efforts that 
keep various traditional filter and (non-) linear subjects what they 
are, and I prefer to be able to describe the effects a piece of DSP code 
has, which appears often to be not as clear as it seems at first sight.


Trying to quantify all kinds of errors as encoded signals is not a 
solution to any of the inaccuracies at hand, because DSP-versus-ideal-filter 
congruence, (re)sampling-related inaccuracies, analog error 
correction in the DSP domain, noise suppression, and (proper) signal recovery 
are all served by strong analytical thinking and explicit methods, for 
most of which accurate mathematical underpinning exists (like I said, 
some of it for EE undergrads, and I don't like people researching those 
subjects as if they can be claimed as new, but that's another 
discussion). Messing about with error-coding schemes that can hardly 
predict digrams has nothing whatsoever to do with proper engineering in 
those fields, let alone with proper science.


I understand some people can feel a bit embarrassed about some subjects 
I raised, but I've worked enough on theory to know how to behave 
myself, and I have a genuine interest in the DSP subjects I have 
results with, at high levels of aggregation and on some serious digital 
audio equipment, and in the theoretical progress I think is possible 
here. Engineering knowledge is hard to get, in some countries maybe not 
even possible, but that's no reason to devalue the main lines it can 
offer, or to hide the higher education under a burden of nonsense.


T.V.
___
music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Non-linearity or filtering

2015-07-29 Thread Theo Verelst
Again, I don't respond to certain derailers whom, at some point, I don't 
miss content-wise when I stop reading them.


I started this thread, just as quite a while ago I brought up the basics of 
sampling on this list, because, being both practically and theoretically 
inclined (and talented and educated), I felt a lot of people, from 
serious hobbyists to professionals, were missing the accuracy mark on this.


Now, what was I writing this particular thread about? Not about 
ADCs/DACs, no matter how much I like that subject (I seem to recall asking 
for some attention to using higher sampling rates quite a while ago; I'm 
glad that caught on, and in fact since then I've read there's even going 
to be consumer support for rates I find interesting: up to 1.5 
megasamples per second in HDMI 2.0, but that aside). Also not about 
dithering, which I understand fine, and which is mainly a rounding matter 
(preferably with an eye for frequency distributions and power control), 
but which IMHO has nothing to do with what I was talking about.


I wrote about the difference between non-linearities and linear 
filtering in DSP, with as one of the main motivations the possibility of 
correcting electronic circuits and mechanical devices that exhibit 
non-linearities and filtering aspects that are correctable to an extent 
by digital signal processing. Moreover, there are circuits known in 
the DSP sphere which I think exhibit an amount of non-linearity 
(distortion) while filtering, and possible sampling issues when acting 
as non-linear effects.


For me this is still first-year material (which I still like, but it's 
basic); in fact the linearity definitions were, in my schooling, a high-school 
subject. There's a complete difference between a system, a circuit, 
or a mathematical operation (as in differential equations) which does 
linear filtering, think of a perfect equalizer, and a circuit, or its 
simulation, which distorts!


T.V.

___
music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] The Art of VA Filter Design book revision 1.1.0

2015-07-26 Thread Theo Verelst

vadim.zavalishin wrote:

Hi all!
...
http://www.native-instruments.com/fileadmin/ni_media/downloads/pdf/VAFilterDesign_1.1.0.pdf
...
Feedback appreciated.



I've gone (pretty quickly) over all sections, and I must say I can't 
find a lot of the important basics I would require to 
make an authoritative statement about what's right and wrong.


I don't mind playing with some DSP blocks, occasionally designing 
something decent, and even working on interesting DSP subjects. Of 
course these subjects, in the sense of analog filters and their theory, are 
a century or more old, and most books make a proper build-up 
of a part of the subject, or you'd end up with a full Network Theory 
course for university students (I'm an NT graduate, and I like the 
basics, too), which theoretically only works properly, IMHO, when there is a 
good mathematical foundation. That's a bit missing here. Drawing the proper 
conclusions, most importantly about the signal analysis and the circuit-and-signal 
congruence, when transferring a small network from analog 
schematics and (very basic) analysis to a digital implementation 
schematic and recurrence relation, is an activity from long ago that 
certainly still deserves attention. I'd prefer something new that deals 
with a host of complicated subjects to make digital synthesis accurate 
and exciting, and I don't feel that's on the horizon of this technical 
report.


Theo V.

___
music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Non-linearity or filtering

2015-07-23 Thread Theo Verelst

robert bristow-johnson wrote:

Peter...

 ...

it's you that are applying concepts of the old or conventional
converters here.
..
I'm not answering much to treacherous psychopaths (judging from the use of words 
and the content of the communication here, anyway), because that doesn't 
contribute much, and it is (in all honesty) not what inspired me to the 
obvious enough subject of this communication.



also, even in the least-significant bits, there is signal embedded (or
buried) in the noise.  it's not as if they appended 4 noisy bits to the
right of a 20-bit word.  they didn't do that.


Of course, agreed, and clear. In fact, the same is true for some forms of 
Johnson noise, and certainly for microphone air-induced noise patterns 
and for reception of signals through connecting wires (including ground-current-induced 
noise); expert use of studio equipment appears to break a lot 
of the simplistic rules that of course certain people are satisfied with for 
everyday use. It's almost as if the matrix of microphone + 
preamp + AD + computer + DA + amplification favors certain correlations!


To stay a proper theoretician (this is, seriously, and not necessarily 
condescendingly, undergrad basic material), and in the face 
of possible sample-based processing errors and the errors induced by limited 
reconstruction filtering (which require a little, but not 
that much more, measurement and normalizing for human perception), I was 
breaking a lance for distinguishing between a curve in a signal coming 
from a non-linearity, meaning for instance that the amplitude somehow 
determines the amplification factor, and (linear, in normal EE language) 
filtering of some order, both in the analog and in the digital domain, 
though there are differences; I mean just in general. A proper circuit 
analysis depends on understanding that difference, which can lead to 
interesting conclusions about DSP. 
Lest we sit with our feet in a bowl of water and a cat on our lap 
waiting for something interesting to take place.


T
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Non-linearity or filtering

2015-07-22 Thread Theo Verelst

Uli Brueggemann wrote:

...
My simple assumption was: if the DAC is a 24 bit DAC it should be possible
to get down to e.g. -90 dB with the sum of the signals. But it seems to be
quite challenging.


It may be wideband (i.e. including sub-sonic and super-sonic) noise that 
keeps the sum at about -70 dB or a little more.


I don't have any problem matching channels from the *same* DAC so that they add 
close to zero with a very high quality summing stage. Inverting the 
signals is easy to do if you have some streaming plugins and connections 
to play with, and small volume changes are possible too, of course. If the 
reconstruction filter in the DAC is the same, the sum should be 
independent of the signal. I can verify that with a DC-coupled quality 
DAC with precision SMD filter components at its output feeding a high-impedance 
buffer stage, but of course very small differences could exist.


Harder but more interesting is to take a signal like music or TV or 
anything with wideband frequency components, take two different 
DACs, feed one of them through an as-neat-as-possible resampling filter so 
that it works at the same clock in the digital domain as the other, and subtract 
(or analog-add, with a digital inversion built into one of the two digital 
paths to the DACs) the two signals, let's say in stereo, for fun. The 
fact that the DACs will have slightly different clock frequencies, or 
slightly non-integer clock relations (for instance 48 kHz for the one DAC and 
384 kHz for the other, but not based on the same master clock), makes the 
resampler necessary. Anyhow, make sure you delay the fastest DAC path 
to match the slowest, and the fun can start: when the clocks are a bit 
stable, very different kinds of DACs can add to zero, but you'll hear 
all kinds of interesting phasing going on while you accurately adjust the 
delay (and the volume, of course), and when stability is achieved (unless 
you are able to use the same master clock for both DACs whose 
outputs are being subtracted), you'll get an idea of what reconstruction 
filters really do (and can't)...
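
A minimal digital sketch of the same null-test idea (my own illustration, not 
from the post), assuming you have already captured both DAC outputs back at a 
common sample rate: estimate the relative delay by cross-correlation, match the 
gain by least squares, and look at the residual.

import numpy as np

def null_test(a, b):
    """Align b against a (integer-sample delay via cross-correlation),
    match its gain by least squares, and return the residual level in dB."""
    n = min(len(a), len(b))
    a, b = a[:n], b[:n]
    lag = np.argmax(np.correlate(a, b, mode="full")) - (n - 1)
    b = np.roll(b, lag)                       # crude integer-sample alignment
    g = np.dot(a, b) / np.dot(b, b)           # least-squares gain match
    resid = a - g * b
    return 20 * np.log10(np.linalg.norm(resid) / np.linalg.norm(a))

# Toy demonstration with synthetic "captures" (stand-ins for real recordings).
rng = np.random.default_rng(1)
x = rng.standard_normal(4096)
y = 0.9 * np.roll(x, 17) + 1e-4 * rng.standard_normal(4096)
print(null_test(x, y))                        # a deep null, around -80 dB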


Of course resistors and capacitors will influence frequency 
characteristics around the low and high frequency cut-off frequencies, 
and idem for the signal phase.


Then there is the harmonic distortion as a consequence of the 
limitations of the DAC's reconstruction filter, and the (transient) 
intermodulation distortion that all DACs have, also as a consequence of 
inherent non-linearities and the mere fact that transients coming out of 
the digital domain (through the standard, very short reconstruction 
filters) are far from perfect for human listeners. So, depending on the 
setup, the output signal will certainly be no more accurate than these 
distortion figures indicate. That should be a lot better than 0.1 dB, 
which would imply an error of over 1 percent, which wouldn't even be very 
good for a 1950s HiFi system. But the specified harmonic distortion of a 
lot of well-known DACs sure isn't such that 24-bit accuracy is achieved.


T.


--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


[music-dsp] Non-linearity or filtering

2015-07-21 Thread Theo Verelst

Hi DSPers,

For a long time it has bothered me that there's a somewhat hidden way of dealing 
with all kinds of signal distortion, both in the analog domain and in 
the digital domain, that isn't necessarily clear. I think this is a real 
subject, and I'd like a good angle on it, essentially to make 
clearer and more transparent recordings and DSP algorithms.


Simply put, if you take a standard test or oscillator signal like 
a saw wave (you know: linearly up to a fixed point, back to a given voltage 
in virtually zero time, and then very linearly up again), 
and you put that signal through a preamp or a digital processing device, 
you can get distortion from transistors (or tubes) or from errors in 
the AD/DA conversion, and you can get signal aberrations as a 
consequence of (linear) filtering, like coupling capacitors, DC offset 
control (a similar function), limited frequency response, and general 
imperfections showing up as frequency-dependent amplification errors 
stemming from linear filtering elements.


To give an example, you could get exponential curves superimposed on 
your pure saw wave (forgetting for the moment the imperfections 
related to aliasing in the case of digital signal processing) either from 
non-linearities in the signal chain, such as small-signal response 
curves in amplifier elements, or from filtering elements responding to 
the signal with normal (and linear) time responses based on the poles 
and zeros at stake. So the input of the amplifier amplifying a 
microphone could show some sort of transcendental non-linear distortion 
(standard for a lot of circuits). On the other hand, the high-pass 
filter built into many amplifier stages (in the form of the coupling 
capacitor) and the limited frequency response of any analog circuit 
change the curves in the test saw wave a bit too, which translates into 
certain types of exponentials as well (first-year exercises for EEs).
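
To make the linear-filtering half of that distinction concrete, here is a small 
sketch (my own, with assumed component values and rates) of what a first-order 
high-pass, i.e. a coupling capacitor, does to an ideal saw wave: the linear 
ramps acquire exponential sag even though the filter is perfectly linear.

import numpy as np

fs = 48000.0          # assumed sample rate
fc = 20.0             # assumed coupling-capacitor corner frequency (Hz)
f0 = 100.0            # saw fundamental

t = np.arange(int(fs)) / fs
saw = 2.0 * (t * f0 % 1.0) - 1.0     # naive (non-bandlimited) saw, illustration only

# First-order high-pass, one pole and one zero:
# y[n] = a * (y[n-1] + x[n] - x[n-1]),  a = 1 / (1 + 2*pi*fc/fs)
a = 1.0 / (1.0 + 2.0 * np.pi * fc / fs)
y = np.empty_like(saw)
prev_x = prev_y = 0.0
for n, x in enumerate(saw):
    prev_y = a * (prev_y + x - prev_x)
    prev_x = x
    y[n] = prev_y

# The linear ramp of each saw period now decays exponentially toward zero,
# which is exactly the kind of curve attributed above to linear filtering.
print(saw[:5], y[:5])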


Now what's the point related to digital signal processing? Well, there 
are a number of signal-correction, harmonic-analysis and adaptation 
methods, and even noise-reduction methods, that work pretty well in the 
digital domain and could benefit from analysis and tuning with standard 
test and synthesizer signals. So I'd like to work a bit on getting a 
very good quality saw wave, put that onto a mixer and digitizer, and then 
use digital measurements to adjust whatever can be adjusted towards 
perfect mixing and pre-amplification (as far as there are errors that 
come to attention in productions). It would also be interesting 
to then feed the perfect signal to the input of an AD converter 
connected to the mixed signal, route it through some form of digital-correction 
audio streaming software to an also-connected DAC, and use a mixer or 
electronic subtraction device to compare the 
Digital to Analog converter's output to the original signal (except for 
a phase shift), to as it were work on that sort of perfection!



Theo V.
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] A little frivolous diversion on the effect of using a delay

2015-07-21 Thread Theo Verelst

robert bristow-johnson wrote:

...
just for the record, none of them content words were written by me.
...

And it's back, in the Prophet-6. I build one of those dual BBD effects
with filters and electronics, with interesting options, with a sine
wave LFO modulated clock to act as a Leslie effect, which was fun,
though without noise suppression very hissy.


so it's delay modulation, which is supposed to be the Leslie?  you
really should have a synchronous, but outa-phase, amplitude modulation
along with the delay modulation, to emulate a Leslie.  and multiple
reflection paths to an auditory scene (a wall or two with reflections)
and a stereo output derived from that.



I was talking about the new P6 having BBDs (or a simulation of them). Not directly 
connected with that, I used BBDs in the early 80s for simulating, among 
other things, the hard-to-do phase shifting of an imitated organ signal, 
with an added compander I designed. Nowhere near the sonic riches of 
good digital simulations from later times, but it didn't sound eeky, or in 
that, to me, dreadful serene "listen to this" messing-with-sampling-errors 
way. I don't know how much error there was in the balanced BBD I used; 
probably there was leakage between parts of the charge-passing stages, and 
forms of unspecified filtering. It was fun to just modulate the clock 
in analog fashion, just as there were also digital delays at the time that would 
let you smoothly modulate the sampling clock. Doing the same properly with a 
digital simulation, *including correction for sampling errors*, isn't 
necessarily easy.



That sure is better even with ... certain 
synthesizers (in this case the Yamaha Motif line) have nice 
oscillators, but it isn't possible to get certain sampled waves to not 
overlay more than 1 sample,

...
uhm, what do you mean?  do you mean that the samples for each voice are
being played out at different sample rates ...


What I mean is that for sound reasons, and possibly for 
intellectual-property-preservation reasons (I don't know), these machines in many cases 
output more than one sample at the same time: even if you take one 
oscillator and play one note, the output is a combined waveform 
consisting of at least (it has been a while since I looked at it) two 
time-shifted versions of the same sample. So the assignment would be 
to take a source which outputs layers of the same sample, possibly (so 
let's presume) at the same frequency, but with the layers shifted in time. You 
switch all modulations, envelopes and filters of a Motif synth off, 
output a string waveform from only one oscillator, and you'd get two 
waves; in the simplest case I would like to get the sample out of some 
"un-add-delay" effect, which was layered and time-shifted at the output 
of the synthesizer, so that out of the delay-remover effect I'd get the 
sample used in the synth.


So essentially, you'd have to estimate the delay time used, and undo the 
adding of the delayed signal. Going to the frequency domain is fine, but it's 
some work and might not give sample-accurate delay removal!
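
If the delay D (in samples) and the mix factor alpha are known, the "un-add" 
step can be done recursively, since the inverse of 1 + alpha*z^-D is 
1/(1 + alpha*z^-D). A minimal sketch of my own (assuming |alpha| < 1 so the 
inverse is stable); estimating D and alpha in the first place is the harder 
part and is not shown.

import numpy as np

def remove_added_delay(y, D, alpha):
    """Invert y[n] = x[n] + alpha * x[n - D] for known integer D and alpha.
    Recursive form: x[n] = y[n] - alpha * x[n - D] (stable for |alpha| < 1)."""
    x = np.zeros_like(y, dtype=float)
    for n in range(len(y)):
        x[n] = y[n] - (alpha * x[n - D] if n >= D else 0.0)
    return x

# Toy check with an assumed delay of 37 samples and alpha = 0.8.
rng = np.random.default_rng(2)
orig = rng.standard_normal(5000)
layered = orig + 0.8 * np.concatenate([np.zeros(37), orig[:-37]])
rec = remove_added_delay(layered, 37, 0.8)
print(np.max(np.abs(rec - orig)))   # ~1e-15: the original layer is recovered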







I realize that's a bit tough and might involve inversions with linear
algebra and some iterations even, but it's a fun subject. I mean so
much going on, but simply extracting a signal in the presence of a
delayed added version of the same signal isn't generally available!



you mean inverting the filter:

  H(s) =  1 + alpha*e^(-s*T)

where T is the delay and alpha is the relative gain of the delayed added
version?  that can be generally done if you know T and alpha.





T.
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


[music-dsp] A little frivolous diversion on the effect of using a delay

2015-07-20 Thread Theo Verelst

Hi all,

No theoretical dumbfounding or deep-searching incantations from me this 
Monday, just something I've thought about and that has long 
been a part of music and of analog and digital productions.


I recall, when I was doing some computer audio experiments in, say, the 
early 80s, that there was this tantalizing effect that outside of special 
tape-based machines hadn't really existed as an effect to use with 
arbitrary audio sources: the digital delay. I recall I was happy when I'd 
used (low-fidelity) AD and DA converters and an early home computer with 
64 kilobytes of memory to achieve an echo effect. It was fun. For 
musical purposes, a bit later I used various digital effect units that 
could optionally act as a delay line and, with a feedback control, as an 
echo unit.
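
For reference, the kind of effect being described, a delay line with a feedback 
control acting as an echo, is only a few lines of code today; a hedged sketch 
with made-up parameter values:

import numpy as np

def echo(x, fs, delay_s=0.3, feedback=0.5, mix=0.5):
    """Delay line with feedback: a classic echo. Parameters are illustrative."""
    D = int(round(delay_s * fs))
    buf = np.zeros(D)                     # circular delay buffer
    out = np.empty_like(x, dtype=float)
    w = 0
    for n, s in enumerate(x):
        delayed = buf[w]
        buf[w] = s + feedback * delayed   # write input plus fed-back echo
        w = (w + 1) % D
        out[n] = (1 - mix) * s + mix * delayed
    return out

# e.g. out = echo(signal, 48000) for any mono float array "signal"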


It seems, however, that with time the charm of the effect wore off. Just 
as nowadays some people occupy themselves with (arguably desirable) 
reverb reduction, it seems that using a delay isn't very cool anymore, 
doesn't necessarily make your audio workstation output prettier waves 
when you play a nice solo, and can even make samples sound uglier when a 
digital delay effect is used on them. Now that everybody with a computer 
and a sound card can do some audio processing, in a way that's a shame.


Some of the early charm must have been that the effect was featured in 
popular music and wasn't easy for a hobbyist to get in the 70s, 
and possibly that the grungy and loose feel of the low bit depth and the 
jittery or modulated AD/DA converter clock signals was only fun while it 
lasted. Maybe instruments aren't designed to sound good with a delay 
effect either, or there's a conflict with an audio system's internal 
processing, and, as a last suggestion, maybe the studio delay effect does a 
little bit more than just delaying, which is what makes it so addictive...


T.
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] A little frivolous diversion on the effect of using a delay

2015-07-20 Thread Theo Verelst

robert bristow-johnson wrote:

On 7/20/15 2:44 PM, padawa...@obiwannabe.co.uk wrote:

Whenever vintage delays come to my mind, I hear the sound of the
bucket brigade
delay lines
And it's back, in the Prophet-6. I built one of those dual BBD effects 
with filters and electronics, with interesting options, with a sine-wave-LFO-modulated 
clock to act as a Leslie effect, which was fun, though 
without noise suppression very hissy.


Things sure are better now, even with a simple software delay and a cheap built-in 
sound card; even at 16 bits, a delay can work fine at CD quality.


My interest at some point, which got me thinking, is that certain 
synthesizers (in this case the Yamaha Motif line) have nice oscillators, 
but it isn't possible to get certain sampled waves to play back as only a 
single sample layer; in certain cases it is probably the same waveform playing 
over two sample-replay partial engines, with a delay in between. So it would 
be a nice idea to be able to record the signal of a single note and 
somehow extract the one sample from the two or three that play at the 
same time, presuming they're just time-shifted.


I realize that's a bit tough and might involve linear-algebra inversions 
and even some iterations, but it's a fun subject. I mean, there's so much 
going on, but simply extracting a signal in the presence of a delayed, 
added version of the same signal isn't generally available!


T.

--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] about entropy encoding

2015-07-16 Thread Theo Verelst
No, no, no, you don't get it, but I suppose only academics should attempt 
a proper universal application of the theory; I won't respond to 
this anymore. I do suggest that if you took your own impulses and 
encoded them with your own algorithms, you would find less interesting and 
far less poetic information decay than you seem to suggest. I mean, the 
diminishing values of your own elasticity coefficients are worrying and 
one-sided.


It's like this: if you make an MPEG encoder, or an alternative lossless audio 
encoder, you want to draw conclusions about what the encoding efficiency, 
or particular basis vectors and their weight factors, are actually going to 
mean. That's not really how most people see the relatively 
simple statistical method you so meticulously (for me, boringly) 
outlined, just as your own quote from the famous computer theoretician 
von Neumann doesn't mean what you think it means in the proper context 
(because there were theoretical issues related to determinism, to the ability 
to store the whole universe in bits stored within itself, and to the difference 
between a computer program running at non-infinite speed on a computer 
and the differential equations that rule physics).


Think about the true givens in any theoretical statistics 
game/theory/program you want to work on: as soon as there are givens in 
statistical interpretations, formulas, programs, etc., there are 
different formulas at play (the P(A∪B) = P(A) + P(B) - P(A∩B) kind of thing, for 
those fortunate enough to have received education at the required level; 
that also holds for continuous and for multi-dimensional probability 
distributions, and then there's the chain rule as well).
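
A tiny numeric check of the two rules named above (my own illustration, with a 
made-up two-dice example): inclusion-exclusion for the union, and the chain 
rule P(A∩B) = P(A)·P(B|A).

from itertools import product
from fractions import Fraction

# Sample space: two fair dice (a made-up example, not from the post).
omega = list(product(range(1, 7), repeat=2))
P = lambda ev: Fraction(sum(1 for w in omega if ev(w)), len(omega))

A = lambda w: w[0] == 6                 # first die shows 6
B = lambda w: w[0] + w[1] >= 10         # sum is at least 10

P_A, P_B = P(A), P(B)
P_AB = P(lambda w: A(w) and B(w))
P_AorB = P(lambda w: A(w) or B(w))
P_B_given_A = Fraction(sum(1 for w in omega if A(w) and B(w)),
                       sum(1 for w in omega if A(w)))

assert P_AorB == P_A + P_B - P_AB       # inclusion-exclusion
assert P_AB == P_A * P_B_given_A        # chain rule
print(P_A, P_B, P_AB, P_AorB)           # 1/6 1/6 1/12 1/4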


For a bit of the historic idea (don't feel obliged to click links, it's 
just some basics):


https://www.khanacademy.org/computing/computer-science/informationtheory/moderninfotheory/v/intro-to-channel-capacity-information-theory


T.V.
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] about entropy encoding

2015-07-13 Thread Theo Verelst
Look, I don't know how far you got in formal academic training in 
statistics, and in this case in the basics of information theory, but 
picture it this way: the information game as thought about in early 
EE work on communication channels and signal processing and analysis 
had to do with messages standing out in a noisy environment and, when 
received digitally, having a certain maximum information content given 
the noise and the sampling frequency.


It's very cute that some programmer wants to do the parallel work of 
creating a good audio codec, but that's not going to be theoretically 
interesting if you don't observe, or clearly don't even know, the *main* 
laws involved (which, for me, in both information theory and statistics, 
were undergrad material).


A _given_-based statistics argument goes like this: under the assumption that your 
channel carries information that isn't close to the white-noise maximum of 
actual information bits per second, you could try to create a theory 
that gives a barrel of gas (probably the most widely known old carrier of 
entropy) a different measure of entropy simply by turning the 
observer around. Not a very theoretically useful measure.


There's an open-source FLAC codec already, so ...



--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Sampling theorem extension

2015-07-13 Thread Theo Verelst

Vadim Zavalishin wrote:

...
How about the equation

u''=-w*u+g

where v is sinc and w is above the sampling frequency?



Aw man

You're now going to argue that your everyday signals are the exact outcome 
of a differential equation, and ON TOP OF THAT are bandwidth limited?


--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Sampling theorem extension

2015-07-12 Thread Theo Verelst

Charles Z Henry wrote:

...

y=conv(u,  f_s*sinc(f_s*t) )



Think about the fact that this is a shifting (convolution) integral with sin(x)/x 
in it, for which there isn't even an easy solution when f_s is really simple.
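
As a sketch of what that convolution looks like when done numerically rather 
than in closed form (my own illustration; the truncation of the sinc sum is an 
assumption, since the ideal sum is infinite):

import numpy as np

fs = 8.0                                  # sample rate (arbitrary units)
n = np.arange(-64, 65)                    # truncated set of sample indices
u = np.cos(2 * np.pi * 1.0 * n / fs)      # samples of a 1 Hz cosine (within band)

def reconstruct(t):
    """y(t) = sum_n u[n] * sinc(fs*t - n): truncated sinc reconstruction."""
    return np.sum(u * np.sinc(fs * t - n))

# Evaluate between the sample points and compare with the true cosine.
for t in np.linspace(-2.0, 2.0, 9):
    print(f"t={t:5.2f}  reconstructed={reconstruct(t): .5f}  true={np.cos(2*np.pi*t): .5f}")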


T.
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Sampling theorem extension

2015-07-06 Thread Theo Verelst
So we're back at the point I started making comments about a while ago. Hmm, I 
knew that.


Let's go over the problem briefly again, and let me give one pointer for 
you guys (and gals?) who feel lost about the perfection many of us 
would probably like.


It isn't that we cannot create frequency-limited signals, and please, not 
again some dumb-heinie-ness about given, long-existing EE theory unless 
you're a decent (and preferably, but not necessarily, mature) 
mathematician about it. For instance, a wavetable can be viewed as a 
sum of sine waves and, for the sake of argument, made to repeat ad 
infinitum, so that essentially it is a bunch of sine waves. The same can also 
be thought to hold for a properly used iFFT, if the repeat period is 
exactly the length of the FFT interval (and no averaging with previous 
FFT frames is considered).
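
To make the sum-of-sines / iFFT equivalence concrete, a minimal sketch (my own, 
with arbitrarily chosen harmonic amplitudes): building one wavetable period from 
an inverse FFT whose bins are exactly the harmonics is the same as summing 
those sine waves directly, and the table is band-limited by construction.

import numpy as np

N = 2048                       # table length = FFT length = one repeat period
harm_amps = {1: 1.0, 2: 0.5, 3: 0.25, 7: 0.1}   # arbitrary chosen harmonics

# Route 1: fill FFT bins and take the inverse FFT (sine phases).
spec = np.zeros(N, dtype=complex)
for k, a in harm_amps.items():
    spec[k] = -0.5j * a * N        # bin value for a*sin(2*pi*k*n/N)
    spec[N - k] = np.conj(spec[k])
table_ifft = np.fft.ifft(spec).real

# Route 2: sum the sine waves directly.
n = np.arange(N)
table_sum = sum(a * np.sin(2 * np.pi * k * n / N) for k, a in harm_amps.items())

print(np.max(np.abs(table_ifft - table_sum)))   # ~1e-15: the two are identical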


The main problem is still that the waves generated by all kinds 
of simulation software will in many cases contain erroneous or 
highly restrictive components, for instance non-shift-invariant 
e-powers (which honestly can give horrendous signal distortion; I mean, 
certain methods of shifting samples can give dBs of difference in signal 
amplitude, can't they?). Even if you somehow (additively or 
subtractively) frequency-limit them, the resulting sample stream would 
still sound a bit wrong on a standard DA converter, so maybe you'd want 
to invert the high-frequency patterns that form the error and correct 
for a specific kind of DAC; I don't know, but it's hard, that's for sure.


Theo V.
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Sampling theorem extension

2015-06-11 Thread Theo Verelst

HI

While it's cute you all followed my lead in thinking about simple 
continuous signals that are bandwidth limited, such that they can be 
used as proper examples for a digitization/synthesis/reconstruction 
discipline, I don't recommend that any of the guys I've read here 
presume they'll make it high up the mathematical pecking order by 
assuming all kinds of previous-century generalities, while being even 
more imprecise about Hilbert-space-related math than they already are at the 
level of undergrad EE sampling theory, standard signal analysis, and 
differential-equation solving. I don't even think there's much chance 
you'll get lucky enough to score a solution with an empty domain or 
something funny like that, and all the terminology I've heard is material 
that has been known to many scientists for a very long time.


So, let's talk about the engineering-type issue at hand afresh, from 
the correct starting point again, shall we?


There are three main operations involved in the sampling theory 
at hand: the digitization, where the bandwidth limitation should be 
effective; the digital processing, which, whether you like it or not, is 
often very far from perfect (really, it is, no matter how much you insist 
on the fantasy of owning a perfect mathematical filter in the sense of a 
parallel with idealized analog filters); and the reconstruction, which, 
in the absence of processing, can be guaranteed to yield back the input 
signal when it is properly bandwidth limited.


Of course you may want to work, or hobby, around the subject of software 
synthesis (it's a mystery to me how little, or how imperfect, leadership or 
professoring has taken place concerning these subjects; I suppose many people 
very much want to prove themselves in the popular subject and don't mind 
moonlighting), or desire some pietistic occupation in 
software synthesis without going through the proper motions of 
understanding the academic EE basics; it's a free world, at least in the 
West, so fine.


I repeat the main error at hand here: it's important to have bandwidth-limited 
synthesis forms, but it is equally important to *quantify* 
(which is harder than merely qualifying some of them) the errors taking 
place in digital filtering (like the shifting-versus-reshaped-e-powers 
problem I mentioned) and the errors in the processing, and to 
understand that using a digital simulation has its own range of errors, 
which won't be solved by reversing the problems.


Finally: *IF* you have a perfect mathematical signal, say as a function of 
time, *AND* you can make, or reasonably speaking guarantee, its frequency 
content to be limited to half the sampling rate you're going to apply 
(remember, when *I* pointed that problem out a while ago I wasn't exactly 
welcomed by some...), THEN you can start to do a perfect 
reconstruction. AND IF you somehow can make a perfect signal, or a 
perfectly reconstructed signal of sufficient accuracy, THEN you could try 
to get EEs'/musicians' opinions about inverting and partially preventing 
the errors in common DAC reconstruction filtering. Why is a 
discussion needed about that? Well, some of the modernistic boys and 
girls like to be loud and don't think about the fact that these subjects can 
turn OK audio into audio that is dangerous for the hearing. Important subject.


So, all the math and methodology I've seen from this little club of people 
who seem to think they can single-handedly deal with this important part 
of EE history, ignoring the decisions apparently made about these 
subjects possibly before they (and possibly I) were born, suggests to me not a 
single workable mathematical line or truthful solution strategy. They start 
with the basics and then goof off into some strange faith in miraculous 
mathematics to solve complexities that are inherent in the problem.


Now a little note about aliasing and the creation of synthesized 
signals: I can understand the desire of some people to run some 
virtual oscillators, connect some digital envelopes + VCAs, do some good-sounding 
filtering, and then, within a limited latency, arrive at a 
decent signal to send to a normal or extra-quality (but standard) 
Digital to Analog Converter. It's tempting to choose a few shape-enforcing 
operations, squash all signals in some of the ways that 
can be imagined, and call it a day. That's not the quality of accurate 
virtual samples I'm talking about, and it will probably soon sound tedious and 
repetitive. An even worse idea is to mash such an idea up with the signal 
generators and filters, without concern for sample shifting, filtering 
errors, generator waveform reconstruction issues, and so on. That's not 
going to be my dream virtual anything. Just saying.


So it's tempting to do tricks in digital audio processing; some of the 
aliasing guessing and output signal mangling (to stay inside a small 
range of possible signals that don't sound all too bad and, importantly, 
may well be OK for loud human consumption) may improve the impression of 

Re: [music-dsp] Sampling theorem extension

2015-06-10 Thread Theo Verelst

robert bristow-johnson wrote:

On 6/9/15 4:32 AM, Vadim Zavalishin wrote:

Creating a new thread, to avoid completely hijacking Theo's thread.


it's a good idea.



I agree that there was the possibility of an unstable resolution of the 
offense, but I wasn't aware people were afraid of that concept.


Look, it's a matter of decent interactions between engineers and, more so, of 
decent scientific training, scientific method, and, let's say, respect after 
failure. Moreover, for a lot of people who are either glad not to have 
to delve much into the mathematics, or who didn't get the chance to access 
proper higher education in the various fields, it may be that the 
intellectual robbery that could be imminent isn't clear, and it is certainly 
disgusting.


I was glad to have been informed, as an undergrad student, of the 
underpinnings I've put forward here, and I maintain there are a few main 
things about sampling that I think some people ought to know, and, as it 
so happens, now do know a bit about. It's decent for academic engineers 
to follow a path where first they score enough points in the undergrad 
realm, then are taught decent mutual respect and modes of communication 
about engineering subjects, then hopefully learn how to master a subject 
in science, and then they're off to be a decent, usually on the cool 
side, person with the ability to get into engineering problems at the 
appropriate level and deal with the scientific sides of their work.


I've been around a (at that time) top European university long enough to 
know why that is so, and what's wrong with all kinds of funny and 
slightly interesting nerdy students trying to work themselves into a 
position of power, and I won't condone it in general, if I can in all 
decency help it. So when people work on a subject and get corrected at 
undergrad level (the same level many have passed through, are 
satisfied with and usually successful at, and where the subjects taught are 
centuries old and tried), it's not proper to just happily go on and act 
as if a personal and professional sense of honor can be seconded to some 
end that will justify all inter-engineer injustice, and as if, in the end, 
social interactions with all people don't matter enough for being a solid and 
recognizable person.


Anyhow, as a summary once more: the only proper way to sample a signal 
(with the obvious conditions, luckily reiterated regularly) and to process 
it or play it back properly is based on a theory that cannot be 
internally reversed or turned into a local signal-processing idea while 
maintaining general applicability. And I know there are some signal 
precautions and some modes of processing possible that, IMO, were already 
being thought about in the 60s, and maybe before I was born.


Unfortunately a lot of software and DSP code is just as limited as it is, 
and that's not going to change if enthusiastic and clearly extremely 
immature mathematicians try out new theories or engage in 
opportunistic word games. It just is no different, even if I'd want it 
to be.


So once more: it doesn't much matter what you do in sample space; if I 
don't see sinc-function reconstruction, preferably with a quantification 
of the errors involved, I'm not going to ratify the ideas as 
scientifically proper enough to make a strong theoretical point with, let 
alone make history. Maybe I am actually sorry, as a person, that there are so 
many errors in the often-promoted-as-perfect digital signal processing 
domain, but that doesn't change anything!


So, about that idea (not really mine) of thinking about the effect of the 
DAC's interpolation and smoothing filters: that's real, but you still 
need the *properly reconstructed signal* first, and THEN, on top of that, 
make sure the signal wurst-ing that goes on in the DAC comes out the way 
you want. Terribly complicated as that seems, to me it's rather basic.


T.

--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


[music-dsp] Did anybody here think about signal integrity

2015-06-03 Thread Theo Verelst

Hi,

Playing with analog and digital processing, I came to the conclusion that I'd 
like to contemplate certain digital signal processing 
considerations which, I'm sure, were in the minds of pioneering people 
quite a while ago, concerning, let's say, how accurate, theoretically and 
practically, all kinds of basic DSP subjects really are.


For instance, I care about what happens when a perfect sine wave is 
either digitized or turned, mathematically and with an accurate computer program, 
into a sequence of signal samples. When a close-to-perfect sample 
(in the sense of a list of signal samples) gets played over a Digital to 
Analog Converter, how perfect is the analog signal coming out of there? 
And if it isn't all perfect, where are the errors?


As a very crude thought example, suppose a square wave oscillator, as 
in a synthesizer or an electronic circuit test generator, creates a 
near-perfect square wave, and it is digitized, or an attempt is made 
in software to somehow turn the two voltages of the square wave into 
samples.


Maybe a more reasonable idea is to take into account what a DAC will do 
with the signal represented in samples taken as music, 
speech, a musical instrument's tones, or sound effects. For instance, 
what do the digital reconstruction window and the built-in 
oversampling make of an exponential curve (as part of an envelope 
could easily be), given the (usually FIR) filter length?


In that context, you could wonder what happens if we shift a given 
exponential signal (or signal component) by half a sample. Add to that the 
consideration that a function a*exp(b*x+c) defines a unique function for 
each a, b and c.
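
One way to look at that question numerically (my own hedged sketch, with 
assumed parameter values, not from the post): band-limited interpolation of a 
sampled decaying exponential at the half-sample points, compared against the 
exact shifted exponential a*exp(b*(x+0.5)+c).

import numpy as np

a, b, c = 1.0, -0.05, 0.0           # assumed exponential parameters (decaying)
N = 256
n = np.arange(N)
x = a * np.exp(b * n + c)           # samples of the exponential

def shift_half_sample(x):
    """Band-limited (truncated-sinc) evaluation at n + 0.5 for each n."""
    k = np.arange(len(x))
    return np.array([np.sum(x * np.sinc((m + 0.5) - k)) for m in k])

shifted_bl = shift_half_sample(x)               # what sinc interpolation gives
shifted_exact = a * np.exp(b * (n + 0.5) + c)   # the exact shifted exponential

err = shifted_bl - shifted_exact
# The difference is nonzero: a sampled-and-sinc-shifted exponential is not
# exactly the analytically shifted exponential (truncation and the one-sided
# start both contribute), which is the kind of error the post asks about.
print(np.max(np.abs(err)))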


If anyone here thinks about and/or works on these kinds of subjects, I'd like 
to hear from you. (I think it's an interesting subject, so I'm serious about it.)


T. Verelst

--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Comb filter decay wrt. feedback

2015-05-13 Thread Theo Verelst



A comb filter has a certain purpose. Do you know it?

Not everybody is good at electronics (where comb filters have a clearer 
meaning) or physics, but it does pay to take a little time to at least 
try to understand the difference between freewheeling thinking about some 
glossy subject (DSP) and what actually goes on in the computations, 
regardless of where they were electronically or digitally published.


The comb filter's Z-domain equation doesn't tell you what's right or wrong with a 
signal getting filtered in the digital domain. It's usually not easy to get a 
correct idea of frequency into the heads of DSP designers, because 
they tend to think that their doings are the norm, and that all the 
concepts (like frequency, filtering, phase shift, linearity, 
etc.) will follow them along their thought journey. Unfortunately 
nature, and decent electronics, are natural science that will not yield 
to all kinds of DSP thoughts at all, so reverb and its components are 
best described and computed using proper ideas.
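
Since the thread subject is comb-filter decay versus feedback, a hedged 
back-of-the-envelope sketch (my own, with assumed values, not from the post) of 
the usual relation: for a feedback comb y[n] = x[n] + g*y[n-D], each trip 
through the loop loses 20*log10(|g|) dB over D samples, so the 60 dB decay time 
follows directly.

import math

def comb_t60(delay_samples, feedback, fs):
    """Approximate RT60 of a feedback comb y[n] = x[n] + g*y[n-D]."""
    loop_loss_db = -20.0 * math.log10(abs(feedback))  # dB lost per trip of D samples
    trips_to_60db = 60.0 / loop_loss_db
    return trips_to_60db * delay_samples / fs          # seconds

# Example with assumed values: D = 1500 samples at 48 kHz, g = 0.85
print(comb_t60(1500, 0.85, 48000.0))   # ~1.3 s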


The problem with the tuning of any filter, FIR or IIR, state-varying 
or linear, is that you need to know what those samples in your program 
actually stand for. In classic sampling theory, samples are equidistant 
samples of a continuous signal (like reverb creating a voltage in 
a microphone) that can be reconstructed to re-form the original 
continuous signal, under the condition that the signal was bandwidth 
limited to at most half the sampling frequency. Boringly familiar, I'm 
aware, and then the actual reconstruction function that RBJ 
referred to is an infinite sum of (sinc) functions, which can also tell you the 
actual continuous-domain effect of every DSP filter or whatever, in 
perfect-sample thinking space.


T.V.

--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp

