Re: [music-dsp] Hosting playback module for samples

2014-02-27 Thread Ross Bencina
On 28/02/2014 2:06 PM, Michael Gogins wrote: I think the VSTHost code could be adapted. It is possible to mix managed C++/CLI and unmanaged standard C++ code in a single binary. I think this could be used to provide a .NET wrapper for the VSTHost classes that C# could use. I agree. Maybe I mis

Re: [music-dsp] Hosting playback module for samples

2014-02-27 Thread Michael Gogins
Sorry for the misunderstanding. I think the VSTHost code could be adapted. It is possible to mix managed C++/CLI and unmanaged standard C++ code in a single binary. I think this could be used to provide a .NET wrapper for the VSTHost classes that C# could use. Regards, Mike

Re: [music-dsp] Hosting playback module for samples

2014-02-27 Thread Ross Bencina
On 28/02/2014 12:16 AM, Michael Gogins wrote: For straight sample playback, there is the C library FluidSynth; you can use it via PInvoke. FluidSynth plays SoundFonts, which are widely available, and there are tools for making your own SoundFonts from sample recordings. For more sophisticated synthesis,

Re: [music-dsp] Best way to do sine hard sync?

2014-02-27 Thread robert bristow-johnson
On 2/27/14 6:33 PM, Theo Verelst wrote: Frequency modulation, which is what happens when the "to be synced with" signal changes from one frequency to another, is theoretically not limited in bandwidth. The issue is that, however you try to model it, the result of a hard-sync oscillator is stil
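[Editor's note: the thread above concerns hard-syncing a sine oscillator. As a minimal illustration of why the naive approach is not bandlimited, here is a hypothetical sketch (not code from the thread): the slave oscillator's phase is reset whenever the master's phase wraps, and that abrupt reset is the discontinuity that causes aliasing.]

```python
import math

def hard_sync_sine(master_freq, slave_freq, sr, n):
    """Naive hard-synced sine: reset the slave's phase whenever the
    master oscillator's phase wraps. The abrupt reset introduces a
    waveform discontinuity, which is why the result aliases."""
    out = []
    master_phase = 0.0
    slave_phase = 0.0
    for _ in range(n):
        out.append(math.sin(2.0 * math.pi * slave_phase))
        master_phase += master_freq / sr
        slave_phase += slave_freq / sr
        if master_phase >= 1.0:    # master wrapped: hard sync
            master_phase -= 1.0
            slave_phase = 0.0      # phase reset = discontinuity
    return out

samples = hard_sync_sine(100.0, 137.0, 8000, 400)
```

A bandlimited implementation would smooth the reset (e.g. with BLEP-style correction or by carrying the master's fractional overshoot into the slave's new phase); this sketch deliberately omits that to show the problem the thread is discussing.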

[music-dsp] Introductory literature for loudspeaker predistortion

2014-02-27 Thread Jerry
Does anyone know the literature for loudspeaker predistortion--literature appropriate for senior-year electrical engineering students? (That's not me.) I suppose this would rule out fancy stuff like Volterra series inversion and use of psychoacoustic metrics. How dependent on the signal is a no
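[Editor's note: as an entry-level illustration of the question above, here is a hypothetical sketch (not from the thread) of memoryless polynomial predistortion: a toy loudspeaker nonlinearity and its first-order inverse, which cancels the quadratic distortion term to within O(a^2). All names and the model itself are assumptions for illustration only.]

```python
def speaker_model(x, a=0.1):
    """Toy memoryless loudspeaker nonlinearity (hypothetical):
    linear term plus a weak quadratic distortion term."""
    return x + a * x * x

def predistort(x, a=0.1):
    """First-order inverse of speaker_model: pre-subtracting the
    quadratic term cancels it to within O(a^2)."""
    return x - a * x * x

x = 0.5
plain = speaker_model(x)                 # distorted output
corrected = speaker_model(predistort(x)) # predistorted, then distorted
# |corrected - x| is much smaller than |plain - x|
```

Real driver predistortion is signal-dependent and has memory (voice-coil heating, suspension nonlinearity), which is exactly why the thread asks about the senior-undergraduate-level literature short of Volterra inversion.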

Re: [music-dsp] Best way to do sine hard sync?

2014-02-27 Thread Theo Verelst
Thinking a bit about the theoretical generalities involved in the problem, it might be a good idea to imagine a few of the main "rules" in the sampling domain, with the problem of limited bandwidth. To know the exact phase of a sine wave in the sample domain, it is at least theoretically poss

Re: [music-dsp] Hosting playback module for samples

2014-02-27 Thread Michael Gogins
For straight sample playback, there is the C library FluidSynth; you can use it via PInvoke. FluidSynth plays SoundFonts, which are widely available, and there are tools for making your own SoundFonts from sample recordings. For more sophisticated synthesis, there is the C library Csound; you can use it via PInvoke
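[Editor's note: FluidSynth's actual API is not shown in the thread. As an illustration of what "straight sample playback" means under the hood, here is a hypothetical sketch of the core operation a SoundFont-style player performs: reading a recorded sample table at a variable rate with linear interpolation to pitch-shift it. All names are assumptions, not FluidSynth code.]

```python
def play_sample(table, rate_ratio, n_out):
    """Variable-rate sample playback with linear interpolation.
    rate_ratio > 1 pitches the recording up, < 1 pitches it down.
    Stops early if the read position runs off the end of the table."""
    out = []
    pos = 0.0
    for _ in range(n_out):
        i = int(pos)
        if i + 1 >= len(table):
            break  # ran out of sample data (no looping in this sketch)
        frac = pos - i
        out.append(table[i] * (1.0 - frac) + table[i + 1] * frac)
        pos += rate_ratio
    return out

# Half-speed playback interpolates midpoints between samples:
out = play_sample([0.0, 1.0, 2.0, 3.0, 4.0], 0.5, 8)
```

A real sampler adds loop points, envelopes, and higher-order interpolation; libraries like FluidSynth handle all of that, which is why the thread recommends wrapping them rather than reimplementing.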

Re: [music-dsp] Hosting playback module for samples

2014-02-27 Thread Danny
If I understand correctly, JUCE would be the solution. You say you already have working C++ code, so you could use that and add an AudioProcessor from JUCE to do your playback. > On 27 Feb 2014 at 06:36, Ross Bencina wrote the following: > > Hello Mark, > >> On 27/02/2014 3:52

Re: [music-dsp] Mastering correction by FFT-based filtering followed by 1 octave or 1/10 octave equalizer

2014-02-27 Thread Peter S
Okay, so in a nutshell you are doing de-mastering and re-mastering on a track (if I understand correctly). It's still not clear what the conclusion from all this is. - Peter -- dupswapdrop -- the music-dsp mailing list and website: subscription info, FAQ, source code archive, list archive, book

Re: [music-dsp] Mastering correction by FFT-based filtering followed by 1 octave or 1/10 octave equalizer

2014-02-27 Thread Theo Verelst
The big graphs of signal processing are for things like mid-frequency averaging, mid-low subbands tuning, sample spoiling removal, low-frequency decompression, reverse CD equalization, and more than a few other mastering effect corrections. If that doesn't mean anything to you, fine, it has t

Re: [music-dsp] Mastering correction by FFT-based filtering followed by 1 octave or 1/10 octave equalizer

2014-02-27 Thread Peter S
I checked the video again, and it seems like you have some signal (music), then you process that through some modular graph processor (maybe something FFT-based?), plus (?) some hardware processor(s) (reverb?), and then the two signals differ in the 2-4 kHz range. I'm not sure what that's supposed to mean
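[Editor's note: the observation above, that two versions of a signal differ in the 2-4 kHz range, can be quantified by comparing band energy. Here is a hypothetical sketch (not from the thread) using a naive DFT over a short test signal; a real comparison would use an FFT over windowed frames.]

```python
import math

def band_energy(x, sr, f_lo, f_hi):
    """Sum of squared DFT magnitudes for bins whose center frequency
    falls in [f_lo, f_hi] Hz. O(n^2) naive DFT: fine for a short
    illustrative signal, use an FFT in practice."""
    n = len(x)
    energy = 0.0
    for k in range(n // 2):
        f = k * sr / n
        if f_lo <= f <= f_hi:
            re = sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = sum(-x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            energy += re * re + im * im
    return energy

sr, n = 8000, 256
sig = [math.sin(2 * math.pi * 3000 * t / sr) for t in range(n)]  # 3 kHz tone
att = [0.5 * s for s in sig]  # same tone attenuated by 6 dB
```

Halving the amplitude quarters the band energy, so comparing `band_energy` over 2-4 kHz for the two processed versions would make the difference Peter describes concrete.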