Very interesting reads. Thanks!
Here's some papers specifically geared toward DSP processors that
support the use of fixed point:
Superior Audio Requires Fixed-Point DSP
http://www.rane.com/note153.html
48-Bit Integer Processing Beats 32-Bit Floating-Point for Professional Audio
On Mon, 2005-10-17 at 17:34 +0200, Maarten de Boer wrote:
Very interesting reads. Thanks!
Here's some papers specifically geared toward DSP processors that
support the use of fixed point:
Superior Audio Requires Fixed-Point DSP
http://www.rane.com/note153.html
what a lame author!
On Mon, Oct 17, 2005 at 02:18:52PM -0400, Paul Davis wrote:
Here's some papers specifically geared toward DSP processors that
support the use of fixed point:
Superior Audio Requires Fixed-Point DSP
http://www.rane.com/note153.html
what a lame author!
I'd agree, but he has
On Fri, 2005-10-14 at 21:00 +0200, [EMAIL PROTECTED] wrote:
Then I also fail to see why it's bad for overflows to occur in fixed point.
Those signals (above 0dB FS) would clip on the hardware anyway, and are
expected to do so, since they were either badly recorded or amplified.
In a
On Fri, 2005-10-14 at 10:34 +0100, Steve Harris wrote:
On Fri, Oct 14, 2005 at 11:37:30AM +0300, Hannu Savolainen wrote:
Floating point in turn has 24 bits of precision which is enough for audio.
The exponent part takes care of scaling while the mantissa stays always
normalized to the
On Fri, 2005-10-14 at 22:57 +0200, fons adriaensen wrote:
[...] Worse, unless you are using some specialised DSP chips that can do
clipping in the ALU, implementing clipping in fixed point software usually
leads
to an immense waste of processing power.
The hardware need not be that
Hello!
I am in the early planning stage of an audio processing application and I
have come to the point of making the choice between floating point or fixed
point (signed 2's complement) processing.
What do you think is better, and why? Why does jack use floating point? Why
it depends of course
On Fri, 14 Oct 2005 [EMAIL PROTECTED] wrote:
I am in the early planning stage of an audio processing application and I
have come to the point of making the choice between floating point or fixed
point (signed 2's complement) processing.
What do you think is better, and why? Why does jack use
On Fri, Oct 14, 2005 at 11:37:30AM +0300, Hannu Savolainen wrote:
Floating point in turn has 24 bits of precision which is enough for audio.
The exponent part takes care of scaling while the mantissa stays always
normalized to the full 24 bit precision. In fact 64 bit precision is used
for
[Hannu Savolainen]
This is the main reason why many audio applications use floating point
internally. However many (or most) applications do just simple
computations which are easy to do in fixed point too.
Hint: Scale the input samples to the -1.0..1.0 range regardless of the
input precision
On Fri, Oct 14, 2005 at 07:44:16AM +0200, [EMAIL PROTECTED] wrote:
I am in the early planning stage of an audio processing application and I
have come to the point of making the choice between floating point or fixed
point (signed 2's complement) processing.
What do you think is better, and
For example if you divide a sample by a large enough value the result
will be zero (total loss of precision). Equally well if you multiply it by
a large number there will be an overflow. This means that a programmer
using fixed point needs to keep thinking about precision and overflows
all
On Fri, 2005-10-14 at 21:00 +0200, [EMAIL PROTECTED] wrote:
[...] If numbers
from binary 0.1 to 1.0 are represented using 24 bits (sign bit + mantissa, I
think the implicit 1 does not count)
You could represent numbers to 1 bit of precision using a zero bit
mantissa (ie exponent only) format.
On Fri, Oct 14, 2005 at 09:00:37PM +0200, [EMAIL PROTECTED] wrote:
First of all, thank you all very much for your comments, the picture is much
clearer now.
I just don't fully understand the floating-point precision part. If numbers
from binary 0.1 to 1.0 are represented using 24 bits (sign
On Fri, Oct 14, 2005 at 09:00:37PM +0200, [EMAIL PROTECTED] wrote:
I just don't fully understand the floating-point precision part. If numbers
from binary 0.1 to 1.0 are represented using 24 bits (sign bit + mantissa, I
think the implicit 1 does not count),
It does.
and the numbers from
fons adriaensen wrote:
The bottom line is really quite simple: if your app is to run on a PC, use
floats.
There are some very good reasons why floats were chosen as JACK's default audio
type.
I agree with that conclusion. Just don't forget about the denormals
issue on Intel (has this been
On 14-Oct-05, at 1:44 AM, [EMAIL PROTECTED] wrote:
I am in the early planning stage of an audio processing application
and I
have come to the point of making the choice between floating point
or fixed
point (signed 2's complement) processing.
What do you think is better, and why? Why does
I am in the early planning stage of an audio processing application and I
have come to the point of making the choice between floating point or fixed
point (signed 2's complement) processing.
What do you think is better, and why? Why does jack use floating point? Why
does AES use fixed point in
http://reduz.dyndns.org/resamp_fixp.c // fixed point version
http://reduz.dyndns.org/resamp_float.c // floating point version, portable
http://reduz.dyndns.org/resamp_float_fistl.c // X86 VERSION ONLY!! Uses fistl
instruction
Any code which uses the X86 fistl instruction can be
(resent, previously sent from non member account)
First: Is the code correct?
pos=lrintf(counter);
Shouldn't pos be lower than counter and pos+1 higher? Like this:
pos <= counter < pos+1
pos = (int)(counter);  <= I use this in my tests.
Second: Why the 'volatile'?
You
On Fri, Oct 17, 2003 at 02:34:16 -0300, Juan Linietsky wrote:
There are 2 ways, if you check the benchmarks, the ones that say float
use a cast, and the ones that say fistl use that instruction (which is
faster in most cases). I think what surprised me the most about all this
is the slow
On Fri, Oct 17, 2003 at 05:00:52 +0200, Robert Jonsson wrote:
I've only tested with GCC, a few tests with some different parameters. My
conclusion is that floating point performance is extremely sensitive to
architecture and the optimizations you enable for the architecture.
Integer on the
OK, some new code.. Attached are some of my experiments.
- Code that uses split registers to process two samples per loop.
- Code for processing interleaved stereo (equally fast to mono version)
TODO:
- Implement resample code in libDSP in e3dnow asm
--
Jussi Laako [EMAIL PROTECTED]
Ohh, this is very interesting... Thanks Juan for the code snippets!
I did some short tests with extremely varying and surprising results. But... I
got no time right now, will proceed with more tests tonight.
As much as I don't like it, I have to support the claim that integer is faster
as of
Some MORE results on SPARC:
4x450MHz some SPARCv9
(SUNW,Ultra-80)
Sun's native compiler
options: -fast -xarch=v9
resamp_fixp - user 10.24
resamp_float - user 13.60
horsh
On Thu, 16 Oct 2003 20:09:13 -0300
Juan Linietsky [EMAIL PROTECTED] wrote:
The code is available at:
http://reduz.dyndns.org/resamp_fixp.c // fixed point version
http://reduz.dyndns.org/resamp_float.c // floating point version, portable
http://reduz.dyndns.org/resamp_float_fistl.c // X86
For info - fixed point still wins!
vendor: AuthenticAMD
model name: AMD Athlon(TM) XP 2000+
bogomips: 3329.22
resamp_fixp: 4.560s
resamp_float: 10.270s
resamp_float_fistl: 13.210s # due to double counter?
resamp_float_lrint: 12.460s # double counter
resamp_float_lrintf: 6.820s # float counter
OK, it would help if I used the compiler options!
-O3 -ffast-math -march=athlon-xp
fixp: 1.720s
float: 3.250s
fistl: 3.420s
-O3 -ffast-math -march=athlon-xp -lm
lrint: 3.370s (double counter)
lrintf: 2.670s (float counter)
-
Myk
Hi!
This was indeed interesting, I set out to prove that floating point would be
the better choice from a performance perspective, but I can't do that,
at least not judging from the data from my systems!
I've only tested with GCC, a few tests with some different parameters. My
conclusion is
in most typical systems, the conversion between integer and FP formats
happens once per
On Friday 17 October 2003 12:20, Paul Davis wrote:
if you're writing software where you do this conversion more than once
per interrupt, i think it probably needs a redesign. if the cost of
the conversion is an appreciable fraction of the over
cycles/interrupt, then and only then does it seem
When you need to get the actual sample position (and/or the surrounding
ones if you are interpolating), a fractional -> integer conversion is needed.
I've seen this problem happening in existing software, which greatly limits
the CPU availability. Some examples of this are fluidsynth, csound and
On Fri, 2003-10-17 at 02:09, Juan Linietsky wrote:
Benno Sennoner and I were discussing today on IRC about
the usual fixed point vs floating point (regarding to some resampling code)
Attached is some quick-and-dirty hack written tired and drunk at friday
night...
--
Jussi Laako [EMAIL
Benno Sennoner and I were discussing today on IRC about
the usual fixed point vs floating point (regarding to some resampling code)
We developed some tests and ran them on a variety of computers.
It would be interesting if ladders here could run them on different computers
(and specially non x86,
On Thu, Oct 16, 2003 at 08:09:13 -0300, Juan Linietsky wrote:
My own conclusion about the subject is that the float -> int conversion is
STILL the biggest bottleneck on most common architectures. And until this is
sorted out, fixed point is still the best solution for some specific cases,
On Friday 17 October 2003 01:20, Steve Harris wrote:
On Thu, Oct 16, 2003 at 08:09:13 -0300, Juan Linietsky wrote:
My own conclusion about the subject is that the float -> int conversion
is STILL the biggest bottleneck on most common architectures. And until
this is sorted out, fixed point