On Fri, Sep 29, 2017 at 11:28:42AM +0200, Otto Moerbeek wrote:
> On Fri, Sep 29, 2017 at 11:16:24AM +0200, Alexandre Ratchov wrote:
> 
> > On Wed, Sep 27, 2017 at 03:09:48PM +0200, Joerg Sonnenberger wrote:
> > > On Wed, Sep 27, 2017 at 08:40:26AM +0200, Alexandre Ratchov wrote:
> > > > Even on modern amd64s, integer arithmetic and bitwise operations
> > > > are faster (and in many cases more precise) than their floating
> > > > point equivalents.
> > > 
> > > Can you actually substantiate this claim?
> > 
> > [OT]
> > 
> > I'm working on typical signal processing code that does mostly
> > multiplications, additions and table lookups. For instance, when we
> > compute the following with floats:
> > 
> >     float a, b, x, y;
> >     y = a * x + b;
> > 
> > we get 24 bits of precision, as floats have a 24-bit mantissa. If,
> > with proper typedefs and #ifdefery, we replace the floats with ints
> > representing the same numbers as 1:4:27 fixed-points (1 sign bit,
> > 4 integer bits, 27 fraction bits):
> > 
> >     int a, b, x, y;
> >     y = ((long long)a * x >> 27) + b;
> > 
> > we get 28 bits of precision, i.e. 4 extra bits compared to floats.
> > 
> > I just did a quick test on an i5 with gcc 4.2. The fixed-point
> > version consumes 10% to 50% less CPU, depending on the algorithm. To
> > test, I just run the algorithm for 10 seconds and measure the
> > throughput.
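
A measurement of that kind can be reproduced with a loop as simple as
the sketch below; the buffer size and coefficients here are arbitrary,
and the real tests obviously run the full algorithms on actual signal
data:

    #include <stdio.h>
    #include <time.h>

    #define BUFLEN	4096

    static int buf[BUFLEN];

    /* one pass of y = a * x + b in 1:4:27 fixed-point over the buffer */
    static void
    process(int a, int b)
    {
            int i;

            for (i = 0; i < BUFLEN; i++)
                    buf[i] = (int)(((long long)a * buf[i]) >> 27) + b;
    }

    int
    main(void)
    {
            time_t end = time(NULL) + 10;	/* run for about 10 seconds */
            unsigned long long npasses = 0;

            while (time(NULL) < end) {
                    process(1 << 26, 1 << 20);	/* a = 0.5, b = 2^-7 */
                    npasses++;
            }
            printf("%llu passes in 10 seconds\n", npasses);
            return 0;
    }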
> > 
> > In the past, I've tested on other CPUs and with other compilers, and
> > the floating point version of this code was never faster than the
> > fixed-point one.
> 
> Add to that the fact that this is platform-independent code and that
> we run, or at least ran, on platforms with softfloat, where the speed
> difference is significant.
> 

This was my initial motivation; the zaurus had no FPU.
