Carsten Haitzler (The Rasterman) wrote:
> On Thu, 14 Aug 2008 21:54:34 -0700 Russell Sears <[EMAIL PROTECTED]>
> babbled:
> 
>> Carsten Haitzler (The Rasterman) wrote:
>> ....
>>> though we need to accept that we need to move beyond SDR into DDR/DDR2 ram
>>> and higher clockrates anyway - we need more performance to do the things
>>> people want, we just need to do it with the right generation of SOC that
>>> has reigned these power requirements in a bit... and well - maybe accept we
>>> need a meatier battery :)
>>>
>> What about an FPU?  It probably wouldn't eat much power, and would help 
>> a *lot* for audio applications, since stuff wouldn't need to be 
>> rewritten in fixed point arithmetic.
> 
> modern arms have fpu's - FR doesn't but the ones i mentioned (omap3xxx and
> snapdragon) i am certain have fpu's.
> 
> nb. u dont need fp - it's just "lazy programming" to have used floating point
> math for the audio - it can be trivially done in integer space. with 16bit
> input you can easily do all your work in 32bit scratch-pad registers (no need
> for 64bit math). as such this is NOT a reason for an fpu. for 3d geometry and
> so on it definitely makes sense though. and with the more modern systems come
> graphics units capable of something vaguely decent graphics-wise :) so an fpu
> makes sense there. :) 

Yes, but there's tons of legacy code that assumes an FPU (LADSPA plugins, for example).

Also, after some naive conversions from floating-point to fixed-point code, 
I hit a situation where 32-bit arithmetic doesn't quite cut it.

Some filters want to multiply two floats together.  You can do this to 
simulate floating point math:

float f = ...;  // 0 <= f < 1.
int a, c;

const int b = (int)(f * 65536.0);  // f as a 16.16 fixed-point coefficient

while (1) {
   ... // update a
   c = a * b;   // 32-bit product of sample and 16.16 coefficient
   c >>= 16;    // shift the fractional bits back out
   ... // do more math
}

Here, a is signal data (perhaps dependent on past data) and b is a 
filter parameter.  Some formulas derive b from user-tunable parameters.

In some corner cases, f is small (on the order of 1/2^16), which leads 
to rounding error when assigning the filter parameter b.
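
To put made-up numbers on it: with f = 0.00001, b = (int)(0.00001 * 65536.0) 
comes out to 0, so the coefficient vanishes entirely; with f = 0.00004, b 
comes out to 2 instead of about 2.62, an error of over 20%.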

This distorts the filter setup and leads to artifacts, feedback loops, 
etc.  Using 64-bit math better approximates the filter, and helps these 
corner cases a bit.  There might also be cases where you'd want to carry 
32 bits of precision throughout the calculation, though I haven't hit 
one yet.

The good news is that the performance hit from 64-bit math (and a 48-bit 
filter parameter) is measurable, but tolerable on the FR.
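
In case it's useful, the 64-bit version looks roughly like this (just a 
sketch of the idea, not my exact code; the helper names are made up, and 
it assumes the sample a stays roughly within 16-bit range so the product 
fits in 64 bits):

#include <stdint.h>

// Convert a coefficient in [0, 1) to 16.48 fixed point.
static int64_t to_fix48(float f)
{
   return (int64_t)(f * 281474976710656.0);  // f * 2^48
}

// Multiply a sample by a 16.48 coefficient, doing the product in a
// 64-bit scratch value and shifting the fractional bits back out.
static int32_t mul_fix48(int32_t a, int64_t b)
{
   return (int32_t)(((int64_t)a * b) >> 48);
}

Keeping 48 fractional bits means even very small coefficients survive the 
conversion, at the cost of one 64-bit multiply per sample.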

> 
>> On a related note (and perhaps on the wrong end of an NDA), any idea 
>> what the specs are on the glamo's built in openrisc processors?
> 
> low to useless. it's slow. also not even under nda is there any info on just
> how to program it. it's more of a control cpu - designed for keeping the
> internal bits of glamo's silicon in line than actually doing any heavy lifting
> of its own.

Thanks for the info.  That makes sense...

-Rusty

