On 6/21/11 6:14 AM, dk...@arcor.de wrote:
There is an excellent article about cordic on
http://www.andraka.com/files/crdcsrvy.pdf
Yes... good explanation.
So, in the general case, where you might want to rotate by an arbitrary
angle at each time step, and the angle doesn't happen to be one of the
arctan(2^-n) elementary angles, you still need either multiple shift/add
operations or a multiply-add.
(i.e. it's no different from longhand multiplication... it takes N
(conditional) adds to multiply by an N-bit-precision number, or you
cleverly parallelize/pipeline it).
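To make the shift/add point concrete, here is a minimal sketch of CORDIC in rotation mode (my own toy code, not from the article): each iteration is just a shift, an add/subtract, and a table lookup, and the function names, iteration count, and pre-scaling are my assumptions.

```python
import math

N_ITER = 16  # iterations, roughly one bit of precision each

# Precomputed micro-rotation angles: atan(2^-i)
ANGLES = [math.atan(2.0 ** -i) for i in range(N_ITER)]

# Un-normalized gain of the micro-rotations: prod(sqrt(1 + 2^-2i))
GAIN = 1.0
for i in range(N_ITER):
    GAIN *= math.sqrt(1.0 + 2.0 ** (-2 * i))

def cordic_rotate(theta):
    """Return (cos(theta), sin(theta)) for |theta| < ~1.74 rad."""
    # Start pre-scaled by 1/GAIN so the result lands on the unit circle.
    x, y, z = 1.0 / GAIN, 0.0, theta
    for i in range(N_ITER):
        d = 1.0 if z >= 0.0 else -1.0      # rotate toward z = 0
        # Shift-and-add micro-rotation by d * atan(2^-i)
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * ANGLES[i]
    return x, y
```

In hardware the `* 2.0 ** -i` terms become barrel shifts; this float version just makes the iteration structure visible.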
And, in a lot of applications, you'll still need to do a multiply by the
"gain" term (the product of all those cos(phi_i)) that you factored out.
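That factored-out gain is a fixed constant once the iteration count is chosen; a quick sketch (function name is my own) shows it converging to about 0.60725:

```python
import math

def cordic_scale(n_iter):
    """Product of cos(atan(2^-i)) for i = 0..n_iter-1 -- the constant
    scale factor left over after the CORDIC micro-rotations."""
    k = 1.0
    for i in range(n_iter):
        k *= math.cos(math.atan(2.0 ** -i))
    return k
```

Because the constant is known in advance, that multiply can be folded into the initial vector instead of applied at the end.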
I can see that there could be advantages in implementation, but in the
general case, is it actually that much more efficient?
(In a CPU with no multiply, yes, it's a lot better, because it's
essentially the same as doing long multiplication, but you get more done
for the same amount of work.)
Maybe it's a "how many gates do you need for a given precision" sort of
thing? Or the fact that it generates sin and cos together (which is very
useful in some cases).
In a one-off "give me the cos(theta)" sort of situation (e.g. a
calculator), compared to computing the series expansion, CORDIC is
clearly the way to go.
But in a DDS, you're generating a continuous series of samples.
I'll have to think about it.
(And, because it's integrating a difference equation, a rotator that
rotates the previous sample forward each step does have accumulating
roundoff, unless you compute each sin/cos from scratch each time.)
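The accumulating-roundoff point is easy to demonstrate with a toy recursive rotator (my own numbers and names throughout; 12-bit fixed point is deliberately coarse so the drift shows up quickly). The rounded cos/sin pair doesn't sit exactly on the unit circle, so the amplitude walks away from 1 over many samples, whereas a value recomputed from the phase accumulator each sample would not.

```python
import math

SCALE = 1 << 12               # 12 fractional bits of fixed point
W = 2.0 * math.pi / 100.0     # phase step per sample (100 samples/cycle)
C = round(math.cos(W) * SCALE)  # quantized rotation coefficients
S = round(math.sin(W) * SCALE)

def amplitude_after(n):
    """Run n steps of the fixed-point recursive rotation
    (x, y) -> (x*C - y*S, x*S + y*C) / SCALE and return the radius."""
    x, y = SCALE, 0  # start at (1, 0)
    for _ in range(n):
        x, y = (x * C - y * S) // SCALE, (x * S + y * C) // SCALE
    return math.hypot(x, y) / SCALE
```

With these coefficients C^2 + S^2 is slightly above SCALE^2, so the radius grows a tiny bit every step; after ten thousand samples the error is no longer tiny. More precision slows the drift but doesn't eliminate it.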
_______________________________________________
time-nuts mailing list -- time-nuts@febo.com
To unsubscribe, go to https://www.febo.com/cgi-bin/mailman/listinfo/time-nuts
and follow the instructions there.