Re: Power Efficiency Tradeoffs

2008-03-31 Thread Catalin Patulea
Hi everyone,

Sorry to wake up an ancient thread -- I've had a bit more time to
think about your comments.

Indeed, there is a non-zero static power dissipation at f = 0. I will
assume it's constant for a given voltage. In addition, a fairly good
approximation is that power increases linearly with frequency (not
quadratically; it does that with voltage). So power for a chip can be
roughly written as:
P = P0 + Ps*f
where P0 is the static power in W and Ps is the switching power in W/Hz.

Once again, for the entire discussion, we take core voltage to be constant.
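
As a quick sanity check, here is a small Python sketch of that model. The
values of P0 and Ps are invented round figures for illustration, not
measurements from any real chip:

    # Power model at constant core voltage: P = P0 + Ps*f
    P0 = 0.020     # static power in W (hypothetical)
    Ps = 0.5e-9    # switching power in W/Hz (hypothetical, i.e. 0.5 mW per MHz)

    def power(f_hz):
        # total power in W at clock frequency f_hz
        return P0 + Ps * f_hz

    for f_mhz in (10, 50, 100, 200):
        print(f_mhz, "MHz ->", round(power(f_mhz * 1e6) * 1e3, 1), "mW")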

Together with my previous assumption that the time required to carry out
a given task is exactly inversely proportional to the frequency, we can
define a constant W for a given task: the amount of work required for it,
measured in clock cycles. This is effectively unitless because it's
seconds * hertz.

So, the total amount of energy required for the task is:
E = P * t
   = P0*t + Ps*f*t
And since t = W/f,
E = P0*W/f + Ps*W
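
To put some made-up numbers on this, here is a short Python sketch (P0, Ps
and W are arbitrary placeholder values):

    # Energy for a task of W cycles: E = P*t with t = W/f, so E = P0*W/f + Ps*W
    P0 = 0.020     # static power in W (hypothetical)
    Ps = 0.5e-9    # switching power in W/Hz (hypothetical)
    W = 100e6      # work for the task, in cycles (hypothetical)

    def energy(f_hz):
        t = W / f_hz              # time to finish the task, in seconds
        return P0 * t + Ps * W    # static part shrinks with f, switching part is fixed

    for f_mhz in (10, 50, 100, 200):
        print(f_mhz, "MHz ->", round(energy(f_mhz * 1e6), 3), "J")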

Now, we are looking to minimize the energy needed for a particular
task by varying frequency and only frequency (we assume the target
architecture does not automatically change voltage along with
frequency, which I believe is a reasonable assumption for the
medium-range hardware targeted by Rockbox). So we look at the first
derivative of E with respect to f:

dE/df = -P0*W/(f^2)

This is negative for every f > 0, so it never reaches zero: E decreases
monotonically as frequency increases. What does this mean? Within this
model, energy is at a minimum as f tends toward infinity.

This even makes sense intuitively: as frequency increases, the static
term P0*W/f shrinks toward zero while the switching term Ps*W stays
fixed. In other words, the task is executed so fast that the static
power doesn't have enough time to matter.
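
Pushing the same made-up numbers from the sketch above to absurdly high
clocks shows E flattening out at the Ps*W floor:

    # E approaches Ps*W as f grows; the P0*W/f term becomes negligible
    P0, Ps, W = 0.020, 0.5e-9, 100e6   # same hypothetical values as above
    for f_hz in (1e8, 1e9, 1e10):
        print(f_hz, "Hz ->", P0 * W / f_hz + Ps * W, "J   (floor is", Ps * W, "J)")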

Okay, so that was a lot of text to make one point. Thoughts?

I intend to actually try this with a PIC microcontroller at some point.

-Cat

On Mon, Jan 21, 2008 at 6:43 PM, Bertrik Sikken [EMAIL PROTECTED] wrote:
 Burelli Luca wrote:
  
   On Sat, 12 Jan 2008, Catalin Patulea wrote:
  
   - I believe (correct me if I'm wrong) that, in general, power
   consumption is proportional to the core frequency. Let the power
   consumptions of the cores be P_1 = k*f_1 and P_2 = k*f_2.
  
   An accurate estimation of power consumption in digital electronics is
   not so easy to figure out. However, as a rule of thumb, you may assume
   there's a constant power dissipation which is due to leakage (non-ideal
   switches allowing current to flow even where and when it should not),
   and a dynamic power dissipation that is proportional to Vdd (the
   switching voltage) and to _the square_ of the switching frequency. So
   the above would be better written as P_1 = P_{1,leak} + k * f_1^2,
   meaning that if you run the CPU twice as fast, you need four times the
   energy (ignoring leakage). That's why clock throttling helps _a lot_ in
   reducing battery drain!
  
   Hope I remembered things correctly from my University classes :-)

  I think you have the relations mixed up.

  In one clock tick, a bunch of internal nodes acting as tiny capacitors
  need to be charged or discharged, dissipating an amount of energy equal
  to the energy contained in those capacitors.
  A charged capacitor C has energy E = 1/2 * C * V * V, which demonstrates
  the quadratic relation between voltage and consumed dynamic power.

  Increasing the clock simply means that this happens more often each
  second, which points to a linear relation between frequency and
  dynamic power.
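
  A tiny Python illustration of this, where the per-node capacitance, node
  count and core voltage are invented round numbers rather than anything
  from a datasheet:

      # Dynamic power from switched capacitance: each node dissipates on the
      # order of 1/2*C*V^2 per transition, and that happens f times a second.
      C_node = 5e-15    # effective capacitance per node, in F (hypothetical)
      nodes = 1e6       # nodes switching each clock tick (hypothetical)

      def dynamic_power(f_hz, v):
          e_per_tick = 0.5 * C_node * v * v * nodes   # energy per clock tick, in J
          return e_per_tick * f_hz

      print(dynamic_power(100e6, 1.2))   # baseline
      print(dynamic_power(200e6, 1.2))   # double f -> double power (linear in f)
      print(dynamic_power(100e6, 2.4))   # double V -> 4x power (quadratic in V)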

  Kind regards,
  Bertrik



Re: Power Efficiency Tradeoffs

2008-01-14 Thread Linus Nielsen Feltzing

Mike Holden wrote:

But that's precisely what proportional means - linearly proportional!

To be proportional, the two values have to be always at exactly the same
ratio, such as y = x * 2.


Well, it can also be exponentially or logarithmically proportional, as 
far as I know.


Linus



Re: Power Efficiency Tradeoffs

2008-01-14 Thread Mike Holden
Jerry Van Baren wrote:
 While power consumption is proportional to clock rate, it isn't
 necessarily *linearly* proportional

But that's precisely what proportional means - linearly proportional!

To be proportional, the two values have to be always at exactly the same
ratio, such as y = x * 2.

proportional from dictionary.com:

4. Mathematics. a. (of two quantities) having the same or a constant ratio
or relation: The quantities y and x are proportional if y/x = k, where k
is the constant of proportionality.
b. (of a first quantity with respect to a second quantity) a constant
multiple of: The quantity y is proportional to x if y = kx, where k is the
constant of proportionality.
-- 
Mike Holden

http://www.by-ang.com - the place to shop for all manner of hand crafted
items, including Jewellery, Greetings Cards and Gifts





Re: Power Efficiency Tradeoffs

2008-01-14 Thread Mike Holden
Linus Nielsen Feltzing wrote:
 Mike Holden wrote:
 But that's precisely what proportional means - linearly proportional!

 To be proportional, the two values have to be always at exactly the
same ratio, such as y = x * 2.

 Well, it can also be exponentially or logarithmically proportional, as
far as I know.

They can be related via a logarithmic or exponential scale (or cubic,
square, root, tan, cos, sine or whatever), but by definition, that isn't
proportional.
-- 
Mike Holden

http://www.by-ang.com - the place to shop for all manner of hand crafted
items, including Jewellery, Greetings Cards and Gifts




Re: Power Efficiency Tradeoffs

2008-01-14 Thread Mark Allums

Linus Nielsen Feltzing wrote:

Mike Holden wrote:

But that's precisely what proportional means - linearly proportional!

To be proportional, the two values have to be always at exactly the same
ratio, such as y = x * 2.


Well, it can also be exponentially or logarithmically proportional, as 
far as I know.


Linus



A proportion is usually written in the form

y = kx + c

where k is called the constant of proportionality.

A proportionality can be expressed by almost any function; the
definition of proportional, however, implies a linear function. One
possible, more general equation might be


y = F(x) = Px + c

where P is some polynomial in some other variable, with P generally a 
constant, or close to constant, for the range of values we are 
interested in.  This is really a function in two variables:


e.g.,

P(s) = s^2 + 2s + 3

y = xs^2 + 2xs + 3x + c

If s is close to 1.0 and we can assume it *stays* there, then it becomes

y = x + 2x + 3x + c

y = 6x + c


If it can be represented by an exponential, logarithmic, harmonic or
some other function, it is not strictly a proportion, but that is just
nitpicking.  It is still useful to make statements like "a is
proportional to the square root of b".


a = k(b^0.5) + c, where c == 0


And if we *know* the function that approximates the value, we can use 
it, whatever it is.


At any rate, we know what you mean when you say proportional.

:)

--Mark Allums





Re: Power Efficiency Tradeoffs

2008-01-12 Thread Jerry Van Baren

Catalin Patulea wrote:

Hey everyone,

I've been pondering a power efficiency tradeoff problem in dual-core
embedded systems. (This obviously directly stems from the m:robe
architecture, but the discussion should be fairly general.)

Take a system with two identical cores.
- One runs at some given clock f_1 and another that runs at f_2,
different from f_1.
- I believe (correct me if I'm wrong) that, in general, power
consumption is proportional to the core frequency. Let the power
consumptions of the cores be P_1 = k*f_1 and P_2 = k*f_2.
- Take some task that takes a constant number of instructions to
execute to completion. Assume that this task, given the requirements
of the system, may be executed on either core. Then, since the number
of instructions is constant, the amount of time required for the same
task on each core is inversely proportional to the clock (with the
simplifying assumption that these are one-clock-per-instruction
machines): T_1 = N/f_1 and T_2 = N/f_2.
- If you consider power to be constant and you integrate over time,
you end up with the following equations for consumed *energy* for the
same given task, on each core running at a different frequency:
E_1 = P_1*T_1 = k*f_1 * N/f_1 = k*N
E_2 = P_2*T_2 = k*f_2 * N/f_2 = k*N

In other words, all other things being equal, a given task on a given
machine takes a given constant amount of energy to complete,
regardless of the clock frequency.
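
To spell out the same arithmetic in a couple of lines of Python (k and N
are arbitrary placeholder values; any positive numbers cancel the same way):

    # Under the linear, leakage-free model, energy for a fixed task does not depend on f
    k = 1e-9     # W per Hz (placeholder)
    N = 50e6     # instructions in the task (placeholder)
    for f in (50e6, 100e6, 200e6):
        P = k * f          # power at this clock
        T = N / f          # time to finish the task
        print(f, P * T)    # always k*N = 0.05 J, regardless of f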

Is this right? This basically means that underclocking brings no
benefit in terms of energy consumption if the CPU is fully active. In
fact, if we assume that the machine's idle mode consumes zero power,
this also means there's no underclocking benefit for not-fully-loaded
CPUs.

Clearly, though, since underclocking is so commonly accepted as a way
of reducing power consumption, there must be an explanation. Can
anyone shed some light?

Ultimately, I would like to use the conclusion of this discussion to
decide where to place certain code for the m:robe's audio system: in
the ARM core, or in the (lower-clocked) DSP core. I know that these
are far from identical architectures, but it's often helpful to
examine an ideal situation before drawing real conclusions.

Thanks,

Catalin


While power consumption is proportional to clock rate, it doesn't go to 
zero when the clock goes to zero - there is always leakage that consumes 
power even when the clock is zero.  Chip manufacturers strive very, very 
hard to minimize the leakage... right now, Intel has made a significant 
breakthrough in controlling leakage at 45nm that AMD/IBM/Mot/Freescale 
are having *major* problems matching.

  http://www.intel.com/technology/magazine/silicon/it01041.pdf
(note the "running out of atoms" part - amazing).

Nowadays, hardware scales voltage along with clock rate: at lower 
clock rates, the chip doesn't need as high a voltage to still run, so 
the voltage is reduced as well.  Since dynamic power scales with the 
square of the voltage, reducing voltage reduces power substantially.


While power consumption is proportional to clock rate, it isn't 
necessarily *linearly* proportional, especially with hardware that 
scales voltage as well as the clocks.
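
As a rough Python sketch of why voltage scaling makes the savings better
than linear (the capacitance and the two operating points are made up
purely for illustration):

    # Dynamic power ~ C*V^2*f. If V drops roughly along with f, power falls
    # roughly with f^3, so halving the clock can cut dynamic power by ~8x.
    C_eff = 1e-9    # effective switched capacitance, in F (hypothetical)

    def dyn_power(f_hz, v):
        return C_eff * v * v * f_hz

    print(dyn_power(200e6, 1.2))   # full speed at full voltage
    print(dyn_power(100e6, 0.6))   # half speed at half voltage: ~1/8 the power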


Most clock-based power control, for instance on your laptop or a music 
player, simply reduces the clock when there isn't anything to do. 
Rather than the processor doing NOPs (or even a WAIT instruction) at 
2 GHz, it is much more efficient to do them at 32 kHz (just making up 
numbers).
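
Plugging made-up numbers into a P = P0 + Ps*f style model shows why the
slow idle wins (these values are placeholders, not measurements):

    # Idle power when the CPU is just spinning, under P = P0 + Ps*f
    P0, Ps = 0.020, 0.5e-9   # hypothetical static power (W) and switching power (W/Hz)
    print(P0 + Ps * 2e9)     # NOPs at 2 GHz  -> about 1.02 W
    print(P0 + Ps * 32e3)    # NOPs at 32 kHz -> about 0.020 W, essentially just leakage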


HTH,
gvb