In message <[email protected]>
"Eric Osborne (eosborne)" writes:
> 
> draft-mjsraman-panet-bgp-power-path says
> " Power consumption can be
>    reduced by trading off performance related measures like latency. For
>    example, power savings while switching from 1 Gbps to 100 Mbps is
>    approximately 4 W and from 100 Mbps to 10 Mbps around 0.1 Watts."


Interesting example.  I really don't think service providers have core
links that small.  Even the metro is going to 1GbE and 10GbE.

The real issue is how many parallel 10 Gb/s links (older or edge), or
100 Gb/s links (the newest core stuff), are lit and operational, and,
for this topic, whether not using some of them saves power.  Not using
some ports and concentrating all the traffic on a subset of ports
saves very little power.

Most cores don't have any lower-power, higher-delay paths.  Most cores
have some paths with fewer parallel links and some paths with more,
but generally don't have one link running at partial capacity and one
slower link available as an alternate.  If all the links are the same
capacity and it's just a matter of how many are running in parallel,
the shortest hop count is the lowest power path.
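To put a toy number on that claim (all wattage figures below are
invented for illustration, not measurements): if every lit port draws
roughly the same active power, total path power just scales with hop
count, so the shortest path is the cheapest one.

```python
# Back-of-envelope: with equal-capacity links and equal per-port
# active power, path power scales with hop count.
# The per-port wattage is a hypothetical placeholder, not a data-sheet value.

ACTIVE_W_PER_PORT = 50.0  # assumed active draw of one lit port


def path_power(hop_count, watts_per_port=ACTIVE_W_PER_PORT):
    """Each hop lights two ports (one at each end of the link)."""
    return hop_count * 2 * watts_per_port


print(path_power(3))  # 3-hop path
print(path_power(5))  # 5-hop path draws more, purely from hop count
```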

There has been some discussion of building into high-end line cards an
ability to go into a reduced power state, but most don't support this
yet (none that I am aware of).  If they did, the leakage power would
be saved.  Any ASIC draws both active power and leakage power.  No
traffic means almost no active power.  Leakage power is generally
small compared to active power except in a few types of logic blocks,
like SRAM.  So the savings might not be all that great.

Has anyone studied this?  It would help to know both the max power
draw and the power draw of an idle high-end card, because the idle
power draw would be the savings if the card could be put in a standby
state.  Usually only max power figures are published.
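To make that question concrete, here is a toy calculation with
invented figures (real data sheets usually give only the max number,
which is exactly the gap being pointed out): a true standby state
saves the whole idle draw, while merely draining traffic off the card
saves only the active portion.

```python
# Toy estimate of per-card savings. Both wattages are assumptions
# for illustration; neither comes from a real line-card data sheet.

MAX_POWER_W = 400.0   # assumed max draw of a high-end line card
IDLE_POWER_W = 250.0  # assumed draw with no traffic (leakage + housekeeping)


def standby_savings(idle_w=IDLE_POWER_W):
    """If the card can be powered down to ~0 W in standby,
    the savings is the entire idle draw."""
    return idle_w


def draining_savings(max_w=MAX_POWER_W, idle_w=IDLE_POWER_W):
    """Merely moving traffic off the card removes only the active
    power; the idle (largely leakage) draw remains."""
    return max_w - idle_w


print(standby_savings())   # savings with a real standby state
print(draining_savings())  # savings from traffic concentration alone
```

Under these assumed numbers, standby would save much more per card
than simply concentrating traffic onto fewer ports, which is why the
idle figure matters.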

Data centers are similar, typically built on the fat-tree model, with
lots of parallel links.  There is often a jump up in link capacity at
each level of aggregation (top of rack, end of rack, building
interconnect), but the lowest-speed interfaces would typically be the
two GbE links (for redundancy) from a blade server to the top-of-rack
switch.

Curtis
_______________________________________________
rtgwg mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/rtgwg