In message <[email protected]>
Tony Li writes:
 
>  
> On Feb 8, 2013, at 12:25 AM, Hannes Gredler <[email protected]> wrote:
>  
> > tony,
> > 
> > agree that saving power is a worthwhile goal;
> > 
> > in fact existing hardware technology is making that happen today by
> > e.g. automatically shutting down unused lookup engines, CPU cores,
> > and memory banks when there is low processing demand.
> > 
> > the part where i am not yet convinced is that additional off-peak
> > "optimization" of infrastructure links, e.g. computing a routing mesh
> > which only uses 70% of the nominal links, actually gives much power
> > savings.
> > 
> > note that line cards which are running at 70% have already throttled
> > down their power consumption - so what is the point of emptying the
> > link and loading another? it appears to me a zero-sum game.
> > 
> > my concern about the core (and SP edge) is not about business or
> > technology - it is more about whether we are trying to optimize an
> > already optimized (and solved) problem.
>  
>  
> If there are no more optimizations to be had, then talking about it
> isn't going to hurt anything.
>  
> The point of emptying links is the ability to turn off NPUs (50-100W
> ea.), turn off edge optics (2-20W ea.) and most importantly turn off
> long-haul optics (100-400W ea.).  Yes, I realize that the latter part
> is really problematic because DWDM systems are balanced with all
> lambdas active, and that disabling any one of them can cause issues
> with other lambdas.
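Taken at face value, those per-component figures bound the power recoverable per emptied link. A rough sketch, assuming one NPU and one edge optic at each link end plus one long-haul optic (the link composition is an assumption; the wattage ranges are the ones quoted above):

```python
# Back-of-envelope savings from idling one long-haul link.
# Wattage ranges are from the message above; the assumption of one
# NPU and one edge optic per link end is illustrative only.
NPU_W = (50, 100)        # per NPU
EDGE_OPTIC_W = (2, 20)   # per edge optic
LONGHAUL_W = (100, 400)  # per long-haul optic

def per_link_savings(ends=2):
    lo = ends * (NPU_W[0] + EDGE_OPTIC_W[0]) + LONGHAUL_W[0]
    hi = ends * (NPU_W[1] + EDGE_OPTIC_W[1]) + LONGHAUL_W[1]
    return lo, hi

lo, hi = per_link_savings()
print(f"~{lo}-{hi} W per emptied link")  # ~204-640 W
```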
>  
> Still, it would be nice to understand what CAN be done, if only to
> feel like we've truly explored every corner in the solution space and
> exhausted every option.
>  
> Tony


Tony,

It may be that the industry is already doing what it can, but for a
completely different reason than CO2 emissions.

The reason is getting more capacity into not much more space and power.
Network capacity grows exponentially; a decimal order of magnitude
every 5-8 years is a reasonable estimate (circa 2000, up to 10G IP
core links; circa 2012, approaching 1T IP core links, with transport
capacities of 3.2T to 8T per fiber).
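That growth estimate can be turned into an implied annual rate; a quick sketch of the arithmetic:

```python
# Implied annual growth for a decimal order of magnitude (10x)
# over 5-8 years, per the estimate above.
def annual_factor(years):
    return 10 ** (1 / years)

for years in (5, 8):
    f = annual_factor(years)
    print(f"10x in {years} years -> {f:.2f}x per year "
          f"({(f - 1) * 100:.0f}% annual growth)")
```

So "10x in 5-8 years" corresponds to roughly 33-58% capacity growth per year.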

Each generation of equipment replaces a row of racks with a rack or
less of the same capacity at far less power per bit.  As each
generation ages, the row of racks fills again and the cycle repeats.
The older stuff used to be redeployed further out from the core at the
edge, but I'm not sure that occurs as much anymore (the older stuff
may simply be retired now).  This is true of both IP/MPLS and transport.

Given the goal of replacing the existing space and power footprint
with 5-10 times the capacity, it is space density and peak power
density that have been the focus.  The key question after "do we need
to look at leasing more space or relocating the facility?" has been
"are the power feeds and the AC going to need to be upgraded, and is
that feasible?"

Operating power has been a secondary concern, but it dictates the
power bill and is therefore a big part of the OpEx equation.
Secondary doesn't mean ignored: given both big power bills and
potential political pressure from operating multiple facilities
requiring hundreds of kW, the problem is getting attention.  OTOH, the
datacenters of content providers and the Internet exchanges, requiring
many MW of power, have even more incentive to think about operating
power.
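To put the OpEx point in rough numbers: a facility drawing a constant load pays for every hour of the year. A sketch with an illustrative 500 kW load and an assumed $0.10/kWh tariff (both figures hypothetical, not from the thread):

```python
# Rough annual power bill for a facility drawing a constant load.
# The 500 kW load and $0.10/kWh rate are illustrative assumptions.
def annual_bill(load_kw, usd_per_kwh):
    hours_per_year = 24 * 365
    return load_kw * hours_per_year * usd_per_kwh

print(f"${annual_bill(500, 0.10):,.0f}/year")  # prints $438,000/year
```

At many MW, the same arithmetic lands in the millions of dollars per year, which is why the datacenter operators have the stronger incentive.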

A useful question is: if equipment vendors and providers placed a
greater focus on operating power, how much could be gained, and what
are the best means of doing so?

The question is really not whether a routing or TE based approach can
in theory make gains in a hypothetical network; it is whether that is
a worthwhile approach given how networks are actually built and where
the power actually goes.  Starting from the former is assuming a
solution before understanding the problem, and IMHO it starts off
with a poor solution.

Anyone who thinks the industry does not look at this problem can look
to past NANOG presentations.  Most have focused on data centers and
IXPs.  At least one focused specifically on core routers, appealing to
relaxed NEBS temperatures and sustained PPS rates to reduce power:
http://www.nanog.org/meetings/nanog54/presentations/Wednesday/Wobker.pdf
This has good info on slides 11 and 12.  An interesting point on slide
10 is "cooling 1 slot ~= cooling ALL slots" (the reason is that
airflow past the cooling fins has to be maintained at a given rate per
ambient temperature).

Curtis
_______________________________________________
rtgwg mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/rtgwg