Re: [AFMUG] Data center temperatures

2016-05-19 Thread Paul Stewart
Absolutely agree… these are not “normal” cabinets for customers … although if 
you have a cage, you have more freedom to do what you want within it (within 
reason), which is why these cabinets were chosen/built.

 

From a DC perspective, yeah, the more you can jam in the better (again, within 
reason)

 

;)

 


Re: [AFMUG] Data center temperatures

2016-05-19 Thread Paul Stewart
Extra wide 19” cabinets … they were specially ordered for “us” :)  I can’t 
comment too much on the cooling as the design is rather custom in nature, but I 
can say that we’ve never had any issues there … 

 




Re: [AFMUG] Data center temperatures

2016-05-19 Thread Mike Hammett
Varies. Used to be in the 60s, but newer tech and newer designs can push that 
up near 100. 




- 
Mike Hammett 
Intelligent Computing Solutions 

Midwest Internet Exchange 

The Brothers WISP 




- Original Message -

From: "Josh Luthman"  
To: af@afmug.com 
Sent: Wednesday, May 11, 2016 3:37:38 PM 
Subject: [AFMUG] Data center temperatures 


Just curious what the ideal temp is for a data center. Our really nice building 
that Sprint ditched ranges from 60 to 90F (on a site monitor). 


Re: [AFMUG] Data center temperatures

2016-05-19 Thread Eric Kuhnke
Agreed from an end user perspective, but not so much from a $$$/sq ft revenue
perspective of a datacenter operator. With the very highest density/high
power cabinets I've seen recently, the cable management is actually not so
bad. For hypervisor platforms, what used to be 4 x 1000BaseT connections and
maybe a 5th cable for OOB in a previous-generation design is now a few
10GbE and 40Gb links to a TOR switch over regular, thin, yellow spaghetti:
two-strand singlemode.




Re: [AFMUG] Data center temperatures

2016-05-19 Thread Josh Reynolds
Extra wide cabinets are awesome for cable management.



Re: [AFMUG] Data center temperatures

2016-05-19 Thread Eric Kuhnke
52U and 23" rack, or just an extra-wide/roomy 19"?

I'd like to see a photo of the hot/cold aisle setup for that, if they have
a bunch of 208V 60A capable cabinets in a row



Re: [AFMUG] Data center temperatures

2016-05-19 Thread Paul Stewart
Hahah… I’ve seen that several times, especially in telco COs ;)

 

From: Af [mailto:af-boun...@afmug.com] On Behalf Of Chuck McCown
Sent: May 14, 2016 11:40 PM
To: af@afmug.com
Subject: Re: [AFMUG] Data center temperatures

 

I remember being at a data center on a hot summer day.  Power went out, 
generator started.  Things were fine... then all the air conditioners switched 
on at the same time.  It actually stalled the generator.  We had to put 
sequencers on the AC.  

 

From: Faisal Imtiaz <fai...@snappytelecom.net>  

Sent: Saturday, May 14, 2016 9:20 PM

To: af@afmug.com  

Subject: Re: [AFMUG] Data center temperatures

 

FYI, the Electrical Code (NEC) and most datacenters require the power not to be 
loaded beyond 80% of breaker capacity... i.e. a 16 amp draw on a 20 amp circuit.

 

Additionally, one also has to have headroom on the power circuit to deal with 
start-up draw (inrush current). It's not pretty when you have a crap load of 
servers starting up all together. 
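
A tiny Python sketch of that 80% continuous-load rule plus the start-up headroom point, for anyone who wants to plug in their own numbers; the breaker sizes and the 1.2x/1.5x inrush multipliers below are made-up illustrative values, not anything taken from the NEC or from a real PDU:

# Sketch of the 80% continuous-load rule and inrush headroom described above.
# Breaker sizes and inrush multipliers are example values, not from any spec.

def max_continuous_amps(breaker_amps):
    """Continuous load should stay at or below 80% of the breaker rating."""
    return breaker_amps * 0.80

def fits(breaker_amps, steady_amps, inrush_multiplier=1.0):
    """True if the steady-state draw respects the 80% limit and the worst-case
    start-up draw (steady * multiplier) stays within the breaker rating."""
    return (steady_amps <= max_continuous_amps(breaker_amps)
            and steady_amps * inrush_multiplier <= breaker_amps)

print(max_continuous_amps(20))              # 16.0 A allowed on a 20 A circuit
print(fits(30, 24, inrush_multiplier=1.2))  # staggered start-up bump -> True
print(fits(30, 24, inrush_multiplier=1.5))  # everything starting at once -> False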

 

 

:)

 

Faisal Imtiaz
Snappy Internet & Telecom
7266 SW 48 Street
Miami, FL 33155
Tel: 305 663 5518 x 232

Help-desk: (305)663-5518 Option 2 or Email: supp...@snappytelecom.net 

 


From: "Eric Kuhnke" mailto:eric.kuh...@gmail.com> >
To: af@afmug.com <mailto:af@afmug.com> 
Sent: Saturday, May 14, 2016 7:50:22 PM
Subject: Re: [AFMUG] Data center temperatures

How does a 44U cabinet need 208V 60A for storage arrays?

In a 4U chassis the max hard drives (front and rear) is about 60 x 3.5"...

Say each drive is 7.5W TDP, that's 450W of drives. Add another 200W for 
controller/motherboard and fans. 650W in 4U.

44 / 4 = 11

Multiply by 650

7150W

More realistically with a normal amount of drives (like 40 per 4U) a single 208 
30A is sufficient,

208 x 30 = 6240W

Run at max 0.85 load on the circuit, so

6240 x 0.85 = 5304W

In a really dense 2.5" environment all of the above is of course invalid, you 
could probably need up to 7900W per cabinet
Then there's 52U cabinets as well...
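
For anyone who wants to poke at that back-of-envelope math, here's the same budget as a quick Python sketch; the 7.5W per drive and 200W of chassis overhead are just the rough figures quoted above, not measured numbers:

# Back-of-envelope cabinet power budget using the figures from the post above.
DRIVE_W = 7.5                 # assumed per-drive draw (W)
CHASSIS_OVERHEAD_W = 200.0    # controller/motherboard/fans per 4U chassis (W)
CHASSIS_U = 4                 # rack units per storage chassis

def chassis_watts(drives):
    """Rough power for one 4U storage chassis with the given drive count."""
    return drives * DRIVE_W + CHASSIS_OVERHEAD_W

def cabinet_watts(cabinet_u, drives_per_chassis):
    """Rough power if the whole cabinet is filled with 4U storage chassis."""
    return (cabinet_u // CHASSIS_U) * chassis_watts(drives_per_chassis)

def usable_circuit_watts(volts, amps, load_factor=0.85):
    """Watts available on a circuit after derating for continuous load."""
    return volts * amps * load_factor

print(cabinet_watts(44, 60))               # 11 chassis x 650 W = 7150.0 W
print(cabinet_watts(44, 40))               # 40 drives per 4U  -> 5500.0 W
print(usable_circuit_watts(208, 30))       # 208V 30A at 0.85  -> 5304.0 W
print(usable_circuit_watts(208, 60, 0.8))  # 208V 60A at 80%   -> 9984.0 W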

On May 13, 2016 6:16 PM, "Paul Stewart" <p...@paulstewart.org> wrote:



Yup … general trends in new data centers are pushing those temperatures higher 
for efficiency, but also with better designs ..

 

One of our data centers runs at 78F and has no issues – each cabinet is 
standard 208V 30A as you mention, but can go much higher per cabinet if needed 
(i.e. 208V 60A for storage arrays)

 

From: Af [mailto:af-boun...@afmug.com] On Behalf Of Eric Kuhnke
Sent: May 11, 2016 5:15 PM


To: af@afmug.com
Subject: Re: [AFMUG] Data center temperatures

 

There have been some fairly large data-set studies showing that an air intake 
temperature of 77-78F, across huge numbers of servers, does not correlate with a 
statistically significant rate of failure.  

http://www.datacenterknowledge.com/archives/2008/09/18/intel-servers-do-fine-with-outside-air/

http://www.datacenterknowledge.com/archives/2012/03/23/too-hot-for-humans-but-google-servers-keep-humming/




How/what you do for cooling is definitely dependent on the load. Designing a 
colo facility to use a full 208V 30A circuit per cabinet (5.5kW) in a hot/cold 
air separated configuration is very different from 'normal' older facilities 
that are one large open room.




 

On Wed, May 11, 2016 at 1:58 PM, Ken Hohhof <af...@kwisp.com> wrote:

I’m not sure you can answer the question without knowing the max heat load per 
cabinet and how you manage airflow in the cabinets.

 

AFAIK it used to be standard practice to keep data centers as cold as possible 
without requiring people to wear parkas, but energy efficiency is a 
consideration now.

 

 

From: That One Guy /sarcasm <thatoneguyst...@gmail.com>  

Sent: Wednesday, May 11, 2016 3:51 PM

To: af@afmug.com  

Subject: Re: [AFMUG] Data center temperatures

 

apparently 72 is the ideal for our noc, i set our thermostat to 60 and it 
always gets turned back to 72, so i just say fuck it, I wanted new gear in the 
racks anyway

 

On Wed, May 11, 2016 at 3:46 PM, Larry Smith <lesm...@ecsis.net> wrote:

On Wed May 11 2016 15:37, Josh Luthman wrote:
> Just curious what the ideal temp is for a data center.  Our really nice
> building that Sprint ditched ranges from 60 to 90F (on a site monitor).

I try to keep my NOC room at about 62F; that puts many of the CPUs
at 83 to 90F.  Many of the bigger places I visit will generally be 55 to 60F.
Loads of computers (data center type) are primarily groupings of little
heaters...

--
Larry Smith
lesm...@ecsis.net 





 

-- 

If you only see yourself as part of the team but you don't see your team as 
part of yourself you have already failed as part of the team.

 

 



Re: [AFMUG] Data center temperatures

2016-05-19 Thread Paul Stewart
The cabinets are 50 or 52U in size – a custom size, I know for sure… extra wide 
too, which is nice

 

When filled (pure SSD, almost 200TB raw capacity) they draw around 16kW of 
power :)

 

 


 



Re: [AFMUG] Data center temperatures

2016-05-15 Thread Josh Reynolds
It happens during system start, before RAID array assembly and before the OS boots

Re: [AFMUG] Data center temperatures

2016-05-15 Thread Josh Luthman
How would staggered drive start up work?  Or is the array not available
until the timer is done?

Josh Luthman
Office: 937-552-2340
Direct: 937-552-2343
1100 Wayne St
Suite 1337
Troy, OH 45373

Re: [AFMUG] Data center temperatures

2016-05-14 Thread Josh Reynolds
A lot of the Dell servers I use, as well as a lot of the Supermicro
servers, have that as well. Thankfully many of the RAID JBOD cards I
use (softraid ftw, and ZFS doesn't like it either) can also stagger
drive startup.
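
As a rough illustration of why staggered spin-up helps, a small Python sketch; the 2 A at 12 V spin-up draw and 10-second spin-up time are assumed "typical 3.5-inch drive" numbers, not specs for any particular drive or controller, and the 7.5 W idle figure is just the one used earlier in the thread:

# Peak draw during array spin-up, simultaneous vs staggered (all values assumed).
SPINUP_AMPS_12V = 2.0   # assumed peak 12V current per drive while spinning up
SPINUP_SECONDS = 10.0   # assumed spin-up time per drive
IDLE_WATTS = 7.5        # steady-state per-drive figure used earlier in the thread

def peak_watts(drives, group_size):
    """Worst case: the last group spinning up while earlier groups sit at idle."""
    spinning = min(drives, group_size)
    already_up = drives - spinning
    return spinning * SPINUP_AMPS_12V * 12.0 + already_up * IDLE_WATTS

def total_spinup_seconds(drives, group_size):
    """How long the whole array takes to come up with that group size."""
    groups = -(-drives // group_size)   # ceiling division
    return groups * SPINUP_SECONDS

print(peak_watts(60, 60), total_spinup_seconds(60, 60))  # all at once: 1440 W peak, 10 s
print(peak_watts(60, 6), total_spinup_seconds(60, 6))    # groups of 6:  549 W peak, 100 s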


Re: [AFMUG] Data center temperatures

2016-05-14 Thread Josh Luthman
Neat :)

Josh Luthman
Office: 937-552-2340
Direct: 937-552-2343
1100 Wayne St
Suite 1337
Troy, OH 45373

Re: [AFMUG] Data center temperatures

2016-05-14 Thread Faisal Imtiaz
Nothing special, Dell C2100's ... it looks like these settings are getting to be 
more common in stuff designed for high-density data center installs. 

Regards. 

Faisal Imtiaz 
Snappy Internet & Telecom 
7266 SW 48 Street 
Miami, FL 33155 
Tel: 305 663 5518 x 232 

Help-desk: (305)663-5518 Option 2 or Email: supp...@snappytelecom.net 

> From: "Josh Luthman" 
> To: af@afmug.com
> Sent: Sunday, May 15, 2016 12:14:49 AM
> Subject: Re: [AFMUG] Data center temperatures

> Wow that's cool! What kind of hardware are they?

> Josh Luthman
> Office: 937-552-2340
> Direct: 937-552-2343
> 1100 Wayne St
> Suite 1337
> Troy, OH 45373

> On Sun, May 15, 2016 at 12:13 AM, Faisal Imtiaz < fai...@snappytelecom.net >
> wrote:

>> It would be interesting to note that, we are putting in some new servers, 
>> and in
>> the bios these have a setting that delays a random amount of time between 50 
>> -
>> 120seconds, before returning to power on state after a power loss .

>> :)

>> Faisal Imtiaz
>> Snappy Internet & Telecom
>> 7266 SW 48 Street
>> Miami, FL 33155
>> Tel: 305 663 5518 x 232

>> Help-desk: (305)663-5518 Option 2 or Email: supp...@snappytelecom.net

>>> From: "Chuck McCown" < ch...@wbmfg.com >
>>> To: af@afmug.com
>>> Sent: Saturday, May 14, 2016 11:40:09 PM

>>> Subject: Re: [AFMUG] Data center temperatures

>>> I remembering being at a data center on a hot summer day. Power went out,
>>> generator started. Things were fine... then all the air conditioners 
>>> switched
>>> on at the same time. Actually stalled the generator. We had to put 
>>> sequencers
>>> on the AC.
>>> From: Faisal Imtiaz
>>> Sent: Saturday, May 14, 2016 9:20 PM
>>> To: af@afmug.com
>>> Subject: Re: [AFMUG] Data center temperatures
>>> FYI, Electrical Code (NECA) and most datacenters require the power not to be
>>> loaded beyond 80% of breaker capacity... i.e. 16amp draw on a 20amp circuit.
>>> Additionally, one also has to have head room on the power circuit to deal 
>>> with
>>> start up draw (current rush). It's not pretty when you have a crap load of
>>> servers starting up all together
>>> :)
>>> Faisal Imtiaz
>>> Snappy Internet & Telecom
>>> 7266 SW 48 Street
>>> Miami, FL 33155
>>> Tel: 305 663 5518 x 232

>>> Help-desk: (305)663-5518 Option 2 or Email: supp...@snappytelecom.net

>>>> From: "Eric Kuhnke" < eric.kuh...@gmail.com >
>>>> To: af@afmug.com
>>>> Sent: Saturday, May 14, 2016 7:50:22 PM
>>>> Subject: Re: [AFMUG] Data center temperatures

>>>> How does a 44U cabinet need 208V 60A for storage arrays?

>>>> In a 4U chassis the max hard drives (front and rear) is about 60 x 3.5"...

>>>> Say each drive is 7.5W TDP, that's 450W of drives. Add another 200W for
>>>> controller/motherboard and fans. 650W in 4U.

>>>> 44 / 4 = 11

>>>> Multply by 650

>>>> 7150W

>>>> More realistically with a normal amount of drives (like 40 per 4U) a 
>>>> single 208
>>>> 30A is sufficient,

>>>> 208 x 30 = 6240W

>>>> Run at max 0.85 load on the circuit, so

>>>> 6240 x 0.85 = 5304W

>>>> In a really dense 2.5" environment all of the above is of course invalid, 
>>>> you
>>>> could probably need up to 7900W per cabinet
>>>> Then there's 52U cabinets as well...
>>>> On May 13, 2016 6:16 PM, "Paul Stewart" < p...@paulstewart.org > wrote:

>>>>> Yup … general trends on new data centers are pushing those temperatures 
>>>>> higher
>>>>> for efficiency but also with better designs ..

>>>>> One of our data centers runs at 78F and have no issues – each cabinet is
>>>>> standard 208V 30A as you mention but can go per cabinet much higher if 
>>>>> needed
>>>>> (ie. 208V 60A for storage arrays)

>>>>> From: Af [mailto: af-boun...@afmug.com ] On Behalf Of Eric Kuhnke
>>>>> Sent: May 11, 2016 5:15 PM

>>>>> To: af@afmug.com
>>>>> Subject: Re: [AFMUG] Data center temperatures

>>>>> There have been some fairly large data set studies done shown that air 
>>>>> intake
>>>>> temperature for huge numbers of servers, at 77-78F does not correlate 
>>>>> with a
>>>>> statistically significant rate of failure.

&

Re: [AFMUG] Data center temperatures

2016-05-14 Thread Josh Luthman
Wow that's cool!  What kind of hardware are they?


Josh Luthman
Office: 937-552-2340
Direct: 937-552-2343
1100 Wayne St
Suite 1337
Troy, OH 45373


Re: [AFMUG] Data center temperatures

2016-05-14 Thread Faisal Imtiaz
It would be interesting to note that we are putting in some new servers, and 
in the BIOS these have a setting that delays a random amount of time, between 50 
and 120 seconds, before returning to the power-on state after a power loss. 

:) 

Faisal Imtiaz 
Snappy Internet & Telecom 
7266 SW 48 Street 
Miami, FL 33155 
Tel: 305 663 5518 x 232 

Help-desk: (305)663-5518 Option 2 or Email: supp...@snappytelecom.net 
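
As a rough sketch of how that randomized delay spreads the post-outage surge, a little Python simulation; only the 50-120 second window comes from the message above, while the server count, surge duration, and surge wattage are made-up illustrative values:

# How a random 50-120s power-on delay spreads start-up surges after an outage.
# Server count, surge duration and surge wattage are made-up example numbers.
import random

DELAY_RANGE = (50.0, 120.0)   # seconds, per the BIOS setting described above
SURGE_SECONDS = 20.0          # assumed length of each server's start-up surge
SURGE_WATTS = 800.0           # assumed draw during that surge, per server
SERVERS = 40                  # assumed number of servers behind one feed

def peak_concurrent_starts(delays):
    """Max number of servers whose start-up surges overlap at any instant."""
    events = [(d, 1) for d in delays] + [(d + SURGE_SECONDS, -1) for d in delays]
    active = peak = 0
    for _, step in sorted(events):
        active += step
        peak = max(peak, active)
    return peak

random.seed(1)
staggered = [random.uniform(*DELAY_RANGE) for _ in range(SERVERS)]
print(peak_concurrent_starts([0.0] * SERVERS) * SURGE_WATTS)  # everyone at once
print(peak_concurrent_starts(staggered) * SURGE_WATTS)        # randomized delays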


Re: [AFMUG] Data center temperatures

2016-05-14 Thread Chuck McCown
I remember being at a data center on a hot summer day. Power went out, the generator started. Things were fine... then all the air conditioners switched on at the same time. It actually stalled the generator. We had to put sequencers on the AC.
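
As a rough illustration of why sequencing matters (all the numbers below are hypothetical, not from the site in that story): a compressor briefly draws several times its running power while starting, so a handful of units starting together can exceed a generator that carries them comfortably once running.

    GEN_KW = 150.0        # hypothetical generator rating
    IT_LOAD_KW = 80.0     # hypothetical server load already on the generator
    AC_RUN_KW = 10.0      # hypothetical running draw per air-conditioning unit
    START_FACTOR = 3.0    # assumed multiple of running draw during compressor start
    UNITS = 4

    # Every compressor starting at once vs. a sequencer starting them one at a time.
    simultaneous = IT_LOAD_KW + UNITS * AC_RUN_KW * START_FACTOR
    sequenced = IT_LOAD_KW + (UNITS - 1) * AC_RUN_KW + AC_RUN_KW * START_FACTOR

    print(f"all at once: {simultaneous:.0f} kW against a {GEN_KW:.0f} kW generator")
    print(f"sequenced:   {sequenced:.0f} kW against a {GEN_KW:.0f} kW generator")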


Re: [AFMUG] Data center temperatures

2016-05-14 Thread Faisal Imtiaz
FYI, the National Electrical Code (NEC) and most data centers require that a circuit not be loaded beyond 80% of breaker capacity for continuous loads, i.e. a 16 amp draw on a 20 amp circuit. 

Additionally, one also has to have headroom on the power circuit to deal with start-up draw (inrush current). It's not pretty when you have a crap load of servers starting up all together 

:) 

Faisal Imtiaz 
Snappy Internet & Telecom 
7266 SW 48 Street 
Miami, FL 33155 
Tel: 305 663 5518 x 232 

Help-desk: (305)663-5518 Option 2 or Email: supp...@snappytelecom.net 
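
A quick sketch of that derating for the circuit sizes that keep coming up in this thread (the 80% continuous-load figure is the rule described above; the voltage and breaker combinations are just the common ones being discussed):

    def usable_watts(volts, breaker_amps, derate=0.80):
        """Continuous-load capacity of a branch circuit under the 80% rule."""
        return volts * breaker_amps * derate

    for volts, amps in [(120, 20), (208, 20), (208, 30), (208, 60)]:
        print(f"{volts}V {amps}A breaker -> plan for <= {usable_watts(volts, amps):,.0f} W continuous")

Which is why a "standard" 208V 30A cabinet tops out around 5 kW of continuous draw, and the 60A option roughly doubles that.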



Re: [AFMUG] Data center temperatures

2016-05-14 Thread Josh Reynolds
Or if you had 3 or 4 MX960s per cabinet...
38A per power supply x 4 power supplies = 152A. 152A per chassis x 3 = 456A.

An MX2020 is fun if running on DC power:
"A total of four PDMs can be installed into a router. Each DC PDM operates with up to nine separate feeds of either 60-amp or 80-amp current limit. The capacity of these feeds is relayed to system software through a switch located on the DC PDM."

Oh, and GPU clusters. Those are MASSIVE power hogs.

I wonder how much power bitcoin clusters eat?

These are all semi-rare or rare cases of course :)
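
For a rough sense of what those feed ratings mean in watts, here is a small sketch; the -48 VDC nominal plant voltage is my assumption, and feed/breaker ratings are ceilings rather than typical draw (the supplies are redundant):

    NOMINAL_VDC = 48.0        # assumed nominal DC plant voltage

    def feed_kw(amps, volts=NOMINAL_VDC):
        return amps * volts / 1000.0

    per_supply = feed_kw(38)        # 38A per MX960 power supply, as above
    per_chassis = per_supply * 4    # 4 supplies per chassis
    per_cabinet = per_chassis * 3   # 3 chassis per cabinet

    print(f"per supply:  {per_supply:.1f} kW at the feed rating")
    print(f"per chassis: {per_chassis:.1f} kW")
    print(f"per cabinet: {per_cabinet:.1f} kW")

Actual draw sits well below those figures, but it shows why dense router cabinets get special feeds.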
On May 14, 2016 6:59 PM, "Seth Mattinen"  wrote:

> On 5/14/16 16:50, Eric Kuhnke wrote:
>
>> In a really dense 2.5" environment all of the above is of course
>> invalid, you could probably need up to 7900W per cabinet
>>
>
>
> I have customers that peak at 10kW per cabinet, but that's HPC, not
> storage.
>
> ~Seth
>


Re: [AFMUG] Data center temperatures

2016-05-14 Thread Josh Reynolds
Unless you're Dropbox, then you have all kinds of drives crammed in custom
enclosures.

"Basically Diskotech stores 1PB in 18" × 6" × 42" = 4,536 cubic inch
volume, which is 10% bigger than standard 7U. [Backblaze] is [storing]
180TB in 4U. ... Doing the math reveals that Dropbox is basically packing
793TB in 4U.
…
Diskotech is about 30% bigger in volume than [Backblaze] Storage Pod 5.0
but with 470% more storage."
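
To tie that density back to the power discussion, a tiny sketch (the per-drive capacity and wattage are assumptions, reusing the ~7.5W figure quoted elsewhere in this thread):

    PB_TB = 1000.0       # TB per petabyte (decimal)
    DRIVE_TB = 8.0       # assumed capacity per 3.5" drive
    DRIVE_WATTS = 7.5    # assumed per-drive draw, as used elsewhere in the thread

    drives = PB_TB / DRIVE_TB
    watts = drives * DRIVE_WATTS
    print(f"~{drives:.0f} drives per PB -> ~{watts:.0f} W for spindles alone, before controllers and fans")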


Re: [AFMUG] Data center temperatures

2016-05-14 Thread Seth Mattinen

On 5/14/16 16:50, Eric Kuhnke wrote:

In a really dense 2.5" environment all of the above is of course
invalid, you could probably need up to 7900W per cabinet



I have customers that peak at 10kW per cabinet, but that's HPC, not storage.

~Seth
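
For scale, 10 kW in one cabinet is as much an airflow problem as a power problem. A rough sketch using the common sensible-heat approximation for air near sea level, Q[BTU/hr] ≈ 1.08 x CFM x ΔT[°F]; the 25°F intake-to-exhaust rise is an assumed design value:

    def cfm_required(watts, delta_t_f=25.0):
        # Airflow needed to carry `watts` of heat at the given temperature rise.
        btu_per_hr = watts * 3.412
        return btu_per_hr / (1.08 * delta_t_f)

    for kw in (5.0, 10.0):
        print(f"{kw:.0f} kW cabinet -> ~{cfm_required(kw * 1000):.0f} CFM at a 25F rise")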


Re: [AFMUG] Data center temperatures

2016-05-14 Thread Eric Kuhnke
How does a 44U cabinet need 208V 60A for storage arrays?

In a 4U chassis the max hard drive count (front and rear) is about 60 x 3.5"...

Say each drive is 7.5W TDP; that's 450W of drives. Add another 200W for controller/motherboard and fans. 650W in 4U.

44 / 4 = 11

Multiply by 650:

7150W

More realistically, with a normal number of drives (like 40 per 4U), a single 208V 30A circuit is sufficient:

208 x 30 = 6240W

Run at max 0.85 load on the circuit, so

6240 x 0.85 = 5304W

In a really dense 2.5" environment all of the above is of course invalid; you could need up to 7900W per cabinet.
Then there's 52U cabinets as well...
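
Here is the same arithmetic as a small parameterized sketch, so the assumptions (drives per 4U, per-drive TDP, chassis overhead, circuit load factor) can be swapped out:

    def cabinet_watts(rack_units=44, chassis_u=4, drives_per_chassis=60,
                      drive_watts=7.5, overhead_watts=200):
        chassis = rack_units // chassis_u
        return chassis * (drives_per_chassis * drive_watts + overhead_watts)

    def circuit_watts(volts=208, amps=30, load_factor=0.85):
        return volts * amps * load_factor

    print(cabinet_watts())                       # 7150 W, fully stuffed 3.5" chassis
    print(cabinet_watts(drives_per_chassis=40))  # 5500 W with a more normal fill
    print(circuit_watts())                       # 5304 W usable on a 208V 30A circuit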


Re: [AFMUG] Data center temperatures

2016-05-13 Thread Paul Stewart
Yup … general trends in new data centers are pushing those temperatures higher for efficiency, but also with better designs.

One of our data centers runs at 78F and has no issues – each cabinet is standard 208V 30A as you mention, but can go much higher per cabinet if needed (i.e. 208V 60A for storage arrays).



Re: [AFMUG] Data center temperatures

2016-05-12 Thread Paul McCall
We settled ours at 72 degrees many years ago, after trying different ranges for a while. We have a 2nd "standby" AC that we can kick in if we get notices about server room temperature. We get notified of significant temperature changes by a SiteMonitor and also by the Honeywell thermostat.
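
A minimal sketch of that kind of two-stage alerting (the thresholds and the get_room_temp_f() helper are hypothetical placeholders, not the SiteMonitor or Honeywell interface):

    WARN_F = 78.0    # hypothetical: notify staff
    CRIT_F = 85.0    # hypothetical: kick in the standby AC and page on-call

    def get_room_temp_f():
        # Placeholder: in practice this reading comes from the site monitor or thermostat.
        return 72.0

    def check(temp_f):
        if temp_f >= CRIT_F:
            return "CRITICAL: start standby AC and page on-call"
        if temp_f >= WARN_F:
            return "WARNING: server room temperature above normal"
        return "OK"

    print(check(get_room_temp_f()))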



Re: [AFMUG] Data center temperatures

2016-05-11 Thread That One Guy /sarcasm
We did a 4.9 GHz project for a municipality once; their server room was like a freezer, you could see your breath and everything.


Re: [AFMUG] Data center temperatures

2016-05-11 Thread Travis Johnson
We always kept our NOC temps around 72-74F... mainly because that would give us time, if an A/C unit failed (or switched off due to a power failure, etc.), to physically get to the NOC before temps climbed above 100F (which did happen a few times in my 16 years). Servers start shutting down when the air intake hits about 105F. LOL


Travis



Re: [AFMUG] Data center temperatures

2016-05-11 Thread Robert Andrews

Exactly...  Hence our love for the old MAE East...

On 05/11/2016 04:47 PM, Josh Luthman wrote:

Parking garages are generally hotter then hell or balls cold in my
experience.


Josh Luthman
Office: 937-552-2340
Direct: 937-552-2343
1100 Wayne St
Suite 1337
Troy, OH 45373


Re: [AFMUG] Data center temperatures

2016-05-11 Thread Josh Luthman
Parking garages are generally hotter than hell or balls cold in my
experience.


Josh Luthman
Office: 937-552-2340
Direct: 937-552-2343
1100 Wayne St
Suite 1337
Troy, OH 45373



Re: [AFMUG] Data center temperatures

2016-05-11 Thread Eric Kuhnke
The temperature sensor location on a 6503/6506/6509 isn't really at the
'raw' air intake, so it's showing warmer than it should be, but yes that
cabinet gets warm...  It's a couple of hundred watts heat load in a
ventilated box. I would estimate the actual intake air temperature if you
were to measure it manually with a thermometer is 26-27C on the right side
of the 6503 as you're facing the front.

The parking garage is pretty much the ambient air temperature of the city
it's located in, but not exposed directly to sunlight.

On Wed, May 11, 2016 at 4:26 PM, Josh Luthman 
wrote:

> 104F air intake?  No way!!!


Re: [AFMUG] Data center temperatures

2016-05-11 Thread Josh Luthman
104F air intake?  No way!!!
On May 11, 2016 7:15 PM, "Eric Kuhnke"  wrote:

> Here's a chart from 2014, it's the air intake temperature sensor for a
> cisco 6503 in a wall mounted cabinet 9' in the air in a parking garage. The
> daily cycles are the ambient air temperature in the garage changing.


Re: [AFMUG] Data center temperatures

2016-05-11 Thread Keefe John

We do 75 degrees


On 5/11/2016 5:51 PM, Robert Andrews wrote:
This is related to the lubricant that is used in the drives.   Seagate 
is to blame..   They discovered higher spindle speeds require 
lubricants that like higher temps...   There is a secondary effect due 
to the way that magnetized materials flip and hold at higher temps.   
Again, my data may be old as I worked in that industry 20 years ago..


On 05/11/2016 02:58 PM, Chuck McCown wrote:

Yep, hot is good according to Google. Somewhere there is a rotating
media study that shows they last longer at higher temps.  Who woulda 
thunk.


-Original Message- From: Josh Reynolds
Sent: Wednesday, May 11, 2016 2:48 PM
To: af@afmug.com
Subject: Re: [AFMUG] Data center temperatures

Ours is at 68deg F, and we monitor dewpoint and humidity ranges.

However...
http://www.geek.com/chips/googles-most-efficient-data-center-runs-at-95-degrees-1478473/ 




On Wed, May 11, 2016 at 3:37 PM, Josh Luthman
 wrote:

Just curious what the ideal temp is for a data center.  Our really nice
building that Sprint ditched ranges from 60 to 90F (on a site monitor).






Re: [AFMUG] Data center temperatures

2016-05-11 Thread Robert Andrews
This is related to the lubricant that is used in the drives. Seagate is to blame... They discovered higher spindle speeds require lubricants that like higher temps. There is a secondary effect due to the way that magnetized materials flip and hold at higher temps. Again, my data may be old, as I worked in that industry 20 years ago.


On 05/11/2016 02:58 PM, Chuck McCown wrote:

Yep, hot is good according to Google.  Somewhere there is a rotating
media study that shows they last longer at higher temps.  Who woulda thunk.

-Original Message- From: Josh Reynolds
Sent: Wednesday, May 11, 2016 2:48 PM
To: af@afmug.com
Subject: Re: [AFMUG] Data center temperatures

Ours is at 68deg F, and we monitor dewpoint and humidity ranges.

However...
http://www.geek.com/chips/googles-most-efficient-data-center-runs-at-95-degrees-1478473/


On Wed, May 11, 2016 at 3:37 PM, Josh Luthman
 wrote:

Just curious what the ideal temp is for a data center.  Our really nice
building that Sprint ditched ranges from 60 to 90F (on a site monitor).




Re: [AFMUG] Data center temperatures

2016-05-11 Thread Chuck McCown
Yep, hot is good according to Google.  Somewhere there is a rotating media 
study that shows they last longer at higher temps.  Who woulda thunk.


-Original Message- 
From: Josh Reynolds

Sent: Wednesday, May 11, 2016 2:48 PM
To: af@afmug.com
Subject: Re: [AFMUG] Data center temperatures

Ours is at 68deg F, and we monitor dewpoint and humidity ranges.

However...
http://www.geek.com/chips/googles-most-efficient-data-center-runs-at-95-degrees-1478473/

On Wed, May 11, 2016 at 3:37 PM, Josh Luthman
 wrote:

Just curious what the ideal temp is for a data center.  Our really nice
building that Sprint ditched ranges from 60 to 90F (on a site monitor). 




Re: [AFMUG] Data center temperatures

2016-05-11 Thread Eric Kuhnke
There have been some fairly large data-set studies showing that an air intake temperature of 77-78F, across huge numbers of servers, does not correlate with a statistically significant increase in failure rate.

http://www.datacenterknowledge.com/archives/2008/09/18/intel-servers-do-fine-with-outside-air/

http://www.datacenterknowledge.com/archives/2012/03/23/too-hot-for-humans-but-google-servers-keep-humming/

How and what you do for cooling definitely depends on the load. Designing a colo facility to use a full 208V 30A circuit per cabinet (5.5kW) in a hot/cold air separated configuration is very different from 'normal' older facilities that are one large open room.
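
For reference, 77-78F is about 25-26C, which sits inside the commonly cited ASHRAE recommended intake envelope of roughly 18-27C (those bounds are quoted from memory here, so treat them as approximate). A trivial check:

    def f_to_c(f):
        return (f - 32.0) * 5.0 / 9.0

    RECOMMENDED_C = (18.0, 27.0)   # approximate recommended intake envelope

    for f in (60, 72, 78, 90):
        c = f_to_c(f)
        status = "inside" if RECOMMENDED_C[0] <= c <= RECOMMENDED_C[1] else "outside"
        print(f"{f}F = {c:.1f}C -> {status} the recommended envelope")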



On Wed, May 11, 2016 at 1:58 PM, Ken Hohhof  wrote:

> I’m not sure you can answer the question without knowing the max heat load
> per cabinet and how you manage airflow in the cabinets.
>
> AFAIK it used to be standard practice to keep data centers as cold as
> possible without requiring people to wear parkas, but energy efficiency is
> a consideration now.
>
>
> *From:* That One Guy /sarcasm 
> *Sent:* Wednesday, May 11, 2016 3:51 PM
> *To:* af@afmug.com
> *Subject:* Re: [AFMUG] Data center temperatures
>
> apparently 72 is the the ideal for our noc, i set our thermostat to 60 and
> it always gets turned back to 72, so i just say fuck it, I wanted new gear
> in the racks anyway
>
> On Wed, May 11, 2016 at 3:46 PM, Larry Smith  wrote:
>
>> On Wed May 11 2016 15:37, Josh Luthman wrote:
>> > Just curious what the ideal temp is for a data center.  Our really nice
>> > building that Sprint ditched ranges from 60 to 90F (on a site monitor).
>>
>> I try to keep my NOC room at about 62F, that puts many of the CPU's
>> at 83 to 90F.  Many of the bigger places I visit will generally be 55 to
>> 60F.
>> Loads of computers (data center type) are primarily groupings of little
>> heaters...
>>
>> --
>> Larry Smith
>> lesm...@ecsis.net
>>
>
>
>
> --
> If you only see yourself as part of the team but you don't see your team
> as part of yourself you have already failed as part of the team.
>


Re: [AFMUG] Data center temperatures

2016-05-11 Thread Josh Reynolds
I have a jacket I leave in the data room. Thankfully our NOC is in a
different building from retail.

On Wed, May 11, 2016 at 3:51 PM, That One Guy /sarcasm
 wrote:
> apparently 72 is the the ideal for our noc, i set our thermostat to 60 and
> it always gets turned back to 72, so i just say fuck it, I wanted new gear
> in the racks anyway
>
> On Wed, May 11, 2016 at 3:46 PM, Larry Smith  wrote:
>>
>> On Wed May 11 2016 15:37, Josh Luthman wrote:
>> > Just curious what the ideal temp is for a data center.  Our really nice
>> > building that Sprint ditched ranges from 60 to 90F (on a site monitor).
>>
>> I try to keep my NOC room at about 62F, that puts many of the CPU's
>> at 83 to 90F.  Many of the bigger places I visit will generally be 55 to
>> 60F.
>> Loads of computers (data center type) are primarily groupings of little
>> heaters...
>>
>> --
>> Larry Smith
>> lesm...@ecsis.net
>
>
>
>
> --
> If you only see yourself as part of the team but you don't see your team as
> part of yourself you have already failed as part of the team.


Re: [AFMUG] Data center temperatures

2016-05-11 Thread Ken Hohhof
I’m not sure you can answer the question without knowing the max heat load per 
cabinet and how you manage airflow in the cabinets.

AFAIK it used to be standard practice to keep data centers as cold as possible 
without requiring people to wear parkas, but energy efficiency is a 
consideration now.


From: That One Guy /sarcasm 
Sent: Wednesday, May 11, 2016 3:51 PM
To: af@afmug.com 
Subject: Re: [AFMUG] Data center temperatures

apparently 72 is the the ideal for our noc, i set our thermostat to 60 and it 
always gets turned back to 72, so i just say fuck it, I wanted new gear in the 
racks anyway

On Wed, May 11, 2016 at 3:46 PM, Larry Smith  wrote:

  On Wed May 11 2016 15:37, Josh Luthman wrote:
  > Just curious what the ideal temp is for a data center.  Our really nice
  > building that Sprint ditched ranges from 60 to 90F (on a site monitor).

  I try to keep my NOC room at about 62F, that puts many of the CPU's
  at 83 to 90F.  Many of the bigger places I visit will generally be 55 to 60F.
  Loads of computers (data center type) are primarily groupings of little
  heaters...

  --
  Larry Smith
  lesm...@ecsis.net





-- 

If you only see yourself as part of the team but you don't see your team as 
part of yourself you have already failed as part of the team.

Re: [AFMUG] Data center temperatures

2016-05-11 Thread Josh Luthman
62 to 72 is all I was looking for, just curious =)  Thanks for the quick
answers.


Josh Luthman
Office: 937-552-2340
Direct: 937-552-2343
1100 Wayne St
Suite 1337
Troy, OH 45373

On Wed, May 11, 2016 at 4:52 PM, Eric Kuhnke  wrote:

> Depends on where the temperature sensors are?  If we're talking about
> colocation type cabinets with front-to-rear airflow, and mesh doors on both
> ends, air intake temperature should be around 20C on the intake side.
>
> Or open two-post/relay racks in a room?
>
> Hot aisle/cold aisle separated, or everything together in one space?
>
> The key metric is air intake temperature, most routers have a separate
> SNMP OID for intake temperature where there is a small diode near the air
> intake vents. Same with better servers, there's an air intake sensor on
> Dells and others near the front intake fans.
> http://en.community.dell.com/cfs-file/__key/communityserver-discussions-components-files/956/Fan2.PNG
>
>
>
>
> On Wed, May 11, 2016 at 1:37 PM, Josh Luthman  > wrote:
>
>> Just curious what the ideal temp is for a data center.  Our really nice
>> building that Sprint ditched ranges from 60 to 90F (on a site monitor).
>>
>
>


Re: [AFMUG] Data center temperatures

2016-05-11 Thread Eric Kuhnke
Depends on where the temperature sensors are?  If we're talking about
colocation type cabinets with front-to-rear airflow, and mesh doors on both
ends, air intake temperature should be around 20C on the intake side.

Or open two-post/relay racks in a room?

Hot aisle/cold aisle separated, or everything together in one space?

The key metric is air intake temperature, most routers have a separate SNMP
OID for intake temperature where there is a small diode near the air intake
vents. Same with better servers, there's an air intake sensor on Dells and
others near the front intake fans.
http://en.community.dell.com/cfs-file/__key/communityserver-discussions-components-files/956/Fan2.PNG
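
As an illustration of polling that intake sensor, here is a minimal sketch that shells out to net-snmp's snmpget. The management address and community string are placeholders, and the OID shown is sysUpTime purely as a stand-in; substitute the intake-temperature OID documented for your particular router or server model:

    import subprocess

    HOST = "192.0.2.10"              # placeholder management address
    COMMUNITY = "public"             # placeholder read community
    OID = "1.3.6.1.2.1.1.3.0"        # sysUpTime as a stand-in; use your intake-temp OID

    def snmp_read(oid):
        result = subprocess.run(
            ["snmpget", "-v2c", "-c", COMMUNITY, "-Ovq", HOST, oid],
            capture_output=True, text=True, check=True)
        return result.stdout.strip()

    print("sensor reading:", snmp_read(OID))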




On Wed, May 11, 2016 at 1:37 PM, Josh Luthman 
wrote:

> Just curious what the ideal temp is for a data center.  Our really nice
> building that Sprint ditched ranges from 60 to 90F (on a site monitor).
>


Re: [AFMUG] Data center temperatures

2016-05-11 Thread That One Guy /sarcasm
apparently 72 is the ideal for our noc, i set our thermostat to 60 and
it always gets turned back to 72, so i just say fuck it, I wanted new gear
in the racks anyway

On Wed, May 11, 2016 at 3:46 PM, Larry Smith  wrote:

> On Wed May 11 2016 15:37, Josh Luthman wrote:
> > Just curious what the ideal temp is for a data center.  Our really nice
> > building that Sprint ditched ranges from 60 to 90F (on a site monitor).
>
> I try to keep my NOC room at about 62F, that puts many of the CPU's
> at 83 to 90F.  Many of the bigger places I visit will generally be 55 to
> 60F.
> Loads of computers (data center type) are primarily groupings of little
> heaters...
>
> --
> Larry Smith
> lesm...@ecsis.net
>



-- 
If you only see yourself as part of the team but you don't see your team as
part of yourself you have already failed as part of the team.


Re: [AFMUG] Data center temperatures

2016-05-11 Thread Josh Reynolds
Ours is at 68deg F, and we monitor dewpoint and humidity ranges.

However...
http://www.geek.com/chips/googles-most-efficient-data-center-runs-at-95-degrees-1478473/

On Wed, May 11, 2016 at 3:37 PM, Josh Luthman
 wrote:
> Just curious what the ideal temp is for a data center.  Our really nice
> building that Sprint ditched ranges from 60 to 90F (on a site monitor).


Re: [AFMUG] Data center temperatures

2016-05-11 Thread Larry Smith
On Wed May 11 2016 15:37, Josh Luthman wrote:
> Just curious what the ideal temp is for a data center.  Our really nice
> building that Sprint ditched ranges from 60 to 90F (on a site monitor).

I try to keep my NOC room at about 62F, that puts many of the CPU's
at 83 to 90F.  Many of the bigger places I visit will generally be 55 to 60F.
Loads of computers (data center type) are primarily groupings of little
heaters...

-- 
Larry Smith
lesm...@ecsis.net