Wow that's cool!  What kind of hardware are they?

Josh Luthman
Office: 937-552-2340
Direct: 937-552-2343
1100 Wayne St
Suite 1337
Troy, OH 45373

On Sun, May 15, 2016 at 12:13 AM, Faisal Imtiaz <fai...@snappytelecom.net>
wrote:

> It's worth noting that we are putting in some new servers, and in the BIOS
> these have a setting that delays a random amount of time (50-120 seconds)
> before returning to the powered-on state after a power loss ...
>
> :)
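>
> Roughly what that buys you, as a quick Python sketch (the 50-120 second
> window is from above; the server count is just illustrative, not tied to
> any particular vendor's firmware):
>
> import random
>
> # Each server independently picks a random power-on delay in the
> # 50-120 second window after utility power returns, so the inrush from
> # a rack full of machines is spread out instead of hitting all at once.
> def power_on_delays(num_servers, low=50.0, high=120.0, seed=None):
>     rng = random.Random(seed)
>     return sorted(rng.uniform(low, high) for _ in range(num_servers))
>
> delays = power_on_delays(num_servers=40, seed=1)
> print("first server up at %.0fs, last at %.0fs" % (delays[0], delays[-1]))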
>
> Faisal Imtiaz
> Snappy Internet & Telecom
> 7266 SW 48 Street
> Miami, FL 33155
> Tel: 305 663 5518 x 232
>
> Help-desk: (305)663-5518 Option 2 or Email: supp...@snappytelecom.net
>
> ------------------------------
>
> *From: *"Chuck McCown" <ch...@wbmfg.com>
> *To: *af@afmug.com
> *Sent: *Saturday, May 14, 2016 11:40:09 PM
>
> *Subject: *Re: [AFMUG] Data center temperatures
>
> I remember being at a data center on a hot summer day.  Power went out and
> the generator started.  Things were fine... then all the air conditioners
> switched on at the same time and actually stalled the generator.  We had to
> put sequencers on the AC.
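>
> A toy model of why the sequencer helps - the generator rating, unit count,
> and starting-current multiplier below are made-up figures, not the ones
> from that site:
>
> # Toy numbers (assumptions, not from the story above): a compressor that
> # runs at 15 kW can pull several times that while starting.
> GENERATOR_KW = 150
> UNITS = 5
> RUN_KW = 15
> START_KW = RUN_KW * 5          # rough starting / locked-rotor draw
>
> # Everything restarting the instant generator power returns:
> all_at_once = UNITS * START_KW
> print("simultaneous start: %d kW vs %d kW generator" % (all_at_once, GENERATOR_KW))
>
> # With a sequencer only one unit is starting at any moment, while the
> # earlier ones have already dropped back to running load:
> sequenced_peak = START_KW + (UNITS - 1) * RUN_KW
> print("sequenced worst case: %d kW" % sequenced_peak)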
>
> *From:* Faisal Imtiaz <fai...@snappytelecom.net>
> *Sent:* Saturday, May 14, 2016 9:20 PM
> *To:* af@afmug.com
> *Subject:* Re: [AFMUG] Data center temperatures
>
> FYI, the Electrical Code (NEC) and most datacenters require that power
> circuits not be loaded beyond 80% of breaker capacity, i.e. a 16 amp draw
> on a 20 amp circuit.
>
> Additionally, one also has to leave headroom on the power circuit to deal
> with start-up draw (inrush current). It's not pretty when you have a crap
> load of servers starting up all together.
>
>
> :)
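>
> In other words (a minimal sketch, assuming server load is treated as a
> continuous load; the breaker sizes are just examples):
>
> # Continuous loads are limited to 80% of the breaker rating; leave extra
> # headroom below even that for start-up inrush.
> def usable_amps(breaker_amps, continuous_factor=0.80):
>     return breaker_amps * continuous_factor
>
> for breaker in (20, 30, 60):
>     amps = usable_amps(breaker)
>     print("%dA breaker -> %.0fA continuous, ~%.0fW at 208V" % (breaker, amps, amps * 208))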
>
> Faisal Imtiaz
> Snappy Internet & Telecom
> 7266 SW 48 Street
> Miami, FL 33155
> Tel: 305 663 5518 x 232
>
> Help-desk: (305)663-5518 Option 2 or Email: supp...@snappytelecom.net
>
> ------------------------------
>
> *From: *"Eric Kuhnke" <eric.kuh...@gmail.com>
> *To: *af@afmug.com
> *Sent: *Saturday, May 14, 2016 7:50:22 PM
> *Subject: *Re: [AFMUG] Data center temperatures
>
> How does a 44U cabinet need 208V 60A for storage arrays?
>
> In a 4U chassis the max number of hard drives (front and rear) is about 60 x 3.5"...
>
> Say each drive is 7.5W TDP, that's 450W of drives. Add another 200W for
> controller/motherboard and fans. 650W in 4U.
>
> 44 / 4 = 11
>
> Multiply by 650W:
>
> 7150W
>
> More realistically, with a normal number of drives (say 40 per 4U), a
> single 208V 30A circuit is sufficient:
>
> 208 x 30 = 6240W
>
> Run at max 0.85 load on the circuit, so
>
> 6240 x 0.85 = 5304W
>
> In a really dense 2.5" environment all of the above is of course invalid;
> you could need up to 7900W per cabinet.
> Then there are 52U cabinets as well...
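>
> The arithmetic above, written out (the drive TDP, controller overhead,
> chassis counts, and 0.85 loading factor are the same back-of-the-envelope
> figures as in the text, nothing measured):
>
> DRIVE_W = 7.5            # per 3.5" drive, rough TDP
> CONTROLLER_W = 200       # controller/motherboard and fans per 4U chassis
>
> def chassis_watts(drives):
>     return drives * DRIVE_W + CONTROLLER_W
>
> chassis_per_cabinet = 44 // 4                      # 11 x 4U in a 44U rack
> dense = chassis_per_cabinet * chassis_watts(60)    # 60 drives per 4U -> 7150 W
> typical = chassis_per_cabinet * chassis_watts(40)  # 40 drives per 4U -> 5500 W
>
> circuit_va = 208 * 30            # single 208V 30A feed = 6240 VA
> usable = circuit_va * 0.85       # run at max 0.85 load -> 5304 W
>
> print("dense: %.0f W, typical: %.0f W, usable on 208V/30A: %.0f W" % (dense, typical, usable))
>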
> On May 13, 2016 6:16 PM, "Paul Stewart" <p...@paulstewart.org> wrote:
>
> Yup … the general trend in new data centers is to push those temperatures
> higher for efficiency, but also with better designs …
>
>
>
> One of our data centers runs at 78F and has no issues – each cabinet is
> standard 208V 30A as you mention, but individual cabinets can go much
> higher if needed (i.e. 208V 60A for storage arrays)
>
>
>
> *From:* Af [mailto:af-boun...@afmug.com] *On Behalf Of *Eric Kuhnke
> *Sent:* May 11, 2016 5:15 PM
>
> *To:* af@afmug.com
> *Subject:* Re: [AFMUG] Data center temperatures
>
>
>
> There have been some fairly large studies showing that, across huge
> numbers of servers, an air intake temperature of 77-78F does not correlate
> with a statistically significant increase in failure rate.
>
>
> http://www.datacenterknowledge.com/archives/2008/09/18/intel-servers-do-fine-with-outside-air/
>
>
> http://www.datacenterknowledge.com/archives/2012/03/23/too-hot-for-humans-but-google-servers-keep-humming/
>
> How/what you do for cooling is definitely dependent on the load. Designing
> a colo facility to use a full 208V 30A circuit per cabinet (5.5kW) in a
> hot/cold aisle separated configuration is very different from 'normal'
> older facilities that are one large open room.
>
>
>
> On Wed, May 11, 2016 at 1:58 PM, Ken Hohhof <af...@kwisp.com> wrote:
>
> I’m not sure you can answer the question without knowing the max heat load
> per cabinet and how you manage airflow in the cabinets.
>
>
>
> AFAIK it used to be standard practice to keep data centers as cold as
> possible without requiring people to wear parkas, but energy efficiency is
> a consideration now.
>
>
>
>
>
> *From:* That One Guy /sarcasm <thatoneguyst...@gmail.com>
>
> *Sent:* Wednesday, May 11, 2016 3:51 PM
>
> *To:* af@afmug.com
>
> *Subject:* Re: [AFMUG] Data center temperatures
>
>
>
> Apparently 72 is the ideal for our NOC. I set our thermostat to 60 and it
> always gets turned back to 72, so I just say fuck it, I wanted new gear
> in the racks anyway
>
>
>
> On Wed, May 11, 2016 at 3:46 PM, Larry Smith <lesm...@ecsis.net> wrote:
>
> On Wed May 11 2016 15:37, Josh Luthman wrote:
> > Just curious what the ideal temp is for a data center.  Our really nice
> > building that Sprint ditched ranges from 60 to 90F (on a site monitor).
>
> I try to keep my NOC room at about 62F; that puts many of the CPUs
> at 83 to 90F.  Many of the bigger places I visit will generally be 55 to
> 60F.
> Loads of computers (data center type) are primarily groupings of little
> heaters...
>
> --
> Larry Smith
> lesm...@ecsis.net
>
>
>
>
>
> --
>
> If you only see yourself as part of the team but you don't see your team
> as part of yourself you have already failed as part of the team.
>
>
>
>
>
