Re: [AFMUG] Data center temperatures

2016-05-14 Thread Josh Reynolds
A lot of the Dell servers I use, as well as a lot of the Supermicro
servers, have that as well. Thankfully many of the RAID/JBOD cards I
use (software RAID FTW, and ZFS doesn't like hardware RAID either) can
also stagger drive startup.

On Sat, May 14, 2016 at 11:13 PM, Faisal Imtiaz
 wrote:
> It would be interesting to note that, we are putting in some new servers,
> and in the bios these have a setting that delays a random amount of time
> between 50 - 120seconds, before returning to power on state after a power
> loss   .
>
> :)
>
> Faisal Imtiaz
> Snappy Internet & Telecom
> 7266 SW 48 Street
> Miami, FL 33155
> Tel: 305 663 5518 x 232
>
> Help-desk: (305)663-5518 Option 2 or Email: supp...@snappytelecom.net
>
> 
>
> From: "Chuck McCown" 
> To: af@afmug.com
> Sent: Saturday, May 14, 2016 11:40:09 PM
>
> Subject: Re: [AFMUG] Data center temperatures
>
> I remembering being at a data center on a hot summer day.   Power went out,
> generator started.  Things were fine... then all the air conditioners
> switched on at the same time.  Actually stalled the generator.  We  had to
> put sequencers on the AC.
>
> From: Faisal Imtiaz
> Sent: Saturday, May 14, 2016 9:20 PM
> To: af@afmug.com
> Subject: Re: [AFMUG] Data center temperatures
>
> FYI, Electrical Code (NECA) and most datacenters require the power not to be
> loaded beyond 80% of breaker capacity... i.e. 16amp draw on a 20amp circuit.
>
> Additionally, one also has to have head room on the power circuit to deal
> with start up draw (current rush). It's not pretty when you have a crap load
> of servers starting up all together
>
>
> :)
>
> Faisal Imtiaz
> Snappy Internet & Telecom
> 7266 SW 48 Street
> Miami, FL 33155
> Tel: 305 663 5518 x 232
>
> Help-desk: (305)663-5518 Option 2 or Email: supp...@snappytelecom.net
>
> 
>
> From: "Eric Kuhnke" 
> To: af@afmug.com
> Sent: Saturday, May 14, 2016 7:50:22 PM
> Subject: Re: [AFMUG] Data center temperatures
>
> How does a 44U cabinet need 208V 60A for storage arrays?
>
> In a 4U chassis the max hard drives (front and rear) is about 60 x 3.5"...
>
> Say each drive is 7.5W TDP, that's 450W of drives. Add another 200W for
> controller/motherboard and fans. 650W in 4U.
>
> 44 / 4 = 11
>
> Multply by 650
>
> 7150W
>
> More realistically with a normal amount of drives (like 40 per 4U) a single
> 208 30A is sufficient,
>
> 208 x 30 = 6240W
>
> Run at max 0.85 load on the circuit, so
>
> 6240 x 0.85 = 5304W
>
> In a really dense 2.5" environment all of the above is of course invalid,
> you could probably need up to 7900W per cabinet
> Then there's 52U cabinets as well...
>
> On May 13, 2016 6:16 PM, "Paul Stewart"  wrote:
>
> Yup … general trends on new data centers are pushing those temperatures
> higher for efficiency but also with better designs ..
>
>
>
> One of our data centers runs at 78F and have no issues – each cabinet is
> standard 208V 30A as you mention but can go per cabinet much higher if
> needed (ie. 208V 60A for storage arrays)
>
>
>
> From: Af [mailto:af-boun...@afmug.com] On Behalf Of Eric Kuhnke
> Sent: May 11, 2016 5:15 PM
>
>
> To: af@afmug.com
> Subject: Re: [AFMUG] Data center temperatures
>
>
>
> There have been some fairly large data set studies done shown that air
> intake temperature for huge numbers of servers, at 77-78F does not correlate
> with a statistically significant rate of failure.
>
> http://www.datacenterknowledge.com/archives/2008/09/18/intel-servers-do-fine-with-outside-air/
>
> http://www.datacenterknowledge.com/archives/2012/03/23/too-hot-for-humans-but-google-servers-keep-humming/
>
> how/what you do for cooling is definitely dependent on the load. Designing a
> colo facility to use a full 208V 30A circuit per cabinet (5.5kW) in a
> hot/cold air separated configuration is very different than 'normal' older
> facilities that are one large open room.
>
>
>
> On Wed, May 11, 2016 at 1:58 PM, Ken Hohhof  wrote:
>
> I’m not sure you can answer the question without knowing the max heat load
> per cabinet and how you manage airflow in the cabinets.
>
>
>
> AFAIK it used to be standard practice to keep data centers as cold as
> possible without requiring people to wear parkas, but energy efficiency is a
> consideration now.
>
>
>
>
>
> From: That One Guy /sarcasm
>
> Sent: Wednesday, May 11, 2016 3:51 PM
>
> To: af@afmug.com
>
> Subject: Re: [AFMUG] Data center temperatures
>
>
>
> apparently 72 is the the ideal for our noc, i set our thermostat to 60 and
> it always gets turned back to 72, so i just say fuck it, I wanted new gear
> in the racks anyway
>
>
>
> On Wed, May 11, 2016 at 3:46 PM, Larry Smith  wrote:
>
> On Wed May 11 2016 15:37, Josh Luthman wrote:
>> Just curious what the ideal temp is for a data center.  Our really nice
>> building that Sprint ditched ranges from 60 to 90F (on a site monitor).
>
> I try to keep my NOC room at about 62F, that puts many of the CPU's
> at 83 to 90F.  Many of the bigger places I visit 

Re: [AFMUG] Data center temperatures

2016-05-14 Thread Josh Luthman
Neat :)

Josh Luthman
Office: 937-552-2340
Direct: 937-552-2343
1100 Wayne St
Suite 1337
Troy, OH 45373
On May 15, 2016 12:26 AM, "Faisal Imtiaz"  wrote:

> nothing special, dell C2100's looks like these settings are getting to
> be more common in stuff designed for high density data center install.
>
> Regards.
>
> Faisal Imtiaz
> Snappy Internet & Telecom
> 7266 SW 48 Street
> Miami, FL 33155
> Tel: 305 663 5518 x 232
>
> Help-desk: (305)663-5518 Option 2 or Email: supp...@snappytelecom.net
>
> --
>
> *From: *"Josh Luthman" 
> *To: *af@afmug.com
> *Sent: *Sunday, May 15, 2016 12:14:49 AM
> *Subject: *Re: [AFMUG] Data center temperatures
>
> Wow that's cool!  What kind of hardware are they?
>
>
> Josh Luthman
> Office: 937-552-2340
> Direct: 937-552-2343
> 1100 Wayne St
> Suite 1337
> Troy, OH 45373
>
> On Sun, May 15, 2016 at 12:13 AM, Faisal Imtiaz 
> wrote:
>
>> It would be interesting to note that, we are putting in some new servers,
>> and in the bios these have a setting that delays a random amount of time
>> between 50 - 120seconds, before returning to power on state after a power
>> loss   .
>>
>> :)
>>
>> Faisal Imtiaz
>> Snappy Internet & Telecom
>> 7266 SW 48 Street
>> Miami, FL 33155
>> Tel: 305 663 5518 x 232
>>
>> Help-desk: (305)663-5518 Option 2 or Email: supp...@snappytelecom.net
>>
>> --
>>
>> *From: *"Chuck McCown" 
>> *To: *af@afmug.com
>> *Sent: *Saturday, May 14, 2016 11:40:09 PM
>>
>> *Subject: *Re: [AFMUG] Data center temperatures
>>
>> I remembering being at a data center on a hot summer day.   Power went
>> out, generator started.  Things were fine... then all the air conditioners
>> switched on at the same time.  Actually stalled the generator.  We  had to
>> put sequencers on the AC.
>>
>> *From:* Faisal Imtiaz 
>> *Sent:* Saturday, May 14, 2016 9:20 PM
>> *To:* af@afmug.com
>> *Subject:* Re: [AFMUG] Data center temperatures
>>
>> FYI, Electrical Code (NECA) and most datacenters require the power not to
>> be loaded beyond 80% of breaker capacity... i.e. 16amp draw on a 20amp
>> circuit.
>>
>> Additionally, one also has to have head room on the power circuit to deal
>> with start up draw (current rush). It's not pretty when you have a crap
>> load of servers starting up all together
>>
>>
>> :)
>>
>> Faisal Imtiaz
>> Snappy Internet & Telecom
>> 7266 SW 48 Street
>> Miami, FL 33155
>> Tel: 305 663 5518 x 232
>>
>> Help-desk: (305)663-5518 Option 2 or Email: supp...@snappytelecom.net
>>
>> --
>>
>> *From: *"Eric Kuhnke" 
>> *To: *af@afmug.com
>> *Sent: *Saturday, May 14, 2016 7:50:22 PM
>> *Subject: *Re: [AFMUG] Data center temperatures
>>
>> How does a 44U cabinet need 208V 60A for storage arrays?
>>
>> In a 4U chassis the max hard drives (front and rear) is about 60 x 3.5"...
>>
>> Say each drive is 7.5W TDP, that's 450W of drives. Add another 200W for
>> controller/motherboard and fans. 650W in 4U.
>>
>> 44 / 4 = 11
>>
>> Multply by 650
>>
>> 7150W
>>
>> More realistically with a normal amount of drives (like 40 per 4U) a
>> single 208 30A is sufficient,
>>
>> 208 x 30 = 6240W
>>
>> Run at max 0.85 load on the circuit, so
>>
>> 6240 x 0.85 = 5304W
>>
>> In a really dense 2.5" environment all of the above is of course invalid,
>> you could probably need up to 7900W per cabinet
>> Then there's 52U cabinets as well...
>> On May 13, 2016 6:16 PM, "Paul Stewart"  wrote:
>>
>> Yup … general trends on new data centers are pushing those temperatures
>> higher for efficiency but also with better designs ..
>>
>>
>>
>> One of our data centers runs at 78F and have no issues – each cabinet is
>> standard 208V 30A as you mention but can go per cabinet much higher if
>> needed (ie. 208V 60A for storage arrays)
>>
>>
>>
>> *From:* Af [mailto:af-boun...@afmug.com] *On Behalf Of *Eric Kuhnke
>> *Sent:* May 11, 2016 5:15 PM
>>
>> *To:* af@afmug.com
>> *Subject:* Re: [AFMUG] Data center temperatures
>>
>>
>>
>> There have been some fairly large data set studies done shown that air
>> intake temperature for huge numbers of servers, at 77-78F does not
>> correlate with a statistically significant rate of failure.
>>
>>
>> http://www.datacenterknowledge.com/archives/2008/09/18/intel-servers-do-fine-with-outside-air/
>>
>>
>> http://www.datacenterknowledge.com/archives/2012/03/23/too-hot-for-humans-but-google-servers-keep-humming/
>>
>> how/what you do for cooling is definitely dependent on the load.
>> Designing a colo facility to use a full 208V 30A circuit per cabinet
>> (5.5kW) in a hot/cold air separated configuration is very different than
>> 'normal' older facilities that are one large open room.
>>
>>
>>
>> On Wed, May 11, 2016 at 1:58 PM, Ken Hohhof  wrote:
>>
>> I’m not sure you can answer the question without knowing the max heat
>> load per cabinet and how you manage airflow in the cabinets.
>>
>>
>>
>> AFAIK it used to be standard practice to keep data centers as cold as
>> possible without requ

Re: [AFMUG] Data center temperatures

2016-05-14 Thread Faisal Imtiaz
Nothing special, Dell C2100s. It looks like these settings are getting to be 
more common in gear designed for high-density data center installs. 

Regards. 

Faisal Imtiaz 
Snappy Internet & Telecom 
7266 SW 48 Street 
Miami, FL 33155 
Tel: 305 663 5518 x 232 

Help-desk: (305)663-5518 Option 2 or Email: supp...@snappytelecom.net 

> From: "Josh Luthman" 
> To: af@afmug.com
> Sent: Sunday, May 15, 2016 12:14:49 AM
> Subject: Re: [AFMUG] Data center temperatures

> Wow that's cool! What kind of hardware are they?

> Josh Luthman
> Office: 937-552-2340
> Direct: 937-552-2343
> 1100 Wayne St
> Suite 1337
> Troy, OH 45373

> On Sun, May 15, 2016 at 12:13 AM, Faisal Imtiaz < fai...@snappytelecom.net >
> wrote:

>> It would be interesting to note that, we are putting in some new servers, 
>> and in
>> the bios these have a setting that delays a random amount of time between 50 
>> -
>> 120seconds, before returning to power on state after a power loss .

>> :)

>> Faisal Imtiaz
>> Snappy Internet & Telecom
>> 7266 SW 48 Street
>> Miami, FL 33155
>> Tel: 305 663 5518 x 232

>> Help-desk: (305)663-5518 Option 2 or Email: supp...@snappytelecom.net

>>> From: "Chuck McCown" < ch...@wbmfg.com >
>>> To: af@afmug.com
>>> Sent: Saturday, May 14, 2016 11:40:09 PM

>>> Subject: Re: [AFMUG] Data center temperatures

>>> I remembering being at a data center on a hot summer day. Power went out,
>>> generator started. Things were fine... then all the air conditioners 
>>> switched
>>> on at the same time. Actually stalled the generator. We had to put 
>>> sequencers
>>> on the AC.
>>> From: Faisal Imtiaz
>>> Sent: Saturday, May 14, 2016 9:20 PM
>>> To: af@afmug.com
>>> Subject: Re: [AFMUG] Data center temperatures
>>> FYI, Electrical Code (NECA) and most datacenters require the power not to be
>>> loaded beyond 80% of breaker capacity... i.e. 16amp draw on a 20amp circuit.
>>> Additionally, one also has to have head room on the power circuit to deal 
>>> with
>>> start up draw (current rush). It's not pretty when you have a crap load of
>>> servers starting up all together
>>> :)
>>> Faisal Imtiaz
>>> Snappy Internet & Telecom
>>> 7266 SW 48 Street
>>> Miami, FL 33155
>>> Tel: 305 663 5518 x 232

>>> Help-desk: (305)663-5518 Option 2 or Email: supp...@snappytelecom.net

 From: "Eric Kuhnke" < eric.kuh...@gmail.com >
 To: af@afmug.com
 Sent: Saturday, May 14, 2016 7:50:22 PM
 Subject: Re: [AFMUG] Data center temperatures

 How does a 44U cabinet need 208V 60A for storage arrays?

 In a 4U chassis the max hard drives (front and rear) is about 60 x 3.5"...

 Say each drive is 7.5W TDP, that's 450W of drives. Add another 200W for
 controller/motherboard and fans. 650W in 4U.

 44 / 4 = 11

 Multply by 650

 7150W

 More realistically with a normal amount of drives (like 40 per 4U) a 
 single 208
 30A is sufficient,

 208 x 30 = 6240W

 Run at max 0.85 load on the circuit, so

 6240 x 0.85 = 5304W

 In a really dense 2.5" environment all of the above is of course invalid, 
 you
 could probably need up to 7900W per cabinet
 Then there's 52U cabinets as well...
 On May 13, 2016 6:16 PM, "Paul Stewart" < p...@paulstewart.org > wrote:

> Yup … general trends on new data centers are pushing those temperatures 
> higher
> for efficiency but also with better designs ..

> One of our data centers runs at 78F and have no issues – each cabinet is
> standard 208V 30A as you mention but can go per cabinet much higher if 
> needed
> (ie. 208V 60A for storage arrays)

> From: Af [mailto: af-boun...@afmug.com ] On Behalf Of Eric Kuhnke
> Sent: May 11, 2016 5:15 PM

> To: af@afmug.com
> Subject: Re: [AFMUG] Data center temperatures

> There have been some fairly large data set studies done shown that air 
> intake
> temperature for huge numbers of servers, at 77-78F does not correlate 
> with a
> statistically significant rate of failure.

> http://www.datacenterknowledge.com/archives/2008/09/18/intel-servers-do-fine-with-outside-air/

> http://www.datacenterknowledge.com/archives/2012/03/23/too-hot-for-humans-but-google-servers-keep-humming/

> how/what you do for cooling is definitely dependent on the load. 
> Designing a
> colo facility to use a full 208V 30A circuit per cabinet (5.5kW) in a 
> hot/cold
> air separated configuration is very different than 'normal' older 
> facilities
> that are one large open room.

> On Wed, May 11, 2016 at 1:58 PM, Ken Hohhof < af...@kwisp.com > wrote:

>> I’m not sure you can answer the question without knowing the max heat 
>> load per
>> cabinet and how you manage airflow in the cabinets.

>> AFAIK it used to be standard practice to keep data centers as cold as 
>> possible
>> without requiring people to wear parkas, but energy efficiency is a
>> consideration now.

>> From: That 

Re: [AFMUG] Data center temperatures

2016-05-14 Thread Josh Luthman
Wow that's cool!  What kind of hardware are they?


Josh Luthman
Office: 937-552-2340
Direct: 937-552-2343
1100 Wayne St
Suite 1337
Troy, OH 45373

On Sun, May 15, 2016 at 12:13 AM, Faisal Imtiaz 
wrote:

> It would be interesting to note that, we are putting in some new servers,
> and in the bios these have a setting that delays a random amount of time
> between 50 - 120seconds, before returning to power on state after a power
> loss   .
>
> :)
>
> Faisal Imtiaz
> Snappy Internet & Telecom
> 7266 SW 48 Street
> Miami, FL 33155
> Tel: 305 663 5518 x 232
>
> Help-desk: (305)663-5518 Option 2 or Email: supp...@snappytelecom.net
>
> --
>
> *From: *"Chuck McCown" 
> *To: *af@afmug.com
> *Sent: *Saturday, May 14, 2016 11:40:09 PM
>
> *Subject: *Re: [AFMUG] Data center temperatures
>
> I remembering being at a data center on a hot summer day.   Power went
> out, generator started.  Things were fine... then all the air conditioners
> switched on at the same time.  Actually stalled the generator.  We  had to
> put sequencers on the AC.
>
> *From:* Faisal Imtiaz 
> *Sent:* Saturday, May 14, 2016 9:20 PM
> *To:* af@afmug.com
> *Subject:* Re: [AFMUG] Data center temperatures
>
> FYI, Electrical Code (NECA) and most datacenters require the power not to
> be loaded beyond 80% of breaker capacity... i.e. 16amp draw on a 20amp
> circuit.
>
> Additionally, one also has to have head room on the power circuit to deal
> with start up draw (current rush). It's not pretty when you have a crap
> load of servers starting up all together
>
>
> :)
>
> Faisal Imtiaz
> Snappy Internet & Telecom
> 7266 SW 48 Street
> Miami, FL 33155
> Tel: 305 663 5518 x 232
>
> Help-desk: (305)663-5518 Option 2 or Email: supp...@snappytelecom.net
>
> --
>
> *From: *"Eric Kuhnke" 
> *To: *af@afmug.com
> *Sent: *Saturday, May 14, 2016 7:50:22 PM
> *Subject: *Re: [AFMUG] Data center temperatures
>
> How does a 44U cabinet need 208V 60A for storage arrays?
>
> In a 4U chassis the max hard drives (front and rear) is about 60 x 3.5"...
>
> Say each drive is 7.5W TDP, that's 450W of drives. Add another 200W for
> controller/motherboard and fans. 650W in 4U.
>
> 44 / 4 = 11
>
> Multply by 650
>
> 7150W
>
> More realistically with a normal amount of drives (like 40 per 4U) a
> single 208 30A is sufficient,
>
> 208 x 30 = 6240W
>
> Run at max 0.85 load on the circuit, so
>
> 6240 x 0.85 = 5304W
>
> In a really dense 2.5" environment all of the above is of course invalid,
> you could probably need up to 7900W per cabinet
> Then there's 52U cabinets as well...
> On May 13, 2016 6:16 PM, "Paul Stewart"  wrote:
>
> Yup … general trends on new data centers are pushing those temperatures
> higher for efficiency but also with better designs ..
>
>
>
> One of our data centers runs at 78F and have no issues – each cabinet is
> standard 208V 30A as you mention but can go per cabinet much higher if
> needed (ie. 208V 60A for storage arrays)
>
>
>
> *From:* Af [mailto:af-boun...@afmug.com] *On Behalf Of *Eric Kuhnke
> *Sent:* May 11, 2016 5:15 PM
>
> *To:* af@afmug.com
> *Subject:* Re: [AFMUG] Data center temperatures
>
>
>
> There have been some fairly large data set studies done shown that air
> intake temperature for huge numbers of servers, at 77-78F does not
> correlate with a statistically significant rate of failure.
>
>
> http://www.datacenterknowledge.com/archives/2008/09/18/intel-servers-do-fine-with-outside-air/
>
>
> http://www.datacenterknowledge.com/archives/2012/03/23/too-hot-for-humans-but-google-servers-keep-humming/
>
> how/what you do for cooling is definitely dependent on the load. Designing
> a colo facility to use a full 208V 30A circuit per cabinet (5.5kW) in a
> hot/cold air separated configuration is very different than 'normal' older
> facilities that are one large open room.
>
>
>
> On Wed, May 11, 2016 at 1:58 PM, Ken Hohhof  wrote:
>
> I’m not sure you can answer the question without knowing the max heat load
> per cabinet and how you manage airflow in the cabinets.
>
>
>
> AFAIK it used to be standard practice to keep data centers as cold as
> possible without requiring people to wear parkas, but energy efficiency is
> a consideration now.
>
>
>
>
>
> *From:* That One Guy /sarcasm 
>
> *Sent:* Wednesday, May 11, 2016 3:51 PM
>
> *To:* af@afmug.com
>
> *Subject:* Re: [AFMUG] Data center temperatures
>
>
>
> apparently 72 is the the ideal for our noc, i set our thermostat to 60 and
> it always gets turned back to 72, so i just say fuck it, I wanted new gear
> in the racks anyway
>
>
>
> On Wed, May 11, 2016 at 3:46 PM, Larry Smith  wrote:
>
> On Wed May 11 2016 15:37, Josh Luthman wrote:
> > Just curious what the ideal temp is for a data center.  Our really nice
> > building that Sprint ditched ranges from 60 to 90F (on a site monitor).
>
> I try to keep my NOC room at about 62F, that puts many of the CPU's
> at 83 to 90F.  Many of the bigger places I visit will generally be 55 to
> 6

Re: [AFMUG] Data center temperatures

2016-05-14 Thread Faisal Imtiaz
It is interesting to note that we are putting in some new servers, and in 
the BIOS these have a setting that adds a random delay of 50-120 seconds 
before returning to the power-on state after a power loss. 
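
The effect of that random delay is easy to sanity-check with a throwaway script (illustrative Python; the fleet size and the ~10 second inrush window are made-up numbers, only the 50-120 second delay comes from the BIOS setting above):

import random

SERVERS = 200          # hypothetical fleet size
INRUSH_SECONDS = 10    # assumed time each server draws heavy startup current

random.seed(1)
starts = [random.uniform(50, 120) for _ in range(SERVERS)]

# How many servers are in their inrush window at any one second?
peak = max(
    sum(1 for s in starts if s <= t < s + INRUSH_SECONDS)
    for t in range(0, 140)
)
print(f"no stagger: {SERVERS} servers hit the feed at once")
print(f"random 50-120 s delay: at most {peak} at once")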

:) 

Faisal Imtiaz 
Snappy Internet & Telecom 
7266 SW 48 Street 
Miami, FL 33155 
Tel: 305 663 5518 x 232 

Help-desk: (305)663-5518 Option 2 or Email: supp...@snappytelecom.net 

> From: "Chuck McCown" 
> To: af@afmug.com
> Sent: Saturday, May 14, 2016 11:40:09 PM
> Subject: Re: [AFMUG] Data center temperatures

> I remembering being at a data center on a hot summer day. Power went out,
> generator started. Things were fine... then all the air conditioners switched
> on at the same time. Actually stalled the generator. We had to put sequencers
> on the AC.
> From: Faisal Imtiaz
> Sent: Saturday, May 14, 2016 9:20 PM
> To: af@afmug.com
> Subject: Re: [AFMUG] Data center temperatures
> FYI, Electrical Code (NECA) and most datacenters require the power not to be
> loaded beyond 80% of breaker capacity... i.e. 16amp draw on a 20amp circuit.
> Additionally, one also has to have head room on the power circuit to deal with
> start up draw (current rush). It's not pretty when you have a crap load of
> servers starting up all together
> :)
> Faisal Imtiaz
> Snappy Internet & Telecom
> 7266 SW 48 Street
> Miami, FL 33155
> Tel: 305 663 5518 x 232

> Help-desk: (305)663-5518 Option 2 or Email: supp...@snappytelecom.net

>> From: "Eric Kuhnke" 
>> To: af@afmug.com
>> Sent: Saturday, May 14, 2016 7:50:22 PM
>> Subject: Re: [AFMUG] Data center temperatures

>> How does a 44U cabinet need 208V 60A for storage arrays?

>> In a 4U chassis the max hard drives (front and rear) is about 60 x 3.5"...

>> Say each drive is 7.5W TDP, that's 450W of drives. Add another 200W for
>> controller/motherboard and fans. 650W in 4U.

>> 44 / 4 = 11

>> Multply by 650

>> 7150W

>> More realistically with a normal amount of drives (like 40 per 4U) a single 
>> 208
>> 30A is sufficient,

>> 208 x 30 = 6240W

>> Run at max 0.85 load on the circuit, so

>> 6240 x 0.85 = 5304W

>> In a really dense 2.5" environment all of the above is of course invalid, you
>> could probably need up to 7900W per cabinet
>> Then there's 52U cabinets as well...
>> On May 13, 2016 6:16 PM, "Paul Stewart" < p...@paulstewart.org > wrote:

>>> Yup … general trends on new data centers are pushing those temperatures 
>>> higher
>>> for efficiency but also with better designs ..

>>> One of our data centers runs at 78F and have no issues – each cabinet is
>>> standard 208V 30A as you mention but can go per cabinet much higher if 
>>> needed
>>> (ie. 208V 60A for storage arrays)

>>> From: Af [mailto: af-boun...@afmug.com ] On Behalf Of Eric Kuhnke
>>> Sent: May 11, 2016 5:15 PM

>>> To: af@afmug.com
>>> Subject: Re: [AFMUG] Data center temperatures

>>> There have been some fairly large data set studies done shown that air 
>>> intake
>>> temperature for huge numbers of servers, at 77-78F does not correlate with a
>>> statistically significant rate of failure.

>>> http://www.datacenterknowledge.com/archives/2008/09/18/intel-servers-do-fine-with-outside-air/

>>> http://www.datacenterknowledge.com/archives/2012/03/23/too-hot-for-humans-but-google-servers-keep-humming/

>>> how/what you do for cooling is definitely dependent on the load. Designing a
>>> colo facility to use a full 208V 30A circuit per cabinet (5.5kW) in a 
>>> hot/cold
>>> air separated configuration is very different than 'normal' older facilities
>>> that are one large open room.

>>> On Wed, May 11, 2016 at 1:58 PM, Ken Hohhof < af...@kwisp.com > wrote:

 I’m not sure you can answer the question without knowing the max heat load 
 per
 cabinet and how you manage airflow in the cabinets.

 AFAIK it used to be standard practice to keep data centers as cold as 
 possible
 without requiring people to wear parkas, but energy efficiency is a
 consideration now.

 From: That One Guy /sarcasm

 Sent: Wednesday, May 11, 2016 3:51 PM

 To: af@afmug.com

 Subject: Re: [AFMUG] Data center temperatures

 apparently 72 is the the ideal for our noc, i set our thermostat to 60 and 
 it
 always gets turned back to 72, so i just say fuck it, I wanted new gear in 
 the
 racks anyway

 On Wed, May 11, 2016 at 3:46 PM, Larry Smith < lesm...@ecsis.net > wrote:
> On Wed May 11 2016 15:37, Josh Luthman wrote:
> > Just curious what the ideal temp is for a data center. Our really nice
> > building that Sprint ditched ranges from 60 to 90F (on a site monitor).

> I try to keep my NOC room at about 62F, that puts many of the CPU's
> at 83 to 90F. Many of the bigger places I visit will generally be 55 to 
> 60F.
> Loads of computers (data center type) are primarily groupings of little
> heaters...

> --
> Larry Smith
> lesm...@ecsis.net
 --

 If you only see yourself as part o

Re: [AFMUG] Data center temperatures

2016-05-14 Thread Chuck McCown
I remember being at a data center on a hot summer day.  Power went out, 
generator started.  Things were fine... then all the air conditioners switched 
on at the same time.  It actually stalled the generator.  We had to put 
sequencers on the AC.  

From: Faisal Imtiaz 
Sent: Saturday, May 14, 2016 9:20 PM
To: af@afmug.com 
Subject: Re: [AFMUG] Data center temperatures

FYI, Electrical Code (NECA) and most datacenters require the power not to be 
loaded beyond 80% of breaker capacity... i.e. 16amp draw on a 20amp circuit.

Additionally, one also has to have head room on the power circuit to deal with 
start up draw (current rush). It's not pretty when you have a crap load of 
servers starting up all together 


:)

Faisal Imtiaz
Snappy Internet & Telecom
7266 SW 48 Street
Miami, FL 33155
Tel: 305 663 5518 x 232

Help-desk: (305)663-5518 Option 2 or Email: supp...@snappytelecom.net




  From: "Eric Kuhnke" 
  To: af@afmug.com
  Sent: Saturday, May 14, 2016 7:50:22 PM
  Subject: Re: [AFMUG] Data center temperatures

  How does a 44U cabinet need 208V 60A for storage arrays?

  In a 4U chassis the max hard drives (front and rear) is about 60 x 3.5"...

  Say each drive is 7.5W TDP, that's 450W of drives. Add another 200W for 
controller/motherboard and fans. 650W in 4U.

  44 / 4 = 11

  Multply by 650

  7150W

  More realistically with a normal amount of drives (like 40 per 4U) a single 
208 30A is sufficient,

  208 x 30 = 6240W

  Run at max 0.85 load on the circuit, so

  6240 x 0.85 = 5304W


  In a really dense 2.5" environment all of the above is of course invalid, you 
could probably need up to 7900W per cabinet
  Then there's 52U cabinets as well...

  On May 13, 2016 6:16 PM, "Paul Stewart"  wrote:

Yup … general trends on new data centers are pushing those temperatures 
higher for efficiency but also with better designs ..



One of our data centers runs at 78F and have no issues – each cabinet is 
standard 208V 30A as you mention but can go per cabinet much higher if needed 
(ie. 208V 60A for storage arrays)



From: Af [mailto:af-boun...@afmug.com] On Behalf Of Eric Kuhnke
Sent: May 11, 2016 5:15 PM


To: af@afmug.com
Subject: Re: [AFMUG] Data center temperatures


There have been some fairly large data set studies done shown that air 
intake temperature for huge numbers of servers, at 77-78F does not correlate 
with a statistically significant rate of failure.  


http://www.datacenterknowledge.com/archives/2008/09/18/intel-servers-do-fine-with-outside-air/


http://www.datacenterknowledge.com/archives/2012/03/23/too-hot-for-humans-but-google-servers-keep-humming/



how/what you do for cooling is definitely dependent on the load. Designing 
a colo facility to use a full 208V 30A circuit per cabinet (5.5kW) in a 
hot/cold air separated configuration is very different than 'normal' older 
facilities that are one large open room.





On Wed, May 11, 2016 at 1:58 PM, Ken Hohhof  wrote:

  I’m not sure you can answer the question without knowing the max heat 
load per cabinet and how you manage airflow in the cabinets.



  AFAIK it used to be standard practice to keep data centers as cold as 
possible without requiring people to wear parkas, but energy efficiency is a 
consideration now.





  From: That One Guy /sarcasm 

  Sent: Wednesday, May 11, 2016 3:51 PM

  To: af@afmug.com 

  Subject: Re: [AFMUG] Data center temperatures



  apparently 72 is the the ideal for our noc, i set our thermostat to 60 
and it always gets turned back to 72, so i just say fuck it, I wanted new gear 
in the racks anyway



  On Wed, May 11, 2016 at 3:46 PM, Larry Smith  wrote:

On Wed May 11 2016 15:37, Josh Luthman wrote:
> Just curious what the ideal temp is for a data center.  Our really 
nice
> building that Sprint ditched ranges from 60 to 90F (on a site 
monitor).

I try to keep my NOC room at about 62F, that puts many of the CPU's
at 83 to 90F.  Many of the bigger places I visit will generally be 55 
to 60F.
Loads of computers (data center type) are primarily groupings of little
heaters...

--
Larry Smith
lesm...@ecsis.net







  -- 

  If you only see yourself as part of the team but you don't see your team 
as part of yourself you have already failed as part of the team.






Re: [AFMUG] Data center temperatures

2016-05-14 Thread Faisal Imtiaz
FYI, the electrical code (NEC) and most datacenters require that the power not be 
loaded beyond 80% of breaker capacity, i.e. a 16 amp draw on a 20 amp circuit. 

Additionally, one also has to have headroom on the power circuit to deal with 
startup draw (inrush current). It's not pretty when you have a crapload of 
servers starting up all at once. 
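
For a rough sense of what the 80% rule leaves you (illustrative Python; the circuit sizes are just common examples):

def usable_watts(volts, breaker_amps, derate=0.80):
    """Continuous wattage allowed on a branch circuit at the given derating."""
    return volts * breaker_amps * derate

# The 0.80 figure is the continuous-load rule described above.
for volts, amps in [(120, 20), (208, 30), (208, 60)]:
    print(f"{volts} V {amps} A breaker -> {usable_watts(volts, amps):.0f} W continuous")
# 120 V 20 A -> 1920 W, 208 V 30 A -> 4992 W, 208 V 60 A -> 9984 W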

:) 

Faisal Imtiaz 
Snappy Internet & Telecom 
7266 SW 48 Street 
Miami, FL 33155 
Tel: 305 663 5518 x 232 

Help-desk: (305)663-5518 Option 2 or Email: supp...@snappytelecom.net 

> From: "Eric Kuhnke" 
> To: af@afmug.com
> Sent: Saturday, May 14, 2016 7:50:22 PM
> Subject: Re: [AFMUG] Data center temperatures

> How does a 44U cabinet need 208V 60A for storage arrays?

> In a 4U chassis the max hard drives (front and rear) is about 60 x 3.5"...

> Say each drive is 7.5W TDP, that's 450W of drives. Add another 200W for
> controller/motherboard and fans. 650W in 4U.

> 44 / 4 = 11

> Multply by 650

> 7150W

> More realistically with a normal amount of drives (like 40 per 4U) a single 
> 208
> 30A is sufficient,

> 208 x 30 = 6240W

> Run at max 0.85 load on the circuit, so

> 6240 x 0.85 = 5304W

> In a really dense 2.5" environment all of the above is of course invalid, you
> could probably need up to 7900W per cabinet
> Then there's 52U cabinets as well...
> On May 13, 2016 6:16 PM, "Paul Stewart" < p...@paulstewart.org > wrote:

>> Yup … general trends on new data centers are pushing those temperatures 
>> higher
>> for efficiency but also with better designs ..

>> One of our data centers runs at 78F and have no issues – each cabinet is
>> standard 208V 30A as you mention but can go per cabinet much higher if needed
>> (ie. 208V 60A for storage arrays)

>> From: Af [mailto: af-boun...@afmug.com ] On Behalf Of Eric Kuhnke
>> Sent: May 11, 2016 5:15 PM

>> To: af@afmug.com
>> Subject: Re: [AFMUG] Data center temperatures

>> There have been some fairly large data set studies done shown that air intake
>> temperature for huge numbers of servers, at 77-78F does not correlate with a
>> statistically significant rate of failure.

>> http://www.datacenterknowledge.com/archives/2008/09/18/intel-servers-do-fine-with-outside-air/

>> http://www.datacenterknowledge.com/archives/2012/03/23/too-hot-for-humans-but-google-servers-keep-humming/

>> how/what you do for cooling is definitely dependent on the load. Designing a
>> colo facility to use a full 208V 30A circuit per cabinet (5.5kW) in a 
>> hot/cold
>> air separated configuration is very different than 'normal' older facilities
>> that are one large open room.

>> On Wed, May 11, 2016 at 1:58 PM, Ken Hohhof < af...@kwisp.com > wrote:

>>> I’m not sure you can answer the question without knowing the max heat load 
>>> per
>>> cabinet and how you manage airflow in the cabinets.

>>> AFAIK it used to be standard practice to keep data centers as cold as 
>>> possible
>>> without requiring people to wear parkas, but energy efficiency is a
>>> consideration now.

>>> From: That One Guy /sarcasm

>>> Sent: Wednesday, May 11, 2016 3:51 PM

>>> To: af@afmug.com

>>> Subject: Re: [AFMUG] Data center temperatures

>>> apparently 72 is the the ideal for our noc, i set our thermostat to 60 and 
>>> it
>>> always gets turned back to 72, so i just say fuck it, I wanted new gear in 
>>> the
>>> racks anyway

>>> On Wed, May 11, 2016 at 3:46 PM, Larry Smith < lesm...@ecsis.net > wrote:
 On Wed May 11 2016 15:37, Josh Luthman wrote:
 > Just curious what the ideal temp is for a data center. Our really nice
 > building that Sprint ditched ranges from 60 to 90F (on a site monitor).

 I try to keep my NOC room at about 62F, that puts many of the CPU's
 at 83 to 90F. Many of the bigger places I visit will generally be 55 to 
 60F.
 Loads of computers (data center type) are primarily groupings of little
 heaters...

 --
 Larry Smith
 lesm...@ecsis.net
>>> --

>>> If you only see yourself as part of the team but you don't see your team as 
>>> part
>>> of yourself you have already failed as part of the team.


Re: [AFMUG] Data center temperatures

2016-05-14 Thread Josh Reynolds
Or if you had 3 or 4 MX960s per cabinet...
38 A per power supply x 4 power supplies = 152 A. 152 A per chassis x 3 = 456 A.
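
Rough nameplate math (illustrative Python; assuming those are -48 V DC feeds, which is an assumption, and remembering feed sizing is worst case, not actual draw):

AMPS_PER_PSU, PSUS, CHASSIS, DC_VOLTS = 38, 4, 3, 48

per_chassis_amps = AMPS_PER_PSU * PSUS              # 152 A
cabinet_amps = per_chassis_amps * CHASSIS           # 456 A
print(cabinet_amps, cabinet_amps * DC_VOLTS, "W")   # 456, 21888 W nameplate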

An MX2020 is fun if running via DC power.
". A total of four PDMs can be installed into a router. Each DC PDM
operates with up to nine separate feeds of either 60-amp or 80-amp current
limit . The capacity of these feeds is relayed to system software through a
switch located on the DC PDM."

Oh, GPU clusters. Those are MASSIVE power hogs.

I wonder how much power bitcoin clusters eat?

These are all semi rare or rare cases of course :)
On May 14, 2016 6:59 PM, "Seth Mattinen"  wrote:

> On 5/14/16 16:50, Eric Kuhnke wrote:
>
>> In a really dense 2.5" environment all of the above is of course
>> invalid, you could probably need up to 7900W per cabinet
>>
>
>
> I have customers that peak at 10kW per cabinet, but that's HPC, not
> storage.
>
> ~Seth
>


Re: [AFMUG] Data center temperatures

2016-05-14 Thread Josh Reynolds
Unless you're Dropbox; then you have all kinds of drives crammed into custom
enclosures.

"Basically Diskotech stores 1PB in 18" × 6" × 42" = 4,536 cubic inch
volume, which is 10% bigger than standard 7U. [Backblaze] is [storing]
180TB in 4U. ... Doing the math reveals that Dropbox is basically packing
793TB in 4U.
…
Diskotech is about 30% bigger in volume than [Backblaze] Storage Pod 5.0
but with 470% more storage."
On May 14, 2016 6:50 PM, "Eric Kuhnke"  wrote:

> How does a 44U cabinet need 208V 60A for storage arrays?
>
> In a 4U chassis the max hard drives (front and rear) is about 60 x 3.5"...
>
> Say each drive is 7.5W TDP, that's 450W of drives. Add another 200W for
> controller/motherboard and fans. 650W in 4U.
>
> 44 / 4 = 11
>
> Multply by 650
>
> 7150W
>
> More realistically with a normal amount of drives (like 40 per 4U) a
> single 208 30A is sufficient,
>
> 208 x 30 = 6240W
>
> Run at max 0.85 load on the circuit, so
>
> 6240 x 0.85 = 5304W
>
> In a really dense 2.5" environment all of the above is of course invalid,
> you could probably need up to 7900W per cabinet
> Then there's 52U cabinets as well...
> On May 13, 2016 6:16 PM, "Paul Stewart"  wrote:
>
> Yup … general trends on new data centers are pushing those temperatures
> higher for efficiency but also with better designs ..
>
>
>
> One of our data centers runs at 78F and have no issues – each cabinet is
> standard 208V 30A as you mention but can go per cabinet much higher if
> needed (ie. 208V 60A for storage arrays)
>
>
>
> *From:* Af [mailto:af-boun...@afmug.com] *On Behalf Of *Eric Kuhnke
> *Sent:* May 11, 2016 5:15 PM
>
> *To:* af@afmug.com
> *Subject:* Re: [AFMUG] Data center temperatures
>
>
>
> There have been some fairly large data set studies done shown that air
> intake temperature for huge numbers of servers, at 77-78F does not
> correlate with a statistically significant rate of failure.
>
>
> http://www.datacenterknowledge.com/archives/2008/09/18/intel-servers-do-fine-with-outside-air/
>
>
> http://www.datacenterknowledge.com/archives/2012/03/23/too-hot-for-humans-but-google-servers-keep-humming/
>
> how/what you do for cooling is definitely dependent on the load. Designing
> a colo facility to use a full 208V 30A circuit per cabinet (5.5kW) in a
> hot/cold air separated configuration is very different than 'normal' older
> facilities that are one large open room.
>
>
>
> On Wed, May 11, 2016 at 1:58 PM, Ken Hohhof  wrote:
>
> I’m not sure you can answer the question without knowing the max heat load
> per cabinet and how you manage airflow in the cabinets.
>
>
>
> AFAIK it used to be standard practice to keep data centers as cold as
> possible without requiring people to wear parkas, but energy efficiency is
> a consideration now.
>
>
>
>
>
> *From:* That One Guy /sarcasm 
>
> *Sent:* Wednesday, May 11, 2016 3:51 PM
>
> *To:* af@afmug.com
>
> *Subject:* Re: [AFMUG] Data center temperatures
>
>
>
> apparently 72 is the the ideal for our noc, i set our thermostat to 60 and
> it always gets turned back to 72, so i just say fuck it, I wanted new gear
> in the racks anyway
>
>
>
> On Wed, May 11, 2016 at 3:46 PM, Larry Smith  wrote:
>
> On Wed May 11 2016 15:37, Josh Luthman wrote:
> > Just curious what the ideal temp is for a data center.  Our really nice
> > building that Sprint ditched ranges from 60 to 90F (on a site monitor).
>
> I try to keep my NOC room at about 62F, that puts many of the CPU's
> at 83 to 90F.  Many of the bigger places I visit will generally be 55 to
> 60F.
> Loads of computers (data center type) are primarily groupings of little
> heaters...
>
> --
> Larry Smith
> lesm...@ecsis.net
>
>
>
>
>
> --
>
> If you only see yourself as part of the team but you don't see your team
> as part of yourself you have already failed as part of the team.
>
>
>
>


Re: [AFMUG] Data center temperatures

2016-05-14 Thread Seth Mattinen

On 5/14/16 16:50, Eric Kuhnke wrote:

In a really dense 2.5" environment all of the above is of course
invalid, you could probably need up to 7900W per cabinet



I have customers that peak at 10kW per cabinet, but that's HPC, not storage.

~Seth


Re: [AFMUG] Data center temperatures

2016-05-14 Thread Eric Kuhnke
How does a 44U cabinet need 208V 60A for storage arrays?

In a 4U chassis the max number of hard drives (front and rear) is about 60 x 3.5"...

Say each drive is 7.5W TDP; that's 450W of drives. Add another 200W for
controller/motherboard and fans. 650W in 4U.

44 / 4 = 11

Multiply by 650

7150W

More realistically, with a normal number of drives (like 40 per 4U), a single
208V 30A circuit is sufficient:

208 x 30 = 6240W

Run at max 0.85 load on the circuit, so

6240 x 0.85 = 5304W

In a really dense 2.5" environment all of the above is of course invalid;
you could need up to 7900W per cabinet.
Then there are 52U cabinets as well...
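
For reference, the same back-of-the-envelope numbers as a quick script (illustrative Python; the per-drive wattage and chassis overhead are the assumptions stated above):

DRIVE_W = 7.5              # assumed per-drive draw
DRIVES_PER_4U = 60         # dense 3.5" chassis, front and rear
OVERHEAD_W = 200           # controller/motherboard/fans

per_4u = DRIVE_W * DRIVES_PER_4U + OVERHEAD_W      # 650 W per 4U chassis
cabinet_w = (44 // 4) * per_4u                     # 11 chassis -> 7150 W

nameplate = 208 * 30                               # 6240 W on a 208V 30A circuit
usable = nameplate * 0.85                          # 5304 W at 85% loading
print(per_4u, cabinet_w, nameplate, usable)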
On May 13, 2016 6:16 PM, "Paul Stewart"  wrote:

Yup … general trends on new data centers are pushing those temperatures
higher for efficiency but also with better designs ..



One of our data centers runs at 78F and have no issues – each cabinet is
standard 208V 30A as you mention but can go per cabinet much higher if
needed (ie. 208V 60A for storage arrays)



*From:* Af [mailto:af-boun...@afmug.com] *On Behalf Of *Eric Kuhnke
*Sent:* May 11, 2016 5:15 PM

*To:* af@afmug.com
*Subject:* Re: [AFMUG] Data center temperatures



There have been some fairly large data set studies done shown that air
intake temperature for huge numbers of servers, at 77-78F does not
correlate with a statistically significant rate of failure.

http://www.datacenterknowledge.com/archives/2008/09/18/intel-servers-do-fine-with-outside-air/

http://www.datacenterknowledge.com/archives/2012/03/23/too-hot-for-humans-but-google-servers-keep-humming/

how/what you do for cooling is definitely dependent on the load. Designing
a colo facility to use a full 208V 30A circuit per cabinet (5.5kW) in a
hot/cold air separated configuration is very different than 'normal' older
facilities that are one large open room.



On Wed, May 11, 2016 at 1:58 PM, Ken Hohhof  wrote:

I’m not sure you can answer the question without knowing the max heat load
per cabinet and how you manage airflow in the cabinets.



AFAIK it used to be standard practice to keep data centers as cold as
possible without requiring people to wear parkas, but energy efficiency is
a consideration now.





*From:* That One Guy /sarcasm 

*Sent:* Wednesday, May 11, 2016 3:51 PM

*To:* af@afmug.com

*Subject:* Re: [AFMUG] Data center temperatures



apparently 72 is the the ideal for our noc, i set our thermostat to 60 and
it always gets turned back to 72, so i just say fuck it, I wanted new gear
in the racks anyway



On Wed, May 11, 2016 at 3:46 PM, Larry Smith  wrote:

On Wed May 11 2016 15:37, Josh Luthman wrote:
> Just curious what the ideal temp is for a data center.  Our really nice
> building that Sprint ditched ranges from 60 to 90F (on a site monitor).

I try to keep my NOC room at about 62F, that puts many of the CPU's
at 83 to 90F.  Many of the bigger places I visit will generally be 55 to
60F.
Loads of computers (data center type) are primarily groupings of little
heaters...

--
Larry Smith
lesm...@ecsis.net





-- 

If you only see yourself as part of the team but you don't see your team as
part of yourself you have already failed as part of the team.


Re: [AFMUG] Upstream BGP Questionairre

2016-05-14 Thread Josh Baird
Yes, it requires your upstream to support a blackhole BGP community.  This
allows you to advertise host routes (/32 or smaller) to them using a
specific BGP community when you want your ISP to drop all traffic for the
prefix before it reaches you.  This is -very- useful for DDoS defense.
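
As a rough sketch of the idea (illustrative Python only, not any particular router's or provider's syntax; the /24 below is a documentation block standing in for your own allocation, and 65535:666 is the RFC 7999 well-known BLACKHOLE community, though many upstreams publish their own value):

import ipaddress

# 203.0.113.0/24 stands in for your allocation; 65535:666 is the RFC 7999
# well-known BLACKHOLE community. Check your upstream's published value.
OUR_BLOCK = ipaddress.ip_network("203.0.113.0/24")
BLACKHOLE_COMMUNITY = "65535:666"

def blackhole_announcement(victim_ip: str) -> str:
    """Build the host route plus community you would ask the upstream to drop."""
    host = ipaddress.ip_network(f"{victim_ip}/32")
    if not host.subnet_of(OUR_BLOCK):
        raise ValueError("can only blackhole addresses inside our own block")
    return f"announce {host} community {BLACKHOLE_COMMUNITY}"

print(blackhole_announcement("203.0.113.45"))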

Josh

On Sat, May 14, 2016 at 4:16 PM, That One Guy /sarcasm <
thatoneguyst...@gmail.com> wrote:

> That requires something specific?
> On May 14, 2016 7:33 AM, "Erich Kaiser" 
> wrote:
>
>> We have started requiring our upstreams to filter by ASN vs Netblock.  We
>> are moving away from upstreams that do not utilize IRR Entries and require
>> intervention every time we want to make a change, but it is continuous for
>> us, so for most guys the one time setup is not a big deal, plus the
>> upstream has to be trusting enough that we will have the correct filtering
>> on our end.
>>
>> Steve, I would add Blackhole BGP community or session to your list.
>>
>> Erich Kaiser
>> The Fusion Network
>> er...@gotfusion.net
>> Office: 630-621-4804
>> Cell: 630-777-9291
>>
>> On Sat, May 14, 2016 at 6:34 AM, Paul Stewart 
>> wrote:
>>
>>> Or, quite a number of carriers (especially in APAC, some carriers in
>>> Canada, a few in the US, and definitely a large number in Europe) will say
>>> “do you have an IRR entry at RADB?” and if you say yes then they will use
>>> the route object information but if you say no then they will tell you to
>>> open a ticket with their NOC each time you have a prefix to add/remove ….
>>>
>>>
>>>
>>> I’m actually surprised by the number of transit providers that don’t’
>>> support automation via IRR
>>>
>>>
>>>
>>> Paul
>>>
>>>
>>>
>>>
>>>
>>> *From:* Af [mailto:af-boun...@afmug.com] *On Behalf Of *Faisal Imtiaz
>>> *Sent:* May 13, 2016 9:25 PM
>>> *To:* af@afmug.com
>>> *Subject:* Re: [AFMUG] Upstream BGP Questionairre
>>>
>>>
>>>
>>> Let me clarify this a bit more...
>>>
>>>
>>>
>>> You are recommending that one creates it's own AS Object in the
>>> IRR..(aka learns and manages their own RR entries) (it really does not
>>> matter which IRR it is, at the end of the day they are all sort of synced,
>>> it is only a question of who is maintaining it, and who can provide help to
>>> newbies). .. BTW, I agree with this.. however 
>>>
>>>
>>>
>>> Cause at the end of the day, someone in the up-stream is very likely to
>>> create the record for you, if it is needed by them...
>>>
>>> This is one of those things that most carriers find... "too much trouble
>>> to teach vs just do it for that network !"
>>>
>>>
>>>
>>> :)
>>>
>>>
>>>
>>> Regards.
>>>
>>>
>>>
>>> Faisal Imtiaz
>>> Snappy Internet & Telecom
>>> 7266 SW 48 Street
>>> Miami, FL 33155
>>> Tel: 305 663 5518 x 232
>>>
>>> Help-desk: (305)663-5518 Option 2 or Email: supp...@snappytelecom.net
>>>
>>>
>>> --
>>>
>>> *From: *"George Skorup" 
>>> *To: *af@afmug.com
>>> *Sent: *Friday, May 13, 2016 7:15:26 PM
>>> *Subject: *Re: [AFMUG] Upstream BGP Questionairre
>>>
>>> I recommend adding your route or AS objects in ARIN's IRR. Merit RADb is
>>> not free. Most carriers use RADb, and RADb mirrors ARIN's IRR anyway.
>>>
>>> On 5/13/2016 3:49 PM, Faisal Imtiaz wrote:
>>>
>>> See answers in-line below:-
>>>
>>>
>>>
>>> Faisal Imtiaz
>>> Snappy Internet & Telecom
>>> 7266 SW 48 Street
>>> Miami, FL 33155
>>> Tel: 305 663 5518 x 232
>>>
>>> Help-desk: (305)663-5518 Option 2 or Email: supp...@snappytelecom.net
>>>
>>>
>>> --
>>>
>>> *From: *"That One Guy /sarcasm" 
>>> 
>>> *To: *af@afmug.com
>>> *Sent: *Friday, May 13, 2016 11:35:10 AM
>>> *Subject: *[AFMUG] Upstream BGP Questionairre
>>>
>>> Im going to expose the breadth of my incompetence here, but there are
>>> some questions in this document I want to make sure im answering accurately
>>>
>>> 1. Are you the owner of the AS Number with RIR- This im assuming is our
>>> ARIN direct allocation?
>>>
>>> They are asking if you have a AS # assigned to you from ... (would be
>>> ARIN for North America).
>>>
>>> 2. Are you registered with an Internet Routing Registry? - Im not sure
>>> what this is, is this also ARIN or do I need to register something
>>> elsewhere?
>>>
>>> Routing Registry it is a way to build authorized prefixes from a
>>> DataBase...
>>>
>>> You can read up about it from here
>>> https://www.arin.net/resources/routing/
>>>
>>>
>>> Justin Wilson did a blog about it too... http://www.mtin.net/blog/?p=245
>>>
>>>
>>>
>>> and yes ARIN also provides a Routing Registry Service ... (along with a
>>> few others)
>>>
>>>
>>>
>>> 3. Which type of routes do you want to receive?  - Full routes is what
>>> we want, but are there caveats in this answer I need to be prepared for?
>>>
>>>
>>>
>>> No Caveats, as long as your equipment is able to take full routes, then
>>> do so.
>>>
>>>
>>>
>>> 4. Do you have downstream ASNs? - I assume this would be customers with
>>> their own allocations? We currently do not, but do not want to close the
>>> door on 

Re: [AFMUG] Upstream BGP Questionairre

2016-05-14 Thread That One Guy /sarcasm
That requires something specific?
On May 14, 2016 7:33 AM, "Erich Kaiser"  wrote:

> We have started requiring our upstreams to filter by ASN vs Netblock.  We
> are moving away from upstreams that do not utilize IRR Entries and require
> intervention every time we want to make a change, but it is continuous for
> us, so for most guys the one time setup is not a big deal, plus the
> upstream has to be trusting enough that we will have the correct filtering
> on our end.
>
> Steve, I would add Blackhole BGP community or session to your list.
>
> Erich Kaiser
> The Fusion Network
> er...@gotfusion.net
> Office: 630-621-4804
> Cell: 630-777-9291
>
> On Sat, May 14, 2016 at 6:34 AM, Paul Stewart 
> wrote:
>
>> Or, quite a number of carriers (especially in APAC, some carriers in
>> Canada, a few in the US, and definitely a large number in Europe) will say
>> “do you have an IRR entry at RADB?” and if you say yes then they will use
>> the route object information but if you say no then they will tell you to
>> open a ticket with their NOC each time you have a prefix to add/remove ….
>>
>>
>>
>> I’m actually surprised by the number of transit providers that don’t’
>> support automation via IRR
>>
>>
>>
>> Paul
>>
>>
>>
>>
>>
>> *From:* Af [mailto:af-boun...@afmug.com] *On Behalf Of *Faisal Imtiaz
>> *Sent:* May 13, 2016 9:25 PM
>> *To:* af@afmug.com
>> *Subject:* Re: [AFMUG] Upstream BGP Questionairre
>>
>>
>>
>> Let me clarify this a bit more...
>>
>>
>>
>> You are recommending that one creates it's own AS Object in the IRR..(aka
>> learns and manages their own RR entries) (it really does not matter which
>> IRR it is, at the end of the day they are all sort of synced, it is only a
>> question of who is maintaining it, and who can provide help to newbies). ..
>> BTW, I agree with this.. however 
>>
>>
>>
>> Cause at the end of the day, someone in the up-stream is very likely to
>> create the record for you, if it is needed by them...
>>
>> This is one of those things that most carriers find... "too much trouble
>> to teach vs just do it for that network !"
>>
>>
>>
>> :)
>>
>>
>>
>> Regards.
>>
>>
>>
>> Faisal Imtiaz
>> Snappy Internet & Telecom
>> 7266 SW 48 Street
>> Miami, FL 33155
>> Tel: 305 663 5518 x 232
>>
>> Help-desk: (305)663-5518 Option 2 or Email: supp...@snappytelecom.net
>>
>>
>> --
>>
>> *From: *"George Skorup" 
>> *To: *af@afmug.com
>> *Sent: *Friday, May 13, 2016 7:15:26 PM
>> *Subject: *Re: [AFMUG] Upstream BGP Questionairre
>>
>> I recommend adding your route or AS objects in ARIN's IRR. Merit RADb is
>> not free. Most carriers use RADb, and RADb mirrors ARIN's IRR anyway.
>>
>> On 5/13/2016 3:49 PM, Faisal Imtiaz wrote:
>>
>> See answers in-line below:-
>>
>>
>>
>> Faisal Imtiaz
>> Snappy Internet & Telecom
>> 7266 SW 48 Street
>> Miami, FL 33155
>> Tel: 305 663 5518 x 232
>>
>> Help-desk: (305)663-5518 Option 2 or Email: supp...@snappytelecom.net
>>
>>
>> --
>>
>> *From: *"That One Guy /sarcasm" 
>> 
>> *To: *af@afmug.com
>> *Sent: *Friday, May 13, 2016 11:35:10 AM
>> *Subject: *[AFMUG] Upstream BGP Questionairre
>>
>> Im going to expose the breadth of my incompetence here, but there are
>> some questions in this document I want to make sure im answering accurately
>>
>> 1. Are you the owner of the AS Number with RIR- This im assuming is our
>> ARIN direct allocation?
>>
>> They are asking if you have a AS # assigned to you from ... (would be
>> ARIN for North America).
>>
>> 2. Are you registered with an Internet Routing Registry? - Im not sure
>> what this is, is this also ARIN or do I need to register something
>> elsewhere?
>>
>> Routing Registry it is a way to build authorized prefixes from a
>> DataBase...
>>
>> You can read up about it from here
>> https://www.arin.net/resources/routing/
>>
>>
>> Justin Wilson did a blog about it too... http://www.mtin.net/blog/?p=245
>>
>>
>>
>> and yes ARIN also provides a Routing Registry Service ... (along with a
>> few others)
>>
>>
>>
>> 3. Which type of routes do you want to receive?  - Full routes is what we
>> want, but are there caveats in this answer I need to be prepared for?
>>
>>
>>
>> No Caveats, as long as your equipment is able to take full routes, then
>> do so.
>>
>>
>>
>> 4. Do you have downstream ASNs? - I assume this would be customers with
>> their own allocations? We currently do not, but do not want to close the
>> door on that in the future. Is this something easily updated in the future?
>>
>> Answer this question in the Present.. (you don't have any so say no)...
>> no future door is closed due to this... this is just info asked / collected
>> for the upstream to be able to build their ACL filters (This is also a
>> flag for them to collect your BGP LOA's as well as your Customers to you..)
>>
>>
>>
>> This becomes a mute topic, if you are versed in using the Routing
>> Registry and maintaining your own Route Objects etc.
>>
>>
>>
>> 5. List all prefixes to

Re: [AFMUG] Upstream BGP Questionairre

2016-05-14 Thread Erich Kaiser
We have started requiring our upstreams to filter by ASN rather than by
netblock.  We are moving away from upstreams that do not utilize IRR entries
and require manual intervention every time we want to make a change. Changes
are continuous for us; for most guys the one-time setup is not a big deal,
plus the upstream has to trust that we will have the correct filtering
on our end.

Steve, I would add Blackhole BGP community or session to your list.

Erich Kaiser
The Fusion Network
er...@gotfusion.net
Office: 630-621-4804
Cell: 630-777-9291

On Sat, May 14, 2016 at 6:34 AM, Paul Stewart  wrote:

> Or, quite a number of carriers (especially in APAC, some carriers in
> Canada, a few in the US, and definitely a large number in Europe) will say
> “do you have an IRR entry at RADB?” and if you say yes then they will use
> the route object information but if you say no then they will tell you to
> open a ticket with their NOC each time you have a prefix to add/remove ….
>
>
>
> I’m actually surprised by the number of transit providers that don’t’
> support automation via IRR
>
>
>
> Paul
>
>
>
>
>
> *From:* Af [mailto:af-boun...@afmug.com] *On Behalf Of *Faisal Imtiaz
> *Sent:* May 13, 2016 9:25 PM
> *To:* af@afmug.com
> *Subject:* Re: [AFMUG] Upstream BGP Questionairre
>
>
>
> Let me clarify this a bit more...
>
>
>
> You are recommending that one creates it's own AS Object in the IRR..(aka
> learns and manages their own RR entries) (it really does not matter which
> IRR it is, at the end of the day they are all sort of synced, it is only a
> question of who is maintaining it, and who can provide help to newbies). ..
> BTW, I agree with this.. however 
>
>
>
> Cause at the end of the day, someone in the up-stream is very likely to
> create the record for you, if it is needed by them...
>
> This is one of those things that most carriers find... "too much trouble
> to teach vs just do it for that network !"
>
>
>
> :)
>
>
>
> Regards.
>
>
>
> Faisal Imtiaz
> Snappy Internet & Telecom
> 7266 SW 48 Street
> Miami, FL 33155
> Tel: 305 663 5518 x 232
>
> Help-desk: (305)663-5518 Option 2 or Email: supp...@snappytelecom.net
>
>
> --
>
> *From: *"George Skorup" 
> *To: *af@afmug.com
> *Sent: *Friday, May 13, 2016 7:15:26 PM
> *Subject: *Re: [AFMUG] Upstream BGP Questionairre
>
> I recommend adding your route or AS objects in ARIN's IRR. Merit RADb is
> not free. Most carriers use RADb, and RADb mirrors ARIN's IRR anyway.
>
> On 5/13/2016 3:49 PM, Faisal Imtiaz wrote:
>
> See answers in-line below:-
>
>
>
> Faisal Imtiaz
> Snappy Internet & Telecom
> 7266 SW 48 Street
> Miami, FL 33155
> Tel: 305 663 5518 x 232
>
> Help-desk: (305)663-5518 Option 2 or Email: supp...@snappytelecom.net
>
>
> --
>
> *From: *"That One Guy /sarcasm" 
> 
> *To: *af@afmug.com
> *Sent: *Friday, May 13, 2016 11:35:10 AM
> *Subject: *[AFMUG] Upstream BGP Questionairre
>
> Im going to expose the breadth of my incompetence here, but there are some
> questions in this document I want to make sure im answering accurately
>
> 1. Are you the owner of the AS Number with RIR- This im assuming is our
> ARIN direct allocation?
>
> They are asking if you have a AS # assigned to you from ... (would be ARIN
> for North America).
>
> 2. Are you registered with an Internet Routing Registry? - Im not sure
> what this is, is this also ARIN or do I need to register something
> elsewhere?
>
> Routing Registry it is a way to build authorized prefixes from a
> DataBase...
>
> You can read up about it from here
> https://www.arin.net/resources/routing/
>
>
> Justin Wilson did a blog about it too... http://www.mtin.net/blog/?p=245
>
>
>
> and yes ARIN also provides a Routing Registry Service ... (along with a
> few others)
>
>
>
> 3. Which type of routes do you want to receive?  - Full routes is what we
> want, but are there caveats in this answer I need to be prepared for?
>
>
>
> No Caveats, as long as your equipment is able to take full routes, then do
> so.
>
>
>
> 4. Do you have downstream ASNs? - I assume this would be customers with
> their own allocations? We currently do not, but do not want to close the
> door on that in the future. Is this something easily updated in the future?
>
> Answer this question in the Present.. (you don't have any so say no)... no
> future door is closed due to this... this is just info asked / collected
> for the upstream to be able to build their ACL filters (This is also a
> flag for them to collect your BGP LOA's as well as your Customers to you..)
>
>
>
> This becomes a mute topic, if you are versed in using the Routing Registry
> and maintaining your own Route Objects etc.
>
>
>
> 5. List all prefixes to be announced so that we can confirm the BGP ACL
> prior to activation: We only have a /22, but we do want the option down the
> road to pull /24 from one provider if need be. Would we list the /24s
> independently or the /22 as the aggregate?
>
>
>
> You want to ask

Re: [AFMUG] Upstream BGP Questionairre

2016-05-14 Thread Paul Stewart
Or, quite a number of carriers (especially in APAC, some carriers in Canada, a 
few in the US, and definitely a large number in Europe) will ask “do you have 
an IRR entry at RADB?” If you say yes, they will use the route object 
information; if you say no, they will tell you to open a ticket with 
their NOC each time you have a prefix to add or remove. 

 

I’m actually surprised by the number of transit providers that don’t support 
automation via IRR. 

 

Paul

 

 

From: Af [mailto:af-boun...@afmug.com] On Behalf Of Faisal Imtiaz
Sent: May 13, 2016 9:25 PM
To: af@afmug.com
Subject: Re: [AFMUG] Upstream BGP Questionairre

 

Let me clarify this a bit more...

 

You are recommending that one creates it's own AS Object in the IRR..(aka 
learns and manages their own RR entries) (it really does not matter which IRR 
it is, at the end of the day they are all sort of synced, it is only a question 
of who is maintaining it, and who can provide help to newbies). .. BTW, I agree 
with this.. however 

 

Cause at the end of the day, someone in the up-stream is very likely to create 
the record for you, if it is needed by them...

This is one of those things that most carriers find... "too much trouble to 
teach vs just do it for that network !"

 

:)

 

Regards.

 

Faisal Imtiaz
Snappy Internet & Telecom
7266 SW 48 Street
Miami, FL 33155
Tel: 305 663 5518 x 232

Help-desk: (305)663-5518 Option 2 or Email: supp...@snappytelecom.net 
 

 

  _  

From: "George Skorup" mailto:geo...@cbcast.com> >
To: af@afmug.com  
Sent: Friday, May 13, 2016 7:15:26 PM
Subject: Re: [AFMUG] Upstream BGP Questionairre

I recommend adding your route or AS objects in ARIN's IRR. Merit RADb is not 
free. Most carriers use RADb, and RADb mirrors ARIN's IRR anyway. 

On 5/13/2016 3:49 PM, Faisal Imtiaz wrote:

See answers in-line below:-

 

Faisal Imtiaz
Snappy Internet & Telecom
7266 SW 48 Street
Miami, FL 33155
Tel: 305 663 5518 x 232

Help-desk: (305)663-5518 Option 2 or Email: supp...@snappytelecom.net 
 

 


  _  


From: "That One Guy /sarcasm"   

To: af@afmug.com  
Sent: Friday, May 13, 2016 11:35:10 AM
Subject: [AFMUG] Upstream BGP Questionairre

Im going to expose the breadth of my incompetence here, but there are some 
questions in this document I want to make sure im answering accurately

1. Are you the owner of the AS Number with RIR- This im assuming is our ARIN 
direct allocation?

They are asking if you have a AS # assigned to you from ... (would be ARIN for 
North America).

2. Are you registered with an Internet Routing Registry? - Im not sure what 
this is, is this also ARIN or do I need to register something elsewhere?

Routing Registry it is a way to build authorized prefixes from a DataBase...

You can read up about it from here   https://www.arin.net/resources/routing/


Justin Wilson did a blog about it too... http://www.mtin.net/blog/?p=245

 

and yes ARIN also provides a Routing Registry Service ... (along with a few 
others)

 

3. Which type of routes do you want to receive?  - Full routes is what we want, 
but are there caveats in this answer I need to be prepared for?

 

No Caveats, as long as your equipment is able to take full routes, then do so.

 

4. Do you have downstream ASNs? - I assume this would be customers with their 
own allocations? We currently do not, but do not want to close the door on that 
in the future. Is this something easily updated in the future?

Answer this question in the Present.. (you don't have any so say no)... no 
future door is closed due to this... this is just info asked / collected for 
the upstream to be able to build their ACL filters (This is also a flag for 
them to collect your BGP LOA's as well as your Customers to you..)

 

This becomes a mute topic, if you are versed in using the Routing Registry and 
maintaining your own Route Objects etc.

 

5. List all prefixes to be announced so that we can confirm the BGP ACL prior 
to activation: We only have a /22, but we do want the option down the road to 
pull /24 from one provider if need be. Would we list the /24s independently or 
the /22 as the aggregate? 

 

You want to ask them for the following:-

 

xx.xx.xx.xx/22  please use the 'le 24' option with the filter.

 

Note: this will have them build a filter that can accept larger prefixes  
between 24 - 22, so it is not a 'specific' filter... 
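
Concretely, 'le 24' on a /22 matches the /22 itself plus every /23 and /24 inside it. A quick check with Python's ipaddress module (10.0.0.0/22 is just a stand-in for the redacted prefix above):

import ipaddress

# Stand-in for the xx.xx.xx.xx/22 above; "le 24" accepts the /22 and
# anything more specific down to /24.
block = ipaddress.ip_network("10.0.0.0/22")

accepted = [block]
for length in (23, 24):
    accepted.extend(block.subnets(new_prefix=length))

for prefix in accepted:
    print(prefix)
# 10.0.0.0/22, 10.0.0.0/23, 10.0.2.0/23, 10.0.0.0/24 ... 10.0.3.0/24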

 

 

6. MD5 Password: On this is it standard practice to use the same password with 
all providers or different ones?

 

Your choice... either way no big deal, as long as you keep track of them.



-- 

If you only see yourself as part of the team but you don't see your team as 
part of yourself you have already failed as part of the team.