Dan wrote:
> To grasp this, the Cray-2 required about 200kW of power to operate. IIRC,
> there were bus bars in the logic modules that provided 5VDC at around 2,000
> amps!
>
I got a tour inside the SDI facility at Schriever AFB when it was
named Falcon Air Force Station. Originally the inside
Was the line (power) conditioner checked? Were there storms in the area?
When that many units go down close together it makes one wonder about
surges, etc. We have similar problems down
here in the "lightning capital of the world". When I dealt with such
problems 15+ years ago there
was no way
The Cray-2 was. This was the source for a lot of jokes, one of which was to
refer to the machine as "Bubbles".
To grasp this, the Cray-2 required about 200kW of power to operate. IIRC, there
were bus bars in the logic modules that provided 5VDC at around 2,000 amps!
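A quick back-of-envelope check on those figures (the per-bus-bar power follows from P = V × I; the feed count is my own illustration, not a documented Cray-2 spec):

```python
# Sanity check of the quoted Cray-2 numbers: 5 VDC at ~2,000 A
# per bus bar, ~200 kW for the whole machine.
volts = 5.0
amps = 2000.0
watts_per_feed = volts * amps             # P = V * I -> 10,000 W per feed
total_watts = 200_000.0                   # ~200 kW quoted above

# How many such 5V/2000A feeds it would take to deliver 200 kW
# (illustrative only -- the real module/feed count isn't in the post).
equivalent_feeds = total_watts / watts_per_feed

print(f"{watts_per_feed / 1000:.0f} kW per 5V/2000A feed")   # 10 kW
print(f"~{equivalent_feeds:.0f} such feeds for 200 kW")      # ~20
```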
Dan
On May 21, 2012, at 11:
John Reames writes:
> IIRC, Seymour Cray (allegedly) stated that he wasn't a better computer
> designer, but that he was a better plumber.
Literally... Crays were liquid-cooled, no?
Allan
--
1983 300D
1979 300SD
___
http://www.okiebenz.com
For new and used
Maybe Cray was, but those vector processors were pretty bad a$$.
On Mon, May 21, 2012 at 9:24 AM, John Reames wrote:
> A large part of durability is thermal management. Some companies are
> better than others at dealing with heat.
>
> I know that one manufacturer, if you filled one of their racks
A large part of durability is thermal management. Some companies are better
than others at dealing with heat.
I know that one manufacturer, if you filled one of their racks with their blade
servers, would exceed 100% failure rate within 180 days. (and that was in a
cage at a co-lo data center,
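For scale, "exceed 100% failure rate within 180 days" means more failures than installed units once replacements are counted; annualized, that is over 200%. A minimal sketch (the 16-blade rack size is my assumption for illustration, not from the post):

```python
# Hedged sketch: what ">100% failures in 180 days" implies as an
# annualized failure rate (AFR). Rack size is illustrative only.
blades = 16
failures_in_180_days = 16   # 100% of the population failed (or worse)
days = 180

afr = (failures_in_180_days / blades) * (365 / days)
print(f"annualized failure rate ~ {afr:.0%}")  # about 203%
```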
I would have to believe these were consumer grade drives, as I can't imagine an
enterprise grade drive failing in that manner.
If that were the case we would all be in major trouble.
Dan
On May 21, 2012, at 10:32 AM, Allan Streib wrote:
> Where I saw it happen it was a small array. Probably
Where I saw it happen it was a small array. Probably built on the cheap (true
to "I" in RAID), may not have been using "enterprise" grade drives.
We had enough drives fail (something like 5 of 10) in the span of 24 hours that
it exceeded the "R" part of RAID, and the array failed.
They were al
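Losing 5 of 10 drives inside a single rebuild window is beyond what any standard parity RAID level can absorb; a minimal sketch using the usual worst-case fault tolerances per level:

```python
# Minimal sketch: does an array survive n concurrent drive failures?
# Values are the standard worst-case drive-loss tolerances per level.
TOLERANCE = {"RAID 1": 1, "RAID 5": 1, "RAID 6": 2}

def survives(level: str, failed_drives: int) -> bool:
    return failed_drives <= TOLERANCE[level]

print(survives("RAID 6", 2))  # True: within dual-parity tolerance
print(survives("RAID 6", 5))  # False: 5 of 10 gone exceeds the "R" in RAID
```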
Could be a bad lot of drives went into the RAID arrays.
The comparison isn't exactly right. Race engines are expected to be
rebuilt all the time. Hard drive arrays are supposed to be reliable. Most
of the time that reliability means extra redundancy is built into the
system in the form of extra drives.
Dunno about that, but to follow up, my SANs are set up with 168 drives between
two controllers each to fill a standard 48U server rack, to give you an idea of
how many drives we're talking about. And that is just one rack.
Haven't had but one failure in the last six months.
Dan
On May 21, 2012
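One failure among 168 drives over six months annualizes to an AFR near 1.2%, which is the kind of single-digit rate you'd hope to see from an enterprise array; quick arithmetic from the figures in the post:

```python
# Annualized failure rate (AFR) for the SAN described above:
# 1 drive failure among 168 drives over six months.
failures = 1
drives = 168
months = 6

afr = (failures / drives) * (12 / months)
print(f"AFR ~ {afr:.2%}")  # about 1.19% per drive-year
```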
Could it be that doing physics on drives is far more intense than less
complex applications? A comparison might be race cars near their limits
(physics) versus ordinary cars being driven far less intensely.
From: "OK Don"
I haven't seen that either, and we must have thousands of drives by now. I
don't work directly with them anymore, but I'll ask the guys that do
tomorrow.
On Sun, May 20, 2012 at 5:16 PM, Brian Toscano wrote:
> I haven't seen that either...
>
> Often the RAID arrays come new with one batch of drives
I haven't seen that either...
Often the RAID arrays come new with one batch of drives all of the same
make/model and over time the replacements come from different lots. For
example, a new RAID array may have all Seagate drives, and as they fail we
may get another make/model as a replacement.
Not sure I would agree with the assumption that if one fails the other will be
close behind.
One of the pieces of equipment I manage in our data center is one of a number
of SANs, which contain literally hundreds of hard drives. I can attest to the
fact that they are all from the same manufacturer.