On Wed, July 14, 2010 23:51, Tim Cook wrote:
> On Wed, Jul 14, 2010 at 9:27 PM, BM <bogdan.maryn...@gmail.com> wrote:
>
>> On Thu, Jul 15, 2010 at 12:49 AM, Edward Ned Harvey
>> <solar...@nedharvey.com> wrote:
>> > I'll second that.  And I think this is how you can tell the
>> > difference: with Supermicro, do you have a single support number to
>> > call and a 4-hour onsite service response time?
>>
>> Yes.
>>
>> BTW, just for the record, people often have a bunch of spare
>> Supermicros in stock, bought with the money left over from a budget
>> that was originally sized for shiny Sun/Oracle hardware. :) So
>> normally you put them online in a cluster and don't really worry when
>> one of them dies: just power that box down and disconnect it from the
>> grid.
>>
>> > When you pay the higher prices for OEM hardware, you're paying for
>> > the knowledge of parts availability and compatibility, and a single
>> > point of contact: a vendor who supports the system as a whole, not
>> > just one component.
>>
>> What kind of compatibility exactly are you talking about? For example,
>> if I break the mylar air shroud for an X8 DP board (part number
>> MCP-310-18008-0N) because I accidentally step on it :-D, I'm pretty
>> much going to ask them to replace exactly THAT part. Or do you want me
>> to tell you some real stories about how OEM hardware is supported and
>> how many emails/phone calls it involves? One of the latest (just a
>> week ago): Apple Support told me that their engineers in the US have
>> no idea why the Darwin kernel panics on their Xserve, so they
>> suggested I replace the motherboard TWICE and keep the OLDER firmware,
>> never upgrading, since an upgrade would cause the crash again (even
>> though an identical server works just fine with the newest firmware)!
>> I told them NNN times that the Darwin kernel traceback was yelling
>> about an ACPI problem and gave them logs/tracebacks/transcripts etc.,
>> but they still have no idea where the problem is. Do I need such
>> "support"? No. Not at all.
>>
>> --
>> Kind regards, BM
>>
>> Things that are stupid at the beginning rarely end up wisely.
>
> You're clearly talking about something completely different from
> everyone else.  Whitebox works GREAT if you've got 20 servers.  Try
> scaling it to 10,000.  "A couple extras" ends up being an entire
> climate-controlled warehouse full of parts that may or may not be in
> the right city.  Not to mention you've then got full-time staff on
> hand constantly replacing parts.  Your model doesn't scale for 99% of
> businesses out there.  Unless they're Google and can leave a dead
> server in a rack for years, it's an unsustainable plan.  Out of the
> Fortune 500, I'd be willing to bet there are exactly zero companies
> that use whitebox systems, and for a reason.

You might want to talk to Google about that; as I understand it, they
decided that buying expensive servers was a waste of money precisely
because of the high numbers they needed.  Even the good ones fail
sometimes, so they had to design their systems to keep working through
server failures anyway, and that lets them save huge amounts of money
on hardware by buying cheap servers rather than expensive ones.

And your juxtaposition of "Fortune 500" and "99% of businesses" is
telling; possibly the Fortune 500, other than Google, use expensive
proprietary hardware, but 99% of businesses out there are NOT in the
Fortune 500, and they mostly use whitebox systems (and not rackmount at
all; they'll have one or at most two tower servers).
-- 
David Dyer-Bennet, d...@dd-b.net; http://dd-b.net/
Snapshots: http://dd-b.net/dd-b/SnapshotAlbum/data/
Photos: http://dd-b.net/photography/gallery/
Dragaera: http://dragaera.info
