I am truly sorry if at some point in your life someone dropped an IBM logo 
on your head and it left a dent - but you are being a jerk.

Right after you engaged in your usual condescension, someone from Xerox 
posted about the very issue you were blowing off. Things happen. To any 
system.

I'm not knocking Hadoop - and frankly, making sure new users have a good 
experience, based on the real things they need to be aware of and manage, 
is in everyone's interest here if we want to grow the footprint.

Please take note that nowhere in here have I ever said anything to 
discourage Hadoop deployments/use, or anything that is vendor-specific.


------------------------------------------------
Tom Deutsch
Program Director
CTO Office: Information Management
Hadoop Product Manager / Customer Exec
IBM
3565 Harbor Blvd
Costa Mesa, CA 92626-1420
tdeut...@us.ibm.com




Michael Segel <michael_se...@hotmail.com> 
09/20/2011 02:52 PM
Please respond to
common-user@hadoop.apache.org


To
<common-user@hadoop.apache.org>
cc

Subject
RE: risks of using Hadoop







Tom,

I think it is arrogant to parrot FUD when you've never gotten your hands 
dirty in any real Hadoop environment. 
So how could your response reflect the operational realities of running a 
Hadoop cluster?

What Brian was saying was that the SPOF is an overplayed FUD trump card. 
Anyone who's built clusters will have mitigated the risks of losing the 
NN. 
Then there's MapR... where you don't have a SPOF. But again that's a 
derivative of Apache Hadoop.
(Derivative isn't a bad thing...)

You're right that you need to plan accordingly; however, from a risk 
perspective, this isn't a serious one. 
In fact, I believe Tom White's book lays out a good way to mitigate this. 
I have the first edition, so I'll have to double-check whether he 
modified it in the second.
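
The standard mitigation is just configuration: in 0.20.x, dfs.name.dir 
takes a comma-separated list of directories and the NameNode writes its 
metadata to all of them. A minimal sketch of what I mean, with 
hypothetical paths (a local RAID-backed disk plus an NFS mount exported 
from another box):

  <!-- hdfs-site.xml: the NameNode mirrors its image and edit log
       into every directory listed here, so losing the NN host does
       not mean losing the filesystem metadata. Paths are examples. -->
  <property>
    <name>dfs.name.dir</name>
    <value>/data/1/dfs/nn,/mnt/nfs/namenode</value>
  </property>

Lose the machine, point a replacement NameNode at the surviving NFS 
copy, and you're back in business.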

Again, the point Brian was making, and one that I agree with, is that the 
NN as a SPOF is an overblown 'risk'.

You have a greater chance of data loss than you do of losing your NN. 

Probably the reason some of us are a bit irritated by the SPOF reference 
to the NN is that clowns who haven't done any work in this space pick up 
on the FUD and spread it around. This makes it difficult for guys like me 
to get anything done, because we constantly have to go back and reassure 
stakeholders that it's a non-issue.

With respect to naming vendors, I did name MapR outside of Apache because 
they do have their own derivative release that addresses the limitations 
found in Apache's Hadoop.

-Mike
PS... There's this junction box in your machine room that has this very 
large on/off switch. If pulled down, it will cut power to your cluster and 
you will lose everything. Now would you consider this a risk? Sure. But is 
it something you should really lose sleep over? Do you understand that 
there are risks and there are improbable risks? 


> To: common-user@hadoop.apache.org
> Subject: RE: risks of using Hadoop
> From: tdeut...@us.ibm.com
> Date: Tue, 20 Sep 2011 12:48:05 -0700
> 
> No worries, Michael - it would be a stretch to see any arrogance or 
> disrespect in your response.
> 
> Kobina has asked a fair question, and deserves a response that reflects 
> the operational realities of where we are. 
> 
> If you are looking at doing large-scale CDR handling - which I believe 
> is the use case here - you need to plan accordingly. Even you use the 
> term "mitigate" - which is different from "prevent". Kobina needs an 
> understanding of what they are looking at. That isn't a pro/con stance 
> on Hadoop; it is just reality, and they should plan accordingly. 
> 
> (Note - I'm not the one who brought vendors into this - which doesn't 
> strike me as appropriate for this list)
> 
> ------------------------------------------------
> Tom Deutsch
> Program Director
> CTO Office: Information Management
> Hadoop Product Manager / Customer Exec
> IBM
> 3565 Harbor Blvd
> Costa Mesa, CA 92626-1420
> tdeut...@us.ibm.com
> 
> 
> 
> 
> Michael Segel <michael_se...@hotmail.com> 
> 09/17/2011 07:37 PM
> Please respond to
> common-user@hadoop.apache.org
> 
> 
> To
> <common-user@hadoop.apache.org>
> cc
> 
> Subject
> RE: risks of using Hadoop
> 
> 
> 
> 
> 
> 
> 
> Gee Tom,
> No disrespect, but I don't believe you have any personal practical 
> experience in designing and building out clusters or putting them to 
> the test.
> 
> Now, to the points that Brian raised...
> 
> 1) SPOF... it sounds great on paper. Some FUD to scare someone away 
> from Hadoop. But in reality... you can mitigate your risks by setting 
> up RAID on your NN/HM node. You can also NFS mount a copy to your SN 
> (or whatever they're calling it these days...). Or you can go to MapR, 
> which has redesigned HDFS in a way that removes this problem. But with 
> your Apache Hadoop or Cloudera's release, losing your NN is rare. Yes, 
> it can happen, but it's not your greatest risk. (Not by a long shot.)
> 
> 2) Data Loss.
> You can mitigate this as well. Do I need to go through all of the 
> options and DR/BCP planning? Sure, there's always a chance that you 
> have some Luser who does something brain-dead. This is true of all 
> databases and systems. (I know I can probably recount some of IBM's 
> Informix and DB2 data loss issues. But that's a topic for another 
> time. ;-)
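> 
> For the DR piece, the usual pattern is just periodically shipping the 
> critical data to a second cluster. A minimal sketch, assuming 
> hypothetical cluster hostnames and a 0.20.x-era client:
> 
>   # Copy /user/cdr from production to a backup cluster for DR;
>   # -update only transfers files that changed since the last run.
>   hadoop distcp -update \
>       hdfs://prodnn:8020/user/cdr \
>       hdfs://backupnn:8020/user/cdr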
> 
> I can't speak for Brian, but I don't think he's trivializing it. In 
> fact, I think he's doing a fine job of level-setting expectations.
> 
> And if you talk to Ted Dunning of MapR, I'm sure he'll point out that 
> their current release does address points 3 and 4, again making those 
> risks moot. (At least if you're using MapR.)
> 
> -Mike
> 
> 
> > Subject: Re: risks of using Hadoop
> > From: tdeut...@us.ibm.com
> > Date: Sat, 17 Sep 2011 17:38:27 -0600
> > To: common-user@hadoop.apache.org
> > 
> > I disagree, Brian - data loss and system downtime (both potentially 
> > non-trivial) should not be taken lightly. Use cases, and thus 
> > availability requirements, do vary, but I would not encourage anyone 
> > to shrug them off as "overblown", especially as Hadoop becomes more 
> > production-oriented in utilization.
> > 
> > ---------------------------------------
> > Sent from my Blackberry so please excuse typing and spelling errors.
> > 
> > 
> > ----- Original Message -----
> > From: Brian Bockelman [bbock...@cse.unl.edu]
> > Sent: 09/17/2011 05:11 PM EST
> > To: common-user@hadoop.apache.org
> > Subject: Re: risks of using Hadoop
> > 
> > 
> > 
> > 
> > On Sep 16, 2011, at 11:08 PM, Uma Maheswara Rao G 72686 wrote:
> > 
> > > Hi Kobina,
> > > 
> > > Some experiences which may be helpful for you with respect to DFS. 
> > > 
> > > 1. Selecting the correct version.
> > >    I recommend using a 0.20.x version. This is a pretty stable 
> > > version, other organizations prefer it, and it is well tested.
> > > Don't go for the 0.21 version. It is not a stable version. That is 
> > > a risk.
> > > 
> > > 2. You should perform thorough tests with your customer operations. 
> > >    (Of course you will do this :-))
> > > 
> > > 3. The 0.20.x version has the problem of a SPOF.
> > >    If the NameNode goes down, you will lose the data. One way of 
> > > recovering is by using the SecondaryNameNode: you can recover the 
> > > data up to the last checkpoint, but manual intervention is required 
> > > (a sketch of that recovery follows this list).
> > > In the latest trunk, the SPOF will be addressed by HDFS-1623.
> > > 
> > > 4. 0.20.x NameNodes cannot scale. Federation changes are included 
> > > in later versions (I think in 0.22). This may not be a problem for 
> > > your cluster, but please consider this aspect as well.
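> > > 
> > > For point 3, a minimal sketch of that recovery, assuming a 0.20.x 
> > > cluster where the SecondaryNameNode's fs.checkpoint.dir is 
> > > reachable from the new NameNode host:
> > > 
> > >   # On a fresh NameNode host with an empty dfs.name.dir, rebuild
> > >   # the namespace from the last checkpoint. Any edits made after
> > >   # that checkpoint are lost.
> > >   hadoop namenode -importCheckpoint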
> > > 
> > 
> > With respect to (3) and (4) - these are often completely overblown 
> > for many Hadoop use cases. If you use Hadoop as originally designed 
> > (large-scale batch data processing), these likely don't matter.
> > 
> > If you're looking at some of the newer use cases (low-latency stuff 
> > or time-critical processing), or if you architect your solution 
> > poorly (lots of small files), these issues become relevant. Another 
> > case where I see folks get frustrated is using Hadoop as a "plain 
> > old batch system"; for non-data workflows, it doesn't measure up 
> > against specialized systems.
> > 
> > You really want to make sure that Hadoop is the best tool for your 
> > job.
> > 
> > Brian
> 
  
