I can feel that pain; Kerberos makes you pull more hair from your head :) I
worked on it a while back and now only remember bits of it.
But anyway, a secured Hadoop cluster can be built on top of a carefully
designed and deployed network and firewall system; that's what people are
now
I think there is a mismatch (in ReduceTask.java) between:
this.numCopiers = conf.getInt("mapred.reduce.parallel.copies", 5);
and:
maxSingleShuffleLimit = (long)(maxSize * MAX_SINGLE_SHUFFLE_SEGMENT_FRACTION);
where MAX_SINGLE_SHUFFLE_SEGMENT_FRACTION is 0.25f,
because
copiers = ne
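To put numbers on the suspected mismatch (a sketch using only the two constants quoted above, not the full ReduceTask logic): with the default of 5 parallel copiers, each allowed an in-memory segment of up to 0.25 of the shuffle buffer, the worst case is 5 x 0.25 = 1.25 of the buffer, i.e. more than the buffer holds.

```shell
# Worst-case in-memory shuffle demand from the constants quoted above:
# 5 copiers (mapred.reduce.parallel.copies default) * 0.25
# (MAX_SINGLE_SHUFFLE_SEGMENT_FRACTION) = 1.25x the shuffle buffer.
awk 'BEGIN { printf "%.2f\n", 5 * 0.25 }'
# prints 1.25
```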
Thanks Ravi, it helped. BTW, only the first trick worked:
hadoop dfsadmin -report | grep "Name:" | cut -d":" -f2
The 2nd one may not be applicable, as I need to automate this (hence I need
a command-line utility).
The 3rd approach didn't work, as the commands are getting executed only on
the local slave-n
The upcoming security will work with Kerberos. Actions like running a
map-reduce job will involve getting a Kerberos ticket and passing it
along. I have dodged Kerberos for a long time and am not looking forward
to much more complexity, but it will almost certainly be a switchable
on/off config option.
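As a rough sketch of the ticket-then-job flow described above (the principal, realm, and jar name are placeholders, and the exact commands in the secured release may differ):

```shell
# Obtain a Kerberos ticket first; the job submission would then carry it along.
kinit alice@EXAMPLE.COM          # placeholder principal/realm
hadoop jar wordcount.jar ...     # runs as the authenticated user
```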
There are several ways to get slave IP addresses. (Not sure if you can use all
of these on EC2.)
1. hadoop dfsadmin -report shows you the list of nodes and their status.
2. The NameNode slaves page displays information about live nodes.
3. You can execute commands on slave nodes using bin/slaves.s
IMO, we should handle the security part at the system level. In this case,
you can configure iptables to restrict the connections to the NameNode.
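For example, a minimal iptables sketch along those lines (the NameNode RPC port 8020 and the trusted subnet 10.0.0.0/24 are assumptions; substitute the port from your fs.default.name and your own network):

```shell
# Allow only the trusted subnet to reach the NameNode RPC port; drop the rest.
iptables -A INPUT -p tcp --dport 8020 -s 10.0.0.0/24 -j ACCEPT
iptables -A INPUT -p tcp --dport 8020 -j DROP
```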
On 03/07/2010 05:56 AM, jiang licht wrote:
Good to know and look forward to seeing next release of hadoop with such new
security features...
Good to know and look forward to seeing next release of hadoop with such new
security features...
Thanks,
--
Michael
--- On Sat, 3/6/10, Owen O'Malley wrote:
From: Owen O'Malley
Subject: Re: Security issue: hadoop fs shell bypass authentication?
To: common-user@hadoop.apache.org
Date: Satur
I am using EC2 and don't see the slaves in the $HADOOP_HOME/conf/slaves file.
On Sat, Mar 6, 2010 at 9:33 PM, Ted Yu wrote:
> check conf/slaves file on master:
> http://www.michael-noll.com/wiki/Running_Hadoop_On_Ubuntu_Linux_%28Multi-Node_Cluster%29#conf.2Fslaves_.28master_only.29
>
> On Fri, Mar 5,
Hi all,
We are seeing the following error in our reducers of a particular job:
Error: java.lang.OutOfMemoryError: Java heap space
at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$MapOutputCopier.shuffleInMemory(ReduceTask.java:1508)
at org.apache.hadoop.mapred.ReduceTask$Redu
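Not a fix for the truncated trace above, but two knobs that are commonly lowered when shuffleInMemory OOMs show up (property names are from the 0.20-era MapReduce config; the jar and class names are placeholders, and -D overrides assume the job goes through ToolRunner/GenericOptionsParser):

```shell
# Fewer parallel copiers and a smaller in-memory shuffle buffer reduce
# the peak heap demand of the shuffle phase.
hadoop jar myjob.jar MyJob \
  -D mapred.reduce.parallel.copies=2 \
  -D mapred.job.shuffle.input.buffer.percent=0.50 \
  input output
```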
check conf/slaves file on master:
http://www.michael-noll.com/wiki/Running_Hadoop_On_Ubuntu_Linux_%28Multi-Node_Cluster%29#conf.2Fslaves_.28master_only.29
On Fri, Mar 5, 2010 at 7:13 PM, prasenjit mukherjee
<pmukher...@quattrowireless.com> wrote:
> Is there any way (like hadoop-commandline or f
On Mar 5, 2010, at 4:49 PM, Allen Wittenauer wrote:
On 3/5/10 1:57 PM, "jiang licht" wrote:
So, this means that hadoop fs shell does not require any authentication
and can be fired from anywhere?
There is no authentication/security layer in any released version of
Hadoop.
True, althou