Thanks.
I set it accordingly.
On Monday, 5 May 2014, Shengjun Xin s...@gopivotal.com wrote:
I think you need to set different environment variables for hadoop1
and hadoop2, such as HADOOP_HOME and HADOOP_CONF_DIR, and before you run a hadoop
command, you need to make sure the correct
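A minimal sketch of one way to do that, e.g. in ~/.bash_profile (the install paths below are hypothetical; adjust them to your own layout):

```shell
# Hypothetical install paths -- adjust to wherever hadoop1/hadoop2 live.
use_hadoop1() {
  export HADOOP_HOME=/opt/hadoop-1.2.1
  export HADOOP_CONF_DIR="$HADOOP_HOME/conf"
  export PATH="$HADOOP_HOME/bin:$PATH"
}

use_hadoop2() {
  export HADOOP_HOME=/opt/hadoop-2.4.0
  export HADOOP_CONF_DIR="$HADOOP_HOME/etc/hadoop"
  export PATH="$HADOOP_HOME/bin:$PATH"
}

# Before running any hadoop command, pick one version explicitly:
use_hadoop2
echo "$HADOOP_CONF_DIR"
```

With functions like these only one version's variables are exported at a time, so a stale HADOOP_CONF_DIR from the other install cannot leak into the command you run.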
Hi
http://www.unmeshasreeveni.blogspot.in/2014/05/map-only-jobs-in-hadoop.html
This is a post on Map-only Jobs in Hadoop for beginners.
--
*Thanks Regards *
*Unmesha Sreeveni U.B*
*Hadoop, Bigdata Developer*
*Center for Cyber Security | Amrita Vishwa Vidyapeetham*
Hi,
Does Hadoop have a command-completion tool, or a state-aware tool that can
track the current working directory in the Hadoop file system, the way a Unix shell does?
If not, are any open source tools available?
Regards,
James Arivazhagan Ponnusamy
I mean, I tried that already. In my .bash_profile, I set up
HADOOP_HOME, HADOOP_MAPRED_HOME, HADOOP_COMMON_HOME, HADOOP_HDFS_HOME,
HADOOP_YARN_HOME, HADOOP_CONF_DIR to point to the hadoop2 directory. And
similarly I have HADOOP_PREFIX and HADOOP_HOME for hadoop-1.
So, I comment out hadoop-2
Hey guys, I'm new here. I asked the following question on SO:
http://stackoverflow.com/questions/23478746/reading-from-jar-file-in-local-filesystem-using-hadoop-filesystem
but I figured someone here might have a better idea.
Any help would be greatly appreciated.
--
Diego Fernandez - 爱国
Let's say I have a TaskTracker that receives 5 records to process for a
single job. When the TaskTracker processes the first record, it will
instantiate my Mapper class and execute my setup() function. It will then
run the map() method on that record. My question is this: what happens
when the
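As background for the question, the per-task call order can be shown with a small Hadoop-free sketch (the class and record names here are made up for illustration): the framework creates one Mapper instance per task attempt, calls setup() once, map() once per record in the split, and cleanup() once at the end.

```java
import java.util.Arrays;
import java.util.List;

// Hadoop-free illustration of the Mapper lifecycle: for one task
// (one InputSplit), the framework calls setup() once, then map()
// for every record in the split, then cleanup() once.
public class MapperLifecycleSketch {
    private final StringBuilder calls = new StringBuilder();

    void setup()            { calls.append("setup;"); }
    void map(String record) { calls.append("map(").append(record).append(");"); }
    void cleanup()          { calls.append("cleanup"); }

    // Mirrors the shape of Mapper.run(Context) in the real API.
    String run(List<String> split) {
        setup();
        for (String record : split) {
            map(record);
        }
        cleanup();
        return calls.toString();
    }

    public static void main(String[] args) {
        String trace = new MapperLifecycleSketch()
                .run(Arrays.asList("r1", "r2", "r3", "r4", "r5"));
        System.out.println(trace);
        // setup;map(r1);map(r2);map(r3);map(r4);map(r5);cleanup
    }
}
```

So for 5 records in one split there is one setup() call, five map() calls, and one cleanup() call, all on the same Mapper instance.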
Hi, mailing list:
I have a 5-node Hadoop cluster, and yesterday I added 5 new
boxes to my cluster. After that I started a balance task, but it moved only 7%
of the data to the new nodes in 20 hours, and I already set
dfs.datanode.balance.bandwidthPerSec to 10M, and the threshold is 10%. Why does the
balance task
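For reference, a hedged sketch of the relevant setting: on many releases dfs.datanode.balance.bandwidthPerSec is parsed as a plain number of bytes per second, so a value written as "10M" may not take effect at all; 10 MB/s would be 10485760.

```xml
<!-- hdfs-site.xml, on each DataNode -->
<property>
  <name>dfs.datanode.balance.bandwidthPerSec</name>
  <!-- 10 MB/s, expressed in bytes per second -->
  <value>10485760</value>
</property>
```

On a running cluster the limit can also be raised without a restart via `hdfs dfsadmin -setBalancerBandwidth <bytes_per_second>`; a bandwidth cap that is effectively at its low default is a common reason the balancer moves data this slowly.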
According to your description, I think it is still a configuration problem.
Before you run the hadoop command, did you check the hadoop version and the hadoop
environment variables? Are they what you want?
On Mon, May 5, 2014 at 11:42 PM, chandra kant
chandralakshmikan...@gmail.com wrote:
I mean, I
Hi,
For yarn.resourcemanager.zk-state-store.root-node.acl, the yarn-default.xml
says: For fencing to work, the ACLs should be carefully set differently on
each ResourceManager such that all the ResourceManagers have shared admin
access and the Active ResourceManager takes over (exclusively) the
Please be sure to use a different HADOOP_CONF_DIR for the two versions; and
also in the configuration, be sure to use different folders to store the
HDFS-related files.
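A hedged sketch of what "different folders" can look like for the hadoop2 install (all paths are hypothetical; note the hadoop1 equivalents of the HDFS properties are named dfs.name.dir and dfs.data.dir):

```xml
<!-- core-site.xml for the hadoop2 install; the hadoop1 install would
     point its directories somewhere else, e.g. under /data/hadoop1 -->
<property>
  <name>hadoop.tmp.dir</name>
  <value>/data/hadoop2/tmp</value>
</property>

<!-- hdfs-site.xml for the hadoop2 install -->
<property>
  <name>dfs.namenode.name.dir</name>
  <value>/data/hadoop2/namenode</value>
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>/data/hadoop2/datanode</value>
</property>
```

Keeping the two versions' metadata and block directories separate prevents one version's NameNode/DataNode from corrupting or refusing to load the other version's on-disk layout.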
Regards,
*Stanley Shi,*
On Tue, May 6, 2014 at 8:41 AM, Shengjun Xin s...@gopivotal.com wrote:
According to your description, I
Could you give more details? For example:
- Could you convert the 7% to the total amount of moved data in MB?
- Also, could you tell me the 7% data movement per DN?
- What values are showing for the ‘over-utilized’, ‘above-average’,
‘below-average’, ‘under-utilized’ nodes? Balancer
Hi,
Is there a way to specify a host name on which we want to run our
application master. Can we do this when it is being launched from the
YarnClient?
Thanks,
Kishore
Hi Jeremy,
According to official documentation
http://hadoop.apache.org/docs/r2.2.0/api/org/apache/hadoop/mapreduce/Mapper.html
setup and cleanup calls are performed for each InputSplit. In this case,
your variant 2 is more correct. But actually a single mapper can be used for
processing multiple