Re: SIMPLE authentication is not enabled. Available:[TOKEN]

2014-03-16 Thread Jeff Zhang
Hi Oleg, I met the same issue when I started an unmanaged AM on the client side in a thread. The issue is in the code of hadoop-yarn-common. You could try using the hadoop-yarn-common code from 2.3 instead of 2.2; this resolved my problem, at least. On Sun, Mar 16, 2014 at 5:56 AM, Oleg Zhurakous

I am about to lose all my data please help

2014-03-16 Thread Fatih Haltas
Dear All, I have just restarted the machines of my Hadoop cluster. Now I am trying to restart the Hadoop cluster again, but I am getting an error on NameNode restart. I am afraid of losing my data, as it was running properly for more than 3 months. Currently, I believe that if I do a NameNode format, it will work

Re: I am about to lose all my data please help

2014-03-16 Thread Mirko Kämpf
Hi, what is the location of the NameNode's fsimage and edit logs? And how much memory does the NameNode have? Did you work with a Secondary NameNode or a Standby NameNode for checkpointing? Where are your HDFS blocks located; are those still safe? With this information at hand, one might be able to fix
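The locations Mirko asks about come from these hdfs-site.xml properties. A minimal sketch with example paths (your values will differ; check your cluster's actual settings):

```xml
<!-- hdfs-site.xml: example values only -->
<property>
  <name>dfs.namenode.name.dir</name>        <!-- where the fsimage and edit logs live -->
  <value>/var/hadoop/dfs/name</value>
</property>
<property>
  <name>dfs.namenode.checkpoint.dir</name>  <!-- Secondary NameNode checkpoint directory -->
  <value>/var/hadoop/dfs/namesecondary</value>
</property>
<property>
  <name>dfs.datanode.data.dir</name>        <!-- where the HDFS blocks live -->
  <value>/var/hadoop/dfs/data</value>
</property>
```

If these were left at defaults resolving under hadoop.tmp.dir (i.e. /tmp), the NameNode metadata may have been cleaned up on reboot, which is one common cause of this failure mode.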

Re: SIMPLE authentication is not enabled. Available:[TOKEN]

2014-03-16 Thread Oleg Zhurakousky
Thanks Jeff. Yes, I am using 2.3 and the issue is still there. Oleg On Sun, Mar 16, 2014 at 3:10 AM, Jeff Zhang wrote: > Hi Oleg, > > I meet the same issue when I start an unmanaged AM in client side in > thread way. The issue is in the code of hadoop-common-yarn. You could try > to use the cod

Re: SIMPLE authentication is not enabled. Available:[TOKEN]

2014-03-16 Thread Jeff Zhang
Here's my sample for your reference (if you are running an unmanaged AM on the client side): 1. set the token on the UserGroupInformation 2. do the registration inside a UserGroupInformation doAs, as follows: try { ugi.addToken(yarnClient.getAMRMToken(appId)); ugi.doAs(new PrivilegedException
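A fuller sketch of Jeff's two steps, assuming Hadoop 2.3-era APIs; `yarnClient`, `appId`, and `conf` come from your own application-submission code, and this is not runnable without a YARN cluster:

```java
import java.security.PrivilegedExceptionAction;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;
import org.apache.hadoop.yarn.api.ApplicationMasterProtocol;
import org.apache.hadoop.yarn.api.protocolrecords.RegisterApplicationMasterRequest;
import org.apache.hadoop.yarn.api.records.ApplicationId;
import org.apache.hadoop.yarn.client.ClientRMProxy;
import org.apache.hadoop.yarn.client.api.YarnClient;

public class UnmanagedAmRegistration {
    static void register(final Configuration conf, YarnClient yarnClient,
                         ApplicationId appId) throws Exception {
        UserGroupInformation ugi = UserGroupInformation.getCurrentUser();
        // 1. put the AMRM token into the current UGI
        ugi.addToken(yarnClient.getAMRMToken(appId));
        // 2. register inside doAs so the RPC layer picks up TOKEN auth
        ugi.doAs(new PrivilegedExceptionAction<Void>() {
            @Override public Void run() throws Exception {
                ApplicationMasterProtocol amrm =
                        ClientRMProxy.createRMProxy(conf, ApplicationMasterProtocol.class);
                amrm.registerApplicationMaster(
                        RegisterApplicationMasterRequest.newInstance("localhost", -1, ""));
                return null;
            }
        });
    }
}
```

The doAs wrapper is the key part: the RPC client chooses the authentication method from the tokens present in the calling UGI, which is why registering outside it fails with "SIMPLE authentication is not enabled".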

Re: SIMPLE authentication is not enabled. Available:[TOKEN]

2014-03-16 Thread Oleg Zhurakousky
Thanks Jeff. Got past it, but still puzzled as to why TOKEN is hardcoded as the authentication method. Shouldn't this decision be delegated to the end user? Seems very unconventional and awkward. Oleg On Sun, Mar 16, 2014 at 8:44 AM, Jeff Zhang wrote: > Here's my sample for your reference ( If

Re: Replicating Hadoop configuration files on all nodes

2014-03-16 Thread Ipremyadav
Use a config management tool like Chef or Puppet. > On 15 Mar 2014, at 06:42, Shashidhar Rao wrote: > > Hi, > > Please let me know how to go about replicating all the Hadoop configuration > files on all nodes without using Cloudera Manager. > > > Thanks > Shashi
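As a lighter-weight alternative to Chef/Puppet, the conf directory can simply be pushed to each worker over scp. A sketch that builds the commands (hostnames and paths are hypothetical; assumes passwordless SSH; run each command with `new ProcessBuilder(cmd).inheritIO().start()`):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class PushConf {
    // Build one scp command per worker host.
    static List<List<String>> scpCommands(String confDir, List<String> hosts) {
        List<List<String>> cmds = new ArrayList<>();
        for (String host : hosts) {
            // trailing "/." copies the directory's contents, not the directory itself
            cmds.add(Arrays.asList("scp", "-r", confDir + "/.", host + ":" + confDir));
        }
        return cmds;
    }

    public static void main(String[] args) {
        for (List<String> cmd : scpCommands("/etc/hadoop/conf",
                Arrays.asList("worker1", "worker2"))) {
            System.out.println(String.join(" ", cmd));
        }
    }
}
```

This works for a handful of nodes; beyond that, the config-management tools handle drift and new hosts far better.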

Re: SIMPLE authentication is not enabled. Available:[TOKEN]

2014-03-16 Thread Oleg Zhurakousky
Also, in your code you provide application id. I am trying to register ApplicationMaster: ApplicationMasterProtocol applicationsManager = ClientRMProxy.createRMProxy(yarnConf, ApplicationMasterProtocol.class); RegisterApplicationMasterRequest request = RegisterApplicationMasterRequest.newInstan

Re: SIMPLE authentication is not enabled. Available:[TOKEN]

2014-03-16 Thread Oleg Zhurakousky
Anyway, I've raised https://issues.apache.org/jira/browse/YARN-1841. This is pretty messed up and needs to be addressed. Oleg On Sun, Mar 16, 2014 at 10:29 AM, Oleg Zhurakousky < oleg.zhurakou...@gmail.com> wrote: > Also, in your code you provide application id. > I am trying to register Applic

about mapreduce intermediate files

2014-03-16 Thread Linlin Du
Hi all, I am using Hadoop 2.0.0 (CDH 4.1.2). These are my settings in mapred-site.xml: "mapreduce.cluster.local.dir" is set to "${hadoo.tmp.dir}/mapred/local". ${hadoo.tmp.dir} is not set explicitly and the default is /tmp. In /tmp/hadoop-my_account/, I created mapred/local. "mapreduce.task.files.pre
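For reference: the property is spelled hadoop.tmp.dir (default /tmp/hadoop-${user.name}), and intermediate map output is spilled under mapreduce.cluster.local.dir, so anything left under /tmp is at the mercy of OS cleanup. A sketch of pinning both explicitly, with example paths:

```xml
<!-- core-site.xml -->
<property>
  <name>hadoop.tmp.dir</name>
  <value>/var/hadoop/tmp</value>  <!-- default is /tmp/hadoop-${user.name} -->
</property>

<!-- mapred-site.xml -->
<property>
  <name>mapreduce.cluster.local.dir</name>
  <value>${hadoop.tmp.dir}/mapred/local</value>  <!-- intermediate map output location -->
</property>
```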

NullPointerException in TaskLogAppender.flush()

2014-03-16 Thread Liqi Gao
Hi friends, I got an annoying exception in a Hadoop mapper: --- 15-03-2014 15:32:00,212 INFO [main] mapred.Task - Task:attempt_201403141642_0011_m_00_0 is done. And is in the process of commiting 15-03-2014 15:32:03,266 INFO [main] mapred.Task - Task attempt_201403141

Data Locality and WebHDFS

2014-03-16 Thread RJ Nowling
Hi all, I'm writing up a Google Summer of Code proposal to add HDFS support to Disco, an Erlang MapReduce framework. We're interested in using WebHDFS. I have two questions: 1) Does WebHDFS allow querying data locality information? 2) If the data locality information is known, can data on spec

Re: Data Locality and WebHDFS

2014-03-16 Thread Mingjiang Shi
According to this page: http://hortonworks.com/blog/webhdfs-%E2%80%93-http-rest-access-to-hdfs/ > *Data Locality*: The file read and file write calls are redirected to the > corresponding datanodes. It uses the full bandwidth of the Hadoop cluster > for streaming data. > > *A HDFS Built-in Compone

Re: Data Locality and WebHDFS

2014-03-16 Thread Alejandro Abdelnur
Well, this is for the first block of the file; the rest of the file (blocks local or not) is streamed out by the same datanode. For small files (one block) you'll get locality; for large files, only for the first block, and by chance if other blocks are local to that datanode. Alejandro (ph
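One (unofficial) way to observe this behavior: a WebHDFS OPEN request to the NameNode answers with an HTTP 307 redirect whose Location header names the datanode chosen for the first block. A sketch of extracting that host (the URL below is a made-up example):

```java
import java.net.URI;

public class WebHdfsLocality {
    // Extract the datanode host from the Location header of a WebHDFS 307 redirect.
    static String datanodeHost(String locationHeader) {
        return URI.create(locationHeader).getHost();
    }

    public static void main(String[] args) {
        String loc = "http://dn3.example.com:50075/webhdfs/v1/user/rj/part-0"
                + "?op=OPEN&namenoderpcaddress=nn.example.com:8020&offset=0";
        System.out.println(datanodeHost(loc));  // dn3.example.com
    }
}
```

This only reveals the datanode serving the first block, which matches Alejandro's caveat; it is not a documented locality API.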

Problem of installing HDFS-385 and the usage

2014-03-16 Thread Eric Chiu
Hi all, could anyone tell me how to install and use this Hadoop plug-in? https://issues.apache.org/jira/browse/HDFS-385 I read the code but do not know where to put the files or what command to use to install them all. Another problem is that there are .txt and .patch files; which one should be applied

Re: Data Locality and WebHDFS

2014-03-16 Thread RJ Nowling
Thank you, Mingjiang and Alejandro. This is interesting. Since we will use the data locality information for scheduling, we could "hack" this to get the data locality information, at least for the first block. As Alejandro says, we'd have to test what happens for other data blocks -- e.g., what

Re: Data Locality and WebHDFS

2014-03-16 Thread Alejandro Abdelnur
I may have expressed myself poorly. You don't need to run any tests to see how locality works with files of multiple blocks. If you are accessing a file of more than one block over WebHDFS, you only have assured locality for the first block of the file. Thanks. On Sun, Mar 16, 2014 at 9:18 PM, RJ N