Re: hadoop mapreduce job rest api

2015-12-23 Thread Artem Ervits
Take a look at the WebHCat API.

On Dec 24, 2015 12:50 AM, "ram kumar" wrote:
> Hi,
>
> I want to submit a mapreduce job using rest api,
> and get the status of the job every n interval.
> Is there a way to do it?
>
> Thanks
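A minimal sketch of that approach, assuming WebHCat (Templeton) is running on its default port 50111 and the jar is already in HDFS; the host, user name, paths, class name, and job id below are all placeholders:

    # submit the jar through WebHCat; the response carries a job id,
    # e.g. {"id":"job_1450000000000_0001"}
    curl -s -d user.name=hadoopuser \
         -d jar=/user/hadoopuser/wordcount.jar \
         -d class=org.example.WordCount \
         -d arg=/user/hadoopuser/input \
         -d arg=/user/hadoopuser/output \
         'http://webhcat-host:50111/templeton/v1/mapreduce/jar'

    # poll the job status every n seconds (here n=30)
    while true; do
      curl -s "http://webhcat-host:50111/templeton/v1/jobs/job_1450000000000_0001?user.name=hadoopuser"
      sleep 30
    done

Once you know the application id, the YARN ResourceManager REST API (GET /ws/v1/cluster/apps/{appid}) can also report status, but WebHCat covers both submission and polling for a plain MapReduce jar.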

hadoop mapreduce job rest api

2015-12-23 Thread ram kumar
Hi,

I want to submit a MapReduce job using a REST API, and get the status of the job every n interval. Is there a way to do it?

Thanks

Re: [Classpath Issue]NoClassFoundException occurs when depending on the 3rd jar

2015-12-23 Thread Namikaze Minato
You could try:

    HADOOP_CLASSPATH=B.jar:C.jar:D.jar hadoop jar A.jar

(space, not semicolon)

Regards,
LLoyd

On 22 December 2015 at 17:21, Frank Luo wrote:
> Make sure you call job.setJarByClass with the right parameters.
> http://stackoverflow.com/questions/3912267/hadoop-query-regarding-setjarbyclas
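A minimal sketch of both common fixes; the driver class com.example.MyJob and the input/output paths are hypothetical placeholders:

    # 1. Add the dependency jars to the client-side classpath. The
    #    space matters: the assignment is a one-shot environment
    #    prefix for this command, not a separate shell statement.
    HADOOP_CLASSPATH=B.jar:C.jar:D.jar hadoop jar A.jar com.example.MyJob

    # 2. Ship the jars to the map/reduce tasks as well with -libjars
    #    (comma-separated; the driver must go through ToolRunner /
    #    GenericOptionsParser for this option to be picked up).
    hadoop jar A.jar com.example.MyJob -libjars B.jar,C.jar,D.jar in out

The first form only fixes the client JVM; if the tasks themselves throw the exception, the jars have to be shipped with -libjars or bundled into a fat jar.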

test mail: please ignore

2015-12-23 Thread dheeren bebortha
test mail: please ignore.

Re: How does this work

2015-12-23 Thread Ranadip Chatterjee
Jayapal,

You can check the effective access permissions on that location using 'hadoop fs -getfacl <path>'. Assuming Sentry-HDFS synchronization is enabled, Sentry privileges will be reflected in the HDFS ACLs.

On 23 Dec 2015 16:12, "Kumar Jayapal" wrote:
> Hi,
>
> My environment has Kerberos and Sentry
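A minimal sketch of that check, using the warehouse path from the original question; the output is illustrative of the standard -getfacl format, and the extra group entry is a hypothetical one produced by Sentry-HDFS synchronization:

    hadoop fs -getfacl /user/hive/warehouse
    # file: /user/hive/warehouse
    # owner: hive
    # group: hive
    user::rwx
    group::rwx
    group:etl_users:rwx
    mask::rwx
    other::---

An entry like group:etl_users:rwx would explain access for a user who is not in the hive group itself.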

How does this work

2015-12-23 Thread Kumar Jayapal
Hi,

My environment has Kerberos and Sentry for authentication and authorization. We have the following permission on the warehouse directory:

    drwxrwx--- - hive hive /user/hive/warehouse

When I log in through Hue/Beeline, how am I able to access the data inside this directory when I don't belong to the hive group?

Re: diagnosing the difference between dfs 'du' and 'df'

2015-12-23 Thread Martin Serrano
I was able to resolve this issue. By looking at the hdfs-audit.log we noticed that there were a large number of appends to the same file occurring in a very short time frame. My guess is that each append is reserving a full block (128 MB in our configuration), leading to temporary disk "utilization" that shows up in df but not in du.
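A minimal sketch of the commands useful for this kind of diagnosis; the directory and the audit-log path are placeholders (the log location varies by distribution):

    hdfs dfs -du -s -h /some/dir        # space by file length, the NameNode's view
    hdfs dfs -df -h /                   # used/remaining space as the datanodes report it
    hdfs fsck /some/dir -openforwrite   # files still open, i.e. blocks under construction
    # count appends in the audit log to spot append-heavy clients
    grep -c 'cmd=append' /var/log/hadoop-hdfs/hdfs-audit.log

Comparing the du total against the df usage, and then looking for open files and frequent appends, narrows down whether the gap comes from under-construction blocks rather than orphaned data.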